pingfatigue.com is an independent, vendor-neutral reference on alert fatigue. Not affiliated with PagerDuty, Atlassian, Splunk, or any other vendor. Tool comparisons may contain affiliate links, clearly labelled.
REFERENCE

Methodology

How pingfatigue.com verifies alert-volume benchmarks, false-positive ratios, MTTA and MTTR ranges, on-call cost math, notification-fatigue cost math, and the Alert Fatigue Index thresholds. Every number on the site should be re-derivable from one of the primary sources listed below using the formulas in the calculation framework section.

Verified May 2026

Sources

Every benchmark and percentage on the site cites a primary source from the list below. Where two sources disagree (incident.io versus PagerDuty median MTTA, vendor-case-study reduction percentages versus survey-reported reductions) the conservative reading is shown and the variance is flagged explicitly on the page that uses it. See /research for the longer bibliography.

Source · Cadence · What we take from it

incident.io 2024 State of On-Call · Annual · Industry-aggregate alert volume, sleep disruption, and attrition-intent benchmarks. The 41 percent considered-leaving stat, 62 percent sleep-disruption stat, and 42-pages-per-week median that anchor the Alert Fatigue Index come from this survey.
Catchpoint 2024 SRE Report · Annual · The 60 to 80 percent false-positive rate cited across /alert-tuning, /correlation-dedup, and the Alert Fatigue Index. Used as the conservative anchor for the industry false-positive median.
DORA Accelerate State of DevOps 2024 · Annual · MTTR tier benchmarks (Elite less than 1 hour, High less than 1 day, Medium between 1 day and 1 week, Low more than 1 week). Anchors the /mttr-impact tier comparison and the cost-of-noise framing.
Google SRE Book chapter 6 (Monitoring Distributed Systems) · Reference (stable) · The original SRE doctrine: target less than or equal to 2 urgent alerts per 12-hour shift, symptom-based alerting, page only on actionable conditions. Anchors /slo-vs-threshold, /alert-tuning, and the healthy-threshold rows of the Alert Fatigue Index.
Google SRE Workbook chapter 5 (Alerting on SLOs) · Reference (stable) · Multi-window multi-burn-rate alerting formula. Anchors the 5-step SLO migration plan on /slo-vs-threshold and the burn-rate examples on /alert-tuning.
PagerDuty Global Incident Management Study 2023 · Annual · Industry MTTA bands (8 to 15 minutes typical) and alert volume distributions. Used alongside the incident.io survey to triangulate the Alert Fatigue Index volume and acknowledgement-time rows.
Gloria Mark, UC Irvine (interruption research, 2004 to 2023) · Reference (stable) · The 23-minute refocus penalty after an interruption is the central knowledge-worker stat used in the Alert Fatigue Calculator on / and the notification-fatigue calculator on /notification-fatigue. Multiple peer-reviewed publications since 2004.
Slack Workforce Index 2023 · Annual · Notification volume benchmarks for messaging platforms (32 mentions per day average for power users). Anchors the Slack arm of /notification-fatigue and the channel-noise framing on /correlation-dedup.
Microsoft Work Trend Index · Annual · The 57 percent constantly-interrupted and 68 percent lack-focus-time stats that anchor the knowledge-worker context on /notification-fatigue. Aggregates Microsoft Graph telemetry across enterprise tenants.
Atlassian State of Teams · Annual · Meeting overhead and notification-load survey data. Used as supporting evidence for the work-about-work framing on /notification-fatigue and the on-call overhead context on /on-call-cost.
Asana Anatomy of Work Index · Annual · The 60 percent time-on-work-about-work and context-switching frequency benchmarks. Triangulates the Gloria Mark refocus penalty against survey-reported productivity loss.
Buffer State of Remote Work · Annual · Notification fatigue ranking among reported remote-work challenges. Used as evidence of the knowledge-worker side of the same cognitive mechanism that produces on-call alert fatigue.
Doodle State of Meetings · Annual · Meeting overload and calendar-interruption metrics. Supporting source for the focus-time scarcity framing on /notification-fatigue.
RescueTime productivity reports · Annual · Screen-time and check-frequency data (messaging app check every 6 minutes on average). Anchors the cadence framing in the /notification-fatigue calculator and the recovery-time math.
Harvard Business Review on email and interruption cost · Reference · HBR articles on continuous email checking and the cost of context switching. Supporting evidence for the refocus-penalty framing applied to alerting on /on-call-cost.
AHRQ PSNet Alarm Fatigue Primer · Reference (stable) · US Agency for Healthcare Research and Quality primer on clinical alarm fatigue. Documents 72 to 99 percent false-alarm rates on ICU monitors. Anchors the healthcare parallel on /healthcare-parallel.
Joint Commission NPSG.06.01.01 Alarm Safety · Regulatory · The 2014 US regulatory mandate making clinical alarm safety a national patient-safety goal. The first regulator to formally require alarm-fatigue mitigation programmes. Anchors the healthcare-versus-DevOps regulatory-asymmetry framing on /healthcare-parallel.
ECRI Top 10 Health Technology Hazards · Annual · ECRI Institute has listed alarm fatigue as a top-three health technology hazard for over a decade. Used to support the longevity-of-evidence framing on /healthcare-parallel.
Cvach 2012 Monitor Alarm Fatigue integrative review · Reference (stable) · Seminal peer-reviewed literature review on monitor alarm fatigue in clinical settings. The dominant single citation for clinical alarm fatigue across the academic literature. Anchors the /healthcare-parallel research chapter.
Ponemon / IBM Cost of a Data Breach 2022 and later · Annual · Average breach detection times and SOC alert-volume context. Used to frame the SecOps arm of /what-is-alert-fatigue and the Target 2013 case study on /case-studies.

In scope

  • Alert volume benchmarks (pages per engineer per week) sourced from incident.io 2024 State of On-Call, PagerDuty 2023 Global Incident Management Study, and Catchpoint 2024 SRE Report.
  • False-positive ratios across DevOps and SOC contexts sourced from Catchpoint, Ponemon, and vendor-published case studies (treated with appropriate scepticism on methodology disclosure).
  • MTTA and MTTR ranges (DORA tier definitions plus practitioner-survey medians).
  • Fully-loaded SRE cost ranges (US BLS-anchored, with explicit multipliers for UK and Western Europe).
  • Notification volume benchmarks for Slack, email, and Microsoft Teams in knowledge-work contexts.
  • ICU alarm fatigue parallel research (Joint Commission, AHRQ PSNet, ECRI, Cvach 2012).
  • On-call platform comparison (PagerDuty, incident.io, Opsgenie, FireHydrant, Rootly, Splunk On-Call) scored on alert-fatigue-reduction criteria using each vendor's own public pricing and documentation.

Out of scope

  • Enterprise-negotiated MSAs and per-customer discounts for on-call platforms. Where pricing is not published, the tools comparison shows "quote only" rather than inventing a band.
  • Deeply individual variation in operator response (cognitive load tolerance, prior burnout history, personal sleep schedule). The calculator outputs are population-level anchors, not personal forecasts.
  • Specific compliance certifications beyond the regulatory citations shown on /healthcare-parallel (FedRAMP overlays, HIPAA audit-cost depth, ISO 27001 audit schedules).
  • SOAR-depth SecOps content (playbook authoring, EDR integration depth, MITRE ATT&CK coverage). The site references SOC alert fatigue as a parallel context, not a how-to.
  • Single-team retrospectives or named-team case studies beyond the public-record incidents listed on /case-studies (Target 2013, Therac-25, ICU sentinel events, Cloudflare 2019).
  • Healthcare clinical decision rules. The healthcare parallel applies the alarm-fatigue research to DevOps practice; it does not advise on clinical alarm thresholds.

Calculation framework

The six formulas below are the entire computational backbone of the site. The Alert Fatigue Calculator on /, the on-call cost math on /on-call-cost, the MTTR math on /mttr-impact, and the notification-fatigue calculator on /notification-fatigue all derive from these.

Direct alert-handling cost

annual cost per engineer = pages per week x (MTTA in hours + 23-minute refocus penalty in hours) x fully-loaded hourly rate x 52. The 23-minute refocus penalty comes from Gloria Mark's UC Irvine interruption research. Fully-loaded hourly rate uses base salary x 1.3 (employer payroll tax, benefits, equipment, training) divided by 2,080 working hours. The calculator on / exposes every input.
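
As a sanity check, the formula above can be re-derived in a few lines of Python. This is an illustrative sketch, not the site's calculator code; the function names and the $170,000 example salary are assumptions.

```python
# Illustrative re-derivation of the direct alert-handling cost formula.
# Constants mirror the text: 23-minute refocus penalty, 1.3x loaded
# multiplier, 2,080 working hours, 52 weeks.

REFOCUS_HOURS = 23 / 60  # Gloria Mark refocus penalty, in hours

def loaded_hourly_rate(base_salary: float) -> float:
    """Fully-loaded hourly rate: base salary x 1.3 over 2,080 hours."""
    return base_salary * 1.3 / 2080

def annual_alert_cost(pages_per_week: float, mtta_hours: float,
                      base_salary: float) -> float:
    """Annual direct alert-handling cost per engineer."""
    hours_per_page = mtta_hours + REFOCUS_HOURS
    return pages_per_week * hours_per_page * loaded_hourly_rate(base_salary) * 52

# Example inputs: 42 pages/week (the survey median), a 10-minute MTTA,
# and a hypothetical $170,000 base salary.
cost = annual_alert_cost(pages_per_week=42, mtta_hours=10 / 60,
                         base_salary=170_000)
```

With these inputs the loaded rate works out to roughly $106 per hour and the annual figure to roughly $128k. Note that the 23-minute refocus penalty dominates a 10-minute MTTA, which is why the calculator exposes every input rather than hiding the penalty term.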

Night-page premium

after-hours pages carry a higher cognitive and recovery cost than business-hours pages. The calculator applies an effective multiplier on the share of pages that fall outside working hours, drawn from the incident.io 2024 sleep-disruption survey data (62 percent reporting sleep disruption from on-call) and the night-shift recovery literature. Conservative band only; the model never claims more than a 2x multiplier.
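
A minimal sketch of how such a premium might be applied, assuming a 1.5x working multiplier. The text above commits only to the 2x ceiling, so both the default value and the function shape here are illustrative.

```python
# Hypothetical night-page weighting: off-hours pages are multiplied by a
# premium that is clamped to the 2x ceiling stated in the methodology.

NIGHT_MULTIPLIER_CAP = 2.0  # the model never claims more than 2x

def effective_pages(pages_per_week: float, off_hours_share: float,
                    night_multiplier: float = 1.5) -> float:
    """Page count after weighting the off-hours share by the premium."""
    m = min(night_multiplier, NIGHT_MULTIPLIER_CAP)  # conservative clamp
    on_hours = pages_per_week * (1 - off_hours_share)
    off_hours = pages_per_week * off_hours_share * m
    return on_hours + off_hours
```

Feeding the weighted count into the direct-cost formula keeps the night premium a conservative adjustment on the same model rather than a second model.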

Gloria Mark refocus penalty

every alert that breaks focus adds 23 minutes of refocus time on top of MTTA. This is the same cognitive mechanism whether the alert is a pager page (alerting) or a Slack mention (knowledge work). The penalty is applied symmetrically on /on-call-cost and /notification-fatigue so the cross-domain comparison is consistent.

SHRM replacement-cost formula

when attrition-intent (41 percent considered-leaving per incident.io 2024) converts to a senior SRE leaving, the cost is 1 to 1.5x annual salary per Society for Human Resource Management benchmarks. The calculator presents this as a range, not a point estimate, and shows the formula. Replacement cost only kicks in for the share of teams above the noisy threshold; healthy teams do not carry it.
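
The range can be expressed directly. The expected-value weighting below is an illustrative extension: attrition_share is an input the team supplies, not a number the page above claims.

```python
# SHRM replacement-cost band: 1x to 1.5x annual salary, shown as a range.

def replacement_cost_range(annual_salary: float) -> tuple[float, float]:
    """Low and high replacement-cost bounds per SHRM benchmarks."""
    return (annual_salary * 1.0, annual_salary * 1.5)

def expected_replacement_cost(annual_salary: float,
                              attrition_share: float) -> tuple[float, float]:
    """Range weighted by the share of engineers expected to leave."""
    low, high = replacement_cost_range(annual_salary)
    return (low * attrition_share, high * attrition_share)
```

Presenting the band rather than a point estimate mirrors the page's own rule: the Index never collapses a sourced range into a single invented number.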

Alert Fatigue Index aggregation

the 7-metric Index aggregates healthy, median, and noisy thresholds for pages per engineer per week, false-positive percentage, MTTA, sleep disruption, attrition intent, correlation adoption, and cost per engineer per year. Each threshold is sourced from a primary document listed in the Sources table; the Index does not invent numbers. Where two sources disagree (PagerDuty MTTA bands versus incident.io MTTA bands) the conservative reading is shown.
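
A sketch of a single Index row, assuming a higher-is-worse metric. The healthy and noisy cut-offs shown are illustrative stand-ins (the SRE-book shift target scaled to a week, and the survey median), not the published Index thresholds.

```python
# Hypothetical single-row classifier for a higher-is-worse Index metric.
# Real Index rows also cover metrics where higher is better (correlation
# adoption); those would invert the comparisons.

from dataclasses import dataclass

@dataclass
class IndexRow:
    name: str
    healthy: float  # at or below this bound: healthy
    noisy: float    # at or above this bound: noisy

    def classify(self, value: float) -> str:
        if value <= self.healthy:
            return "healthy"
        if value >= self.noisy:
            return "noisy"
        return "median"

# Illustrative row: pages per engineer per week.
pages_row = IndexRow("pages per engineer per week", healthy=14, noisy=42)
```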

Notification-fatigue knowledge-worker model

for the /notification-fatigue calculator: annual cost per worker = notifications per day x (check time + refocus time in hours) x fully-loaded hourly rate x 250 working days. A daily cap is applied so the model does not exceed plausible focus-time bounds; teams that report notifications above the cap are bottlenecked by working-day length, which is the operationally relevant ceiling.
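
A sketch of the capped model, assuming an 8-hour working day and minute-denominated inputs; both are assumptions of this sketch, not the calculator's published cap.

```python
# Capped knowledge-worker notification cost: the daily interruption cost
# is clamped to the length of the working day before being annualised.

WORK_DAY_HOURS = 8.0  # assumed ceiling; the operationally relevant bound

def notification_annual_cost(notifications_per_day: float,
                             check_minutes: float,
                             refocus_minutes: float,
                             hourly_rate: float,
                             working_days: int = 250) -> float:
    hours_per_notification = (check_minutes + refocus_minutes) / 60
    daily_hours = min(notifications_per_day * hours_per_notification,
                      WORK_DAY_HOURS)  # cannot lose more than a day per day
    return daily_hours * hourly_rate * working_days
```

With the Slack power-user benchmark of 32 mentions a day, a 1-minute check plus the 23-minute refocus penalty already exceeds the cap (12.8 hours of nominal cost), so the clamp binds and working-day length becomes the ceiling, exactly as described above.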

Refresh cadence

The site is re-verified in the first business week of every month against the primary sources in the Sources table. The visible “Verified” label, the schema dateModified field, and the footer Updated stamp all read from a single constant (LAST_VERIFIED_DATE), so the on-page text, the JSON-LD, and the footer stay in lockstep. Page-by-page date drift is structurally impossible: bumping the date is a single-line change that touches every page at once.
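
A minimal sketch of the single-constant pattern. Only the constant's name comes from the text above; the rendering helpers are hypothetical.

```python
# One constant feeds every surface; bumping it updates all three at once.

LAST_VERIFIED_DATE = "2026-05-01"  # the only place the date lives

def verified_label() -> str:
    """On-page 'Verified' text."""
    return f"Verified {LAST_VERIFIED_DATE}"

def json_ld() -> dict:
    """schema.org dateModified field for the page's JSON-LD."""
    return {"@type": "WebPage", "dateModified": LAST_VERIFIED_DATE}

def footer_stamp() -> str:
    """Footer 'Updated' text."""
    return f"Updated {LAST_VERIFIED_DATE}"
```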

Out-of-cycle refreshes trigger when:

  • incident.io, PagerDuty, Catchpoint, or DORA publishes a new annual industry survey with revised benchmarks.
  • An on-call platform vendor changes published pricing or restructures tiers.
  • Joint Commission, AHRQ PSNet, or ECRI publishes a regulatory or technology-hazard update that shifts the healthcare parallel.
  • A primary-source author publishes a correction or retraction on a number cited on the site.
  • A major public-record incident is post-mortemed with an alert-fatigue contribution that should be added to /case-studies.
  • BLS OEWS or a major salary aggregator shifts the senior-SRE salary anchor by more than 10 percent.

Refreshes that move per-source bands by less than 5 percent are batched into the next monthly pass. Refreshes that introduce a new benchmark category, a regulatory shift, or a primary-source correction ship as soon as the change is confirmed against the source.

Limitations

Calculator outputs are population-level anchors derived from conservative readings of published surveys. They are not personal forecasts. Production economics depend on team composition, regional salary variance, vendor-negotiated contracts, and the severity distribution of incidents on a given team. Always treat the calculator as a starting point for a conversation with engineering leadership and finance, not as the answer.

Vendor-published surveys carry vendor bias. incident.io, PagerDuty, and Catchpoint all have a commercial interest in framing alert fatigue as severe and intervenable. The site mitigates this by triangulating across multiple vendor surveys and by anchoring to primary research where available (Gloria Mark, Cvach 2012, AHRQ PSNet). Where vendor case studies quote reduction percentages well above the survey medians, those numbers are noted but not used as the anchor.

Primary-source aggregation lags real practice by one to two years. The 2024 incident.io and Catchpoint surveys reflect 2023 to early 2024 operational data; the 2024 DORA report reflects survey responses from late 2023. The Index is updated annually as each primary source publishes its new edition.

Regional and seniority variance is significant. US senior SRE base salary anchors the loaded-cost math; teams in the UK, Western Europe, India, or Latin America should substitute their own band. The formula (base x 1.3 loaded multiplier divided by 2,080 hours) is portable; the inputs are not.

The knowledge-worker notification-fatigue model on /notification-fatigue is a cap-bounded estimate. Real-world worker behaviour is shaped by interruption salience, social cost of ignoring a mention, and recovery-time variance across individuals. The model captures the population-level cost; it does not predict any one worker’s outcome.

Editorial position and corrections

pingfatigue.com is an independent reference. There are no paid placements in the tool comparison on /tools, no sponsored content anywhere, and no email-gated downloads. Affiliate links are explicitly labelled and do not affect the scoring methodology; the editorial position is documented in full on /about.

Spotted a stale benchmark, a missing primary source, a vendor change we have not caught yet, or a calculation that does not match the formulas on this page? Email [email protected] with the page URL and the source you would like cited. Substantive corrections are typically actioned within five business days. Non-substantive corrections (typos, link rot, minor structural edits) batch into the next monthly pass.

Read next: the Alert Fatigue Calculator and Index, the full bibliography, or the editorial position and disclosures.

Updated May 2026