Methods

Do An Evaluation On A Napkin

Before you open statistical software, you can run a quick “napkin test” to see whether your program is likely worth a full evaluation.


Step 1: Write down your before/after change

Use one simple metric first, such as total cost per member per month (PMPM). Example: your population shows a 1% decrease year over year.

Step 2: Find a public benchmark trend

Pull a directional benchmark from public reports, CMS files, payer updates, or industry studies. Example: comparable populations nationally rose by 6%.

Step 3: Compare directions and gap size

  • Your trend: -1%
  • Benchmark trend: +6%
  • Back-of-the-envelope spread: roughly 7 percentage points
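The arithmetic in Steps 1–3 can be sketched in a few lines. The figures below are the illustrative numbers from the example above, not real data:

```python
def napkin_spread(program_trend_pct: float, benchmark_trend_pct: float) -> float:
    """Back-of-the-envelope spread: benchmark trend minus your trend,
    in percentage points."""
    return benchmark_trend_pct - program_trend_pct

# Illustrative figures from the steps above.
program_trend = -1.0    # your population: 1% decrease year over year
benchmark_trend = 6.0   # comparable populations nationally: +6%

spread = napkin_spread(program_trend, benchmark_trend)
print(f"Napkin spread: roughly {spread:.0f} percentage points")  # roughly 7
```

A spread this size is the “worth investigating” flag the napkin test is looking for; it says nothing yet about why the gap exists.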

That does not prove causality, but it is a strong “worth investigating” flag.

Step 4: Decide if it is worth a formal evaluation

If the napkin gap is meaningful, move to full analytics: matched cohorts, difference-in-differences, regression adjustment, sensitivity testing, and subgroup analysis.
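To make the jump from napkin to formal analysis concrete, here is a minimal sketch of the simplest (2x2) difference-in-differences estimate: the change in the program population minus the change in a comparison population. The dollar figures are hypothetical, chosen only to mirror the -1% vs. +6% example; a real evaluation would use matched cohorts and regression adjustment, as listed above:

```python
def did_estimate(treat_pre: float, treat_post: float,
                 ctrl_pre: float, ctrl_post: float) -> float:
    """Classic 2x2 difference-in-differences:
    (treated group's change) minus (control group's change)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical PMPM costs, illustrative only.
effect = did_estimate(
    treat_pre=400.0, treat_post=396.0,   # program population: -1%
    ctrl_pre=400.0,  ctrl_post=424.0,    # comparison population: +6%
)
print(f"Estimated program effect: {effect:+.0f} PMPM")  # -28 PMPM
```

The point of the full workup is to defend the assumption behind this subtraction: that the comparison group shows what your population would have done without the program.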

Why this works

  • Fast: gives executives a first signal in hours, not months.
  • Practical: frames whether deeper evaluation spend is justified.
  • Aligned: helps prioritize which programs deserve decision-grade rigor first.

Important caveats (the napkin has limits)

  • No causal proof yet.
  • Case-mix may have shifted.
  • Coding and benefit design changes can distort trend comparisons.
  • One metric alone can miss quality or access tradeoffs.

In short: if your “napkin” says you beat the market, don’t stop there. That is the moment to invest in the full difference-in-differences and regression work and quantify a defensible program effect.

