Phase 1 of 6
Scoping & Population Definition
Define attributed lives, risk-stratification windows, intervention pathways, and HRRP exposure before any data or model decisions are locked in.
Attributed Population & Risk Window
Define the attributed population for risk stratification
Why This Matters
CMS HRRP applies only to six Medicare FFS condition cohorts (AMI, COPD, HF, pneumonia, CABG, elective THA/TKA) and only readmissions within 30 days of discharge count against the penalty — scoring the wrong denominator leads to optimizing a metric that does not actually reduce penalty exposure. ACO and MA populations have overlapping but distinct attribution rules and different intervention economics. A single model scoring everyone the same way typically misprices risk on at least one sub-population.
Note prompts
+ Are we explicitly separating the HRRP-exposed cohort from our broader attributed population in scoring and reporting?
+ How often does attribution change for our ACO and MA populations, and does the model handle churn gracefully?
+ Do we have separate outcome definitions for HRRP-counted readmissions vs. all-cause 30-day readmissions?

Which lives is this model scoring — inpatient discharges, ACO attributed panel, MA plan members, or a subset?
Select all that apply
Select the prediction window and trigger event
Why This Matters
HRRP counts unplanned readmissions within 30 days of discharge, so the discharge-time score is the one that directly maps to the penalty denominator — but an at-admission score gives care teams 3–5 more days to intervene. Most organizations benefit from running both: an admission-time score drives in-stay interventions, and a discharge-time score drives the transitional care management pathway. Choosing only one creates a structural blind spot.
Note prompts
+ Do we need a single score or a score at multiple decision points (admission, daily, discharge, post-discharge day-3)?
+ How will care management staffing change if we move from discharge-only to continuous scoring?
+ Are we measuring intervention lift against the scoring timing, or only against raw predictive accuracy?

When does the model score, and against what outcome window?
Single choice
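One way to avoid the blind spot described above is to let scores fire at several decision points while anchoring every evaluation to the same discharge-anchored 30-day window, so admission-time and discharge-time models stay comparable. A minimal sketch, with hypothetical trigger names:

```python
from datetime import date, timedelta

# Hypothetical decision points; each fires a score, all are graded
# against the same discharge-anchored outcome window below.
SCORING_TRIGGERS = {
    "admission": "drives in-stay interventions (3-5 extra days of lead time)",
    "daily": "refreshes risk as the stay evolves",
    "discharge": "maps directly to the HRRP penalty denominator",
    "post_discharge_day_3": "drives transitional care management outreach",
}

def outcome_window(discharge_date: date) -> tuple:
    """30-day unplanned-readmission window, anchored at discharge
    regardless of when the score was produced."""
    return discharge_date, discharge_date + timedelta(days=30)
```

Anchoring the label at discharge is the design choice that lets intervention lift be attributed to scoring timing rather than to a shifting outcome definition.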
Set risk-stratification tier thresholds
Why This Matters
Tier thresholds are an operational decision, not a statistical one — they must match care-management capacity. If the top tier is set at the 95th percentile but staffing can only outreach the top 2%, half of the alerts will be silently dropped and the model will look like it is underperforming. Setting tiers against a capacity-based budget first, then measuring lift, is the only sustainable approach.
Note prompts
+ What is our weekly care-management outreach capacity in patient-touches, and does the top tier fit inside it?
+ Have we measured the marginal lift of adding a fourth tier vs. concentrating resources on the top?
+ Do the tier thresholds get recalibrated as capacity or case mix changes?

How many risk tiers will care management act on, and what drives the cutoffs?
Single choice
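The capacity-first rule above can be made concrete: derive the top-tier cutoff from outreach capacity rather than picking a percentile. This is an illustrative sketch, not a production tiering method — real thresholds would be recalibrated as capacity and case mix change.

```python
def capacity_based_cutoff(scores, weekly_capacity, weekly_discharges):
    """Set the top-tier score cutoff so the flagged volume exactly fits
    the team's outreach capacity: if staff can reach `weekly_capacity`
    patients out of `weekly_discharges`, the top tier is that share of
    the score distribution."""
    top_share = weekly_capacity / weekly_discharges   # e.g. 20/1000 = 2%
    k = max(1, round(len(scores) * top_share))        # patients to flag
    return sorted(scores, reverse=True)[k - 1]        # lowest flagged score
```

For example, with capacity for 20 touches per week against 1,000 weekly discharges, the cutoff lands at the 98th percentile of the score distribution — every alert fits inside staffing, so none are silently dropped.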
Quantify HRRP penalty exposure
Why This Matters
HRRP caps penalties at 3% of all Medicare inpatient payments, and in FY2023 the program imposed $521M in penalties across 2,273 hospitals. The equity-adjusted methodology effective FY2026 stratifies performance by the share of dual-eligible patients, which shifts penalty exposure in both directions — some safety-net hospitals see relief while others face new exposure. Framing the scoring program as a penalty-offset function with a dollar-denominated budget changes how leadership prioritizes care-management investment.
Note prompts
+ What has our HRRP penalty been in each of the last three fiscal years, and how does it split by condition cohort?
+ Have we modeled our exposure under the FY2026 equity-adjusted methodology vs. the legacy methodology?
+ Who owns the P&L line for HRRP penalty, and is it reported to the board?

What is your measured or modeled HRRP penalty exposure, and which conditions drive it?
Single choice
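Dollar-denominating the exposure is straightforward arithmetic. The sketch below assumes a hospital-specific payment adjustment factor and applies the 3% cap; the factor shown is illustrative, not a reproduction of the CMS excess-readmission-ratio formula.

```python
def hrrp_exposure(base_drg_payments, payment_adjustment_factor):
    """Dollar-denominated HRRP exposure: CMS reduces base operating DRG
    payments by a hospital-specific factor, with the reduction capped
    at 3%. Inputs here are illustrative, not the CMS methodology."""
    MAX_REDUCTION = 0.03
    reduction = min(1.0 - payment_adjustment_factor, MAX_REDUCTION)
    return base_drg_payments * reduction

# e.g. $200M in base operating DRG payments, 0.9921 adjustment factor
penalty = hrrp_exposure(200_000_000, 0.9921)   # roughly $1.58M of annual exposure
```

A number like this, split by condition cohort, is what turns the scoring program into a penalty-offset function with a budget leadership can weigh against care-management investment.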
Define the intervention pathway the score will drive
Why This Matters
A readmission model delivers value only through an intervention — and the published 20–30% readmission reduction figure is specifically for models coupled with structured care management, not models alone. The most common failure mode is deploying a high-AUROC score into a workflow with no intervention capacity, which produces alerts that are never acted on and erodes clinician trust in the tool. The intervention pathway should be locked in before model selection, not after.
Note prompts
+ Is there a named clinical owner for each intervention pathway this score will feed?
+ Have we measured the baseline throughput of each intervention and confirmed it can absorb the flagged volume?
+ What is the feedback loop that confirms an intervention occurred, and is it captured as structured data?

What care-management action will a high-risk flag actually trigger?
Select all that apply
Confirm HIPAA-governed deployment environment
Where will inference and data storage live, given PHI scope?
Single choice
Trinidy — Population health data is the broadest PHI exposure surface in the enterprise — every attributed life is scored on every refresh. Trinidy keeps the ensemble (tabular + LLM) on-premises so PHI and SDoH never leave the institution's perimeter, even for inference bursts during discharge peaks.
Identify equity strata the model will be evaluated on
Why This Matters
CMS's equity-adjusted HRRP methodology (effective FY2026) stratifies performance by the share of dual-eligible patients, and ONC HTI-1 requires source attribute disclosure and fairness testing for Predictive Decision Support Interventions embedded in certified EHRs. Evaluating fairness on the wrong strata is functionally equivalent to not evaluating it at all — the strata have to map to the regulatory tests the organization will be graded against, not just to internal equity goals.
Note prompts
+ Do our equity strata align to CMS dual-eligible stratification and ONC HTI-1 source attributes?
+ Have we measured data completeness on race, ethnicity, and language — missingness itself is an equity signal?
+ Who in compliance signs off on the equity strata list, and how often is it refreshed?

Which demographic and social strata will drive fairness reporting?
Select all that apply
Decide build vs. EHR-embedded vs. vendor posture
Why This Matters
EHR-embedded readmission scores were typically trained on national or multi-site cohorts and are documented to degrade on local populations — organization-specific predictive models reduce readmissions 20–30% when coupled with care management, whereas embedded scores often plateau below that lift. The embedded score is also a black box for HTI-1 transparency purposes, which pushes fairness-audit liability back onto the provider.
Note prompts — click to add
+ Have we benchmarked our EHR-embedded score against our own patient outcomes rather than the vendor's published AUROC?+ Does the vendor provide model cards, training-data demographics, and fairness metrics suitable for HTI-1 disclosure?+ What is our exit path if the vendor is acquired, deprecates the model, or fails an equity audit?Are you building your own model, adopting the EHR-embedded score, or using a population-health vendor?
Single choice
Trinidy — Embedded EHR readmission scores (Epic, Oracle Health) are documented to degrade on local populations and lack the equity-audit transparency HTI-1 requires. Trinidy trains and hosts facility-specific models on your own data — typically outperforming the embedded vendor score on your case mix while producing a complete fairness audit trail.