Phase 1 of 6
Scoping & Patient Cohort
Define the chronic condition cohort, reimbursement model (CCM / PCM / RPM), clinical goals, and scope-of-practice boundaries before a single coaching message is generated.
Condition Cohort & Clinical Scope
Select chronic condition cohorts in scope
Why This Matters
Each chronic cohort has a distinct evidence base, telemetry stream, and reimbursement code path — the Livongo RCT (n=1,360, 12 months) showed a 1.1 pt HbA1c reduction in diabetics vs. 0.4 pt usual care, while the Ochsner hypertension program (n=2,508) showed 21.8 mmHg systolic reduction with 71% BP control vs. 31% historical. A program that conflates diabetes and HTN into a single coaching workflow typically underperforms on both, because the escalation triggers, device integrations, and clinical guidelines differ. Cohort selection also dictates which HEDIS bundles (DM, HTN, CAD) you can move.
Note prompts:
+ Which cohort has the strongest internal panel size and data quality to pilot against first?
+ Do our care management protocols already exist in structured form per condition, or will we need to codify them?
+ Which HEDIS measures are we targeting for improvement, and do they align with the cohort we pick?
Confirm which chronic conditions the AI coaching program will serve.
Select all that apply
Define clinical outcome targets per cohort
Why This Matters
AI coaching programs without pre-committed outcome targets drift into engagement metrics (messages sent, response rate) that do not pay and do not move clinical outcomes. Published evidence is specific: Livongo showed a 0.7 pt HbA1c delta vs. usual care; Ochsner showed 21.8 mmHg systolic reduction. Committing to a number in the range of published evidence — rather than above it — is both defensible to payers and achievable on the first cohort.
Note prompts:
+ Are our targets pre-registered with the risk-bearing entity (ACO, health plan, payer) before launch?
+ Do we have the baseline measurement infrastructure (HbA1c draws, cuff measurements) to detect the effect size?
+ Who owns the outcome measurement once the program is live — the AI team, care management, or quality?
Commit to measurable outcome targets the program will be held to.
Select all that apply
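The "can we detect the effect size" prompt above can be checked with a standard two-sample sample-size approximation before launch. This is a minimal sketch; the SD of HbA1c change (1.5 pts) is an assumption to be replaced with your own panel's data, and the 0.7 pt delta is the Livongo-vs-usual-care figure cited above.

```python
from math import ceil

def n_per_arm(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Patients per arm to detect a mean difference `delta` with SD `sigma`,
    two-sided alpha = 0.05, 80% power (normal approximation)."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Assumed SD of HbA1c change: 1.5 pts -- verify against your own baseline draws.
print(n_per_arm(delta=0.7, sigma=1.5))   # 72 patients per arm
```

If the pilot cohort cannot enroll roughly that many patients per arm with reliable baseline and follow-up draws, the committed target will not be statistically detectable, which is itself a scoping finding.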
Define AI scope-of-practice boundary
Why This Matters
The Stanford HAI / NEJM AI clinical LLM benchmark puts hallucination rates for drug-dosing and contraindication queries at 7–23% — a rate that is unacceptable if the model is permitted to answer those questions at all. The correct architectural response is not a better prompt, it is a narrower scope: the AI is a reinforcement and triage layer on top of a clinician-set care plan, not a replacement for the clinician. The scope-of-practice document is also the artifact a malpractice defense and an HTI-2 audit will ask for first.
Note prompts:
+ Is the scope boundary documented and signed off by clinical leadership, not just the AI team?
+ What happens when a patient asks an out-of-scope question — scripted deflection, human escalation, or silent refusal?
+ Who reviews and updates the scope quarterly as the model and the evidence evolve?
Explicitly document what the AI coaching assistant is permitted to do and what it is not.
Select all that apply
Trinidy — Scope-of-practice enforcement is a guardrail problem. Trinidy runs the guardrail model on-node alongside the coaching LLM — every outbound message is screened locally before reaching the patient, with full audit log of the screening decision.
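The screen-then-deliver pattern described above can be illustrated in a few lines. This is a hypothetical sketch, not any vendor's implementation: the keyword list stands in for a real guardrail classifier, and `AUDIT_LOG` stands in for a local append-only audit store.

```python
from dataclasses import dataclass

AUDIT_LOG: list[dict] = []   # stand-in for a local, append-only audit store

@dataclass
class Decision:
    allowed: bool
    reason: str

# Placeholder keyword screen; a production guardrail is a separate model,
# but the gate-and-log control flow is the same.
OUT_OF_SCOPE = ("dose", "dosage", "contraindicat", "stop taking")

def screen_outbound(message: str) -> Decision:
    for term in OUT_OF_SCOPE:
        if term in message.lower():
            return Decision(False, f"out-of-scope term: {term!r}")
    return Decision(True, "in scope")

def deliver(message: str) -> str:
    """Screen every outbound draft, log the decision, deflect when blocked."""
    d = screen_outbound(message)
    AUDIT_LOG.append({"allowed": d.allowed, "reason": d.reason, "message": message})
    if d.allowed:
        return message
    # Scripted deflection replaces the blocked draft (one of the three
    # out-of-scope behaviors the note prompts ask you to choose between).
    return "That question is best answered by your care team; I've flagged it for them."
```

The key design property is that the screening decision, not just the final message, is what gets logged: that record is what a malpractice defense or HTI-2 audit will ask for.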
Select reimbursement model and billing codes
Why This Matters
CMS's 2026 expansion of CCM and PCM codes is the single largest economic shift for AI-driven chronic care in a decade — it turns AI coaching from an engagement cost center into a directly billable service, as long as the documentation trail satisfies the 20-minute / 30-minute time thresholds and the care plan update requirements. RPM codes (99453–99458) add another $120–$160 per patient per month in billable revenue when device telemetry is reviewed and documented. A program that bills neither leaves roughly $1,500–$2,000 per patient per year on the table.
Note prompts:
+ Does our EHR integration capture the time-on-task and care plan updates required to bill CCM / PCM?
+ Are our RPM devices FDA-cleared and on the eligible device list for CPT 99453 / 99454?
+ Who audits our billing documentation before claims submission — compliance, revenue cycle, or clinical?
Confirm which CMS and commercial codes the program is designed to bill.
Select all that apply
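The per-patient revenue figure above is simple arithmetic worth making explicit. The rates below are assumed round numbers for illustration only; substitute your Medicare Administrative Contractor's current fee schedule before using them in a business case.

```python
# Illustrative (assumed) monthly reimbursement rates in USD -- NOT the
# actual fee schedule; look up your MAC's current rates per code.
RATES = {
    "CCM_99490": 60.0,   # >= 20 min care management / month (assumed rate)
    "RPM_99454": 45.0,   # device supply with daily transmission (assumed rate)
    "RPM_99457": 48.0,   # first 20 min RPM treatment management (assumed rate)
}

def annual_revenue_per_patient(codes):
    """Annualized billable revenue for one patient billing `codes` every month."""
    return 12 * sum(RATES[c] for c in codes)

print(annual_revenue_per_patient(["CCM_99490", "RPM_99454", "RPM_99457"]))  # 1836.0
```

Even with these placeholder rates, a patient billed under CCM plus two RPM codes lands inside the $1,500–$2,000 per year range cited above, which is the amount a program that bills neither leaves on the table.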
Define between-visit engagement cadence
Why This Matters
Engagement cadence is also a reimbursement design decision — CCM requires ≥20 minutes of care management time per month and PCM requires ≥30, and a cadence that under-delivers on engagement will fail the billing audit even when clinical outcomes improve. Published chronic disease programs converge on several-times-per-week cadence for active management cohorts with event-triggered escalation layered on top. Daily check-ins are only sustainable with AI handling the bulk of the messaging.
Note prompts:
+ Does our chosen cadence generate enough billable time per patient per month under CCM / PCM rules?
+ What is our opt-out / fatigue threshold before cadence becomes counterproductive?
+ Are cadence and modality tuned per risk stratum, or uniform across the panel?
Specify the target coaching cadence and modality mix.
Single choice
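The first note prompt above is a back-of-envelope calculation. A hedged sketch, with one important assumption made explicit in the comment: only clinician-attributable time counts toward the CCM/PCM thresholds, so the minutes-per-touch input must exclude fully automated AI messaging.

```python
# Does a cadence clear the CCM (>= 20 min) or PCM (>= 30 min) monthly
# time threshold? 4.33 = average weeks per month. Minutes-per-touch is
# an assumption, and only clinician-attributable time counts -- fully
# automated AI turns do not.
def billable_minutes(touches_per_week, clinician_min_per_touch):
    return touches_per_week * 4.33 * clinician_min_per_touch

m = billable_minutes(touches_per_week=3, clinician_min_per_touch=2)
print(round(m, 1), m >= 20)   # prints: 26.0 True
```

Running the same check at two touches per week (about 17 minutes) falls short of the CCM threshold, which is exactly the failure mode the paragraph above warns about: a cadence that feels adequate clinically but fails the billing audit.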
Map patient engagement channels
Select the channels through which the coaching assistant will interact with patients.
Select all that apply
Confirm deployment topology for PHI-bearing inference
Why This Matters
HIPAA (45 CFR 164.502) permits cloud LLM deployment under a signed BAA, but the operational risk surface is very different from on-prem. Every patient coaching turn passes PHI to the LLM runtime; a single logging misconfiguration or prompt-injection attack at the API boundary can expose the full panel. ONC HTI-2 adds a separate obligation to audit AI-generated patient communications that cloud APIs rarely surface in usable form. Vendor SaaS (Teladoc, Omada, DarioHealth, Lark) is fastest to deploy but concentrates PHI flow at a third party — and Pear Therapeutics' 2023 bankruptcy is a reminder that the chronic-DTx vendor market is not yet settled.
Note prompts:
+ Have we enumerated every hop PHI takes between patient and model, and is each hop under BAA?
+ Can our BAA vendor produce HTI-2-grade decision support audit logs, or do we need to instrument them ourselves?
+ What is our vendor-failure exposure (see Pear Therapeutics 2023) and do we have a data portability / exit path?
Select the inference plane that will host the coaching LLM, guardrail model, and RAG index.
Single choice
Trinidy — Patient coaching transcripts are among the highest-sensitivity PHI in the enterprise — every token contains identifiers, symptoms, and medication context. Trinidy keeps the coaching LLM, guardrail model, and RAG knowledge base entirely inside the institution's perimeter. No PHI transits a third-party LLM API, and every inference is logged locally for HTI-2 audit.
Map data residency and cross-border constraints
Document residency obligations for the PHI feeding the coaching pipeline.
Select all that apply
Trinidy — HIPAA, EU GDPR, and the EU AI Act (Regulation 2024/1689) all create friction with cross-border LLM inference. Trinidy pins inference to the jurisdiction where the patient record lives — no token of PHI crosses a border that the patient has not authorized.