Phase 1 of 6
Scoping & Clinical Workflow Integration
Define the specialties, encounter types, EHR integration surface, and HTI-2 disclosure posture that will govern every downstream architectural choice.
Clinical Scope & Encounter Surface
Identify specialties and encounter types in scope
Why This Matters
DAX Copilot, Abridge, and Suki all publish per-specialty accuracy and acceptance deltas — primary care and cardiology are the strongest performers while psychiatry and oncology require heavier template customization and carry more medicolegal sensitivity. Northwestern Medicine reported 11.3% more patients per day on DAX Copilot across their enterprise deployment, but that aggregate hides a wide per-specialty band. Treating a single LLM prompt as one-size-fits-all across specialties is the most common cause of clinician abandonment after a 90-day honeymoon.
Note prompts
+ Which specialties have enough encounter volume to justify specialty-tuned templates vs. a shared prompt?
+ Which specialties have the highest documentation burden measured in pajama-time minutes and should be prioritized for rollout?
+ Do we have specialty champions identified for the pilot cohort, or are we pushing from IT alone?
Ambient scribe accuracy and template requirements vary dramatically across specialties — lock scope before feature and model work begins.
Select all that apply
Define encounter-to-note latency SLA
Why This Matters
DAX Copilot and Abridge both target draft availability within seconds to minutes of encounter close — clinicians who have to wait until the end of the day to review notes lose the cognitive context that makes review fast and accurate. The latency target also drives infrastructure sizing: sub-30-second draft generation at enterprise concurrency requires substantially more GPU capacity than end-of-day batch. Setting a loose SLA to save on infra usually trades directly against clinician adoption.
Note prompts
+ What is the longest acceptable delay from encounter close to draft availability for our busiest specialty?
+ Have we measured clinician review speed when drafts arrive immediately vs. at end-of-day?
+ Does our SLA include the FHIR write-back path to the EHR, or stop at draft generation?
Select the target latency between end-of-encounter and draft note availability in the EHR.
Single choice
Trinidy — Cloud-hosted ambient scribes round-trip audio and transcript to external infrastructure before generation — adding 2–10 seconds of network and queue time even before the LLM begins. Trinidy runs both the ASR model and the note-generation LLM on-node, keeping draft availability inside the encounter window without PHI egress.
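To make the infra trade concrete: a back-of-envelope sizing sketch using Little's Law. Every number in it (peak encounter rate, seconds per draft, drafts per GPU, utilization headroom) is an illustrative assumption to be replaced with measured values.

```python
import math

# Back-of-envelope inference sizing. All numbers below are illustrative
# assumptions, not vendor benchmarks; substitute your own measurements.

def gpus_for_latency_sla(peak_encounters_per_hour: float,
                         draft_seconds: float,
                         drafts_per_gpu: int,
                         target_utilization: float = 0.5) -> int:
    """Little's Law: in-flight drafts = arrival rate x time in system.
    To keep tail latency inside the SLA, size the fleet to run well
    below saturation (target_utilization)."""
    in_flight = (peak_encounters_per_hour / 3600.0) * draft_seconds
    return math.ceil(in_flight / drafts_per_gpu / target_utilization)

def gpus_for_batch(daily_encounters: float,
                   draft_seconds: float,
                   batch_window_hours: float,
                   drafts_per_gpu: int) -> int:
    """End-of-day batch only has to clear the day's volume inside the
    window: drafts can queue, so the fleet can run near 100% utilization."""
    gpu_seconds_needed = daily_encounters * draft_seconds
    gpu_seconds_per_gpu = batch_window_hours * 3600.0 * drafts_per_gpu
    return math.ceil(gpu_seconds_needed / gpu_seconds_per_gpu)

# Illustrative enterprise: 600 encounters/hour at peak, 4,000/day total,
# ~90 s of GPU time per draft, 4 concurrent drafts per GPU.
print(gpus_for_latency_sla(600, 90, 4))   # -> 8 GPUs for an in-encounter SLA
print(gpus_for_batch(4000, 90, 8, 4))     # -> 4 GPUs for an overnight batch
```

The utilization headroom is the key design choice: a fleet sized at 100% utilization clears the average load but not the bursts, and a per-encounter latency SLA is judged at the tail.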
Select primary EHR integration target
Why This Matters
Abridge is embedded natively inside Epic via their co-development partnership, and DAX Copilot has deep integration with both Epic and Oracle Health — the EHR integration is the product, not a feature. Health systems that try to deliver ambient notes through copy-paste or a separate window report 40-60% lower sustained adoption than those with native EHR filing. The integration path also determines whether you can attach source-attribute metadata required under HTI-2.
Note prompts
+ Which EHR do the specialties in our pilot cohort actually use day-to-day, not just on paper?
+ Do we have existing Epic App Orchard / Oracle Health developer program access, or do we need to establish it?
+ Is there a multi-EHR path that consolidates to a single scribe product, or do we need parallel integrations?
Ambient scribe value collapses without native EHR integration — pick the platform and integration mode before model work starts.
Single choice
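If the scribe files notes as a backend service rather than inside a clinician session, the write path typically authenticates via the SMART Backend Services client-credentials flow. The sketch below is a minimal, hedged illustration: the token URL, client ID, key file, and scope names are placeholders whose real values come from your Epic App Orchard / Oracle Health developer registration.

```python
# Minimal SMART Backend Services (client-credentials) token request.
# TOKEN_URL, CLIENT_ID, the key file, and the scopes are placeholders.
import time
import uuid

import jwt       # PyJWT, with the 'cryptography' extra for RS384
import requests

TOKEN_URL = "https://ehr.example.org/oauth2/token"      # placeholder
CLIENT_ID = "ambient-scribe-backend"                    # placeholder
PRIVATE_KEY = open("backend-service-key.pem").read()    # pairs with registered public key

# Signed JWT assertion per the SMART Backend Services profile (RS384/ES384).
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,
        "sub": CLIENT_ID,
        "aud": TOKEN_URL,
        "jti": str(uuid.uuid4()),        # one-time token id
        "exp": int(time.time()) + 300,   # short-lived, per spec
    },
    PRIVATE_KEY,
    algorithm="RS384",
)

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_assertion_type":
            "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,
        # Illustrative system scopes for context reads and note filing.
        "scope": "system/Patient.read system/Encounter.read "
                 "system/DocumentReference.write",
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```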
Define FHIR R4 resource surface for context injection and note filing
Why This Matters
The ONC US Core FHIR R4 profile has been the baseline interoperability surface since HTI-1 certification, and HTI-2 expands the Predictive DSI disclosure requirements to include source attribute capture on AI-generated content — which is carried on Provenance and Composition resources. Note filing without structured DocumentReference or Composition metadata means the AI-generated status is lost once the note enters the EHR, creating a downstream audit problem. Context injection without a bounded resource list means prompt bloat, higher latency, and higher hallucination risk on irrelevant pulled data.
Note prompts
+ Have we drawn a bounded context-window map — which FHIR resources go into the prompt and in what order?
+ Are we writing DocumentReference with category coded for AI-assisted authorship, or filing as free-text notes?
+ Do we emit a Provenance resource per note that carries model version and source-attribute metadata?
Confirm which FHIR R4 resources you will read for context and which you will write on note finalization.
Select all that apply
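As a concrete illustration of the write side, a minimal sketch of the two resources: a DocumentReference filed as a preliminary draft and a companion Provenance carrying the model version. The authorship category coding is an assumption, since no single standard code for AI-assisted authorship exists yet; the resource IDs are placeholders.

```python
# Hedged sketch: filing an AI-generated draft with machine-readable
# authorship metadata. Values marked ASSUMPTION are organization-specific
# choices, not a published standard.
import base64
import datetime

note_text = "Draft SOAP note generated from the encounter transcript..."

document_reference = {
    "resourceType": "DocumentReference",
    "status": "current",
    "docStatus": "preliminary",          # draft pending clinician attestation
    "type": {"coding": [{"system": "http://loinc.org",
                         "code": "11506-3",
                         "display": "Progress note"}]},
    "category": [{"coding": [{
        "system": "https://example.org/fhir/CodeSystem/authorship",  # ASSUMPTION
        "code": "ai-assisted-draft",                                 # ASSUMPTION
    }]}],
    "subject": {"reference": "Patient/123"},                # placeholder id
    "context": {"encounter": [{"reference": "Encounter/456"}]},
    "content": [{"attachment": {
        "contentType": "text/plain",
        "data": base64.b64encode(note_text.encode()).decode(),
    }}],
}

provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "DocumentReference/789"}],  # id returned by the create
    "recorded": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "agent": [{
        "type": {"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
            "code": "assembler",         # the software that composed the draft
        }]},
        "who": {"display": "scribe-llm v2.3.1"},   # model + version (ASSUMPTION)
    }],
}
```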
Define HTI-2 Predictive DSI disclosure and source-attribute posture
Why This Matters
The ONC HTI-2 final rule expands the Predictive Decision Support Intervention (Predictive DSI) requirements from HTI-1 — certified EHR vendors and the health systems that deploy them must disclose AI-assisted documentation, capture source attributes, publish bias evaluations, and maintain an organizational DSI inventory. Enforcement is active in the 2025-2026 window and AI-generated clinical notes fall squarely inside scope. Health systems that layer an ambient scribe onto an HTI-2-certified EHR without wiring the disclosure surface are inheriting compliance risk they may not have budgeted for.
Note prompts
+ Have legal and compliance confirmed which of our ambient-scribe outputs are Predictive DSI under HTI-2 §170.315(b)(11)?
+ Do our clinicians see a disclosure badge on every AI-generated draft, or only on initial rollout?
+ Do we have a published model card covering intended use, training data, known limitations, and bias evaluation?
HTI-2 requires disclosing AI-generated clinical documentation as Predictive DSI with captured source attributes — decide the disclosure surface before Q4 2026 enforcement.
Select all that apply
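One low-friction way to keep the organizational DSI inventory current is to store each entry as structured data alongside the deployment config. The sketch below covers an illustrative subset of the §170.315(b)(11) source attributes; the field names are this sketch's own, since the rule defines required content rather than a storage schema, and every value shown is a placeholder.

```python
# Illustrative DSI-inventory entry covering a subset of the §170.315(b)(11)
# source attributes. Field names and values are placeholders, not a
# published schema.
dsi_inventory_entry = {
    "intervention_name": "Ambient scribe note generation",
    "intervention_output": "Draft clinical note from encounter audio",
    "developer": "In-house AI platform team",                 # placeholder
    "purpose": "Reduce per-encounter documentation time",
    "cautioned_out_of_scope_uses": [
        "Autonomous note signing without clinician review",
        "E/M coding or billing-level selection",
    ],
    "model_version": "scribe-llm v2.3.1",                     # placeholder
    "training_data_description": "De-identified encounter transcripts",  # placeholder
    "demographic_variables_used": [],      # document explicitly, even when empty
    "fairness_and_validity_process": "Quarterly per-specialty accuracy audit",
    "external_validation": None,           # disclose the absence explicitly
    "performance_measures": {"word_error_rate": None,
                             "draft_acceptance_rate": None},
    "maintenance_and_update_schedule": "Model card refreshed on every model swap",
    "last_reviewed": "2025-01-15",
}
```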
Define clinician attestation and sign-off workflow
Why This Matters
CMS has repeatedly reinforced that the attesting clinician is accountable for the accuracy of any note, regardless of whether a human or AI generated the draft — and E/M and billing integrity audits treat AI-generated notes under the same standard as human-authored ones. Workflows that let clinicians sign a note without explicit review create billing-integrity exposure that shows up months later in RAC audits. The attestation step is also where HTI-2 Predictive DSI disclosure is operationally anchored.
Note prompts
+ Is our attestation workflow a deliberate review step or a one-click pass-through on the way out the door?
+ Do we log the time spent reviewing the draft as evidence of genuine clinician oversight?
+ Who owns the policy that defines "adequate review" for AI-generated notes in our organization?
Every AI-generated note must have a clinician-of-record attestation — CMS accountability for AI-assisted note accuracy rests with the attesting clinician.
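One way to anchor "adequate review" operationally is an attestation audit record emitted per signed note. The field names and the ten-second pass-through threshold below are illustrative assumptions, not a CMS- or HTI-2-mandated schema.

```python
# Hedged sketch of a per-note attestation audit record: the evidence trail
# that a clinician actually reviewed the AI draft before signing.
import datetime

def build_attestation_record(note_id: str,
                             clinician_id: str,
                             draft_opened_at: datetime.datetime,
                             signed_at: datetime.datetime,
                             chars_edited: int,
                             model_version: str) -> dict:
    review_seconds = (signed_at - draft_opened_at).total_seconds()
    return {
        "note_id": note_id,
        "attesting_clinician": clinician_id,   # clinician of record
        "model_version": model_version,        # ties the note to the DSI inventory
        "review_seconds": review_seconds,      # duration evidence of oversight
        "chars_edited": chars_edited,          # zero-edit signatures are an audit flag
        "one_click_passthrough": review_seconds < 10 and chars_edited == 0,
        "signed_at": signed_at.isoformat(),
    }
```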
Deployment topology for inference plane
Select the physical / logical location for ASR and LLM inference.
Single choice
Trinidy — For HTI-2 transparency with full audit trail plus the latency demanded by in-encounter drafting, on-premises inference is the lowest-friction option. Trinidy runs both the ASR model and the note-generation LLM on the same on-prem fabric — no PHI leaves the data center, audit trails are local, and model swaps are operational rather than procurement events.
Map cross-border and residency constraints
Multi-national systems or research affiliations may pull EU and other jurisdictions into scope.
Select all that apply
Pilot & Rollout Scope
Define pilot cohort size and duration
Ambient scribe programs that skip a structured pilot burn trust with clinicians on the first misfire and rarely recover.
Single choice
Define primary success KPIs
Why This Matters
Validated DAX Copilot deployments consistently report 50-76% documentation-time reductions, with Providence Health specifically measuring 51.7% less documentation time, a 30.3% decrease in burnout, and a 62% lower probability of clinicians leaving the organization. But "documentation time reduction" measured informally often drifts from the rigorous methodology behind the peer-reviewed literature and AMA STEPS Forward benchmarks, producing claims that do not survive board-level scrutiny. Pick the KPIs you can measure with the same rigor as the published benchmarks.
Note prompts
+ Do we have a baseline measurement of per-encounter documentation time for each pilot specialty before rollout?
+ Which KPIs do we already measure in the EHR, and which require new instrumentation?
+ Who signs off on the primary KPI target — CMO, CMIO, or program sponsor?
Pick the two or three KPIs that will arbitrate the program — ones measurable against a pre-rollout baseline, not anecdote.
Select all that apply
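For the documentation-time KPI in particular, the measurement must be computed identically pre- and post-rollout. A minimal sketch, assuming your EHR audit feed exposes note-editor open/close events (the event names are placeholders):

```python
# Hedged sketch: per-encounter documentation time reconstructed from EHR
# audit-log timestamps, measured the same way pre- and post-rollout.
# Event type names ("note_editor_open"/"note_editor_close") stand in for
# whatever your EHR's audit feed actually emits.
from statistics import median

def documentation_minutes(events: list[dict]) -> float:
    """Sum the open->close intervals of the note editor for one encounter.
    Each event is {"type": str, "ts": float} with ts in epoch seconds."""
    total = 0.0
    opened = None
    for event in sorted(events, key=lambda e: e["ts"]):
        if event["type"] == "note_editor_open":
            opened = event["ts"]
        elif event["type"] == "note_editor_close" and opened is not None:
            total += (event["ts"] - opened) / 60.0
            opened = None
    return total

def documentation_time_reduction(baseline: list[float],
                                 post: list[float]) -> float:
    """Median-based delta, robust to a handful of marathon outlier encounters."""
    return 1.0 - median(post) / median(baseline)
```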
Quantify physician burnout financial exposure
Why This Matters
Physician burnout driven substantially by documentation is estimated at $4.6B+ annually in turnover alone across the US health system (Shanafelt / Annals of Internal Medicine), with replacement cost per physician typically $500K-$1M. Framing the ambient scribe program as a retention investment rather than an IT project unlocks capital budgets that CIO-only sponsorship cannot. The ROI math is straightforward once the organization-specific turnover baseline is measured.
Note prompts
+ What is our measured annual physician turnover rate and replacement cost?
+ Do we have a Maslach Burnout Inventory or similar baseline we can measure against post-deployment?
+ Is the ROI pitch owned by the CMO / CHRO, or stuck inside IT?
Documentation-driven burnout is the explicit ROI anchor — size it against your own turnover cost.
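The retention arithmetic itself fits in a few lines once the organization-specific inputs are measured. In the sketch below, the headcount, turnover rate, and attributed-reduction figures are illustrative assumptions; the replacement cost uses the midpoint of the $500K-$1M range cited above.

```python
# Back-of-envelope retention ROI. Headcount, turnover rate, and the
# attributed reduction are illustrative assumptions to be replaced with
# your own baseline measurements.
def annual_retention_value(physicians: int,
                           baseline_turnover_rate: float,
                           attributed_turnover_reduction: float,
                           replacement_cost: float) -> float:
    departures_avoided = (physicians * baseline_turnover_rate
                          * attributed_turnover_reduction)
    return departures_avoided * replacement_cost

# Illustrative: 800 physicians, 7% annual turnover, a conservative 10%
# relative reduction attributed to the program, $750K per replacement.
print(annual_retention_value(800, 0.07, 0.10, 750_000))  # -> 4,200,000.0
```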