Phase 1 of 6
Scoping & Clinical-Grade Requirements
Define the diagnostic surface, intended use, clinical integration path, and data sovereignty posture that govern every downstream architectural decision for genomic and precision medicine inference.
Clinical Intended Use & Diagnostic Surface
Define the intended use and target diagnostic workflow
Why This Matters
Intended use is the single load-bearing decision in a genomics AI program because it determines which FDA pathway, which CLIA/CAP posture, and which EU AI Act and IVDR obligations attach. A germline variant classifier for carrier screening, a somatic CDx aligned to a targeted therapy, and a research-grade AlphaFold-class target ID pipeline share almost no regulatory surface even if they share infrastructure. The most expensive mistake in this space is scoping one model and then silently broadening its use after deployment — that is how an LDT becomes an unapproved medical device.
Note prompts
+ Is the intended use documented in a clinical protocol and reviewed by medical affairs before any model is built?
+ Which downstream clinician is the end user, and what decision are they making with the output?
+ Have we explicitly documented uses that are out of scope, so scope creep is visible?
Required
Specify which clinical decisions the genomic inference stack will inform and at what point in the care pathway.
Select all that apply
Germline variant interpretation (hereditary disease / carrier testing)
Somatic / tumor variant calling and annotation (oncology)
Tumor mutation burden (TMB) and microsatellite instability (MSI) scoring
Pharmacogenomic (PGx) drug-gene interaction prediction
Companion diagnostic (CDx) for a specific therapeutic
Protein structure prediction for target identification
Clinical variant report generation (NLP over literature + ClinVar)
Newborn screening / rare disease diagnostic odyssey
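One way to keep scope creep visible in the running system, not only in the protocol, is to encode the documented in-scope and out-of-scope uses as a manifest that the inference service checks on every request. A minimal sketch under stated assumptions: the use labels, `ALLOWED_USES`, and `check_intended_use` are all illustrative names, not part of any real framework.

```python
# Illustrative machine-checkable intended-use manifest.
# Every label below is a hypothetical example, not a real product scope.
ALLOWED_USES = {
    "germline_carrier_screening",
    "somatic_variant_annotation",
    "pgx_drug_gene_interaction",
}

OUT_OF_SCOPE = {
    "embryo_selection",           # explicitly documented as out of scope
    "direct_to_consumer_report",  # no clinician in the loop
}

def check_intended_use(requested_use: str) -> bool:
    """Reject any inference request whose declared use is not in scope."""
    if requested_use in OUT_OF_SCOPE:
        raise ValueError(f"documented out-of-scope use: {requested_use}")
    if requested_use not in ALLOWED_USES:
        raise ValueError(f"undocumented use, escalate to regulatory: {requested_use}")
    return True
```

The point of the gate is that broadening the scope requires a visible, reviewable change to the manifest rather than a silent change in how callers use the endpoint.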
Select the FDA regulatory pathway
Why This Matters
The FDA finalized its rule bringing laboratory-developed tests under IVD oversight in 2024 with a multi-year phase-in, which materially changes the regulatory risk for genomic labs that previously relied on LDT enforcement discretion. Companion diagnostics aligned to targeted oncology therapies typically require PMA, with co-approval of the CDx and the drug. Multiple AI-driven companion diagnostics have now been FDA-approved, which sets expectations for validation rigor that newer entrants will be measured against.
Note prompts
+ Have we engaged with FDA via a pre-submission (Q-Sub) before building validation evidence?
+ If we intend to run as an LDT, have we mapped the 2024 final rule phase-in milestones against our launch plan?
+ Who in regulatory affairs owns the pathway decision and signs off on deviations?
Required
Confirm which FDA pathway the intended use falls under, or document why the model is research-use-only.
Single choice
FDA 510(k) — substantial equivalence to a predicate IVD
FDA PMA — premarket approval (novel Class III companion diagnostic)
De Novo classification (novel low/moderate risk)
Laboratory-developed test (LDT) under FDA 2024 final rule phase-in
Research-use-only (RUO) — no clinical decisions
Undecided — pathway still under review with regulatory affairs
Apply the FDA Predetermined Change Control Plan (PCCP) framework
Why This Matters
FDA finalized guidance on Predetermined Change Control Plans for AI-enabled device software functions in December 2024, giving manufacturers a structured way to pre-authorize a defined envelope of model updates (retraining on new data, threshold tuning, new input modalities) without a new 510(k) or PMA supplement. Genomic models drift as reference databases (ClinVar, gnomAD) update and as cohort composition changes, so a PCCP is often the only practical path to keep a cleared model clinically current.
Note prompts
+ Does our validation plan distinguish PCCP-scoped updates from changes that trigger a new submission?
+ Have we defined the performance envelope inside which retraining is pre-authorized?
+ Who approves each in-envelope update and where is the change record filed?
Recommended
Document which model updates are pre-authorized under a PCCP so iteration does not require re-submission for every change.
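The performance envelope the prompts above ask about can be expressed as data and enforced in CI before any retrained model is promoted. A hedged sketch: the metric names and bounds below are invented examples, not values from any cleared PCCP.

```python
# Hypothetical PCCP performance envelope: a retrained model may be
# released without a new submission only if every pre-authorized
# metric is present and inside its bounds. Example values only.
PCCP_ENVELOPE = {
    "sensitivity_pathogenic": (0.985, 1.0),   # (min, max)
    "specificity_benign":     (0.990, 1.0),
    "vus_rate":               (0.0, 0.20),    # fraction reported as VUS
}

def within_pccp_envelope(metrics: dict) -> bool:
    """True only if every envelope metric is reported and in bounds."""
    for name, (lo, hi) in PCCP_ENVELOPE.items():
        if name not in metrics or not (lo <= metrics[name] <= hi):
            return False
    return True
```

A missing metric fails closed, which matches the intent of a PCCP: an update that cannot demonstrate it is in-envelope is, by definition, a change that needs a new submission decision.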
Classify the system under the EU AI Act
Why This Matters
The EU AI Act (Regulation 2024/1689) treats AI systems that are safety components of medical devices and IVDs as high-risk, which brings obligations around data governance, technical documentation, logging, human oversight, accuracy and robustness, and post-market monitoring that stack on top of IVDR obligations. Genomic variant interpretation and CDx-adjacent AI clearly fall inside this envelope. The obligations are not satisfied by a generic cloud inference endpoint — the provider must demonstrate controls end-to-end.
Note prompts
+ Have we run a formal EU AI Act classification against Annexes I and III, documented in the technical file?
+ Is our IVDR technical file structured so the EU AI Act obligations map onto it without duplication?
+ What is our post-market monitoring plan for the high-risk classification?
Required
Document whether the system is a high-risk AI system under Regulation (EU) 2024/1689 and what obligations attach.
Single choice
High-risk — medical device / IVD safety component (Annex I)
High-risk — other Annex III category
Limited-risk — transparency obligations only
Minimal-risk — no specific obligations
Not placed on the EU market — out of scope
Confirm CLIA and lab accreditation posture
Why This Matters
A high-complexity CLIA-certified lab that runs NGS clinically must show proficiency testing, analytical validation, and personnel competency for the entire pipeline including bioinformatics and any AI-driven variant interpretation. CAP now assesses bioinformatics pipelines explicitly, and NY CLEP is a separate and stringent state review that many national labs discover late. ISO 15189 is the prevailing standard in the EU. Accreditation scope decisions made in Phase 1 save months of rework during inspection.
Note prompts
+ Is our AI variant interpretation step inside or outside the accredited pipeline boundary?
+ Have we budgeted for a separate NYS CLEP submission if we intend to report on NY specimens?
+ Who owns the CAP bioinformatics checklist evidence for the AI components?
Required
Map which components of the pipeline fall under CLIA (42 CFR 493) and CAP or ISO 15189 accreditation.
Select all that apply
CLIA high-complexity certification (42 CFR 493)
CAP laboratory accreditation
ISO 15189 (medical laboratories)
ISO/IEC 17025 (for research-only pipelines)
NYS CLEP permit (required for NY specimens)
State-specific licensure (CA, FL, MD, PA, RI)
Not yet determined
Establish data residency and sovereignty constraints for genomic data
Required
Map germline, somatic, and pedigree data to jurisdictional constraints before any cloud decision is finalized.
Select all that apply
On-premises in the institution's own data center
Private cloud in-region under BAA (HIPAA)
EU residency required (GDPR / EU AI Act)
UK residency required (UK GDPR)
IRB-mandated no-cloud restriction for certain cohorts
Tribal / Indigenous data sovereignty commitments
Cross-border permitted under SCCs for de-identified metadata only
Trinidy: Genomic data cannot be de-identified in any robust sense — a single read can re-identify the patient and implicates every biological relative. Trinidy keeps variant calling, interpretation, and reporting entirely within the institution's own perimeter, so GINA, HIPAA, IRB protocols, and GDPR/EU AI Act obligations are satisfied by construction rather than by contract.
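The jurisdictional mapping above can be captured as a fail-closed policy table that the orchestration layer consults before any data placement. A minimal sketch, assuming invented data-class and region labels; nothing here reflects a real deployment.

```python
# Illustrative residency policy: each data class maps to the set of
# regions where it may be stored or processed. Labels are examples.
RESIDENCY_POLICY = {
    "germline_vcf":      {"onprem"},                         # IRB no-cloud cohort
    "somatic_vcf":       {"onprem", "eu_private"},           # BAA / in-region
    "deidentified_meta": {"onprem", "eu_private", "us_cloud"},  # SCCs only
}

def placement_allowed(data_class: str, region: str) -> bool:
    """Fail closed: an unknown data class is never placeable anywhere."""
    return region in RESIDENCY_POLICY.get(data_class, set())
```

Keeping the policy in data rather than scattered across pipeline code means a sovereignty commitment (for example a tribal data agreement) becomes one auditable table entry.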
Define human-in-the-loop requirement for clinical reporting
Why This Matters
Both the EU AI Act (high-risk human oversight obligation) and CAP molecular pathology checklists require meaningful human review of clinical genomic reports, and the ACMG/AMP 2015 variant classification standard is written around expert interpretation. A fully autonomous pipeline is only defensible inside the narrow envelope of a specifically cleared CDx workflow. Tier-based review, where the model pre-classifies and the pathologist focuses on variants of uncertain significance (VUS), is the current practical default at most clinical genomic labs.
Note prompts
+ What is our target sign-out turnaround, and does the review model fit inside it at realistic volume?
+ Does the clinician see model confidence, feature attribution, and dissenting evidence — or just a label?
+ How do we capture reviewer disagreement as a training signal without contaminating the next version?
Required
Specify where a board-certified clinician must review model output before a result is released.
Single choice
Every variant classification reviewed and signed by a molecular pathologist / geneticist
Tier-based review (all pathogenic and likely pathogenic reviewed; benign auto-released)
Exception-only review (model-confidence-triggered escalation)
Autonomous release for specific cleared CDx workflows
Research-only — no clinical sign-out required
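The tier-based default described above can be sketched as a small routing function: pathogenic, likely pathogenic, and VUS calls always go to a pathologist, while benign calls auto-release only above a confidence threshold. The classification labels follow the ACMG/AMP five-tier scheme; the routing rules and threshold are illustrative assumptions, not a validated policy.

```python
# Hypothetical tier-based review routing for one variant call.
REVIEW_QUEUE = {"pathogenic", "likely_pathogenic", "vus"}
AUTO_RELEASE = {"benign", "likely_benign"}

def route_variant(classification: str, model_confidence: float,
                  escalation_threshold: float = 0.95) -> str:
    """Return 'pathologist_review' or 'auto_release' for a classified variant."""
    if classification in REVIEW_QUEUE:
        return "pathologist_review"
    if classification in AUTO_RELEASE and model_confidence >= escalation_threshold:
        return "auto_release"
    # Low-confidence benign calls, and anything unrecognized, escalate.
    return "pathologist_review"
```

Note the asymmetry: confidence can only pull a call *into* human review, never push a pathogenic call out of it, which is what makes the oversight obligation meaningful.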
Specify inference deployment topology
Required
Select the physical and logical deployment target for variant interpretation and structure prediction.
Single choice
On-premises GPU cluster inside the institution (HIPAA + IRB native)
Sovereign private cloud in-region (EU / UK / US)
Co-located GPU capacity under BAA
Hybrid: on-prem inference + cloud training on de-identified cohorts
Public cloud HPC with PHI routed via hashed identifiers only
Trinidy: Whole-genome variant calling and AlphaFold 3-class structure prediction both want dedicated high-memory GPUs (80GB+ VRAM) with reproducible, auditable job scheduling — not shared multi-tenant cloud inference slots. Trinidy runs the full stack on-premises on the institution's own GPU fabric, with complete job provenance captured by default.
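Whatever topology is selected, auditable job scheduling implies that each inference job emits a provenance record: input hashes, model version, and the reference-database releases in effect. A minimal sketch; the field names and `provenance_record` helper are assumptions for illustration, not any vendor's schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(vcf_bytes: bytes, model_version: str,
                      reference_versions: dict) -> str:
    """Serialize a reproducibility record for one variant-interpretation job."""
    record = {
        "input_sha256": hashlib.sha256(vcf_bytes).hexdigest(),
        "model_version": model_version,
        "references": reference_versions,  # e.g. ClinVar / gnomAD releases
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)
```

Pinning reference-database releases in the record matters because a reinterpretation months later must be able to distinguish model drift from a ClinVar update.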
Define acceptable turnaround time by workflow
Required
Set the clinical turnaround time budget each workflow must meet end-to-end from sample to signed report.
Select all that apply
Urgent somatic (targeted panel): < 5 business days
Standard tumor / normal WES: 10–14 days
Whole-genome germline: 14–21 days
Rapid WGS (NICU / critical inpatient): < 48 hours
Pharmacogenomic panel: < 72 hours
Research cohort — batched, non-clinical SLA
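A turnaround budget is only useful if the pipeline can be checked against it: sum the per-stage durations and compare to the workflow's end-to-end budget. The workflow keys and hour figures below are illustrative assumptions drawn from the options above, not measured values.

```python
# Hypothetical end-to-end turnaround (TAT) budgets, in hours.
TAT_BUDGET_HOURS = {
    "rapid_wgs_nicu": 48,   # rapid WGS, NICU / critical inpatient
    "pgx_panel": 72,        # pharmacogenomic panel
}

def within_tat(workflow: str, stage_hours: list) -> bool:
    """True if the summed pipeline stages fit the workflow's TAT budget."""
    return sum(stage_hours) <= TAT_BUDGET_HOURS[workflow]
```

Running this check against realistic stage timings (sequencing, secondary analysis, interpretation, sign-out) early in Phase 1 exposes whether the chosen review model from the human-in-the-loop item actually fits the clinical SLA.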