Phase 1 of 6
Scoping & Clinical-Grade Requirements
Define the diagnostic surface, intended use, clinical integration path, and data sovereignty posture that govern every downstream architectural decision for genomic and precision medicine inference.
Clinical Intended Use & Diagnostic Surface
Define the intended use and target diagnostic workflow
Why This Matters
Intended use is the single load-bearing decision in a genomics AI program because it determines which FDA pathway, which CLIA/CAP posture, and which EU AI Act and IVDR obligations attach. A germline variant classifier for carrier screening, a somatic CDx aligned to a targeted therapy, and a research-grade AlphaFold-class target ID pipeline share almost no regulatory surface even if they share infrastructure. The most expensive mistake in this space is scoping one model and then silently broadening its use after deployment — that is how an LDT becomes an unapproved medical device.
Note prompts
- Is the intended use documented in a clinical protocol and reviewed by medical affairs before any model is built?
- Which downstream clinician is the end user, and what decision are they making with the output?
- Have we explicitly documented uses that are out of scope, so scope creep is visible?

Specify which clinical decisions the genomic inference stack will inform and at what point in the care pathway.
Select all that apply
Select the FDA regulatory pathway
Why This Matters
The FDA finalized its rule bringing laboratory-developed tests under IVD oversight in 2024 with a multi-year phase-in, which materially changes the regulatory risk for genomic labs that previously relied on the LDT enforcement discretion. Companion diagnostics aligned to targeted oncology therapies typically require PMA with co-approval of the CDx and the drug. Multiple AI-driven companion diagnostics have now been FDA-cleared, which sets expectations for validation rigor that newer entrants will be measured against.
Note prompts
- Have we engaged with FDA via a pre-submission (Q-Sub) before building validation evidence?
- If we intend to run as an LDT, have we mapped the 2024 final rule phase-in milestones against our launch plan?
- Who in regulatory affairs owns the pathway decision and signs off on deviations?

Confirm which FDA pathway the intended use falls under, or document why the model is research-use-only.
Single choice
Apply the FDA Predetermined Change Control Plan (PCCP) framework
Why This Matters
FDA finalized guidance on Predetermined Change Control Plans for AI-enabled device functions in September 2025, giving manufacturers a structured way to pre-authorize a defined envelope of model updates (retraining on new data, threshold tuning, new input modalities) without a new 510(k) or PMA supplement. Genomic models drift as reference databases (ClinVar, gnomAD) update and as cohort composition changes, so a PCCP is often the only practical path to keep a cleared model clinically current.
Note prompts
- Does our validation plan distinguish PCCP-scoped updates from changes that trigger a new submission?
- Have we defined the performance envelope inside which retraining is pre-authorized?
- Who approves each in-envelope update and where is the change record filed?

Document which model updates are pre-authorized under a PCCP so iteration does not require re-submission for every change.
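One way to make the pre-authorized envelope concrete is a machine-checkable gate that every retrained candidate must pass before an in-envelope release. A minimal sketch, assuming hypothetical metric names and thresholds (the real envelope is whatever the PCCP on file specifies):

```python
# Hypothetical PCCP performance envelope: a retrained model may ship
# without a new submission only if every metric stays inside bounds
# pre-authorized in the PCCP. All names and numbers are illustrative.

ENVELOPE = {
    "sensitivity_pathogenic": (0.98, 1.00),   # (min, max) allowed
    "specificity_benign":     (0.95, 1.00),
    "vus_rate":               (0.00, 0.25),   # fraction classified as VUS
}

def in_envelope(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (ok, violations) for a candidate model's validation metrics."""
    violations = []
    for name, (lo, hi) in ENVELOPE.items():
        value = metrics.get(name)
        if value is None or not (lo <= value <= hi):
            violations.append(f"{name}={value} outside [{lo}, {hi}]")
    return (not violations, violations)

# A candidate that fails this gate is, by definition, outside the PCCP
# scope and needs a new submission rather than an in-envelope release.
ok, why = in_envelope({"sensitivity_pathogenic": 0.991,
                       "specificity_benign": 0.972,
                       "vus_rate": 0.18})
```

Checking the envelope in CI, with the result filed in the change record, gives regulatory affairs an auditable answer to "was this update in scope?" for every retrain.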
Classify the system under the EU AI Act
Why This Matters
The EU AI Act (Regulation 2024/1689) treats AI systems that are safety components of medical devices and IVDs as high-risk, which brings obligations around data governance, technical documentation, logging, human oversight, accuracy and robustness, and post-market monitoring that stack on top of IVDR obligations. Genomic variant interpretation and CDx-adjacent AI clearly fall inside this envelope. The obligations are not satisfied by a generic cloud inference endpoint — the provider must demonstrate controls end-to-end.
Note prompts
- Have we run a formal EU AI Act classification against Annexes I and III, documented in the technical file?
- Is our IVDR technical file structured so the EU AI Act obligations map onto it without duplication?
- What is our post-market monitoring plan for the high-risk classification?

Document whether the system is a high-risk AI system under Regulation (EU) 2024/1689 and what obligations attach.
Single choice
Confirm CLIA and lab accreditation posture
Why This Matters
A high-complexity CLIA-certified lab that runs NGS clinically must show proficiency testing, analytical validation, and personnel competency for the entire pipeline including bioinformatics and any AI-driven variant interpretation. CAP now assesses bioinformatics pipelines explicitly, and NY CLEP is a separate and stringent state review that many national labs discover late. ISO 15189 is the prevailing standard in the EU. Accreditation scope decisions made in Phase 1 save months of rework during inspection.
Note prompts
- Is our AI variant interpretation step inside or outside the accredited pipeline boundary?
- Have we budgeted for a separate NYS CLEP submission if we intend to report on NY specimens?
- Who owns the CAP bioinformatics checklist evidence for the AI components?

Map which components of the pipeline fall under CLIA (42 CFR 493) and CAP or ISO 15189 accreditation.
Select all that apply
Establish data residency and sovereignty constraints for genomic data
Map germline, somatic, and pedigree data to jurisdictional constraints before any cloud decision is finalized.
Select all that apply
Trinidy — Genomic data cannot be de-identified in any robust sense — a single read can re-identify the patient and implicates every biological relative. Trinidy keeps variant calling, interpretation, and reporting entirely within the institution's own perimeter, so GINA, HIPAA, IRB protocols, and GDPR/EU AI Act obligations are satisfied by construction rather than by contract.
Define human-in-the-loop requirement for clinical reporting
Why This Matters
Both the EU AI Act (high-risk human oversight obligation) and CAP molecular pathology checklists require meaningful human review of clinical genomic reports, and the ACMG/AMP 2015 variant classification standard is written around expert interpretation. A fully autonomous pipeline is only defensible inside the narrow envelope of a specifically cleared CDx workflow. Tier-based review, where the model pre-classifies and the pathologist focuses on variants of uncertain significance (VUS), is the current practical default at most clinical genomic labs.
Note prompts
- What is our target sign-out turnaround, and does the review model fit inside it at realistic volume?
- Does the clinician see model confidence, feature attribution, and dissenting evidence — or just a label?
- How do we capture reviewer disagreement as a training signal without contaminating the next version?

Specify where a board-certified clinician must review model output before a result is released.
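The tier-based default described above can be expressed as an explicit routing policy, so the human-in-the-loop requirement is a documented rule rather than an informal habit. A minimal sketch — the tier names follow ACMG/AMP 2015 terminology, but the routing policy and confidence threshold are illustrative assumptions, not a regulatory recommendation:

```python
# Sketch of tier-based review routing: the model pre-classifies each
# variant, and the policy decides how much pathologist attention the
# call gets before release. Every call still receives human sign-out;
# the policy only decides the depth of review.

REQUIRES_FULL_REVIEW = {"uncertain_significance"}  # VUS gets full expert workup

def review_route(model_class: str, confidence: float,
                 threshold: float = 0.95) -> str:
    """Return the review path for one variant call."""
    if model_class in REQUIRES_FULL_REVIEW or confidence < threshold:
        return "full_expert_review"   # pathologist re-interprets from evidence
    return "signout_review"           # pathologist confirms and signs out

route = review_route("uncertain_significance", 0.99)  # VUS always routes to full review
```

Making the policy executable also makes it testable at realistic volume: replaying a month of calls through `review_route` shows whether the full-review queue fits inside the target sign-out turnaround.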
Single choice
Specify inference deployment topology
Select the physical and logical deployment target for variant interpretation and structure prediction.
Single choice
Trinidy — Whole-genome variant calling and AlphaFold 3-class structure prediction both want dedicated high-memory GPUs (80GB+ VRAM) with reproducible, auditable job scheduling — not shared multi-tenant cloud inference slots. Trinidy runs the full stack on-premises on the institution's own GPU fabric, with complete job provenance captured by default.
Define acceptable turnaround time by workflow
Set the clinical turnaround time budget each workflow must meet end-to-end from sample to signed report.
Select all that apply
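A turnaround-time commitment only holds if the per-stage budgets underneath it sum correctly, sample to signed report. A minimal sketch of that accounting — the workflow name, stage names, and hour figures are illustrative placeholders, not recommendations:

```python
# Sketch of an end-to-end TAT budget check: each clinical workflow
# carries per-stage budgets (in hours) that must fit inside the
# turnaround commitment made to ordering clinicians. All numbers
# are placeholders to be replaced with measured stage times.

TAT_BUDGETS_H = {
    "somatic_cdx_panel": {
        "accession_and_extraction": 8,
        "sequencing": 30,
        "variant_calling": 4,
        "ai_interpretation": 1,
        "pathologist_signout": 5,
    },
}

def fits_commitment(workflow: str, commitment_h: float) -> bool:
    """True if the summed stage budgets meet the TAT commitment."""
    return sum(TAT_BUDGETS_H[workflow].values()) <= commitment_h
```

Keeping the budget as data makes the trade-off visible: if a richer human-review model adds hours to sign-out, the check fails loudly instead of the commitment slipping quietly.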