Phase 1 of 6
Scoping & Mission Readiness Constraints
Define the fleet, platforms, readiness KPIs, classification boundary, and deployment topology that will govern every subsequent architectural decision for CBM+ predictive maintenance.
Fleet & Platform Surface
Identify platform classes in scope for predictive maintenance
Why This Matters
Platform class determines the sensor ontology, sustainment command of record, and whether the authoritative maintenance system is ALIS/ODIN (F-35), GCSS-Army (ground), NALCOMIS/OOMA (Navy/Marine aviation), or a bespoke OEM stack. A single tail number may cross multiple sustainment commands over its life, which breaks naive assumptions about label authority. Scoping the platform list up front is what makes the difference between a model that ships and a model that re-scopes in month six.
Note prompts
+ For each platform class in scope, who is the authoritative sustainment command and what is their system of record?
+ Which platforms have complete sensor coverage today versus partial retrofit?
+ Have we scoped the model to a single MDS (mission design series) first, or are we trying to span too many?
Confirm which platform families the CBM+ model must support. Each class carries a distinct sensor ontology, maintenance cadence, and sustainment command.
Select all that apply
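To make the scoping concrete, a minimal registry sketch in Python can pin each in-scope platform class to a single label authority, so a tail number that crosses sustainment commands cannot silently mix maintenance labels. The rows and field values below are entirely illustrative, not a real scope list:

```python
# Minimal sketch (rows hypothetical): a platform-scope registry that makes
# label authority explicit per platform class.
from dataclasses import dataclass

@dataclass(frozen=True)
class PlatformClass:
    mds: str                  # mission design series, e.g. "F-35A"
    sustainment_command: str  # authoritative command of record
    system_of_record: str     # where maintenance labels originate
    sensor_coverage: str      # "full" | "partial-retrofit"

SCOPE = [
    PlatformClass("F-35A", "F-35 JPO", "ALIS/ODIN", "full"),
    PlatformClass("M1A2", "TACOM", "GCSS-Army", "partial-retrofit"),
    PlatformClass("F/A-18E", "NAVAIR", "NALCOMIS/OOMA", "full"),
]

def label_authority(mds: str) -> str:
    """Resolve the single system of record allowed to supply training labels."""
    matches = [p for p in SCOPE if p.mds == mds]
    if len(matches) != 1:
        raise ValueError(f"{mds}: ambiguous or missing label authority")
    return matches[0].system_of_record
```

The single-match check is organizational rather than technical: two plausible systems of record for one MDS means label authority is still unresolved, and that is exactly the re-scope-in-month-six failure mode.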
Define the primary fleet readiness KPI the model is accountable to
Why This Matters
GAO reporting on F-35 mission-capable rates has been a sustained congressional focus, and per-flying-hour cost sits on the same scorecard. Choosing a single primary KPI forces the organization to commit to a tradeoff — MC rate and cost-per-flying-hour will sometimes move in opposite directions, and the model has to know which one the business actually optimizes. A mixed scorecard with no priority is usually a sign that stakeholders have not yet agreed on what the program optimizes.
Note prompts
+ Which KPI appears in the program O-5/O-6 scorecard, and is the model tuned to it?
+ How do we handle months where MC rate improves but cost-per-flying-hour regresses?
+ Is NMCS or NMCM the real binding constraint in our fleet — do we know?
Select the readiness metric that will arbitrate model promotion decisions and executive reporting.
Single choice
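A promotion gate turns the KPI commitment into an executable rule. The sketch below is illustrative only: it assumes MC rate is the primary KPI and cost-per-flying-hour (CPFH) the guardrail, and the thresholds are invented placeholders a real program would set deliberately:

```python
# Hedged sketch: promote only if the primary KPI improves and the
# secondary-KPI guardrail holds. Thresholds are placeholders.
def promote(mc_rate_delta: float, cpfh_delta: float,
            mc_min_gain: float = 0.5, cpfh_max_regression: float = 0.02) -> bool:
    """mc_rate_delta: change in mission-capable rate, percentage points.
    cpfh_delta: fractional change in cost per flying hour (+ = worse)."""
    return mc_rate_delta >= mc_min_gain and cpfh_delta <= cpfh_max_regression

# A month where MC rate improves but CPFH regresses past the guardrail fails:
assert promote(mc_rate_delta=1.2, cpfh_delta=0.01)
assert not promote(mc_rate_delta=1.2, cpfh_delta=0.05)
```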
Establish prediction horizon and maintenance lead time
Why This Matters
A model that predicts a failure 48 hours before it happens has no value if the nearest qualified technician and the needed LRU are a week out in the supply chain. Prediction horizon must be matched to the actual lead time in the sustainment system — parts availability, MOS/AFSC availability, and depot throughput. DoDI 4151.22 (CBM+) explicitly frames this as a sustainment-system decision, not a model-selection decision.
Note prompts
+ What is our measured parts-request-to-on-hand lead time for the top 20 critical LRUs?
+ Does our prediction horizon actually fit inside the sustainment system's ability to respond?
+ Have we tiered horizons by subsystem criticality, or are we one-sizing the model?
Select the forward-looking window the model must predict over, matched to the sustainment system's ability to act on it.
Single choice
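The horizon question reduces to an arithmetic check: the prediction window must cover the measured lead time plus scheduling slack, per LRU. A minimal sketch, with invented LRU names and lead-time figures:

```python
# Sketch of the horizon-fit check (all figures hypothetical): a prediction
# horizon is only useful if it exceeds the measured request-to-on-hand
# lead time plus technician scheduling slack.
LEAD_TIMES_DAYS = {
    "ips_controller": 9.0,
    "egi_unit": 21.0,
    "hydraulic_pump": 4.5,
}

def horizon_fits(horizon_days: float, lru: str, tech_slack_days: float = 2.0) -> bool:
    """True if the sustainment system can act within the prediction window."""
    return horizon_days >= LEAD_TIMES_DAYS[lru] + tech_slack_days

for lru in LEAD_TIMES_DAYS:
    print(lru, "ok" if horizon_fits(14.0, lru) else "horizon too short")
```

Run against a 14-day horizon, the 21-day LRU fails the check, which is the tiering signal: either lengthen the horizon for that subsystem or accept that the model cannot beat the supply chain there.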
Classify the sustainment data environment
Why This Matters
DFARS 252.204-7012 mandates safeguarding of Covered Defense Information and CUI under NIST SP 800-171. Platform telemetry, failure thresholds, and readiness rates routinely fall inside that boundary, and commercial predictive-maintenance vendors cannot hold this data unless their stack is attested. The most common scoping mistake is assuming sensor telemetry is "just vibration data" — aggregated, it reveals mission-capable rates, which are themselves sensitive.
Note prompts
+ Have we formally classified the training dataset, or inherited a prior determination?
+ Does our vendor actually hold the attestation it claims, and is it current?
+ If inference runs at Secret, can training at CUI still produce a usable model?
Confirm the classification boundary within which the training data, model weights, and inference runtime must reside.
Single choice
Trinidy — Fleet readiness status is operationally sensitive — an adversary who knows MC rates can time operations. Trinidy keeps training, inference, and audit logs entirely within the sustainment command's classified or CUI enclave. No commercial cloud touches operational readiness data.
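The boundary decision can be enforced mechanically. A minimal sketch, assuming a simple three-level ordering; real accreditation involves far more than a level comparison, so this only illustrates the placement rule:

```python
# Hedged sketch: data, weights, and runtime may only sit in an enclave
# accredited at or above the dataset's classification. Levels illustrative.
from enum import IntEnum

class Level(IntEnum):
    UNCLASSIFIED = 0
    CUI = 1
    SECRET = 2

def placement_ok(data: Level, enclave: Level) -> bool:
    """An enclave may hold data classified at or below its accreditation."""
    return enclave >= data

# Telemetry that aggregates into MC rates is treated as CUI, so a
# commercial cloud accredited only for UNCLASSIFIED is rejected:
assert not placement_ok(Level.CUI, Level.UNCLASSIFIED)
assert placement_ok(Level.CUI, Level.SECRET)  # train at CUI, infer at Secret
```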
Specify deployment topology for inference plane
Select the physical deployment target for the CBM+ inference ensemble. On-platform, ship-side, depot, and sustainment-command all carry different constraints.
Single choice
Trinidy — For on-platform inference (aircraft avionics bay, ship engineering space, ground vehicle crew station), cloud inference, even from a classified region, is physically unavailable. Trinidy is the on-platform and depot inference substrate — the same fabric from airborne edge to sustainment command.
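One way to evaluate the topology tradeoffs is a constraint table the inference ensemble must fit inside. The values below are notional, not measured from any platform:

```python
# Illustrative topology constraint table (values notional) used to
# sanity-check a candidate deployment against power and connectivity limits.
TOPOLOGIES = {
    "on_platform":         {"power_w": 50,   "network": "none",         "update": "depot-touch"},
    "ship_side":           {"power_w": 500,  "network": "intermittent", "update": "emcon-window"},
    "forward_depot":       {"power_w": 2000, "network": "limited",      "update": "scheduled"},
    "sustainment_command": {"power_w": None, "network": "persistent",   "update": "continuous"},
}

def fits(topology: str, model_power_w: float) -> bool:
    """Reject a model whose inference draw exceeds the site power budget."""
    budget = TOPOLOGIES[topology]["power_w"]
    return budget is None or model_power_w <= budget
```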
Define disconnected / DDIL operations requirement
Why This Matters
A carrier strike group in EMCON, a submarine on deterrent patrol, or an Army forward support battalion in a GPS-denied exercise all need the maintenance model to continue deciding without reach-back. Building for DDIL up front is far cheaper than retrofitting — model size, label sync, and update channel all have to be designed for the disconnected case.
Note prompts
+ What is the longest disconnection window we need to survive with full decisioning?
+ Can our model and feature store fit on the deployed hardware without reach-back?
+ How do we reconcile logs and retrain when the node reconnects?
Forward-deployed units, carrier strike groups, and expeditionary depots routinely operate under denied, degraded, intermittent, or limited-bandwidth (DDIL) conditions.
Single choice
Trinidy — Trinidy is architected for zero-cloud-dependency inference by default. The on-platform or forward depot node scores every reading independently — loss of reach-back to sustainment command has no impact on maintenance decisioning, only on model refresh.
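A sketch of the DDIL pattern described above, with invented interfaces (uplink.send and uplink.fetch_model are placeholders, not a real API): the node scores locally and treats reconnection as a log drain and optional model refresh, never as a prerequisite for decisioning.

```python
# Minimal DDIL sketch (interfaces hypothetical): score every reading
# locally, queue decision logs, reconcile and refresh on reconnect.
import collections
import json
import time

class DisconnectedNode:
    def __init__(self, model):
        self.model = model                        # callable: reading -> risk score
        self.pending_logs = collections.deque()

    def score(self, reading: dict) -> float:
        risk = self.model(reading)                # local inference, no reach-back
        self.pending_logs.append(json.dumps(
            {"t": time.time(), "reading": reading, "risk": risk}))
        return risk

    def on_reconnect(self, uplink) -> None:
        while self.pending_logs:                  # reconcile logs for retraining
            uplink.send(self.pending_logs.popleft())
        self.model = uplink.fetch_model() or self.model  # optional refresh
```

The design point is the asymmetry: score() depends on nothing off-node, while on_reconnect() is purely additive, so the longest survivable disconnection window is bounded by local storage and model staleness, not by decisioning.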
Establish cost-of-action budget and sortie impact target
Why This Matters
DoD O&M runs $80B+ annually in depot and sustainment activity. The C3 AI PANDA contract ceiling of $450M (May 2020) sets the scale at which a single service treats predictive maintenance as a program-of-record investment. Anchoring the model program to a dollar-denominated budget changes how program, engineering, and sustainment prioritize — without it, teams optimize recall in ways the program cannot afford.
Note prompts
+ What was our unscheduled maintenance spend last year, and what portion does CBM+ target?
+ Who owns the sustainment P&L line this model is accountable to?
+ Is readiness an executive scorecard metric alongside cost-per-flying-hour?
Quantify the annual sustainment dollar and sortie impact the model is accountable for.
Single choice
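The budget question implies a threshold rule: alert when the expected cost of acting is below the expected cost of waiting. A worked sketch with invented dollar figures:

```python
# Worked example (dollar figures invented) of the cost-of-action logic:
# set the alert threshold by comparing the expected cost of a false
# removal against the expected cost of a missed in-service failure.
COST_FALSE_REMOVAL = 60_000    # unneeded LRU pull + lost sortie, hypothetical
COST_MISSED_FAILURE = 900_000  # unscheduled break + recovery, hypothetical

def expected_cost(p_fail: float, alert: bool) -> float:
    """Expected dollar cost of alerting (or not) at failure probability p."""
    if alert:
        return (1 - p_fail) * COST_FALSE_REMOVAL
    return p_fail * COST_MISSED_FAILURE

def should_alert(p_fail: float) -> bool:
    return expected_cost(p_fail, True) < expected_cost(p_fail, False)

# Break-even: alert when p > C_fr / (C_fr + C_mf) = 0.0625 with these figures.
print(should_alert(0.05), should_alert(0.10))  # False, True
```

This is why an undenominated recall target is dangerous: without the two cost figures, there is no principled threshold, and the model team will pick one that the sustainment P&L owner never agreed to.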
Program authority and sustainment command of record
Identify the program office, sustainment command, and contracting authority for the predictive maintenance model.
Select all that apply