Phase 1 of 6
Scoping & ISR Mission Constraints
Define the mission, sensor envelope, classification boundary, and latency budget before a single detector is trained. ISR analytics that are not scoped against mission doctrine will fail accreditation before they reach the analyst.
Mission Definition & Supported Commands
Identify the supported mission and requesting command
Why This Matters
Project Maven's expansion to 20,000+ users now spans every major CCMD — INDOPACOM, EUCOM, CENTCOM, NORAD/NORTHCOM, SPACECOM, TRANSCOM, and AFRICOM — each with a distinct target set, tempo, and analyst workflow. An INDOPACOM maritime detector is not an EUCOM ground-order-of-battle detector; conflating them dilutes training data and produces models that are mediocre at both missions. Joint Pub 2-0 (Joint Intelligence) requires that intelligence production be tied to a named commander's priority intelligence requirement (PIR), and an ISR analytic that cannot name the PIR it supports will not survive functional review.
Note prompts
+ Which named PIR does this analytic answer, and whose CCIR list does it appear on?
+ If we support multiple CCMDs, are we running a unified model or command-tailored variants?
+ Has the requesting J2 / G2 validated the analytic's CONOPS in writing?
Required
Confirm which command, service, or agency the ISR analytic supports and the doctrinal mission it feeds.
Select all that apply
INDOPACOM — maritime domain awareness / PRC ORBAT
EUCOM — Eastern flank ground ORBAT and logistics
CENTCOM — counter-terrorism pattern of life
NORTHCOM / NORAD — homeland air and maritime approaches
AFRICOM — VEO tracking / maritime interdiction
SOUTHCOM — counter-narcotics / illicit maritime
SPACECOM — space domain awareness
Service-level (Army G-2 / Navy N2 / AF A2 / USMC G-2)
National agency (NGA / NSA / NRO / DIA)
Coalition / Five Eyes shared mission
Map the analytic to JCIDS capability gap and CJADC2 architecture
Why This Matters
CJCSI 3170.01 (JCIDS) governs how capabilities are validated and funded, and the DoD has been explicit that AI/ML ISR capabilities supporting Combined Joint All-Domain Command and Control (CJADC2) must be architected for interoperability from day one — not bolted on after model development. The Replicator initiative and Maven Smart System already publish data-fabric interfaces that new analytics are expected to consume and emit. An analytic that produces outputs only to a proprietary silo will never integrate into the CJADC2 kill-web and therefore never reach operational value.
Note prompts
+ Have we identified the JCIDS capability document (ICD / CDD / CPD) this analytic traces to?
+ What CJADC2 / Maven Smart System data standards will our outputs conform to (track messages, Link-16-adjacent formats, Observation schemas)?
+ Has our architecture been reviewed against the DoD CIO reference architecture for Data and AI?
Required
Tie the ISR analytic to a documented capability gap and the CJADC2 data fabric it must interoperate with.
Define analytic latency budget by sensor modality
Why This Matters
FMV object detection at 1–5 seconds is not a user preference — it is the window in which a moving entity remains usefully trackable before it leaves the frame or disappears into cover. SIGINT emitter classification at sub-second latency is driven by short-dwell adversary emissions that are gone before a batch job completes. Treating latency as an engineering optimization rather than a mission constraint produces analytics that are accurate in the lab and useless in theater.
Note prompts
+ Where is the longest segment of our current end-to-end latency — sensor-to-node, inference, or dissemination?
+ Have we measured latency on the actual link (SATCOM, LPI/LPD, DDIL) rather than in a lab?
+ What does the analyst see during the latency gap — last-known position, predicted track, or nothing?
Required
Select the mission-driven latency envelope for each modality the analytic must serve.
Single choice
Sub-1s (SIGINT / ELINT emitter classification)
1–5s (FMV object detection / tracking)
5–30s (EO still imagery change detection)
10–60s (SAR vehicle and structure analysis)
Minutes (HUMINT / text entity resolution)
Tiered by modality (mixed SLA)
Trinidy: Reachback to a CONUS data center over a MILSATCOM link can consume 600–1500 ms of round trip alone — incompatible with sub-5-second FMV inference over a contested link. Trinidy hosts detection, tracking, and embedding inference on-platform or at the tactical edge, so the SATCOM hop is reserved for product dissemination, not inference.
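The latency tiers above lend themselves to a machine-checkable budget: total sensor-to-analyst latency (link round trip plus inference plus dissemination) must fit the modality's envelope. A minimal Python sketch, not a real program artifact; the modality keys and the fits_budget helper are invented for illustration, and the budget values mirror the tiers listed here.

```python
# Per-modality latency budgets in seconds, mirroring the tiers above.
LATENCY_BUDGET_S = {
    "sigint_emitter": 1.0,          # sub-1s SIGINT / ELINT classification
    "fmv_detection": 5.0,           # 1-5s FMV object detection / tracking
    "eo_change_detection": 30.0,    # 5-30s EO still imagery
    "sar_analysis": 60.0,           # 10-60s SAR vehicle / structure analysis
    "text_entity_resolution": 300.0 # minutes-scale HUMINT / text
}

def fits_budget(modality: str, link_rtt_s: float, inference_s: float,
                dissemination_s: float) -> bool:
    """True if end-to-end latency fits the mission budget for a modality.

    link_rtt_s models the reachback hop; on-platform inference charges the
    link only for dissemination, not for the inference leg.
    """
    total = link_rtt_s + inference_s + dissemination_s
    return total <= LATENCY_BUDGET_S[modality]

# A 1.2 s MILSATCOM round trip plus 0.5 s inference blows a sub-1 s SIGINT
# budget, but the same numbers fit comfortably inside the 5 s FMV tier.
assert not fits_budget("sigint_emitter", link_rtt_s=1.2, inference_s=0.5, dissemination_s=0.1)
assert fits_budget("fmv_detection", link_rtt_s=1.2, inference_s=0.5, dissemination_s=0.1)
```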
Specify deployment environment and platform
Required
Confirm where the inference workload physically runs.
Select all that apply
On-platform compute (UAS / aircraft / ship / ground sensor)
Tactical ground station (SCIF-ready, deployed)
Fixed CONUS / OCONUS SCIF
IL5 DoD cloud region
IL6 classified cloud region (SIPR-accredited)
JWICS / ICAM-accredited enclave (TS/SCI)
Coalition / NOFORN-separated enclave
DDIL / disconnected tactical edge
Trinidy: ISR inference on-platform (aircraft, UAS, ship, ground node) is not optional for most tactical modalities — the bandwidth to push raw sensor data to reachback does not exist on contested links. Trinidy runs the full detection / tracking / embedding stack on SWaP-constrained compute at the platform and on classified enclaves at the fixed site, with the same model artifact.
Define mission classification ceiling for training and inference
Why This Matters
ICD 710 governs the classification and control of national intelligence, and an ISR analytic almost always inherits the classification of its training data — which for imagery derived from NTM or certain overhead collection is TS/SCI with compartment. Training in a cloud region accredited below the data's level forces a choice between data dilution (training on a sanitized subset) and unlawful spillage. Source-and-methods protection under E.O. 13526 is not something that can be retrofitted after a model has been exported to a lower enclave.
Note prompts
+ What is the highest classification present in our training corpus, and is the training environment accredited to that level?
+ Have we had an original classification authority review the trained model for derivative classification?
+ Does the analytic's output carry the source classification, and is our dissemination path accredited for it?
Required
Select the highest classification the analytic must operate at.
Single choice
UNCLASSIFIED / CUI
SECRET (SIPRNet)
TOP SECRET
TS/SCI (JWICS)
TS/SCI with SAP compartments
Mixed / spanning classification levels (CDS required)
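One way to make the ceiling enforceable in a pipeline is an ordered-lattice check run before any training job: the environment's accreditation must dominate the highest marking in the corpus. A sketch only; the level strings and helper names are illustrative, not a real marking scheme.

```python
# Ordered classification levels; a higher index dominates a lower one.
LEVELS = ["CUI", "SECRET", "TOP SECRET", "TS/SCI", "TS/SCI-SAP"]

def corpus_ceiling(dataset_markings: list[str]) -> str:
    """Highest classification present across all training datasets."""
    return max(dataset_markings, key=LEVELS.index)

def training_env_ok(env_accreditation: str, dataset_markings: list[str]) -> bool:
    """True only if the environment is accredited at or above the corpus ceiling."""
    return LEVELS.index(env_accreditation) >= LEVELS.index(corpus_ceiling(dataset_markings))

assert corpus_ceiling(["CUI", "TS/SCI", "SECRET"]) == "TS/SCI"
assert not training_env_ok("SECRET", ["CUI", "TS/SCI"])  # would be spillage
assert training_env_ok("TS/SCI", ["CUI", "TS/SCI"])
```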
Determine lethality / DoDD 3000.09 applicability
Why This Matters
DoD Directive 3000.09 was reissued in January 2023 and now explicitly covers AI-enabled functions in autonomous and semi-autonomous weapons, with a senior-review requirement before fielding. An ISR analytic that simply detects and cues an analyst is outside 3000.09; the same detector wired into an autonomous targeting pipeline is inside 3000.09 and requires USD(P), USD(R&E), and CJCS review. Programs that discover 3000.09 applicability late have had fieldings delayed by a year or more.
Note prompts
+ Have we mapped the full decision chain downstream of our analytic to confirm human-in-the-loop boundaries?
+ If 3000.09 applies, is the senior review package scoped and funded in the program plan?
+ Who in the program office owns the DoDD 3000.09 determination — and is that determination documented?
Required
Classify the analytic against the autonomy-in-weapons review threshold.
Single choice
Analyst-decision support only (detection / cueing / annotation)
Targeting nomination feed (human-in-the-loop targeting cycle)
Autonomous or semi-autonomous function in a weapon system
Defensive autonomous (C-RAM / CIWS-class — pre-approved envelope)
Not a lethality-adjacent analytic
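The determination hinges on the downstream use of the output rather than on the detector itself, so the decision-chain mapping asked for in the prompts can be triaged with a crude rule. An illustrative sketch only; the category strings paraphrase the options above, and a real determination belongs to the program office and counsel, not code.

```python
def dodd_3000_09_review_required(downstream_use: str) -> bool:
    """Illustrative triage of whether senior review under DoDD 3000.09 applies.

    Cueing an analyst is outside the directive; the same model wired into an
    autonomous or semi-autonomous weapon function is inside it.
    """
    inside = {
        "autonomous_weapon_function",
        "semi_autonomous_weapon_function",
    }
    outside = {
        "analyst_decision_support",   # detection / cueing / annotation
        "targeting_nomination_hitl",  # human-in-the-loop targeting cycle
        "defensive_preapproved",      # C-RAM / CIWS-class envelope
        "not_lethality_adjacent",
    }
    if downstream_use in inside:
        return True
    if downstream_use in outside:
        return False
    raise ValueError(f"unmapped downstream use: {downstream_use}")

assert dodd_3000_09_review_required("autonomous_weapon_function")
assert not dodd_3000_09_review_required("analyst_decision_support")
```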
Confirm DoD AI Ethical Principles compliance plan
Why This Matters
The DoD adopted five AI Ethical Principles in February 2020 — Responsible, Equitable, Traceable, Reliable, Governable — and DoDI 5000.82 and the DoD Responsible AI Strategy and Implementation Pathway (June 2022) operationalize them into acquisition. CDAO reviews now expect each principle to be addressable with program artifacts (bias testing, model cards, tradecraft traceability, red-team results, human override design). An ISR analytic that cannot point to concrete evidence for each principle is not ready for fielding review, regardless of measured accuracy.
Note prompts
+ Do we have a model card or equivalent artifact covering each of the five principles?
+ Which principle is our weakest and what artifact closes the gap?
+ Is Responsible AI represented on the program IPT, or treated as an afterthought?
Required
Map the analytic against the five DoD AI Ethical Principles (Responsible, Equitable, Traceable, Reliable, Governable).
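The principle-by-principle evidence mapping that CDAO reviews expect can be kept as a simple table with a gap check run before fielding review. The artifact filenames below are hypothetical placeholders, not real program documents.

```python
# Map each DoD AI Ethical Principle to the artifacts that evidence it.
# All filenames are illustrative placeholders.
PRINCIPLE_EVIDENCE = {
    "Responsible": ["rai_ipt_charter.pdf"],
    "Equitable":   ["bias_test_report.pdf"],
    "Traceable":   ["model_card.md", "data_lineage.json"],
    "Reliable":    ["tne_results.pdf", "red_team_findings.pdf"],
    "Governable":  ["human_override_design.md"],
}

def coverage_gaps(evidence: dict[str, list[str]]) -> list[str]:
    """Principles with no supporting artifact: the fielding-review blockers."""
    return [p for p, artifacts in evidence.items() if not artifacts]

assert coverage_gaps(PRINCIPLE_EVIDENCE) == []
assert coverage_gaps({**PRINCIPLE_EVIDENCE, "Equitable": []}) == ["Equitable"]
```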
Define mission tempo and DDIL resilience posture
Required
Set the analytic's behavior when links degrade or disappear.
Select all that apply
Continuous high-bandwidth uplink assumed
Intermittent SATCOM (hours of disconnection tolerated)
LPI/LPD-only intervals (low bandwidth)
Fully disconnected operations (days)
GPS-denied environment
EW / jamming assumed
Local autonomy with deferred sync required
Trinidy: Contested-link operations are the assumption, not the exception, in any plausible peer conflict. Trinidy's on-platform inference persists through SATCOM denial, GPS jamming, and LPI/LPD-only windows — analytics continue to produce tracks and alerts that sync when the link returns.
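The "local autonomy with deferred sync" option amounts to store-and-forward: keep producing products while disconnected, queue them durably, and drain the queue when the link returns. A minimal in-memory Python sketch; the class and method names are invented for illustration, and a real implementation would persist the queue to survive power loss.

```python
from collections import deque

class DeferredSync:
    """Store-and-forward buffer for analytic products during link denial."""

    def __init__(self) -> None:
        self._pending: deque = deque()
        self.link_up = False

    def emit(self, product: dict, send) -> None:
        """Send immediately if the link is up; otherwise queue for later."""
        if self.link_up:
            send(product)
        else:
            self._pending.append(product)

    def on_link_restored(self, send) -> int:
        """Drain queued products oldest-first; returns how many were synced."""
        self.link_up = True
        count = 0
        while self._pending:
            send(self._pending.popleft())
            count += 1
        return count

sent = []
sync = DeferredSync()
sync.emit({"track": 1}, sent.append)  # link down: queued, not sent
sync.emit({"track": 2}, sent.append)
assert sent == []
assert sync.on_link_restored(sent.append) == 2
assert [p["track"] for p in sent] == [1, 2]  # oldest-first order preserved
```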
Authorities & Accreditation Perimeter
Map collection authorities (Title 10 / Title 50 / E.O. 12333 / FISA)
Why This Matters
E.O. 12333 governs IC collection activities and constrains how intelligence collected under Title 50 authorities may be combined with Title 10 military operations data; FISA places additional requirements on any data touching U.S. persons. An ISR ML pipeline that silently combines Title 50 SIGINT with Title 10 ISR without an authority review can invalidate downstream products and, in extremis, produce unlawful collection or retention. Every dataset ingested into training should be traceable to a specific authority and an approved retention rule.
Note prompts
+ Do we have an authority-of-collection register for every dataset in training and fine-tuning?
+ Has IC / DoD general counsel reviewed the mixing of Title 10 and Title 50 data in the training corpus?
+ What is our U.S. persons data handling procedure, and is it consistent with AG-approved procedures under E.O. 12333 §2.3?
Required
Confirm the statutory and executive authority under which the training and operational data were collected.
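The authority-of-collection register called for in the prompts is, at minimum, a table keyed by dataset that records the collecting authority, the approved retention rule, and counsel review status, consulted before any dataset enters the training corpus. A sketch with hypothetical dataset and rule names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorityRecord:
    dataset: str
    authority: str        # e.g. "Title 10", "Title 50", "E.O. 12333", "FISA"
    retention_rule: str   # identifier of an approved retention/handling rule
    counsel_reviewed: bool

# Hypothetical register entries for illustration only.
REGISTER = {
    "fmv_theater_2024": AuthorityRecord("fmv_theater_2024", "Title 10", "RR-10-A", True),
    "sigint_corpus_x": AuthorityRecord("sigint_corpus_x", "Title 50", "RR-50-C", False),
}

def admissible_for_training(dataset: str) -> bool:
    """A dataset enters the corpus only if registered and counsel-reviewed."""
    rec = REGISTER.get(dataset)
    return rec is not None and rec.counsel_reviewed

assert admissible_for_training("fmv_theater_2024")
assert not admissible_for_training("sigint_corpus_x")  # pending counsel review
assert not admissible_for_training("unregistered_set")
```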
Confirm ATO / cATO path and RMF baseline
Why This Matters
The DoD Cloud SRG defines Impact Levels 2, 4, 5, and 6; IL5 covers controlled unclassified information and mission-critical non-classified workloads, and IL6 covers classified up to SECRET. A traditional ATO on an IL6 system can take 12–24 months, which is why the CIO's February 2022 cATO memo is now the preferred path for ML systems — but cATO requires mature DevSecOps, continuous monitoring, and a clearly scoped authorization boundary. Programs that assume they can cATO without a matured control baseline end up on the slow traditional path anyway.
Note prompts
+ Do we already have a cATO-capable DevSecOps pipeline, or is cATO aspirational?
+ Can we inherit from an existing accredited host (Maven Smart System, Palantir Gotham on IL6)?
+ Who is the Authorizing Official and have they agreed to the RMF baseline in writing?
Required
Select the Authority to Operate path the analytic will be accredited against.
Single choice
Traditional ATO under NIST SP 800-53 / DoD RMF
Continuous ATO (cATO) under DoD cATO guidance (Feb 2022)
Inherited ATO from hosting platform (e.g. Maven Smart System)
Joint Test & Evaluation path (pre-ATO, test only)
Interim ATO / IATT
Not yet determined