Phase 1 of 6
Scoping, Mission & SWaP Constraints
Define the mission set, platform class, latency envelope, size/weight/power budget, and connectivity posture that will govern every downstream architectural decision.
Mission Set & Platform Class
Identify platform classes in scope for edge inference
Why This Matters
Platform class is the single largest determinant of allowable model footprint, thermal envelope, and update cadence: an Orin NX 16GB wearable and an AGX Orin vehicle compute differ several-fold in inference capacity. Replicator Initiative Tranche 1 and 2 selections span all-domain platforms, and a model architected for one class rarely ports cleanly to another without quantization and distillation work. Platform mix also determines whether you need a single unified model or a family of size-tiered variants sharing a common training pipeline.
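Where the platform mix forces a tiered family, the tiers are easiest to keep honest as data in a shared training configuration. A minimal sketch; the tier names, parameter counts, precisions, and platform mappings below are illustrative assumptions, not program values:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelTier:
    name: str
    params_m: int               # parameter count, in millions (assumed)
    precision: str              # deployed precision after quantization
    platforms: tuple[str, ...]  # platform classes this tier serves

# Illustrative size-tiered family sharing one training pipeline;
# every value below is an assumption, not a recommendation.
MODEL_FAMILY = (
    ModelTier("tiny",   8,   "int8", ("wearable", "group1_uas")),
    ModelTier("small",  30,  "int8", ("group2_uas", "dismounted")),
    ModelTier("medium", 120, "fp16", ("ground_vehicle", "rotary_wing")),
)

def tier_for(platform: str) -> ModelTier:
    """Return the declared tier for a platform class, or fail loudly."""
    for tier in MODEL_FAMILY:
        if platform in tier.platforms:
            return tier
    raise KeyError(f"no model tier declared for platform {platform!r}")
```

Making the mapping explicit is what surfaces the hardest-SWaP-ceiling question early: a platform class with no tier that fits under its ceiling is visible at configuration time, not at integration.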
Note prompts
- Have we inventoried every platform class the program of record expects to support in the next 24 months, not just the lead platform?
- Which platform class sets the hardest SWaP ceiling, and does our base model architecture fit under it?
- Do we need a family of tiered models (tiny / small / medium) or a single model with platform-specific quantization?
Required
Select the tactical platforms the model must run on.
Select all that apply
Dismounted warfighter / wearable compute
Ground vehicle (manned / optionally manned)
Small UAS (Group 1 / 2)
Group 3-5 UAS / loitering munition
Rotary-wing aircraft
Fixed-wing / FVL / fighter pod
Maritime surface vessel
Subsurface / UUV
Space / on-orbit payload
Fixed tactical operations center (TOC)
Define primary mission function
Why This Matters
Mission function determines which DoD policy regime applies — a perception model for navigation sits under LOAC and DoDD 3000.09 differently than a targeting-support model, and a sensor-to-shooter function triggers the autonomy-in-weapons review path under the January 2023 update to DoDD 3000.09. JP 3-0 Joint Operations and joint targeting doctrine require a clear human-machine role definition long before the system is fielded. Mixing mission functions in a single model almost always forces the program to the most restrictive review path, so scoping is also risk management.
Note prompts
- Does any mission function we support cross into the "autonomy in weapon systems" envelope defined by DoDD 3000.09?
- Which LOAC principles (distinction, proportionality, precaution) does each mission function implicate?
- Have we separated targeting-adjacent functions from navigation functions in our architecture, or are they entangled?
Required
Select the mission function the edge model primarily supports.
Select all that apply
ISR — target detection / classification
Perception for autonomous navigation
Electronic warfare / SIGINT classification
Acoustic / seismic signature classification
Tactical decision support / COA recommendation
Tactical LLM summarization of sensor feeds
Active protection system cueing
Counter-UAS detection and tracking
Sensor-to-shooter targeting support
Set end-to-end inference latency budget
Why This Matters
Latent AI's LEIP platform under Project Linchpin measured 3× faster inference on Jetson AGX Orin versus an unoptimized baseline, and the Army reported a 70% improvement in effective decision speed when sensor data is processed at the edge rather than in the cloud. A cloud round-trip in a degraded SATCOM environment is not just slow; it fails entirely, which makes the cloud latency number a survivability metric, not a performance metric. The latency budget has to be allocated across feature extraction, inference, fusion, and post-processing before any single stage is over-engineered.
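That allocation can be written down and checked before anything is built. A minimal sketch; the stage names and millisecond figures are assumptions chosen to fit a hypothetical sub-100ms tier, not measured values:

```python
# Hypothetical P99 latency allocation for a sub-100ms pipeline.
# All stage budgets are illustrative assumptions.
P99_BUDGET_MS = 100.0

STAGE_BUDGETS_MS = {
    "capture_and_preprocess": 15.0,
    "feature_extraction":     20.0,
    "model_inference":        40.0,
    "multi_sensor_fusion":    15.0,
    "post_process":            5.0,
}

allocated = sum(STAGE_BUDGETS_MS.values())
margin = P99_BUDGET_MS - allocated   # headroom for jitter and growth

assert allocated <= P99_BUDGET_MS, f"over budget by {allocated - P99_BUDGET_MS:.1f} ms"
print(f"allocated {allocated:.0f} of {P99_BUDGET_MS:.0f} ms; margin {margin:.0f} ms")
```

Per-stage P99s do not sum to a pipeline P99 in general, so measurement on the actual Orin-class target still has to confirm the end-to-end number.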
Note prompts
- What is our current measured P99 on target hardware, and where is the hot spot: preprocessing, inference, or fusion?
- Have we validated latency on the Orin / Orin NX / Thor platform actually specified by the program office, not a surrogate?
- What is our graceful-degradation behavior when a sub-model misses its latency slice?
Required
Select the P99 latency budget the model must hold on target hardware.
Single choice
Sub-20ms (UAS / munition perception)
Sub-50ms (ground vehicle APS / perception fusion)
Sub-100ms (dismounted threat classification)
Sub-250ms (tactical decision support)
Sub-1s (tactical LLM summarization)
Tiered by platform / function
Trinidy: Human reaction time is 200–250ms; tactical AI must be meaningfully faster to be useful, and a cloud round-trip alone consumes that entire envelope. NEXUS OS runs the full perception-to-recommendation pipeline on-platform with sub-100ms P99 on Jetson AGX Orin, and sub-20ms on optimized UAS payloads.
Define Size, Weight, Power (SWaP) envelope
Why This Matters
SWaP is the non-negotiable ceiling that determines whether a model class is feasible at all: a dismounted warfighter carrying an Orin NX at roughly 1kg inclusive of battery cannot absorb an AGX-class workload drawing 60W or more, regardless of software optimization. The 15–75W envelope typical of rugged embedded GPU deployments (Jetson Thor/Orin family) forces quantization, pruning, and distillation into the model development plan from day one. Programs that defer SWaP analysis until integration almost always discover a 2–3× over-budget draw that forces a model redesign at the worst possible time.
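The sustained-versus-peak distinction is simple arithmetic once the duty cycle is known. A sketch; every wattage and the duty cycle below are assumptions, not platform data:

```python
# Sustained draw at a mission duty cycle vs. a peak-only estimate.
# All figures are illustrative assumptions.
PEAK_INFERENCE_W = 25.0   # board power while the model is running
IDLE_W           = 6.0    # board power between inference bursts
DUTY_CYCLE       = 0.60   # fraction of mission time spent inferring
SWAP_CEILING_W   = 25.0   # e.g. the 10-25W dismounted tier
GROWTH_MARGIN    = 0.20   # 20% reserve for fusion, comms, payload growth

sustained_w = DUTY_CYCLE * PEAK_INFERENCE_W + (1 - DUTY_CYCLE) * IDLE_W
usable_w = SWAP_CEILING_W * (1 - GROWTH_MARGIN)

print(f"sustained {sustained_w:.1f} W vs usable ceiling {usable_w:.1f} W")
if sustained_w > usable_w:
    print("over budget: quantization, pruning, or duty-cycle relief needed")
```

On Jetson-class hardware the sustained figure should come from measurement over a representative mission profile (e.g. logged tegrastats output), not from the module's nominal power mode.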
Note prompts
- Have we measured sustained (not peak) power draw at the actual inference duty cycle the mission demands?
- Is our thermal budget compatible with the platform's passive cooling, or do we assume active cooling that won't be fielded?
- Do we have a SWaP margin of at least 20% for fusion, comms, and payload growth over the program life?
Required
Select the SWaP ceiling for the target platform.
Single choice
< 10W / < 500g (small UAS, wearable supplemental)
10–25W / < 1kg (dismounted wearable compute)
25–75W / 1–3kg (vehicle / larger UAS)
75–250W (vehicle mounted, rotary-wing)
> 250W (TOC / fixed node)
Tiered across a family of platforms
Connectivity posture and denied-environment assumption
Why This Matters
The CJADC2 architecture explicitly assumes disconnected, intermittent, limited (DIL) operating conditions, and any edge AI that depends on connectivity for inference becomes a single point of failure in exactly the conflict it was fielded for. Near-peer EW capabilities deny SATCOM and terrestrial links with high confidence — Replicator Initiative platforms are being selected on the basis of autonomous operation under these conditions. Assuming a connectivity floor that won't exist in combat is the fastest way to field a system that fails its first operational test.
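One cheap way to make the disconnected assumption testable is to fail any socket use on the scoring path during a test. A minimal sketch; `run_inference` and `sample_frame` are hypothetical placeholders for the program's real scoring entry point and test input:

```python
import socket
from contextlib import contextmanager

@contextmanager
def deny_network():
    """Fail loudly if anything on the scoring path opens a socket."""
    real_socket = socket.socket
    def blocked(*args, **kwargs):
        raise RuntimeError("network call attempted on the scoring path")
    socket.socket = blocked
    try:
        yield
    finally:
        socket.socket = real_socket

def test_scoring_path_is_offline():
    # `run_inference` / `sample_frame`: hypothetical stand-ins.
    with deny_network():
        result = run_inference(sample_frame)   # must not touch the network
    assert result is not None
```

Patching `socket.socket` only catches Python-level networking; native libraries that open sockets directly need an OS-level check, such as running the test in a network namespace with no interfaces.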
Note prompts
- Is our model scoring path literally free of any network call, or does a "local" path still reach back for a feature or a lookup?
- What is our failure mode when comms drop: does the model continue, degrade, or stop?
- How do we receive model updates when comms return, and is that channel cryptographically verified?
Required
Select the expected connectivity profile across the mission envelope.
Single choice
Fully disconnected — no comms assumed
Intermittent — burst SATCOM / LPI-LPD only
Mesh / tactical radio — neighbor peers only
Degraded cloud — DIL (disconnected, intermittent, limited)
Reliable connectivity assumed (garrison / CONUS)
Trinidy: SATCOM is assumed degraded or jammed in near-peer conflict. NEXUS OS is architected as zero-network-dependency by default: every inference, fusion, and decision-support step runs locally, with connectivity treated as an optional bonus for telemetry and model updates, never a critical path.
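For the update channel itself, a pinned-key signature check is the minimum bar. A sketch assuming the `cryptography` package and an Ed25519 program signing key; key distribution and anti-rollback protection are out of scope here:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_model_update(update: bytes, signature: bytes,
                        pinned_key: Ed25519PublicKey) -> bool:
    """Accept an update only if it verifies against the pinned program key."""
    try:
        pinned_key.verify(signature, update)
        return True
    except InvalidSignature:
        return False
```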
Data sovereignty and air-gap requirement
Why This Matters
DoD Cloud Computing SRG Impact Levels (IL2 / IL4 / IL5 / IL6) constrain where workloads may physically run and who may touch them, and a mismatch between the model's training environment and its deployment environment is an authorization blocker that surfaces at ATO time, not at architecture time. ITAR (22 CFR 120-130) and EAR (15 CFR 730-774) add export-control obligations that can invalidate otherwise-clean cloud deployments if foreign-persons access is possible. Sovereignty is a design input, not a documentation exercise.
Note prompts
- What is the highest classification the model will touch in training, and does that match the deployment enclave?
- Are any components (base models, datasets, libraries) subject to ITAR/EAR restrictions that constrain who can touch them?
- Has our ATO path been mapped end-to-end, including the specific IL level our cloud training environment is accredited for?
Required
Select the data sovereignty posture required by the mission.
Single choice
Air-gapped — no network exfil permitted
Sovereign cloud only (AWS GovCloud / Azure Government)
IL5 classified cloud (DoD CC-SRG Impact Level 5)
IL6 classified cloud (SECRET workloads)
C2S / SC2S (IC high-side, TS/SCI workloads)
Commercial cloud with FedRAMP High
Trinidy: Enemy capture of a device must not expose training data, weights, or prior inference history. NEXUS OS keeps all weights, feature caches, and inference logs inside the platform's encrypted enclave: no cross-border data flow, no cloud residency, and full air-gap compatibility.
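As an illustration of the weights-at-rest half of that posture, a minimal sketch using the `cryptography` package; Fernet and the in-memory key are stand-ins for a platform hardware key store, which is where a fielded key would actually live:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # stand-in for an enclave-held key
vault = Fernet(key)

weights_plain = b"\x00" * 1024         # stand-in for serialized weights
weights_at_rest = vault.encrypt(weights_plain)   # what touches storage

# Load path: decrypt into memory only; plaintext never hits disk.
assert vault.decrypt(weights_at_rest) == weights_plain
```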
Classification level of training data and weights
Why This Matters
Training data classification propagates to the weights — a model trained on SECRET imagery inherits SECRET classification, and the weights themselves become classified material that must be handled accordingly. DFARS 252.204-7012 requires covered contractors to protect CUI at a minimum of NIST SP 800-171 controls, and CMMC 2.0 Level 2 aligns to that control set with third-party assessment for prioritized programs. Programs that discover weight classification after training has already happened in a mismatched enclave have restarted from scratch — this decision must be explicit, not inferred.
Note prompts
- At what classification level do our weights exist once training completes, and where are they stored?
- Are we prepared to destroy weights at that classification level if a device is compromised?
- Does our training environment's accreditation match the classification our weights will carry?
Required
Select the expected classification of the training data and model weights.
Single choice
CUI (Controlled Unclassified Information)
Confidential
Secret
Top Secret
Top Secret / SCI
Mixed (tiered training pipeline)
Map the JCIDS / acquisition pathway
Why This Matters
The acquisition pathway drives the documentation burden, the test and evaluation regime, and the sustainment model — a Software Acquisition Pathway program has fundamentally different artifact expectations than a traditional JCIDS program of record. Replicator Initiative and DIU OTAs compress timelines dramatically but require production transition planning that is often underestimated. Choosing the wrong pathway can add 18+ months of rework.
Note prompts
- Is the acquisition pathway matched to the maturity of the capability, or are we over/under-buying on rigor?
- Who is our customer's contracting officer, and have we validated the pathway choice with them?
- What is the transition plan from prototype to program of record if this starts under OTA / MTA?
Recommended
Identify the acquisition and requirements pathway for this capability.
Single choice
Traditional JCIDS (CJCSI 3170.01) program of record
Middle-Tier Acquisition (MTA) / rapid prototyping
Software Acquisition Pathway (SWP)
OTA (Other Transaction Authority) prototype
Replicator Initiative / DIU pathway
SBIR / STTR Phase II–III
CSO / urgent operational need
Deployment topology for the inference plane
Required
Select the physical deployment target for the model.
Single choice
On-platform (Jetson Thor / AGX Orin / Orin NX)
On-platform FPGA / ASIC accelerator
Dismounted wearable compute
Tactical edge node (brigade / battalion TOC)
Sovereign cloud (GovCloud / Azure Government)
Hybrid: on-platform inference + sovereign-cloud training
Define human-machine teaming posture
Why This Matters
The DoD AI Ethics Principles (adopted February 2020) require AI systems to be governable — humans must be able to disengage or deactivate, which is a design constraint, not a policy wrapper. DoDD 3000.09 (January 2023 update) governs autonomy in weapon systems and sets a senior-review requirement for systems that select and engage targets without human input. The Political Declaration on Responsible Military Use of AI and Autonomy (February 2023) reinforces the human-oversight expectation internationally. Picking the wrong posture triggers a fundamentally different review path.
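Governability is easiest to demonstrate when disengagement is structural in the control loop rather than bolted on. A minimal sketch; `operator_consent`, `sensor`, and `actuate` are hypothetical placeholders, not a real interface:

```python
import threading

disengage = threading.Event()   # set by the operator's kill switch

def control_loop(model, sensor, actuate, operator_consent):
    """Human-in-the-loop: every action needs explicit operator approval."""
    while not disengage.is_set():
        frame = sensor.read()
        recommendation = model.infer(frame)
        # Human-on-the-loop would instead act unless vetoed within a window.
        if operator_consent(recommendation):
            actuate(recommendation)
    actuate(None)   # disengaged: command a defined safe state, not a freeze
```

The property worth testing under degraded conditions is that setting the flag always reaches the safe state, regardless of where the loop is when it fires.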
Note prompts
- Is our human-oversight posture explicit in the architecture, or assumed at the CONOPS level only?
- Do we have a documented disengagement / kill-switch path that has been tested under degraded conditions?
- Does any capability we plan to field cross into the DoDD 3000.09 senior-review envelope?
Required
Select the human oversight model for the deployed AI.
Single choice
Human-in-the-loop (operator approves every action)
Human-on-the-loop (operator supervises, can intervene)
Human-out-of-the-loop / fully autonomous (requires DoDD 3000.09 review)
Tiered by function (advisory for some, autonomous for others)