Phase 1 of 6
Scoping & Joint Planning Process Constraints
Define the planning process (JOPP or MDMP), echelon, classification envelope, and commander decision gates that govern every downstream architecture choice.
Planning Process, Echelon & Mission Set
Identify planning processes in scope
Why This Matters
JOPP, MDMP, and MCPP share a family resemblance but differ materially in phase gates, staff integration, and output format — a planner tuned only for MDMP will not produce doctrinally valid JOPP products, and vice versa. JP 5-0 (Joint Planning) and ADP 5-0 (The Operations Process) are the authoritative references and must anchor any planner tuning. Trying to one-size an AI planner across processes is the fastest route to doctrinally non-compliant output that planners refuse to use.
Note prompts
+ Which doctrinal process is the primary consumer — and are we willing to produce a second tuned variant for a second process?
+ Have we mapped our planner output templates to the canonical JP 5-0 / ADP 5-0 / MCWP 5-1 formats?
+ Who on the staff owns doctrinal fidelity review before AI-generated products are released?
Confirm which doctrinal planning process the AI planner must support end-to-end.
Select all that apply
Define echelon of command supported
Why This Matters
A brigade-level MDMP cycle runs 12–24 hours under today's manual process; a JTF JOPP cycle runs days to weeks. These are different planning problems — the brigade planner mostly compresses COA development and wargaming, while the JTF planner must also handle operational design, lines of effort, and assessment frameworks. AUSA (2024) projects AI compression of MDMP "by an order of magnitude," and Harvard Belfer Center (2024) projects JOPP compression from days to hours — but only at the echelon the planner was tuned for.
Note prompts
+ What is our current manual cycle time at the target echelon, and what is our compression goal?
+ Does our planner need to produce subordinate unit tasking, or only the parent-echelon plan?
+ Are we building for one echelon or establishing a reusable pattern across echelons?
Echelon drives planning depth, staff integration, and the volume of subordinate tasks to be generated.
Single choice
Mission sets the planner must cover
Mission type drives the applicable doctrine, threat templates, and ROE profiles.
Select all that apply
Define MDMP / JOPP phase coverage
Why This Matters
The ARL COA-GPT research (Feb 2024, NATO ICMCIS) demonstrated second-scale COA generation against a StarCraft II evaluation environment, but COA generation is only one of six MDMP phases — the highest-value programs cover mission analysis, COA development, wargaming, and orders production together. Piecewise deployment leaves the human planner stitching products across phases, which erases much of the compression benefit. Scoping phase coverage up front is the single biggest lever on realized time savings.
Note prompts
+ Which phase has the largest time-sink today and is therefore the highest ROI for AI assist?
+ Do we have a doctrinally valid product template for every phase the planner will cover?
+ What is the planner's handoff format to human planners at each phase gate?
Select which phases of the planning process the AI planner supports.
Select all that apply
Classification envelope of the planner
Why This Matters
Classification of operation plans is not an afterthought — OPLAN and OPORD content is typically SECRET or higher, and force disposition, commander's intent, and decision points are among the most sensitive categories of DoD information. A planner that ingests OPLAN content inherits the classification of its input, which rules out every commercial cloud-hosted LLM endpoint that is not IL5/IL6-accredited. The Microsoft Azure Government Copilot for Defense announcement (2024) exists precisely to address this gap.
Note prompts
+ At what classification level does the planner ingest OPLAN / OB data, and does that match our inference environment accreditation?
+ Is the planner cleared to sit on SIPR, or is a cross-domain guard required?
+ Have we documented the classification of every data source the planner touches?
Classification level caps deployment topology, model sourcing, and data flows.
Single choice
Trinidy — Planning data at SECRET and above cannot transit commercial infrastructure. Trinidy runs LLM inference, RAG over doctrine, and wargaming orchestration entirely within the classified enclave — SIPR, JWICS, or coalition equivalent — with no data egress at any stage.
Commander decision gates and human-in-the-loop policy
Why This Matters
DoD Directive 3000.09 (updated January 2023) and the Political Declaration on Responsible Military Use of AI and Autonomy (February 2023) together establish that consequential military AI must have explicit human judgment at decision points — AI planners are squarely inside this envelope even when not targeting lethal effects. A planner that auto-advances products between phases without a documented human gate will not pass AI ethics review, and retrofitting gates after deployment costs more than designing them in from the start.
Note prompts
+ Have we mapped each planner output to an explicit human approval gate, with a named approver role?
+ What is our audit mechanism when a gate is bypassed under time pressure?
+ Does our TTP describe what the human is actually validating at each gate, or is review performative?
Specify which AI outputs require commander or staff approval before downstream use.
Select all that apply
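The gate policy above can be sketched as a minimal approval workflow with an audit trail for bypasses. This is an illustrative sketch, not an implementation of any fielded system: the gate names, approver roles, and the `advance` / `GateLog` structures are all assumptions standing in for whatever the unit's TTP actually specifies.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical gate-to-approver mapping; real gate names and roles
# come from the unit's TTP, not from this sketch.
GATES = {
    "mission_analysis": "G3 Plans Chief",
    "coa_development": "Chief of Staff",
    "coa_approval": "Commander",
    "orders_production": "Commander",
}

@dataclass
class GateDecision:
    phase: str
    approver_role: str
    approved: bool
    bypassed: bool
    timestamp: str

@dataclass
class GateLog:
    decisions: list = field(default_factory=list)

    def record(self, phase: str, approved: bool, bypassed: bool = False) -> GateDecision:
        d = GateDecision(phase, GATES[phase], approved, bypassed,
                         datetime.now(timezone.utc).isoformat())
        self.decisions.append(d)
        return d

    def bypasses(self) -> list:
        # Audit view: every gate skipped under time pressure.
        return [d for d in self.decisions if d.bypassed]

def advance(log: GateLog, phase: str, human_approved: bool,
            time_critical: bool = False) -> bool:
    """Planner output advances only on explicit human approval,
    or via a logged bypass when the situation is time-critical."""
    if human_approved:
        log.record(phase, approved=True)
        return True
    if time_critical:
        # Bypass is permitted but never silent: it lands in the audit log.
        log.record(phase, approved=False, bypassed=True)
        return True
    log.record(phase, approved=False)
    return False
```

The design point is the second note prompt above: a bypass is an auditable event, not an unlogged shortcut, so post-operation review can see every gate skipped under time pressure.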
Planning cycle time compression target
Why This Matters
AUSA analysis (2024) projects AI/ML will speed MDMP "by an order of magnitude," enabling mid-execution replanning that manual MDMP cannot sustain. Harvard Belfer Center (2024) independently assesses agentic AI could compress JOPP from days to hours at the operational level. Setting the compression target explicitly — and tying it to a named operational tempo constraint like the adversary's decision cycle — forces the program to measure what it is worth rather than what is easy to measure.
Note prompts
+ What adversary decision tempo are we trying to outpace, and does our compression target close that gap?
+ Who owns measuring cycle time pre- and post-deployment, and have we baselined manually?
+ Does the target account for mandatory human gates, or is it assuming fully automated flow?
Define the target cycle time reduction that justifies the program.
Single choice
Deployment topology
Select the physical and network topology for the planner.
Single choice
Trinidy — For planning at SECRET and above, cloud-hosted inference is incompatible with data residency. Trinidy runs on-premises inside the classified enclave — garrison, forward operating, or mobile command post — with the same LLM, RAG, and wargaming fabric across all three.
Integration with existing planning and C2 systems
Identify the C2 and planning systems the AI planner must read from and write to.
Select all that apply