Phase 1 of 6
Scoping & Acquisition Authority
Fix the acquisition pathway, classification ceiling, program-office constraints, and forecast horizon before any data pipeline or model is chosen. The AAF pathway you are supporting determines both the cadence of decisions and which artifacts the AI must be able to produce.
Acquisition Pathway & Authority
Identify the Adaptive Acquisition Framework pathway in scope
Why This Matters
DoDI 5000.02 was reissued in 2020 (and updated in 2022) to formally operationalize the Adaptive Acquisition Framework, and each pathway has materially different evidence windows — an MTA decision may need a threat assessment in 60 days, while an MDAP Milestone B is supported by a multi-year JCIDS record. Building an acquisition intelligence capability without first naming the pathway almost always produces an analytic product too slow for MTA and too shallow for MDAP.
Note prompts
+ Which pathway is the primary user of this capability, and which are secondary?
+ Have we mapped the required decision artifacts (CDD, APB, ADM, T&E strategy) back to the model outputs?
+ Is the program office sponsoring this AI capability, or is it a staff-level initiative?
DoDI 5000.02 (2022) defines six pathways — each has its own cadence, milestones, and evidence expectations. The AI must fit the pathway, not the reverse.
Select all that apply
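The pathway-to-artifact mapping asked for in the prompts above can be sketched as a simple lookup. Pathway names follow DoDI 5000.02, but the cadences and artifact lists below are illustrative placeholders, not authoritative requirements.

```python
# Illustrative sketch: map each AAF pathway to a decision cadence and the
# artifacts the model must feed. All values here are assumed examples.
AAF_PATHWAYS = {
    "urgent_capability":        {"cadence_days": 60,  "artifacts": ["threat_snapshot"]},
    "middle_tier":              {"cadence_days": 90,  "artifacts": ["threat_snapshot", "tech_maturity_note"]},
    "major_capability":         {"cadence_days": 365, "artifacts": ["CDD_threat_annex", "APB_input", "ADM_package"]},
    "software":                 {"cadence_days": 180, "artifacts": ["tech_maturity_note"]},
    "defense_business_systems": {"cadence_days": 180, "artifacts": ["ADM_package"]},
    "acquisition_of_services":  {"cadence_days": 365, "artifacts": ["ADM_package"]},
}

def required_artifacts(primary, secondary=()):
    """Union of artifacts the model must emit for every pathway in scope."""
    out = set(AAF_PATHWAYS[primary]["artifacts"])
    for pathway in secondary:
        out |= set(AAF_PATHWAYS[pathway]["artifacts"])
    return out
```

Naming a primary and secondary pathway up front makes the artifact union explicit before any pipeline work begins.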
Confirm JCIDS insertion point (CJCSI 3170.01)
Why This Matters
CJCSI 3170.01 governs JCIDS and is where validated capability requirements are born. Adversary technology intelligence that does not make it into an ICD or CDD in a defensible form is, in practice, not shaping acquisition — it is informing staff reading. The FY2025 NDAA Section 232 JCIDS reform pilot targets compression of ICD-to-program-of-record from the current 5–7 years to 18 months, which raises the bar on how quickly AI-assisted assessments must be generated and defended.
Note prompts
+ Which JROC or FCB will receive this analysis, and when is their next decision forcing function?
+ Do we have a named J8 or Service requirements sponsor for the output?
+ Are our outputs structured to drop into the JCIDS document template, or do they require re-synthesis?
JCIDS products (ICD, CDD, CPD) are where adversary capability intelligence must land to actually shape requirements.
Single choice
Define forecast horizon the model must support
Why This Matters
A model calibrated for near-term threat indications will over-weight recent OSINT and miss early-signal R&D workforce or publication shifts; a model tuned to 10-year trajectories will smooth over the near-term deltas a CDD threat annex needs. Most operational failures of acquisition intelligence programs are not modeling failures — they are horizon-mismatch failures.
Note prompts
+ Which horizon maps to the decision we are actually supporting?
+ Do we have separate model heads for near-term indications vs. long-range trajectory?
+ How will we validate forecast accuracy at a 5–7 year horizon when ground truth arrives years later?
Major defense programs span 10–20 years from concept to fielding; acquisition decisions today must anticipate adversary capability 5–10 years out.
Single choice
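One hedged way to implement the "separate model heads" prompt is a horizon router that sends near-term questions to an indications head and long-range questions to a trajectory head. The cut-off years below are assumptions to be tuned against the decisions actually being supported, not doctrine.

```python
# Hedged sketch: route a forecast request to the head calibrated for its
# horizon, rather than letting one model serve both and mismatch each.
NEAR_TERM_MAX_YEARS = 3   # assumed boundary for the indications head
LONG_RANGE_MIN_YEARS = 5  # assumed boundary for the trajectory head

def select_head(horizon_years):
    if horizon_years <= NEAR_TERM_MAX_YEARS:
        return "near_term_head"   # weights recent OSINT and collection deltas
    if horizon_years >= LONG_RANGE_MIN_YEARS:
        return "trajectory_head"  # weights R&D workforce and publication trends
    return "blended"              # 3-5 year gap: ensemble both heads
```

Making the router explicit also makes horizon mismatch auditable: every assessment records which head produced it.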
Set classification ceiling and enclave
Why This Matters
Technical and scientific intelligence is among the most tightly controlled IC products, and classified S&T assessments can reveal what the US knows and how it knows it. Hosting an AI system one classification level below its inputs is not a convenience — it is a sources-and-methods incident waiting to happen. Cross-domain fusion is achievable but requires an explicit CDS/ICD 503 approach from day one, not retrofitted later.
Technical intelligence ranges from open-source to TS/SCI with special handling caveats. The inference stack must live at the ceiling of its inputs.
Single choice
Trinidy — Commercial cloud LLMs cannot ingest TS/SCI S&T intelligence under sources-and-methods protection. On-prem technical-intel LLMs in a classified residency run inside the existing IC enclave — no S&T assessment or raw collection ever leaves the accreditation boundary.
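The rule that the inference stack must live at the ceiling of its inputs can be enforced mechanically. The level ordering below is the standard classification ladder, but the check itself is a minimal sketch, not a substitute for ICD 503 accreditation controls.

```python
# Minimal sketch: the hosting enclave must be accredited at or above the
# ceiling of everything the system ingests.
LEVELS = ["UNCLASSIFIED", "CUI", "SECRET", "TOP_SECRET", "TS_SCI"]

def required_enclave_level(input_levels):
    """Return the minimum hosting classification for a set of inputs."""
    return max(input_levels, key=LEVELS.index)

def hosting_is_valid(host_level, input_levels):
    """True only if the host sits at or above the input ceiling."""
    return LEVELS.index(host_level) >= LEVELS.index(required_enclave_level(input_levels))
```

Running this check against every proposed data feed at design time surfaces the "system one level below its inputs" failure before accreditation, not after.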
Confirm statutory MDAP applicability (10 U.S.C. § 2430)
Why This Matters
10 U.S.C. § 2430 thresholds place a program under statutory MDAP oversight, and the Weapons Systems Acquisition Reform Act of 2009 adds independent cost estimating, AoA sufficiency, and technical risk review obligations. MS-B certification under 10 U.S.C. § 2366b explicitly requires a threat-informed technical maturity assessment — the AI output must be auditable enough to survive that certification.
Note prompts
+ Is the supported program above the MDAP threshold today or forecast to be so?
+ Who is the MDAP oversight point of contact at OSD, and has our analytic approach been reviewed there?
+ Have we mapped our evidence products to the WSARA and § 2366b certification record?
Major Defense Acquisition Programs trigger WSARA, MS-B certification under 10 U.S.C. § 2366b, and OSD-level oversight that shapes which evidence the AI must produce.
Identify program-office stakeholders and decision forcing functions
Why This Matters
Acquisition intelligence that is not wired to a specific program office and a specific decision calendar becomes a staff study — technically interesting and operationally inert. The single most reliable predictor of whether an AI capability actually shapes acquisition outcomes is whether a named PM or PEO has it on their milestone tracker. Start the stakeholder map before the model architecture.
Note prompts
+ Who is the named PEO / PM sponsor, and what is their next decision forcing function?
+ Does the Service requirements sponsor own the JCIDS insertion point for the output?
+ How is the consumer kept in the loop between major decisions — pushed updates or pull?
PEO, PM, Service requirements sponsor, user representative, OSD staff — name the consumers and their decision calendar.
Define the decision artifacts the model must produce
Why This Matters
Raw model output is rarely usable in an acquisition record of decision. Artifacts that make it into a CDD threat annex or an ADM package must carry citations, confidence bands, dissent indicators, and classification markings that match the template. Formatting is not cosmetic — it is the difference between an AI product that shapes the decision and one that sits unread next to the human analyst draft.
Note prompts
+ Do we have template-formatted outputs for each artifact, or only free-text assessments?
+ Have we pre-coordinated with the JROC/FCB or ADM secretariat on acceptable citation formats?
+ Is every assessment traceable to its sources, including classified sources with handling caveats?
Select the acquisition artifacts the AI outputs must drop into without re-synthesis.
Select all that apply
Set latency SLA for routine vs. surge acquisition questions
Acquisition intelligence is not real-time, but program offices need specific latency contracts for each class of question.
Select all that apply
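The routine-vs-surge latency contract can be captured as a small SLA table. The question classes and hour values below are placeholders to negotiate with the program office, not recommended targets.

```python
# Illustrative SLA table: separate latency contracts per mode and question
# class. All entries are assumed placeholders.
SLA_HOURS = {
    ("routine", "standing_assessment"): 7 * 24,  # e.g. weekly refresh
    ("routine", "rfi"):                 72,
    ("surge",   "milestone_support"):   24,
    ("surge",   "crisis_query"):        4,
}

def within_sla(mode, question_class, elapsed_hours):
    """Check a response time against the contracted latency ceiling."""
    return elapsed_hours <= SLA_HOURS[(mode, question_class)]
```

Writing the table down forces the program office to say which questions are truly surge-class before the pipeline is sized for them.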
Budget authority and funding color
Why This Matters
The DoD FY26 budget is approximately $895B with the RDT&E request near $145B; a Service acquisition intelligence capability funded from RDT&E has different reprogramming and sustainment constraints than one funded from O&M. Funding color is not a back-office detail — it dictates whether the capability can be sustained past the initial pilot.
R&D (RDT&E), procurement, O&M, and OSD research colors shape what the analytic capability can do and how long it can do it.