Phase 1 of 6
Scoping, Program & Privacy Constraints
Define the insider-threat program scope, the cleared population under monitoring, the legal-authority perimeter, and the investigative workflow that the model will feed — before any feature engineering begins.
Program Authority & Scope
Confirm the statutory and executive authority for the insider-threat program
Why This Matters
E.O. 13587 established the National Insider Threat Task Force (NITTF), and the 2012 National Insider Threat Policy and Minimum Standards issued under it define what every cleared agency and cleared contractor must meet. A program that cannot point to the specific authority under which a given data source is collected is indefensible when challenged by the employee, their union, or an IG. The authority mapping is the first artifact any adjudicator, inspector, or employee counsel will ask for.
Note prompts
+ Has our insider-threat program office published a written authority-to-collect matrix per data source?
+ Are we a cleared contractor operating under 32 CFR Part 117 (NISPOM), a DoD component under DoDD 5205.16, or an IC element under CNSSD 504?
+ When did legal counsel last re-review the authority mapping against new data sources we have onboarded?
Identify the foundational authorities under which your program operates. Programs without an articulated authority mapping fail legal review on first examination. A sketch of a machine-readable authority matrix follows this item.
Select all that apply
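An authority matrix that lives only in a spreadsheet tends to drift from the feeds actually onboarded. Below is a minimal sketch of a machine-readable version that can be checked automatically against the onboarded-source list; every name here (AuthorityMapping, AUTHORITY_MATRIX, unmapped_sources) is illustrative, not a mandated schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorityMapping:
    data_source: str        # the feed the analytic consumes
    authority: str          # statute, E.O., directive, or regulation
    notice: str             # SORN or equivalent published notice
    last_legal_review: str  # ISO date of counsel's most recent re-review

# Placeholder entry; real rows come from the program office and counsel.
AUTHORITY_MATRIX = [
    AuthorityMapping(
        data_source="uam_endpoint_telemetry",
        authority="E.O. 13587 / agency insider-threat policy",
        notice="<agency SORN identifier>",
        last_legal_review="2024-01-15",
    ),
]

def unmapped_sources(onboarded: set[str]) -> set[str]:
    """Return onboarded data sources with no documented authority."""
    mapped = {m.data_source for m in AUTHORITY_MATRIX}
    return onboarded - mapped
```

Running the check on every onboarding request turns "fails legal review on first examination" into a gate the program cannot silently bypass.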
Define the cleared population in scope
Why This Matters
Insider-threat monitoring scope is not uniform. A cleared federal civilian, a uniformed service member, and a cleared contractor operate under three distinct bodies of law: Title 5 and the Privacy Act, the UCMJ, and the NISPOM (32 CFR Part 117), respectively. Mixing populations under a single monitoring model without segmenting by authority creates legal exposure and, in unionized populations, collective-bargaining violations.
Note prompts
+ Have we inventoried every cleared population this program will monitor and the specific authority that governs each?
+ Do we have different data-retention and use-limitation rules per population, and are they enforced technically or only procedurally?
+ Are uncleared privileged users (IT admins, cloud operators) in scope, and under what authority?
Scope the monitored population. Different populations carry different authorities, different notice requirements, and different bargaining constraints. A sketch of technically enforced per-population retention follows this item.
Select all that apply
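The second prompt above distinguishes technical from procedural enforcement. Here is a minimal sketch of the technical version, assuming per-population retention rules; the populations, day counts, and authority labels are placeholders, not policy:

```python
from datetime import datetime, timedelta, timezone

RETENTION_RULES = {
    "federal_civilian": {"authority": "Title 5 / Privacy Act", "retention_days": 365},
    "uniformed":        {"authority": "UCMJ",                  "retention_days": 365},
    "contractor":       {"authority": "32 CFR Part 117",       "retention_days": 180},
}

def is_expired(population: str, collected_at: datetime) -> bool:
    """True when a record has aged past its population's retention window.
    An unknown population raises rather than silently defaulting to 'keep'."""
    rule = RETENTION_RULES[population]  # KeyError = unscoped population
    cutoff = datetime.now(timezone.utc) - timedelta(days=rule["retention_days"])
    return collected_at < cutoff
```

The design choice worth copying is the failure mode: records from a population nobody scoped should halt the pipeline, not inherit the most permissive rule.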
Map Trusted Workforce 2.0 and Continuous Vetting integration
Why This Matters
Trusted Workforce 2.0, implemented by DCSA, moved the cleared community from the 5- and 10-year periodic-reinvestigation cycle to Continuous Vetting. Per DCSA, over 3.6M cleared personnel were enrolled in CV as of 2023, and the National Background Investigation Services (NBIS) platform is the system of record. Insider-threat UAM is not Continuous Vetting, but the two programs share data and workflow, and confusing them in policy creates both double-jeopardy risk for the employee and compliance risk for the program.
Note prompts
+ Do we clearly distinguish CV alerts (sourced from DCSA / NBIS) from UAM alerts (sourced from on-network behavior) in our case management?
+ Is CV data co-mingled with UAM data, and if so, under what authority?
+ Who in legal has signed off on the interface between CV and the insider-threat analytic?
Confirm how the program intersects with DCSA Continuous Vetting (CV) and the Trusted Workforce 2.0 framework. A sketch of provenance tagging for CV versus UAM alerts follows this item.
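One way to keep CV and UAM alerts from co-mingling by accident is to carry provenance on the record itself and refuse mixed case files by default. A minimal sketch; the enum values, field names, and the hard failure are illustrative choices, not a mandated design:

```python
from dataclasses import dataclass
from enum import Enum

class AlertOrigin(Enum):
    CV = "continuous_vetting"   # sourced from DCSA / NBIS
    UAM = "user_activity"       # sourced from on-network behavior

@dataclass(frozen=True)
class Alert:
    alert_id: str
    origin: AlertOrigin         # provenance travels with the record
    authority: str              # authority under which it was collected

def assert_not_comingled(alerts: list[Alert]) -> None:
    """Reject a case file mixing CV and UAM alerts; merging them should
    require an explicit, documented authority, not a default code path."""
    origins = {a.origin for a in alerts}
    if len(origins) > 1:
        raise ValueError(f"CV/UAM co-mingling requires documented authority: {origins}")
```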
Align program governance with NITTF maturity framework
Why This Matters
Layering an AI analytic on top of a program that has not yet met NITTF minimum standards compounds governance risk — the analytic's findings become contested because the program underneath is contested. Reach NITTF minimum standards first, then instrument with AI. NITTF-published maturity language is also the vocabulary in which IG and NITTF reviewers will score the program, so it is worth adopting explicitly.
Note prompts
+ What is our current NITTF maturity rating, self-assessed or independently assessed?
+ Which minimum standards have we not yet met, and is the AI program blocked on any of them?
+ Have we aligned program documentation to NITTF terminology?
Map the program against NITTF maturity / minimum standards before attempting to deploy AI on top of it.
Single choice
Define the monitored data-source perimeter
Why This Matters
Every source added to the analytic should have (a) an authority to collect, (b) a published SORN or equivalent notice, (c) a defined retention window, and (d) an access-control model. Unbounded data-source growth is the single most common cause of Privacy Act and collective-bargaining findings in mature insider-threat programs.
Note prompts
+ Is there a written per-source use-limitation and retention schedule, or only a generic program retention policy?
+ Have we explicitly excluded sources that would implicate Hatch Act, whistleblower, or union-organizing activity?
+ Who approves onboarding a new data source: the program manager alone, or the program manager plus legal and privacy?
Check every category of data source that will feed the analytic. Adding a source later triggers re-authorization. A sketch of an onboarding gate over the four required attributes follows this item.
Select all that apply
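Since adding a source later triggers re-authorization, the onboarding path is a natural place to enforce the four attributes named above (authority, notice, retention, access control). A minimal sketch of such a gate; the field names and the example source are illustrative:

```python
REQUIRED_ATTRIBUTES = ("authority_to_collect", "published_notice",
                       "retention_window_days", "access_control_model")

def validate_source(source: dict) -> list[str]:
    """Return the perimeter attributes a candidate source is missing;
    an empty list means the source is onboardable."""
    return [attr for attr in REQUIRED_ATTRIBUTES if not source.get(attr)]

candidate = {
    "name": "badge_telemetry",
    "authority_to_collect": "agency insider-threat policy",
    "published_notice": None,          # no SORN yet -> blocked
    "retention_window_days": 90,
    "access_control_model": "hub-analyst RBAC",
}

missing = validate_source(candidate)
if missing:
    print(f"blocked: {candidate['name']} missing {missing}")
```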
Scope out protected activity explicitly
Why This Matters
SEAD 4 (National Security Adjudicative Guidelines) and 5 U.S.C. § 2302 create explicit no-go zones for insider-threat monitoring. A model that flags whistleblower complaints, union activity, or protected mental-health service utilization as risk indicators is not just legally defective — it actively chills the behavior the federal workforce is entitled to. Scoping these out in the feature layer, not only in the adjudication layer, is the defensible architecture.
Note prompts
+ Are protected-activity exclusions enforced at the feature-engineering layer, or only as a manual review step?
+ Has an independent privacy / civil-liberties officer reviewed the feature list for proxy leakage (e.g., EAP attendance inferred from badge timing)?
+ Do we have a tested procedure for an employee to challenge a referral they believe stems from protected activity?
Identify categories of employee behavior the analytic must NOT treat as risk signal, even when statistically correlated. A sketch of a feature-layer exclusion follows this item.
Select all that apply
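Enforcing the exclusion at the feature layer means a blocklisted signal never reaches the model, regardless of downstream review discipline. A minimal sketch; the feature names and blocklist entries are hypothetical, and the real list must come from counsel and the privacy / civil-liberties office:

```python
# Features the model must never see, including known proxies for
# protected activity (all names hypothetical).
PROTECTED_FEATURE_BLOCKLIST = {
    "ig_hotline_contact",     # whistleblower activity, 5 U.S.C. § 2302
    "union_portal_access",    # protected organizing activity
    "eap_badge_proximity",    # proxy for protected health-service use
}

def filter_features(features: dict[str, float]) -> dict[str, float]:
    """Drop blocklisted features before model training or inference."""
    return {k: v for k, v in features.items()
            if k not in PROTECTED_FEATURE_BLOCKLIST}
```

A blocklist only catches the proxies someone has already named, which is why the second prompt above (independent review for proxy leakage) still matters.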
Deployment topology for the analytic
Specify where UAM analytics, feature computation, and adjudication workflow reside.
Single choice
Trinidy: Cleared-workforce PII (clearance status, foreign-contact reports, financial disclosures, UAM keystroke and clipboard data) must not leave the classified enclave. Trinidy runs UAM analytics, the feature store, and model inference entirely on-premises inside the agency or contractor enclave, with zero cloud egress for cleared-workforce signals.
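A zero-egress claim is testable. Below is a minimal sketch of a topology check over declared data flows; the component names and enclave labels are placeholders, not Trinidy's actual configuration:

```python
# Where each component runs; anything handling cleared-workforce signals
# must resolve to the enclave (illustrative mapping).
COMPONENT_LOCATION = {
    "uam_collector":   "enclave",
    "feature_store":   "enclave",
    "model_inference": "enclave",
    "dashboard_cdn":   "cloud",   # must never receive cleared signals
}

def egress_violations(flows: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Flag flows that move data from an enclave component to anything
    outside the enclave."""
    return [(src, dst) for src, dst in flows
            if COMPONENT_LOCATION.get(src) == "enclave"
            and COMPONENT_LOCATION.get(dst) != "enclave"]

print(egress_violations([("feature_store", "dashboard_cdn")]))
# -> [('feature_store', 'dashboard_cdn')], a zero-egress violation
```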
Define the referral / adjudication workflow the model feeds
Why This Matters
The single most consequential design decision in an insider-threat AI program is the rule that the model recommends, never decides. The SEAD 4 adjudicative guidelines make clearance eligibility a human judgment under the whole-person concept, and a referral must rest on reasonable suspicion grounded in articulable facts, not on a probability threshold from a classifier. A program whose architecture allows an AI score to gate an employment action is a program that will lose on appeal.
Note prompts
+ Is there any point in the workflow where a model output drives action without a human in the loop?
+ Can every model-derived referral be traced to the specific articulable facts supporting it?
+ Does our legal counsel agree the workflow satisfies due-process standards applicable to our population?
The analytic is a referral input. Adjudication of clearance eligibility under SEAD 4 is a human responsibility. A sketch of a referral record that enforces the human-in-the-loop rule follows this item.
Single choice
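The recommend-never-decide rule can be made structural rather than procedural: the referral record carries the score and the articulable facts, and no state transition happens without a named human. A minimal sketch, with all names illustrative:

```python
from dataclasses import dataclass

@dataclass
class Referral:
    subject_id: str
    model_score: float                  # an input to review, never a verdict
    articulable_facts: list[str]        # each fact traceable to evidence
    adjudicated_by: str | None = None   # stays None until a human acts

    def refer_for_action(self, adjudicator: str) -> None:
        """Only a named human can advance the case, and only when the
        referral rests on specific articulable facts."""
        if not self.articulable_facts:
            raise ValueError("no articulable facts; referral is not actionable")
        self.adjudicated_by = adjudicator
```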
Define false-positive tolerance at the referral gate
Why This Matters
A false referral is not harmless. It consumes an Insider Threat Hub analyst-hour, it creates a record in the employee's file that may follow them through future adjudications, and a high false-referral rate erodes trust in the program from both the workforce and leadership. Calibrating against a false-referral budget — not a raw F1 score — is how mature programs keep the analytic deployable.
Note prompts
+ What is our current measured false-referral rate, and is it tracked as a program KPI?
+ Who signs off on the analyst workload the false-referral rate implies?
+ Are we measuring false-referral rate by population (contractor vs. civilian vs. military)?
Select the acceptable rate of false referrals: referrals that, upon human review, show no actionable concern. A sketch of budget-based threshold calibration follows this item.
Single choice
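Calibrating to a false-referral budget rather than F1 can be done directly on a held-out, human-labeled referral set. A minimal sketch, assuming parallel score/label lists where label 1 means the referral showed actionable concern on review; all numbers are placeholders:

```python
def threshold_for_budget(scores, labels, monthly_volume, budget_per_month):
    """Return the lowest score threshold whose expected false referrals
    per month stay within the budget."""
    pairs = sorted(zip(scores, labels), reverse=True)  # highest scores first
    n = len(pairs)
    best = pairs[0][0] + 1e-9   # above the top score: zero referrals
    false_count = 0
    for score, label in pairs:
        false_count += (label == 0)
        if false_count / n * monthly_volume > budget_per_month:
            break               # this threshold would blow the budget
        best = score            # this threshold still fits
    return best

# e.g. 1,000 scored events per month and a budget of 5 false referrals:
t = threshold_for_budget([0.92, 0.81, 0.40, 0.33], [1, 0, 1, 0],
                         monthly_volume=1000, budget_per_month=5)
```

The budget, not the model's ROC curve, is the quantity the Hub supervisor who signs off on analyst workload can actually reason about.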