Phase 1 of 6
Foundation & Scoping
Define the fraud problem, data landscape, regulatory context, and deployment environment before any modeling begins.
Problem Definition
Define fraud types to detect
Required
Select all fraud types in scope for this model.
Select all that apply
Card-Not-Present (CNP)
ACH / Wire Fraud
Check Fraud
Account Takeover (ATO)
First-Party Fraud
Synthetic Identity Fraud
Merchant Fraud
Insider / Employee Fraud
Establish decision latency target
Required
Select the latency requirement that governs your model architecture.
Single choice
Real-time: <10ms
Real-time: <100ms
Near-real-time: <1 second
Batch: minutes to hours
Trinidy note: Trinidy edge inference nodes achieve sub-10ms inference for XGBoost/LightGBM models locally, eliminating cloud round-trip latency entirely. Critical for real-time payment decisioning.
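Whichever latency target is chosen, it can be sanity-checked with a simple timing harness. A stdlib-only sketch; the `score` function below is a hypothetical stand-in for a trained model's predict call, not any particular library API:

```python
import time
import statistics

def score(features):
    """Hypothetical stand-in for a trained model's predict call."""
    return sum(w * x for w, x in zip([0.3, 0.5, 0.2], features)) > 0.4

def latency_percentiles(fn, payload, n=1000):
    """Time n calls to fn and report p50/p99 latency in milliseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn(payload)
        samples.append((time.perf_counter() - t0) * 1000.0)
    qs = statistics.quantiles(samples, n=100)  # cut points p1..p99
    return {"p50": qs[49], "p99": qs[98]}

stats = latency_percentiles(score, [0.2, 0.9, 0.1])
print(f"p50={stats['p50']:.4f}ms  p99={stats['p99']:.4f}ms")
```

Measure against the tail (p99), not the mean: a real-time decisioning SLA is governed by worst-case behavior, not the average call.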
Define acceptable false positive rate
Required
Select your institution's agreed FP tolerance.
Single choice
<0.1% (minimal customer friction)
0.1%–0.5% (low)
0.5%–1% (moderate)
>1% (aggressive detection prioritized)
Define acceptable false negative rate
Required
Select the missed-fraud tolerance your business has agreed to.
Single choice
<1% missed fraud (highest recall)
1%–5% missed fraud
5%–10% missed fraud
>10% (precision-focused)
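Once both tolerances are agreed, they can be checked against a model's output with a small confusion-matrix sketch. The labels and predictions below are illustrative only:

```python
def error_rates(y_true, y_pred):
    """False positive rate (FP / legitimate) and false negative rate (FN / fraud)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp / y_true.count(0), fn / y_true.count(1)

# 1 = fraud, 0 = legitimate (illustrative data)
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
fpr, fnr = error_rates(y_true, y_pred)
print(f"FP rate: {fpr:.1%}  FN rate: {fnr:.1%}")
```

Note the two rates use different denominators: FP rate is measured over legitimate transactions, FN rate over fraud, so tightening one target typically loosens the other at a fixed decision threshold.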
Document regulatory and compliance constraints
Required
Select all regulations that apply to this model deployment.
Select all that apply
GLBA (Gramm-Leach-Bliley)
FFIEC Guidance
SR 11-7 Model Risk Management
GDPR
CCPA / CPRA
BSA / AML Requirements
PCI-DSS
DORA (EU Digital Ops Resilience)
Specify deployment environment
Required
Select your primary inference deployment target.
Single choice
Public cloud (AWS / Azure / GCP)
Private cloud / on-premises data center
Edge inference node (co-located)
Air-gapped / classified environment
Hybrid (cloud + edge)
Trinidy note: If the answer is edge, air-gapped, or on-premises, Trinidy is the inference substrate. Modular deployment to existing secured sites; no data center build required.
Data Landscape Assessment
Inventory available transaction features
Required
Check all feature types available in your transaction data.
Select all that apply
Transaction amount & currency
Timestamp & time-of-day
Merchant category code (MCC)
Device ID & fingerprint
IP address & geolocation
User account age & history
Velocity / frequency signals
Card / account metadata
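Of the feature types above, velocity / frequency signals usually have to be derived rather than read directly from raw transaction data. A minimal rolling-window sketch (class name, window size, and account IDs are illustrative):

```python
from bisect import bisect_left, insort

class VelocityTracker:
    """Rolling count of transactions per account within a time window (seconds)."""
    def __init__(self, window_s=3600):
        self.window_s = window_s
        self.times = {}  # account_id -> sorted list of timestamps

    def observe(self, account_id, ts):
        """Record a transaction; return the count inside the trailing window."""
        hist = self.times.setdefault(account_id, [])
        insort(hist, ts)
        # drop entries older than the window
        del hist[:bisect_left(hist, ts - self.window_s)]
        return len(hist)

v = VelocityTracker(window_s=3600)
counts = [v.observe("acct-1", t) for t in (0, 100, 200, 4000)]
print(counts)  # the fourth transaction falls outside the 1-hour window
```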
Assess historical fraud label quality
Required
How would you rate the quality of your historical fraud labels?
Single choice
High — confirmed labels with <30 day lag
Medium — some labeling lag or ambiguity
Low — significant lag, incomplete coverage
Unknown — labels not yet assessed
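The "<30 day lag" criterion above can be measured directly from label metadata. A sketch assuming each fraud label carries a transaction date and a confirmation date (the pairs below are made-up data):

```python
from datetime import date

def label_lag_profile(records, threshold_days=30):
    """Fraction of fraud labels confirmed within threshold_days, plus the worst lag."""
    lags = [(confirmed - txn).days for txn, confirmed in records]
    within = sum(1 for d in lags if d <= threshold_days)
    return within / len(lags), max(lags)

# (transaction_date, label_confirmation_date) pairs -- illustrative data
records = [
    (date(2024, 1, 5), date(2024, 1, 20)),
    (date(2024, 1, 10), date(2024, 3, 1)),
    (date(2024, 2, 1), date(2024, 2, 15)),
]
share, worst = label_lag_profile(records)
print(f"{share:.0%} of labels confirmed within 30 days; worst lag {worst} days")
```

Long label lag matters beyond this checklist item: it bounds how recent your training window can be, since the newest transactions are the least reliably labeled.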
Quantify class imbalance ratio
Required
What is your approximate fraud rate in the training dataset?
Single choice
<0.1% (extreme imbalance)
0.1%–0.5%
0.5%–2%
>2%
Unknown — not yet measured
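The measured fraud rate also yields the class-weight value that gradient-boosting libraries commonly take (e.g. `scale_pos_weight` in XGBoost/LightGBM, typically set to the negative-to-positive ratio). A sketch with illustrative labels:

```python
def imbalance_summary(labels):
    """Fraud rate and the legit-to-fraud ratio used as a class weight."""
    fraud = sum(labels)
    legit = len(labels) - fraud
    return fraud / len(labels), legit / fraud

# 1 = fraud, 0 = legitimate; illustrative 0.2% fraud rate
labels = [1] * 2 + [0] * 998
rate, spw = imbalance_summary(labels)
print(f"fraud rate: {rate:.2%}  scale_pos_weight: {spw:.0f}")
```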
Assess data recency and concept drift risk
Recommended
Fraud patterns shift seasonally and with new attack vectors. How stale is your oldest training data?
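One common way to quantify the drift risk flagged here (not prescribed by this checklist) is the Population Stability Index between an older training sample and recent production data. A stdlib-only sketch over one numeric feature; the 1e-6 floor for empty bins is a pragmatic choice to avoid log(0):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    Rule of thumb: PSI < 0.1 stable, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index
        # small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))

train = [i / 100 for i in range(100)]          # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # mass shifted upward
print(f"PSI(train, train)   = {psi(train, train):.3f}")
print(f"PSI(train, shifted) = {psi(train, shifted):.3f}")
```

Run per feature on a schedule; a feature crossing the 0.25 threshold is a candidate trigger for retraining.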
Identify graph-structure data availability
Recommended
Account-to-account links, shared device IDs, merchant networks — graph data enables GNN approaches.
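Even before any GNN work, shared-device links alone surface fraud rings. A production system would use a graph library or GNN tooling; this stdlib union-find sketch only illustrates clustering accounts that share a device ID (account and device names are made up):

```python
from collections import defaultdict

def account_clusters(pairs):
    """Group accounts linked transitively by shared device IDs via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    device_owner = {}
    for account, device in pairs:
        find(account)  # register the account
        if device in device_owner:
            parent[find(account)] = find(device_owner[device])
        else:
            device_owner[device] = account

    clusters = defaultdict(set)
    for account in parent:
        clusters[find(account)].add(account)
    return sorted(sorted(c) for c in clusters.values())

links = [("A", "dev1"), ("B", "dev1"), ("C", "dev2"), ("D", "dev2"), ("B", "dev2")]
print(account_clusters(links))  # B bridges dev1 and dev2, merging all four accounts
```

Cluster size and growth rate are themselves usable model features, independent of any graph neural network.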
Identify text/memo field availability
Optional
Transaction descriptions, notes, and memo fields can carry NLP signal for NER-based feature extraction.
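Before investing in full NER, keyword flags over memo text are a cheap baseline for assessing whether these fields carry signal at all. The pattern list below is hypothetical, not a vetted fraud lexicon:

```python
import re

# Hypothetical keyword patterns; a production system would use a trained NER model.
PATTERNS = {
    "has_gift_card": re.compile(r"\bgift\s*card\b", re.I),
    "has_crypto": re.compile(r"\b(btc|bitcoin|crypto|usdt)\b", re.I),
    "has_urgency": re.compile(r"\b(urgent|asap|immediately)\b", re.I),
}

def memo_features(memo):
    """Binary features extracted from a free-text transaction memo field."""
    return {name: int(bool(p.search(memo))) for name, p in PATTERNS.items()}

print(memo_features("URGENT: buy giftcard and send BTC immediately"))
```

If flags like these correlate with fraud labels in a holdout sample, that justifies the heavier NER-based extraction this item describes.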