Phase 1 of 6
Scoping & Decisioning Surface
Define the credit products, latency envelope, adverse-selection tolerance, and adverse-action generation requirements that govern every downstream architectural, data, and governance decision.
Credit Products & Decisioning Surface
Identify credit products in scope for real-time scoring
Why This Matters
Credit products differ by an order of magnitude in latency envelope, loss severity, and regulatory treatment, and almost never share a single model cleanly. A BNPL checkout scoring in 500ms has nothing architecturally in common with a mortgage model that runs in 30 seconds against 1,500 features — and the CFPB, OCC, and EU AI Act treat them differently under adverse-action and high-risk classification rules. The most common failure pattern is stretching a single card-origination model onto BNPL or auto after product launch and inheriting the wrong feature surface and the wrong governance posture.
Note prompts
+ Which products share enough feature overlap and loss profile to justify a shared backbone vs. dedicated sub-models?
+ Have we inventoried every product we underwrite today, plus every product launching in the next 12 months?
+ Who owns the per-product loss attribution so we can measure per-product model ROI and adverse-selection rate?
Required
Confirm which credit products your model must decision at application time.
Select all that apply
Credit card — new account origination
Credit card — line increase / line management
Unsecured personal loan
Auto loan (indirect / dealer-channel)
Auto loan (direct / refinance)
Mortgage — pre-qualification / point-of-sale
Mortgage — full underwriting
HELOC / home equity
BNPL / point-of-sale installment
Small business / SMB lending
Student loan / student refi
required
Define end-to-end decisioning latency envelope
Why This Matters
The 1-5 second real-time envelope is not a soft target — it is set by merchant checkout flow, dealer F&I desk workflow, and consumer abandonment curves. Upstart publicly reports that roughly 91% of its loans are fully automated and decisioned in seconds; the economic advantage of AI underwriting largely evaporates if the model cannot return a decision inside the channel's natural decision window. Explainability adds 30-50% overhead (SHAP KernelExplainer runs 500 model evaluations per prediction), so the latency budget must treat score + explanation as a single unit, not score alone.
Note prompts
+ What is our current P99 end-to-end latency, and where are the hot spots — bureau pull, feature retrieval, inference, or explanation?
+ Have we stress-tested the explanation path, not just the score path, at P99?
+ What is our timeout fallback — a rules-based decline or a deferred manual review?
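The budget arithmetic above can be sketched directly. Everything here is illustrative: the 500-evaluation count mirrors the KernelExplainer behavior described above, while the per-evaluation latencies, the 2-second budget, and the fallback policy are assumptions, not measurements.

```python
# Illustrative latency budget: the explanation path, not the score path,
# usually dominates, because KernelExplainer-style explainers re-evaluate
# the model hundreds of times per prediction.
P99_BUDGET_MS = 2000        # hypothetical < 2s envelope (card / personal loan)
N_EXPLAINER_EVALS = 500     # model evaluations per explanation (per the text)

def decision_with_budget(score_ms, per_eval_ms, budget_ms=P99_BUDGET_MS):
    """Verdict for one application: can score + explanation stay in budget?

    score_ms    -- measured inference latency for one score (assumed)
    per_eval_ms -- latency of one model evaluation inside the explainer
    """
    explain_ms = per_eval_ms * N_EXPLAINER_EVALS  # explainer re-scores the model
    total_ms = score_ms + explain_ms
    if total_ms <= budget_ms:
        return ("decide_inline", total_ms)
    # Timeout fallback is a policy choice, not a default: defer the
    # explanation to an async path, or route to rules-based decline / review.
    return ("defer_explanation", total_ms)

# A 4 ms model: the score is trivial, the 500-eval explanation is not.
print(decision_with_budget(score_ms=4, per_eval_ms=4))  # -> ('defer_explanation', 2004)
print(decision_with_budget(score_ms=4, per_eval_ms=1))  # -> ('decide_inline', 504)
```

The point of the sketch is the asymmetry: a model fast enough to score inline can still blow the envelope once the explainer multiplies its latency by hundreds.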
Required
Select the P99 latency budget your scoring + explanation pipeline must hold under peak load.
Single choice
< 500ms (BNPL / point-of-sale checkout)
< 2s (card / personal loan point-of-application)
< 5s (auto loan / indirect dealer channel)
< 30s (mortgage pre-qualification)
Batch / overnight (full mortgage underwriting)
Tiered by product (mixed SLA)
required · trinidy
Trinidy: Cloud-routed scoring plus SHAP explanation frequently breaks the 2-second BNPL and POS envelope because explainability adds 30-50% compute overhead on top of inference. Trinidy runs the gradient-boost stage and SHAP KernelExplainer on-node, preserving the sub-2s envelope even when the adverse-action explanation must be generated in-line.
Quantify acceptable adverse-selection rate
Why This Matters
Adverse selection is the signature failure mode of credit models — the model's accepted pool systematically concentrates the borrowers who exploit the features it weights most heavily. Upstart reports 53% lower default rate at equal approval volume, and Zest AI reports ~28% charge-off reduction, but both are measured against vintage-expected curves rather than point-in-time approval rates. Without a vintage-anchored tolerance, model teams optimize approval lift that looks great for 60 days and produces a charge-off wave 12-18 months later.
Note prompts
+ What is our per-vintage expected loss curve, and are we measuring realized loss against it monthly?
+ Do we distinguish adverse-selection loss from macro-driven loss (rate moves, unemployment shifts)?
+ Who in Risk owns the adverse-selection metric and signs off on the tolerance?
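A minimal sketch of a vintage-anchored check, assuming the 1.5% tolerance from the "balanced growth" option below. The expected-loss curve values and the months-on-book schema are invented for illustration.

```python
# Hypothetical vintage-expected cumulative default curve
# (months-on-book -> expected cumulative default rate).
EXPECTED_CURVE = {3: 0.004, 6: 0.011, 12: 0.024, 18: 0.035}
TOLERANCE = 0.015  # alert above +1.5% excess (balanced growth posture)

def excess_default(realized, month):
    """Realized cumulative default minus the vintage-expected rate at that age."""
    return realized - EXPECTED_CURVE[month]

def check_vintage(realized_by_month):
    """Flag every month-on-book point where excess default breaches tolerance."""
    return {m: excess_default(r, m)
            for m, r in realized_by_month.items()
            if excess_default(r, m) > TOLERANCE}

# A vintage that looks fine at 6 and 12 months and breaches at 18 --
# the delayed charge-off wave described above.
print(check_vintage({6: 0.012, 12: 0.030, 18: 0.055}))
```

Measuring against the curve, rather than against point-in-time approval rates, is what separates adverse-selection loss from ordinary growth-driven loss.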
Required
Define the adverse-selection tolerance (fraction of approved loans that default beyond the expected loss curve).
Single choice
< 0.5% excess default rate vs. expected (conservative)
0.5% - 1.5% (balanced growth posture)
1.5% - 3% (growth-focused)
> 3% (aggressive expansion / market-entry)
Not currently measured vs. vintage-expected
required
Confirm adverse-action notice generation requirement
Why This Matters
ECOA / Regulation B (12 CFR 1002) requires a specific, accurate, human-understandable adverse action notice with up to four principal reasons within 30 days of the decision. The CFPB's 2022 Circular and 2024 enforcement posture made it unambiguous that "the model said so" is not compliant — the top factors must map to real features the applicant can understand and, in principle, remediate. An AI model that can't produce reason codes isn't a credit model; it's a research artifact. Designing reason-code generation into the scoring path (rather than bolting it on) is the difference between a 2-second decision and a 15-second decision.
Note prompts
+ Are reason codes generated on the inference path, or on a post-decision batch job?
+ Who has legally approved our reason-code taxonomy — risk, legal, compliance?
+ How do we test that reason codes remain accurate as the model is retrained?
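One way to design reason-code generation into the scoring path is to map signed per-feature attributions (SHAP or otherwise) onto a legally approved taxonomy and keep the top principal reasons. The feature names, taxonomy text, and sign convention below are hypothetical.

```python
# Hypothetical reason-code taxonomy: each model feature maps to plain
# language the applicant can understand and, in principle, remediate.
REASON_TAXONOMY = {
    "utilization_ratio":   "Proportion of revolving balances to credit limits is too high",
    "months_since_delinq": "Recent delinquency on one or more accounts",
    "inquiries_6mo":       "Number of recent credit inquiries",
    "income_to_payment":   "Income insufficient relative to proposed payment",
    "file_thickness":      "Limited credit history",
}

def adverse_action_reasons(attributions, max_reasons=4):
    """Top principal reasons for a decline, from per-feature attributions.

    attributions -- {feature: signed contribution}; by this sketch's
    convention, positive values push toward decline and qualify as reasons.
    Reg B commentary treats more than four reasons as generally unhelpful,
    hence the default cap.
    """
    decline_drivers = [(f, a) for f, a in attributions.items() if a > 0]
    decline_drivers.sort(key=lambda fa: fa[1], reverse=True)
    return [REASON_TAXONOMY[f] for f, _ in decline_drivers[:max_reasons]]

print(adverse_action_reasons({
    "utilization_ratio": 0.31, "months_since_delinq": 0.22,
    "inquiries_6mo": 0.05, "income_to_payment": -0.10,
    "file_thickness": 0.12,
}))
```

Because the mapping runs on the same attributions the scorer already produces, it can sit on the inference path rather than a post-decision batch job.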
Required
Specify whether the model must generate Reg B / ECOA-compliant adverse-action reason codes at decision time.
Single choice
Yes — reason codes must be generated in-line with the score (real-time)
Yes — reason codes generated post-decision, within 30-day Reg B window
Partial — model outputs SHAP values, downstream system builds notice
Not applicable (model informs but does not decide adverse action)
required
Define fair-lending posture and Less Discriminatory Alternative (LDA) search scope
Why This Matters
The CFPB in 2024 made clear that examiners are now actively searching for LDAs and that failing to search is itself evidence of non-compliance — "the decision to use an algorithmic tool can itself constitute a discriminatory policy." Any AI credit model in production should have an auditable record of alternative models considered, the disparate-impact results of each, and the business justification for the selection. This is a documentation artifact, not a modeling exercise — and it must exist before the regulator asks.
Note prompts
+ Do we have a documented LDA search protocol, or have we only disparate-impact-tested the final champion?
+ Who signs off on the business justification for a selected model when a less-discriminatory alternative exists?
+ How frequently is the LDA search refreshed — per retrain, per material threshold change?
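A sketch of the documentation artifact, assuming the four-fifths adverse-impact-ratio screen as the disparate-impact test. The candidate models, AUC/AIR figures, and record schema are invented for illustration.

```python
# Adverse impact ratio (AIR): protected-class approval rate divided by
# control-group approval rate. The four-fifths rule treats AIR < 0.8
# as a red flag warranting justification or an alternative model.
def adverse_impact_ratio(approvals_prot, apps_prot, approvals_ctrl, apps_ctrl):
    return (approvals_prot / apps_prot) / (approvals_ctrl / apps_ctrl)

def lda_search_record(candidates):
    """Auditable record of an LDA search over candidate models.

    candidates -- {name: {"auc": ..., "air": ...}} (hypothetical metrics).
    Every alternative considered is kept in the record, sorted by AIR, so
    reviewers can see the accuracy/fairness trade-off and justify the
    selection in writing -- the artifact the examiner asks for.
    """
    record = []
    for name, m in sorted(candidates.items(),
                          key=lambda kv: kv[1]["air"], reverse=True):
        record.append({"model": name, **m, "flag": m["air"] < 0.8})
    return record

for row in lda_search_record({
    "champion_gbm":   {"auc": 0.792, "air": 0.74},
    "lda_candidate":  {"auc": 0.786, "air": 0.88},  # less discriminatory, ~equal AUC
    "logit_baseline": {"auc": 0.751, "air": 0.91},
}):
    print(row)
```

The key property is that rejected alternatives stay in the record: a flagged champion sitting next to an unflagged candidate with near-equal AUC is precisely the situation the business justification must address.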
Required
CFPB 2024 guidance requires active LDA search when adverse impact is detected. Define the scope of alternative-model search that will be documented.
Single choice
Full LDA search — multiple model families tested, all DI-tested, selection justified
Bounded LDA search — final candidates DI-tested, selection justified
Disparate-impact testing of champion only
No structured LDA search performed
required
Confirm data residency and open-banking consent scope
Required
Map applicant, bureau, and open-banking data to jurisdictional and consent constraints before architecture is finalized.
Select all that apply
US — GLBA Safeguards Rule compliant perimeter
FCRA-covered bureau data residency
CFPB Section 1033 consumer-permissioned data
EU GDPR — data must remain in EU
UK GDPR — UK residency required
Canada PIPEDA / OSFI guidance
Cross-border permitted under SCCs / approved vendors
On-premises only — no cloud scoring endpoints
required · trinidy
Trinidy: Credit bureau data (FCRA-covered), bank-transaction cash-flow data (1033-permissioned), and payroll data (employment-verification consent) each carry different residency and retention obligations. Trinidy keeps scoring, explanation, and audit logging entirely inside the institution's perimeter so consumer-permissioned data does not traverse third-party SaaS inference endpoints.
Specify human-in-the-loop / override policy
Why This Matters
EU AI Act Article 14 (effective August 2026 for high-risk systems, which explicitly includes credit scoring per Annex III) requires that human oversight be meaningful, not ceremonial — the reviewer must be able to understand, question, and override the automated decision. SR 11-7 reaches the same conclusion from a different angle: the model's use must be governed by defined change, monitoring, and escalation procedures. A "human in the loop" who cannot actually interpret the model's output is non-compliant under both regimes.
Note prompts
+ Do our human reviewers have access to SHAP reason codes and feature values at review time?
+ What fraction of automated declines receive substantive human review vs. rubber-stamp approval?
+ Are human override rates tracked as a model monitoring signal?
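The override-rate signal from the last prompt can be computed directly from the review queue. The log schema here, a (model_decision, human_decision) pair per reviewed application, is a hypothetical.

```python
from collections import Counter

def override_rate(review_log):
    """Fraction of human-reviewed decisions where the reviewer overrode the model.

    review_log -- iterable of (model_decision, human_decision) pairs
    (hypothetical schema). A drifting override rate is itself a monitoring
    signal: it can mean reviewers no longer trust the model, or -- if it
    sits near zero -- that review has become a rubber stamp.
    """
    counts = Counter(model == human for model, human in review_log)
    reviewed = counts[True] + counts[False]
    return counts[False] / reviewed if reviewed else 0.0

# 90 confirmed declines, 10 reviewer reversals -> 10% override rate.
log = [("decline", "decline")] * 90 + [("decline", "approve")] * 10
print(override_rate(log))  # -> 0.1
```

Trending this metric per segment and per model version gives the "meaningful oversight" evidence that Article 14 and SR 11-7 both ask for.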
Required
EU AI Act Art. 14 requires human oversight for high-risk credit scoring. Define the override and escalation surface.
Single choice
Fully automated for approvals, human review for all declines
Fully automated for approvals and low-risk declines, human review for edge cases
Hybrid — human review triggered by explainability confidence or segment rules
Fully automated with audit-trail-only human oversight
Fully manual — model is advisory only
required
Specify deployment topology for scoring + explanation
Required
Select the physical/logical deployment target for the scoring ensemble and explanation layer.
Single choice
On-premises / private data center
Private cloud / dedicated VPC
Public cloud with PCI/GLBA attested endpoint
Hybrid — on-prem inference + cloud training
SaaS vendor-hosted scoring (Upstart / Zest / etc.)
required · trinidy
Trinidy: GLBA Safeguards Rule, FCRA reuse restrictions, and the sensitivity of SSN + income + bank-transaction data make cloud-routed scoring a compliance drag. Trinidy runs the full scoring + SHAP + LLM-explanation stack inside the institution's security perimeter — no applicant data transits public cloud APIs.