Phase 1 of 6
Scoping & Mandate
Define the AUM tier, mandate types, rebalancing cadence, explainability bar, and benchmark set that govern every downstream modeling and governance decision.
AUM Tier & Mandate Surface
Classify AUM tier for the target deployment
Why This Matters
AUM tier sets the economic value of a basis point and therefore the tolerable cost of the entire AI stack. Vanguard Digital Advisor manages $311.9B with minimal per-account compute budget — a latency profile acceptable to Aladdin serving $25T would bankrupt a mass-affluent robo on infra cost alone. It also sets the regulatory center of gravity: Reg BI dominates retail, institutional is driven by fiduciary duty under the Investment Advisers Act, and cross-border mandates pull in EU AI Act and IFRS S1/S2 simultaneously.
Note prompts
+ Is our AUM tier mix uniform enough to share one portfolio AI stack, or do we need segmented deployments?
+ What is the basis-point budget for all AI infrastructure combined, expressed as a percentage of management fee?
+ How does our tier map to the regulatory regime that will be applied at examination — retail-dominant or institutional-dominant?

AUM tier changes everything — optimizer complexity, compliance depth, infra footprint, and the marginal value of a basis point.
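The basis-point budget question above is a direct conversion: pick a share of the management fee that AI infrastructure may consume, and that fixes both the bps budget and the absolute dollar cap. A minimal sketch — the fee level, AUM figure, and 5% allocation below are illustrative assumptions, not industry standards:

```python
def ai_budget_bps(aum_usd: float, mgmt_fee_bps: float,
                  ai_share_of_fee: float) -> tuple[float, float]:
    """Convert a target share of the management fee into a basis-point
    budget and an absolute annual dollar cap for the AI stack."""
    budget_bps = mgmt_fee_bps * ai_share_of_fee   # bps of AUM available for AI infra
    budget_usd = aum_usd * budget_bps / 10_000    # 1 bp = 0.01% of AUM
    return budget_bps, budget_usd

# Hypothetical mass-affluent robo: $10B AUM, 25 bps fee, 5% of fee to AI infra
bps, usd = ai_budget_bps(aum_usd=10e9, mgmt_fee_bps=25, ai_share_of_fee=0.05)
# → 1.25 bps of AUM, i.e. $1.25M per year
```

The same 5% fee share yields a radically different dollar cap at each AUM tier, which is why the tier decision precedes every infrastructure choice.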
Enumerate mandate types the model must serve
Why This Matters
A single optimizer tuned for long-only multi-asset will underperform on long-short hedge mandates and actively violate SFDR Article 9 disclosure rules on ESG mandates. Mandate type also dictates which objective function the optimizer is actually solving — Markowitz mean-variance is appropriate for balanced accounts, risk-parity for institutional multi-asset, and Black-Litterman once views are integrated. Mixing mandate types under one model is the single most common architectural mistake at mid-market shops that scaled from one product line.
Note prompts
+ Do our mandate types share enough structure to justify a single optimizer, or do we need a family?
+ Which mandates have hard regulatory constraints (SFDR, UCITS, ERISA) that must be enforced by the model, not by downstream review?
+ Who signs off that our optimizer is appropriate for every mandate it touches?

Mandate types materially change the objective function, constraint set, and universe of acceptable models.
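To make the "different objective function per mandate" point concrete, here is the unconstrained Markowitz mean-variance solution that suits a balanced long-only account — a sketch with hypothetical return and covariance inputs, not a production optimizer. Real mandates layer long-only, turnover, and regulatory constraints on top, turning this into a constrained QP, and risk-parity or Black-Litterman mandates replace the objective entirely:

```python
import numpy as np

def mean_variance_weights(mu: np.ndarray, sigma: np.ndarray,
                          risk_aversion: float = 4.0) -> np.ndarray:
    """Unconstrained Markowitz solution w = (1/lambda) * inv(Sigma) @ mu,
    rescaled to fully invest (weights sum to 1)."""
    w = np.linalg.solve(sigma, mu) / risk_aversion
    return w / w.sum()

mu = np.array([0.05, 0.07, 0.03])        # hypothetical expected returns
sigma = np.array([[0.04, 0.01, 0.00],    # hypothetical covariance matrix
                  [0.01, 0.09, 0.01],
                  [0.00, 0.01, 0.02]])
w = mean_variance_weights(mu, sigma)
```

Note that nothing in this objective knows about SFDR Article 9 or ERISA — which is exactly why one optimizer serving every mandate type is the architectural mistake the text describes.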
Define rebalancing cadence and trigger logic
Why This Matters
Betterment has published that opportunistic rebalancing delivers +0.18% annual return improvement over calendar-based rebalancing — small per year but compounding meaningfully over mass-affluent time horizons. The cadence choice also dictates latency and compute profile: opportunistic requires sub-minute market-data-triggered inference, while calendar-based can run as an overnight batch. A cadence chosen without modeling the compute cost tends to fail either the SLA or the budget.
Note prompts
+ Do we have measured evidence of the return lift from our current rebalancing cadence vs. alternatives?
+ Is the rebalancing decision a human-in-the-loop advisor action or an automated execution?
+ Have we modeled the compute and tax-lot cost of opportunistic rebalancing at scale?

Calendar vs. drift vs. opportunistic rebalancing — and whether the model decides or merely recommends.
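The drift-triggered variant of the cadence choice can be sketched in a few lines — the 5% absolute band below is an illustrative parameter, not a recommendation. Calendar rebalancing would replace this with a date check; opportunistic adds market-data triggers, which is what drives the sub-minute inference requirement:

```python
def rebalance_triggered(current: dict[str, float],
                        target: dict[str, float],
                        band: float = 0.05) -> bool:
    """Drift-band check: fire when any asset's current weight strays more
    than `band` (absolute) from its target weight."""
    return any(abs(current[a] - target[a]) > band for a in target)

# 60/40 target; equities have drifted to 67% -> outside the 5% band
assert rebalance_triggered({"equity": 0.67, "bond": 0.33},
                           {"equity": 0.60, "bond": 0.40})
```

Whether this function's `True` triggers an automated execution or merely a recommendation to an advisor is the human-in-the-loop question raised in the prompts above.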
Specify explainability and narrative requirements
Why This Matters
EU AI Act Annex III classifies AI systems used for creditworthiness and essential financial services as high-risk — carrying explicit explainability, human oversight, and logging obligations that now apply to portfolio AI serving EU retail clients. SEC Form ADV requires material AI use to be disclosed with factual backing, and the 2024 Delphia ($225K) and Global Predictions ($175K) AI-washing actions are the first enforcement of that standard. A portfolio AI without a narrative layer that can be substantiated against the model's actual behavior is a Form ADV liability waiting to surface.
Note prompts
+ For every material AI-driven rebalance, can we produce a rationale that would survive SEC examination without retroactive construction?
+ Is our explanation layer derived from the model or written separately — and if separately, how do we prevent narrative-vs-performance drift?
+ Do we have a defined list of material AI uses that have been disclosed on Form ADV?

Regulators and clients now expect a plain-language rationale for every material rebalance — not only a weight change.
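One way to avoid the retroactive-construction problem in the first prompt is to write an immutable rationale record at decision time, keyed to the model version and the actual weight changes. A minimal sketch — the field names and record shape are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class RebalanceRationale:
    """Audit record captured at decision time, so the client-facing narrative
    can later be substantiated against the model's actual inputs and outputs."""
    account_id: str
    model_version: str                 # ties the narrative to a specific model
    trigger: str                       # e.g. "drift_band", "calendar"
    weight_changes: dict[str, float]   # asset -> delta weight actually applied
    narrative: str                     # plain-language rationale shown to client
    generated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

Because the record is frozen and timestamped, a rationale produced at examination is provably the one generated alongside the trade, not reconstructed afterwards.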
Trinidy — Narrative generation over proprietary allocation data must stay inside the institution's perimeter — leaking a query pattern leaks the strategy. Trinidy hosts the RAG-grounded narrative generator on-prem alongside the optimizer, so rationales are generated from live portfolio data without third-party API exposure.
Select benchmark set and performance attribution scheme
Why This Matters
ESMA's 2023 research showing +2.8% average AI return improvement explicitly flagged that the lift comes from downside protection and rebalancing discipline — not pure alpha — and that AI portfolios holding bonds and international equities are systematically mis-evaluated against an S&P 500 benchmark. Benchmark selection is also an ESMA performance advertising rule compliance question: misleading reference comparisons are actionable. A model validated against the wrong benchmark produces a misleading performance story that does not survive an adversarial review.
Note prompts
+ Is our benchmark set documented per mandate, or do we default to S&P 500 out of habit?
+ Have we checked our benchmark choices against ESMA performance advertising rules for EU distribution?
+ Who approves a benchmark change, and is it the same person who owns the performance story?

Benchmarks define what "performance" actually means — pick them before the model, not after.
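The mis-evaluation problem the ESMA finding describes — a bond-holding AI portfolio judged against the S&P 500 — disappears once each mandate carries its own blended benchmark. A minimal sketch with hypothetical index weights and returns:

```python
def blended_benchmark_return(blend: dict[str, float],
                             index_returns: dict[str, float]) -> float:
    """Return of a mandate-specific blended benchmark, e.g. 60% equity
    index / 40% bond index, instead of a single default equity index."""
    return sum(w * index_returns[ix] for ix, w in blend.items())

# Multi-asset portfolio judged against its own 60/40 blend, not SPX alone
blend = blended_benchmark_return({"SPX": 0.6, "AGG": 0.4},
                                 {"SPX": 0.10, "AGG": 0.02})
excess = 0.085 - blend   # portfolio returned 8.5%: excess vs. the blend
```

Against the blend the portfolio shows genuine excess return; against the pure equity index the same numbers would read as underperformance — the misleading comparison the advertising rules target.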
Confirm deployment topology for the optimizer and narrative stack
Latency requirements of 1–10 seconds make cloud deployment acceptable, but strategy confidentiality and data residency may force on-prem.
Trinidy — Portfolio construction logic, factor models, and alpha signals are core IP. Inference on proprietary cloud APIs leaks query patterns that sophisticated counterparties can reverse-engineer. Trinidy isolates the full agentic pipeline — optimizer, compliance rail, narrative generator — inside the institution's perimeter with model-agnostic serving.
Define multi-jurisdiction client data boundary
Global wealth managers cannot aggregate EU, Chinese, and Indian client data in a single US cloud.
Trinidy — Trinidy regional inference nodes enable per-jurisdiction portfolio analysis without violating cross-border transfer rules. Each region runs its own optimizer and narrative generator against locally resident client data.
Establish AI-washing substantiation process
Why This Matters
The SEC fined Delphia $225K and Global Predictions $175K in March 2024 in the first AI-washing enforcement actions under the Investment Advisers Act. Delphia claimed its AI analyzed client data to drive recommendations without having that capability; Global Predictions claimed to offer "the first regulated AI financial advisor." More enforcement is explicitly promised. Every external AI claim must have a living pointer to the model behavior that substantiates it — "AI-powered" is a material statement under Form ADV, not a marketing tagline.
Note prompts
+ Do we have a documented mapping from external AI marketing claims to the model capabilities that substantiate them?
+ Who in legal / compliance signs off before "AI-powered" appears in any client-facing material?
+ Is our Form ADV AI disclosure current with the actual model behavior in production?

Every AI marketing claim must be traceable to a measurable capability — Delphia and Global Predictions learned this expensively in 2024.
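The claims-to-capabilities mapping in the first prompt can be enforced mechanically: keep a registry of documented, measurable capabilities and flag any external claim with no entry. A sketch with hypothetical claims and registry contents:

```python
def unsubstantiated_claims(marketing_claims: set[str],
                           capability_registry: dict[str, str]) -> set[str]:
    """Flag external AI claims with no pointer to a documented model
    capability -- the gap the Delphia and Global Predictions actions
    penalized. Registry maps claim -> substantiating evidence."""
    return {c for c in marketing_claims if c not in capability_registry}

claims = {"AI-driven rebalancing", "AI tax-loss harvesting"}
registry = {"AI-driven rebalancing": "drift model v3, validation report 2024-Q2"}
assert unsubstantiated_claims(claims, registry) == {"AI tax-loss harvesting"}
```

Running this check as a gate before any client-facing material ships gives legal/compliance a concrete sign-off artifact rather than a judgment call.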