Phase 1 of 6
Scoping & Forecast Horizon Constraints
Define the capex decision, forecast horizons, site hierarchy, and accuracy tolerances that will govern every subsequent modeling choice. Capacity planning forecasts drive multi-year capital commitments, so framing the problem precisely is the highest-leverage step.
Capex Decision & Forecast Surface
Identify capex decisions the forecast will gate
Why This Matters
A forecast that informs tower lease commitments has a different accuracy and horizon profile than one that informs a spectrum refarm or small cell densification program. Lumping every capex decision into one forecast produces a model that under-serves all of them. GSMA Intelligence puts global telecom capex at around $300B per year, and the operator-level allocation between macro, densification, and spectrum is the single highest-stakes planning decision made annually.
Note prompts
+ Which capex line items are currently sized by spreadsheet extrapolation rather than ML inference?
+ Do we have separate forecast models per capex class, or is one model used across all?
+ What is the blast radius of a 10% demand forecast error on each capex line — macro vs. small cell vs. spectrum?
Confirm which capital decisions the model must inform with quantified demand.
Select all that apply
Define forecast horizons and granularity
Why This Matters
Forecast error compounds with horizon — ML models trained on operator data typically deliver ±8–12% accuracy at 12 months vs. ±30% for spreadsheet extrapolation, but the spread at 36 months is much wider. Multi-horizon models that share a feature base but optimize per-horizon losses consistently outperform a single long-horizon model. Granularity matters equally: a national forecast that is accurate in aggregate can still misplace every individual upgrade if site-level variance is ignored.
Note prompts
+ What is our current forecast accuracy at 12 months, measured retrospectively against actuals?
+ Do we produce site-level forecasts, or only market-level with top-down allocation?
+ How do we reconcile 30-day tactical forecasts with 12-month capex planning forecasts — are they the same model or independent?
Select the forecast horizons the model must produce, plus the spatial and temporal granularity.
Select all that apply
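The horizon-and-granularity pairing described above can be captured as an explicit spec that every downstream model reads, so tactical and capex-planning forecasts cannot silently diverge. A minimal sketch — the horizon values, granularities, and WAPE targets below are illustrative assumptions, not operator benchmarks:

```python
# Hypothetical multi-horizon forecast spec. Each horizon carries its own
# spatial/temporal granularity and accuracy target, reflecting that error
# compounds with horizon. All numbers are illustrative assumptions.
FORECAST_SPEC = {
    # horizon_days: spatial level, temporal bucket, target WAPE
    30:   {"spatial": "sector",  "temporal": "daily",     "target_wape": 0.05},
    365:  {"spatial": "site",    "temporal": "monthly",   "target_wape": 0.10},
    1095: {"spatial": "cluster", "temporal": "quarterly", "target_wape": 0.25},
}

def spec_for(horizon_days: int) -> dict:
    """Return the spec for the nearest defined horizon at or above the request;
    fall back to the longest horizon if the request exceeds all of them."""
    eligible = [h for h in sorted(FORECAST_SPEC) if h >= horizon_days]
    return FORECAST_SPEC[eligible[0]] if eligible else FORECAST_SPEC[max(FORECAST_SPEC)]
```

A shared spec like this is also what lets a 30-day tactical model and a 12-month planning model answer the reconciliation question in the prompts: both read the same table instead of embedding their own horizons.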
Establish target forecast accuracy by horizon
Why This Matters
Industry analysts estimate 15–25% of annual RAN capex is misallocated due to forecasting errors — for a Tier 1 operator that can exceed $500M annually. Setting an explicit accuracy target per horizon is the only way to link model quality back to capex efficiency in defensible P&L terms. An accuracy target also forces a conversation about which features are available per horizon — 30-day forecasts can use live PM counters, 3-year forecasts cannot.
Note prompts
+ Have we computed WAPE and quantile loss on last year's forecast vs. measured traffic, per site?
+ What accuracy threshold would materially change our capex approval process?
+ Do we evaluate forecast accuracy at the site level or only aggregate, and does that match how we spend capex?
Define MAPE / WAPE / quantile-loss targets per horizon that the model must hit to be considered capex-grade.
Single choice
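The three metrics named above have precise definitions worth pinning down before targets are set, since MAPE and WAPE can diverge sharply on low-traffic sites. A minimal pure-Python sketch:

```python
def wape(actual, forecast):
    """Weighted absolute percentage error: sum(|a - f|) / sum(|a|).
    Robust to near-zero actuals, which dominate plain MAPE."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(abs(a) for a in actual)

def mape(actual, forecast):
    """Mean absolute percentage error; undefined when any actual is zero."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def pinball_loss(actual, forecast, q):
    """Quantile (pinball) loss at quantile q in (0, 1): penalizes
    under-forecasts by q and over-forecasts by (1 - q)."""
    return sum(max(q * (a - f), (q - 1) * (a - f))
               for a, f in zip(actual, forecast)) / len(actual)
```

Computing these per site and per horizon against last year's actuals, as the first prompt suggests, is what turns "capex-grade" from a slogan into a pass/fail threshold.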
Quantify capex-at-risk budget the forecast protects
Why This Matters
Framing the forecast as a revenue-protection and capex-efficiency function with a dollar-denominated scope changes how network planning, finance, and engineering prioritize features and horizons. Without an explicit capex-at-risk envelope, model teams optimize accuracy in ways the business cannot monetize — and a 2% WAPE improvement looks identical to a 20% WAPE improvement on a dashboard. The cost of inference is a rounding error against a single misplaced macro upgrade cycle.
Note prompts
+ What was last year's capex that was explicitly gated or reprioritized by a forecast output?
+ Who owns the P&L line for stranded capex from forecast error — finance, network planning, or deployment?
+ Is forecast accuracy a board-reported KPI alongside capex-to-revenue ratio?
Tie the forecast's dollar impact to an annual capex envelope it is permitted to move.
Single choice
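The dollar-denominated framing above can be made concrete with back-of-envelope arithmetic linking a WAPE improvement to recoverable capex. The inputs below are illustrative assumptions for a hypothetical operator, not benchmarks:

```python
# Capex-at-risk sketch using the 15-25% misallocation range quoted above.
# All inputs are illustrative assumptions.
def capex_at_risk(annual_ran_capex, misallocation_rate, wape_reduction, elasticity=1.0):
    """Dollars of misallocated capex a forecast improvement could recover.
    elasticity: assumed fraction of forecast-driven misallocation that a
    relative WAPE reduction actually converts into better allocation."""
    misallocated = annual_ran_capex * misallocation_rate
    return misallocated * wape_reduction * elasticity

# e.g. a $2B RAN capex envelope, 20% misallocated, and a model that
# halves the forecast-attributable share of that misallocation
recoverable = capex_at_risk(2_000_000_000, 0.20, 0.5)
```

Even a rough version of this calculation answers the dashboard problem in the blurb: it converts a 2% vs. 20% WAPE improvement into two very different dollar figures.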
Define confidence interval and quantile policy
Why This Matters
A point forecast gives planners no basis to distinguish a site where demand is tightly bounded from one with wide uncertainty, which is precisely the information capex committees need. Quantile regression models (LightGBM quantile, DeepAR, TFT) output explicit P10/P50/P90 at training cost comparable to a point forecast. The conversation shifts from "is the forecast right?" to "how much headroom do we build for the P90 case?" — which is the right question for capital commitment.
Note prompts
+ Do our current forecasts ship as point estimates or as quantile bands?
+ Are planners trained to consume P90 upside for capacity sizing and P10 downside for decommissioning?
+ What is our policy for sites where the P10-to-P90 spread is wider than the planning action threshold?
Planners should consume uncertainty ranges, not false-precision point estimates. Specify the quantiles the model must output.
Single choice
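The third prompt — what to do when the P10-to-P90 spread exceeds the planning action threshold — can be expressed as an explicit policy function. A sketch under assumed thresholds (the field names, the 30%-of-capacity decommission trigger, and the action labels are all illustrative):

```python
# Hypothetical quantile-consumption policy: size capacity to the P90 case,
# review decommissioning on the P50, and escalate when uncertainty is too
# wide to act mechanically. Thresholds are assumptions for illustration.
def plan_action(p10, p50, p90, capacity, spread_threshold):
    if p90 - p10 > spread_threshold:
        return "escalate"            # uncertainty wider than the action threshold
    if p90 > capacity:
        return "upgrade"             # build headroom for the P90 case
    if p50 < 0.3 * capacity:         # assumed decommission trigger on the median
        return "review_decommission"
    return "hold"
```

Encoding the policy this way makes the capex committee's question — "how much headroom do we build for the P90 case?" — auditable rather than tribal.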
Map site hierarchy and reconciliation constraints
Why This Matters
Unreconciled hierarchical forecasts produce the frustrating outcome that the sum of site forecasts does not equal the market forecast, and finance cannot tell which number is correct. Reconciliation approaches (MinT, hierarchical forecasting with Hyndman-style reconciliation) make all levels coherent while preserving accuracy at each level. This is a first-order architectural choice, not a post-processing step.
Note prompts
+ Do our current forecasts reconcile across sector / site / market, or does each team produce its own inconsistent number?
+ Who arbitrates when bottom-up and top-down forecasts disagree today?
+ What is the hierarchical level at which capex decisions are actually made, and does our forecast match that level?
Confirm the sector → site → cluster → market hierarchy the forecast must respect and reconcile.
Select all that apply
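The coherence requirement above — site forecasts summing to the market forecast — can be shown with the simplest reconciliation variant, proportional scaling. MinT-style reconciliation additionally weights by forecast-error covariance; this sketch is the coherent-but-naive baseline, with illustrative numbers:

```python
# Proportional reconciliation: scale bottom-level forecasts so they sum to
# the top-level forecast. The simplest coherent variant; MinT would weight
# the adjustment by each series' forecast-error covariance instead.
def reconcile_proportional(site_forecasts, market_forecast):
    bottom_up = sum(site_forecasts.values())
    scale = market_forecast / bottom_up
    return {site: f * scale for site, f in site_forecasts.items()}

sites = {"site_a": 40.0, "site_b": 60.0, "site_c": 100.0}  # bottom-up sum: 200
coherent = reconcile_proportional(sites, market_forecast=180.0)
```

After reconciliation every level quotes the same total, which resolves the arbitration question in the prompts: finance and planning are, by construction, reading one number.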
Confirm data residency and competitive data constraints
Traffic topology, competitive signals, and subscriber behavior are commercially sensitive — specify where inference and training may run.
Select all that apply
Trinidy — Network topology, PM counter history, and competitive intelligence data are among an operator's most proprietary assets. Trinidy keeps training and inference entirely within the operator's perimeter — no traffic data, competitive signals, or site economics leave the infrastructure boundary.
Define capex approval integration point
Why This Matters
A forecast that requires manual re-entry into a capex approval tool will be selectively used, selectively overridden, and impossible to audit retrospectively. Integration via standard APIs (TMF Open APIs for network planning, vendor-specific REST for Ericsson Intelligent Automation Platform, Nokia MantaRay, Cisco Crosswork) preserves the full decision chain from model output to approved capital commitment — which is also what SR 11-7-style model governance requires.
Note prompts
+ Where in the capex approval workflow does the forecast currently enter, and is that entry auditable?
+ If a planner overrides the forecast, do we capture the override and reason for model feedback?
+ Have we inventoried which planning tools (NetAct, MantaRay, IAP, Crosswork, Netcracker) we need to feed?
Specify how forecast output lands in the capital approval workflow.
Single choice
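The audit-chain requirement above can be sketched as a REST handoff that carries the model version and quantile band with every forecast. The endpoint path and payload schema below are hypothetical — a real integration would follow the relevant TMF Open API or vendor schema — but the shape of the audit fields is the point:

```python
import json
import urllib.request

def push_forecast(base_url, site_id, p10, p50, p90, model_version):
    """Build a POST request landing one site forecast in a planning tool.
    Endpoint and schema are hypothetical; the audit fields (quantiles,
    model version) are what make the decision chain reconstructable."""
    payload = {
        "siteId": site_id,
        "quantiles": {"p10": p10, "p50": p50, "p90": p90},
        "modelVersion": model_version,  # ties approved capex back to a model run
    }
    return urllib.request.Request(
        f"{base_url}/capacity-forecasts",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# caller executes with urllib.request.urlopen(req) inside the operator perimeter
```

Capturing planner overrides would be the mirror-image call in the other direction; together they give the retrospective auditability the blurb describes.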