Phase 1 of 6
Scoping & Spectrum Constraints
Define the bands, licenses, inference cadence, and regulatory envelope that will govern every optimization decision the model is allowed to recommend.
Bands, Licenses & Regulatory Surface
Inventory licensed bands in scope for optimization
Why This Matters
Each band sits under a different FCC part and carries different incumbent-protection, power, and emission obligations — a single optimization model that ignores band-specific rules will eventually recommend an illegal configuration. Post-Auction 107 C-band spectrum carries ongoing coordination obligations with satellite incumbents above 3.98 GHz, and CBRS under Part 96 is not a license you "own" at all but a dynamic assignment from a Spectrum Access System. Treating all licensed bands as equivalent is the fastest way to generate a regulatory incident.
Note prompts
+ Which of our bands sit under Part 24, Part 27, Part 96, and Part 101 — and is that mapping current in our model configuration?
+ Do we have documented incumbent-protection obligations on C-band and 2.5 GHz that the optimizer must respect?
+ Is the SAS-managed CBRS spectrum treated as a first-class, dynamically changing input rather than a static license?
Confirm which spectrum holdings the model is allowed to recommend allocation changes on.
Select all that apply
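A minimal sketch of how this inventory might be encoded so the optimizer can consume it, assuming one rule record per band. The `BandRule` schema, the example bands, and every flag value are hypothetical placeholders, not a statement of any band's actual obligations.

```python
from dataclasses import dataclass

# Hypothetical schema: one record per licensed band the optimizer may touch.
@dataclass(frozen=True)
class BandRule:
    band: str                       # marketing / 3GPP name
    fcc_part: str                   # governing FCC rule part
    dynamic_grant: bool             # True if assigned by a SAS, not a static license
    incumbent_coordination: bool    # ongoing coordination obligations exist
    optimizer_may_reallocate: bool  # may the model move spectrum here?

# Illustrative inventory; flags are placeholders, not real obligations.
BAND_INVENTORY = [
    BandRule("PCS 1900",   "Part 24", dynamic_grant=False,
             incumbent_coordination=False, optimizer_may_reallocate=True),
    BandRule("C-band n77", "Part 27", dynamic_grant=False,
             incumbent_coordination=True,  optimizer_may_reallocate=True),
    BandRule("CBRS n48",   "Part 96", dynamic_grant=True,
             incumbent_coordination=True,  optimizer_may_reallocate=False),
]

def reallocatable_bands(inventory):
    """Bands the model is allowed to recommend allocation changes on."""
    return [r.band for r in inventory if r.optimizer_may_reallocate]

print(reallocatable_bands(BAND_INVENTORY))  # ['PCS 1900', 'C-band n77']
```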
Define inference cadence and oscillation guardrails
Why This Matters
Industry deployments (Ericsson, Nokia ReefShark, Huawei MetaAAU, Viavi) converge on a 1–5 second cadence because faster loops oscillate — the model reacts to its own previous action before the RAN has stabilized — and slower loops miss dynamic traffic shifts. GSMA's 2023 spectrum-management AI survey found 61% of operators cited the 1–5 second window as their operational target. Your cadence must be declared explicitly and guarded by damping logic before the first production deployment.
Note prompts
+ What is our target inference cadence per parameter class, and have we validated it does not oscillate?
+ Do we have hysteresis or minimum-dwell guardrails on each recommendation to prevent flapping?
+ How do we detect oscillation in production — counter-metric drift, or waiting for an ops ticket?
Select the optimization loop interval that balances responsiveness against parameter-change oscillation.
Single choice
Trinidy — Cloud-routed RAN optimization introduces 50–200ms of uncontrolled jitter per inference cycle — enough to destabilize a 1-second loop. Trinidy runs the inference on-node at the cell site or edge cluster so the 1–5 second cadence is deterministic under all network conditions.
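A minimal sketch of the damping logic described above, assuming a per-parameter hysteresis threshold plus a minimum-dwell window. The `OscillationGuard` class and its default thresholds are hypothetical; real guardrails would be tuned per parameter class.

```python
import time

# Hypothetical damping guard: a recommendation is applied only if it clears
# a hysteresis threshold AND the parameter has sat at its current value for
# a minimum dwell interval. Threshold values are illustrative placeholders.
class OscillationGuard:
    def __init__(self, min_dwell_s=10.0, hysteresis=1.0):
        self.min_dwell_s = min_dwell_s  # minimum seconds between applied changes
        self.hysteresis = hysteresis    # minimum delta worth acting on
        self._last_change = {}          # param -> (timestamp, value)

    def allow(self, param, current, proposed, now=None):
        now = time.monotonic() if now is None else now
        if abs(proposed - current) < self.hysteresis:
            return False                # change too small: likely noise
        ts, _ = self._last_change.get(param, (float("-inf"), current))
        if now - ts < self.min_dwell_s:
            return False                # still inside the dwell window
        self._last_change[param] = (now, proposed)
        return True

guard = OscillationGuard(min_dwell_s=10.0, hysteresis=1.0)
print(guard.allow("cell42/tx_power_dbm", current=40.0, proposed=43.0))  # True
print(guard.allow("cell42/tx_power_dbm", current=43.0, proposed=40.0))  # False: dwell
```

The dwell timer keys off the last applied change, not the last proposal, so a stream of rejected recommendations cannot reset it.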
Specify closed-loop vs. recommendation-only operating mode
Why This Matters
O-RAN Alliance guidance and most operator policies treat closed-loop RAN automation as a spectrum of auto-apply scopes, not a binary — power and tilt inside a pre-validated envelope behave very differently from a reallocation that changes a licensed-band boundary. Declaring the mode up front forces a governance conversation that otherwise surfaces only after the first production incident. A model that can only recommend is also a model whose audit trail is dramatically simpler.
Note prompts
+ Which parameter classes are we comfortable auto-applying, and who signs off on that envelope?
+ Is there a kill-switch that reverts the entire optimizer to recommend-only in under 30 seconds?
+ Do we log every auto-applied change with full before/after state for post-incident review?
Define which classes of recommendations may auto-apply vs. require human approval.
Select all that apply
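A minimal sketch of the mode gate this decision implies, assuming each parameter class maps to one of two modes with a global kill-switch override. The `POLICY` table and class names are illustrative, not a recommended envelope.

```python
from enum import Enum

class Mode(Enum):
    AUTO_APPLY = "auto_apply"          # applied inside a pre-validated envelope
    RECOMMEND_ONLY = "recommend_only"  # queued for human approval

# Hypothetical governance table; the classes and mapping are placeholders.
POLICY = {
    "antenna_tilt": Mode.AUTO_APPLY,
    "tx_power": Mode.AUTO_APPLY,
    "band_reallocation": Mode.RECOMMEND_ONLY,  # licensed-band boundary changes
}
KILL_SWITCH_ENGAGED = False  # flipping this reverts everything to recommend-only

def dispatch(param_class, change, audit_log):
    mode = POLICY.get(param_class, Mode.RECOMMEND_ONLY)  # default to the safe mode
    if KILL_SWITCH_ENGAGED:
        mode = Mode.RECOMMEND_ONLY
    # Every decision is logged with full before/after state for review.
    audit_log.append({"param_class": param_class, "mode": mode.value, **change})
    return mode

log = []
mode = dispatch("antenna_tilt", {"cell": "cell42", "before": 4, "after": 6}, log)
print(mode, log[-1])
```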
Confirm FCC RF exposure compliance envelope
Why This Matters
FCC 47 CFR 1.1307 and 2.1093 set the general-population and occupational RF exposure limits that every licensed transmitter must respect, and the 2019 FCC reassessment (FCC 19-126) reaffirmed these limits through the 100 GHz band. An AI optimizer that recommends a power increase or a beamforming tilt change is adjusting parameters the compliance calculation depends on — which means the exposure envelope must be a hard constraint in the model, not a post-hoc check. A violation is an operational incident and an FCC enforcement matter at the same time.
Note prompts
+ Is the RF exposure envelope encoded as a hard constraint on every power and tilt recommendation?
+ Have we validated that the worst-case beam direction still sits inside the 1.1307 limit?
+ Who owns the re-certification when the optimizer's envelope changes, and how fast can we do it?
Ensure the optimizer cannot recommend configurations that exceed 47 CFR 1.1307 / 2.1093 RF exposure limits.
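A minimal sketch of the hard constraint, assuming a far-field point-source model for worst-case power density. The 1.0 mW/cm² general-population figure is the FCC MPE table value above 1500 MHz (47 CFR 1.1310); the geometry, margin, and example numbers are placeholders for a real compliance model that accounts for beam patterns and near-field effects.

```python
import math

# Illustrative hard-constraint check: far-field power density at the nearest
# point of public access must stay under the general-population MPE limit.
GEN_POP_LIMIT_MW_CM2 = 1.0  # FCC MPE limit above 1500 MHz (47 CFR 1.1310)
SAFETY_MARGIN = 0.8         # hypothetical: keep recommendations at 80% of limit

def worst_case_density_mw_cm2(eirp_w, distance_m):
    """Far-field power density S = EIRP / (4*pi*R^2), converted to mW/cm^2."""
    s_w_m2 = eirp_w / (4 * math.pi * distance_m ** 2)
    return s_w_m2 / 10.0  # 1 mW/cm^2 == 10 W/m^2

def exposure_ok(eirp_w, nearest_public_m):
    density = worst_case_density_mw_cm2(eirp_w, nearest_public_m)
    return density <= GEN_POP_LIMIT_MW_CM2 * SAFETY_MARGIN

# A power-up recommendation is rejected before it ever reaches the RAN:
print(exposure_ok(eirp_w=2000, nearest_public_m=10))  # True:  ~0.16 mW/cm^2
print(exposure_ok(eirp_w=2000, nearest_public_m=2))   # False: ~3.98 mW/cm^2
```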
Map incumbent / federal protection obligations
Why This Matters
CBRS under FCC Part 96 is the most automated example — every transmission requires a live grant from one of the certified Spectrum Access Systems (Google, Federated Wireless, CommScope), and the Environmental Sensing Capability network can revoke that grant within seconds to protect DoD radar. Post-auction 107 C-band carriers still have coordination obligations with fixed-satellite-service earth stations above 3.98 GHz. An optimizer that treats these as static license data rather than live, changing constraints will eventually recommend a configuration the SAS will refuse — or worse, one that causes incumbent interference.
Note prompts
+ Is the live SAS grant state a real-time input to the optimizer, or a cached assumption?
+ Have we documented every incumbent coordination obligation per band and mapped it to a model constraint?
+ What happens to in-flight recommendations when an ESC event revokes a CBRS grant?
Document the bands where federal or incumbent protection (NTIA, SAS, ESC, satellite) creates non-negotiable exclusion zones.
Select all that apply
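A minimal sketch of the revocation path the last prompt asks about, assuming grant state lives in a simple in-memory map and pending recommendations are indexed by grant. All structures are hypothetical; a real implementation would hook the SAS heartbeat and ESC notification channels.

```python
from collections import defaultdict

# Hypothetical revocation path: when the ESC suspends a CBRS grant, cease
# transmission and invalidate every in-flight recommendation touching that
# grant before the next inference cycle fires.
live_grants = {"grant-cbsd-17": "AUTHORIZED"}
in_flight = defaultdict(list)  # grant_id -> pending recommendations

def on_esc_revocation(grant_id):
    live_grants[grant_id] = "SUSPENDED"
    dropped = in_flight.pop(grant_id, [])  # never let these reach the RAN
    return dropped

in_flight["grant-cbsd-17"].append({"cell": "cbsd-17", "tx_power_dbm": 30})
print(on_esc_revocation("grant-cbsd-17"))  # the dropped recommendations
print(live_grants)                         # {'grant-cbsd-17': 'SUSPENDED'}
```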
Confirm international / ITU-R harmonization constraints
Identify bands where ITU-R Radio Regulations or ETSI harmonization limit parameter choices.
Select all that apply
Define multi-vendor RAN scope
Why This Matters
Proprietary vendor optimization suites typically cost $200K–$500K per 1,000 sites per year and only manage their own vendor's equipment — which is why most tier-1 operators today run three or four parallel optimization stacks that cannot coordinate across a border cell. The open-RAN / SMO trend under O-RAN WG3 reframes optimization as a Near-RT RIC xApp that runs against any compliant vendor. Declaring multi-vendor scope up front is the single decision that determines whether your optimizer is a cost center or a capacity multiplier.
Note prompts
+ How many vendor-specific optimization suites are we paying for today, and what do they cost per site per year?
+ Are border cells between vendors a known capacity problem that a unified optimizer would close?
+ Do our RAN contracts allow third-party Near-RT RIC xApps, or do they lock optimization to the vendor SMO?
Confirm which vendors' equipment the optimizer must manage — vendor lock-in cost is proportional to how narrowly this is scoped.
Select all that apply
Trinidy — Proprietary optimization suites (Ericsson Intelligent Automation Platform, Nokia MantaRay, Samsung SMO) only manage their own vendor's RAN. Trinidy runs operator-trained models against normalized O-RAN interfaces so a single optimizer spans mixed Ericsson / Nokia / Samsung / Mavenir / Rakuten estates.
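A minimal sketch of the normalization layer the multi-vendor decision implies: one internal cell-config model, one adapter per vendor interface, so optimization logic never branches on vendor. The adapter classes and fields are hypothetical illustrations, not real vendor APIs.

```python
from dataclasses import dataclass

# Hypothetical normalized config: the only shape the optimizer ever emits.
@dataclass
class CellConfig:
    cell_id: str
    tx_power_dbm: float
    tilt_deg: float

class VendorAdapter:
    """Translate the normalized config to one vendor's management interface."""
    def apply(self, cfg: CellConfig) -> None:
        raise NotImplementedError

class EricssonAdapter(VendorAdapter):
    def apply(self, cfg):
        print(f"[ericsson] set {cfg.cell_id}: {cfg.tx_power_dbm} dBm")

class NokiaAdapter(VendorAdapter):
    def apply(self, cfg):
        print(f"[nokia] set {cfg.cell_id}: {cfg.tx_power_dbm} dBm")

ADAPTERS = {"ericsson": EricssonAdapter(), "nokia": NokiaAdapter()}

def push(vendor, cfg):
    ADAPTERS[vendor].apply(cfg)  # the optimizer never sees vendor specifics

push("ericsson", CellConfig("cell42", 40.0, 4.0))
```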
Specify inference deployment topology
Select the physical deployment target for the optimization inference plane.
Single choice
Trinidy — For 1-second inference cadence and sovereign control of spectrum telemetry, cloud inference is incompatible with the latency and data-residency envelope. Trinidy deploys at the site, at the edge aggregation point, and at the Near-RT RIC layer — all on the same fabric, all operator-owned.
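A back-of-envelope budget check for this choice, using the jitter range quoted earlier in this phase. Every stage timing below is an illustrative placeholder; the point it demonstrates is that worst-case transport time must be summed into the loop budget, not averaged.

```python
# Does each topology fit a 1-second cadence with headroom? All figures
# are hypothetical stand-ins for measured stage timings.
CADENCE_S = 1.0
BUDGET_FRACTION = 0.8  # leave 20% headroom per cycle
TELEMETRY_S = 0.30     # collect and normalize RAN counters
INFERENCE_S = 0.05     # model forward pass
ACTUATE_S = 0.20       # push and confirm the parameter change

# transport = base round trip plus worst-case jitter (illustrative figures)
for label, transport_s in [("on-node", 0.002), ("cloud", 0.150 + 0.200)]:
    worst_case = TELEMETRY_S + INFERENCE_S + ACTUATE_S + transport_s
    verdict = "fits" if worst_case <= CADENCE_S * BUDGET_FRACTION else "too tight"
    print(f"{label}: worst case {worst_case:.3f}s of {CADENCE_S}s -> {verdict}")
```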
Define throughput / capacity target for the optimizer
Why This Matters
Industry case studies converge on 15–30% throughput improvement as the realistic capacity gain from AI-driven optimization on existing spectrum — Ericsson reported 30% average-user throughput uplift at T-Mobile US DSS cells, Elisa Finland reported 40% spectral efficiency on Nokia ReefShark RL beamforming, and Huawei MetaAAU produced a 4.2 dB SINR gain on China Mobile. Setting a measurable target early forces the program to be benchmarked against a dollar-denominated equivalent — the capacity either justifies deferring the next spectrum purchase or it does not.
Note prompts
+ What is our dollar-equivalent value of a 20% capacity gain vs. acquiring new spectrum in our top market?
+ Is the capacity target tracked alongside guardrail counter-metrics, or is it the only KPI?
+ Who owns the P&L line that counts deferred spectrum purchases as a benefit of the program?
Set the measurable capacity gain the program is accountable for delivering.
Single choice
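A worked example for the first note prompt above, treating a capacity gain as an equivalent quantity of spectrum and pricing it at a $/MHz-PoP figure. Every number is a placeholder to be replaced with market-specific data, and equating throughput gain to MHz-equivalent is a deliberate simplification.

```python
# Worked example: rough dollar equivalence of a 20% capacity gain in one
# market. All figures are hypothetical placeholders.
market_mhz_owned = 100     # licensed MHz held in the market
capacity_gain = 0.20       # optimizer target from this item
price_per_mhz_pop = 0.50   # $/MHz-PoP from comparable recent auctions
market_pops = 5_000_000    # covered population

# Simplification: treat +20% effective capacity as owning 20% more MHz.
equivalent_mhz = market_mhz_owned * capacity_gain
deferred_spend = equivalent_mhz * price_per_mhz_pop * market_pops
print(f"+{capacity_gain:.0%} capacity ~ {equivalent_mhz:.0f} MHz "
      f"~ ${deferred_spend / 1e6:.0f}M in deferred spectrum purchases")
```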