Phase 1 of 6
Scoping & Latency/SLA Constraints
Fix the enterprise workload, the sub-10ms latency envelope, the MEC footprint, and the multi-tenant contract before a single model is deployed at the tower.
Enterprise Workload & SLA Surface
Identify latency-critical enterprise workloads in scope
Why This Matters
Workload class dictates both the true latency budget and the failure mode. A closed-loop robot-arm controller that drops a frame stalls a production line; an AR overlay that slips past 20ms produces motion sickness; a CV quality inspection that misses a frame on a 1.2m/s conveyor means a defect ships. Aggregating these onto a single shared inference pool without per-workload SLA segmentation is how MEC deployments silently under-serve their most valuable tenant.
Note prompts
+ Which workloads are genuinely sub-10ms versus "the customer asked for fast"?
+ Have we mapped workload class to on-node isolation class, or are they all one shared pool?
+ What happens to each workload on a single missed inference — stall, reject, degrade, or ignore?
Required
Confirm which enterprise MEC workload classes your inference tier must carry.
Select all that apply
Industrial robotics / closed-loop motion control
AGV / autonomous guided vehicles in logistics
Real-time computer-vision quality inspection
AR / VR / XR rendering and pose correction
Connected vehicle / V2X roadside inference
Real-time medical imaging and diagnostic overlay
Drone / UAV autonomy and BVLOS control
Private 5G video analytics (safety, intrusion, PPE)
Define end-to-end latency SLA (UE to inference to UE)
Why This Matters
3GPP Release 17 and 18 URLLC targets sit at 1–10ms user-plane latency, and ETSI MEC was defined precisely so inference could land inside that envelope. The envelope is non-negotiable at the physics layer — at 10ms one-way, light in fiber covers roughly 2,000km before the model even fires, which is why a centralized cloud region hundreds or thousands of kilometers away is structurally disqualified. Setting the SLA correctly at scoping has 10× the leverage of any later optimization.
Note prompts
+ What is our measured P99 today from UE through the gNB to the inference endpoint, not just inference compute?
+ Which tenants contractually require deterministic latency versus best-effort?
+ Have we pressure-tested the SLA at 5× peak concurrent sessions per cell?
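The fiber-budget argument above can be checked with a back-of-envelope calculation. The 200 km/ms propagation figure (roughly two-thirds of c in glass) and the 4ms compute reserve are illustrative assumptions; the sketch ignores switching, serialization, and air-interface delay.

```python
# Rough fiber-propagation budget for a one-way latency target.
# Illustrative only: assumes ~200 km/ms signal speed in fiber (~2/3 c)
# and ignores switching, serialization, and radio air-interface delay.

FIBER_KM_PER_MS = 200  # light in glass travels at roughly 2/3 of c

def max_fiber_km(one_way_latency_ms: float, processing_ms: float = 0.0) -> float:
    """Kilometers of fiber a packet can traverse one-way within the budget."""
    return max(one_way_latency_ms - processing_ms, 0.0) * FIBER_KM_PER_MS

# A 10 ms one-way envelope with 4 ms reserved for inference compute
# leaves ~1,200 km of fiber; a 30-80 ms-away cloud region is out of reach.
print(max_fiber_km(10, processing_ms=4))  # -> 1200.0
```

Under these assumptions, any serving site beyond low-hundreds of kilometers of fiber fails a sub-10ms commitment before the model contributes a single millisecond.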
Required
Select the committed round-trip latency the inference plane must hold at P99 under peak load.
Single choice
< 1ms (URLLC control-plane — 3GPP Release 17 target)
< 10ms (MEC inference — autonomous / robotics / AR)
< 20ms (XR motion-to-photon comfort threshold)
< 50ms (video analytics / connected vehicle)
Tiered by tenant / workload (mixed SLA)
Trinidy: Nearest public cloud region is typically 30–80ms from an enterprise site — 3–8× over a 10ms ceiling, before the model even loads. Trinidy NEXUS OS runs inference on tower-local hardware with deterministic latency, collapsing the round trip into the radio and the fabric.
Define jitter and tail-latency tolerance
Why This Matters
Enterprises paying a premium for MEC are not paying for low average latency — they are paying for the tail. A P99 of 8ms with a P99.9 of 40ms will ship a visible defect roughly once per 1,000 conveyor frames, which at 1.2m/s is one every ~14 minutes. Hyperscaler best-effort edge cannot commit to P99.9 because its compute pools are shared across tenants. Carrier MEC that wants a premium tier must commit to jitter, not just median latency.
Note prompts
+ Do our inference SLAs include a P99.9 number, or only median and P99?
+ How do we currently isolate neighboring-tenant bursts at the compute layer?
+ Have we measured jitter under sustained 80% CPU/GPU utilization, or only idle?
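The tail-latency arithmetic above can be reproduced as a quick sketch. The 1.2 frames/s rate (one frame per meter at 1.2m/s conveyor speed) and the assumption that every beyond-quantile frame goes uninspected are illustrative, not measured.

```python
# Back-of-envelope: how often a tail-latency excursion ships a defect.
# Assumes every frame whose inference lands beyond the quantile is
# effectively uninspected; frame rate and exceedance are assumptions.

def miss_interval_s(exceedance: float, frame_rate_hz: float) -> float:
    """Seconds between frames whose latency lands beyond the quantile."""
    return 1.0 / (exceedance * frame_rate_hz)

# P99.9 violated => 1 frame in 1,000 beyond deadline. At ~1.2 frames/s
# (one frame per meter at 1.2 m/s), that is roughly one uninspected
# frame every ~14 minutes.
print(miss_interval_s(0.001, 1.2) / 60)  # minutes between misses
```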
Required
Specify the acceptable jitter and P99.9 variance — deterministic latency is what differentiates carrier MEC from hyperscaler edge.
Single choice
< 0.5ms P99.9 jitter (safety-critical closed-loop)
< 2ms P99.9 jitter (robotics / CV inline reject)
< 5ms P99.9 jitter (XR / AR)
< 10ms P99.9 jitter (standard enterprise video)
Best-effort (no jitter SLA committed)
Trinidy: Shared-pool inference infrastructure introduces tail-latency variance that violates deterministic SLAs. Trinidy provides hardware-level tenant isolation on-node — a neighboring tenant's burst cannot steal cycles from yours.
Specify deployment topology (MEC placement)
Why This Matters
ETSI MEC ISG defines the reference architecture, but operators still have real choice about how far toward the RAN to push compute. Placing inference at the aggregation edge saves capex but typically adds 5–15ms round-trip, which forfeits several high-value workloads. Far-edge placement at the tower is the only topology that delivers sub-10ms to enterprise sites without a dedicated on-prem MEC node at every customer.
Note prompts
+ What is the measured latency delta between our far-edge and aggregation-edge options?
+ Which enterprise workloads require far-edge vs. which can sit at aggregation?
+ Do our cabinet and shelter sites have the power and thermal envelope for far-edge inference hardware?
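The placement trade-off above reduces to a budget-composition check. The component latencies below are illustrative assumptions drawn from the ranges discussed in this section, not measured figures for any network.

```python
# Sketch: does a placement option fit the end-to-end latency budget?
# Component latencies are illustrative assumptions, not measurements.

def fits_budget(radio_ms: float, transport_ms: float,
                inference_ms: float, budget_ms: float) -> bool:
    """True if radio + transport + inference stays within the SLA budget."""
    return radio_ms + transport_ms + inference_ms <= budget_ms

# Far edge: negligible transport. Aggregation edge: +5-15 ms round trip,
# which consumes the entire 10 ms envelope on its own.
print(fits_budget(4, 0.1, 4, 10))  # far edge        -> True
print(fits_budget(4, 10,  4, 10))  # aggregation edge -> False
```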
Required
Select the physical placement of the inference plane within the 5G architecture.
Single choice
Far-edge at RAN site / cell tower (ETSI MEC co-located with gNB)
Aggregation edge (regional hub serving 5–50 sites)
Network edge / Central Office (tens of ms from UE)
On-premises customer MEC (enterprise campus private 5G)
Hybrid — per-tenant placement by workload
Trinidy: T4 DevCo purpose-built hardware fits existing MEC cabinet and shelter form factors within a 500W–2kW site power budget. NEXUS OS runs the multi-tenant inference substrate natively on RAN-adjacent compute.
Map multi-tenancy and tenant isolation requirements
Required
Confirm the number of tenants per site and the isolation contract you will commit to.
Single choice
Single tenant per site (dedicated enterprise MEC)
2–5 tenants per site with hardware isolation
5–20 tenants per site with namespace isolation
Multi-tenant with shared inference pool (best-effort)
Tenant topology not yet finalized
Trinidy: NEXUS OS isolates tenants at the hardware layer — separate inference queues, memory regions, and scheduling domains. Billing-grade metering and per-tenant SLA dashboards are built in.
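To make the isolation contract concrete, here is a minimal sketch of per-tenant queueing, assuming a dedicated queue per tenant so one tenant's burst lengthens only its own backlog. This is illustrative Python only; deterministic isolation of the kind this section describes happens at the hardware scheduler and memory level, and none of the names below come from any real product API.

```python
# Minimal sketch of per-tenant queue isolation versus a shared pool.
# Hypothetical names throughout; not any vendor's actual interface.

from queue import Queue
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    queue: Queue = field(default_factory=Queue)  # dedicated, never shared

class IsolatedDispatcher:
    """Routes each request to its tenant's own queue, so a burst from
    one tenant cannot lengthen a neighboring tenant's backlog."""
    def __init__(self, tenants):
        self.tenants = {t.name: t for t in tenants}

    def submit(self, tenant_name: str, request) -> int:
        q = self.tenants[tenant_name].queue
        q.put(request)
        return q.qsize()  # backlog seen only by this tenant

tenants = [Tenant("robotics"), Tenant("video")]
d = IsolatedDispatcher(tenants)
for i in range(100):                    # "video" bursts 100 requests
    d.submit("video", i)
depth = d.submit("robotics", "frame")   # robotics backlog unaffected
print(depth)  # -> 1
```

A shared pool collapses both tenants into one queue, which is exactly where the tail-latency variance discussed earlier comes from.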
Confirm MEC standards alignment and RAN vendor compatibility
Why This Matters
The MEC ecosystem is fragmented across standards bodies and vendor SMOs, and an inference plane that does not align with your chosen SMO becomes an orchestration island. O-RAN Alliance A1/E2/O1/O2 interfaces are how the SMO talks to RAN and to the MEC host — missing these means no closed-loop automation. ETSI MEC 011 defines the MEC platform API that workloads call. Getting the interface contract right at scoping saves a six-to-twelve-month integration program later.
Note prompts
+ Which SMO vendor is the program of record and which interfaces must we implement against it?
+ Are we targeting ETSI MEC 011 native, or a vendor-proprietary API that we will have to wrap?
+ What is our O-RAN Alliance conformance posture — full, partial, or roadmap?
Required
Identify the MEC and O-RAN reference frameworks the inference plane must integrate with.
Select all that apply
ETSI MEC ISG (MEC 003 / 011 / 012 reference architecture)
3GPP Release 17 / 18 URLLC user-plane targets
3GPP Release 19 (MEC and AI/ML enhancements)
O-RAN Alliance SMO (Service Management & Orchestration)
O-RAN WG1–WG11 specifications (A1 / E2 / O1 / O2 interfaces)
Ericsson Intelligent Automation Platform
Nokia MantaRay SMO
Samsung SMO
Rakuten Symphony / Mavenir Converged Packet Core
VMware Telco Cloud Platform / Red Hat OpenShift
Confirm spectrum and operating authority
Why This Matters
CBRS Part 96 requires SAS coordination and enforces Tier 1 incumbent protection — a CBRS enterprise MEC site that is not SAS-registered cannot legally transmit, and a spectrum suspension takes down both the radio and the hosted inference product. FirstNet Band 14 carries additional public-safety priority and pre-emption rules. Either way, the spectrum regime dictates which tenants can be colocated under the same MEC host.
Note prompts
+ Have we confirmed SAS registration and PAL/GAA tier for every CBRS site hosting enterprise MEC?
+ Do any sites host FirstNet Band 14, and have we mapped the pre-emption implications on co-tenants?
+ Is our enterprise offering spectrum-agnostic, or tied to specific bands by site?
Required
Map the spectrum regimes the enterprise 5G + MEC offering will operate under.
Select all that apply
Licensed macro spectrum (carrier-owned)
CBRS Part 96 (SAS-coordinated, Tier 2 PAL / Tier 3 GAA)
FirstNet Authority — Band 14 (public safety tenants only)
FCC Part 15 (unlicensed adjuncts)
FCC Part 90 (private land mobile)
International licensed enterprise spectrum
Cross-border enterprise site (multi-regime)
Define revenue and SLA tier structure
Why This Matters
GSMA Intelligence forecasts $23.4B in 5G MEC infrastructure revenue by 2027 with 38% attributable to latency-critical AI — but realizing that share depends on a revenue model the operator can actually bill against. Per-inference metering requires measurement infrastructure; reserved capacity demands credible capacity planning; SLA tiers require jitter measurement that holds up in a dispute. Choosing a tier model at scoping drives telemetry requirements downstream.
Note prompts
+ What is our billing-grade metering source of truth for inference calls?
+ Do our SLA tiers have measurable, provable jitter/latency numbers or only aspirational ones?
+ Are we pricing above or below AWS Wavelength / Azure Operator Distributed Services on comparable workloads?
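The metering-versus-reservation choice above reduces to a one-line break-even. The prices in the sketch are hypothetical placeholders, not quotes for any real offering.

```python
# Hedged sketch: break-even between per-inference metering and reserved
# accelerator capacity. Prices are hypothetical, not real list prices.

def breakeven_calls_per_hour(price_per_call: float,
                             reserved_per_hour: float) -> float:
    """Call volume above which reserved capacity is the cheaper tier."""
    return reserved_per_hour / price_per_call

# At a hypothetical $0.0005/call vs $4.00/accelerator-hour, a tenant
# sustaining more than 8,000 calls/hour is better served by reservation.
print(breakeven_calls_per_hour(0.0005, 4.00))  # -> 8000.0
```

Whichever side of the break-even a tenant sits on, the tier model chosen here determines the metering telemetry the platform must produce downstream.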
Recommended
Specify the commercial packaging of the inference tier.
Single choice
Per-inference metered ($/call)
Reserved capacity ($/GPU-hour or $/accelerator-hour)
SLA-tiered subscription (bronze / silver / gold)
Bundled with private 5G connectivity
Wholesale to hyperscaler / aggregator
Hybrid — multiple tiers in parallel
Confirm data residency and sovereignty constraints
Required
Map tenant data to jurisdictional and sovereignty requirements before architecture is finalized.
Select all that apply
GDPR — data must remain in EU
UK GDPR — UK residency
US federal tenants — FedRAMP / CJIS / FirstNet handling
State / regional sovereignty requirements
Enterprise sovereignty (no hyperscaler dependency)
Cross-border tenants under SCCs
Trinidy: Tower-local inference keeps tenant data inside the operator's perimeter and inside the jurisdiction of the site. NEXUS OS provides the audit trail that proves no inference payload traversed a border.