Edge Computing Data Processing for 5G Mobile Apps
5G provides 1-10 ms latency to the base station, but between the base station and your cloud server there is another 50-150 ms of RTT over standard routes. Edge computing moves computation closer to devices, onto MEC (Multi-access Edge Computing) servers installed near 5G base stations. For the right tasks this makes a real difference; for the wrong ones it only adds complexity and cost.
When Edge Makes Sense and When It Doesn't
Edge computing is justified when latency is critical (< 20 ms) or raw data volume is too large for cloud transfer.
Suitable tasks:
- Real-time video stream processing (AR annotations, object detection without sending video to cloud)
- IoT device management requiring < 10 ms response (industrial automation, medical devices)
- Multiplayer gaming with regional matchmaking
- Local telemetry aggregation before batch cloud upload
Tasks where edge isn't needed:
- Regular REST API requests where 100 ms and 50 ms are indistinguishable to users
- ML inference on models > 500 MB (cheaper to keep in cloud)
- Any task without strict latency requirements
Mobile App Architecture with Edge
On the mobile side, edge computing requires several architectural changes.
Service Discovery. The app doesn't know the address of the nearest edge node upfront; it depends on device location and infrastructure load. When connecting over 5G, the app queries the Edge Discovery Service (standardized in ETSI MEC 011) and receives the endpoint of the nearest MEC server:
// iOS: resolve the nearest edge node for this service type
let discoveryClient = MECDiscoveryClient(appId: "com.myapp.edge")
let edgeEndpoint = try await discoveryClient.resolveNearestEdge(
    location: locationManager.location,
    serviceType: .videoProcessing
)
For platforms without the ETSI MEC API (most commercial offerings: AWS Wavelength, Azure Edge Zones, Google Distributed Cloud Edge), use the provider's proprietary SDKs such as AWSWavelengthClient or the Azure SDK for Edge Zones.
Fallback to cloud. Edge nodes are less reliable than cloud regions, so the code must handle edge unavailability: on a timeout above 50 ms or an HTTP 503 from the edge, automatically retry against the cloud endpoint. The switch should be transparent to the user. Implement it via the Circuit Breaker pattern with a half-open state: after 3 consecutive errors the circuit opens and all requests go to the cloud; after 30 seconds a probe request is sent to the edge, and on success the circuit closes again.
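The breaker logic above can be sketched as follows. This is a minimal illustration, not a specific SDK: the class and method names are assumptions; only the thresholds (3 failures, 30 s cooldown) come from the text.

```swift
import Foundation

// Minimal circuit breaker with a half-open state (sketch).
// Thresholds follow the text; all names here are illustrative.
final class EdgeCircuitBreaker {
    enum State { case closed, open, halfOpen }

    private(set) var state: State = .closed
    private var consecutiveFailures = 0
    private var openedAt: Date?

    private let failureThreshold = 3
    private let cooldown: TimeInterval = 30

    /// True if the next request may be sent to the edge endpoint.
    func allowEdgeRequest(now: Date = Date()) -> Bool {
        switch state {
        case .closed:
            return true
        case .open:
            // After the cooldown, let exactly one probe through (half-open).
            if let openedAt, now.timeIntervalSince(openedAt) >= cooldown {
                state = .halfOpen
                return true
            }
            return false
        case .halfOpen:
            return false // a probe is already in flight
        }
    }

    func recordSuccess() {
        consecutiveFailures = 0
        state = .closed
        openedAt = nil
    }

    func recordFailure(now: Date = Date()) {
        consecutiveFailures += 1
        if state == .halfOpen || consecutiveFailures >= failureThreshold {
            state = .open
            openedAt = now
        }
    }
}
```

The request layer checks `allowEdgeRequest()` before each call: `false` means route to the cloud endpoint, and a timeout over 50 ms or an HTTP 503 is reported via `recordFailure()`.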
Data partitioning. Not all data flows through edge. Two-tier architecture:
| Data Type | Route | Reason |
|---|---|---|
| Video frames for processing | Edge | No need to send to cloud, process locally |
| Detection results | Cloud | Small volume, needs persistence |
| User settings | Cloud | Access from any device |
| IoT control commands | Edge | < 10 ms requirement |
| Command history | Cloud | Audit, analytics |
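The table above amounts to a static routing rule, which can be expressed as a small function. The enum names are illustrative assumptions; the edge/cloud mapping itself comes from the table.

```swift
// Two-tier routing from the table above; all type names are assumptions.
enum Destination { case edge, cloud }

enum PayloadKind {
    case videoFrame, detectionResult, userSettings, iotCommand, commandHistory
}

func route(_ kind: PayloadKind) -> Destination {
    switch kind {
    case .videoFrame, .iotCommand:
        return .edge   // latency-critical, no persistence needed
    case .detectionResult, .userSettings, .commandHistory:
        return .cloud  // small volume, needs persistence or cross-device access
    }
}
```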
Implementation: AR with Edge Inference
Practical example: an AR app for warehouse logistics. The camera scans barcodes on boxes, an edge server on the MEC recognizes them and returns item data, and the app overlays an AR annotation.
Full pipeline: camera → frame buffering → JPEG compression (720p, quality 60) → HTTP/2 POST to edge → YOLOv8 inference on edge GPU → JSON with bounding boxes → AR overlay via ARKit / ARCore.
Latency target: frame → annotation < 80 ms. With edge: 20 ms to the MEC + 15-30 ms inference on GPU + 10 ms network overhead = 45-60 ms, which is realistic. Via a regular cloud (150 ms RTT) plus inference it becomes 180-200 ms, and AR at that latency feels unnatural.
On iOS, use AVCaptureSession with AVCaptureVideoDataOutput and downscale via vImageScale_ARGB8888 before sending. Use URLSession with HTTP/2 and keep-alive; don't create a new connection per frame, as that adds 30-50 ms of handshake latency.
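A sketch of the connection-reuse point: one shared URLSession multiplexes frames over a single kept-alive connection (URLSession negotiates HTTP/2 automatically over TLS via ALPN). The endpoint URL, header, and timeout value are illustrative; the 50 ms figure echoes the fallback policy above.

```swift
import Foundation

// One shared session for the whole app: reusing it avoids a per-frame
// TLS/TCP handshake. Configuration values here are illustrative.
let config = URLSessionConfiguration.default
config.httpMaximumConnectionsPerHost = 1   // funnel all frames onto one connection
config.timeoutIntervalForRequest = 0.05    // 50 ms edge timeout, per the fallback policy
let edgeSession = URLSession(configuration: config)

/// POST one compressed frame to the edge endpoint; returns the JSON body
/// with bounding boxes. `endpoint` is a placeholder URL.
func sendFrame(_ jpegData: Data, to endpoint: URL) async throws -> Data {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")
    let (body, response) = try await edgeSession.upload(for: request, from: jpegData)
    guard (response as? HTTPURLResponse)?.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    return body
}
```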
On Android, use CameraX with ImageAnalysis.Analyzer, JPEG compression via YuvToRgbConverter → Bitmap → compress(JPEG, 60), and OkHttp with HTTP/2 and connection pooling.
Request frequency: don't send every frame, only on camera motion above a threshold or on a 100 ms timer. Video at 30 fps would mean 30 requests per second, which is unacceptable; 10 requests per second with client-side overlay interpolation is an acceptable compromise.
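The send-rate gate can be sketched like this. The 100 ms minimum interval (10 req/s cap) comes from the text; the motion metric, its threshold, and the slower refresh rate for a static camera are assumptions.

```swift
import Foundation

// Decides whether the current frame should be sent to the edge:
// at most every 100 ms, and only when the camera moved or a slower
// "static scene" heartbeat is due. Threshold values are assumptions.
struct FrameGate {
    var minInterval: TimeInterval = 0.1       // 10 req/s cap, from the text
    var staticHeartbeat: TimeInterval = 0.5   // assumed refresh rate when still
    var motionThreshold: Double = 0.02        // assumed normalized motion score
    private var lastSent = Date.distantPast

    mutating func shouldSend(motion: Double, now: Date = Date()) -> Bool {
        let elapsed = now.timeIntervalSince(lastSent)
        guard elapsed >= minInterval else { return false }  // never exceed the cap
        guard motion > motionThreshold || elapsed >= staticHeartbeat else {
            return false  // camera still and heartbeat not due
        }
        lastSent = now
        return true
    }
}
```

Between accepted frames, the AR overlay interpolates the last known bounding boxes client-side, as described above.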
Offline and Degradation
5G isn't everywhere, even if the app positions itself as a "5G" app. On 4G, edge loses its latency advantage (RTT to the MEC can be 80-120 ms). On 3G or Wi-Fi, work in cloud-only or offline-first mode.
Detect network conditions with NWPathMonitor (iOS) / ConnectivityManager.NetworkCallback (Android). On 4G, switch automatically to the cloud endpoint. With no network, fall back to local ML inference via Core ML / TensorFlow Lite, with a model packaged in the app (a smaller, less accurate variant of the edge model).
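A minimal iOS sketch of the tier selection: NWPathMonitor drives the edge / cloud / on-device decision. The heuristics and enum are assumptions; note that NWPathMonitor alone can't distinguish 5G from 4G, so a real implementation would also consult CTTelephonyNetworkInfo for the radio technology.

```swift
import Network

// Pick a processing tier from the current network path (sketch).
enum ProcessingTier { case edge, cloud, onDevice }

let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    let tier: ProcessingTier
    if path.status != .satisfied {
        tier = .onDevice   // no network: local Core ML model
    } else if path.usesInterfaceType(.cellular) && !path.isConstrained {
        tier = .edge       // assume MEC is reachable; confirm with a probe request
    } else {
        tier = .cloud      // Wi-Fi or constrained cellular: MEC not reachable
    }
    // Hand the chosen tier to the request-routing layer here.
    print("processing tier:", tier)
}
monitor.start(queue: DispatchQueue(label: "net.monitor"))
```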
Pricing is individual: it depends on the MEC infrastructure provider, traffic volume, and the complexity of the edge logic. The mobile part of an edge integration typically takes 2 to 4 months of development. The edge logic itself (ML models, processing pipeline) is separate work, estimated after requirements analysis.