Analysis of Technical Limitations of Target AR Devices for Games
Developing an AR game without analyzing the target devices means designing in a vacuum. What works perfectly on HoloLens 2, with its ARM-based Snapdragon 850 and 4 GB of RAM, becomes unrealizable on Meta Quest 3 in passthrough mode, and what works on Quest 3 may not even launch on a budget Android smartphone with ARCore. A technical limitations analysis is the first technical document that should appear in an AR project, before any architecture design.
Why AR devices differ more than it seems
At first glance, all AR devices do the same thing: show virtual content over the real world. In practice, they do it in fundamentally different ways, and these differences dictate capabilities and constraints at the level of the physical device.
Optical see-through (OST) — HoloLens 2, Magic Leap 2. The glasses are literally transparent, and augmented content is optically overlaid on top of reality. Consequences: opaque content is impossible (you can't "cover" a real object with a virtual one), colors are perceived differently (dark shades are nearly invisible against the real world), and the field of view is limited (HoloLens 2: around 52° diagonal). For games this means all content must be sufficiently bright and high-contrast, and mechanics can't rely on virtual objects occluding real ones.
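Because an OST display can only add light, dark content effectively disappears. A quick pre-flight check on an asset palette can catch this early. The sketch below uses standard Rec. 709 luminance coefficients; the 0.2 visibility threshold is an illustrative assumption, not a measured value for any specific headset.

```python
# Flag colors likely to wash out on an optical see-through display.
# Rec. 709 luma coefficients; the 0.2 threshold is an assumption.

def relative_luminance(r: float, g: float, b: float) -> float:
    """r, g, b in [0, 1]; returns relative luminance per Rec. 709."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def visible_on_ost(r: float, g: float, b: float, threshold: float = 0.2) -> bool:
    """True if the color is bright enough to register on an OST display."""
    return relative_luminance(r, g, b) >= threshold

print(visible_on_ost(1.0, 0.9, 0.2))   # bright yellow: clearly visible
print(visible_on_ost(0.1, 0.1, 0.15))  # near-black: effectively transparent
```

The same check run over a texture's average color gives a cheap lint pass for OST-targeted asset pipelines.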
Video see-through (VST) — Meta Quest 3, PICO 4 in AR/passthrough mode, mobile AR. Cameras film the real world, which is rendered together with the virtual content. Advantage: full control over compositing, so occlusion is possible. Limitations: latency between the real world and its video image (Quest 3: around 12–15 ms), color artifacts at object edges, and passthrough quality that depends on the device's cameras.
Mobile AR (iOS ARKit, Android ARCore) — VST through the smartphone's main camera. Limitations: most devices have no depth sensor (and no stereo vision), tracking relies on monocular SLAM, placement accuracy is ~1–2 cm, and there is no controller-style haptic feedback.
Key technical parameters for analysis
Tracking capabilities. What can the device do out of the box: plane detection, image tracking, object tracking, face tracking, hand tracking, spatial anchors, scene understanding (mesh reconstruction)?
ARKit on LiDAR-equipped iPhones (iPhone 12 Pro and later Pro models) supports LiDAR scanning, which yields a mesh of the real world and lets virtual objects "hide" behind real surfaces through occlusion. Most Android devices lack a depth sensor, so ARCore falls back to monocular depth estimation, which is significantly less accurate.
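The practical effect of depth accuracy on occlusion can be shown with a toy depth test. This is a pure-Python illustration (a real renderer resolves this in the GPU depth test); the error figures are rough assumptions for LiDAR-class versus monocular estimation.

```python
# Occlusion sketch: a virtual fragment is hidden when real geometry is
# closer than it, within the depth sensor's error margin. Error values
# are illustrative assumptions (LiDAR ~centimeters, monocular ~decimeters).

def occluded(real_depth_m: float, virtual_depth_m: float,
             depth_error_m: float) -> bool:
    """Hide the virtual fragment if the real surface is reliably closer."""
    return real_depth_m + depth_error_m < virtual_depth_m

# Same scene: a virtual object 5 cm behind a real surface at 1.5 m.
print(occluded(1.50, 1.55, depth_error_m=0.02))  # LiDAR-class: occludes
print(occluded(1.50, 1.55, depth_error_m=0.30))  # monocular: misses it
```

With decimeter-scale depth error, fine-grained occlusion simply cannot be resolved, which is why occlusion-dependent mechanics should be gated on devices with a real depth sensor.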
Compute budget. AR requires several things to run in parallel: camera processing, SLAM, rendering, and game logic. On a Snapdragon 888 this works fine with proper optimization. On a Snapdragon 680 (budget segment) the ARCore SLAM pipeline alone takes 20–30% of the CPU, leaving significantly less for game logic. The analysis should include concrete CPU/GPU budgets for each target platform.
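A budget like this can be made explicit per platform with simple frame-time arithmetic. The sketch below is illustrative: the SLAM shares, camera, and render costs are assumptions standing in for real profiler data, not measured figures for these chips.

```python
# Per-frame CPU budget estimate for game logic after the AR runtime's
# fixed costs. All cost figures are placeholder assumptions; replace
# them with numbers from actual profiling on target hardware.

def game_logic_budget_ms(target_fps: float, slam_share: float,
                         camera_ms: float, render_ms: float) -> float:
    """Milliseconds per frame left for game logic on the main thread."""
    frame_ms = 1000.0 / target_fps
    slam_ms = frame_ms * slam_share  # SLAM cost as a share of the frame
    return frame_ms - slam_ms - camera_ms - render_ms

# Hypothetical profiles for a flagship vs a budget chip.
flagship = game_logic_budget_ms(60, slam_share=0.10, camera_ms=2.0, render_ms=6.0)
budget   = game_logic_budget_ms(30, slam_share=0.25, camera_ms=4.0, render_ms=12.0)
print(f"flagship-class: {flagship:.1f} ms/frame for game logic")
print(f"budget-class:   {budget:.1f} ms/frame for game logic")
```

Even this crude model makes trade-offs visible: dropping the budget device to 30 FPS can leave it more logic time per frame than the flagship at 60 FPS, at the cost of smoothness.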
Memory constraints. Textures in AR games are often compressed less aggressively than in regular games, because they are overlaid on the real world and compression artifacts are more noticeable. On devices with 2–3 GB of RAM this creates real pressure.
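The cost of that decision is easy to quantify. A rough size estimate (a sketch; the atlas size and format choice are illustrative) compares uncompressed RGBA8 against an ASTC 6x6 equivalent:

```python
# Rough GPU texture memory estimate. Formats and sizes are illustrative;
# ASTC 6x6 stores 128 bits per 6x6 block, i.e. ~3.56 bits per pixel.

def texture_mb(width: int, height: int, bits_per_pixel: float,
               mipmaps: bool = True) -> float:
    """Approximate texture size in MB; a full mip chain adds ~33%."""
    size_bytes = width * height * bits_per_pixel / 8
    if mipmaps:
        size_bytes *= 4 / 3
    return size_bytes / (1024 * 1024)

# One 2048x2048 atlas: uncompressed RGBA8 (32 bpp) vs ASTC 6x6 (~3.56 bpp).
print(f"RGBA8:    {texture_mb(2048, 2048, 32):.1f} MB")
print(f"ASTC 6x6: {texture_mb(2048, 2048, 3.56):.1f} MB")
```

A handful of uncompressed atlases can consume a noticeable fraction of a 2–3 GB device's memory on top of the AR runtime's own camera and mesh buffers, so the "compress less" default deserves per-asset scrutiny.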
Display characteristics. FOV, refresh rate, and pixel density are critical for OST devices. HoloLens 2 has a ~60 Hz refresh rate and a limited FOV, which affects which effects and animations look acceptable.
Analysis format
The result is a technical document with a capability matrix across devices and a list of constraints that must be accounted for in the architecture.
Typical matrix structure: device × capability (hand tracking, plane detection, occlusion, spatial anchors) × status (supported / not supported / limited) × constraints and SDK dependencies.
This matrix determines the minimum viable platform — the device whose constraints set the lower bound for the project — and the premium experience — what becomes available on more powerful platforms.
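Kept as data rather than a spreadsheet, the matrix can drive tooling directly. A minimal sketch (device names and statuses below are illustrative examples, not verified spec data):

```python
# Minimal capability-matrix sketch. Statuses here are illustrative
# placeholders; fill in real values from SDK documentation and testing.
from enum import Enum

class Support(Enum):
    YES = "supported"
    LIMITED = "limited"
    NO = "not supported"

MATRIX = {
    "Device A (headset)": {"hand_tracking": Support.YES, "plane_detection": Support.YES,
                           "occlusion": Support.YES, "spatial_anchors": Support.YES},
    "Device B (passthrough)": {"hand_tracking": Support.YES, "plane_detection": Support.YES,
                               "occlusion": Support.LIMITED, "spatial_anchors": Support.YES},
    "Device C (phone)": {"hand_tracking": Support.NO, "plane_detection": Support.YES,
                         "occlusion": Support.LIMITED, "spatial_anchors": Support.LIMITED},
}

def baseline_features(matrix):
    """Features fully supported on every device: the minimum viable set
    that core mechanics may rely on without per-platform branches."""
    rows = list(matrix.values())
    return sorted(f for f in rows[0]
                  if all(row.get(f) == Support.YES for row in rows))

print(baseline_features(MATRIX))  # features safe for core mechanics
```

The intersection over all rows is exactly the "minimum viable platform" feature set; everything outside it belongs in the premium tier or behind a capability check.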
In practice, an analysis for a mobile AR game with multiplayer revealed that ARWorldMap (an iOS-only API for sharing spatial maps) lets two iPhones in the same space see identically positioned objects. The Android equivalent, ARCore Cloud Anchors, requires a connection to Google's servers and adds 500–2000 ms of latency on synchronization. This fundamentally changes multiplayer mechanic design: on-device sync on iOS is nearly instant, while cloud-based sync on Android requires a completely different UX approach.
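That constraint ends up encoded as a platform-dependent strategy choice in the architecture. The dispatch layer below is hypothetical (the two API names are real, but the latency ranges and this selection logic are illustrative assumptions from the scenario above):

```python
# Hypothetical anchor-sync strategy selection. ARWorldMap and ARCore
# Cloud Anchors are real APIs; this dispatcher and its latency figures
# are illustrative assumptions, not measured values.
from dataclasses import dataclass

@dataclass
class SyncStrategy:
    name: str
    latency_ms: tuple          # (best case, worst case)
    needs_network: bool

def pick_sync_strategy(platform: str, ios_only_session: bool) -> SyncStrategy:
    if platform == "ios" and ios_only_session:
        # On-device map sharing between iPhones: near-instant, offline.
        return SyncStrategy("ARWorldMap", (0, 100), needs_network=False)
    # Cross-platform or Android session: cloud-hosted anchors.
    return SyncStrategy("Cloud Anchors", (500, 2000), needs_network=True)

print(pick_sync_strategy("ios", ios_only_session=True).name)
print(pick_sync_strategy("android", ios_only_session=False).latency_ms)
```

Because the worst-case latencies differ by an order of magnitude, the UX around the slow path (progress indicators, turn-based rather than real-time placement) has to be designed up front, not patched in later.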
| Analysis Volume | Timeline |
|---|---|
| One device / one platform | 3–5 days |
| Comparative analysis 3–5 devices | 1–2 weeks |
| Full analysis + architecture recommendations | 2–3 weeks |
Cost is calculated after the target platforms and project requirements are determined.