Augmented Reality in Mobile Applications: ARKit, ARCore and AR Foundation
AR in production is not "placing a 3D model on a plane." It's fighting tracking drift, scale mismatches, lighting that makes the virtual object unrealistic, and rendering lag that causes nausea after 30 seconds of use.
Plane Detection and Tracking Stability
ARKit (iOS 11+) and ARCore (Android) use Visual-Inertial Odometry (VIO): joint processing of camera and IMU data. Tracking fails in three predictable scenarios: insufficient lighting (below ~50 lux), low-texture surfaces (a white wall, a glass table), and rapid camera movement.
In practice this means: if the product is built for trying furniture in residential interiors, show an explicit UI warning when the session reports ARCamera.TrackingState.limited(.insufficientFeatures). An app that silently loses tracking and doesn't explain why collects 2-star reviews.
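A minimal sketch of such a warning, assuming an ARSessionDelegate on the AR screen (the class and label names are illustrative, not from a specific codebase):

```swift
import ARKit
import UIKit

// Minimal sketch: map tracking-state changes to user-facing hints.
// TrackingStatusViewController and statusLabel are illustrative names.
final class TrackingStatusViewController: UIViewController, ARSessionDelegate {
    let statusLabel = UILabel()

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .limited(.insufficientFeatures):
            statusLabel.text = "Point the camera at a surface with more texture"
        case .limited(.excessiveMotion):
            statusLabel.text = "Move the phone more slowly"
        case .limited(.initializing), .limited(.relocalizing):
            statusLabel.text = "Hold still while tracking starts…"
        case .notAvailable:
            statusLabel.text = "AR tracking is unavailable on this device"
        case .normal:
            statusLabel.text = nil
        @unknown default:
            statusLabel.text = nil
        }
        statusLabel.isHidden = (statusLabel.text == nil)
    }
}
```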
Plane detection is configured via ARWorldTrackingConfiguration.planeDetection = [.horizontal, .vertical]. Important: ARKit keeps refining plane geometry, and the refinements arrive through ARSCNViewDelegate.renderer(_:didUpdate:for:). If you don't handle these updates, an object placed on the "detected" plane will float as the anchor is refined.
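A sketch of the configuration plus update handling, assuming a SceneKit-based screen (the view controller and the child-node layout are assumptions):

```swift
import ARKit
import SceneKit

// Sketch: detect planes and keep placed content in sync as ARKit refines them.
final class PlaneTrackingViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(config)
    }

    // Called every time ARKit refines a plane's center/extent.
    // Without this, content placed early on the plane slowly "floats".
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // Re-center whatever you attached to this plane's node:
        node.childNodes.first?.simdPosition = planeAnchor.center
    }
}
```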
AR Foundation: Cross-Platform with Nuances
Unity AR Foundation is an abstraction layer over ARKit and ARCore. It gives you a single codebase but introduces its own limitations: some ARKit features (for example, ARBodyTrackingConfiguration for full-body tracking) are unavailable through AR Foundation and require a native plugin.
For React Native and Flutter there is no direct AR Foundation equivalent. Use ViroReact (React Native) or ar_flutter_plugin for simple scenarios; for production quality, write native modules behind a bridge. The hybrid approach: a native ARKit/ARCore view renders the AR scene, while JS/Dart controls it through a method channel, as in the sketch below.
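A hedged sketch of the iOS half of such a bridge for Flutter. The channel and method names ("app/ar", "placeModel") are illustrative, and ARSceneManager is an assumed native class that owns the ARKit view:

```swift
import Flutter
import UIKit

// Sketch of the Flutter <-> native bridge on iOS. Dart calls arrive over the
// method channel and are forwarded to the native AR layer.
@main
class AppDelegate: FlutterAppDelegate {
    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        let controller = window?.rootViewController as! FlutterViewController
        let channel = FlutterMethodChannel(name: "app/ar",
                                           binaryMessenger: controller.binaryMessenger)
        channel.setMethodCallHandler { call, result in
            switch call.method {
            case "placeModel":
                // Forward to the assumed native AR layer, e.g.:
                // ARSceneManager.shared.placeModel(named: call.arguments as? String)
                result(nil)
            default:
                result(FlutterMethodNotImplemented)
            }
        }
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }
}
```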
Try-On: Fitting Products Through AR
Trying on glasses, jewelry, and cosmetics is a separate class of tasks. Plane detection is not enough here; you need face tracking.
ARKit provides ARFaceTrackingConfiguration: 52 blend-shape coefficients for facial expressions, a 3D face mesh, and the face's position and orientation in space. It works only on devices with a TrueDepth camera (iPhone X and newer).
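A minimal sketch of starting face tracking and reading blend shapes (the fallback behavior and the rig hookup are assumptions):

```swift
import ARKit

// Guard on isSupported: ARFaceTrackingConfiguration requires a TrueDepth camera.
func startFaceTracking(in session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else {
        // Assumed fallback: switch to a photo-based try-on or explain the limitation.
        return
    }
    let config = ARFaceTrackingConfiguration()
    config.isLightEstimationEnabled = true
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}

// Reading blend-shape coefficients (0...1) from the face anchor,
// e.g. to adapt a virtual glasses rig to the expression:
func handle(_ faceAnchor: ARFaceAnchor) {
    let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
    let blinkLeft = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
    _ = (jawOpen, blinkLeft) // feed into your rig/shader here
}
```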
For Android, the equivalent is ML Kit Face Mesh Detection or Google ARCore Augmented Faces. For cross-platform try-on in our practice we use the Banuba Face AR SDK: it covers both platforms, ships ready-made masks, and tracks stably even on mid-range Android devices.
Try-on quality depends critically on the 3D product models. Models must be optimized for real time: no more than 10-15K polygons for jewelry, PBR materials with correct roughness/metallic maps, and LODs for objects viewed at a distance.
Lighting and Realism
Since iOS 12, ARKit supports environment texturing: automatic generation of an environment map from the camera feed for realistic reflections on virtual objects. Enable it via ARWorldTrackingConfiguration.environmentTexturing = .automatic. Without it, metallic and glass materials look like plastic.
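A short sketch, assuming SceneKit rendering; the material values are illustrative:

```swift
import ARKit
import SceneKit

// Enable automatic environment-map generation for reflections.
let config = ARWorldTrackingConfiguration()
config.environmentTexturing = .automatic

// A material that shows the difference: metallic + low roughness
// reflects the generated environment map instead of looking flat.
let material = SCNMaterial()
material.lightingModel = .physicallyBased
material.metalness.contents = 1.0
material.roughness.contents = 0.15
```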
ARCore provides Light Estimation: the intensity and color temperature of the ambient light, which you apply in the virtual object's shader. In practice this is the difference between an object that sits naturally in the scene and an obviously overlaid 3D model.
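ARCore's own API is Kotlin/Java; to keep the examples in one language, here is the equivalent ARKit light-estimation hookup in Swift (keyLight is an assumed SCNLight driving the virtual object):

```swift
import ARKit
import SceneKit

let keyLight = SCNLight() // assumed: the main light on the virtual object

// Place this in your ARSessionDelegate: apply per-frame light estimation.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let estimate = frame.lightEstimate else { return }
    keyLight.intensity = estimate.ambientIntensity          // ~1000 is neutral indoor lighting
    keyLight.temperature = estimate.ambientColorTemperature // Kelvin; 6500 is neutral
}
```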
| Task | iOS | Android | Cross-platform |
|---|---|---|---|
| Plane detection | ARKit | ARCore | AR Foundation, Unity |
| Face tracking | ARKit (TrueDepth) | ARCore Augmented Faces | Banuba, Snap Camera Kit |
| Image tracking | ARKit (ARReferenceImage) | ARCore Augmented Images | AR Foundation |
| Object detection | ARKit 3D Object Scanning | ML Kit (2D object detection) | no unified SDK |
| Persistence (anchor saving) | ARKit World Map | ARCore Cloud Anchors | ARCore Cloud Anchors (iOS SDK available) |
Timelines and Estimates
A simple AR scene with one 3D model placed on a plane: 1-2 weeks. Face try-on with a product catalog: from 6 weeks, covering the 3D pipeline for model optimization, tracking integration, and the UI for selecting products and saving results. Full-fledged AR shopping with cloud anchors and multiplayer: from 3 months.