Implementing AR Plane Scanning in Mobile Application
Plane detection is the foundation of most AR scenarios: furniture placement, measurements, floor navigation. Without stable plane detection, an AR object "floats" and doesn't feel real to the user. Basic plane detection is straightforward; making a placed object look like it genuinely stands on the surface is not.
Platform APIs and Their Real Limitations
ARKit (iOS). Use ARWorldTrackingConfiguration with planeDetection = [.horizontal, .vertical]. ARKit returns an ARPlaneAnchor with ARPlaneGeometry, a plane mesh that updates as scanning progresses. The problem: for the first few seconds ARKit returns a small rectangle that aggressively changes size and orientation. Place an object immediately and it "jumps" on the next anchor update.
The solution is a minimum maturity threshold plus a debounce on updates. ARPlaneAnchor has no explicit confidence field, but plane size (extent) serves as an indirect maturity indicator: don't show the placement UI while extent.x < 0.3 or extent.z < 0.3 meters.
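A minimal sketch of this gating in an ARSessionDelegate. The 0.3 m threshold comes from the text; the debounce interval, class name, and callback are illustrative, not part of ARKit:

```swift
import ARKit

final class PlaneGate: NSObject, ARSessionDelegate {
    /// Illustrative maturity threshold: the fitted rectangle must be at least 0.3 x 0.3 m.
    private let minExtent: Float = 0.3
    /// Timestamp of the last update while the plane was still immature.
    private var lastImmatureUpdate: [UUID: Date] = [:]
    /// Illustrative hook for enabling the placement UI.
    var onPlaneReady: ((ARPlaneAnchor) -> Void)?

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // Maturity proxy: extent still changes aggressively during the first seconds.
            guard plane.extent.x > minExtent, plane.extent.z > minExtent else {
                lastImmatureUpdate[plane.identifier] = Date()
                continue
            }
            // Debounce: require ~0.5 s of stability after the last immature update.
            let last = lastImmatureUpdate[plane.identifier] ?? .distantPast
            if Date().timeIntervalSince(last) > 0.5 {
                onPlaneReady?(plane)
            }
        }
    }
}
```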
ARCore (Android). Use a Plane with TrackingState.TRACKING and Plane.Type.HORIZONTAL_UPWARD_FACING / VERTICAL. ARCore additionally provides Plane.getSubsumedBy(), which reports when one plane has been merged into another. This breaks logic that tied anchors to the original plane: the Anchor has to be re-created on the subsuming plane.
Vertical planes. ARKit reliably detects vertical planes only on textured surfaces (a wallpapered wall works well, a monochrome white wall poorly). ARCore is even less reliable at vertical detection. For products where wall mounting is critical (pictures, shelves), it's better to combine plane detection with LiDAR (iPhone 12 Pro and later Pro models) to reconstruct the missing geometry.
LiDAR and Its Impact on Scanning Quality
On LiDAR devices (iPhone 12 Pro, 13 Pro, 14 Pro, 15 Pro, iPad Pro), ARKit builds a dense environment mesh via ARMeshAnchor. Plane detection with LiDAR works differently: planes are derived from the mesh, not from visual SLAM. This gives:
- Plane detection in 1-2 seconds instead of 5-10
- Stable boundaries even on monochrome surfaces
- Correct detection of steps, ramps, and sloped planes
For apps where LiDAR devices are the primary audience (professional surveying, repair, construction), enabling sceneReconstruction = .mesh on the ARWorldTrackingConfiguration yields a jump in quality.
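Mesh reconstruction is a capability check plus one property on the same configuration; a sketch:

```swift
import ARKit

func makeConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = [.horizontal, .vertical]
    // On LiDAR devices, derive planes from the dense mesh instead of pure visual SLAM.
    // The check returns false on non-LiDAR hardware, so the same code runs everywhere.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    return config
}
```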
Visualizing Scan Progress
The user doesn't know they need to move the camera; they need a clear hint. Typical patterns:
- Animated scanning beam from bottom of screen
- Plane outline that "grows" as detection progresses
- Text instruction with auto-hide after first successful detection
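On iOS, the text-instruction pattern ships out of the box as ARCoachingOverlayView, which shows the "move your device" animation and hides itself after detection; a sketch:

```swift
import ARKit

func addCoaching(to view: ARSCNView) {
    let overlay = ARCoachingOverlayView()
    overlay.session = view.session
    overlay.goal = .horizontalPlane       // guide the user until a horizontal plane is found
    overlay.activatesAutomatically = true // reappears if tracking degrades later
    overlay.frame = view.bounds
    overlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    view.addSubview(overlay)
}
```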
To draw plane boundaries in RealityKit, use a ModelEntity with a wireframe material tied to the plane anchor. In SceneKit, use an SCNNode with SCNGeometry built from ARPlaneGeometry.boundaryVertices. In ARCore with Sceneform/Filament, build your own mesh from Plane.getPolygon().
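For the SceneKit path, ARSCNPlaneGeometry can be updated in place from the anchor's geometry, so the overlay grows with the detected boundary. A sketch (the delegate class and the translucent color are illustrative):

```swift
import ARKit
import SceneKit
import UIKit

final class PlaneVisualizer: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor,
              let device = MTLCreateSystemDefaultDevice(),
              let geometry = ARSCNPlaneGeometry(device: device) else { return }
        geometry.update(from: plane.geometry)
        geometry.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3)
        node.addChildNode(SCNNode(geometry: geometry))
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor,
              let geometry = node.childNodes.first?.geometry as? ARSCNPlaneGeometry else { return }
        // Keep the visualization in sync as the detected boundary grows.
        geometry.update(from: plane.geometry)
    }
}
```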
Typical Production Problems
A plane "breaks" on rapid camera movement: the tracking state transitions to .limited(.excessiveMotion). Block placement and show a warning instead of crashing.
On dark surfaces (dark laminate, a black rug), ARKit and ARCore lose the features needed for visual SLAM. The warning arrives as ARCamera.TrackingState.Reason.insufficientFeatures; handling it and reporting to the user is mandatory.
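Both failure modes surface through the same ARKit callback; a sketch of handling them on iOS (the flag and warning hook are illustrative):

```swift
import ARKit

final class TrackingMonitor: NSObject, ARSessionObserver {
    /// Illustrative gate checked by the placement UI.
    private(set) var placementAllowed = false
    /// Illustrative hook for surfacing a message to the user.
    var showWarning: ((String) -> Void)?

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .limited(.excessiveMotion):
            placementAllowed = false
            showWarning?("Move the device more slowly")
        case .limited(.insufficientFeatures):
            placementAllowed = false
            showWarning?("Point the camera at a textured surface")
        case .normal:
            placementAllowed = true
        default:
            // .notAvailable, .limited(.initializing), .limited(.relocalizing), etc.
            placementAllowed = false
        }
    }
}
```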
Timeline
Basic plane detection with a visual hint and object placement takes 5-8 days. Refinement for LiDAR, multi-plane selection, and ARWorldMap saving takes another 5-7 days. Cost depends on target devices and visual complexity.







