Implementing LiDAR Scanning in iOS AR Applications
The LiDAR sensor first appeared in the 2020 iPad Pro and the iPhone 12 Pro, and ships in newer Pro models. ARKit consumes its data through ARWorldTrackingConfiguration with sceneReconstruction enabled, and this changes everything about plane detection quality, object occlusion, and scene initialization speed.
Without LiDAR, ARKit detects horizontal planes in 2–5 seconds, and vertical planes take much longer. With LiDAR you get an environment mesh in a fraction of a second. This isn't marketing; it's the difference between "the AR object appears immediately" and "the user waves the phone around for 10 seconds before anything happens".
Where LiDAR Integration Breaks
The first problem: ARMeshGeometry produces a very dense mesh. A typical frame contains 50,000–200,000 vertices for a medium-sized room. Pass it directly to SceneKit or RealityKit without LOD and culling, and the frame rate drops even on an A14.
The solution: use ARMeshAnchor and ARMeshGeometry.faces to build a sparser mesh, and for display create a ModelEntity via MeshResource.generate(from:) only for visible sections. RealityKit's ARView handles part of this through sceneUnderstanding.options with the .occlusion flag: it uses only the necessary subset of the mesh for occlusion calculations instead of rendering all of it.
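As a sketch of the first step, here is one way to read an ARMeshAnchor's vertex and face buffers into a RealityKit mesh (assuming iOS 15+ for the MeshDescriptor API; error handling, normals, and LOD decimation are omitted for brevity):

```swift
import ARKit
import RealityKit

// Sketch: convert one ARMeshAnchor's dense geometry into a MeshResource.
func makeMeshResource(from anchor: ARMeshAnchor) throws -> MeshResource {
    let geometry = anchor.geometry

    // Read vertex positions out of the Metal buffer ARKit fills in.
    let vertexSource = geometry.vertices
    var positions: [SIMD3<Float>] = []
    positions.reserveCapacity(vertexSource.count)
    let vertexBase = vertexSource.buffer.contents() + vertexSource.offset
    for i in 0..<vertexSource.count {
        // Read three packed Floats rather than a 16-byte SIMD3 to respect the stride.
        let f = (vertexBase + i * vertexSource.stride).assumingMemoryBound(to: Float.self)
        positions.append(SIMD3<Float>(f[0], f[1], f[2]))
    }

    // Read triangle indices; on LiDAR devices bytesPerIndex is 4 (UInt32).
    let faces = geometry.faces
    let indexCount = faces.count * faces.indexCountPerPrimitive
    let indexBase = faces.buffer.contents().assumingMemoryBound(to: UInt32.self)
    let indices = (0..<indexCount).map { indexBase[$0] }

    var descriptor = MeshDescriptor(name: "scanned-mesh")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```

In a real pipeline you would decimate `positions`/`indices` before generating the resource and only call this for anchors inside the camera frustum.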
The second problem: raycasting in LiDAR mode. ARRaycastQuery with target .estimatedPlane behaves differently than .existingPlaneGeometry. On LiDAR devices the correct path is ARRaycastQuery(origin:direction:allowing: .estimatedPlane, alignment: .any), with subsequent refinement against the mesh. Add .existingPlaneGeometry as a fallback on top of that and you get double hits and placement artifacts.
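A minimal placement sketch using that single query (assuming an `ARView` is already running a LiDAR configuration):

```swift
import ARKit
import RealityKit

// Sketch: place a box at the first hit of a mesh-backed raycast.
func placeObject(at screenPoint: CGPoint, in arView: ARView) {
    // On LiDAR devices .estimatedPlane is already refined by the mesh,
    // so no second .existingPlaneGeometry query is issued — that is
    // what produces the double-hit artifacts described above.
    guard let result = arView.raycast(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return }

    let anchor = AnchorEntity(world: result.worldTransform)
    anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1)))
    arView.scene.addAnchor(anchor)
}
```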
Third: sessionWasInterrupted. When the user backgrounds the app, the LiDAR session loses its accumulated mesh. On restoration you must call session.run(configuration, options: [.removeExistingAnchors, .resetSceneReconstruction]); without .resetSceneReconstruction, the old ARMeshAnchors get overlaid on the new ones with drift.
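The recovery path can be sketched as a session delegate (the `configuration` property here is assumed to be the same ARWorldTrackingConfiguration used at launch):

```swift
import ARKit

// Sketch: rerun the session with a clean slate after an interruption.
final class SessionHandler: NSObject, ARSessionDelegate {
    let configuration: ARWorldTrackingConfiguration

    init(configuration: ARWorldTrackingConfiguration) {
        self.configuration = configuration
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        // Drop stale anchors AND the stale reconstruction mesh; without
        // .resetSceneReconstruction the old mesh is merged back with drift.
        session.run(configuration,
                    options: [.removeExistingAnchors, .resetSceneReconstruction])
    }
}
```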
How We Build LiDAR Pipeline
We use RealityKit 2 as the main render layer: it integrates directly with ARKit 5+ and uses Metal for mesh rendering without SceneKit's CPU overhead. Configuration:
```swift
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .meshWithClassification
config.environmentTexturing = .automatic
config.frameSemantics = [.personSegmentationWithDepth]
arView.session.run(config)
```
.meshWithClassification enables surface classification (floor, wall, ceiling, window, door), which lets you filter ARMeshAnchor by ARMeshClassification and react only to the surface type you need. For furniture placement or indoor navigation apps this is critical.
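Classification is stored per face in the anchor's geometry. A sketch of reading it (assuming .meshWithClassification is active, otherwise `geometry.classification` is nil):

```swift
import ARKit

// Sketch: count how many faces of an ARMeshAnchor are classified as floor.
func floorFaceCount(in anchor: ARMeshAnchor) -> Int {
    guard let classification = anchor.geometry.classification else { return 0 }
    let base = classification.buffer.contents() + classification.offset
    var count = 0
    for faceIndex in 0..<classification.count {
        // One UInt8 raw value per face, matching ARMeshClassification cases.
        let rawValue = (base + faceIndex * classification.stride)
            .assumingMemoryBound(to: UInt8.self).pointee
        if ARMeshClassification(rawValue: Int(rawValue)) == .floor {
            count += 1
        }
    }
    return count
}
```

The same loop, keyed on .wall or .table, is how you would restrict placement to a single surface type.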
To occlude AR objects behind real items, enable:
```swift
arView.environment.sceneUnderstanding.options = [.occlusion, .physics]
```
.physics adds collisions between AR objects and real surfaces: an AR cube falls onto a table instead of passing through it.
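For the cube-on-table behavior, the entity itself also needs a collision shape and a dynamic physics body. A sketch:

```swift
import RealityKit

// Sketch: a dynamic cube that collides with the reconstructed mesh once
// sceneUnderstanding.options contains .physics. Spawned 0.5 m above its
// anchor, it falls and rests on the nearest real surface below.
func makeFallingCube() -> ModelEntity {
    let cube = ModelEntity(mesh: .generateBox(size: 0.1),
                           materials: [SimpleMaterial(color: .systemBlue,
                                                      isMetallic: false)])
    cube.generateCollisionShapes(recursive: false)
    cube.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                            material: .default,
                                            mode: .dynamic)
    cube.position.y = 0.5
    return cube
}
```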
Case study: a furniture try-on app on iPhone 13 Pro. Without LiDAR occlusion, the sofa "hung" in front of the user's legs in selfie mode. With .occlusion, the legs correctly occlude the AR object. Plane initialization time: 0.3 seconds versus 4.2 seconds on a non-LiDAR device.
Fallback for Non-LiDAR Devices
LiDAR is available only on iPhone 12 Pro and newer Pro devices. For broad coverage, implement two paths:
```swift
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    // LiDAR path
} else {
    // Plane detection fallback
    config.planeDetection = [.horizontal, .vertical]
}
```
This is not a simple if: the two branches are different UX scenarios. On non-LiDAR devices show a "point at a surface" indicator; on LiDAR devices offer placement immediately.
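One way to flesh that branching out into a full session setup, with ARCoachingOverlayView playing the role of the "point at a surface" indicator on non-LiDAR devices (a sketch, not the only possible UX):

```swift
import ARKit
import RealityKit

// Sketch: capability-based configuration with divergent UX per branch.
func configureSession(for arView: ARView) {
    let config = ARWorldTrackingConfiguration()

    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        // LiDAR path: mesh arrives almost immediately, offer placement right away.
        config.sceneReconstruction = .meshWithClassification
        arView.environment.sceneUnderstanding.options = [.occlusion, .physics]
    } else {
        // Non-LiDAR path: plane detection plus a coaching overlay that
        // guides the user to point the camera at a surface.
        config.planeDetection = [.horizontal, .vertical]
        let coaching = ARCoachingOverlayView()
        coaching.session = arView.session
        coaching.goal = .horizontalPlane
        arView.addSubview(coaching)
    }

    arView.session.run(config)
}
```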
What's Included
- Setting up ARWorldTrackingConfiguration with sceneReconstruction and surface classification
- Implementing occlusion and physics via RealityKit sceneUnderstanding
- Optimizing mesh rendering (LOD, frustum culling, sparse mesh)
- Correct raycast with LiDAR and fallback for non-LiDAR devices
- Handling session interruptions and state recovery
- Testing on real devices (iPhone 12 Pro+, iPad Pro)
Timeline
| Complexity | Timeline |
|---|---|
| Basic LiDAR integration with occlusion | 1–2 weeks |
| Full pipeline + fallback + optimization | 3–5 weeks |
| Custom surface classifiers + AR physics | 6–8 weeks |
Cost calculated individually after analyzing AR scene requirements and target devices.