AR Application Performance Optimization
An ARKit app heats an iPhone 12 to 45°C in 8 minutes and drains the battery at 1% per minute. Frame rate: 45–50 FPS instead of 60. This isn't "slightly slow"; the app is unusable. AR optimization is its own discipline: you can't sacrifice tracking quality for FPS, you can't strip out lighting and lose realism, and you have to work with the CPU (ARKit/ARCore processing), the GPU (rendering), and the Neural Engine (if ML models are involved) all at once.
AR Session Architecture and Problem Sources
ARKit and ARCore work continuously: camera frame capture → feature detection → plane estimation → world model update → rendering. Every stage is computational load. On iPhone, ARKit uses the Neural Engine for plane tracking, which significantly offloads the CPU/GPU. On Android, ARCore leans harder on the GPU on devices without an NPU.
Typical bottlenecks:
Heavy 3D models loaded into ARSCNView without optimization: an SCNNode with 500K polygons and no LOD, 4096×4096 textures without mipmapping. The GPU renders an object 10 meters away at the same detail as one right in front of the camera.
Tracking features enabled but never used. ARWorldTrackingConfiguration with isAutoFocusEnabled = true and environmentTexturing = .automatic without a real need means constant background load on the system.
Physics in an SCNScene with an SCNPhysicsBody on every object: with dozens of AR objects, SceneKit's physics engine is not built for mobile AR scenes with that many bodies.
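A cheap way out of the physics bottleneck, as a sketch (configurePhysics and the interactive flag are illustrative, not ARKit API): give a dynamic body only to objects the user can actually interact with, use a simplified collision shape, and leave decorative nodes with no body at all.

```swift
import SceneKit

// Illustrative helper: only interactive objects get a physics body,
// and the collision shape is a cheap bounding box, not the full mesh.
func configurePhysics(for node: SCNNode, interactive: Bool) {
    guard interactive else {
        node.physicsBody = nil // no body: skipped by the physics step entirely
        return
    }
    let shape = SCNPhysicsShape(node: node,
                                options: [.type: SCNPhysicsShape.ShapeType.boundingBox])
    node.physicsBody = SCNPhysicsBody(type: .dynamic, shape: shape)
}
```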
ARKit Optimization
Session Configuration
let configuration = ARWorldTrackingConfiguration()
// Enable only what's really needed
configuration.planeDetection = [.horizontal] // not .vertical if not needed
configuration.isAutoFocusEnabled = false // fixed focus — less load
configuration.environmentTexturing = .none // disable if no PBR materials
// For simple scenes — lighter tracking
let simpleConfig = AROrientationTrackingConfiguration() // orientation only, no world tracking
For apps that only need face tracking, use ARFaceTrackingConfiguration instead of ARWorldTrackingConfiguration; the difference in CPU load is noticeable.
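A minimal face-only session could look like this, assuming `session` is the app's ARSession:

```swift
import ARKit

// Face tracking only: no world map, no plane detection
if ARFaceTrackingConfiguration.isSupported {
    let faceConfig = ARFaceTrackingConfiguration()
    faceConfig.isLightEstimationEnabled = false // skip if lighting isn't used
    session.run(faceConfig, options: [.resetTracking])
}
```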
Rendering via Metal Instead of SceneKit
ARSCNView is convenient, but for complex scenes MTKView with a custom Metal renderer gives full control over draw calls. SceneKit adds overhead for node management and physics. With ARSession + MTKView:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // mtkView is the MTKView backing the renderer
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let drawable = mtkView.currentDrawable else { return }
    // Draw the camera image first, then composite virtual content on top
    renderCapturedImage(frame.capturedImage, commandBuffer: commandBuffer)
    renderVirtualContent(frame, commandBuffer: commandBuffer)
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
This gives a 20–30% FPS gain over ARSCNView on scenes with 10+ AR objects.
Culling and LOD
SCNNode.isHidden = true is not enough for off-screen objects; SceneKit skips rendering hidden nodes but still runs their physics and update logic. The correct move is to remove them from the scene entirely: node.removeFromParentNode().
// Manual frustum culling (viewportSize: the view's size in points)
func shouldRenderNode(_ node: SCNNode, camera: ARCamera) -> Bool {
    // projectPoint returns viewport coordinates in points, not normalized 0–1
    let p = camera.projectPoint(node.simdWorldPosition,
                                orientation: .portrait,
                                viewportSize: viewportSize)
    let marginX = viewportSize.width * 0.1   // 10% border to avoid pop-in at edges
    let marginY = viewportSize.height * 0.1
    return p.x > -marginX && p.x < viewportSize.width + marginX &&
           p.y > -marginY && p.y < viewportSize.height + marginY
}
ARCore Optimization (Android)
Session config:
val config = Config(session)
config.planeFindingMode = Config.PlaneFindingMode.HORIZONTAL
config.lightEstimationMode = Config.LightEstimationMode.DISABLED // +15% battery
config.depthMode = Config.DepthMode.DISABLED // if depth not needed
session.configure(config)
LightEstimationMode.ENVIRONMENTAL_HDR is the most expensive mode, but it gives realistic reflections. On devices without the Depth API (most mid-range phones), enable it only if it's a key feature.
Rendering via Filament (Google's PBR renderer): ARCore apps that render PBR materials through Filament use Vulkan on supported devices, noticeably faster than OpenGL ES. A ready-made starting point is the arcore-android-sdk samples with Filament integration.
Case: AR Furniture Catalog
Client: an AR app for viewing furniture. The sofas and tables were designer 3D models at 800K–1.2M polygons each. On an iPhone 13: 24 FPS with two objects placed. The problem was obvious.
Work done: models re-exported via Blender with decimation to 50K polygons for the AR version (the detail loss is invisible at a phone's typical 1–2 meter viewing distance); textures converted from 4096×4096 PNG to 2048×2048 ASTC; LOD added, with high detail for objects closer than 1.5 meters and medium beyond that. Result: a stable 58–60 FPS, and temperature back to normal.
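The LOD step from this case can be sketched with SceneKit's built-in mechanism; highDetailGeometry and mediumDetailGeometry are assumed to be loaded from the decimated Blender exports.

```swift
import SceneKit

// highDetailGeometry (~50K polys) and mediumDetailGeometry are assumed
// to be loaded from the decimated export files
let sofaNode = SCNNode(geometry: highDetailGeometry)

// Beyond 1.5 m from the camera, SceneKit swaps in the medium mesh automatically
sofaNode.geometry?.levelsOfDetail = [
    SCNLevelOfDetail(geometry: mediumDetailGeometry, worldSpaceDistance: 1.5)
]
```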
Monitoring AR Session Performance
// Disable heavy post-effects (RealityKit ARView)
arView.renderOptions.insert(.disableMotionBlur)
// Log frame stats
arView.session.delegate = self
func session(_ session: ARSession, didUpdate frame: ARFrame) {
print("Tracked features: \(frame.rawFeaturePoints?.points.count ?? 0)")
}
On Android: take the difference between consecutive Frame.getTimestamp() values; if it exceeds 33 ms, a frame was dropped.
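The same check on iOS, as a sketch inside the ARSessionDelegate method (lastTimestamp is an assumed stored property):

```swift
var lastTimestamp: TimeInterval = 0

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if lastTimestamp > 0 {
        let delta = frame.timestamp - lastTimestamp
        if delta > 1.0 / 45.0 { // noticeably longer than a 60 FPS frame (16.7 ms)
            print("Dropped frame: \(Int(delta * 1000)) ms between frames")
        }
    }
    lastTimestamp = frame.timestamp
}
```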
Timeline
An AR app performance audit takes 2–3 days. Rendering and session-configuration optimization: 1–2 weeks. Model optimization, if needed, is extra and depends on the asset count.