3D Model Optimization for AR (LOD, Textures, Polygon Count)
A 3D artist delivered a chair model with 1.2 million polygons and 8192×8192 PNG textures. In Cinema 4D it renders perfectly. In AR on an iPhone it runs at 18 FPS and the device overheats within 3 minutes. The problem is not the app or ARKit: the model was never prepared for rendering on a mobile GPU.
Polygon Count: What You Actually Need
At typical AR viewing distances of 0.5–3 meters, anything beyond 100–150K polygons is visually indistinguishable from a well-made 50K model. Practical budgets:
| Object Size | Recommended Polygons | Viewing Distance |
|---|---|---|
| Small (cup, phone) | 5,000–15,000 | 0.3–1 m |
| Medium (chair, lamp) | 15,000–50,000 | 0.5–2 m |
| Large (sofa, cabinet) | 30,000–80,000 | 1–3 m |
| Very large (car) | 50,000–120,000 | 2–10 m |
Decimation in Blender: Modifiers → Decimate → Collapse with Ratio = 0.05–0.1 produces solid results on complex models without manual retopology. The Collapse mode (quadric edge collapse) preserves silhouette edges far better than Un-Subdivide.
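The Ratio value can be derived from the polygon budgets above instead of guessed. A minimal sketch; the clamp range is an assumption to keep the modifier in sane territory, not a Blender default:

```python
def decimate_ratio(current_polys: int, target_polys: int,
                   lo: float = 0.02, hi: float = 0.5) -> float:
    """Ratio for Blender's Decimate (Collapse) modifier, clamped to a sane range."""
    if current_polys <= 0:
        raise ValueError("current_polys must be positive")
    return max(lo, min(hi, target_polys / current_polys))

# The 1.2M-poly chair down to the ~50K budget for medium furniture
print(round(decimate_ratio(1_200_000, 50_000), 3))  # 0.042
```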
After decimation, always test in AR on the target device, not just in the editor preview. Object silhouettes matter more than internal detail: the eye picks up a broken outline long before it notices a simplified interior surface.
Textures: Formats and Sizes
The biggest performance impact comes from textures, not polygon count.
ASTC (Adaptive Scalable Texture Compression) is the right format for mobile AR, supported by ARM Mali and Qualcomm Adreno GPUs shipped since roughly 2014. ASTC always stores one block in 128 bits, so the 6×6 block size works out to about 3.56 bpp versus 32 bpp for uncompressed RGBA: roughly 9 times less GPU memory with minimal quality loss.
ETC2 is a universal fallback for older devices (GLES 3.0+). Lower compression quality than ASTC, but wider support.
Never use PNG/JPEG as AR scene textures. PNG decodes to full RGBA8888 in GPU memory: 16 MB for a 2048×2048 texture. The same texture in ASTC 6×6 is under 2 MB.
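The arithmetic is easy to verify: ASTC packs every block into 128 bits (16 bytes) regardless of block size, so memory is just the block count times 16. A quick sanity check, not tied to any particular tool:

```python
import math

def rgba8888_bytes(w: int, h: int) -> int:
    """Uncompressed RGBA8888 footprint: 4 bytes per texel."""
    return w * h * 4

def astc_bytes(w: int, h: int, block: int = 6) -> int:
    """ASTC footprint: one 16-byte block per block x block texel tile."""
    return math.ceil(w / block) * math.ceil(h / block) * 16

w = h = 2048
print(rgba8888_bytes(w, h) / 2**20)          # 16.0 MiB as raw RGBA
print(round(astc_bytes(w, h) / 2**20, 2))    # 1.78 MiB as ASTC 6x6
```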
Generate ASTC via astcenc (command line):

```shell
astcenc -cl input.png output.astc 6x6 -medium
```
In Xcode's Asset Catalog: add the texture and set Compression = Lossy for automatic ASTC on iOS. Android has no equivalent built-in step, so run astcenc yourself as part of the build (a Gradle task or CI job).
Texture size. Rule of thumb: match texture resolution to the object's visible area on screen. For an object occupying 20% of the screen, a 512×512 texture is indistinguishable from 2048×2048. Mipmaps are mandatory: SceneKit and ARCore automatically sample the mip level matching the on-screen size.
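The screen-coverage rule can be turned into a quick size estimate. A sketch assuming a 1170-px-wide display and square power-of-two textures; both are illustrative assumptions, not platform requirements:

```python
import math

def texture_size_for_coverage(screen_px: int, coverage: float,
                              max_size: int = 2048) -> int:
    """Nearest power-of-two texture edge for the object's approximate on-screen width."""
    # Treat coverage as a fraction of screen area; its square root
    # approximates the fraction of screen width the object spans.
    needed = screen_px * math.sqrt(coverage)
    exponent = round(math.log2(max(needed, 64)))
    return min(max_size, max(64, 2 ** exponent))

# Object covering ~20% of a 1170-px-wide screen
print(texture_size_for_coverage(1170, 0.20))  # 512, matching the rule of thumb
```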
LOD for AR
Unlike game engines, SceneKit under ARKit has no automatic LOD streaming, but it does support discrete LOD switching via SCNLevelOfDetail:
```swift
// loadGeometry(_:) is a project helper returning SCNGeometry from a USDZ file
let highPolyGeometry = loadGeometry("chair_high.usdz") // 50K polygons
let medPolyGeometry = loadGeometry("chair_med.usdz")   // 15K polygons
let lowPolyGeometry = loadGeometry("chair_low.usdz")   // 5K polygons

// A fallback geometry is used once the projected bounding-sphere
// radius drops below its screenSpaceRadius threshold
let lod1 = SCNLevelOfDetail(geometry: medPolyGeometry, screenSpaceRadius: 100)
let lod2 = SCNLevelOfDetail(geometry: lowPolyGeometry, screenSpaceRadius: 30)
node.geometry?.levelsOfDetail = [lod1, lod2]
```
screenSpaceRadius is the radius of the geometry's bounding sphere projected into screen pixels; SceneKit switches to the LOD geometry once the projected radius drops below the threshold. At 100, the model spans roughly 200 pixels across. Tune the values empirically per object.
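A starting point for those thresholds can be estimated with a pinhole-camera model: projected_radius ≈ focal_length_px × world_radius / distance. A rough sketch; the ~1400 px focal length is an assumed typical value for an iPhone AR camera, and the real one should be read from ARCamera's intrinsics:

```python
def projected_radius_px(world_radius_m: float, distance_m: float,
                        focal_px: float = 1400.0) -> float:
    """Approximate bounding-sphere radius in screen pixels (pinhole camera model)."""
    return focal_px * world_radius_m / distance_m

# 0.5 m bounding sphere (roughly a chair) at increasing distances
for d in (1.0, 3.0, 7.0):
    print(d, round(projected_radius_px(0.5, d)))
# At ~7 m the projected radius hits 100 px, so a
# screenSpaceRadius of 100 switches LODs around that distance.
```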
On Android, ARCore with Filament has no built-in LOD manager either: swap the renderable (or its MaterialInstance) manually based on camera distance, and set RenderableManager.Builder.boundingBox() correctly so frustum culling works.
USDZ / glTF: Correct Format for AR
USDZ (iOS, macOS) is a container based on Pixar's OpenUSD. It carries geometry, materials, animations, and physics, and AR Quick Look displays it with zero code. Reality Converter (Apple's macOS app) converts FBX/OBJ/glTF to USDZ and can resize textures during conversion.
glTF 2.0 (Android / cross-platform) is an open standard natively supported by Filament and Sceneform. The binary variant, .glb, is preferred for AR: one file instead of many. Optimize via gltf-pipeline:
```shell
gltf-pipeline -i model.gltf -o model_opt.glb \
  --draco.compressMeshes --draco.quantizePositionBits 14
```
Draco compression (from Google) shrinks geometry 5–10× via lossy quantization. Quality is controlled by the quantization bit depth; 14 bits for positions is enough for most AR objects.
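Why 14 bits suffices: Draco quantizes positions onto a uniform grid across the mesh's bounding box, so the grid step is bounding-box extent divided by 2^bits. A quick check, using a hypothetical 1 m chair-sized object:

```python
def quantization_step_mm(extent_m: float, bits: int) -> float:
    """Grid step in millimeters for uniform position quantization over one axis."""
    return extent_m * 1000 / (1 << bits)

print(round(quantization_step_mm(1.0, 14), 3))  # 0.061 mm per step on a 1 m object
```

Sub-tenth-of-a-millimeter steps are far below anything visible at AR viewing distances.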
Timeline
Optimizing a single 3D model (decimation + textures + LOD) takes 0.5–1 day. Batch optimization of 20–50 models takes 1–2 weeks, including pipeline automation setup.