AI liveness detection for identity verification in mobile app

NOVASOLUTIONS.TECHNOLOGY develops, supports, and maintains iOS, Android, and PWA mobile applications. We have extensive experience publishing mobile applications in popular markets such as Google Play, the App Store, Amazon, AppGallery, and others.
Development and support of all types of mobile applications:
Information and entertainment mobile applications
News apps, games, reference guides, online catalogs, weather apps, fitness and health apps, travel apps, educational apps, social networks and messengers, quizzes, blogs and podcasts, forums, aggregators
E-commerce mobile applications
Online stores, B2B apps, marketplaces, online exchanges, cashback services, dropshipping platforms, loyalty programs, food and goods delivery, payment systems.
Business process management mobile applications
CRM systems, ERP systems, project management, sales team tools, financial management, production management, logistics and delivery management, HR management, data monitoring systems
Electronic services mobile applications
Classified ads platforms, online schools, online cinemas, electronic service platforms, cashback platforms, video hosting, thematic portals, online booking and scheduling platforms, online trading platforms

These are just some of the types of mobile applications we work with; each may have its own features and functionality, tailored to the needs and goals of the client.


Implementing AI Liveness Detection (Proof-of-Life Verification) for Authentication

A photo on a phone screen, a mask, a YouTube video: all are attempts to circumvent biometric verification. Liveness detection solves one task: ensuring that a living person, not an artifact, is in front of the camera. Behind this task, however, lies a non-trivial set of choices: active versus passive checking, ISO 30107 attack levels, and latency tolerance.

Active vs Passive Liveness

Active liveness asks the user to perform an action: turn the head, blink, speak a code phrase. The upside is high resistance to photo spoofing. The downside is UX: roughly 15–20% of users fail on the first attempt, and conversion drops. It is also vulnerable to deepfake videos in which the requested movement is synthesized on command.
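The challenge flow behind active liveness can be sketched as a small state machine: the user must complete randomized challenges in order, each within a timeout, which makes replayed videos fail on ordering. This is a hypothetical sketch; the class and challenge names are illustrative, not from any SDK.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// Illustrative active-liveness challenge session: challenges must be
// completed in the issued order, each within a per-step timeout.
public class ChallengeSession {
    public enum Challenge { BLINK, TURN_HEAD_LEFT, OPEN_MOUTH }

    private final Queue<Challenge> pending;
    private final long timeoutMillis;
    private long lastAdvanceAt;
    private boolean failed = false;

    public ChallengeSession(long timeoutMillis, long nowMillis, Challenge... challenges) {
        this.pending = new ArrayDeque<>(Arrays.asList(challenges));
        this.timeoutMillis = timeoutMillis;
        this.lastAdvanceAt = nowMillis;
    }

    /** Feed a detected user action (e.g. derived from blend-shape events). */
    public void onAction(Challenge observed, long nowMillis) {
        if (failed || pending.isEmpty()) return;
        if (nowMillis - lastAdvanceAt > timeoutMillis) { failed = true; return; }
        // Only the expected challenge advances the session; out-of-order
        // actions are ignored (a replayed video tends to mismatch the order).
        if (pending.peek() == observed) {
            pending.poll();
            lastAdvanceAt = nowMillis;
        }
    }

    public boolean isPassed() { return !failed && pending.isEmpty(); }
}
```

Randomizing the challenge order per session is what defeats pre-recorded videos: an attacker cannot know in advance which actions will be requested.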

Passive liveness analyzes a single frame or a short sequence without instructions: the user simply looks at the camera. It is weaker against 2D attacks (high-quality photos) but much better for UX. Modern passive models at ISO 30107-3 Level 2 withstand attacks with printed photos and 2D screens.

For KYC apps requiring iBeta PAD Level 2 (ISO 30107-3), combine the two: a passive check plus depth analysis via ARKit/ARCore.
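The hybrid decision can be reduced to requiring both signals to agree: the passive model's score and a geometric depth check. The sketch below assumes a passive score in [0, 1] and a depth-variation measurement in millimeters; all names and thresholds are illustrative.

```java
// Illustrative hybrid liveness decision: the passive model AND the
// depth-based geometry check must both pass.
public final class HybridLiveness {
    private final double passiveThreshold;   // minimum passive-model score
    private final double minDepthStddevMm;   // flat surfaces show near-zero depth variation

    public HybridLiveness(double passiveThreshold, double minDepthStddevMm) {
        this.passiveThreshold = passiveThreshold;
        this.minDepthStddevMm = minDepthStddevMm;
    }

    public boolean isLive(double passiveScore, double faceDepthStddevMm) {
        return passiveScore >= passiveThreshold
            && faceDepthStddevMm >= minDepthStddevMm;
    }
}
```

The AND combination trades some false rejections for a lower false-acceptance rate, which is usually the right trade-off in a KYC flow.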

Implementation on iOS via ARKit

ARKit on iPhone X and later (and iPad Pro models) with a TrueDepth camera provides real-time access to the depth map. ARFaceTrackingConfiguration produces an ARFaceAnchor with 52 blend shapes, including eyeBlinkLeft, eyeBlinkRight, and jawOpen. That is already a full-fledged active-liveness signal without a third-party SDK.

// Face tracking requires a TrueDepth camera; check support first
guard ARFaceTrackingConfiguration.isSupported else { return }

let config = ARFaceTrackingConfiguration()
config.maximumNumberOfTrackedFaces = 1
session.run(config)

// In ARSCNViewDelegate: called on every tracked-face update
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    // Blend-shape coefficients are in [0, 1]; ~0.7 means a near-closed eye
    let eyeBlink = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
    if eyeBlink > 0.7 { livenessDetector.recordBlink() }
}

The depth map from depthData on AVCapturePhotoOutput lets you reject flat images: if the standard deviation of depth across the face region is under ~5 mm, a flat surface (a photo or a screen) is in front of the camera.

Limitation: ARKit face tracking works only on devices with a TrueDepth front camera; iPhone SE and iPad mini are not supported. For those devices the fallback is an RGB-only model on Core ML.

Implementation on Android via ML Kit Face Mesh

com.google.mlkit:face-mesh-detection (ML Kit 18.0+) provides a 468-point 3D face mesh from a single RGB camera. This is not true depth but a 3D reconstruction from 2D: more accurate than nothing, weaker than ARKit's TrueDepth.

val options = FaceMeshDetectorOptions.Builder()
    .setUseCase(FaceMeshDetectorOptions.FACE_MESH)
    .build()
val detector = FaceMeshDetection.getClient(options)

detector.process(inputImage)
    .addOnSuccessListener { faces ->
        faces.firstOrNull()?.let { face ->
            // A printed photo or a screen reconstructs as a nearly flat mesh,
            // so low variance of the z-coordinates indicates a 2D surface
            val zs = face.allPoints.map { it.position.z }
            val mean = zs.average()
            val zVariance = zs.map { (it - mean) * (it - mean) }.average()
            if (zVariance < FLAT_THRESHOLD) rejectAsFlatImage()
        }
    }

On Android 10+ with an ARCore-compatible device, it is better to use ArCoreApk plus AugmentedFace, which yields true depth on hardware with a structured-light or ToF sensor (e.g. Pixel 6+, Samsung S21+).

Third-Party SDK: When Justified

Iproov, Jumio, Onfido, and Sumsub offer ready-made liveness SDKs with ISO 30107-3 Level 2 certification. Certification itself costs hundreds of thousands of dollars and takes months. If the product operates in a regulatory environment that specifically requires a certified solution, building your own is not cost-effective.

If the regulator does not require certification and the task is protection from casual attacks (photo spoofing, video on a phone), a custom implementation on ARKit + Core ML or ML Kit + TFLite handles it and is cheaper than SDK licenses.

Common Mistakes

A threshold without context. A livenessScore > 0.85 check in code without explanation: a month later nobody remembers where the number came from or how it was tuned. Use a configurable threshold backed by A/B testing and FRR/FAR metrics.
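Tuning such a threshold against labeled data is straightforward to express in code. The sketch below is hypothetical: given scores for genuine users and for known attacks, it computes FAR/FRR at a threshold and picks the lowest threshold meeting a target FAR. Real tuning should use a held-out dataset.

```java
import java.util.Arrays;

// Illustrative FAR/FRR computation and threshold selection.
public final class ThresholdTuner {
    /** False Acceptance Rate: attacks scoring at or above the threshold. */
    public static double far(double[] attackScores, double t) {
        return Arrays.stream(attackScores).filter(s -> s >= t).count()
                / (double) attackScores.length;
    }

    /** False Rejection Rate: genuine users scoring below the threshold. */
    public static double frr(double[] genuineScores, double t) {
        return Arrays.stream(genuineScores).filter(s -> s < t).count()
                / (double) genuineScores.length;
    }

    /** Lowest threshold (on a 0.01 grid) whose FAR does not exceed the target. */
    public static double pickThreshold(double[] attacks, double targetFar) {
        for (double t = 0.0; t <= 1.0; t += 0.01) {
            if (far(attacks, t) <= targetFar) return t;
        }
        return 1.0;
    }
}
```

Logging FAR and FRR at the chosen threshold, alongside the dataset version, answers the "where did 0.85 come from" question a month later.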

Ignoring deepfake attacks. Passive liveness without depth is vulnerable to GAN-generated faces. If they are in the threat model, add texture-inconsistency analysis (GAN artifacts in the frequency domain via FFT) or server-side inference with a heavier model.
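To make the frequency-domain idea concrete, here is a toy sketch: a naive DFT over a row of grayscale intensities, measuring the share of spectral energy in the upper half of the band, where GAN upsampling artifacts tend to appear. Real detectors use 2D FFTs over whole patches plus a trained classifier; this only illustrates the feature being extracted.

```java
// Toy spectral feature: fraction of energy in the upper half of the
// spectrum (DC excluded), via a naive O(n^2) DFT.
public final class SpectrumFeature {
    public static double highFreqEnergyRatio(double[] signal) {
        int n = signal.length;
        double total = 0, high = 0;
        for (int k = 1; k < n / 2; k++) {            // skip DC (k = 0)
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = -2 * Math.PI * k * t / n;
                re += signal[t] * Math.cos(angle);
                im += signal[t] * Math.sin(angle);
            }
            double energy = re * re + im * im;
            total += energy;
            if (k >= n / 4) high += energy;          // upper half of the band
        }
        return total == 0 ? 0 : high / total;
    }
}
```

A natural face image is dominated by low frequencies, so an unusually high ratio over many patches is a signal worth escalating to heavier server-side inference.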

Implementation Stages

Analyze the threat model → choose active/passive/hybrid liveness → choose an SDK or custom development → integrate with the camera and biometric flow → test against attacks (photo, screen, mask, deepfake) → tune thresholds → integrate with the IDV pipeline → release.

Timeline: integrating a ready-made SDK takes 2–4 weeks; a custom implementation on ARKit/ML Kit with model training takes 8–14 weeks. Cost is estimated individually.