iOS Core ML Machine Learning Development

NOVASOLUTIONS.TECHNOLOGY develops, supports, and maintains iOS, Android, and PWA mobile applications. We have extensive experience publishing mobile applications in popular marketplaces such as Google Play, the App Store, Amazon Appstore, AppGallery, and others.
Development and support of all types of mobile applications:
Information and entertainment mobile applications
News apps, games, reference guides, online catalogs, weather apps, fitness and health apps, travel apps, educational apps, social networks and messengers, quizzes, blogs and podcasts, forums, aggregators
E-commerce mobile applications
Online stores, B2B apps, marketplaces, online exchanges, cashback services, dropshipping platforms, loyalty programs, food and goods delivery, payment systems
Business process management mobile applications
CRM systems, ERP systems, project management, sales team tools, financial management, production management, logistics and delivery management, HR management, data monitoring systems
Electronic services mobile applications
Classified ads platforms, online schools, online cinemas, electronic service platforms, cashback platforms, video hosting, thematic portals, online booking and scheduling platforms, online trading platforms

These are just some of the application types we work with; each can have its own features and functionality tailored to the client's needs and goals.

Complexity: Complex
Estimated timeline: ~1–2 weeks
Latest works
  • Development of a mobile application for FEEDME
  • Development of a mobile application for XOOMER
  • Development of a mobile application for RHL
  • Development of a mobile application for ZIPPY
  • Development of a mobile application for Affhome
  • Development of a mobile application for the FLAVORS company

Machine Learning Development (Core ML) in iOS Applications

Moving a model from a Python environment into mobile production immediately reveals format incompatibilities, inference latency issues, and the lack of an update mechanism that does not require an App Store release. Core ML solves these problems natively, provided it is integrated correctly into your app architecture.

Common Time Wasters During Core ML Integration

The most frequent mistake is converting a model without considering the target hardware. coremltools lets you specify minimum_deployment_target and compute_units (CPU_ONLY, CPU_AND_GPU, CPU_AND_NE, or ALL); the Swift side mirrors this at load time with MLComputeUnits (.cpuOnly, .cpuAndGPU, .cpuAndNeuralEngine, .all). A configuration that excludes the Neural Engine means the model never reaches it on A12+ devices and runs on the CPU instead, often 5–10x slower for convolutional networks.
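As a sketch, a conversion call with an explicit compute unit and deployment target might look like this; the source file name, input shape, and iOS target are placeholders for your actual setup:

```python
import coremltools as ct

# Illustrative conversion of a traced PyTorch model.
# "model.torchscript" and the input shape are placeholders.
mlmodel = ct.convert(
    "model.torchscript",
    inputs=[ct.TensorType(shape=(1, 3, 224, 224))],
    minimum_deployment_target=ct.target.iOS15,
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # allow the Neural Engine on A12+
    convert_to="mlprogram",                   # emits an .mlpackage
)
mlmodel.save("Model.mlpackage")
```

Note that compute_units here only affects how coremltools loads the model for validation; the same choice must be repeated at load time in the app via MLModelConfiguration.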

Second issue: input format. Core ML expects a CVPixelBuffer with a specific kCVPixelFormatType. If your app gets a UIImage from the camera via AVCapturePhotoOutput, an intermediate conversion is needed: UIImage → CIImage → CVPixelBuffer. Doing this on the main thread guarantees dropped frames. The entire capture-to-inference pipeline must run on a DispatchQueue with QoS .userInteractive, or go through the Vision framework, which manages buffers automatically.
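A minimal sketch of that conversion running off the main thread; the queue label and helper function are illustrative, not a fixed API:

```swift
import UIKit
import CoreImage
import CoreVideo

// Illustrative background queue for the capture-to-inference pipeline.
let inferenceQueue = DispatchQueue(label: "app.ml.inference", qos: .userInteractive)

// Hypothetical helper: UIImage -> CIImage -> CVPixelBuffer.
// Call it on inferenceQueue, never on the main thread.
func makePixelBuffer(from image: UIImage, context: CIContext) -> CVPixelBuffer? {
    guard let ciImage = CIImage(image: image) else { return nil }
    var buffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(image.size.width),
                        Int(image.size.height),
                        kCVPixelFormatType_32BGRA,
                        attrs,
                        &buffer)
    guard let buffer else { return nil }
    context.render(ciImage, to: buffer)  // CIContext writes the pixels into the buffer
    return buffer
}
```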

Vision + Core ML is the right combination for most tasks: VNCoreMLRequest handles scaling, normalization, and buffer management. For sequences of inferences on a video stream, use VNSequenceRequestHandler, which carries state between frames.
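A hedged sketch of the Vision setup; the model is assumed to be an image classifier, and the completion handler simply logs the top result:

```swift
import Vision
import CoreML

// Builds a classification request around an already loaded MLModel.
func makeClassificationRequest(for model: MLModel) throws -> VNCoreMLRequest {
    let vnModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    request.imageCropAndScaleOption = .centerCrop  // Vision handles resizing/cropping
    return request
}

// For video streams, reuse one handler across frames so state is preserved:
let sequenceHandler = VNSequenceRequestHandler()
// Then call sequenceHandler.perform([request], on: pixelBuffer) for each frame.
```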

Core ML Integration Process

Start with an audit of the source model: format (ONNX, TensorFlow SavedModel, PyTorch TorchScript), weight size, and operation count. Use coremltools 7.x for conversion. For quantized models, use ct.optimize.coreml with OpLinearQuantizerConfig or OpPalettizerConfig. 8-bit quantization cuts model size by 4x versus Float32 without noticeable accuracy loss on most classifiers.
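Assuming the model has already been converted to an .mlpackage, 8-bit weight quantization might be sketched as follows (the paths are placeholders):

```python
import coremltools as ct
import coremltools.optimize.coreml as cto

# Illustrative post-conversion weight quantization to 8-bit.
mlmodel = ct.models.MLModel("Model.mlpackage")
op_config = cto.OpLinearQuantizerConfig(mode="linear_symmetric")
config = cto.OptimizationConfig(global_config=op_config)
compressed = cto.linear_quantize_weights(mlmodel, config=config)
compressed.save("Model_int8.mlpackage")
```

Always re-validate accuracy on a held-out set after compression; the 4x size reduction is guaranteed, the accuracy is not.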

Real example: a fintech client needed on-device document forgery detection. The original TFLite model (MobileNetV3, 12 MB) took 280 ms per inference on an iPhone 12. After conversion to .mlpackage with computeUnits = .cpuAndNeuralEngine and Float16 compression: 34 ms on the same device. Inference was wrapped in an MLModelConfiguration with allowLowPrecisionAccumulationOnGPU = true.
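The load-time configuration from this example can be sketched as follows; DocumentChecker is our placeholder for the Xcode-generated model class, not a real name from the project:

```swift
import CoreML

// Load-time configuration matching the setup described above.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine
config.allowLowPrecisionAccumulationOnGPU = true

// "DocumentChecker" stands in for the class Xcode generates from the .mlpackage.
let model = try DocumentChecker(configuration: config)
```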

For model updates without App Store releases, set up loading via CloudKit or custom S3-compatible storage. MLModel(contentsOf:) accepts local URLs of compiled models: the model downloads in the background, is verified via SHA-256, compiled with MLModel.compileModel(at:), and swapped atomically through FileManager.replaceItemAt. The old version is kept as a fallback.
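One possible shape for the verify-compile-swap step; the function name, error type, and URL parameters are illustrative:

```swift
import Foundation
import CoreML
import CryptoKit

enum ModelUpdateError: Error { case checksumMismatch }

// Sketch of the OTA installation flow described above.
func installDownloadedModel(at downloaded: URL,
                            expectedSHA256: String,
                            installLocation: URL) throws -> MLModel {
    // 1. Verify integrity before touching the live model.
    let data = try Data(contentsOf: downloaded)
    let digest = SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
    guard digest == expectedSHA256 else { throw ModelUpdateError.checksumMismatch }

    // 2. Compile the .mlmodel/.mlpackage into a runnable .mlmodelc.
    let compiledURL = try MLModel.compileModel(at: downloaded)

    // 3. Atomic swap: replaceItemAt keeps the operation all-or-nothing.
    _ = try FileManager.default.replaceItemAt(installLocation, withItemAt: compiledURL)
    return try MLModel(contentsOf: installLocation)
}
```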

Architecturally, isolate the ML layer in a separate module (a Swift Package) behind an MLInferenceService protocol. This enables mock implementations in tests and reuse across multiple targets.
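A sketch of such a protocol boundary; the method signature and the mock are assumptions, not a fixed interface:

```swift
import CoreML
import CoreVideo

// The app depends on this protocol, not on Core ML types directly.
// Adapt the result type to your task (classification shown here).
protocol MLInferenceService {
    func classify(_ buffer: CVPixelBuffer) async throws -> [(label: String, confidence: Float)]
}

// Test doubles become trivial:
struct MockInferenceService: MLInferenceService {
    func classify(_ buffer: CVPixelBuffer) async throws -> [(label: String, confidence: Float)] {
        [("mock", 1.0)]
    }
}
```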

What's Included

  • Source model audit and conversion path selection
  • Conversion to .mlmodel / .mlpackage via coremltools
  • Optimization: quantization, pruning, compute unit selection
  • Integration via Vision or direct MLModel API
  • OTA model update mechanism (CloudKit / S3)
  • Unit tests for inference with reference inputs/outputs
  • Profiling via Xcode Instruments (Core ML Instrument)

Timeline

Integrating a pre-converted model into an existing app takes 3–5 business days. Converting, optimizing, and setting up OTA updates from scratch takes 1–2 weeks. Cost is calculated individually after a review of the requirements and the source model.