Predictive Maintenance Implementation in Mobile Applications
Predictive maintenance in a mobile context is not just a dashboard with charts. It is a system that collects data from sensors (vibration, temperature, current draw), runs it through an ML model, and forecasts a failure before the equipment stops. The mobile application acts as the interface for field technicians: they receive an alert, open the equipment card, see the anomaly on a trend chart, and decide whether a component needs replacing.
Where It Really Gets Complex
Sensor data collection. Sensors speak different protocols: Modbus RTU/TCP, OPC UA, MQTT, sometimes BLE. The mobile app rarely talks to them directly; usually an edge server (a Raspberry Pi or Siemens IoT2040) collects the data and pushes it to the cloud. The app's job is to subscribe to MQTT topics or poll a REST API, and to handle telemetry gaps correctly (a sensor that goes silent for two minutes is usually a connection break, not an anomaly).
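That gap-handling rule can be sketched in a few lines. This is an illustration only: the five-minute cutoff, the expected interval, and the function name are assumptions, not values from the original.

```python
from datetime import datetime, timedelta

# Assumption: gaps shorter than this are treated as connection breaks.
GAP_RECONNECT = timedelta(minutes=5)

def classify_gaps(timestamps, expected_interval=timedelta(seconds=10)):
    """Label each gap in a telemetry stream.

    Gaps shorter than GAP_RECONNECT are 'reconnect' (ignore them);
    longer ones are 'outage' (surface them to the technician).
    """
    gaps = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        delta = cur - prev
        if delta <= expected_interval * 2:
            continue  # normal jitter, not a gap
        kind = "reconnect" if delta < GAP_RECONNECT else "outage"
        gaps.append((prev, cur, kind))
    return gaps
```

The point is that the chart and the alerting pipeline treat the two kinds of gap differently: a reconnect segment is simply not drawn, while an outage becomes its own notification.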
On Android, keep the MQTT subscription in a ForegroundService with a persistent notification: it is the only reliable way to get real-time data without being killed by the aggressive battery savers on Xiaomi and Huawei devices. Using WorkManager for MQTT is a mistake: it does not guarantee intervals shorter than 15 minutes.
Time-series visualization. Displaying 10,000 points on a vibration chart is not a naive drawLine loop. On iOS, the Charts library (danielgindi/Charts, now DGCharts) struggles beyond roughly 2,000 points without decimation. The solution is LTTB (Largest-Triangle-Three-Buckets), a downsampling algorithm that preserves the visual shape of the curve while reducing the point count 10–20x. We run it client-side before rendering.
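LTTB is compact enough to sketch in plain Python (illustration only; in production the same algorithm would live client-side in Swift, Kotlin, or Dart):

```python
def lttb(points, threshold):
    """Largest-Triangle-Three-Buckets downsampling.

    points: list of (x, y) tuples sorted by x; threshold: target point count.
    Keeps the first and last points, then picks from each bucket the point
    forming the largest triangle with the previously selected point and the
    average of the next bucket, preserving the curve's visual shape.
    """
    n = len(points)
    if threshold >= n or threshold < 3:
        return list(points)
    sampled = [points[0]]
    every = (n - 2) / (threshold - 2)  # bucket width (first/last are fixed)
    a = 0  # index of the last selected point
    for i in range(threshold - 2):
        # average of the NEXT bucket (the triangle's third vertex)
        avg_start = int((i + 1) * every) + 1
        avg_end = min(int((i + 2) * every) + 1, n)
        avg_x = sum(p[0] for p in points[avg_start:avg_end]) / (avg_end - avg_start)
        avg_y = sum(p[1] for p in points[avg_start:avg_end]) / (avg_end - avg_start)
        # scan the CURRENT bucket for the max-area triangle
        start = int(i * every) + 1
        end = int((i + 1) * every) + 1
        ax, ay = points[a]
        max_area, max_idx = -1.0, start
        for j in range(start, end):
            area = abs((ax - avg_x) * (points[j][1] - ay)
                       - (ax - points[j][0]) * (avg_y - ay)) / 2
            if area > max_area:
                max_area, max_idx = area, j
        sampled.append(points[max_idx])
        a = max_idx
    sampled.append(points[-1])
    return sampled
```

Because the selected point maximizes the triangle against the next bucket's average, spikes and dips survive downsampling, which is exactly what matters on a vibration trend.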
ML model: server or on-device? In industrial systems the model usually lives on the server: the data volume and model complexity (LSTM, Isolation Forest, XGBoost) call for server-side inference. But if the site is in a no-internet zone (a mine, a remote field), you need an on-device variant. Core ML on iOS and TFLite on Android handle lightweight models (a pruned LSTM, an ONNX-converted Random Forest). Model updates are pulled via background download whenever the network is available.
How We Build It
The typical stack: a mobile app (React Native or Flutter for cross-platform, Swift/Kotlin where native is required) + an MQTT client (Eclipse Paho, or mqtt_client for Flutter) + a Python backend (FastAPI plus Celery for scheduled inference) + TimescaleDB for telemetry storage.
At the ML level, the anomaly model is trained on historical data from normal equipment operation. We usually apply Isolation Forest for initial detection and an LSTM autoencoder for more precise classification of the anomaly type. Models are exported to ONNX to unify inference.
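The train-on-normal pattern is the key idea: fit only on healthy operation, then score new telemetry by how far it deviates. As a toy stand-in (this is not Isolation Forest or an autoencoder; the class and its z-score scoring are illustrative assumptions), a per-feature profile looks like this:

```python
import math

class NormalProfile:
    """Toy stand-in for a train-on-normal anomaly model.

    fit() learns per-feature mean/std from normal-operation history;
    score() returns the largest feature z-score of a new sample. A real
    system would use Isolation Forest or an LSTM autoencoder instead.
    """
    def fit(self, rows):
        cols = list(zip(*rows))
        self.mean = [sum(c) / len(c) for c in cols]
        # floor the std so a constant feature cannot divide by zero
        self.std = [max(math.sqrt(sum((v - m) ** 2 for v in c) / len(c)), 1e-9)
                    for c, m in zip(cols, self.mean)]
        return self

    def score(self, row):
        """Higher score = further from normal operation."""
        return max(abs(v - m) / s for v, m, s in zip(row, self.mean, self.std))
```

The same contract (fit on normal history, score a telemetry window, compare against a threshold) holds whether the scorer is this toy profile or an ONNX-exported autoencoder.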
Alert thresholds are tuned per device, not globally: the same pump in different operating conditions produces a different baseline vibration.
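The per-device idea can be sketched as a rolling baseline with a k-sigma band (the window size, k, and the noise floor here are illustrative assumptions, not tuned values):

```python
from collections import deque
import math

class DeviceBaseline:
    """Per-device alert threshold.

    Maintains a rolling window of recent readings for ONE device and flags
    a new value when it exceeds mean + k * std of that device's own history,
    so a noisy pump and a quiet pump each get their own band.
    """
    def __init__(self, window=500, k=4.0, min_sigma=0.01):
        self.values = deque(maxlen=window)
        self.k = k
        self.min_sigma = min_sigma  # noise floor: a flat signal can't zero the band

    def update(self, value):
        """Return True if `value` is anomalous against this device's baseline."""
        if len(self.values) >= 30:  # wait until we have enough history
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            sigma = max(math.sqrt(var), self.min_sigma)
            if value > mean + self.k * sigma:
                return True  # anomaly: do NOT absorb it into the baseline
        self.values.append(value)
        return False
```

Note the design choice: anomalous readings are not added to the window, otherwise a slowly degrading bearing would drag the baseline up and suppress its own alert.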
Implementation Process
We start with an audit: which sensors, which protocols, what data volume, whether offline mode is needed. Then comes a prototype integration with real equipment (without it, any timeline estimate is meaningless). In parallel, we collect historical data for model training.
Development proceeds in iterations: first raw data shown in the app, then charts, then threshold alerts, then ML alerts. Each stage is validated with technicians on a real site.
Timeline Estimates
An MVP that connects one sensor type, with a dashboard and threshold alerts, takes 4–6 weeks. A full system with an ML model, multiple equipment types, offline mode, and ERP integration takes 3–5 months. Cost is estimated individually after analyzing the infrastructure and the required forecasting accuracy.