Implementing AI Anomaly Detection for IoT Sensors in a Mobile App
A threshold alert like "if temperature > 80°C" fires late: by the time the threshold is crossed, the problem has already developed. An anomaly is a deviation from the normal pattern, often detectable hours or days before values become critical. A motor that normally heats to 45°C in 20 minutes but hits 45°C in 8 minutes today — that's an anomaly, even though the temperature itself is normal.
Algorithms: What's Practical for Mobile
Real-time IoT anomaly detection on mobile needs low-memory algorithms with fast inference. Not everything that works in a Jupyter notebook on a full dataset works for online detection.
Z-score with adaptive baseline. Instead of a fixed mean/std over the entire period, use a sliding baseline (Exponentially Weighted Moving Average). Runs on-device without any model:
import kotlin.math.abs
import kotlin.math.sqrt

sealed class AnomalyResult {
    data class Normal(val value: Double, val zScore: Double) : AnomalyResult()
    data class Anomaly(val value: Double, val zScore: Double, val isSpike: Boolean) : AnomalyResult()
}

class EWMAnomalyDetector(
    private val alpha: Double = 0.1,    // smoothing coefficient
    private val threshold: Double = 3.0 // number of sigmas
) {
    private var ewma: Double? = null
    private var ewmVar: Double = 0.0

    fun detect(value: Double): AnomalyResult {
        // First sample initializes the baseline
        val mean = ewma ?: run { ewma = value; return AnomalyResult.Normal(value, 0.0) }
        val deviation = value - mean
        ewmVar = alpha * deviation * deviation + (1 - alpha) * ewmVar
        val sigma = sqrt(ewmVar).coerceAtLeast(1e-9) // guard against division by zero
        val zScore = abs(deviation) / sigma
        ewma = alpha * value + (1 - alpha) * mean
        return if (zScore > threshold) {
            AnomalyResult.Anomaly(value, zScore, isSpike = deviation > 0)
        } else {
            AnomalyResult.Normal(value, zScore)
        }
    }
}
Isolation Forest. Trained offline; inference is fast. The best fit for multivariate data (multiple sensors) without neural nets. The model can be converted to TFLite via ONNX or through sklearn2pmml plus a custom converter.
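To show why isolation-based scoring works, here is a toy univariate sketch (not the sklearn/TFLite pipeline above; class and parameter names are illustrative): anomalies are separated from the rest by a few random splits, so their average path length through the trees is short and their score approaches 1.

```kotlin
import kotlin.math.ln
import kotlin.math.pow
import kotlin.random.Random

private class Node(val split: Double?, val left: Node?, val right: Node?, val size: Int)

// Standard normalizer c(n): average path length of an unsuccessful BST search
private fun c(n: Int): Double =
    if (n <= 1) 0.0 else 2.0 * (ln((n - 1).toDouble()) + 0.5772156649) - 2.0 * (n - 1) / n

private fun build(data: List<Double>, depth: Int, maxDepth: Int, rng: Random): Node {
    val min = data.minOrNull() ?: 0.0
    val max = data.maxOrNull() ?: 0.0
    if (depth >= maxDepth || data.size <= 1 || min == max)
        return Node(null, null, null, data.size)
    val split = min + rng.nextDouble() * (max - min) // random split point
    return Node(
        split,
        build(data.filter { it < split }, depth + 1, maxDepth, rng),
        build(data.filter { it >= split }, depth + 1, maxDepth, rng),
        data.size
    )
}

private fun pathLength(node: Node, x: Double, depth: Int): Double = when {
    node.split == null -> depth + c(node.size)
    x < node.split -> pathLength(node.left!!, x, depth + 1)
    else -> pathLength(node.right!!, x, depth + 1)
}

class ToyIsolationForest(data: List<Double>, trees: Int = 100, seed: Int = 42) {
    private val rng = Random(seed)
    private val sampleSize = data.size
    private val forest = List(trees) { build(data, 0, 16, rng) }

    // Score in (0, 1): around 0.5 for normal points, close to 1 for anomalies
    fun score(x: Double): Double {
        val avgPath = forest.map { pathLength(it, x, 0) }.average()
        return 2.0.pow(-avgPath / c(sampleSize))
    }
}
```

In production the forest would be trained on the backend and only the inference artifact shipped to the device; the sketch just shows why "short average path length" means "anomalous".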
LSTM Autoencoder. Best for time series with patterns (daily consumption cycles, work shifts). Trained to reconstruct normal sequences; high reconstruction error means anomaly. On device — TFLite LSTM ops with int8 quantization reduce the model from ~15 MB to ~4 MB.
// iOS: LSTM Autoencoder inference via Core ML
import CoreML

class LSTMAnomalyDetector {
    private let model: SensorAnomalyDetector // class generated by Core ML from the .mlmodel
    private let threshold: Double            // MSE cutoff calibrated offline on validation data
    private let windowSize = 60              // 60 samples = 1 minute at 1 Hz
    private var buffer: [[Double]] = []

    init(model: SensorAnomalyDetector, threshold: Double) {
        self.model = model
        self.threshold = threshold
    }

    func process(reading: SensorReading) -> AnomalyScore? {
        buffer.append(reading.toFeatureVector())
        guard buffer.count >= windowSize else { return nil }
        let window = Array(buffer.suffix(windowSize))
        guard let input = try? MLMultiArray(
            shape: [1, NSNumber(value: windowSize), NSNumber(value: reading.dimensions)],
            dataType: .double
        ) else { return nil }
        // Copy the window into the model's input tensor
        for (t, features) in window.enumerated() {
            for (d, value) in features.enumerated() {
                input[[0, NSNumber(value: t), NSNumber(value: d)]] = NSNumber(value: value)
            }
        }
        guard let prediction = try? model.prediction(input: input),
              let reconstructed = prediction.output as? MLMultiArray else { return nil }
        let mse = computeMSE(original: window, reconstructed: reconstructed)
        buffer.removeFirst() // slide the window by one sample
        return AnomalyScore(mse: mse, isAnomaly: mse > threshold)
    }
}
Multi-Level Detection: Device + Server
The optimal architecture is two-level. On the device: a light EWMA or simple threshold detector for instant reaction (< 100 ms). On the server: a heavy model (Isolation Forest, LSTM AE) with full historical context for precise classification.
The mobile app receives events from both levels:
- Device → direct local notification if the app is running
- Server → FCM/APNs push with a confirmed anomaly and its classification
import kotlinx.serialization.Serializable

@Serializable
enum class AnomalyType { SPIKE, DRIFT, PATTERN_BREAK, FLATLINE }

@Serializable
enum class Severity { LOW, MEDIUM, HIGH, CRITICAL }

@Serializable
data class AnomalyEvent(
    val sensorId: String,
    val sensorName: String,
    val timestamp: Long,
    val value: Double,
    val baseline: Double,
    val deviationPercent: Double,
    val anomalyType: AnomalyType,
    val severity: Severity,
    val possibleCause: String? = null // filled in by the server, e.g. via an LLM
)
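With two delivery channels, the on-device detector and the server push can report the same incident, so the app needs deduplication before notifying the user. A minimal sketch — the class name and the five-minute window are assumptions, not part of the architecture above:

```kotlin
// Keep whichever report arrives first; suppress duplicates for the same
// sensor within a fixed window so the user doesn't get a local notification
// and an FCM/APNs push for the same incident.
class AnomalyDeduplicator(private val windowMs: Long = 5 * 60 * 1000) {
    private val lastShown = mutableMapOf<String, Long>() // sensorId -> last shown timestamp

    // Returns true if this event should produce a user-facing notification
    fun shouldNotify(sensorId: String, timestampMs: Long): Boolean {
        val previous = lastShown[sensorId]
        if (previous != null && timestampMs - previous < windowMs) return false
        lastShown[sensorId] = timestampMs
        return true
    }
}
```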
Anomaly Analytics: Patterns and Clustering
A single anomaly may be random noise. Systematic anomalies on a schedule are a problem. The analytics screen shows:
- Heatmap of anomalies by sensor and time of day
- Anomaly clusters by type (DBSCAN on server)
- Correlations between sensors: "anomalies on T-3 always precede P-7 anomalies by 15 minutes"
These insights appear only after weeks of accumulated data — it's important to archive all anomalies with their context from day one.
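A correlation like "T-3 precedes P-7 by 15 minutes" can be found by scanning candidate lags over the two sensors' anomaly timestamps. A minimal sketch, assuming minute-granularity timestamps and exact matches (a real version would use a tolerance window and a significance test):

```kotlin
// Finds the lead time (in minutes) that maximizes how many anomalies on
// sensor A are followed by an anomaly on sensor B exactly that much later.
// Timestamps are epoch minutes.
fun bestLeadMinutes(aAnomalies: Set<Long>, bAnomalies: Set<Long>, maxLeadMinutes: Long): Long? {
    var bestLead: Long? = null
    var bestMatches = 0
    for (lead in 1..maxLeadMinutes) {
        val matches = aAnomalies.count { (it + lead) in bAnomalies }
        if (matches > bestMatches) {
            bestMatches = matches
            bestLead = lead
        }
    }
    return bestLead // null if no lag produces any match
}
```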
Managing False Positives
The first complaint about any anomaly detection system: "too many false alerts." Tools to manage them:
- Feedback loop: a "This is normal" button on the anomaly card — sends a negative sample that the server incorporates during retraining
- Suppressions: "don't alert on T-5 sensor from 06:00 to 08:00 — scheduled warmup"
- Confidence threshold: show only anomalies with confidence > 0.8
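The suppression rule from the list above can be modeled as a simple predicate over sensor ID and local time. A sketch under assumed names (`SuppressionRule`, `isSuppressed`) and same-day windows only:

```kotlin
import java.time.LocalTime

// "Don't alert on this sensor between start and end" — e.g. a scheduled
// warmup. Assumes start < end on the same day; an overnight range
// (22:00–06:00) would need a second branch.
data class SuppressionRule(
    val sensorId: String,
    val start: LocalTime,
    val end: LocalTime
) {
    fun suppresses(sensorId: String, time: LocalTime): Boolean =
        sensorId == this.sensorId && time >= start && time < end
}

fun isSuppressed(rules: List<SuppressionRule>, sensorId: String, time: LocalTime): Boolean =
    rules.any { it.suppresses(sensorId, time) }
```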
Developing an anomaly detection module for a mobile IoT app with a two-level architecture and a feedback loop takes 5–8 weeks. Pricing is calculated individually.