Stability Metrics Alerts Setup for Mobile App

NOVASOLUTIONS.TECHNOLOGY develops, supports, and maintains iOS, Android, and PWA mobile applications. We have extensive experience publishing mobile applications to popular marketplaces such as Google Play, the App Store, Amazon Appstore, AppGallery, and others.
Development and support of all types of mobile applications:
Information and entertainment mobile applications
News apps, games, reference guides, online catalogs, weather apps, fitness and health apps, travel apps, educational apps, social networks and messengers, quizzes, blogs and podcasts, forums, aggregators
E-commerce mobile applications
Online stores, B2B apps, marketplaces, online exchanges, cashback services, dropshipping platforms, loyalty programs, food and goods delivery, payment systems.
Business process management mobile applications
CRM systems, ERP systems, project management, sales team tools, financial management, production management, logistics and delivery management, HR management, data monitoring systems
Electronic services mobile applications
Classified ads platforms, online schools, online cinemas, electronic service platforms, cashback platforms, video hosting, thematic portals, online booking and scheduling platforms, online trading platforms

These are just some of the types of mobile applications we work with, and each of them may have its own specific features and functionality, tailored to the specific needs and goals of the client.


Configuring Stability Metrics Alerts for Mobile Apps

Stability metrics without alerts are beautiful graphs no one watches. Alerts without proper thresholds create noise that teams stop reading in a week. The goal is to build a system where every alert signals real degradation requiring action.

Key Stability Metrics

Crash-Free Users Rate — the percentage of users who experienced zero crashes in a period. Benchmarks: Google Play treats a user-perceived crash rate above 1.09% as bad behavior; Apple publishes no official threshold, but a widely used industry target is a crash-free users rate above 99%. Important nuance: count by unique users, not sessions — otherwise one user with 10 crashes skews the metric.
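The difference between session- and user-based counting can be shown with a short sketch (hypothetical data; the function name is illustrative):

```python
from collections import defaultdict

def crash_free_rates(sessions):
    """sessions: list of (user_id, crashed) tuples for the period.
    Returns (crash-free sessions %, crash-free users %)."""
    crashed_sessions = sum(1 for _, crashed in sessions if crashed)
    users = defaultdict(bool)
    for user_id, crashed in sessions:
        users[user_id] |= crashed  # a user counts as crashed after any crash
    crashed_users = sum(1 for c in users.values() if c)
    return (
        100 * (1 - crashed_sessions / len(sessions)),
        100 * (1 - crashed_users / len(users)),
    )

# One user crashing 10 times, plus 10 users with one clean session each:
sessions = [(1, True)] * 10 + [(u, False) for u in range(2, 12)]
by_session, by_user = crash_free_rates(sessions)
print(by_session, by_user)  # 50.0 vs ~90.9: one user drags the session metric down
```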

ANR Rate (Android) — the percentage of daily users who experienced an ANR (Application Not Responding). Google Play Vitals treats a user-perceived ANR rate above 0.47% as bad behavior.

Watchdog Termination Rate (iOS) — ratio of sessions with Watchdog Termination to total sessions. No official Apple benchmark, but a good target is < 0.1%.

App Hang Rate (iOS) — a hang is an interval where the main thread is unresponsive for more than 250 ms; Xcode Organizer reports this as Hang Rate, measured in hang seconds per hour of use.
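The numeric thresholds above can be collected into one evaluation helper (values mirror the text; metric and function names are illustrative):

```python
# direction "below": alert when the value drops under the limit;
# "above": alert when it exceeds the limit
THRESHOLDS = {
    "crash_free_users_pct":     ("below", 99.0),  # common industry target
    "anr_rate_pct":             ("above", 0.47),  # Play Vitals bad behavior
    "watchdog_termination_pct": ("above", 0.1),   # informal iOS target
}

def violations(metrics: dict) -> list:
    """Return the names of metrics that breach their threshold."""
    bad = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported this period
        if (direction == "below" and value < limit) or \
           (direction == "above" and value > limit):
            bad.append(name)
    return bad

print(violations({"crash_free_users_pct": 98.4, "anr_rate_pct": 0.21}))
# ['crash_free_users_pct']
```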

Setting Up Alerts in Firebase Crashlytics

// Firebase Alert Webhook (configured in Firebase Console)
// On velocity alert — POST to your endpoint

// Example Firebase payload:
{
  "type": "crashlytics.velocityAlert",
  "data": {
    "issue": {
      "id": "issue_id",
      "title": "Fatal Exception: java.lang.NullPointerException",
      "crashPercentage": 2.3,
      "firstVersion": "2.1.0",
      "latestVersion": "2.3.1"
    }
  }
}

Velocity Alert triggers when the percentage of affected sessions spikes rapidly. Set the threshold in Firebase Console → Crashlytics → Alerts.
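On your side, the webhook endpoint only needs to unpack this payload and decide where to route it. A minimal parsing sketch (field names follow the example payload above; the Slack message format and channel are assumptions):

```python
def velocity_alert_to_slack(payload: dict):
    """Turn a Crashlytics velocity-alert payload into a Slack webhook body.
    Returns None for alert types we don't handle."""
    if payload.get("type") != "crashlytics.velocityAlert":
        return None
    issue = payload["data"]["issue"]
    return {
        "channel": "#mobile-incidents",
        "text": (
            f":rotating_light: Velocity alert: {issue['title']}\n"
            f"Affected sessions: {issue['crashPercentage']}% "
            f"(versions {issue['firstVersion']} – {issue['latestVersion']})"
        ),
    }
```

The returned dict is then POSTed as JSON to a Slack incoming-webhook URL; keeping the parsing pure makes it trivial to unit-test without a live endpoint.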

Alerts in Sentry with CRON Checks

# Sentry API — create Monitor via REST
import requests

response = requests.post(
    "https://sentry.io/api/0/organizations/YOUR_ORG/monitors/",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={
        "name": "Crash-Free Rate Drop",
        "type": "cron_job",
        "config": {
            "schedule_type": "interval",
            "schedule": [1, "hour"]
        }
    }
)

A more convenient option is configuring Sentry Alerts via the UI: Issues → Alerts → New Alert Rule:

  • Condition: Number of users affected > 50 in 1 hour
  • Action: Notify Slack channel #mobile-incidents
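If you prefer polling over push alerts, Sentry also exposes crash-free rates directly. A sketch that queries and evaluates them (the endpoint and the `crash_free_rate(session)` field assume the Sentry Sessions API; org slug and token are placeholders):

```python
import json
import urllib.parse
import urllib.request

def crash_free_ok(totals: dict, warn_below: float = 0.99) -> bool:
    """totals: the 'totals' dict of one group in the sessions response."""
    rate = totals["crash_free_rate(session)"]  # a fraction, e.g. 0.987
    return rate >= warn_below

def poll(token: str) -> bool:
    """True if every group's crash-free rate is above the warning level."""
    url = ("https://sentry.io/api/0/organizations/YOUR_ORG/sessions/?"
           + urllib.parse.urlencode({"field": "crash_free_rate(session)",
                                     "statsPeriod": "1h"}))
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        groups = json.load(resp)["groups"]
    return all(crash_free_ok(g["totals"]) for g in groups)
```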

Datadog Alerts Based on RUM Metrics

# Datadog RUM monitor query (event count alert);
# exact attribute names may differ per RUM setup
rum("@type:error @error.is_crash:true service:ios-app env:production").rollup("count").last("1h") > 100

# Condition: > 100 crashes per hour → CRITICAL
#            > 50 crashes per hour → WARNING

For Crash-Free Rate:

# Computed metric in Datadog (assumes rum.crash_count / rum.session_count
# are custom metrics generated from RUM events)
(1 - (sum:rum.crash_count{service:ios-app} / sum:rum.session_count{service:ios-app})) * 100

# Alert: if < 99% → WARNING, < 98% → CRITICAL
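Both thresholds can be registered programmatically via Datadog's monitor API (POST /api/v1/monitor). A sketch that only builds the request body, using the computed query above (metric names assume RUM-generated custom metrics; sending the request and authentication are left out):

```python
def crash_free_monitor(service: str = "ios-app") -> dict:
    """Request body for POST https://api.datadoghq.com/api/v1/monitor."""
    # Metric monitors need a time-aggregation prefix such as avg(last_1h):
    query = (
        f"avg(last_1h):(1 - (sum:rum.crash_count{{service:{service}}} / "
        f"sum:rum.session_count{{service:{service}}})) * 100 < 98"
    )
    return {
        "type": "query alert",
        "name": f"Crash-Free Rate drop: {service}",
        "query": query,
        "message": "Crash-free rate below threshold @pagerduty-mobile-oncall",
        "options": {
            # for a "below" alert the warning threshold sits above critical
            "thresholds": {"critical": 98, "warning": 99},
        },
    }
```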

Alert Routing

# PagerDuty + Alertmanager (for Prometheus-based monitoring)
route:
  group_by: ['service', 'platform']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  routes:
    - match:
        severity: critical
        service: mobile
      receiver: pagerduty-mobile-oncall
    - match:
        severity: warning
        service: mobile
      receiver: slack-mobile-channel

receivers:
  - name: pagerduty-mobile-oncall
    pagerduty_configs:
      - service_key: YOUR_PD_SERVICE_KEY
  - name: slack-mobile-channel
    slack_configs:
      - api_url: YOUR_SLACK_WEBHOOK
        channel: '#mobile-stability'
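The routing above can be sanity-checked with a small simulation (a simplification of Alertmanager's first-match semantics; it does not replace validating the real config with `amtool check-config`):

```python
# Mirror of the two routes in the Alertmanager config above
ROUTES = [
    ({"severity": "critical", "service": "mobile"}, "pagerduty-mobile-oncall"),
    ({"severity": "warning", "service": "mobile"}, "slack-mobile-channel"),
]

def receiver_for(labels: dict, default: str = "default") -> str:
    """First route whose match labels all agree wins, like Alertmanager."""
    for match, receiver in ROUTES:
        if all(labels.get(k) == v for k, v in match.items()):
            return receiver
    return default  # fall back to the root receiver

print(receiver_for({"severity": "critical", "service": "mobile", "platform": "ios"}))
# pagerduty-mobile-oncall
```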

What Doesn't Work: Common Configuration Mistakes

Alerting on absolute crash count without normalization. As the audience grows, crashes increase even if Crash-Free Rate is stable. The alert fires constantly and teams stop responding.

Single threshold for all versions. A freshly shipped release has a small user base, so a high crash rate on it may be statistically insignificant. Add a condition such as sessions > 1000 before checking thresholds.

No alerts on improvement. If Crash-Free Rate suddenly jumps from 97% to 99.9%, that is also worth knowing: perhaps a hotfix worked. Bi-directional alerts help you understand release impact.
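The last two fixes, a minimum-sample guard and bi-directional alerting, fit into one evaluation function (names, baseline, and delta are illustrative):

```python
def evaluate_release(crash_free_pct: float, sessions: int,
                     baseline_pct: float, min_sessions: int = 1000,
                     delta: float = 1.0):
    """Return an alert label, or None if no alert should fire."""
    if sessions < min_sessions:
        return None  # sample too small to be statistically meaningful
    if crash_free_pct < baseline_pct - delta:
        return "DEGRADATION"   # stability dropped versus baseline
    if crash_free_pct > baseline_pct + delta:
        return "IMPROVEMENT"   # e.g. a hotfix landed; worth a look
    return None

print(evaluate_release(97.0, 5000, baseline_pct=99.0))  # DEGRADATION
print(evaluate_release(97.0, 200, baseline_pct=99.0))   # None: too few sessions
```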

What We Do

  • Define baseline stability metrics for the current version
  • Configure velocity alerts in Crashlytics/Sentry for actual traffic
  • Set up Datadog/New Relic Monitors with session normalization
  • Route CRITICAL → PagerDuty, WARNING → Slack
  • Document a runbook for each alert type

Timeline

Basic alert configuration: 4 hours – 1 day. Full system with routing and runbooks: 2 days. Pricing is calculated individually.