Setting Up HTTP Request Monitoring (Response Time, Error Rate) in Mobile Applications
Response time and error rate are the two primary signals of API health. If the average response time for /api/feed grows from 200 ms to 1800 ms, users feel it immediately, but the App Store reviews only show up the next day. With monitoring, you know within 5 minutes.
What to Measure and Where to Store
There are two levels of metrics collection:
Client-side (in-app): measure response time from the user's device, which includes network latency. This reflects real experience but is noisy: one user's poor network doesn't mean a server issue.
Server-side (APM): instrument the server and measure server time only. This misses network delay but accurately shows the state of the backend.
The correct approach is both levels: client-side for understanding UX, server-side for diagnosis.
Interceptor for Axios in React Native
```typescript
import axios, { AxiosInstance, AxiosRequestConfig, AxiosResponse } from 'axios';

// Assumed to be defined elsewhere in the app.
declare const API_BASE_URL: string;

type RequestMetric = {
  endpoint: string;
  method: string;
  statusCode: number;
  durationMs: number;
  timestamp: number;
  error?: string;
};

const metricsBuffer: RequestMetric[] = [];
const FLUSH_INTERVAL_MS = 30_000;
const FLUSH_BATCH_SIZE = 50;

function createMonitoredAxios(): AxiosInstance {
  const instance = axios.create({ baseURL: API_BASE_URL });

  instance.interceptors.request.use((config: AxiosRequestConfig) => {
    // Stash the start time on the config so the response interceptor can read it.
    (config as any).metadata = { startTime: Date.now() };
    return config;
  });

  instance.interceptors.response.use(
    (response: AxiosResponse) => {
      recordMetric(response.config, response.status, null);
      return response;
    },
    (error) => {
      // status 0 = the request never got a response (network failure, timeout).
      const status = error.response?.status ?? 0;
      recordMetric(error.config, status, error.message);
      return Promise.reject(error);
    }
  );

  return instance;
}

function recordMetric(config: any, status: number, error: string | null) {
  const durationMs = Date.now() - (config?.metadata?.startTime ?? Date.now());
  const url = config?.url ?? 'unknown';
  const endpoint = new URL(url, API_BASE_URL).pathname; // strip the query string

  metricsBuffer.push({
    endpoint,
    method: (config?.method ?? 'GET').toUpperCase(),
    statusCode: status,
    durationMs,
    timestamp: Date.now(),
    error: error ?? undefined,
  });

  if (metricsBuffer.length >= FLUSH_BATCH_SIZE) flushMetrics();
}
```
Normalize the URL to its pathname: you don't want thousands of unique metric series like /api/users/123 and /api/users/456, you want the pattern /api/users/:id. Note that stripping the query string alone doesn't collapse dynamic path segments.
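Collapsing those dynamic segments needs a small extra pass. A minimal sketch (the function name and regexes here are illustrative, not from the original interceptor):

```typescript
// Replace numeric IDs and UUID-like segments with placeholders so that
// /api/users/123 and /api/users/456 both become /api/users/:id.
function normalizeEndpoint(pathname: string): string {
  return pathname
    .split('/')
    .map((seg) => {
      if (/^\d+$/.test(seg)) return ':id';
      if (/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(seg)) return ':uuid';
      return seg;
    })
    .join('/');
}
```

Call it on the pathname inside recordMetric before pushing the metric.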
Client-Side Aggregation: P50/P95/P99
The average (mean) response time is deceptive: 90% of requests at 100 ms and 10% at 5000 ms average out to 590 ms, which matches no one's actual experience. Percentiles are more accurate:
```typescript
function calculatePercentiles(durations: number[]): { p50: number; p95: number; p99: number } {
  const sorted = [...durations].sort((a, b) => a - b);
  // Clamp the index so high percentiles of small samples can't run past the array.
  const p = (percentile: number) =>
    sorted[Math.min(sorted.length - 1, Math.floor((sorted.length * percentile) / 100))];
  return { p50: p(50), p95: p(95), p99: p(99) };
}
```
P99 is the time within which 99% of requests complete. If P99 grows while P50 stays stable, the problem hits a small slice of traffic: a specific endpoint, a specific OS version, a specific region.
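At flush time the percentile helper is typically applied per endpoint, since a site-wide P95 hides a single slow route. A sketch of that aggregation (field names follow the RequestMetric type above; aggregateByEndpoint itself is illustrative):

```typescript
type DurationMetric = { endpoint: string; durationMs: number };

function percentiles(durations: number[]) {
  const sorted = [...durations].sort((a, b) => a - b);
  const p = (q: number) =>
    sorted[Math.min(sorted.length - 1, Math.floor((sorted.length * q) / 100))];
  return { p50: p(50), p95: p(95), p99: p(99) };
}

// Group buffered metrics by endpoint and compute percentiles per group.
function aggregateByEndpoint(metrics: DurationMetric[]) {
  const groups = new Map<string, number[]>();
  for (const m of metrics) {
    const arr = groups.get(m.endpoint) ?? [];
    arr.push(m.durationMs);
    groups.set(m.endpoint, arr);
  }
  const out: Record<string, { p50: number; p95: number; p99: number }> = {};
  for (const [endpoint, durations] of groups) out[endpoint] = percentiles(durations);
  return out;
}
```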
Sending Metrics: Batching and Prioritization
```typescript
import { Platform } from 'react-native';

// Assumed to be defined elsewhere in the app.
declare const METRICS_ENDPOINT: string;
declare const APP_VERSION: string;

const MAX_BUFFER_SIZE = 500; // cap so an offline session can't grow the buffer unbounded

async function flushMetrics() {
  if (metricsBuffer.length === 0) return;
  const batch = metricsBuffer.splice(0, FLUSH_BATCH_SIZE);

  try {
    await fetch(`${METRICS_ENDPOINT}/ingest`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ metrics: batch, appVersion: APP_VERSION, platform: Platform.OS }),
    });
  } catch {
    // On a failed send, requeue the batch without exceeding MAX_BUFFER_SIZE.
    const space = Math.max(0, MAX_BUFFER_SIZE - metricsBuffer.length);
    metricsBuffer.unshift(...batch.slice(0, space));
  }
}
```
Metrics go out on a fire-and-forget fetch: a failed send must never affect UX. The buffer is capped so offline sessions don't accumulate unbounded metric data.
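The section heading mentions prioritization; one simple scheme, sketched here as an assumption rather than taken from the code above, is to keep error metrics when trimming an over-full buffer and drop successful-request samples first:

```typescript
type BufferedMetric = { statusCode: number; durationMs: number };

// Hypothetical prioritization: when the buffer exceeds maxSize, keep error
// metrics (HTTP >= 400, or 0 for network failures) and fill the remaining
// space with the newest successful samples.
function trimBuffer(buffer: BufferedMetric[], maxSize: number): BufferedMetric[] {
  if (buffer.length <= maxSize) return buffer;
  const isError = (m: BufferedMetric) => m.statusCode >= 400 || m.statusCode === 0;
  const errors = buffer.filter(isError);
  const successes = buffer.filter((m) => !isError(m));
  const room = Math.max(0, maxSize - errors.length);
  return [...errors, ...(room > 0 ? successes.slice(-room) : [])].slice(0, maxSize);
}
```

Errors carry the most diagnostic value per byte, so they survive the trim; latency samples are statistical and tolerate being thinned.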
Ready Solution: Firebase Performance Monitoring
@react-native-firebase/perf does most of this automatically: it intercepts fetch/XHR, measures timing, and sends the data to Firebase. The console shows a dashboard with percentiles per endpoint.
```typescript
import perf from '@react-native-firebase/perf';

// Custom trace for a critical operation
const trace = await perf().startTrace('checkout_flow');
trace.putAttribute('userId', userId);
// ... operation ...
await trace.stop();
```
For most apps, Firebase Performance is the right choice. For enterprises with self-hosted requirements, consider Datadog RUM for mobile or a custom pipeline into InfluxDB/Prometheus.
Response Time Alerts
Alert threshold: P95 above 2× baseline in a 5-minute window. For example, if the baseline P95 is 400 ms and it grows to 900 ms, send an alert to Slack. If the error rate exceeds 5% over 10 minutes, page via PagerDuty.
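These rules are simple enough to encode directly. A sketch using the thresholds from the example (the function name and severity labels are illustrative):

```typescript
type AlertLevel = 'none' | 'slack' | 'pagerduty';

// Evaluate the two rules above: error rate over 5% pages via PagerDuty
// (the more severe rule wins), P95 over 2x baseline alerts to Slack.
function evaluateAlert(baselineP95Ms: number, currentP95Ms: number, errorRate: number): AlertLevel {
  if (errorRate > 0.05) return 'pagerduty';
  if (currentP95Ms > 2 * baselineP95Ms) return 'slack';
  return 'none';
}
```

Run it once per window on the server side, where both baseline and current percentiles are already aggregated.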
Estimate
Firebase Performance Monitoring with custom traces and basic alerts: 1 week. Custom metrics system with batching, percentiles, and Datadog/Grafana dashboard: 2–4 weeks.