Firebase A/B Testing Integration in Mobile Applications
An A/B test in a mobile app is more than showing two groups different screens. You need to guarantee stable group assignment across sessions, measure conversion correctly, and keep analytics from being cluttered with events from multiple overlapping experiments. Firebase A/B Testing solves this on top of Remote Config and Firebase Analytics, with no separate infrastructure.
How an Experiment Works
Firebase A/B Testing is a layer over Remote Config. An experiment is created in the console: you choose a Remote Config parameter, a control group (the current value), and variants (new values). Firebase assigns users to groups on the server, and on fetchAndActivate each client receives the value for its group. No architectural changes are needed on the client: the same code that works with Remote Config works with A/B tests.
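Typed access to the experiment parameter keeps string comparisons out of view code. A minimal sketch, assuming the paywall_position parameter used in the examples below; the enum and its from helper are hypothetical names, and unknown or missing values fall back to the control variant:

```swift
// Hypothetical typed wrapper around the raw Remote Config string.
// An unexpected value from the console degrades to the control
// variant instead of crashing or blanking the paywall.
enum PaywallPosition: String {
    case bottom   // control
    case center   // variant

    static func from(configValue raw: String?) -> PaywallPosition {
        PaywallPosition(rawValue: raw ?? "") ?? .bottom
    }
}
```

Falling back to control on unknown values means a typo in the console hurts the experiment, not the app.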
The experiment's target metric is any Firebase Analytics event: purchase, screen_view, or a custom onboarding_completed. The Firebase Console computes statistical significance itself and shows the Bayesian probability that a variant beats the control.
Typical Implementation Pitfalls
Untimely activation. If fetchAndActivate completes after the user has already seen the screen with the control variant, the UI can "re-flash": it rebuilds to the new variant mid-session. The user sees both variants, and the experiment data is corrupted. The rule: apply the config before rendering the target screen, or adopt an "activate only on next cold start" policy, calling fetch() without an immediate activate().
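The next-cold-start policy can be sketched as follows (a sketch assuming the standard FirebaseRemoteConfig SDK; applyConfigForThisSession is a hypothetical helper called once, early in app launch, before any experiment-driven screen is built):

```swift
import FirebaseRemoteConfig

func applyConfigForThisSession() {
    let remoteConfig = RemoteConfig.remoteConfig()

    // 1. Activate whatever was fetched during the PREVIOUS session,
    //    so the user sees one consistent variant all session long.
    remoteConfig.activate { _, _ in }

    // 2. Fetch fresh values now, but do NOT activate them yet;
    //    step 1 will pick them up on the next cold start.
    remoteConfig.fetch { _, _ in }
}
```

The trade-off: users get a new variant one session later, but no one ever sees two variants within a single session.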
Experiment overlap. If two A/B tests change the same screen, their results cannot be interpreted separately. Firebase allows running multiple experiments in parallel, but the team is responsible for keeping them conflict-free. Maintain a table of active experiments and the parameters each one owns.
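Such a table can live in code, so conflicts are caught mechanically rather than by memory. A sketch with hypothetical Experiment and conflictingParameters names; the check can run in a unit test or a debug assertion:

```swift
// Hypothetical in-code registry: each active experiment lists the
// Remote Config parameters it owns.
struct Experiment {
    let name: String
    let parameters: Set<String>
}

// Returns every parameter claimed by more than one experiment.
func conflictingParameters(in experiments: [Experiment]) -> Set<String> {
    var seen = Set<String>()
    var conflicts = Set<String>()
    for experiment in experiments {
        for param in experiment.parameters where !seen.insert(param).inserted {
            conflicts.insert(param)
        }
    }
    return conflicts
}
```

An empty result means the experiments are safe to run in parallel; anything else should block the second experiment's launch.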
Minimum sample size. Firebase warns about statistical insignificance, but teams often stop an experiment early after seeing "nice numbers" on day three. For conversion rates below 5%, you need a minimum of 500–1000 conversions per group; anything less is noise.
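The 500–1000 figure follows from the standard two-proportion sample-size formula. A sketch with illustrative assumptions (5% baseline, a 20% relative uplift, alpha = 0.05, 80% power; none of these numbers come from Firebase):

```swift
// Per-group sample size from the two-proportion power formula:
// n = (z_alpha/2 + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p2 - p1)^2
func usersPerGroup(baseline p1: Double, variant p2: Double,
                   zAlpha: Double = 1.96,   // two-sided alpha = 0.05
                   zBeta: Double = 0.84) -> Int {  // power = 0.8
    let variance = p1 * (1 - p1) + p2 * (1 - p2)
    let effect = p2 - p1
    let z = zAlpha + zBeta
    return Int((z * z * variance / (effect * effect)).rounded(.up))
}

// Detecting 5% -> 6% conversion needs roughly 8,100 users per group,
// i.e. on the order of 450 expected conversions per group.
let n = usersPerGroup(baseline: 0.05, variant: 0.06)
```

That lands in the same order as the 500–1000 conversions guideline, and halving the detectable uplift roughly quadruples the required sample.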
Implementation on iOS
// Config already configured via RemoteConfig
// In experiment parameter: "paywall_position" = "bottom" (control) / "center" (variant)
remoteConfig.fetchAndActivate { [weak self] _, _ in
    let position = RemoteConfig.remoteConfig()["paywall_position"].stringValue
    DispatchQueue.main.async {
        self?.paywallViewModel.position = position == "center" ? .center : .bottom
    }
}
Also log a trigger (activation) event: Firebase A/B Testing uses it to restrict the analysis to users who actually saw the experiment:
Analytics.logEvent("experiment_paywall_viewed", parameters: [
    "variant": RemoteConfig.remoteConfig()["paywall_position"].stringValue ?? "unknown"
])
On Flutter (via firebase_remote_config)
final remoteConfig = FirebaseRemoteConfig.instance;
await remoteConfig.setConfigSettings(RemoteConfigSettings(
  fetchTimeout: const Duration(seconds: 10),
  minimumFetchInterval: const Duration(hours: 1),
));
await remoteConfig.fetchAndActivate();
final paywallPosition = remoteConfig.getString('paywall_position');
What's Included in the Work
- Remote Config setup with experiment-specific parameters
- Typed access to experiment parameters
- Integration at the startup point (before the target screen renders)
- Target events configured in Firebase Analytics for conversion measurement
- Experiment design consultation: hypothesis, metric, minimum sample size
Timeline
From 1 day (if Remote Config is already connected) to 3 days (from scratch, including analytics and a methodology consultation). Cost is estimated individually.