Setting up A/B testing in a mobile application

NOVASOLUTIONS.TECHNOLOGY develops, supports and maintains iOS, Android and PWA mobile applications. We have extensive experience and expertise in publishing mobile applications to popular stores such as Google Play, the App Store, Amazon Appstore, AppGallery and others.
Development and support of all types of mobile applications:
Information and entertainment mobile applications
News apps, games, reference guides, online catalogs, weather apps, fitness and health apps, travel apps, educational apps, social networks and messengers, quizzes, blogs and podcasts, forums, aggregators
E-commerce mobile applications
Online stores, B2B apps, marketplaces, online exchanges, cashback services, dropshipping platforms, loyalty programs, food and goods delivery, payment systems.
Business process management mobile applications
CRM systems, ERP systems, project management, sales team tools, financial management, production management, logistics and delivery management, HR management, data monitoring systems
Electronic services mobile applications
Classified ads platforms, online schools, online cinemas, electronic service platforms, cashback platforms, video hosting, thematic portals, online booking and scheduling platforms, online trading platforms

These are just some of the types of mobile applications we work with, and each of them may have its own specific features and functionality, tailored to the specific needs and goals of the client.

Difficulty: Medium
Estimated turnaround: ~2–3 business days

Setting up A/B testing in a mobile app

An A/B test is a controlled experiment in which one user segment sees variant A, another sees variant B, and the winner is decided by a predefined target metric. In practice, most A/B tests in mobile apps are either set up incorrectly (no statistical significance, the experiment stopped too early) or never run at all, because the infrastructure is missing.
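Under the hood, most experimentation SDKs assign variants deterministically: the user ID plus the experiment name is hashed into a bucket, so the same user always sees the same variant across sessions. A minimal, platform-neutral sketch in Python (the function name and bucket count are illustrative, not any SDK's actual API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically map a user to a bucket in [0, 10000) and pick a variant."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000
    return "A" if bucket < split * 10000 else "B"

# The same user always lands in the same variant
assert assign_variant("user-42", "checkout_cta") == assign_variant("user-42", "checkout_cta")

# Over many users the split is close to 50/50
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"user-{i}", "checkout_cta")] += 1
print(counts)
```

Because assignment depends only on the hash, no per-user state has to be stored; the split stays stable even across app reinstalls as long as the user ID is stable.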

Tool selection

Tool                  Best for                                     Drawback
Firebase A/B Testing  Simple UI/text/parameter experiments         Limited targeting flexibility
Amplitude Experiment  Product hypotheses with retention analysis   Paid; requires Amplitude Analytics
Statsig               Full cycle: flags, experiments, analysis     Requires setup
GrowthBook            Open source, self-hosted                     Infrastructure costs

For most mobile projects, Firebase A/B Testing is a reasonable starting point: it integrates through Remote Config and needs no additional SDKs.

Firebase A/B Testing: setup

Firebase A/B Testing is built on top of Remote Config. First, define the parameter and read it in the app:

import FirebaseRemoteConfig

// Read the experiment value from Remote Config
let remoteConfig = RemoteConfig.remoteConfig()
let settings = RemoteConfigSettings()
settings.minimumFetchInterval = 0  // debug only; keep the default in production
remoteConfig.configSettings = settings

remoteConfig.fetchAndActivate { status, error in
    let ctaText = remoteConfig.configValue(forKey: "checkout_cta_text").stringValue
    self.checkoutButton.setTitle(ctaText, for: .normal)
}

In the Firebase Console → A/B Testing, create an experiment:

  1. Select checkout_cta_text as Target Parameter
  2. Control: "Proceed to checkout"
  3. Variant A: "Buy now"
  4. Target metric: purchase (conversion event)
  5. Participant percentage: 50%
  6. Minimum sample size: Firebase calculates automatically
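Firebase estimates the required sample size for you, but it is worth sanity-checking the number with the standard two-proportion formula. A stdlib-only Python sketch (the baseline and lift values are illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed per variant to detect a conversion change from p1 to p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Small effects need large samples: detecting a lift from 5% to 6%
# takes roughly 8,000+ users in each variant.
print(sample_size_per_group(0.05, 0.06))
```

This is why underpowered tests that get stopped after a few hundred users per arm cannot reliably detect realistic conversion lifts.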

Critical A/B test errors

Stopping the test at the first significant result is the most common mistake. If you watch the p-value daily and stop the test the first time p < 0.05, the actual false-positive rate is far higher than the stated 5% (the "peeking" problem). Stop the test only when the predetermined sample size has been reached.
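The inflation from daily peeking is easy to demonstrate with an A/A simulation: both arms are identical, so every "significant" result is by definition a false positive. A stdlib-only Python sketch (traffic numbers and test duration are illustrative):

```python
import random
from statistics import NormalDist

def p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test p-value."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    if pooled in (0, 1):
        return 1.0
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(z))

random.seed(7)
DAYS, USERS_PER_DAY, RATE, SIMS = 14, 200, 0.05, 500
peeking_fp = fixed_fp = 0
for _ in range(SIMS):
    conv_a = conv_b = n = 0
    stopped_early = False
    for _ in range(DAYS):
        conv_a += sum(random.random() < RATE for _ in range(USERS_PER_DAY))
        conv_b += sum(random.random() < RATE for _ in range(USERS_PER_DAY))
        n += USERS_PER_DAY
        if p_value(conv_a, n, conv_b, n) < 0.05:
            stopped_early = True  # a daily peek would have declared a winner here
    peeking_fp += stopped_early
    fixed_fp += p_value(conv_a, n, conv_b, n) < 0.05

print(f"A/A false positives with daily peeking: {peeking_fp / SIMS:.0%}")
print(f"A/A false positives at fixed horizon:   {fixed_fp / SIMS:.0%}")
```

Checking the p-value only once, at the predetermined horizon, keeps the false-positive rate near the nominal 5%; peeking every day multiplies it several times over.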

One test, one metric. You cannot optimize conversion rate and session length simultaneously within a single test. If both grow, great, but there must be exactly one target metric.

Novelty effect. A new design produces a spike in clicks during the first week simply because it is new. For behavioral tests, the minimum duration is two weeks; for retention tests, four weeks.

Statsig for complex experiments

When you need more flexible segmentation (for example, testing only on Moscow users with more than three sessions), Statsig is a better fit:

// iOS Statsig SDK
import Statsig

Statsig.start(sdkKey: "client-xxx") { errorMessage in
    if let errorMessage = errorMessage {
        print("Statsig init failed: \(errorMessage)")
        return
    }
    let experiment = Statsig.getExperiment("checkout_flow_v2")
    let variant = experiment.getValue(forKey: "flow_type", defaultValue: "standard")

    if variant == "simplified" {
        self.showSimplifiedCheckout()
    } else {
        self.showStandardCheckout()
    }
}

// Android (Kotlin)
val experiment = Statsig.getExperiment("checkout_flow_v2")
val flowType = experiment.getString("flow_type", "standard")

Statsig supports stratified sampling: users are distributed uniformly across strata (platform, country, subscription plan). Without stratification, a purely random split can produce cohorts with different composition, which distorts the results.
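The idea behind stratification can be sketched in a few lines: instead of assigning users independently, balance the A/B split inside each stratum. A platform-neutral Python illustration (not Statsig's actual algorithm; the strata and user data are made up):

```python
import random
from collections import defaultdict

def stratified_assign(users, seed=42):
    """users: list of (user_id, stratum) pairs.
    Returns {user_id: 'A' or 'B'} with an even split inside every stratum."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for user_id, stratum in users:
        by_stratum[stratum].append(user_id)

    assignment = {}
    for ids in by_stratum.values():
        rng.shuffle(ids)  # random order within the stratum, then alternate A/B
        for i, user_id in enumerate(ids):
            assignment[user_id] = "A" if i % 2 == 0 else "B"
    return assignment

# Strata here are (platform, plan) pairs
users = [(f"u{i}", ("ios" if i % 3 else "android", "free" if i % 2 else "pro"))
         for i in range(1000)]
assignment = stratified_assign(users)
```

With this scheme, each (platform, plan) stratum contributes an equal number of users to both arms, so a cohort cannot end up, say, iOS-heavy on one side.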

Exposure logging

For a correct analysis, it is important to log the fact that a variant was displayed, not just the conversion:

import FirebaseAnalytics

// Log at the moment the user actually sees the variant
Analytics.logEvent("experiment_exposure", parameters: [
    "experiment_id": "checkout_cta_v2",
    "variant": variantName,
    "user_id": userId
])

This lets you analyze conversion only among users who actually saw the experiment, rather than among all assigned participants.
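Downstream, the analysis joins exposure events with conversion events, so the denominator is "users who saw the variant", not "users assigned to it". A toy Python illustration with made-up event data:

```python
# Toy event data: who was shown which variant, and who purchased
exposures = [
    {"user_id": "u1", "variant": "control"},
    {"user_id": "u2", "variant": "control"},
    {"user_id": "u3", "variant": "variant_a"},
    {"user_id": "u4", "variant": "variant_a"},
    {"user_id": "u5", "variant": "variant_a"},
]
purchasers = {"u2", "u3", "u4", "u9"}  # u9 purchased but was never exposed

def conversion_by_variant(exposures, purchasers):
    """Conversion rate per variant, counting only exposed users."""
    seen, converted = {}, {}
    for e in exposures:
        v = e["variant"]
        seen[v] = seen.get(v, 0) + 1
        converted[v] = converted.get(v, 0) + (e["user_id"] in purchasers)
    return {v: converted[v] / seen[v] for v in seen}

print(conversion_by_variant(exposures, purchasers))
# u9 is excluded: a purchase without an exposure event does not count
```

Without exposure logging, users who were assigned to a variant but never reached the experimental screen dilute both arms and mask the real effect.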

What's included in the work

  • Tool selection for tasks and stack (Firebase / Statsig / Amplitude Experiment)
  • SDK integration and Remote Config / Feature Flags setup
  • A/B layer implementation in code with correct variant handling
  • Target metrics setup and conversion events
  • Sample size and test duration configuration
  • Exposure logging for analysis

Timeline

One A/B test on Firebase Remote Config: 1–2 days. Infrastructure for regular A/B testing (Statsig/GrowthBook): 3–5 days. Cost is calculated individually.