A/B Testing Page Elements
A/B testing is a controlled experiment in which one group of users sees the original version (the control) and another sees a modified version (the variant). A statistically significant difference in conversion rates is strong evidence that the change caused the effect.
What Can Be Tested
- Headline and CTA button text
- Button color, size, and placement
- Form: number of fields, field order, labels
- Hero images and videos
- Social proof (testimonials, counters)
- Trust elements (security badges, guarantees)
- Pricing and price display
Minimum Sample Size
Before launching a test, calculate the required sample size:
```python
from scipy import stats
import math

def calculate_sample_size(baseline_rate, min_detectable_effect, alpha=0.05, power=0.8):
    """
    baseline_rate: current conversion rate (0.05 = 5%)
    min_detectable_effect: minimum effect to detect (0.10 = 10% relative improvement)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    p_avg = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * p_avg * (1 - p_avg)) +
         z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return math.ceil(n)

# Example: 3% conversion, want to detect a 15%+ relative improvement
n = calculate_sample_size(0.03, 0.15)
print(f"Need {n} users per variant = {n*2} total")
# Need 24193 users per variant = 48386 total
```
Client-Side A/B Test Implementation
```javascript
// Simple A/B test without external tools
function getVariant(testName) {
  const stored = localStorage.getItem(`ab_${testName}`)
  if (stored) return stored

  const variant = Math.random() < 0.5 ? 'control' : 'variant'
  localStorage.setItem(`ab_${testName}`, variant)

  // Send the assignment to analytics
  gtag('event', 'ab_test_assignment', {
    test_name: testName,
    variant: variant
  })
  return variant
}

// Apply the test
const variant = getVariant('cta_button_color')
const ctaButton = document.getElementById('main-cta')

if (variant === 'variant') {
  ctaButton.style.backgroundColor = '#FF6B35' // orange instead of blue
  ctaButton.textContent = 'Start for Free'
}
// control keeps the default styling

// Track the conversion
ctaButton.addEventListener('click', () => {
  gtag('event', 'cta_click', {
    test_name: 'cta_button_color',
    variant: variant
  })
})
```
Server-Side A/B Test (Preferred)
Client-side tests cause flickering: the user briefly sees the original page before JavaScript swaps in the variant. Assigning the variant on the server avoids this:
```php
// Laravel middleware: assign the variant before rendering
class AbTestMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        $testName = 'checkout_form_v2';
        $userId = auth()->id() ?? $request->session()->getId();

        // Deterministic assignment by user ID:
        // the same user always gets the same variant
        $variant = (crc32($userId . $testName) % 2 === 0) ? 'control' : 'variant';

        $request->merge(['ab_variants' => [$testName => $variant]]);
        View::share('ab_variants', [$testName => $variant]);

        $response = $next($request);
        $response->headers->set('X-AB-Variant', $variant);
        return $response;
    }
}
```
```blade
{{-- In the Blade template --}}
@if($ab_variants['checkout_form_v2'] === 'variant')
    @include('checkout.form-v2')
@else
    @include('checkout.form-v1')
@endif
```
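The deterministic hash split in the middleware is easy to sanity-check in isolation. A minimal Python sketch (assuming `zlib.crc32`, which computes the same CRC-32 checksum as PHP's `crc32` for the same bytes) confirms that assignment is stable for a given user and splits traffic roughly 50/50:

```python
# Sanity-check of deterministic hash bucketing (mirrors the PHP crc32 split)
import zlib

def get_variant(user_id: str, test_name: str) -> str:
    checksum = zlib.crc32((user_id + test_name).encode())
    return 'control' if checksum % 2 == 0 else 'variant'

# Stable: the same user always lands in the same bucket
assert get_variant('user-123', 'checkout_form_v2') == get_variant('user-123', 'checkout_form_v2')

# Roughly 50/50 across many users
buckets = [get_variant(f'user-{i}', 'checkout_form_v2') for i in range(10_000)]
share = buckets.count('variant') / len(buckets)
print(f"variant share: {share:.1%}")
```

Stability matters here: a user who is assigned randomly on every request would see both versions, contaminating the measurement.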
Results Analysis
```python
# Note: proportions_ztest lives in statsmodels, not scipy.stats
from statsmodels.stats.proportion import proportions_ztest
import numpy as np

def analyze_ab_test(control_visitors, control_conversions,
                    variant_visitors, variant_conversions):
    # Conversion rates
    cr_control = control_conversions / control_visitors
    cr_variant = variant_conversions / variant_visitors
    relative_change = (cr_variant - cr_control) / cr_control * 100

    # Two-sided z-test for proportions
    count = np.array([variant_conversions, control_conversions])
    nobs = np.array([variant_visitors, control_visitors])
    z_stat, p_value = proportions_ztest(count, nobs)

    print(f"Control: {cr_control:.2%} ({control_conversions}/{control_visitors})")
    print(f"Variant: {cr_variant:.2%} ({variant_conversions}/{variant_visitors})")
    print(f"Relative change: {relative_change:+.1f}%")
    print(f"P-value: {p_value:.4f}")
    print(f"Statistically significant: {'YES' if p_value < 0.05 else 'NO'}")

analyze_ab_test(
    control_visitors=3842, control_conversions=115,
    variant_visitors=3891, variant_conversions=148
)
```
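A p-value alone doesn't show how large the effect plausibly is. As a complement, here is a sketch of a 95% confidence interval for the absolute difference in conversion rates (normal approximation with an unpooled standard error), using the same example numbers:

```python
# 95% confidence interval for the absolute difference in conversion rates
# (normal approximation, unpooled standard error)
import math

def diff_ci(control_conv, control_vis, variant_conv, variant_vis, z=1.96):
    p1, p2 = control_conv / control_vis, variant_conv / variant_vis
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / control_vis + p2 * (1 - p2) / variant_vis)
    return diff - z * se, diff + z * se

low, high = diff_ci(115, 3842, 148, 3891)
print(f"95% CI for the difference: [{low:+.2%}, {high:+.2%}]")
# If the interval excludes 0, the result agrees with the z-test at alpha = 0.05
```

A wide interval that barely excludes zero is a hint to keep the test running rather than declare a winner.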
Common Mistakes
- Stopping the test early — the p-value fluctuates while data is still coming in; wait until the precomputed sample size is reached
- Testing several changes at once — you can't tell which change caused the effect (that's the domain of multivariate testing)
- Ignoring segments — a test can look neutral overall while, say, improving mobile conversion by 25%
- Ignoring seasonality — run the test over a representative period
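The first mistake, early stopping, is worth seeing in numbers. The simulation below (an illustrative A/A setup where both arms convert at the same 5% rate, so every "significant" result is a false positive) shows how peeking at interim p-values inflates the false positive rate well above the nominal 5%:

```python
# A/A simulation: both arms have the same 5% conversion rate, so any
# "significant" result is a false positive. Peeking every 200 users and
# stopping at the first p < 0.05 inflates the error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def two_prop_p(c1, n1, c2, n2):
    """Two-sided pooled z-test for two proportions."""
    p_pool = (c1 + c2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (c1 / n1 - c2 / n2) / se
    return 2 * stats.norm.sf(abs(z))

n_sims, n_users, step = 1000, 2000, 200
peek_hits = final_hits = 0
for _ in range(n_sims):
    a = np.cumsum(rng.random(n_users) < 0.05)  # cumulative conversions, arm A
    b = np.cumsum(rng.random(n_users) < 0.05)  # cumulative conversions, arm B
    ps = [two_prop_p(a[k - 1], k, b[k - 1], k)
          for k in range(step, n_users + 1, step)]
    peek_hits += min(ps) < 0.05   # stop at the first "significant" peek
    final_hits += ps[-1] < 0.05   # decide only at the planned sample size

print(f"False positive rate with peeking: {peek_hits / n_sims:.1%}")
print(f"False positive rate at fixed n:   {final_hits / n_sims:.1%}")
```

Checking only at the planned sample size keeps the error near the nominal 5%; repeated peeking multiplies the chances of a fluke crossing the threshold.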
Delivery Time
Setting up an A/B test with server-side split, analytics integration, and results analysis — 2–4 business days.