Microservices Backend Architecture for Mobile Applications
A transition to microservices is justified not by trendiness but by concrete operational problems in the monolith: a team of 15+ developers conflicting in one codebase, a release cycle stretching to two weeks because of cross-team coordination, or one "hot" feature (streaming, payment processing) that needs to scale separately instead of dragging the entire monolith along.
Where Microservices Create More Problems Than Solutions
Start with the antipatterns: that is where budgets usually go.
Distributed monolith. The monolith is split into 15 services, but each mobile client request synchronously waits on the chain API Gateway → UserService → OrderService → ProductService → PaymentService. If one service lags, the entire request hangs. That is not a microservice architecture; it is the monolith over HTTP with extra latency overhead. The telltale sign: services cannot work independently, and deploying one requires coordinating with the others.
Illusory data consistency. OrderService synchronously calls InventoryService and gets 200 OK, but between the inventory check and the deduction another request grabs the last item: a race condition at the level of the distributed system. The solution is the Saga pattern, either as choreography (events via Kafka/RabbitMQ) or as orchestration (Temporal, Apache Camel).
Overhead on small teams. Three developers, five services: each deploy means updating five Docker images, five Helm charts, and five sets of environment variables. Velocity drops and configuration errors grow. For teams under eight people, a Modular Monolith or a monolith-first approach with later decomposition is the more honest choice.
Building Microservices Architecture for Mobile App
Domain Decomposition (Domain-Driven Design)
Bounded Context is the core principle: each service owns its data and never reads another service's database directly. A typical decomposition for an e-commerce mobile app:
| Service | Responsibility | Database |
|---|---|---|
| user-service | Registration, profile, authentication | PostgreSQL |
| catalog-service | Products, categories, search | PostgreSQL + Elasticsearch |
| order-service | Orders, statuses, history | PostgreSQL |
| payment-service | Payment methods, transactions | PostgreSQL |
| notification-service | Push, email, SMS | Redis + PostgreSQL |
| media-service | Media upload and processing | S3 + PostgreSQL |
API Gateway — Single Entry Point for Mobile Client
The mobile client never knows the service topology: one host, one TLS certificate. The gateway handles JWT authentication (so it is not duplicated in each service), rate limiting, API version routing, and request transformation.
Technologies: Kong (production-proven, with plugins for auth, rate limiting, and logging), AWS API Gateway if the infrastructure runs on AWS, Traefik for a Kubernetes-native setup. Teams that need custom logic can build their own gateway in Go as a BFF (Backend for Frontend), especially when the mobile client needs data from multiple services aggregated into one response.
Asynchronous Communication
Use synchronous service calls only where the result is needed immediately (a payment limit check, for example). Everything else goes through a message broker.
Apache Kafka for high-load scenarios: event sourcing, audit logs, metric streaming. The order.created topic is subscribed to by notification-service (send a push), analytics-service (update metrics), and loyalty-service (award points). Each consumer is independent; there is no coupling.
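The fan-out can be illustrated with a minimal in-process event bus in Go. This is a stand-in for a Kafka topic with independent consumers, not a Kafka client; the handler names mirror the services above:

```go
package main

import "fmt"

// bus is a minimal in-process stand-in for a Kafka topic:
// every subscriber receives every published event.
type bus struct {
	subs map[string][]func(payload string)
}

func newBus() *bus { return &bus{subs: map[string][]func(string){}} }

func (b *bus) subscribe(topic string, h func(string)) {
	b.subs[topic] = append(b.subs[topic], h)
}

func (b *bus) publish(topic, payload string) {
	for _, h := range b.subs[topic] {
		h(payload)
	}
}

func main() {
	b := newBus()
	// Each consumer is independent; the publisher knows none of them.
	b.subscribe("order.created", func(id string) { fmt.Println("notification-service: push for", id) })
	b.subscribe("order.created", func(id string) { fmt.Println("analytics-service: metrics for", id) })
	b.subscribe("order.created", func(id string) { fmt.Println("loyalty-service: points for", id) })

	b.publish("order.created", "order-42")
}
```

Adding a fourth consumer is one more subscribe call; the publisher's code does not change, which is exactly the decoupling the broker buys you.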
RabbitMQ for task queue: video transcoding, PDF generation, email sending. Simpler operationally, sufficient for most mobile apps.
Case: a fitness streaming platform with 200,000 MAU. The original Node.js monolith started timing out on /api/workouts/start, which synchronously performed logging, progress updates, view-counter increments, and achievement checks. After decomposition, workout-service publishes a workout.started event to Kafka and the other services handle it asynchronously. Endpoint latency dropped from 800 ms to 40 ms.
Service Mesh and Observability
In microservices without observability you are flying blind. The essential minimum:
- Distributed tracing: Jaeger or Zipkin with the OpenTelemetry SDK in each service. When a mobile client complains about a slow response, the trace shows which service is responsible.
- Centralized logging: ELK Stack (Elasticsearch + Logstash + Kibana) or Loki + Grafana. A correlation ID (the X-Trace-ID header) is passed through all services.
- Service mesh: Istio or Linkerd for mTLS between services, plus circuit breaking, retries, and timeouts at the infrastructure level with no code changes.
Circuit Breaker Protecting Mobile Client
If payment-service degrades, order-service must return a fallback quickly rather than wait out a 30-second timeout. Resilience4j (Java), Polly (.NET), go-circuitbreaker: the circuit breaker opens after N consecutive errors, immediately returns 503, and retries after 30 seconds. The mobile client gets a response in 100 ms instead of a timeout.
Implementation Process
Migrating a monolith to microservices is not a "rewrite everything at once" project. Use the Strangler Fig pattern: build new features as separate services while old ones are gradually extracted from the monolith. Start with the least-coupled domains (usually notifications, media, analytics).
Timeline: architecture design plus basic infrastructure (gateway, broker, monitoring) takes 4–6 weeks. Full decomposition of a product monolith into 5–8 services with CI/CD takes 16–24 weeks.