Microservices Architecture Implementation for Web Application
Microservices architecture: breaking a monolith into independently deployable services, each owning its business area. Each service has its own database, its own deployment, and its own team. The point is not request scale but team scale and change frequency.
When to Migrate to Microservices
Microservices solve organizational problems, not technical ones. Signs you're ready:
- Three or more teams work in one monolith and block each other
- Different parts of the system require different scaling
- Critical parts (payments, notifications) need independent deployment
- Different tech stacks are justified for different tasks
A well-architected monolith often beats premature decomposition.
Service Decomposition
By Business Capability:
┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ User Svc │ │ Product Svc │ │ Order Svc │ │ Payment Svc │
│ │ │ │ │ │ │ │
│ Auth │ │ Catalog │ │ Cart │ │ Stripe │
│ Profiles │ │ Search │ │ Checkout │ │ Refunds │
│ Permissions │ │ Inventory │ │ History │ │ Invoices │
└──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘
│ │ │ │
└─────────────────┴─────────────────┴─────────────────┘
Message Bus (Kafka)
Each service gets its own PostgreSQL (or MongoDB or Redis where appropriate). No shared databases between services.
Inter-Service Communication
Synchronous (REST/gRPC): request-response, suited to user-facing requests:
// Order Service calls Product Service to check availability
const productClient = new ProductServiceClient(process.env.PRODUCT_SERVICE_URL);
async function createOrder(items: OrderItem[]) {
// Check product availability synchronously
const availability = await productClient.checkAvailability(
items.map(i => ({ productId: i.productId, quantity: i.quantity }))
);
if (availability.some(a => !a.available)) {
throw new InsufficientStockError();
}
// ...
}
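Synchronous calls couple the caller's availability to the callee's, so every inter-service request should carry a deadline. A minimal sketch (the `withTimeout` helper and the 2-second budget are assumptions for illustration, not an established API):

```typescript
// Race a promise against a deadline so a slow dependency fails fast
// instead of stalling the whole request.
async function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, deadline]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // don't leak the timer on success
  }
}

// Usage: fail the order flow if Product Service doesn't answer in 2s
// const availability = await withTimeout(
//   productClient.checkAvailability(items), 2000, 'checkAvailability');
```

In production you'd typically get this (plus retries and circuit breaking) from the RPC client or a service mesh rather than hand-rolling it.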
Asynchronous (events via Kafka): for operations that don't need an immediate response:
// Order Service publishes an event after creating an order
// (kafkajs: producer = kafka.producer(), connected at startup)
await producer.send({
topic: 'order.events',
messages: [{
key: order.id,
value: JSON.stringify({
type: 'OrderCreated',
orderId: order.id,
customerId: order.customerId,
items: order.items,
total: order.total,
occurredAt: new Date().toISOString()
})
}]
});
// Notification Service consumes 'order.events'
// (kafkajs: consumer = kafka.consumer({ groupId: 'notifications' }))
await consumer.subscribe({ topic: 'order.events' });
await consumer.run({
  eachMessage: async ({ message }) => {
    const event = JSON.parse(message.value!.toString());
    if (event.type === 'OrderCreated') {
      await notificationService.sendConfirmationEmail(event.customerId, event.orderId);
    }
  },
});
Strangler Fig Pattern for Monolith Migration
Gradual migration without a "big rewrite":
- Identify the most isolated module in the monolith (usually notifications, search, or auth)
- Put a proxy (API Gateway) in front of the monolith
- Extract the module into a separate service
- Switch the proxy to the new service
- Remove the code from the monolith
- Repeat for the next module
# API Gateway (nginx) routes by path
location /api/auth/ {
proxy_pass http://auth-service:3001;
}
location /api/notifications/ {
proxy_pass http://notification-service:3002;
}
location /api/ {
proxy_pass http://monolith:8080; # rest to monolith
}
Data Management
Database per Service — each service owns its data:
# docker-compose.yml
services:
  user-db:
    image: postgres:15
    environment:
      POSTGRES_DB: users
      POSTGRES_PASSWORD: ${USER_DB_PASSWORD}  # postgres image won't start without a password
  order-db:
    image: postgres:15
    environment:
      POSTGRES_DB: orders
      POSTGRES_PASSWORD: ${ORDER_DB_PASSWORD}
  product-db:
    image: postgres:15
    environment:
      POSTGRES_DB: products
      POSTGRES_PASSWORD: ${PRODUCT_DB_PASSWORD}
  notification-db:
    image: redis:7
Saga Pattern for distributed transactions (see separate page).
Shared data via API: if Order Service needs user data, it asks User Service over its API; it never reads or writes User Service's database directly.
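A sketch of that rule from Order Service's side. The `UserServiceClient`, its URL, path, and response shape are all assumptions for illustration:

```typescript
// Order Service never touches user-db; it asks User Service over HTTP.
interface UserSummary {
  id: string;
  email: string;
  name: string;
}

class UserServiceClient {
  constructor(
    private baseUrl: string,
    private fetchFn: typeof fetch = fetch, // injectable for tests
  ) {}

  async getUser(userId: string): Promise<UserSummary> {
    const res = await this.fetchFn(`${this.baseUrl}/users/${encodeURIComponent(userId)}`);
    if (!res.ok) throw new Error(`User Service responded ${res.status}`);
    return (await res.json()) as UserSummary;
  }
}

// const users = new UserServiceClient(process.env.USER_SERVICE_URL!);
// const customer = await users.getUser(order.customerId);
```

If the call is on a hot path, cache the response or subscribe to `user.events` instead of calling synchronously on every request.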
Infrastructure
| Component | Tool |
|---|---|
| Container orchestration | Kubernetes |
| API Gateway | Kong, Traefik, AWS API Gateway |
| Service Discovery | Consul, Kubernetes DNS |
| Config Management | Consul KV, Vault |
| Message Broker | Apache Kafka, RabbitMQ |
| Distributed Tracing | Jaeger, Zipkin |
| Centralized Logging | ELK Stack, Loki + Grafana |
| Health Checks | Kubernetes liveness/readiness probes |
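The liveness/readiness split from the table can be sketched as a plain handler. The `/healthz` and `/readyz` paths are conventions, not requirements; Kubernetes probes hit whatever paths you configure, and the dependency checks are placeholders:

```typescript
// Liveness: the process is up. Readiness: dependencies are reachable.
// A `Check` is a placeholder for e.g. a db.ping() or a broker health call.
type Check = () => Promise<boolean>;

function makeHealthHandler(readinessChecks: Check[]) {
  return async (path: string): Promise<{ status: number; body: string }> => {
    if (path === '/healthz') return { status: 200, body: 'ok' }; // liveness probe
    if (path === '/readyz') {
      const results = await Promise.all(
        readinessChecks.map((check) => check().catch(() => false)), // a throwing check = not ready
      );
      return results.every(Boolean)
        ? { status: 200, body: 'ready' }
        : { status: 503, body: 'not ready' };
    }
    return { status: 404, body: 'not found' };
  };
}

// Example wiring with node:http:
// import { createServer } from 'node:http';
// const handler = makeHealthHandler([dbPing]);
// createServer(async (req, res) => {
//   const { status, body } = await handler(req.url ?? '');
//   res.writeHead(status).end(body);
// }).listen(3000);
```

Keep liveness dumb (never check dependencies there, or one dead database restarts every pod) and put dependency checks only behind readiness.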
Observability
Each service should export:
- Metrics to Prometheus (RED: Rate, Errors, Duration)
- Traces to Jaeger (OpenTelemetry SDK)
- Logs in structured JSON → Loki or Elasticsearch
// OpenTelemetry tracing in Node.js
import { trace, context, SpanStatusCode } from '@opentelemetry/api';
const tracer = trace.getTracer('order-service');
async function processOrder(orderId: string) {
const span = tracer.startSpan('processOrder');
span.setAttribute('order.id', orderId);
try {
await context.with(trace.setSpan(context.active(), span), async () => {
await validateOrder(orderId); // child span
await chargePayment(orderId); // child span
await notifyCustomer(orderId); // child span
});
span.setStatus({ code: SpanStatusCode.OK });
  } catch (err) {
    span.recordException(err as Error); // `err` is `unknown` under strict TS
    span.setStatus({ code: SpanStatusCode.ERROR });
throw err;
} finally {
span.end();
}
}
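Tracing is shown above; the RED metrics bullet can be sketched too. A minimal in-process tracker, purely illustrative (a real service would use a Prometheus client library and expose the counters on /metrics):

```typescript
// RED per route: Rate (count), Errors (errors), Duration (totalSeconds).
interface RedStats {
  count: number;        // Rate: requests handled
  errors: number;       // Errors: requests that threw
  totalSeconds: number; // Duration: summed wall-clock time
}

const redStats = new Map<string, RedStats>();

async function instrumented<T>(route: string, fn: () => Promise<T>): Promise<T> {
  const entry = redStats.get(route) ?? { count: 0, errors: 0, totalSeconds: 0 };
  redStats.set(route, entry);
  const start = Date.now();
  try {
    return await fn();
  } catch (err) {
    entry.errors += 1; // failures still count toward rate and duration
    throw err;
  } finally {
    entry.count += 1;
    entry.totalSeconds += (Date.now() - start) / 1000;
  }
}

// Usage: const order = await instrumented('createOrder', () => createOrder(items));
```

A Prometheus client would replace the map with a labeled counter plus a histogram, so you get percentiles instead of a running sum.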
Implementation Timeline
- Decompose monolith and extract first service — 3–6 weeks
- Infrastructure setup (Kubernetes + Kafka + tracing) — 2–4 weeks in parallel
- Full migration of medium monolith (5–10 services) — 4–8 months
- Gradual migration via Strangler Fig — 1–2 years for large monolith