Implementing Event-Driven Architecture for Web Applications
Event-Driven Architecture (EDA) is an architectural style where system components interact through publishing and consuming events. A producer publishes an event to a broker and doesn't know who will handle it. Consumers subscribe to events of interest independently. This radically reduces coupling between components.
When EDA is Appropriate
- Multiple systems need to be notified of a single event (new order → inventory + notification + analytics)
- Processing takes time and blocking an HTTP request would be inefficient
- Peak load that needs to be smoothed through a queue
- Audit and change history requirements
- Integration with external systems via webhooks or CDC
Event Structure
interface DomainEvent<T = unknown> {
  id: string;            // UUID — for idempotency
  type: string;          // 'user.registered', 'order.placed'
  version: string;       // '1.0' — for schema evolution
  source: string;        // 'order-service'
  correlationId: string; // end-to-end ID across all services
  causationId?: string;  // ID of the event that caused this
  occurredAt: string;    // ISO 8601
  data: T;
}
// Concrete event
interface OrderPlacedEvent extends DomainEvent<{
  orderId: string;
  customerId: string;
  items: Array<{ productId: string; quantity: number; price: number }>;
  total: number;
  shippingAddress: Address;
}> {
  type: 'order.placed';
}
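Consumers receive events typed as the generic DomainEvent, so a discriminating type guard avoids unsafe casts. A minimal sketch (the `isOrderPlaced` guard is illustrative; the interfaces repeat the ones above so the snippet is self-contained):

```typescript
interface DomainEvent<T = unknown> {
  id: string;
  type: string;
  version: string;
  source: string;
  correlationId: string;
  causationId?: string;
  occurredAt: string;
  data: T;
}

interface OrderPlacedData {
  orderId: string;
  customerId: string;
  items: Array<{ productId: string; quantity: number; price: number }>;
  total: number;
}

// Narrows by the discriminating `type` field.
function isOrderPlaced(e: DomainEvent): e is DomainEvent<OrderPlacedData> {
  return e.type === 'order.placed';
}

const raw: DomainEvent = {
  id: 'evt-1',
  type: 'order.placed',
  version: '1.0',
  source: 'order-service',
  correlationId: 'corr-1',
  occurredAt: new Date().toISOString(),
  data: { orderId: 'o-1', customerId: 'c-1', items: [], total: 0 }
};

if (isOrderPlaced(raw)) {
  // Narrowed: the compiler now knows `raw.data.orderId` is a string.
  const { orderId } = raw.data;
}
```

Inside the guarded branch the compiler knows the shape of `data`, so handlers stay type-safe without `as any` casts.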
Apache Kafka — Primary Broker for EDA
import { Kafka, Partitioners } from 'kafkajs';
import { v4 as uuidv4 } from 'uuid';

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: (process.env.KAFKA_BROKERS ?? '').split(',')
});
// Producer — call producer.connect() once at startup
const producer = kafka.producer({
  createPartitioner: Partitioners.LegacyPartitioner
});
async function publishOrderPlaced(order: Order): Promise<void> {
  await producer.send({
    topic: 'order.events',
    messages: [{
      key: order.id, // partition by order ID
      value: JSON.stringify({
        id: uuidv4(),
        type: 'order.placed',
        version: '1.0',
        source: 'order-service',
        correlationId: context.correlationId, // from the current request context
        occurredAt: new Date().toISOString(),
        data: {
          orderId: order.id,
          customerId: order.customerId,
          items: order.items,
          total: order.total,
          shippingAddress: order.shippingAddress // required by OrderPlacedEvent
        }
      } satisfies OrderPlacedEvent),
      headers: {
        'content-type': 'application/json',
        'schema-version': '1.0'
      }
    }]
  });
}
// Consumer — Inventory Service
const consumer = kafka.consumer({ groupId: 'inventory-service' });
await consumer.connect();
await consumer.subscribe({ topics: ['order.events'], fromBeginning: false });
await consumer.run({
  eachMessage: async ({ topic, partition, message }) => {
    if (!message.value) return; // tombstone messages carry no payload
    const event = JSON.parse(message.value.toString()) as OrderPlacedEvent;
    // Idempotency: check if we've already processed this event
    const processed = await idempotencyRepo.exists(event.id);
    if (processed) return;
    try {
      if (event.type === 'order.placed') {
        await inventoryService.reserveStock(event.data.orderId, event.data.items);
      }
      await idempotencyRepo.mark(event.id);
    } catch (error) {
      // Publish to Dead Letter Topic for analysis
      await deadLetterProducer.send({
        topic: 'order.events.dlq',
        messages: [{
          value: message.value,
          headers: { 'failure-reason': (error as Error).message }
        }]
      });
    }
  }
});
Outbox Pattern — Guaranteed Delivery
Naive approach: save to the database, then publish to Kafka. This risks losing the event if the process crashes between the two steps. Correct approach: Transactional Outbox:
// Within a single database transaction
async function createOrder(dto: CreateOrderDto): Promise<Order> {
  return db.transaction(async (trx) => {
    // 1. Save order (insert().returning() yields an array of rows)
    const [order] = await trx('orders').insert({ ...orderData }).returning('*');
    // 2. Save event to outbox table (in the same transaction!)
    await trx('outbox_events').insert({
      id: uuidv4(),
      aggregate_id: order.id,
      event_type: 'order.placed',
      payload: JSON.stringify(orderPlacedEvent), // the DomainEvent built from `order`
      status: 'pending',
      created_at: new Date()
    });
    return order;
  });
}
A separate Outbox Poller reads pending events and publishes them to Kafka:
// Cron job or background worker
async function processOutbox(): Promise<void> {
  await db.transaction(async (trx) => {
    // FOR UPDATE SKIP LOCKED only holds row locks for the duration of a
    // transaction, so the whole batch must run inside one.
    const events = await trx('outbox_events')
      .where({ status: 'pending' })
      .orderBy('created_at')
      .limit(100)
      .forUpdate()
      .skipLocked();
    for (const event of events) {
      try {
        await producer.send({
          topic: getTopicForEventType(event.event_type),
          messages: [{ key: event.aggregate_id, value: event.payload }]
        });
        await trx('outbox_events')
          .where({ id: event.id })
          .update({ status: 'published', published_at: new Date() });
      } catch {
        await trx('outbox_events')
          .where({ id: event.id })
          .update({ retry_count: trx.raw('retry_count + 1') });
      }
    }
  });
}
Alternative — Debezium CDC: reads PostgreSQL WAL and publishes changes to Kafka without additional code.
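As a sketch, a Kafka Connect registration for Debezium's outbox EventRouter SMT might look like the following. Hostnames, credentials, and the connector name are placeholders; the column mappings follow the `outbox_events` table above:

```json
{
  "name": "outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "secret",
    "database.dbname": "orders",
    "topic.prefix": "order-service",
    "table.include.list": "public.outbox_events",
    "transforms": "outbox",
    "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
    "transforms.outbox.route.by.field": "event_type",
    "transforms.outbox.table.field.event.key": "aggregate_id",
    "transforms.outbox.table.field.event.payload": "payload"
  }
}
```

With CDC in place, the poller and its `status` bookkeeping become unnecessary: Debezium emits each outbox row exactly once from the WAL.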
Choreography vs Orchestration
| | Choreography | Orchestration |
|---|---|---|
| Coordination | Services react to events | Central orchestrator |
| Coupling | Low | Medium |
| Flow visibility | Hard to trace | Explicit in orchestrator code |
| Testing | Harder | Easier |
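The difference is easiest to see in code. A minimal orchestrator sketch (the `runOrderSaga` function and its service stubs are illustrative, not from an existing codebase): the whole flow, including compensation, is explicit in one function, which is exactly what makes orchestration easier to trace and test.

```typescript
type StepResult = { ok: boolean; reason?: string };

interface OrderSagaDeps {
  reserveStock(orderId: string): Promise<StepResult>;
  chargePayment(orderId: string): Promise<StepResult>;
  releaseStock(orderId: string): Promise<void>; // compensation
}

// The order flow lives in one place instead of being spread
// across event handlers in several services.
async function runOrderSaga(orderId: string, deps: OrderSagaDeps): Promise<StepResult> {
  const reserved = await deps.reserveStock(orderId);
  if (!reserved.ok) return reserved;

  const charged = await deps.chargePayment(orderId);
  if (!charged.ok) {
    // Explicit compensation; in choreography this would be a separate
    // consumer reacting to a payment.failed event.
    await deps.releaseStock(orderId);
    return charged;
  }
  return { ok: true };
}
```

In choreography the same logic exists, but only implicitly, as the sum of each service's event subscriptions.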
Event Sourcing as a Special Case of EDA
With Event Sourcing, all state changes are events published to both the event store and broker. Projections are built from these same events. EDA and ES work well together.
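A minimal sketch of a projection built by folding over an ordered event stream (the `project` function and the `OrderView` read model are illustrative; event names reuse the ones above):

```typescript
interface Evt { type: string; data: any }

interface OrderView { status: string; total: number }

// Rebuild a read model by folding left over the event history.
function project(events: Evt[]): OrderView {
  return events.reduce<OrderView>((view, e) => {
    switch (e.type) {
      case 'order.placed':
        return { status: 'placed', total: e.data.total };
      case 'order.shipped':
        return { ...view, status: 'shipped' };
      default:
        return view; // ignoring unknown events keeps projections forward-compatible
    }
  }, { status: 'none', total: 0 });
}
```

The same fold can serve an in-process read model or a consumer that materializes a table; replaying the full stream rebuilds the projection from scratch.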
Implementation Timeline
- EDA for one scenario (one producer + 2–3 consumers) — 1–2 weeks
- Outbox Pattern + idempotency + DLQ — another 1 week
- Full EDA for 5–10 services with monitoring — 4–8 weeks