Event-Driven Architecture for Web Application

Our company develops, supports, and maintains websites of any complexity, from simple one-page sites to large-scale clustered systems built on microservices. Our developers' expertise is confirmed by vendor certificates.
Development and maintenance of all types of websites:
  • Informational websites and web applications: business card websites, landing pages, corporate websites, online catalogs, quizzes, promo websites, blogs, news resources, informational portals, forums, aggregators
  • E-commerce websites and web applications: online stores, B2B portals, marketplaces, online exchanges, cashback websites, dropshipping platforms, product parsers
  • Business process management web applications: CRM systems, ERP systems, corporate portals, production management systems, information parsers
  • Electronic service websites and web applications: classified ads platforms, online schools, online cinemas, website builders, portals for electronic services, video hosting platforms, thematic portals

These are only some of the types of websites we work with; each can have its own features and functionality and be customized to the client's specific needs and goals.


Implementing Event-Driven Architecture for Web Applications

Event-Driven Architecture (EDA) is an architectural style where system components interact through publishing and consuming events. A producer publishes an event to a broker and doesn't know who will handle it. Consumers subscribe to events of interest independently. This radically reduces coupling between components.
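The decoupling can be illustrated with a minimal in-process bus (a sketch only; a real system would use a broker such as Kafka or RabbitMQ, and all names here are illustrative):

```typescript
// Minimal in-process event bus: the producer publishes by event type
// and never references the subscribers.
type Handler<T> = (event: T) => void;

class EventBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  // Consumers subscribe to event types independently of each other.
  subscribe<T>(type: string, handler: Handler<T>): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(type, list);
  }

  // The producer only knows the event type, not who handles it.
  publish<T>(type: string, event: T): void {
    for (const handler of this.handlers.get(type) ?? []) {
      handler(event);
    }
  }
}
```

A single order.placed publish can then fan out to inventory, notification, and analytics handlers without the order service referencing any of them.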

When EDA is Appropriate

  • Multiple systems must be notified of a single event (new order → inventory + notification + analytics)
  • Processing takes time, and blocking the HTTP request would be inefficient
  • Peak loads need to be smoothed out through a queue
  • Audit and change-history requirements
  • Integration with external systems via webhooks or CDC

Event Structure

interface DomainEvent<T = unknown> {
  id: string;            // UUID — for idempotency
  type: string;          // 'user.registered', 'order.placed'
  version: string;       // '1.0' — for schema evolution
  source: string;        // 'order-service'
  correlationId: string; // end-to-end ID across all services
  causationId?: string;  // ID of the event that caused this
  occurredAt: string;    // ISO 8601
  data: T;
}

// Concrete event
interface OrderPlacedEvent extends DomainEvent<{
  orderId: string;
  customerId: string;
  items: Array<{ productId: string; quantity: number; price: number }>;
  total: number;
  shippingAddress: Address;
}> {
  type: 'order.placed';
}
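Filling the envelope by hand in every producer is error-prone, so a small factory helps. The helper below is a hypothetical sketch (the `createEvent` name and its parameter shape are illustrative, not part of the structure above):

```typescript
import { randomUUID } from 'node:crypto';

// DomainEvent as defined above, repeated so the sketch is self-contained
interface DomainEvent<T = unknown> {
  id: string;
  type: string;
  version: string;
  source: string;
  correlationId: string;
  causationId?: string;
  occurredAt: string;
  data: T;
}

// Hypothetical helper: fills in the envelope fields so producers
// only supply what varies per event
function createEvent<T>(params: {
  type: string;
  source: string;
  correlationId: string;
  causationId?: string;
  data: T;
}): DomainEvent<T> {
  return {
    id: randomUUID(),
    version: '1.0',
    occurredAt: new Date().toISOString(),
    ...params
  };
}
```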

Apache Kafka — Primary Broker for EDA

import { Kafka, Partitioners } from 'kafkajs';
import { randomUUID } from 'node:crypto';

const kafka = new Kafka({
  clientId: 'order-service',
  brokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(',')
});

// Producer (call producer.connect() once at startup)
const producer = kafka.producer({
  createPartitioner: Partitioners.LegacyPartitioner
});

async function publishOrderPlaced(order: Order, correlationId: string): Promise<void> {
  await producer.send({
    topic: 'order.events',
    messages: [{
      key: order.id,  // partition by order ID: events for one order stay ordered
      value: JSON.stringify({
        id: randomUUID(),
        type: 'order.placed',
        version: '1.0',
        source: 'order-service',
        correlationId,
        occurredAt: new Date().toISOString(),
        data: {
          orderId: order.id,
          customerId: order.customerId,
          items: order.items,
          total: order.total,
          shippingAddress: order.shippingAddress
        }
      } satisfies OrderPlacedEvent),
      headers: {
        'content-type': 'application/json',
        'schema-version': '1.0'
      }
    }]
  });
}

// Consumer — Inventory Service (call consumer.connect() once at startup)
const consumer = kafka.consumer({ groupId: 'inventory-service' });

await consumer.subscribe({ topics: ['order.events'], fromBeginning: false });

await consumer.run({
  eachMessage: async ({ message }) => {
    if (!message.value) return;
    const event = JSON.parse(message.value.toString()) as DomainEvent;

    // Idempotency: skip events we've already processed
    if (await idempotencyRepo.exists(event.id)) return;

    try {
      if (event.type === 'order.placed') {
        const { data } = event as OrderPlacedEvent;
        await inventoryService.reserveStock(data.orderId, data.items);
      }
      await idempotencyRepo.mark(event.id);
    } catch (error) {
      // Publish to a Dead Letter Topic for analysis instead of blocking the partition
      await deadLetterProducer.send({
        topic: 'order.events.dlq',
        messages: [{
          value: message.value,
          headers: { 'failure-reason': (error as Error).message }
        }]
      });
    }
  }
});
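The idempotencyRepo used by the consumer is assumed. Below is a minimal in-memory sketch of its contract; in production the check-and-mark would be a database table with a unique constraint on the event id (e.g. INSERT ... ON CONFLICT DO NOTHING), so the operation is atomic across workers:

```typescript
// Illustrative in-memory implementation of the idempotency contract.
// NOT suitable for production: state is lost on restart and not shared
// between consumer instances.
class InMemoryIdempotencyRepo {
  private seen = new Set<string>();

  async exists(eventId: string): Promise<boolean> {
    return this.seen.has(eventId);
  }

  async mark(eventId: string): Promise<void> {
    this.seen.add(eventId);
  }
}
```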

Outbox Pattern — Guaranteed Delivery

Naive approach: save to the database, then publish to Kafka — this risks losing the event if a failure occurs between the two steps. The correct approach is the Transactional Outbox:

// Within a single database transaction
async function createOrder(dto: CreateOrderDto): Promise<Order> {
  return db.transaction(async (trx) => {
    // 1. Save order (knex returns an array of inserted rows)
    const [order] = await trx('orders').insert({ ...orderData }).returning('*');

    // 2. Save the event to the outbox table (in the same transaction!)
    const orderPlacedEvent = buildOrderPlacedEvent(order); // envelope as described above
    await trx('outbox_events').insert({
      id: randomUUID(), // from 'node:crypto'
      aggregate_id: order.id,
      event_type: 'order.placed',
      payload: JSON.stringify(orderPlacedEvent),
      status: 'pending',
      created_at: new Date()
    });

    return order;
  });
}

A separate Outbox Poller reads pending events and publishes them to Kafka:

// Cron job or background worker
async function processOutbox(): Promise<void> {
  // Row locks only hold inside a transaction; skipLocked lets parallel workers coexist
  await db.transaction(async (trx) => {
    const events = await trx('outbox_events')
      .where({ status: 'pending' })
      .orderBy('created_at')
      .limit(100)
      .forUpdate()
      .skipLocked();

    for (const event of events) {
      try {
        await producer.send({
          topic: getTopicForEventType(event.event_type),
          messages: [{ key: event.aggregate_id, value: event.payload }]
        });
        await trx('outbox_events')
          .where({ id: event.id })
          .update({ status: 'published', published_at: new Date() });
      } catch {
        await trx('outbox_events')
          .where({ id: event.id })
          .update({ retry_count: trx.raw('retry_count + 1') });
      }
    }
  });
}

Alternative — Debezium CDC: reads PostgreSQL WAL and publishes changes to Kafka without additional code.
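For reference, registering a Debezium Postgres connector with Kafka Connect looks roughly like the payload below (field names follow recent Debezium versions; host, credentials, and the table list are placeholders, not values from this project):

```json
{
  "name": "orders-outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "…",
    "database.dbname": "orders",
    "topic.prefix": "orders-db",
    "table.include.list": "public.outbox_events"
  }
}
```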

Choreography vs Orchestration

                 Choreography                Orchestration
Coordination     Services react to events    Central orchestrator
Coupling         Low                         Medium
Flow visibility  Hard to trace               Explicit in orchestrator code
Testing          Harder                      Easier
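The orchestration side can be sketched as an explicit list of steps with compensations (a toy saga runner; step names and the `runSaga` helper are illustrative):

```typescript
// A toy saga orchestrator: run steps in order; on failure, run the
// compensations of completed steps in reverse order.
type Step = {
  name: string;
  run: () => Promise<void>;
  compensate: () => Promise<void>;
};

async function runSaga(steps: Step[]): Promise<{ ok: boolean; compensated: string[] }> {
  const done: Step[] = [];
  for (const step of steps) {
    try {
      await step.run();
      done.push(step);
    } catch {
      // Roll back what has already succeeded, newest first.
      const compensated: string[] = [];
      for (const s of done.reverse()) {
        await s.compensate();
        compensated.push(s.name);
      }
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}
```

Because the flow lives in one place, it is easy to read and to unit-test, at the cost of the orchestrator knowing about every participating service.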

Event Sourcing as a Special Case of EDA

With Event Sourcing, all state changes are events published to both the event store and broker. Projections are built from these same events. EDA and ES work well together.
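Rebuilding a projection from the stream is just a fold over the events. A minimal sketch (event shapes are illustrative):

```typescript
// Event Sourcing in miniature: current order state is derived by
// replaying the event stream, not stored directly.
type OrderEvent =
  | { type: 'order.placed'; total: number }
  | { type: 'order.item_added'; price: number }
  | { type: 'order.cancelled' };

interface OrderState {
  status: 'placed' | 'cancelled';
  total: number;
}

function replay(events: OrderEvent[]): OrderState {
  return events.reduce<OrderState>(
    (state, event) => {
      switch (event.type) {
        case 'order.placed':
          return { status: 'placed', total: event.total };
        case 'order.item_added':
          return { ...state, total: state.total + event.price };
        case 'order.cancelled':
          return { ...state, status: 'cancelled' };
        default:
          return state;
      }
    },
    { status: 'placed', total: 0 }
  );
}
```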

Implementation Timeline

  • EDA for one scenario (one producer + 2–3 consumers) — 1–2 weeks
  • Outbox Pattern + idempotency + DLQ — another 1 week
  • Full EDA for 5–10 services with monitoring — 4–8 weeks