Developing Event Schema for Microservices
The event schema is the contract between microservices that communicate through queues. Without a strict schema there is chaos: a producer renames a field, and a consumer crashes at 3:00 AM. A properly designed event schema with versioning makes changes explicit and controllable.
Event Schema Design Principles
Events describe facts, not commands. OrderShipped is a fact; ShipOrder is a command. An event has already happened and cannot be undone, only compensated by another event.
The schema should be self-sufficient. A consumer should not need to make additional requests to process an event: all the data it needs is in the event body.
Backward compatibility by default. Old consumers must keep working with new events without changes.
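The "facts, not commands" principle implies that a mistake is corrected by appending a compensating event, never by mutating history. A minimal sketch; the compensating event name `order.shipment_cancelled` is hypothetical, not part of the schema above:

```typescript
// Events are immutable facts; current state is derived by folding over the log.
type OrderEvent =
  | { type: 'order.shipped'; orderId: number }
  | { type: 'order.shipment_cancelled'; orderId: number }; // hypothetical compensating event

function orderStatus(log: OrderEvent[]): 'shipped' | 'not_shipped' {
  // The log is append-only: a wrong shipment is undone by a new fact,
  // never by deleting the original OrderShipped event.
  return log.reduce<'shipped' | 'not_shipped'>((status, e) => {
    if (e.type === 'order.shipped') return 'shipped';
    if (e.type === 'order.shipment_cancelled') return 'not_shipped';
    return status;
  }, 'not_shipped');
}

const log: OrderEvent[] = [
  { type: 'order.shipped', orderId: 12345 },
  { type: 'order.shipment_cancelled', orderId: 12345 },
];
console.log(orderStatus(log)); // → "not_shipped"
```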
Event Structure
{
  "eventId": "01HQ2XK4VB8M9QXYZ123456789",
  "eventType": "order.shipped",
  "eventVersion": "1.2",
  "occurredAt": "2026-03-28T14:22:00.000Z",
  "producedBy": "order-service",
  "correlationId": "req-abc-123",
  "causationId": "cmd-xyz-456",
  "aggregateType": "Order",
  "aggregateId": "12345",
  "aggregateVersion": 7,
  "payload": {
    "orderId": 12345,
    "userId": 67890,
    "carrier": "DHL",
    "trackingCode": "JD123456789DE",
    "estimatedDelivery": "2026-03-31",
    "items": [
      {"sku": "PROD-001", "quantity": 2, "warehouseId": "WH-MSK"}
    ]
  }
}
Required envelope fields:
- eventId: ULID or UUID, a unique identifier used for idempotency
- eventType: hierarchical, domain.aggregate.action
- eventVersion: semantic version of the payload schema
- occurredAt: UTC, ISO 8601
- correlationId: for tracing the request chain
- aggregateId + aggregateVersion: for optimistic locking
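The eventId field is what makes consumer-side idempotency possible: redelivered messages are detected and skipped. A minimal sketch with an in-memory dedup store; a real consumer would keep processed IDs in Redis or a database table with a TTL:

```typescript
interface Envelope {
  eventId: string;
  eventType: string;
  payload: unknown;
}

class IdempotentConsumer {
  // In production this set would live in Redis or a DB table, not in memory.
  private processed = new Set<string>();
  public handled = 0;

  handle(event: Envelope): void {
    if (this.processed.has(event.eventId)) return; // duplicate delivery: skip
    this.processed.add(event.eventId);
    this.handled += 1; // the actual business logic would run here
  }
}

const consumer = new IdempotentConsumer();
const event: Envelope = {
  eventId: '01HQ2XK4VB8M9QXYZ123456789',
  eventType: 'order.shipped',
  payload: {},
};
consumer.handle(event);
consumer.handle(event); // redelivered by the broker
console.log(consumer.handled); // → 1
```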
Avro Schema with Evolution
{
  "type": "record",
  "name": "OrderShipped",
  "namespace": "com.example.orders.events",
  "doc": "Order shipment event from warehouse",
  "fields": [
    {"name": "eventId", "type": "string"},
    {"name": "eventType", "type": "string", "default": "order.shipped"},
    {"name": "occurredAt", "type": {"type": "long", "logicalType": "timestamp-millis"}},
    {"name": "orderId", "type": "long"},
    {"name": "userId", "type": "long"},
    {"name": "carrier", "type": "string"},
    {"name": "trackingCode", "type": "string"},
    {
      "name": "estimatedDelivery",
      "type": ["null", "string"],
      "default": null,
      "doc": "ISO date, may be absent for some carriers"
    },
    {
      "name": "warehouseId",
      "type": ["null", "string"],
      "default": null,
      "doc": "Added in v1.1, optional field for backward compatibility"
    },
    {
      "name": "shippingCost",
      "type": ["null", {"type": "bytes", "logicalType": "decimal", "precision": 10, "scale": 2}],
      "default": null,
      "doc": "Added in v1.2"
    }
  ]
}
Evolution rules for backward compatibility:
- New fields always come with a default (null or a value)
- Required fields cannot be deleted
- Field types cannot be changed
- Fields cannot be renamed (add an alias, then rename in a major version)
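These rules can be checked mechanically before a schema is registered. A simplified sketch of the "new field needs a default" rule over Avro-style field lists; in practice this check belongs to the Schema Registry, the code only illustrates the idea:

```typescript
interface AvroField {
  name: string;
  type: unknown;
  default?: unknown;
}

// Simplified backward-compatibility check: every field added in the new
// schema must carry a default, otherwise data written with the old schema
// cannot be read.
function isBackwardCompatible(oldFields: AvroField[], newFields: AvroField[]): boolean {
  const oldNames = new Set(oldFields.map(f => f.name));
  return newFields
    .filter(f => !oldNames.has(f.name))
    .every(f => 'default' in f);
}

const v11: AvroField[] = [{ name: 'orderId', type: 'long' }];
const v12ok: AvroField[] = [
  ...v11,
  { name: 'shippingCost', type: ['null', 'bytes'], default: null },
];
const v12bad: AvroField[] = [
  ...v11,
  { name: 'shippingCost', type: 'bytes' }, // no default: breaks reading old data
];

console.log(isBackwardCompatible(v11, v12ok));  // → true
console.log(isBackwardCompatible(v11, v12bad)); // → false
```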
Versioning and Compatibility Strategies
# Configure Schema Registry — BACKWARD compatibility for all order events
curl -X PUT http://schema-registry:8081/config/order-events-value \
-H "Content-Type: application/vnd.schemaregistry.v1+json" \
-d '{"compatibility": "BACKWARD_TRANSITIVE"}'
# BACKWARD_TRANSITIVE — new schema is compatible with ALL previous versions,
# not just the latest
A major (breaking) change means a new topic:
order-events-v1 → for consumers on old schema
order-events-v2 → new schema, consumers migrate gradually
Transition period: the producer publishes to both topics. After full migration, order-events-v1 is deprecated.
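The transition period can be handled in the publisher itself behind a flag. A sketch; the `Producer` interface here is a stand-in for a real Kafka producer:

```typescript
interface Producer {
  send(topic: string, message: unknown): void;
}

// During migration every event goes to both the v1 and v2 topics;
// once all consumers are on v2, disable dualPublish and retire v1.
function publishOrderEvent(
  producer: Producer,
  event: unknown,
  dualPublish: boolean,
): string[] {
  const topics = dualPublish
    ? ['order-events-v1', 'order-events-v2']
    : ['order-events-v2'];
  for (const topic of topics) producer.send(topic, event);
  return topics;
}

const sent: string[] = [];
const fakeProducer: Producer = { send: (topic) => sent.push(topic) };
publishOrderEvent(fakeProducer, { eventType: 'order.shipped' }, true);
console.log(sent); // → ['order-events-v1', 'order-events-v2']
```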
Event Catalog — Documenting Schemas
For a multi-service team, a central event registry is critical. Use AsyncAPI:
# asyncapi.yaml
asyncapi: 3.0.0
info:
  title: Order Service Events
  version: 1.0.0
  description: Events published by Order Service
channels:
  order-events:
    address: order-events
    messages:
      OrderCreated:
        $ref: '#/components/messages/OrderCreated'
      OrderShipped:
        $ref: '#/components/messages/OrderShipped'
      OrderCancelled:
        $ref: '#/components/messages/OrderCancelled'
components:
  messages:
    OrderCreated:
      name: OrderCreated
      title: Order Created
      summary: Published on successful creation of a new order
      contentType: application/avro
      headers:
        type: object
        properties:
          correlationId:
            type: string
            description: ID of the incoming HTTP request
      payload:
        type: object
        required: [eventId, orderId, userId, items, totalAmount]
        properties:
          eventId:
            type: string
            format: ulid
          orderId:
            type: integer
            format: int64
          userId:
            type: integer
            format: int64
          items:
            type: array
            items:
              type: object
              properties:
                sku:
                  type: string
                quantity:
                  type: integer
                price:
                  type: number
          totalAmount:
            type: number
          createdAt:
            type: string
            format: date-time
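A consumer can enforce the documented contract at runtime. A hand-rolled sketch checking the `required` list from the OrderCreated message; a real setup would compile the AsyncAPI/JSON Schema with a validator library such as Ajv instead:

```typescript
// The required fields documented for OrderCreated in the AsyncAPI spec.
const required = ['eventId', 'orderId', 'userId', 'items', 'totalAmount'];

// Returns the names of required fields missing from the payload.
function missingFields(payload: Record<string, unknown>): string[] {
  return required.filter(f => !(f in payload));
}

const good = { eventId: '01HQ2XK4VB8M9QXYZ123456789', orderId: 1, userId: 2, items: [], totalAmount: 99.9 };
const bad = { eventId: '01HQ2XK4VB8M9QXYZ123456789', orderId: 1 };

console.log(missingFields(good)); // → []
console.log(missingFields(bad));  // → ['userId', 'items', 'totalAmount']
```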
Typed Event Publisher (TypeScript/Node.js)
import { SchemaRegistry } from '@kafkajs/confluent-schema-registry';
import { Kafka } from 'kafkajs';
import { ulid } from 'ulid';

interface EventEnvelope<T> {
  eventId: string;
  eventType: string;
  eventVersion: string;
  occurredAt: string;
  producedBy: string;
  correlationId?: string;
  aggregateType: string;
  aggregateId: string;
  aggregateVersion: number;
  payload: T;
}

interface OrderShippedPayload {
  orderId: number;
  userId: number;
  carrier: string;
  trackingCode: string;
  estimatedDelivery?: string;
}

class OrderEventPublisher {
  private registry: SchemaRegistry;
  private producer: ReturnType<Kafka['producer']>;

  constructor(registry: SchemaRegistry, producer: ReturnType<Kafka['producer']>) {
    this.registry = registry;
    this.producer = producer;
  }

  async publishOrderShipped(data: OrderShippedPayload, correlationId?: string): Promise<void> {
    const envelope: EventEnvelope<OrderShippedPayload> = {
      eventId: ulid(),
      eventType: 'order.shipped',
      eventVersion: '1.2',
      occurredAt: new Date().toISOString(),
      producedBy: 'order-service',
      correlationId,
      aggregateType: 'Order',
      aggregateId: String(data.orderId),
      aggregateVersion: await this.getAggregateVersion(data.orderId),
      payload: data,
    };

    const schemaId = await this.registry.getLatestSchemaId('order-events-value');
    const encoded = await this.registry.encode(schemaId, envelope);

    await this.producer.send({
      topic: 'order-events',
      messages: [{
        key: String(data.orderId),
        value: encoded,
        headers: {
          'correlation-id': correlationId ?? '',
          'event-type': 'order.shipped',
        },
      }],
    });
  }

  private async getAggregateVersion(orderId: number): Promise<number> {
    // Load the current aggregate version from the order store;
    // the implementation depends on the persistence layer.
    throw new Error('not implemented');
  }
}
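On the consumer side, the `event-type` header set by the publisher allows routing a message to the right handler before decoding the Avro body. A sketch of such a dispatcher; the handler registration API is an assumption, not part of any library:

```typescript
type Handler = (payload: unknown) => void;

class EventDispatcher {
  private handlers = new Map<string, Handler>();

  on(eventType: string, handler: Handler): void {
    this.handlers.set(eventType, handler);
  }

  // Routes by the 'event-type' Kafka header; unknown types are ignored
  // so that newly introduced event types don't break old consumers.
  dispatch(headers: Record<string, string>, payload: unknown): boolean {
    const handler = this.handlers.get(headers['event-type']);
    if (!handler) return false;
    handler(payload);
    return true;
  }
}

const dispatcher = new EventDispatcher();
let shippedCount = 0;
dispatcher.on('order.shipped', () => { shippedCount += 1; });

dispatcher.dispatch({ 'event-type': 'order.shipped' }, {});
dispatcher.dispatch({ 'event-type': 'order.unknown' }, {}); // ignored
console.log(shippedCount); // → 1
```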
Event Schema Testing
// Contract testing — verify producer publishes what consumer expects
@SpringBootTest
class OrderEventContractTest {
@Test
void orderShippedEvent_shouldMatchConsumerExpectations() throws Exception {
OrderShipped event = OrderShipped.newBuilder()
.setEventId(UUID.randomUUID().toString())
.setOrderId(12345L)
.setUserId(67890L)
.setCarrier("DHL")
.setTrackingCode("JD123456789DE")
.build();
// Serialize with Avro
byte[] serialized = avroSerializer.serialize("order-events", event);
// Deserialize as consumer (different service)
OrderShipped deserialized = (OrderShipped) avroDeserializer.deserialize("order-events", serialized);
assertThat(deserialized.getOrderId()).isEqualTo(12345L);
assertThat(deserialized.getTrackingCode()).isEqualTo("JD123456789DE");
// Check backward compatibility: old consumer without warehouseId field
OldOrderShipped oldDeserialized = (OldOrderShipped) oldDeserializer.deserialize("order-events", serialized);
assertThat(oldDeserialized.getOrderId()).isEqualTo(12345L);
// warehouseId absent — no failure
}
}
Timeline
Day 1 — workshop with service teams: create Event Storming map, define all domain events and boundaries.
Day 2 — develop Avro schemas for each event type, define naming rules and envelope structure. Register in Schema Registry.
Day 3 — implement typed Event Publishers in each producer service, AsyncAPI documentation.
Day 4 — contract tests, integrate compatibility checks into CI/CD pipeline, instructions for team on schema evolution rules.