Web Application Architecture Design

Our company develops, supports, and maintains websites of any complexity, from simple one-page sites to large-scale clustered systems built on microservices. Our developers' expertise is confirmed by vendor certificates.
Development and maintenance of all types of websites:

  • Informational websites and web applications: business card websites, landing pages, corporate websites, online catalogs, quizzes, promo websites, blogs, news resources, informational portals, forums, aggregators
  • E-commerce websites and web applications: online stores, B2B portals, marketplaces, online exchanges, cashback websites, dropshipping platforms, product parsers
  • Business process management web applications: CRM systems, ERP systems, corporate portals, production management systems, information parsers
  • Electronic service websites and web applications: classified ads platforms, online schools, online cinemas, website builders, portals for electronic services, video hosting platforms, thematic portals

These are just some of the types of websites we work with; each can have its own specific features and functionality and can be customized to the specific needs and goals of the client.


Web application architecture is a set of decisions that are difficult or expensive to change later. The choice of database, how services are organized, scaling strategy — each of these decisions sets boundaries for what can be built in two years without a complete rewrite.

Good architectural decisions aren't those that use the most modern technologies. They're the ones that account for real constraints: team size, expected load, operational budget, and pace of product change.

Where to Start

Before choosing technologies, answer structural questions:

What's the load pattern? Read-heavy workloads (news portal, directory) call for one caching strategy; write-heavy (exchange, monitoring system) for another; mixed (e-commerce) for a third.

What's the acceptable latency? For a trading platform, 100 ms is catastrophic; for a CMS it is acceptable.

Are there traffic spikes? If traffic is even, the architecture is simpler. If Black Friday brings 100x load once a year, you need autoscaling or buffering through queues.

Where are the transaction boundaries? Can the database be split, or does everything depend on ACID guarantees within a single store?

Typical Web Application Layers

[Client]
    ↓ HTTPS
[CDN / Edge Cache]
    ↓ Cache Miss
[Load Balancer]
    ↓
[Application — N instances]
    ├── [Cache — Redis/Memcached]
    ├── [Queue — RabbitMQ/Kafka]
    └── [Database — Primary + Replica]
              ↓
         [Object Storage — S3]

Each layer solves one problem. CDN — static content and edge caching. Load Balancer — distribution and TLS termination. Application — business logic. Redis — hot data and sessions. Queue — asynchronous tasks that can't run within an HTTP request.

Monolith vs Microservices

A standard question that too often gets the standard wrong answer.

A monolith is the right choice for most new projects with teams of up to 15–20 people. Reasons:

  • Single transaction across multiple aggregates without saga patterns
  • Simple deployment and observability (one process — one log)
  • Refactoring without network contracts
  • No distributed data consistency problems

Moving to microservices is justified when teams work on independent domains, deployments start blocking each other, and specific services need different scaling (e.g., image processing service vs CRUD API).

Monolith with clear module boundaries:

src/
├── modules/
│   ├── catalog/       # products, categories, search
│   │   ├── domain/
│   │   ├── application/
│   │   └── infrastructure/
│   ├── orders/        # orders, cart, checkout
│   ├── users/         # auth, profiles
│   └── notifications/ # email, push, sms
└── shared/
    ├── events/        # domain events (for future decomposition)
    └── infrastructure/ # HTTP client, logger

This structure allows extracting a module into a service when needed — boundaries are already defined.
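
The shared/events layer can start as a simple in-process bus: modules communicate through events instead of direct imports, so a module can later be extracted into a service by swapping the bus for a real broker. A minimal sketch (event and class names like OrderPlaced and EventBus are illustrative, not from the source):

```typescript
// Domain events as a discriminated union; each module adds its own.
type OrderPlaced = { type: 'order.placed'; orderId: string; userId: string };
type UserRegistered = { type: 'user.registered'; userId: string };
type DomainEvent = OrderPlaced | UserRegistered;

type Handler = (event: DomainEvent) => void;

// In-process bus: subscribers are looked up by event type.
class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(type: DomainEvent['type'], handler: Handler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  publish(event: DomainEvent): void {
    for (const handler of this.handlers.get(event.type) ?? []) handler(event);
  }
}

// Usage: notifications reacts to orders without importing the orders module.
const bus = new EventBus();
const sent: string[] = [];
bus.subscribe('order.placed', (e) => {
  if (e.type === 'order.placed') sent.push(`email for order ${e.orderId}`);
});
bus.publish({ type: 'order.placed', orderId: 'o-1', userId: 'u-1' });
// sent: ['email for order o-1']
```

When the orders module becomes a service, publish() is reimplemented over RabbitMQ or Kafka while subscribers keep the same contract.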

Database Selection

PostgreSQL fits the vast majority of tasks: a relational model, JSONB for flexible data, full-text search, partitioning, and replication, all out of the box. Starting with PostgreSQL and switching only when a specific problem appears is the correct strategy.

Additional storage by purpose:

  • Sessions, cache, rate limiting: Redis
  • Full-text search with facets: Elasticsearch / OpenSearch
  • Analytics and OLAP: ClickHouse
  • Graph data: Neo4j, or PostgreSQL with recursive CTEs
  • Message queues: Redis Streams, RabbitMQ, Kafka

Data Schema and Migrations

Early data schema mistakes are the most expensive. Several principles:

Use UUID instead of serial/bigint for IDs if horizontal scaling or a public API is planned. UUID v7 is time-ordered, so inserts land at the end of the B-tree index and avoid the page churn that random UUID v4 causes.

-- id is ideally a UUID v7 generated by the application;
-- gen_random_uuid() (v4) is only a database-side fallback
CREATE TABLE orders (
  id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id     UUID NOT NULL REFERENCES users(id),
  status      TEXT NOT NULL DEFAULT 'draft',
  total_cents INTEGER NOT NULL,
  currency    CHAR(3) NOT NULL DEFAULT 'USD',
  created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Maintain updated_at in the database (more reliable than in an ORM)
CREATE FUNCTION trigger_set_timestamp() RETURNS trigger AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_updated_at
BEFORE UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION trigger_set_timestamp();

Migrations go only forward and are never backward-incompatible. The cycle: add the column as nullable → deploy code that writes it → backfill and set NOT NULL with a DEFAULT → drop the old column.
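
The cycle can be sketched as a series of separate deploys, here renaming total_cents to amount_cents (the column names are illustrative, not a real migration from this project):

```sql
-- Step 1 (deploy A): add the new column as nullable
ALTER TABLE orders ADD COLUMN amount_cents INTEGER;

-- Step 2 (deploy B): application writes both columns; backfill old rows
UPDATE orders SET amount_cents = total_cents WHERE amount_cents IS NULL;

-- Step 3 (deploy C): enforce the constraint once every row is filled
ALTER TABLE orders ALTER COLUMN amount_cents SET NOT NULL;

-- Step 4 (deploy D): after no code reads the old column, drop it
ALTER TABLE orders DROP COLUMN total_cents;
```

At every step the previous application version still runs correctly against the new schema, so a rollback never requires a schema rollback.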

Caching

Three levels:

HTTP cache, for public resources: Cache-Control: public, max-age=3600, stale-while-revalidate=86400. The CDN caches at the edge, the browser locally.
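
As a sketch, the header policy can live in a single helper so routes stay consistent (the function name is illustrative; the Express-style setHeader call is shown as a comment):

```typescript
// Build a Cache-Control value for public, CDN-cacheable resources.
// maxAge: seconds the browser/CDN may serve without revalidating;
// staleWhileRevalidate: seconds a stale copy may still be served
// while the cache refreshes in the background.
function cacheControl(maxAge: number, staleWhileRevalidate: number): string {
  return `public, max-age=${maxAge}, stale-while-revalidate=${staleWhileRevalidate}`;
}

const header = cacheControl(3600, 86400);
// header: 'public, max-age=3600, stale-while-revalidate=86400'

// In an Express-style handler:
// res.setHeader('Cache-Control', header);
```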

Application cache — Redis for expensive-to-compute data. Cache-Aside pattern:

async function getProduct(id: string): Promise<Product> {
  const cached = await redis.get(`product:${id}`);
  if (cached) return JSON.parse(cached);

  const product = await db.product.findUniqueOrThrow({ where: { id } });

  await redis.set(`product:${id}`, JSON.stringify(product), 'EX', 3600);
  return product;
}

// Invalidation on update
async function updateProduct(id: string, data: Partial<Product>) {
  const updated = await db.product.update({ where: { id }, data });
  await redis.del(`product:${id}`);
  // Invalidate dependent keys
  await redis.del(`category:products:${updated.categoryId}`);
  return updated;
}

Query-level caching: PostgreSQL keeps hot pages in shared_buffers and caches plans for prepared statements, but it has no built-in result cache. Correct indexes matter more than any application-level caching.
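
For example (table and column names are illustrative), a composite index aligned with the actual query shape lets PostgreSQL return rows in index order without a separate sort step:

```sql
-- Query shape: products of one category, newest first, paginated.
-- The index matches both the WHERE column and the ORDER BY direction.
CREATE INDEX idx_products_category_created
  ON products (category_id, created_at DESC);

-- Verify the plan: expect an Index Scan with no explicit Sort node
EXPLAIN (ANALYZE)
SELECT id, name
FROM products
WHERE category_id = $1
ORDER BY created_at DESC
LIMIT 20;
```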

Asynchronous Processing

Anything that takes 200 ms or more, or that can fail, belongs in a queue:

  • Email sending
  • PDF/image generation
  • External service integrations
  • Data import
  • Aggregate recalculation

// Pattern: API accepts, queues, responds 202
app.post('/api/orders/:id/invoice', async (req, res) => {
  const { id } = req.params;

  await queue.add('generate-invoice', {
    orderId: id,
    userId: req.user.id,
  }, {
    attempts: 3,
    backoff: { type: 'exponential', delay: 2000 },
  });

  res.status(202).json({ message: 'Invoice is being generated and will be emailed' });
});
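
On the worker side, the job handler can be kept as a plain async function so it is unit-testable without a running Redis; the queue wiring (the queue.add call above matches BullMQ's API, which is assumed here) is shown as a comment. All names are illustrative:

```typescript
// Data shape matching the 'generate-invoice' job queued by the API.
type InvoiceJobData = { orderId: string; userId: string };

// The handler itself: load the order, render the PDF, upload it,
// email the link. Storage and email steps are stubbed in this sketch.
async function handleGenerateInvoice(data: InvoiceJobData): Promise<string> {
  const pdfKey = `invoices/${data.orderId}.pdf`; // object-storage key
  // ... render PDF, upload to S3, send email with the link ...
  return pdfKey;
}

// Wiring in the worker process (assumes BullMQ):
// const worker = new Worker<InvoiceJobData>('generate-invoice',
//   (job) => handleGenerateInvoice(job.data), { connection: redis });
```

Because retries and backoff are configured at enqueue time, the handler only has to be idempotent; a job retried after a crash must not send the invoice email twice.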

Observability

Three pillars: logs, metrics, traces.

// Structured logs (Pino)
import pino from 'pino';

const logger = pino({
  level: process.env.LOG_LEVEL ?? 'info',
  formatters: {
    level: (label) => ({ level: label }),
  },
});

// Bind request-id to all logs within request
app.use((req, res, next) => {
  req.log = logger.child({
    requestId: req.headers['x-request-id'] ?? crypto.randomUUID(),
    method: req.method,
    path: req.path,
  });
  next();
});

Metrics via Prometheus format: /metrics endpoint with RED metrics (Rate, Errors, Duration) per route.
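
A minimal sketch of RED metrics per route in Prometheus exposition format; production code would use a client library such as prom-client, and the metric and class names here are illustrative:

```typescript
// Tracks Rate (request count), Errors, and Duration for one route.
class RouteMetrics {
  private count = 0;
  private errors = 0;
  private totalMs = 0;

  // Call once per finished request.
  observe(durationMs: number, isError: boolean): void {
    this.count += 1;
    this.totalMs += durationMs;
    if (isError) this.errors += 1;
  }

  // Render lines for the /metrics endpoint in Prometheus text format.
  render(route: string): string {
    return [
      `http_requests_total{route="${route}"} ${this.count}`,
      `http_request_errors_total{route="${route}"} ${this.errors}`,
      `http_request_duration_ms_sum{route="${route}"} ${this.totalMs}`,
    ].join('\n');
  }
}

const m = new RouteMetrics();
m.observe(120, false);
m.observe(340, true);
const exposition = m.render('/api/orders');
// exposition includes: http_requests_total{route="/api/orders"} 2
```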

Timeline

Architecture design isn't a one-time document but an iterative process. Initial design for a new product takes one to two weeks: requirements research, ADRs (Architecture Decision Records) for the key decisions, the data schema, and tech stack selection. The result isn't a Visio diagram but a set of validated decisions with trade-off justifications.

An architectural review of an existing project takes three to five days: codebase analysis, bottleneck identification, and an evolution plan that avoids a rewrite.