Loki/Grafana Logging Setup for Web Application

Setting Up Logging (Loki/Grafana) for Your Web Application

Loki is not Elasticsearch. The key difference: Loki indexes only labels (tags), not log content. That makes it far cheaper to store and simpler to operate; the price is slower full-text search over log bodies. For most web applications the tradeoff is justified.
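The difference is easy to see in a toy model: streams are identified by their label set (that part is indexed), while the lines inside a stream are only scanned linearly after stream selection. A Python sketch of the idea (illustrative only, not Loki's actual code):

```python
# Toy model of Loki's data model: the index covers labels only;
# log lines are stored unindexed and scanned after stream selection.
streams = {
    (("app", "web"), ("level", "error")): ["timeout calling payments", "db down"],
    (("app", "web"), ("level", "info")):  ["request ok", "cache hit"],
}

def query(label_filters: dict, needle: str) -> list[str]:
    hits = []
    for labels, lines in streams.items():  # indexed lookup in real Loki
        if all(dict(labels).get(k) == v for k, v in label_filters.items()):
            # linear scan over line bodies, like |= "needle" in LogQL
            hits += [line for line in lines if needle in line]
    return hits

print(query({"app": "web", "level": "error"}, "timeout"))
# → ['timeout calling payments']
```

A selective label filter narrows the scan cheaply; the text match itself is brute force, which is exactly why label design matters more in Loki than query tuning.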

Stack: Promtail (or Alloy) → Loki → Grafana

Deployment via Docker Compose

version: '3.8'
services:
  loki:
    image: grafana/loki:3.0.0
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
      - loki_data:/loki
    command: -config.file=/etc/loki/local-config.yaml

  promtail:
    image: grafana/promtail:3.0.0
    volumes:
      - /var/log:/var/log:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - ./promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml

  grafana:
    image: grafana/grafana:11.0.0
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=false
      - GF_SECURITY_ADMIN_PASSWORD=admin_password
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning

volumes:
  loki_data:
  grafana_data:
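Once the stack is up, a quick way to verify ingestion end to end is to push a line straight to Loki's push API (`/loki/api/v1/push`). A standard-library Python sketch; the `smoke-test` job label is an arbitrary choice, and the URL assumes Loki is published on localhost:

```python
import json
import time
import urllib.request

# Loki expects nanosecond timestamps, serialized as strings.
ts = str(time.time_ns())
payload = {
    "streams": [{
        "stream": {"job": "smoke-test"},
        "values": [[ts, "hello from the push API"]],
    }]
}
body = json.dumps(payload).encode()

req = urllib.request.Request(
    "http://localhost:3100/loki/api/v1/push",
    data=body,
    headers={"Content-Type": "application/json"},
)
print(body.decode())
# Uncomment once the stack is running; HTTP 204 means the line was accepted:
# urllib.request.urlopen(req)
```

After pushing, the line should be visible in Grafana's Explore view under {job="smoke-test"}.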

Loki Configuration

# loki-config.yml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  retention_period: 744h   # 31 days
  ingestion_rate_mb: 16
  ingestion_burst_size_mb: 32
  max_query_length: 721h

compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  delete_request_store: filesystem
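Before settling on the retention window, it is worth sanity-checking disk usage. A back-of-the-envelope estimate, assuming an illustrative 5 GB/day of raw logs and a rough 10x chunk compression ratio (both numbers are assumptions; measure your own volumes):

```python
raw_gb_per_day = 5          # assumed raw log volume, adjust to your traffic
compression = 10            # rough compression ratio for text logs, assumption
retention_days = 744 // 24  # retention_period: 744h -> 31 days

disk_gb = raw_gb_per_day / compression * retention_days
print(f"~{disk_gb:.1f} GB on disk for {retention_days} days")
# → ~15.5 GB on disk for 31 days
```

Even a generous error margin here usually keeps filesystem-backed Loki comfortably on a single node; object storage only becomes necessary at much larger volumes.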

Promtail Configuration

# promtail-config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml  # persist via a volume in production, otherwise files are re-read after a restart

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: [localhost]
        labels:
          job: nginx
          env: production
          __path__: /var/log/nginx/access.log

    pipeline_stages:
      - regex:
          expression: '^(?P<ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)'
      - labels:
          status:
          method:
      - timestamp:
          source: timestamp
          format: "02/Jan/2006:15:04:05 -0700"

  - job_name: laravel
    static_configs:
      - targets: [localhost]
        labels:
          job: laravel-app
          env: production
          __path__: /var/www/app/storage/logs/laravel.log

    pipeline_stages:
      - multiline:
          firstline: '^\[\d{4}-\d{2}-\d{2}'
          max_wait_time: 3s
      - regex:
          expression: '^\[(?P<timestamp>[^\]]+)\] (?P<env>\w+)\.(?P<level>\w+): (?P<message>.*)'
      - template:
          source: level
          template: '{{ ToLower .Value }}'  # Laravel writes ERROR; keep the label lowercase for queries
      - labels:
          level:
          env:
      - timestamp:
          source: timestamp
          format: "2006-01-02 15:04:05"

  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: [__meta_docker_container_name]
        target_label: container
      - source_labels: [__meta_docker_container_log_stream]
        target_label: logstream
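Pipeline regexes are easy to get subtly wrong, so it helps to test them against real log lines before shipping the config. A quick check of both patterns above in Python (the sample lines are made up; Promtail uses Go's RE2 engine, but these patterns behave the same in both):

```python
import re

# Same expressions as in the promtail pipeline stages above.
nginx_re = re.compile(
    r'^(?P<ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)'
)
laravel_re = re.compile(
    r'^\[(?P<timestamp>[^\]]+)\] (?P<env>\w+)\.(?P<level>\w+): (?P<message>.*)'
)

nginx_line = '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /api/users HTTP/1.1" 500 1024'
laravel_line = '[2024-10-10 13:55:36] production.ERROR: Undefined variable $user'

m = nginx_re.match(nginx_line)
assert m and m.group("status") == "500" and m.group("path") == "/api/users"

m = laravel_re.match(laravel_line)
assert m and m.group("level") == "ERROR" and m.group("env") == "production"
print("both patterns match")
```

Promtail also ships `--dry-run` mode for validating a full pipeline against live files.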

Sending Logs from the Application

For Laravel, a Monolog handler that pushes directly to Loki over the HTTP API:

// app/Logging/LokiHandler.php
namespace App\Logging;

use Monolog\Handler\AbstractProcessingHandler;
use Monolog\LogRecord;

class LokiHandler extends AbstractProcessingHandler
{
    public function __construct(
        private string $lokiUrl,
        private array $labels = []
    ) {
        parent::__construct();
    }

    protected function write(LogRecord $record): void
    {
        // Loki expects a nanosecond timestamp as a string; 'Uu' = seconds + microseconds
        $timestamp = $record->datetime->format('Uu') . '000';

        $payload = [
            'streams' => [[
                'stream' => array_merge($this->labels, [
                    'level' => strtolower($record->level->getName()), // match lowercase label queries
                    'channel' => $record->channel,
                ]),
                'values' => [[$timestamp, $record->formatted]],
            ]],
        ];

        // Fire and forget — don't block the request
        $context = stream_context_create(['http' => [
            'method' => 'POST',
            'header' => 'Content-Type: application/json',
            'content' => json_encode($payload),
            'timeout' => 1,
        ]]);
        @file_get_contents("{$this->lokiUrl}/loki/api/v1/push", false, $context);
    }
}
// config/logging.php
'loki' => [
    'driver' => 'monolog',
    'handler' => App\Logging\LokiHandler::class,
    'with' => [
        'lokiUrl' => env('LOKI_URL', 'http://loki:3100'),
        'labels' => [
            'app' => 'web-app',
            'env' => env('APP_ENV', 'production'),
        ],
    ],
],

LogQL — Loki Query Language

LogQL is similar to PromQL. Basic patterns:

# All Laravel errors in the last hour
{job="laravel-app", level="error"} |= "Exception"

# Nginx 5xx (status was promoted to a label in the pipeline)
{job="nginx", status=~"5.."}

# Error rate per minute
rate({job="laravel-app", level="error"}[1m])

# p95 request time by path (if request_time is in the log line)
quantile_over_time(0.95,
  {job="nginx"}
    | regexp `"\S+ (?P<path>\S+) [^"]*".*request_time=(?P<rt>[0-9.]+)`
    | unwrap rt [5m]
) by (path)

# Log volume by level
sum by (level) (
  count_over_time({job="laravel-app"}[5m])
)
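These queries can also be run outside Grafana, against Loki's HTTP API. A standard-library sketch (assumes Loki on localhost:3100; `query_range` is the same endpoint Grafana itself calls):

```python
import json
import urllib.parse
import urllib.request

# Build a query_range request for the first LogQL example above.
params = urllib.parse.urlencode({
    "query": '{job="laravel-app", level="error"} |= "Exception"',
    "limit": 100,
})
url = f"http://localhost:3100/loki/api/v1/query_range?{params}"
print(url)

# Uncomment once the stack is running:
# with urllib.request.urlopen(url) as resp:
#     for stream in json.load(resp)["data"]["result"]:
#         for ts, line in stream["values"]:
#             print(ts, line)
```

Handy for ad-hoc scripts and CI checks that should not depend on a Grafana session.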

Grafana: Datasource and Dashboard

Auto-provision datasource:

# grafana/provisioning/datasources/loki.yml
apiVersion: 1
datasources:
  - name: Loki
    uid: loki
    type: loki
    url: http://loki:3100
    isDefault: true
    jsonData:
      maxLines: 1000
      derivedFields:
        # Turn request_id=... in a log line into a link that queries Loki for that ID
        - datasourceUid: loki
          matcherRegex: "request_id=(\\w+)"
          name: RequestID
          url: '{job="laravel-app"} |= `${__value.raw}`'

Basic dashboard includes:

  • Logs panel with filtering by level and job labels
  • Time series with rate({job="laravel-app", level="error"}[1m])
  • Stat panel — error count for last 24 hours
  • Table with top-20 error messages via count_over_time

Alerting in Grafana

# grafana/provisioning/alerting/rules.yml — alert on an error rate spike
apiVersion: 1
groups:
  - orgId: 1
    name: app-alerts
    folder: Application
    interval: 1m
    rules:
      - uid: error-rate-spike
        title: High error rate
        condition: C
        data:
          - refId: A
            datasourceUid: loki   # must match the provisioned Loki datasource uid
            relativeTimeRange:
              from: 300
              to: 0
            model:
              expr: 'sum(rate({job="laravel-app", level="error"}[5m]))'
          - refId: C
            datasourceUid: __expr__
            model:
              type: threshold
              expression: A
              conditions:
                - evaluator:
                    params: [0.1]
                    type: gt
        noDataState: OK
        execErrState: Error
        for: 2m
        annotations:
          summary: "Error rate > 0.1/s for 2 minutes"
        labels:
          severity: warning
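The 0.1/s threshold is easier to reason about in absolute terms. Over the 5-minute rate window it corresponds to roughly 30 errors; a quick check with the numbers from the rule above:

```python
threshold_per_sec = 0.1  # evaluator params from the rule
window_sec = 5 * 60      # the [5m] range in the rate() expression

errors_to_trigger = threshold_per_sec * window_sec
print(f"fires at ~{errors_to_trigger:.0f} errors per 5 minutes")  # ~30
```

Pick the threshold from your real baseline: for a low-traffic application 30 errors in 5 minutes may be an outage, while for a busy one it is noise.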

Comparison with ELK

Loki wins on storage cost (no inverted index; it stores only compressed chunks) and operational simplicity. ELK wins when you need complex full-text search or aggregations over arbitrary log fields without first parsing them into labels.

For most web applications with JSON logs and Grafana as a unified dashboard — Loki is preferable.

Timeline

Deploying Loki + Promtail + Grafana, configuring Nginx and application log collection, basic dashboards and one alert for critical errors: 1-2 working days.