Setting Up Logging (Graylog) for Your Web Application
Graylog occupies a niche between the ELK stack (powerful but complex) and Loki (simple but limited). It ships with a built-in web interface for search, alerting, and dashboards, so there is no separate Kibana-style component to run. It is a good fit for teams that need centralized log management without deep customization.
Architecture: Graylog ← MongoDB (configuration) + OpenSearch/Elasticsearch (data)
Deployment
# docker-compose.yml
version: '3.8'
services:
  mongodb:
    image: mongo:6.0
    volumes:
      - mongo_data:/data/db

  opensearch:
    image: opensearchproject/opensearch:2.12.0
    environment:
      - cluster.name=graylog
      - discovery.type=single-node
      - plugins.security.disabled=true
      - "OPENSEARCH_JAVA_OPTS=-Xms2g -Xmx2g"
      - bootstrap.memory_lock=true
    ulimits:
      memlock: { soft: -1, hard: -1 }
    volumes:
      - os_data:/usr/share/opensearch/data

  graylog:
    image: graylog/graylog:6.0
    environment:
      - GRAYLOG_PASSWORD_SECRET=your_random_64_char_secret
      # echo -n "admin_password" | sha256sum
      - GRAYLOG_ROOT_PASSWORD_SHA2=your_sha256_password_hash
      - GRAYLOG_HTTP_EXTERNAL_URI=http://graylog.example.com:9000/
      - GRAYLOG_MONGODB_URI=mongodb://mongodb:27017/graylog
      - GRAYLOG_ELASTICSEARCH_HOSTS=http://opensearch:9200
    ports:
      - "9000:9000"        # Web UI
      - "12201:12201"      # GELF TCP
      - "12201:12201/udp"  # GELF UDP
      - "5044:5044"        # Beats
      - "514:514/udp"      # Syslog UDP
    depends_on:
      - mongodb
      - opensearch

volumes:
  mongo_data:
  os_data:
Generate GRAYLOG_PASSWORD_SECRET:
pwgen -N 1 -s 96
Hash the password:
echo -n "your_admin_password" | sha256sum | awk '{print $1}'
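If sha256sum is not available (or you generate the value from a provisioning script), the same digest can be computed in any language. A minimal Python sketch; the password shown is a placeholder, use your real admin password:

```python
import hashlib

def root_password_sha2(password: str) -> str:
    """Return the hex SHA-256 digest Graylog expects in GRAYLOG_ROOT_PASSWORD_SHA2."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Placeholder password for illustration only:
print(root_password_sha2("admin"))
# 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```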
Input Sources (Inputs)
Graylog receives logs through Inputs — configured in System → Inputs:
GELF UDP (recommended for applications):
- Port: 12201
- Possible loss at high load (UDP), but minimal overhead
GELF TCP (more reliable):
- Port: 12201
- Use if delivery guarantee is critical
Beats (for Filebeat):
- Port: 5044
Syslog UDP/TCP:
- For system logs and network equipment
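To smoke-test a GELF UDP input before wiring up the application, you can hand-build a message without any client library. A sketch per the GELF 1.1 payload format; `127.0.0.1` and the field values are placeholders for your Graylog host and real data:

```python
import json
import socket

def send_gelf_udp(host: str, message: dict, port: int = 12201) -> bytes:
    """Serialize a GELF 1.1 payload and send it as a single UDP datagram.
    Returns the raw bytes that went over the wire."""
    payload = json.dumps(message).encode("utf-8")
    # Datagrams under ~8 KiB need no GELF chunking; larger messages would.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload

msg = {
    "version": "1.1",           # mandatory GELF fields: version, host, short_message
    "host": "web-01",
    "short_message": "GELF input smoke test",
    "level": 6,                 # syslog severity: informational
    "_environment": "staging",  # custom fields must start with an underscore
}
datagram = send_gelf_udp("127.0.0.1", msg)  # stand-in for your Graylog host
```

If the input is running, the message appears in Search within a few seconds; with UDP there is no delivery confirmation, which is exactly the loss trade-off noted above.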
Sending Logs from Laravel
Via GELF (Graylog's native protocol):
composer require graylog2/gelf-php
<?php
// app/Logging/GraylogLogger.php

namespace App\Logging;

use Gelf\Publisher;
use Gelf\Transport\UdpTransport;
use Monolog\Handler\GelfHandler;
use Monolog\Logger;

class GraylogLogger
{
    public function __invoke(array $config): Logger
    {
        $transport = new UdpTransport(
            $config['host'],
            $config['port'] ?? 12201,
            UdpTransport::CHUNK_SIZE_LAN
        );

        $publisher = new Publisher($transport);
        $handler = new GelfHandler($publisher);

        return new Logger('app', [$handler]);
    }
}
// config/logging.php
'graylog' => [
    'driver' => 'custom',
    'via' => App\Logging\GraylogLogger::class,
    'host' => env('GRAYLOG_HOST', 'graylog'),
    'port' => 12201,
],

'stack' => [
    'driver' => 'stack',
    'channels' => ['daily', 'graylog'],
],
Context fields automatically become fields in Graylog:
Log::error('Payment failed', [
    'user_id' => $user->id,
    'order_id' => $order->id,
    'amount' => $order->amount,
    'provider_error' => $response->error,
]);
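Under the hood, GELF carries such context as "additional fields": every custom key is sent with a leading underscore, which is why `user_id` shows up as a searchable field of its own in Graylog. A rough sketch of that convention (not the gelf-php internals, just the mapping):

```python
def to_gelf_additional_fields(context: dict) -> dict:
    """Apply the GELF convention: custom fields travel with a leading underscore.
    "id" is reserved by the spec and must not be sent as a field name."""
    return {f"_{key}": value for key, value in context.items() if key != "id"}

fields = to_gelf_additional_fields({
    "user_id": 42,
    "order_id": 1337,
    "amount": 99.90,
})
# {'_user_id': 42, '_order_id': 1337, '_amount': 99.9}
```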
Filebeat for Nginx Logs
# /etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: log
    paths: [/var/log/nginx/access.log]
    fields:
      source_type: nginx_access
    fields_under_root: true  # expose source_type at the top level for pipeline rules

processors:
  - add_fields:
      target: ''
      fields:
        environment: production

output.logstash:
  hosts: ["graylog-server:5044"]
Extractors and Pipelines
Graylog allows parsing fields from messages via Extractors (for individual fields) or Processing Pipelines (for complex logic).
Pipeline for Nginx logs (System → Pipelines):
rule "parse nginx access log"
when
  has_field("source_type") AND to_string($message.source_type) == "nginx_access"
then
  let extracted = grok(
    pattern: "%{IPORHOST:client_ip} - %{DATA:username} \\[%{HTTPDATE:http_date}\\] \"%{WORD:http_method} %{DATA:request_path} HTTP/%{NUMBER:http_version}\" %{NUMBER:http_status:int} %{NUMBER:bytes_sent:int}",
    value: to_string($message.message),
    only_named_captures: true
  );
  set_fields(extracted);
end

rule "tag error responses"
when
  // grok's ":int" modifier already stored http_status as a number
  has_field("http_status") AND to_long($message.http_status) >= 500
then
  set_field("is_error", true);
  set_field("tag", "http_error");
end
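The grok pattern above boils down to a regular expression over the standard nginx access-log prefix. A sketch of the equivalent extraction in Python, useful for checking the pattern against sample lines before deploying the pipeline (field names mirror the rule; the sample line is invented):

```python
import re

# Mirrors the pipeline's grok pattern for the default nginx access log prefix.
NGINX_ACCESS = re.compile(
    r'(?P<client_ip>\S+) - (?P<username>\S*) \[(?P<http_date>[^\]]+)\] '
    r'"(?P<http_method>\S+) (?P<request_path>\S+) HTTP/(?P<http_version>[\d.]+)" '
    r'(?P<http_status>\d{3}) (?P<bytes_sent>\d+)'
)

line = '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /api/orders HTTP/1.1" 502 157'
m = NGINX_ACCESS.match(line)
fields = m.groupdict()
fields["http_status"] = int(fields["http_status"])   # grok's :int conversion
fields["is_error"] = fields["http_status"] >= 500    # second rule's tagging logic
```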
Streams — Log Routing
Streams partition the log flow by categories with different retention policies:
- All Nginx Access — source_type = nginx_access → retention 30 days
- Application Errors — level = ERROR or CRITICAL → retention 90 days
- Security Events — tags contain "security" → retention 180 days
Each stream can have its own Index Set with independent rotation settings.
Index Sets — Storage Management
System → Index Sets → Create index set:
Index prefix: app-errors
Max number of indices: 90
Index rotation: Time-based, Daily
Index retention: Delete, max 90 indices
Shards: 2
Replicas: 0 (for single-node)
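These settings translate directly into disk demand: with daily rotation and 90 retained indices you keep roughly 90 days of data, so the footprint is average daily ingest multiplied by the index count (plus a full extra copy per replica). A back-of-the-envelope helper; the ingest figure is an assumption you plug in yourself:

```python
def index_set_disk_gb(daily_ingest_gb: float, max_indices: int, replicas: int = 0) -> float:
    """Approximate disk usage for a time-based (daily) index set.
    Each replica stores a complete extra copy of the data."""
    return daily_ingest_gb * max_indices * (1 + replicas)

# e.g. 2 GB/day of application errors, 90 daily indices, no replicas:
index_set_disk_gb(2.0, 90)   # -> 180.0 GB
```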
Alerts
Graylog supports Event Definitions — condition-based alerts:
Alerts → Event Definitions → Create:
Title: High 5xx error rate
Condition: Aggregation
- Stream: All Nginx Access
- Group by: (none)
- Count messages
- Filter: http_status >= 500
- Execute every: 5 minutes
- Condition: count > 50
Notification:
Type: HTTP Notification
URL: https://api.telegram.org/bot<TOKEN>/sendMessage
Body: {"chat_id": "<ID>", "text": "High error rate: ${event.message}"}
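The HTTP notification simply POSTs that JSON body to the Telegram Bot API, with `${event.message}` expanded by Graylog before the request goes out. A sketch of the payload construction, handy for testing the bot token and chat id by hand first (the chat id and event text are placeholders):

```python
import json

def telegram_payload(chat_id: str, event_message: str) -> bytes:
    """Build the sendMessage body the HTTP notification posts to the Bot API."""
    return json.dumps({
        "chat_id": chat_id,
        "text": f"High error rate: {event_message}",
    }).encode("utf-8")

body = telegram_payload("<ID>", "count(http_status>=500) > 50 in the last 5 minutes")
# POST this to https://api.telegram.org/bot<TOKEN>/sendMessage
# with Content-Type: application/json
```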
Dashboard
In Graylog, dashboards are built from search widgets. Standard set for web applications:
- Message count (all logs, 24h) — number
- HTTP status codes (Pie chart, http_status field)
- Error rate (Line chart, level:ERROR filter, group by time)
- Top request paths (Table, Top values by request_path)
- Geographic distribution (Map, if GeoIP enabled)
Timeline
Deploying Graylog + OpenSearch + MongoDB, configuring Inputs, Filebeat for Nginx, GELF logging from application, basic Pipeline rules, Index Sets with retention policy, initial alerts: 1-2 working days.