AI-powered road sign and marking recognition
Autonomous systems and ADAS require reliable recognition of signs and road markings—in rain, fog, at night, during partial road closures, and where the paint has faded. The task is two-tiered: detecting signs/markings in a frame, then classifying each detection. Both stages together must fit within a 30–50 ms budget.
Road sign recognition
```python
import cv2
import numpy as np
import torch
from torchvision import transforms
from ultralytics import YOLO


class TrafficSignRecognizer:
    def __init__(self, detector_path: str, classifier_path: str,
                 class_names: list):
        # Detector: YOLO finds all signs in the frame
        self.detector = YOLO(detector_path)
        # Classifier: e.g. EfficientNet-B3 or MobileNetV3
        self.classifier = torch.load(classifier_path)
        self.classifier.eval()
        self.class_names = class_names
        # Preprocessing for the classifier (ImageNet statistics)
        self.transform = transforms.Compose([
            transforms.ToPILImage(),
            transforms.Resize((64, 64)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406],
                                 [0.229, 0.224, 0.225])
        ])

    @torch.no_grad()
    def recognize(self, frame: np.ndarray) -> list[dict]:
        # Detection: find sign bounding boxes
        det_results = self.detector(frame, conf=0.45)
        signs = []
        for box in det_results[0].boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            # A small padding around the bbox improves classification
            pad = 8
            x1, y1 = max(0, x1 - pad), max(0, y1 - pad)
            x2, y2 = min(frame.shape[1], x2 + pad), min(frame.shape[0], y2 + pad)
            roi = frame[y1:y2, x1:x2]
            if roi.size == 0:
                continue
            # Classify the ROI (convert BGR -> RGB to match ImageNet normalization)
            roi_rgb = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)
            tensor = self.transform(roi_rgb).unsqueeze(0)
            logits = self.classifier(tensor)
            probs = torch.softmax(logits, dim=-1)
            top_prob, top_idx = probs.max(-1)
            signs.append({
                'class': self.class_names[top_idx.item()],
                'confidence': float(top_prob),
                'det_confidence': float(box.conf),
                'bbox': [x1, y1, x2, y2]
            })
        return signs
```
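The padded crop with boundary clamping is the one step above that is easy to get wrong near image edges. A minimal standalone sketch of that logic (the helper name `crop_with_padding` is hypothetical, pure NumPy):

```python
import numpy as np

def crop_with_padding(frame: np.ndarray, bbox: tuple, pad: int = 8) -> np.ndarray:
    """Crop bbox (x1, y1, x2, y2) from frame, expanded by pad px, clamped to bounds."""
    x1, y1, x2, y2 = bbox
    h, w = frame.shape[:2]
    x1, y1 = max(0, x1 - pad), max(0, y1 - pad)
    x2, y2 = min(w, x2 + pad), min(h, y2 + pad)
    return frame[y1:y2, x1:x2]

frame = np.zeros((100, 200, 3), dtype=np.uint8)
# Interior box: grows by pad on every side
roi = crop_with_padding(frame, (10, 10, 50, 40), pad=8)
# Box touching the top-left corner: padding is clamped, no negative indices
edge_roi = crop_with_padding(frame, (0, 0, 20, 20), pad=8)
```

Clamping instead of raw `bbox ± pad` matters because negative indices in NumPy silently wrap around to the other side of the image.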
Road marking detection
```python
class LaneMarkingDetector:
    def __init__(self, model_name: str = 'clrnet'):
        """
        CLRNet (Cross Layer Refinement Network) offers the best
        accuracy/speed trade-off for lane detection as of 2024:
        CULane F1 = 0.806, TuSimple F1 = 0.971.
        """
        if model_name == 'clrnet':
            self.model = self._load_clrnet()
        elif model_name == 'ufld':
            # Ultra-Fast Lane Detection v2: faster, slightly less accurate
            self.model = self._load_ufld()
        else:
            raise ValueError(f'Unknown model: {model_name}')

    def detect_markings(self, frame: np.ndarray) -> dict:
        lanes = self.model(frame)
        # Group detected lanes by marking type
        markings = {
            'solid_white': [],
            'dashed_white': [],
            'solid_yellow': [],
            'double_yellow': [],
            'stop_line': []
        }
        for lane in lanes:
            marking_type = self._classify_marking(frame, lane)
            markings[marking_type].append(lane)
        return markings

    def _classify_marking(self, frame: np.ndarray,
                          lane_points: list) -> str:
        """Determine the marking type by color and continuity."""
        # Sample pixel colors along the line
        colors = []
        for x, y in lane_points[::5]:
            if 0 <= int(y) < frame.shape[0] and 0 <= int(x) < frame.shape[1]:
                colors.append(frame[int(y), int(x)])
        if not colors:
            return 'solid_white'
        mean_color = np.mean(colors, axis=0)
        # Yellow in BGR: high R and G channels, low B
        if mean_color[2] > 150 and mean_color[1] > 120 and mean_color[0] < 100:
            return 'solid_yellow'
        return 'solid_white'
```
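The color check above only separates white from yellow; telling dashed from solid needs a continuity test along the line. One common heuristic is to sample brightness along the lane and measure the fraction of dark samples (paint gaps). A sketch under that assumption (the helper `is_dashed` and its thresholds are hypothetical, pure NumPy):

```python
import numpy as np

def is_dashed(brightness: np.ndarray, thresh: float = 100.0,
              min_gap_ratio: float = 0.25) -> bool:
    """Classify a lane as dashed if a large fraction of samples along it are dark.

    brightness: 1-D array of grayscale values sampled along the lane points.
    """
    dark = brightness < thresh          # samples falling in paint gaps
    return float(dark.mean()) >= min_gap_ratio

# Synthetic profiles: a solid line is bright everywhere,
# a dashed line alternates painted segments and dark gaps
solid = np.full(40, 200.0)
dashed = np.tile(np.concatenate([np.full(5, 200.0), np.full(5, 30.0)]), 4)
```

In practice the thresholds should be tuned per camera and lighting; CLAHE preprocessing (mentioned below for faded markings) also stabilizes this profile.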
Challenging conditions: problems and solutions
| Condition | Problem | Solution |
|---|---|---|
| Night | Signs visible only in headlights | Training on night data (CURE-TSD) |
| Rain | Glare, motion blur | Deblurring + augmentation with wet signs |
| Snow on the sign | Partial occlusion | Few-shot learning + masked examples in the dataset |
| Several signs close together | Overlapping bboxes | NMS with IoU threshold 0.3 instead of 0.5 |
| Faded markings | Low contrast | CLAHE preprocessing + data augmentation |
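The NMS tweak in the table is worth making concrete: with a stricter IoU threshold of 0.3, moderately overlapping detections of the same sign collapse into one, while clearly separate signs survive. A minimal greedy NMS sketch in NumPy (not tied to any particular detector):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.3):
    """Greedy NMS: keep boxes in descending score order, drop overlaps above iou_thr."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(int(i))
    return keep

boxes = np.array([[0, 0, 100, 100],      # sign A
                  [10, 10, 110, 110],    # duplicate detection of sign A (IoU ~0.68)
                  [200, 0, 300, 100]],   # a separate sign B
                 dtype=float)
scores = np.array([0.9, 0.8, 0.7])
```

At `iou_thr=0.3` the duplicate is suppressed; at the looser 0.5–0.7 range it would survive and produce a double detection.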
Datasets for training
- GTSRB (Germany): 43 classes, 50k+ images; the classic sign benchmark
- Mapillary Traffic Sign Dataset: 100k images, 313 classes, real-world roads
- CULane: 133k frames with difficult conditions for lane detection
- BDD100K: 100k videos; signs + markings + bad weather
Region-specific signs (Russia, Belarus, Ukraine) always require additional training on local GOST standards. GTSRB does not include Soviet-standard sign designs, the "brick" no-entry sign in its specific shapes, or temporary signs on an orange background.
Performance
EfficientDet-D2 for sign detection: mAP@0.5 = 0.87 on GTSRB, 22 ms latency on an RTX 3060. For onboard deployment on Qualcomm Snapdragon Ride: quantized to INT8 via QNN, 35 ms latency, which meets ADAS requirements.
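The QNN INT8 pipeline is vendor-specific, but the idea of shrinking the classifier with 8-bit weights can be illustrated with PyTorch's built-in dynamic quantization. A sketch on a toy classifier head (the layer sizes are hypothetical; this is not the QNN toolchain):

```python
import torch
import torch.nn as nn

# Toy classifier head standing in for the sign classifier (43 GTSRB classes)
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 43))
model.eval()

# Dynamic quantization: weights stored as INT8, activations quantized at runtime
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    logits = qmodel(x)
```

For convolutional backbones and NPU targets, static quantization with a calibration set (or the vendor toolchain, as with QNN above) is the usual route; dynamic quantization mainly helps linear/recurrent layers on CPU.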
| Task | Timeline |
|---|---|
| Sign detector + classifier (one country) | 4–7 weeks |
| Lane marking detection | 3–5 weeks |
| Combined signs + markings system | 7–12 weeks |