AI ADAS Advanced Driver Assistance Systems Development

We design and deploy artificial intelligence systems, from prototype to production-ready solution. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work in real business, not just in the lab.

Development of an AI-based driver assistance system (ADAS)

ADAS (Advanced Driver Assistance Systems) are Level 1–2 driving-automation features: they assist the driver but do not replace them. Typical functions include lane departure warning (LDW), automatic emergency braking (AEB), adaptive cruise control (ACC), and blind-spot warning (BSW). Each of these is a separate computer-vision task with its own latency and accuracy requirements.

Key Features and Implementation

import cv2
import numpy as np
from ultralytics import YOLO
import torch

class ADASSystem:
    def __init__(self, config: dict):
        self.lane_detector = self._load_lane_model(config)
        self.object_detector = YOLO(config['object_model'])  # YOLOv8n for speed
        self.depth_estimator = self._load_depth_model(config)

        self.camera_matrix = np.array(config['camera_intrinsics'])
        self.focal_length = self.camera_matrix[0, 0]
        self.baseline = config.get('stereo_baseline', None)

        # Warning thresholds
        self.ttc_warning = 2.5   # seconds, driver warning
        self.ttc_critical = 1.5  # seconds, triggers AEB
        self.lane_offset_threshold = 0.3  # meters

    def lane_departure_warning(self, frame: np.ndarray,
                                vehicle_speed: float) -> dict:
        """
        Детекция полосы: классика — UFLD (Ultra-Fast Lane Detection)
        или CLRNet для сложных условий (пересечения, плохая разметка).
        """
        lanes = self.lane_detector(frame)
        if len(lanes) < 2:
            return {'warning': False, 'reason': 'no_lanes'}

        # Vehicle center relative to the lane
        frame_center = frame.shape[1] // 2
        lane_center = (lanes[0][-1][0] + lanes[1][-1][0]) // 2
        offset_px = frame_center - lane_center

        # Convert pixels to meters assuming a nominal 3.5 m lane width
        offset_m = offset_px * (3.5 / abs(lanes[1][-1][0] - lanes[0][-1][0]))

        warning = abs(offset_m) > self.lane_offset_threshold
        return {
            'warning': warning,
            'offset_meters': offset_m,
            'lane_width': abs(lanes[1][-1][0] - lanes[0][-1][0])
        }

    def collision_warning(self, frame: np.ndarray,
                           ego_speed: float) -> dict:
        detections = self.object_detector(frame, conf=0.5,
                                           classes=[0, 2, 3, 5, 7])
        depth_map = self.depth_estimator(frame)

        warnings = []
        for box in detections[0].boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cx = (x1 + x2) // 2

            # Distance from the depth map (stereo or monocular)
            roi_depth = depth_map[y1:y2, x1:x2]
            distance = float(np.percentile(roi_depth, 10))  # nearest part of the object

            # TTC at the current speed
            if distance > 0 and ego_speed > 0:
                ttc = distance / ego_speed  # simplified: ignores the object's own speed
            else:
                ttc = float('inf')

            if ttc < self.ttc_critical:
                action = 'AEB'
            elif ttc < self.ttc_warning:
                action = 'WARNING'
            else:
                continue

            warnings.append({
                'class': self.object_detector.model.names[int(box.cls)],
                'distance_m': distance,
                'ttc_sec': ttc,
                'action': action
            })

        return {'warnings': sorted(warnings, key=lambda x: x['ttc_sec'])}
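The TTC in `collision_warning` above deliberately ignores the lead object's own speed. If a tracked object velocity is available (e.g., from radar or optical-flow tracking, both hypothetical inputs here), the constant-velocity estimate improves; a minimal sketch:

```python
def time_to_collision(distance_m: float,
                      ego_speed_ms: float,
                      object_speed_ms: float = 0.0) -> float:
    """Constant-velocity TTC: distance divided by closing speed.

    object_speed_ms is the lead object's speed in our direction of
    travel; the closing speed is ego_speed_ms - object_speed_ms.
    """
    closing_speed = ego_speed_ms - object_speed_ms
    if distance_m <= 0 or closing_speed <= 0:
        return float('inf')  # not closing: no collision at constant velocity
    return distance_m / closing_speed

# Lead vehicle 30 m ahead at 20 m/s while we do 30 m/s: TTC = 3 s
print(time_to_collision(30.0, 30.0, 20.0))  # → 3.0
```

With the naive formula, the same scene (30 m at 30 m/s) would report TTC = 1 s and fire AEB on a car that is only slowly being overtaken.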

Performance: Why Latency Matters More Than Accuracy

At 100 km/h, a car travels 27.8 meters per second. A system latency of 100 ms equals 2.78 meters of blind spot. Therefore, for ADAS:
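The arithmetic above is worth keeping as a helper when budgeting a pipeline; a minimal sketch:

```python
def blind_spot_meters(speed_kmh: float, latency_ms: float) -> float:
    """Distance the vehicle travels during the system's end-to-end latency."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return speed_ms * (latency_ms / 1000.0)

# At 100 km/h, 100 ms of latency costs ~2.78 m of reaction distance
print(round(blind_spot_meters(100, 100), 2))  # → 2.78
```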

| Function | Max latency | Recommended model |
|---|---|---|
| AEB (emergency braking) | < 30 ms | YOLOv8n + TensorRT INT8 |
| LDW (lane departure warning) | < 50 ms | CLRNet or UFLD |
| BSW (blind spots) | < 100 ms | YOLOv8s |
| ACC (cruise control) | < 100 ms | Depth + detection |
| Road signs | < 200 ms | EfficientDet-D2 |

YOLOv8n with TensorRT FP16 on NVIDIA Orin: 3–5ms per frame. On Qualcomm SA8295P (Snapdragon Ride): 8–12ms via QNN SDK.

Monocular depth estimation

If there's no stereo camera, use MonoDepth2 or DPT (Dense Prediction Transformer). The accuracy is worse than stereo, but sufficient for warnings:

from transformers import AutoImageProcessor, AutoModelForDepthEstimation

class MonocularDepth:
    def __init__(self):
        self.processor = AutoImageProcessor.from_pretrained(
            "LiheYoung/depth-anything-large-hf"
        )
        self.model = AutoModelForDepthEstimation.from_pretrained(
            "LiheYoung/depth-anything-large-hf"
        )

    @torch.no_grad()
    def estimate(self, image: np.ndarray) -> np.ndarray:
        inputs = self.processor(images=image, return_tensors="pt")
        outputs = self.model(**inputs)
        depth = outputs.predicted_depth.squeeze().numpy()
        # Relative depth; scale to meters with a calibration coefficient
        return depth

Depth Anything v2 Large gives AbsRel = 0.076 on KITTI, enough to estimate distance to within about ±10% at 10–30 meters.
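The `estimate` method above returns relative depth; the calibration coefficient it mentions has to be fitted separately. One simple approach (a sketch, not the only option) is a least-squares scale factor from a few pixels whose true distance is known, e.g., from radar or a calibration target — `ref_pixels` and `ref_meters` are hypothetical inputs, and the single-scale model only holds if the network output is roughly proportional to metric depth:

```python
import numpy as np

def scale_depth_to_meters(rel_depth: np.ndarray,
                          ref_pixels: list,
                          ref_meters: list) -> np.ndarray:
    """Fit one scale factor s minimizing sum((d_i - s * r_i)^2)
    over reference pixels, then apply it to the whole map.
    Closed form: s = (d . r) / (r . r).
    """
    r = np.array([rel_depth[y, x] for y, x in ref_pixels], dtype=float)
    d = np.array(ref_meters, dtype=float)
    s = float((d @ r) / (r @ r))
    return rel_depth * s

rel = np.array([[1.0, 2.0], [3.0, 4.0]])
# Two references: pixel (0, 0) is 5 m away, pixel (1, 1) is 20 m away
print(scale_depth_to_meters(rel, [(0, 0), (1, 1)], [5.0, 20.0]))
```

For production, an affine fit (scale plus offset) is often more robust, since monocular networks tend to predict depth only up to an affine transform.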

Certification and standards

ADAS systems for production vehicles require compliance with:

  • ISO 26262 (Functional Safety, ASIL-B/C for AEB)
  • ISO/SAE 21434 (Cybersecurity)
  • UNECE R79/R130 (regulatory requirements for LDW and AEB)

For closed-site use (e.g., inside a facility or parking area, not on public roads), the requirements are more lenient: automotive-grade components can be used without full ISO 26262 certification.

| Project type | Timeline |
|---|---|
| Prototype of one function (LDW or AEB) | 6–10 weeks |
| Full L2 ADAS suite (4–6 functions) | 4–7 months |
| Automotive-grade with ISO 26262 | 12–24 months |