University of Bologna
Master's Degree in Computer Science and Engineering - Smart Vehicular System Project
Alessandro Becci | 0001125114 | alessandro.becci@studio.unibo.it
Luca Tonelli | 0001126379 | luca.tonelli11@studio.unibo.it
1. Introduction
1.1. The Project
This Lane Departure Warning system represents a critical safety feature designed to prevent unintentional lane departures on roadways. Our implementation leverages the CARLA simulation environment to detect when a vehicle begins to drift from its lane, providing timely warnings to alert the driver.
The system employs computer vision techniques, specifically utilizing the YOLOPv2 model, to:
- Identify lane markings in real-time
- Determine the vehicle's position relative to lane boundaries
- Generate visual alerts when detecting imminent unintended lane crossings
This technology serves as an essential component of modern driver assistance systems, helping to reduce accidents caused by driver distraction, fatigue, or momentary inattention.
1.2. Motivation
Unintentional lane departures are a significant cause of road accidents, often resulting from driver distraction or fatigue. Lane Departure Warning systems aim to mitigate these incidents by alerting drivers when they unintentionally drift from their lanes.
Note: Key Research Findings
The study "The effectiveness of lane departure warning systems—A reduction in real-world passenger car injury crashes" found that:
2. Requirements
This section outlines the requirements for the Lane Departure Warning system implemented in CARLA. Requirements define the capabilities and characteristics that the system must exhibit to fulfill its purpose of detecting lane departures and alerting drivers. They are divided into functional requirements, which describe what the system should do, and non-functional requirements, which specify how the system should perform its functions.
2.1. Functional Requirements
The main functionalities of the system are:
- Lane marking detection in real-time.
- Vehicle position tracking relative to lane boundaries.
- Warning generation when lane departure is detected.
- Visual alert display to the driver upon lane departure detection.
- Support for highway road types.
- Reporting of lane departure events to the driver interface.
- Logging of relevant events (lane detection status, warnings triggered, etc.) on an MQTT broker.
2.2. Non-Functional Requirements
The principal features of the system are:
- Responsiveness → Operating in real-time driving scenarios, the Lane Departure Warning system must process camera feeds at high frequency to detect lane boundaries and the vehicle's position with minimal latency.
- Reliability → The system shall operate consistently under diverse conditions, maintaining optimal performance in varying lighting (day and night) and weather scenarios (clear, rain, fog).
- Testability → Integration with the CARLA simulation environment for comprehensive testing and validation of the system under controlled conditions.
- Usability → Support for the Xbox One joypad and the G29 steering wheel as input devices for testing and manual control within the simulation environment.
3. Design Of Proposed Solution
3.1. Sensors used
For this lane departure warning system, we use an RGB camera as the primary sensing device. The camera captures real-time images of the roadway ahead of the vehicle at a resolution of 640x640 pixels. This resolution provides sufficient detail for lane detection while remaining computationally efficient for real-time processing.
3.1.1. Camera Positioning
The camera is mounted on the rearview mirror, a location that offers several advantages:
- Unobstructed forward view of the road
- Minimal intrusion into the driver's field of vision
- Standard mounting position in modern commercial vehicles
The exact positioning coordinates are (x=0.6, y=0.0, z=1.41) with a pitch of 0 degrees. This precise placement ensures optimal coverage of the road surface and minimizes distortion while maintaining a practical implementation suitable for mass-production vehicles.
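In CARLA terms this corresponds to spawning an RGB camera sensor attached to the vehicle with that transform. A minimal sketch follows (the world and vehicle handles are assumed to already exist; attribute values match the resolution and pose stated above):
import carla

camera_bp = world.get_blueprint_library().find('sensor.camera.rgb')
camera_bp.set_attribute('image_size_x', '640')
camera_bp.set_attribute('image_size_y', '640')

# Mounting point near the rearview mirror, as described above
camera_transform = carla.Transform(
    carla.Location(x=0.6, y=0.0, z=1.41),
    carla.Rotation(pitch=0.0)
)
camera_rgb = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)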


3.2. Picture Preprocessing
Every frame captured from the RGB camera undergoes several critical preprocessing steps before being fed into the lane detection system:
3.2.1. Region of Interest Selection
First, the system defines a trapezoid-shaped region of interest (ROI) on the raw camera image:
src_points = np.float32([
    [width * 0.25, height * 0.55],  # Top-left
    [width * 0.75, height * 0.55],  # Top-right
    [width * 0.85, height * 0.95],  # Bottom-right
    [width * 0.15, height * 0.95]   # Bottom-left
])
This trapezoid specifically targets the road area ahead of the vehicle, eliminating irrelevant portions of the image such as the sky, roadside objects, and the vehicle’s hood that could interfere with lane detection.
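The destination points used by the perspective transformation in the next step map this trapezoid onto a square crop. A minimal sketch, assuming define_crop_size is 640 to match the camera resolution (the actual value used in the project may differ):
define_crop_size = 640  # assumed to match the 640x640 camera resolution
dst_points = np.float32([
    [0, 0],                                # Top-left
    [define_crop_size, 0],                 # Top-right
    [define_crop_size, define_crop_size],  # Bottom-right
    [0, define_crop_size]                  # Bottom-left
])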

3.2.2. Perspective Transformation
Next, a bird’s-eye view transformation is applied to the selected region using OpenCV’s perspective transformation functions:
# Get the perspective transformation matrix
M = cv2.getPerspectiveTransform(src_points, dst_points)
# Apply the perspective transformation
warped = cv2.warpPerspective(image_to_analyze, M, (define_crop_size, define_crop_size))
This transformation:
- Removes perspective distortion, where distant lane lines appear to converge
- Creates a uniform representation where lane width is consistent regardless of distance
- Makes subsequent lane detection calculations more straightforward by converting to a 2D plane

3.2.3. Format Conversion for Neural Network
Finally, the image is converted from HWC (Height, Width, Channel) format to CHW (Channel, Height, Width) format required by our PyTorch-based neural network:
warped_chw = warped.transpose(2, 0, 1) # HWC to CHW
This standardized preprocessing pipeline ensures that our lane detection system receives consistent, optimized input regardless of lighting conditions or road characteristics, improving both the accuracy and reliability of the system.
3.3. Picture Processing
The image processing pipeline through YOLOPv2 involves several sophisticated steps to detect lane markings and potential lane departures.
3.3.1. Neural Network Inference
Each captured frame from the vehicle-mounted camera undergoes analysis through a YOLOPv2 (You Only Look Once for Panoptic driving perception) model:
def analyzeImage(image):
    # Convert image format for neural network
    img0 = image.transpose(1, 2, 0)  # CHW to HWC
    img = letterbox(img0, new_shape=img0.shape[:2])[0]
    img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, HWC to CHW

    # Normalize and prepare tensor
    img = torch.from_numpy(np.ascontiguousarray(img)).to(device)
    img = img.half() if half else img.float()
    img /= 255.0  # Normalize to 0.0-1.0 range
3.3.2. Multi-Task Output Processing
YOLOPv2 simultaneously produces three critical outputs from a single forward pass:
- Object detection results (pred) - identifying traffic participants
- Drivable area segmentation (seg) - determining where the vehicle can safely travel
- Lane line segmentation (ll) - precisely identifying lane markings
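As a rough sketch of that single forward pass (variable names follow the list above; split_for_trace_model and non_max_suppression are the detection post-processing helpers shipped with the official YOLOPv2 repository, and the exact unpacking may differ between releases):
with torch.no_grad():
    [pred, anchor_grid], seg, ll = model(img)

# Decode the anchor-based detection output and apply non-maximum suppression
pred = split_for_trace_model(pred, anchor_grid)
pred = non_max_suppression(pred, conf_thres=0.3, iou_thres=0.45)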
3.3.3. Lane Detection and Analysis
The system processes segmentation masks to isolate lane markings:
# Extract and resize segmentation masks
da_seg_mask = driving_area_mask(seg)
ll_seg_mask = lane_line_mask(ll)
# Resize masks to match original image dimensions
da_seg_mask_resized = cv2.resize(da_seg_mask, img0.shape[:2][::-1])
ll_seg_mask_resized = cv2.resize(ll_seg_mask, img0.shape[:2][::-1])
3.3.4. Bounding Box Filtering
Lane markings are identified through contour analysis and filtered based on specific parameters:
# Find contours of lane lines
contours, _ = cv2.findContours(red_lane_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Filter boxes based on size and orientation
MIN_BOX_WIDTH = 40
MIN_BOX_HEIGHT = 40
ORIENTATION_THRESHOLD = 4.0  # Avoid horizontal boxes

red_boxes = []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if w > MIN_BOX_WIDTH and h > MIN_BOX_HEIGHT:
        aspect_ratio = w / h
        if aspect_ratio < ORIENTATION_THRESHOLD:
            red_boxes.append((x, y, w, h))
This filtering ensures that only valid lane markings are considered, eliminating noise and irrelevant shapes.
3.3.5. Nested Box Elimination
The system removes redundant or nested boxes to prevent double-counting of lane markings:
red_boxes = filter_nested_boxes(red_boxes, iou_threshold=0.8)
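filter_nested_boxes is not listed above; a minimal sketch of such a filter, keeping the larger box whenever two boxes overlap beyond the IoU threshold, could look like this:
def filter_nested_boxes(boxes, iou_threshold=0.8):
    """Remove boxes that heavily overlap an equal-or-larger box (illustrative sketch)."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    kept = []
    for i, box in enumerate(boxes):
        area = box[2] * box[3]
        redundant = False
        for j, other in enumerate(boxes):
            if i == j:
                continue
            other_area = other[2] * other[3]
            # Drop this box if it overlaps a larger box (ties keep the earlier one)
            if iou(box, other) > iou_threshold and (other_area > area or (other_area == area and j < i)):
                redundant = True
                break
        if not redundant:
            kept.append(box)
    return kept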
3.3.6. Lane Crossing Detection
Lane departure is determined through geometric analysis of identified lane markings:
if red_boxes:
    # Calculate lane center
    leftmost_red = min([x for x, y, w, h in red_boxes])
    rightmost_red = max([x + w for x, y, w, h in red_boxes])
    lane_center_x = (leftmost_red + rightmost_red) // 2
    img_center_x = combined.shape[1] // 2

    # Measure distance from center
    center_distance = lane_center_x - img_center_x

    # Apply thresholds for different crossing states
    CENTERED_THRESHOLD = 30
    CROSSING_THRESHOLD = 53

    if len(red_boxes) == 1:  # Single box scenario
        alignment_status = "CROSSING: SBX"
        crossing = True
    else:
        if abs(center_distance) < CENTERED_THRESHOLD:
            alignment_status = "CENTERED"
        elif abs(center_distance) < CROSSING_THRESHOLD:
            alignment_status = "CROSSING: SL/SR"
            crossing = True
        else:
            alignment_status = "CROSSING: LEFT/RIGHT"
            crossing = True
The system classifies the vehicle’s position relative to the lanes into several states:
- CENTERED - Vehicle is properly aligned within the lane
- CROSSING: SL/SR - Vehicle is slightly crossing to the left or right
- CROSSING: LEFT/RIGHT - Vehicle is significantly crossing lane boundaries
- CROSSING: SBX - Single box detection indicating a probable crossing
This detailed analysis enables the lane departure warning system to accurately detect unintentional lane departures and alert the driver in real-time.

3.4. Technologies Used
In our project, we employed YOLOPv2, an advanced multi-task learning network designed for panoptic driving perception. This model efficiently integrates three critical tasks in autonomous driving: traffic object detection, drivable area segmentation, and lane detection. By utilizing a shared encoder and task-specific decoders, YOLOPv2 achieves high accuracy and speed, making it suitable for real-time applications.
The architecture of YOLOPv2 comprises a shared encoder and three task-specific decoders:
- Shared Encoder: YOLOPv2 adopts the Extended Efficient Layer Aggregation Networks (E-ELAN) as its backbone for feature extraction. E-ELAN employs group convolution, enabling different layers to learn more diverse features, thereby enhancing both efficiency and performance.
- Object Detection Decoder: This decoder implements an anchor-based multi-scale detection scheme. Features from the Path Aggregation Network (PAN) and Feature Pyramid Network (FPN) are combined to fuse semantic information with local features, facilitating detection on multi-scale fused feature maps. Each grid in the feature map is assigned multiple anchors of different aspect ratios, with the detection head predicting the position offsets, scaled height and width, as well as the probability and confidence for each class.
- Drivable Area Segmentation Decoder: Unlike previous models where features for segmentation tasks are derived from the last layer of the neck, YOLOPv2 connects the drivable area segmentation head prior to the FPN module. This approach utilizes features from less deep layers, which are more suitable for this task. To compensate for potential information loss, an additional upsampling layer is applied in the decoder stage.
- Lane Detection Decoder: This decoder focuses on identifying lane markings, which is crucial for lane-keeping and lane-changing maneuvers in autonomous driving systems. It branches out from the FPN layer to extract features from deeper levels. Given that lane markings are often slender and challenging to detect, deconvolution is applied in the decoder stage to improve performance.
3.4.1. Technical Specifications
YOLOPv2 operates with remarkable efficiency while maintaining high accuracy:
- Input Resolution: 640×640 pixels
- Inference Speed: 30+ FPS on consumer-grade GPU hardware
- Model Size: ~40MB, enabling deployment on embedded automotive systems
- Lane Line Detection: accuracy of 87.3% and IoU of 27.2%
- Half-precision Support: FP16 computation for accelerated inference
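For example, half-precision inference can be enabled when the model is loaded. The following is a sketch assuming the published TorchScript weights stored in data/weights/yolopv2.pt, as in the deployment layout described later:
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
half = device.type == "cuda"  # FP16 only pays off on a CUDA GPU

# The published YOLOPv2 weights are distributed as a TorchScript module
model = torch.jit.load("data/weights/yolopv2.pt", map_location=device)
model = model.half() if half else model.float()
model.eval()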
3.4.2. Key Advantages for Lane Departure Systems
YOLOPv2 offers several critical advantages for lane departure warning applications:
- Unified Processing: By handling object detection and lane segmentation simultaneously, the system gains contextual awareness of the entire driving scene
- Low Latency: Critical for time-sensitive warning systems, with end-to-end processing under 33ms
- Resilience to Conditions: Robust performance across varying lighting, weather, and road conditions
- Memory Efficiency: Shared feature extraction reduces computational overhead
- Integration Potential: The multi-task architecture allows expansion to additional ADAS functions with minimal additional hardware
3.5. Event Publishing in MQTT Broker
The Lane Departure Warning system utilizes the MQTT (Message Queuing Telemetry Transport) protocol to publish events when lane departures are detected. This enables real-time communication between different components of the autonomous driving system and facilitates integration with warning systems, data logging services, and monitoring applications.
3.5.1. HiveMQ Cloud Platform
For this project, we use HiveMQ Cloud as our MQTT broker service. HiveMQ Cloud ensures secure communication through TLS/SSL encrypted connections that include proper certificate validation, providing peace of mind in terms of security. Furthermore, it guarantees a high level of availability with a 99.9% uptime service level agreement, which is critical for reliable message delivery. The service is designed to scale, seamlessly supporting thousands of concurrent connections, and adheres to both MQTT 3.1.1 and MQTT 5.0 protocol standards. Its global infrastructure ensures that users have low-latency access from anywhere in the world.
3.5.2. MQTT Client Configuration
The system establishes a secure connection to the HiveMQ Cloud broker using the following configuration:
import os
import ssl
import paho.mqtt.client as mqtt

def setup_mqtt_client():
    # Retrieve the username and password from local storage or environment variables.
    # These credentials are stored locally and are not pushed to GitHub.
    username = os.getenv("HIVE_MQ_USERNAME")
    password = os.getenv("HIVE_MQ_PASSWORD")

    mqtt_client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="carla_lane_detector")
    mqtt_client.tls_set(cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLS)
    mqtt_client.username_pw_set(username, password)
    mqtt_client.connect("hivemqconnectionstring.s1.eu.hivemq.cloud", 8883)
    mqtt_client.loop_start()
    return mqtt_client
3.5.3. Event Structure and Publishing
When the system detects a lane departure, it constructs and publishes a structured JSON message:
mqtt_message = {
    "event": "lane_crossing",
    "system": "YOLOP",
    "crossing": crossing,
    "timestamp": datetime.now().isoformat(),
}
mqtt_client.publish(
    topic="carla/lane_detection",
    payload=json.dumps(mqtt_message),
    qos=1
)
The message contains:
- Event Type: The specific event being reported ("lane_crossing")
- Detection System: The algorithm that detected the crossing ("YOLOP")
- Crossing Status: Boolean indication of the lane boundary warning
- Timestamp: ISO-formatted date and time for event sequencing and correlation
Messages are published to the topic carla/lane_detection with Quality of Service (QoS) level 1, ensuring an at-least-once delivery guarantee.
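On the receiving side, any interested component (dashboard, logger, warning display) can subscribe to the same topic. A hypothetical minimal subscriber using the same paho-mqtt client could look like this (the client_id and credentials are illustrative; the broker address matches the publisher above):
import json
import ssl
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    if event.get("crossing"):
        print(f"[{event['timestamp']}] lane crossing reported by {event['system']}")

subscriber = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="lane_warning_display")
subscriber.tls_set(cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLS)
subscriber.username_pw_set(username, password)  # same HiveMQ credentials as the publisher
subscriber.on_message = on_message
subscriber.connect("hivemqconnectionstring.s1.eu.hivemq.cloud", 8883)
subscriber.subscribe("carla/lane_detection", qos=1)
subscriber.loop_forever()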
3.5.4. Benefits for Autonomous Driving Integration
The MQTT-based event publishing architecture offers several advantages for our lane departure warning system:
- Real-time Alerting: Sub-second notification of critical safety events
- System Decoupling: Detection and response systems can evolve independently
- Distributed Processing: Events can trigger responses across multiple vehicle systems
- Data Persistence: Integration with long-term storage for performance analysis
- Standards Compliance: Following industry standards facilitates integration with other systems
This design allows for flexible extension and integration with various components of autonomous driving systems while maintaining reliable communication channels for safety-critical information.
4. Testing
Our ADAS requires rigorous testing to ensure reliability and safety. To facilitate this, we've implemented a comprehensive testing framework with recording and playback capabilities that allow us to reproduce specific driving scenarios consistently.
4.1. Record and Playback Mode
4.1.1. Record Mode
The record mode captures vehicle control inputs during a drive session, allowing us to create reproducible test cases from real driving scenarios. When enabled, the system logs throttle, brake, and steering commands along with timestamps.
python carla_sync.py --record --weather "Clear Noon"
This generates a JSON file containing a sequence of control commands:
[
    {
        "timestamp": 6.08974165096879,
        "throttle": 0.0,
        "brake": 0.0,
        "steer": 0.0
    },
    {
        "timestamp": 6.123074986040592,
        "throttle": 0.6,
        "brake": 0.0,
        "steer": -0.05
    },
    {
        "timestamp": 6.156408321112394,
        "throttle": 0.8,
        "brake": 0.0,
        "steer": -0.25
    }
]
The system automatically generates sequential filenames for recordings using timestamps and sequence numbers, storing them in the test_commands/recorded directory:
def get_sequential_filename():
    """Generate a sequential filename for recordings"""
    recorded_dir = os.path.join("test_commands", "recorded")
    os.makedirs(recorded_dir, exist_ok=True)
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    pattern = os.path.join(recorded_dir, "control_log_*.json")
    existing_files = glob.glob(pattern)
    next_number = len(existing_files) + 1
    filename = f"control_log_{timestamp}_{next_number:03d}.json"
    return os.path.join(recorded_dir, filename)
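For completeness, a minimal sketch of how a recording session could accumulate commands and write them out with this helper (the control object and the snapshot-based timestamp come from the CARLA loop and are assumptions here):
recorded_commands = []

# Inside the simulation loop, after the control has been applied to the vehicle:
snapshot = world.get_snapshot()
recorded_commands.append({
    "timestamp": snapshot.timestamp.elapsed_seconds,
    "throttle": control.throttle,
    "brake": control.brake,
    "steer": control.steer,
})

# When the session ends, the whole sequence is dumped to a sequential file:
with open(get_sequential_filename(), "w") as f:
    json.dump(recorded_commands, f, indent=2)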
4.1.2. Playback Mode
The playback mode replays previously recorded driving sessions, creating consistent test conditions. This allows us to evaluate our lane detection algorithms under identical driving scenarios.
python carla_sync.py --playback control_log_20240407_120145_001.json --weather "Clear Noon"
During playback, the system reads the control commands from the specified JSON file and applies them sequentially to the vehicle:
if args.playback and playback_index < len(playback_data):
    control_data = playback_data[playback_index]
    control.throttle = control_data["throttle"]
    control.brake = control_data["brake"]
    control.steer = control_data["steer"]
    playback_index += 1
4.1.3. Synchronous Mode Importance
Initially, we encountered problems with playback reliability when using asynchronous mode. The timing differences between recording and playback sessions led to inconsistent behavior. Switching to CARLA’s synchronous mode resolved these issues by ensuring that the simulation steps forward only after all sensor data has been processed.
with CarlaSyncMode(world, camera_rgb, camera, fps=30) as sync_mode:
    while True:
        # Get data from all sensors
        out = sync_mode.tick(timeout=2.0)
The CarlaSyncMode context manager enforces timing consistency by:
- Enabling CARLA's synchronous mode
- Setting a fixed delta time between simulation steps
- Ensuring all sensor data is received before advancing the simulation
This synchronization is crucial for creating reproducible test scenarios, as it guarantees that control inputs are applied at consistent simulation times.
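Under the hood, CARLA's synchronous mode is enabled through the world settings. A minimal sketch of what a context manager like CarlaSyncMode does on entry, with values matching the fps=30 used above:
settings = world.get_settings()
settings.synchronous_mode = True         # the server waits for an explicit tick from the client
settings.fixed_delta_seconds = 1.0 / 30  # fixed simulation step, matching fps=30
world.apply_settings(settings)

# The client then advances the simulation one step at a time
frame = world.tick()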
4.2. Detection Logging System
The DetectionLogger class tracks lane invasion detections from both our YOLOP-based lane detection system and CARLA’s built-in lane invasion sensor. This allows us to compare and validate our detection algorithm against CARLA’s ground truth.
4.2.1. How Detection Logging Works
The logger is triggered in two different scenarios:
- YOLOP Detection: When our computer vision model detects a lane crossing
  if crossing != yolop_lane_invasion_detected:
      detection_logger.log_detection("YOLOP", crossing)
- CARLA Detection: When CARLA's lane invasion sensor is triggered
  def lane_invasion_callback(event):
      detection_logger.log_detection("CARLA", True)
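Internally, the logger only needs to store a timestamped record per call. A minimal sketch consistent with the tuples unpacked in get_stats below (timestamp, detector, status, plus a free-form note) could be:
import time

class DetectionLogger:
    def __init__(self, agreement_window=2.0):
        self.agreement_window = agreement_window  # seconds
        self.detections = []  # (timestamp, detector, status, note) tuples

    def log_detection(self, detector, status, note=""):
        # detector is "YOLOP" or "CARLA"; status is True for a detected crossing
        self.detections.append((time.time(), detector, status, note))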
4.2.2. Detection Agreement Logic
An important aspect of our testing framework is the ability to identify when both detection systems agree on a lane crossing event. Since the YOLOP vision-based system and CARLA's ground truth sensor do not trigger at exactly the same moment, we implement a time-window-based agreement system.
The agreement_window parameter (set to 2 seconds by default) defines the maximum time difference allowed between YOLOP and CARLA detections for them to be considered as referring to the same lane crossing event. When calculating statistics, the system groups detections into crossing events and identifies agreements:
def get_stats(self):
    crossing_events = []
    current_event = {"start": None, "end": None, "yolop": False, "carla": False}
    sorted_detections = sorted(self.detections, key=lambda x: x[0])

    for timestamp, detector, status, _ in sorted_detections:
        # Only consider positive crossing detections
        if not status:
            continue
        if current_event["start"] is None:
            # Start a new event
            current_event = {
                "start": timestamp,
                "end": timestamp,
                "yolop": detector == "YOLOP",
                "carla": detector == "CARLA"
            }
        elif timestamp - current_event["end"] > self.agreement_window:
            # This detection is beyond our time window, save the current event and start a new one
            crossing_events.append(current_event)
            current_event = {
                "start": timestamp,
                "end": timestamp,
                "yolop": detector == "YOLOP",
                "carla": detector == "CARLA"
            }
        else:
            # This detection belongs to the current event
            current_event["end"] = timestamp
            if detector == "YOLOP":
                current_event["yolop"] = True
            else:
                current_event["carla"] = True
This approach groups detections that occur within the agreement window into a single "crossing event." If both YOLOP and CARLA detect a lane crossing within this time window, it’s considered an agreement.
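The tail of get_stats (not shown above) then reduces these grouped events into the counters displayed during testing. A sketch consistent with the keys read in the next subsection (yolop_only, carla_only, agreements) could be:
    # Close the last open event after the loop (sketch continuation of get_stats)
    if current_event["start"] is not None:
        crossing_events.append(current_event)

    return {
        "total_events": len(crossing_events),  # illustrative key for the total count
        "yolop_only": sum(1 for e in crossing_events if e["yolop"] and not e["carla"]),
        "carla_only": sum(1 for e in crossing_events if e["carla"] and not e["yolop"]),
        "agreements": sum(1 for e in crossing_events if e["yolop"] and e["carla"]),
    }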
4.2.3. Test Results Visualization
During playback mode, the system displays real-time statistics about detection performance, including:
- Total number of detection events
- YOLOP-only detections (potential false positives)
- CARLA-only detections (potentially missed by our system)
- Confirmed crossings (when both systems agree)
def update_test_display(test_display):
    stats = detection_logger.get_stats()
    cv2.putText(test_display, f"YOLOP only: {stats.get('yolop_only', 0)}",
                (30, 170), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 128, 0), 1, cv2.LINE_AA)
    cv2.putText(test_display, f"CARLA only: {stats.get('carla_only', 0)}",
                (30, 210), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 1, cv2.LINE_AA)
    cv2.putText(test_display,
                f"Confirmed Crossings: {stats.get('agreements', 0)}",
                (30, 250), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 1, cv2.LINE_AA)
4.3. Environmental Testing
The system supports testing under various weather conditions using CARLA’s weather presets. This allows us to evaluate the robustness of our lane detection algorithm across different lighting and atmospheric conditions:
python carla_sync.py --playback control_log.json --weather "Cloudy Noon"
python carla_sync.py --playback control_log.json --weather "WetNoon"
python carla_sync.py --playback control_log.json --weather "HardRainNoon"
The test results for each scenario are logged to log/untracked/test_log.txt, creating a comprehensive record of algorithm performance across different conditions.
4.4. Tests Results
The diagrams below compare the two lane departure detection systems: YOLOP, our vision-based system built on YOLOPv2 that infers lane departures from camera images, and CARLA, the simulator's built-in sensor that directly reports lane invasions.
We defined various test scenarios, recorded them with our recording system and replayed them with the playback system on a multi-lane road that circles the city in the Town 4 map. These scenarios include:
- short_left_crossing
- straight
- drift
- long
- 5_crossing
Then, we tested the scenarios with different weather conditions, including:
- Clear Sunset
- Cloudy Night
- Mid Rainy Night (only for short tests)
- Mid Rain Sunset
- Wet Noon
4.4.1. Analysis By Test Scenario

In the "5_crossing" test, both YOLOP and CARLA systems show perfect detection rates across all weather conditions, with both systems detecting exactly 5 lane departure events in clear sunset, cloudy night, and mid-rain sunset conditions.

The "drift" test shows that it is very normal for only YOLOP to detect drifts, as the lanes are not actually crossed. YOLOP registers between 2-4 events depending on weather conditions, with peaks during rainy conditions, while CARLA’s lack of detections confirms that no genuine lane crossings occurred.

The "long" test shows that our system is quite reliable on detecting a lane invasion event, with a detection rate of 100% across all weather conditions.

The "short_left_crossing" test shows that our system is quite reliable on detecting a lane invasion event, with a detection rate of 100% across all weather conditions. However, our system shows a significant number of false positives, especially in the Mid Rainy Night condition, where it detects 2 events while CARLA only detects 1. This indicates that our system is more sensitive to lane crossings in adverse weather conditions, which may lead to false alarms.

The "straight" test is made to not do any lane invasion or drift at all. The results show that both systems are able to detect the absence of lane crossings, with CARLA showing 0 detections and YOLOP showing false positive detections in rainy conditions. This indicates that our system is not perfect and can still produce false positives even when no lane crossings occur.
4.4.2. Performance Analysis
We also logged the time taken for image inference by the lane detection system. The results are shown in the following diagram for the long test under the Clear Sunset weather condition.

The lane detection model demonstrates efficient real-time performance across multiple frames. Analysis of the processing logs revealed:
- Initialization overhead: the first frames showed significantly longer processing times (1.4611 s and 0.1936 s), corresponding to model initialization and resource allocation
- Steady-state performance: minimum processing time of 0.0437 seconds (43.7 ms), maximum processing time of 0.0615 seconds (61.5 ms)
- Typical processing range: 0.045-0.055 seconds
- This performance translates to approximately 16-22 frames per second during steady-state operation.
Processing Phase | Time (seconds)
---|---
Initialization (first frame) | 1.4611
Secondary initialization | 0.1936
Steady-state minimum | 0.0437
Steady-state maximum | 0.0615
Steady-state average | ~0.0475
These analysis results were obtained on a laptop with an RTX 4070 GPU.
5. Deployment
This section outlines the deployment process for the lane detection system, covering environment setup, configuration, and execution procedures.
5.1. System Requirements
5.1.1. Hardware Requirements
- NVIDIA GPU (recommended for optimal YOLOP model performance)
- Minimum 8GB RAM
- 20GB free disk space
5.1.2. Software Requirements
- Windows 10/11 or Linux (Ubuntu 18.04+)
- Python 3.7
- CARLA 0.9.15 simulator
- HiveMQ account (for MQTT-based event publishing)
5.2. Environment Setup
# Create a new conda environment
conda env create -f environment.yml
conda activate carla-env
5.3. Environment Configuration
- Create a .env file in the project root with your HiveMQ credentials:
  HIVE_MQ_USERNAME=your_username
  HIVE_MQ_PASSWORD=your_password
- Ensure the CARLA simulator is correctly installed.
- Download the YOLOP model weights and place them in the data/weights/ directory.
5.4. Deployment Structure
project_root/
├── analysis/ # Analysis modules
├── camera_lanes_analysis_async.py # Main simulation script
├── camera_lanes_analysis_sync.py # Synchronous simulation script suite for testing and recording
├── data/ # Model and data storage
│ └── weights/ # YOLOP model weights
│ └── yolopv2.pt # YOLOP model file
├── environment.yml # Conda environment configuration
├── launcher.py # Main launcher script
├── log/ # Log storage
│ ├── tracked/ # Performance logs
│ └── untracked/ # Test result logs
├── test_commands/ # Test scenario files
│ └── recorded/ # Recorded control sequences
├── utils/ # Utility modules
│ ├── YOLOPModel.py # Lane detection model
│ ├── carla.py # CARLA helpers
│ ├── DetectionLogger.py # Detection event logging
│ ├── image_cropper.py # Image preprocessing
│ └── utils.py # General utility functions for YOLOPv2
└── wheel_config.ini # Steering wheel configuration
5.5. Running the System
This section describes how to run the lane detection system, including the launcher mechanism and the two operational modes: asynchronous (for normal usage) and synchronous (for testing).
5.5.1. System Launcher
Our system includes a launcher component (launcher.py) that simplifies the startup process by coordinating the launch of both the CARLA simulator and our lane detection application. The launcher handles:
-
Starting the CARLA simulator with appropriate parameters
-
Waiting for CARLA to fully initialize
-
Launching the selected lane detection script (either asynchronous or synchronous mode)
The launcher ensures proper sequencing and configuration, reducing errors during startup and making the system more accessible to users without extensive technical knowledge.
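A simplified sketch of that flow follows; the CARLA executable path, the startup wait, and the mode flag are illustrative assumptions, and the actual launcher.py may differ:
import subprocess
import sys
import time

CARLA_EXECUTABLE = "./CarlaUE4.sh"  # assumed path to the CARLA 0.9.15 binary

def main():
    # 1. Start the CARLA simulator
    carla_proc = subprocess.Popen([CARLA_EXECUTABLE])

    # 2. Give the server time to initialize (a real launcher would poll the RPC port)
    time.sleep(10)

    # 3. Launch the selected lane detection script
    script = ("camera_lanes_analysis_sync.py"
              if "--sync" in sys.argv else "camera_lanes_analysis_async.py")
    try:
        subprocess.run([sys.executable, script], check=True)
    finally:
        carla_proc.terminate()

if __name__ == "__main__":
    main()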
5.5.2. Asynchronous Mode
The asynchronous mode (camera_lanes_analysis_async.py) is our primary operational script for real-time lane detection. This script:
-
Provides real-time lane detection with minimal latency
-
Supports multiple controller inputs (G29 Steering wheel, Xbox One controller, Keyboard)
-
Displays real-time visualization of lane detection results
-
Connects to the CARLA simulator and manages all vehicle controls
-
Processes camera feeds using the YOLOP model for lane detection
-
Compares YOLOP detection with CARLA’s built-in lane invasion detection
-
Sends notifications through MQTT for integration with other systems
This mode is optimized for real-world performance and offers the most responsive experience when operating the system manually.

5.5.3. Synchronous Mode (Test Suite)
The synchronous mode (camera_lanes_analysis_sync.py) is specifically designed for testing and validation. Key features include:
- Fixed framerate execution for consistent, reproducible results
- Recording capability that saves control inputs (throttle, brake, steering) to JSON files
- Playback functionality to replay recorded driving sessions exactly
- Detailed statistics collection for comparing detection methods
- Support for different weather conditions through command-line options
- Logging of test results for later analysis


5.6. Troubleshooting
5.6.1. Common Issues
- CARLA Connection Failure: Ensure the CARLA server is running on the specified host and port.
- Model Initialization Error: Check that the YOLOP model files are correctly placed in the expected directory.
- Controller Not Detected: Verify the controller is connected and correctly configured in wheel_config.ini.
- HiveMQ Connection Failure: Confirm the credentials in the .env file and network connectivity to HiveMQ Cloud.
5.6.2. Log Files
Examine the logs in the log/ directory for detailed error information:
- log/tracked/frame_performance_log.txt: Processing performance metrics
- log/untracked/test_log.txt: Test results and statistics
6. Conclusion
Our Lane Departure Warning system implementation successfully demonstrates the viability of vision-based approaches for detecting unintentional lane departures in various driving scenarios. Throughout this project, we achieved several key objectives:
- Successfully implemented a real-time lane detection system using YOLOPv2, with inference times averaging approximately 0.0475 seconds (47.5 ms) per frame
- Developed a comprehensive testing framework with recording and playback capabilities to ensure consistent evaluation
- Validated our approach against CARLA's ground truth across multiple driving scenarios and environmental conditions
- Established a reliable event publishing system using MQTT for integration with other vehicle systems
The detection performance analysis demonstrated good agreement between our vision-based approach and CARLA's built-in lane invasion detection across most test scenarios. Our system proved particularly good at detecting actual lane crossing events with high reliability, though it occasionally exhibited increased sensitivity in adverse weather conditions.
6.1. Limitations and Challenges
We encountered several challenges during implementation and testing. Our system demonstrated higher sensitivity in rainy conditions, resulting in occasional false positives. Additionally, we had to exclude the rainy night condition from the long test scenario due to a significant issue where the CARLA sensor failed to detect crossings. This sensor failure caused our model to behave unpredictably, including skipping frames, making meaningful comparison impossible.
6.2. Future Work
Based on our findings, several avenues for future improvement include:
- Implementing adaptive detection thresholds based on environmental conditions to reduce false positives in adverse weather
- Developing a more robust testing methodology less dependent on CARLA's built-in sensors
- Extending the system to handle more complex road scenarios, including construction zones and degraded lane markings
Overall, this project demonstrates the effectiveness of modern computer vision approaches for lane departure warning systems while highlighting areas where further research and development are necessary to achieve production-level reliability across all driving conditions.