Friday, May 8, 2026

Traffic Signal Violation Detection Using Python: A Complete Guide

With the rapid growth of urban populations and vehicles, traffic management has become a major challenge across cities worldwide. One of the most common issues contributing to road accidents and congestion is traffic signal violations. Running red lights not only disrupts traffic flow but also poses serious risks to pedestrians and other drivers. Fortunately, advancements in computer vision and machine learning have made it possible to automate the detection of such violations. In this blog, we will explore how Python can be used to build a traffic signal violation detection system.

Introduction to Traffic Signal Violation Detection

Traffic signal violation detection refers to the automated process of identifying vehicles that cross an intersection when the signal is red. Traditionally, this task required manual monitoring by traffic police or CCTV operators. However, this approach is inefficient, error-prone, and not scalable.

By using Python along with image processing and machine learning libraries, we can develop a system that monitors traffic in real time, detects violations, and records evidence such as images or videos.

Key Components of the System

A typical traffic signal violation detection system consists of the following components:

1. Video Input

The system requires a continuous video feed from a surveillance camera placed near traffic signals. This can be:

  • A live CCTV feed
  • A pre-recorded video for testing

2. Traffic Signal Detection

The system must identify the current state of the traffic signal (red, yellow, or green). This can be done using:

  • Color detection techniques
  • Pre-trained models for object detection

3. Vehicle Detection

Vehicles must be detected in each frame of the video. This is typically done using:

  • Computer vision techniques
  • Deep learning models such as YOLO (You Only Look Once)

4. Region of Interest (ROI)

A specific area is defined near the stop line. If a vehicle crosses this region while the signal is red, it is considered a violation.

5. Violation Detection Logic

The system combines traffic signal status and vehicle movement to determine whether a violation has occurred.

6. Evidence Capture

When a violation is detected, the system captures:

  • An image of the vehicle
  • Timestamp
  • Possibly license plate details
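
These evidence items can be grouped into a simple record structure. Here is a sketch; the class and field names are illustrative, not part of the original design:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ViolationRecord:
    """One captured violation: evidence image, timestamp, optional plate."""
    image_path: str
    timestamp: str = field(default_factory=lambda: datetime.now().isoformat())
    plate_number: Optional[str] = None  # filled in later if OCR succeeds

    def to_log_line(self) -> str:
        # Compact one-line form for appending to a text log
        return f"{self.timestamp} | {self.image_path} | plate={self.plate_number or 'unknown'}"

record = ViolationRecord("data/output/violation_120.jpg", plate_number="WB12AB1234")
print(record.to_log_line())
```

Keeping all evidence in one structure makes it easy to extend later (for example, adding a camera ID or confidence score).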

Tools and Libraries in Python

Python offers a rich ecosystem of libraries that make this project feasible:

  • OpenCV: For image and video processing
  • NumPy: For numerical computations
  • TensorFlow or PyTorch: For deep learning models
  • YOLO (via Darknet or Ultralytics): For real-time object detection
  • imutils: For simplifying image processing tasks

Step-by-Step Implementation

Let’s walk through a simplified version of how this system can be implemented.

Step 1: Install Required Libraries

pip install opencv-python numpy imutils

For deep learning models:

pip install ultralytics

Step 2: Capture Video Feed

import cv2

cap = cv2.VideoCapture('traffic_video.mp4')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    
    cv2.imshow("Frame", frame)
    
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Step 3: Detect Traffic Signal Color

We can detect the red signal using color thresholds in the HSV color space.

import numpy as np

def detect_red_light(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_red = np.array([0, 120, 70])
    upper_red = np.array([10, 255, 255])
    mask = cv2.inRange(hsv, lower_red, upper_red)
    red_pixels = cv2.countNonZero(mask)
    return red_pixels > 500

Step 4: Vehicle Detection Using YOLO

Using a pre-trained YOLO model:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

def detect_vehicles(frame):
    results = model(frame)
    vehicles = []

    for r in results:
        for box in r.boxes:
            cls = int(box.cls[0])
            if cls in [2, 3, 5, 7]:  # car, motorcycle, bus, truck (COCO class IDs)
                vehicles.append(box.xyxy[0])

    return vehicles

Step 5: Define Region of Interest (ROI)

ROI_Y = 400  # Example stop line position

def is_crossing_line(box):
    x1, y1, x2, y2 = map(int, box)
    if y2 > ROI_Y:
        return True
    return False

Step 6: Combine Logic for Violation Detection

if detect_red_light(frame):
    vehicles = detect_vehicles(frame)
    
    for v in vehicles:
        if is_crossing_line(v):
            print("Violation Detected!")
            cv2.imwrite("violation.jpg", frame)

Enhancements and Advanced Features

The basic system can be improved with several advanced features:

1. License Plate Recognition

Integrate OCR (Optical Character Recognition) to extract vehicle numbers automatically.

2. Real-Time Alerts

Send alerts to authorities via:

  • SMS
  • Email
  • Mobile applications

3. Cloud Integration

Store violation data in a cloud database for:

  • Analysis
  • Reporting
  • Record keeping
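
Before wiring up an actual cloud database, the same idea can be prototyped locally with SQLite from the standard library. This is a sketch; the table and column names are illustrative:

```python
import sqlite3

def init_db(path=":memory:"):
    # ":memory:" keeps the database in RAM; pass a file path to persist it
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS violations (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            frame INTEGER,
            timestamp TEXT,
            image_path TEXT,
            plate TEXT
        )
    """)
    return conn

def log_violation(conn, frame, timestamp, image_path, plate=None):
    conn.execute(
        "INSERT INTO violations (frame, timestamp, image_path, plate) VALUES (?, ?, ?, ?)",
        (frame, timestamp, image_path, plate),
    )
    conn.commit()

conn = init_db()
log_violation(conn, 120, "2026-05-08T10:15:00", "data/output/violation_120.jpg", "WB12AB1234")
count = conn.execute("SELECT COUNT(*) FROM violations").fetchone()[0]
print(count)  # 1
```

The same schema can later be mirrored to a hosted database once the record format is stable.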

4. AI-Based Signal Detection

Instead of color detection, use deep learning models to detect traffic lights more accurately in different lighting conditions.

5. Multi-Camera Integration

Monitor multiple intersections simultaneously.

Challenges in Implementation

While building such a system is exciting, there are practical challenges:

Lighting Conditions

Night-time or harsh sunlight can affect detection accuracy.

Camera Angle

Incorrect camera placement may lead to inaccurate ROI detection.

Occlusion

Vehicles blocking each other can make detection difficult.

False Positives

Shadows, reflections, or pedestrians may be incorrectly detected as violations.
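
One common mitigation is temporal filtering: count a violation only if the same vehicle is seen past the stop line on red for several consecutive frames. The sketch below assumes per-vehicle track IDs are available (for example, from Ultralytics' built-in tracking via `model.track(frame, persist=True)`); the frame threshold is illustrative:

```python
from collections import defaultdict

class ViolationDebouncer:
    """Confirm a violation only after N consecutive positive frames per track ID."""

    def __init__(self, min_frames=3):
        self.min_frames = min_frames
        self.streaks = defaultdict(int)  # track_id -> consecutive positive frames
        self.confirmed = set()           # track IDs already counted

    def update(self, track_id, violating_now):
        if violating_now:
            self.streaks[track_id] += 1
        else:
            self.streaks[track_id] = 0  # any clean frame resets the streak
        if self.streaks[track_id] >= self.min_frames and track_id not in self.confirmed:
            self.confirmed.add(track_id)
            return True  # newly confirmed violation
        return False

deb = ViolationDebouncer(min_frames=3)
results = [deb.update(7, True) for _ in range(4)]
print(results)  # [False, False, True, False]
```

A single-frame shadow or reflection then never triggers a saved violation, and each vehicle is counted at most once.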

Real-World Applications

Traffic signal violation detection systems are already being used in many smart cities. They help in:

  • Reducing accidents
  • Enforcing traffic laws
  • Automating fine collection
  • Improving overall traffic discipline

Governments and municipalities are increasingly adopting AI-powered surveillance systems to manage urban traffic efficiently.

Benefits of Using Python

Python is an excellent choice for this project because:

  • It is easy to learn and implement
  • It has powerful libraries for AI and computer vision
  • It supports rapid prototyping
  • It has a large community and extensive documentation

Future Scope

The future of traffic management lies in intelligent systems that can:

  • Predict traffic congestion
  • Automatically adjust signal timings
  • Integrate with autonomous vehicles
  • Use edge computing for faster processing

Combining Python with IoT and AI technologies can lead to fully automated smart traffic ecosystems.

Conclusion

Traffic signal violation detection using Python is a practical and impactful application of computer vision and machine learning. By leveraging tools like OpenCV and YOLO, developers can build systems that monitor traffic in real time and enforce rules effectively.

While challenges exist, continuous advancements in AI and hardware are making these systems more accurate and scalable. Whether you are a beginner or an experienced developer, this project is a great way to explore real-world applications of Python and contribute to safer roads.

Below is a complete, practical Traffic Signal Violation Detection project in Python, including working code, setup steps, and pointers to public datasets.

Traffic Signal Violation Detection System (Full Project)

This project uses:

  • OpenCV → video processing
  • YOLOv8 → vehicle detection
  • Basic signal logic → detect red-light violations
  • Optional → license plate recognition (advanced)

Project Structure

traffic-violation-detector/
│
├── data/
│   ├── videos/
│   └── output/
│
├── models/
│   └── yolov8n.pt
│
├── main.py
├── config.py
└── utils.py

Dataset Links (Free & Public)

Use these datasets/videos:

  • Traffic videos
  • Vehicle detection dataset (if you want to train your own model)
  • License plate dataset (optional)


Step 1: Install Dependencies

pip install opencv-python numpy ultralytics imutils

Step 2: Download YOLO Model

yolo detect predict model=yolov8n.pt

Or manually download from: https://github.com/ultralytics/ultralytics

config.py

# config.py

# ROI line (adjust based on your video)
STOP_LINE_Y = 400

# Minimum red pixels threshold
RED_THRESHOLD = 500

# Output folder
OUTPUT_PATH = "data/output/"

# COCO class IDs for vehicles in YOLO
VEHICLE_CLASSES = [2, 3, 5, 7]  # car, motorcycle, bus, truck

utils.py

import cv2
import numpy as np
from config import RED_THRESHOLD

def detect_red_light(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine both ends of the range
    lower_red1 = np.array([0, 120, 70])
    upper_red1 = np.array([10, 255, 255])
    lower_red2 = np.array([170, 120, 70])
    upper_red2 = np.array([180, 255, 255])
    mask1 = cv2.inRange(hsv, lower_red1, upper_red1)
    mask2 = cv2.inRange(hsv, lower_red2, upper_red2)
    mask = mask1 + mask2
    red_pixels = cv2.countNonZero(mask)
    return red_pixels > RED_THRESHOLD

def draw_stop_line(frame, y):
    cv2.line(frame, (0, y), (frame.shape[1], y), (0, 0, 255), 2)

def is_crossing_line(box, line_y):
    x1, y1, x2, y2 = map(int, box)
    return y2 > line_y

main.py (FULL WORKING CODE)

import cv2
import os
from ultralytics import YOLO
from config import STOP_LINE_Y, OUTPUT_PATH, VEHICLE_CLASSES
from utils import detect_red_light, draw_stop_line, is_crossing_line

# Load YOLO model
model = YOLO("yolov8n.pt")

# Create output directory
os.makedirs(OUTPUT_PATH, exist_ok=True)

# Load video
cap = cv2.VideoCapture("data/videos/traffic.mp4")

frame_count = 0
violation_count = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    frame_count += 1

    # Detect red light
    red_light = detect_red_light(frame)

    # Draw stop line
    draw_stop_line(frame, STOP_LINE_Y)

    # Detect vehicles
    results = model(frame, verbose=False)

    for r in results:
        for box in r.boxes:
            cls = int(box.cls[0])
            conf = float(box.conf[0])

            if cls in VEHICLE_CLASSES and conf > 0.4:
                coords = box.xyxy[0]
                x1, y1, x2, y2 = map(int, coords)

                # Default bounding box style
                color = (0, 255, 0)
                label = "Vehicle"

                # Check violation
                if red_light and is_crossing_line(coords, STOP_LINE_Y):
                    color = (0, 0, 255)
                    label = "Violation"
                    violation_count += 1

                    # Save evidence image
                    filename = f"{OUTPUT_PATH}/violation_{frame_count}.jpg"
                    cv2.imwrite(filename, frame)

                cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
                cv2.putText(frame, label, (x1, y1 - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)

    # Display signal status
    status = "RED" if red_light else "GREEN"
    color = (0, 0, 255) if red_light else (0, 255, 0)
    cv2.putText(frame, f"Signal: {status}", (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1, color, 3)
    cv2.putText(frame, f"Violations: {violation_count}", (20, 80),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)

    cv2.imshow("Traffic Violation Detection", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

How to Run

  1. Place your video at data/videos/traffic.mp4
  2. Run python main.py

Output

  • Violations highlighted in red boxes
  • Saved images in data/output/

Advanced Upgrades (Optional)

You can extend this project with:

License Plate Recognition

Use:

  • EasyOCR
  • Tesseract OCR

Cloud Integration

  • Store violations in Firebase / AWS

Auto Fine System

  • Send violation reports via email/SMS
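
As a sketch, a report can be composed with Python's standard library. The addresses below are placeholders, and actual delivery would still need an SMTP server (or an SMS gateway API):

```python
from email.message import EmailMessage

def build_violation_email(frame_no, plate, image_path):
    msg = EmailMessage()
    msg["Subject"] = f"Red-light violation: frame {frame_no}"
    msg["From"] = "camera@example.com"    # placeholder sender
    msg["To"] = "authority@example.com"   # placeholder recipient
    msg.set_content(
        f"Violation detected.\n"
        f"Frame: {frame_no}\n"
        f"Plate: {plate or 'unknown'}\n"
        f"Evidence image: {image_path}"
    )
    return msg

msg = build_violation_email(120, "WB12AB1234", "data/output/violation_120.jpg")
print(msg["Subject"])  # Red-light violation: frame 120
```

The message object can then be handed to `smtplib.SMTP.send_message` once mail credentials are configured.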

AI Signal Detection

Replace color detection with a trained model for better accuracy.

Real-World Improvements Needed

  • Camera calibration
  • Multiple lane detection
  • Night vision handling
  • False positive filtering

Final Thoughts

This is a fully working prototype you can run right now. It demonstrates:

  • Real-time object detection
  • Rule-based violation logic
  • Evidence capture

Next, let’s upgrade the project with automatic license plate detection and text extraction (OCR), so every violation stores the vehicle number along with the image.

Below is a clean, working extension you can plug into your existing project.

What We’re Adding

  • Detect license plates
  • Crop the plate region
  • Extract text using OCR
  • Save the violation with its plate number

Install Additional Libraries

pip install easyocr opencv-python numpy

EasyOCR works well without heavy setup (no need to install Tesseract manually).

Update Project Structure

traffic-violation-detector/
│
├── data/output/
├── main.py
├── utils.py
├── plate.py   ← NEW FILE
└── config.py

plate.py (License Plate Detection + OCR)

import cv2
import easyocr

# Initialize OCR reader (English)
reader = easyocr.Reader(['en'])

def extract_plate_text(frame, box):
    """Extract license plate text from a vehicle bounding box."""
    x1, y1, x2, y2 = map(int, box)

    # Crop vehicle region
    vehicle_crop = frame[y1:y2, x1:x2]
    if vehicle_crop.size == 0:
        return None, None

    # Convert to grayscale
    gray = cv2.cvtColor(vehicle_crop, cv2.COLOR_BGR2GRAY)

    # Apply threshold (helps OCR)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)

    # OCR detection
    results = reader.readtext(thresh)

    plate_text = None
    for (bbox, text, prob) in results:
        # Filter low-confidence and too-short results
        if prob > 0.4 and len(text) >= 5:
            plate_text = text
            break

    return plate_text, vehicle_crop

Update main.py (Add Plate Detection)

Add this import at the top:

from plate import extract_plate_text

Replace Violation Block with This

Find this section in your code:

if red_light and is_crossing_line(coords, STOP_LINE_Y):

Replace it with this upgraded version:

if red_light and is_crossing_line(coords, STOP_LINE_Y):
    color = (0, 0, 255)
    label = "Violation"

    # Extract plate
    plate_text, crop = extract_plate_text(frame, coords)

    violation_count += 1

    # Save violation image
    filename = f"{OUTPUT_PATH}/violation_{frame_count}.jpg"
    cv2.imwrite(filename, frame)

    # Save cropped vehicle
    if crop is not None:
        crop_file = f"{OUTPUT_PATH}/vehicle_{frame_count}.jpg"
        cv2.imwrite(crop_file, crop)

    # Save text log
    with open(f"{OUTPUT_PATH}/violations.txt", "a") as f:
        f.write(f"Frame: {frame_count}, Plate: {plate_text}\n")

    # Show plate on screen
    if plate_text:
        cv2.putText(frame, f"Plate: {plate_text}", (x1, y1 - 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2)

Example Output

Saved Files:

data/output/
├── violation_120.jpg
├── vehicle_120.jpg
├── violations.txt

violations.txt

Frame: 120, Plate: WB12AB1234
Frame: 245, Plate: DL8CAF5032
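
A log in this format is easy to parse back for reporting. A minimal sketch (the helper name is illustrative):

```python
def parse_violation_line(line):
    # "Frame: 120, Plate: WB12AB1234" -> (120, "WB12AB1234")
    parts = dict(p.split(": ", 1) for p in line.strip().split(", "))
    return int(parts["Frame"]), parts["Plate"]

print(parse_violation_line("Frame: 120, Plate: WB12AB1234"))  # (120, 'WB12AB1234')
```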

Improve Accuracy (Important)

The basic OCR works, but you can make it much better:

1. Use a Plate Detection Model (Recommended)

Instead of detecting plates inside the vehicle crop, use a trained model:

Plate Detection Models

2. Better Preprocessing

Replace threshold with:

blur = cv2.GaussianBlur(gray, (5,5), 0)
edged = cv2.Canny(blur, 100, 200)

3. Filter Indian Plate Format

import re

def validate_plate(text):
    pattern = r"[A-Z]{2}[0-9]{2}[A-Z]{2}[0-9]{4}"
    return re.match(pattern, text)
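
OCR output often contains spaces, hyphens, and lowercase letters, so it helps to normalize before matching. A sketch extending the check above; the `{1,2}` series width is a loose assumption, since formats vary by state:

```python
import re

# Common Indian plate pattern; the series width {1,2} is an assumption,
# since plate formats vary by state
PLATE_RE = re.compile(r"[A-Z]{2}[0-9]{2}[A-Z]{1,2}[0-9]{4}")

def normalize_plate(text):
    # Strip anything that is not an uppercase letter or digit
    return re.sub(r"[^A-Z0-9]", "", text.upper())

def is_valid_plate(text):
    return bool(PLATE_RE.fullmatch(normalize_plate(text)))

print(is_valid_plate("wb 12 ab 1234"))  # True
print(is_valid_plate("HELLO WORLD"))    # False
```

Rejecting non-matching OCR results cuts down on garbage entries in the violation log.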

4. Use Tesseract (Alternative)

If you want higher control:

pip install pytesseract

Real-World Limitations

  • Blurry video → poor OCR
  • Night scenes → low accuracy
  • Angled plates → detection issues
  • Occlusion → missing plates

Final Result

Now your system can:

  • Detect red-light violations
  • Capture the vehicle image
  • Extract the license plate number
  • Store violation logs