Research · March 22, 2025 · 14 min read

What 1,000 Hours of Indian CCTV Footage Taught Us About Traffic

Vehicle distributions, temporal patterns, violation hotspots, and edge cases — the data tells a story that no traffic survey captures.

Hansraj Patel

Looking at What Nobody Watches

India records an enormous amount of traffic video. Tens of millions of cameras, running 24/7, capturing every vehicle, every violation, every near-miss. Almost none of it is analyzed. It is recorded, stored for 30 days, and overwritten.

We spent six months processing 1,000 hours of Indian CCTV footage through our pipeline. 120 camera locations across 8 cities: Delhi, Mumbai, Bangalore, Hyderabad, Ahmedabad, Pune, Jaipur, and Lucknow. Highway cameras, intersection cameras, toll plaza cameras, residential area cameras.

The raw numbers: 108 million frames processed. 2.1 billion object detections. 340 million unique tracks. 12.4 million license plates read. 890,000 events flagged.

What follows is what the data told us.

What India’s Roads Actually Look Like in Data

Every traffic planning document in India cites vehicle registration statistics. According to VAHAN (the national vehicle registration database), India has approximately 300 million registered vehicles. But registration data does not tell you what is actually on the road.

Vehicle Type Distribution

Across our 1,000 hours, the observed vehicle type distribution was:

| Vehicle Type | % of Traffic | Notes |
| --- | --- | --- |
| Two-wheelers | 46.2% | Motorcycles 31%, scooters 12%, e-scooters 3.2% |
| Cars | 22.8% | Hatchbacks dominate (58% of cars) |
| Auto-rickshaws | 9.4% | Higher in cities (14% in Bangalore) |
| Commercial trucks | 7.1% | Heavily time-dependent (see below) |
| Buses | 4.3% | Public transport + school + private |
| E-rickshaws | 3.8% | Growing fast — 0.8% in 2023 data |
| Bicycles | 2.1% | Higher in smaller cities and early morning |
| Tractors & agricultural | 1.8% | Strongly regional and seasonal |
| Pedestrians in roadway | 1.4% | Not sidewalk pedestrians — in the road |
| Animals | 0.7% | Cows 0.4%, dogs 0.2%, other 0.1% |
| Other (handcarts, jugaads, etc.) | 0.4% | The long tail |

The first thing that strikes you: two-wheelers are nearly half of all traffic. Traffic management systems designed around cars miss half the picture.

The second thing: 0.7% animals. That sounds small until you compute the absolute number. At a busy intersection with 3,000 vehicles per hour, 0.7% means 21 animal encounters per hour. One every three minutes. Each one is a potential accident.
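
The arithmetic scales linearly with volume, which makes it easy to sanity-check:

```python
# Expected animal encounters per hour, given the observed 0.7% animal
# share of traffic from the dataset.
def animal_encounters_per_hour(vehicles_per_hour: float,
                               animal_share: float = 0.007) -> float:
    return vehicles_per_hour * animal_share

encounters = animal_encounters_per_hour(3000)  # ~21 encounters per hour
minutes_between = 60 / encounters              # roughly one every 3 minutes
```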

The Maruti Effect

Within the car category, the brand distribution tells a story about Indian roads:

| Brand | % of Cars |
| --- | --- |
| Maruti Suzuki | 41.3% |
| Hyundai | 18.7% |
| Tata | 12.4% |
| Mahindra | 8.9% |
| Kia | 5.2% |
| Toyota | 4.1% |
| Honda | 3.8% |
| Other | 5.6% |

Maruti Suzuki commands 41% of observed car traffic. This has practical implications for computer vision: our detection model’s “car” prior is effectively a “Maruti” prior. If the model gets Maruti vehicles right, it gets 41% of car detection right. We weight our training data accordingly.
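
One way to apply that weighting is importance sampling over the training set, so the sampled brand mix matches the observed road mix. This is a minimal sketch, not our actual pipeline: the brand shares come from the table above, while the dataset counts and function name are illustrative.

```python
# Observed brand shares from the footage (fractions of car detections).
BRAND_SHARE = {"maruti": 0.413, "hyundai": 0.187, "tata": 0.124,
               "mahindra": 0.089, "kia": 0.052, "toyota": 0.041,
               "honda": 0.038, "other": 0.056}

def sampling_weights(dataset_counts: dict, target_share: dict) -> dict:
    """Per-brand sampling weight so the sampled mix matches the road mix."""
    total = sum(dataset_counts.values())
    return {b: target_share[b] / (dataset_counts[b] / total)
            for b in dataset_counts}

# Hypothetical dataset with uniform brand counts: Maruti images get the
# largest weight because they are under-represented relative to the road.
counts = {b: 1000 for b in BRAND_SHARE}
weights = sampling_weights(counts, BRAND_SHARE)
```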

The E-Rickshaw Explosion

E-rickshaws barely existed in our 2023 baseline data (0.8% of traffic). In our 2025 data, they are at 3.8%. In Delhi specifically, they are at 6.2%. This is the fastest-growing vehicle category on Indian roads.

Most CV models trained before 2023 do not have an e-rickshaw class. They classify e-rickshaws as auto-rickshaws (wrong — different size, speed, behavior) or bicycles (wrong — three wheels, carries passengers). This is a real problem for any deployed system that does not retrain regularly.

Temporal Patterns

Traffic is not stationary. The same intersection looks completely different at 7 AM, 2 PM, and 11 PM. Our data reveals sharp temporal patterns that any deployed system must account for.

The Daily Cycle

Aggregated across all urban cameras, hourly traffic volume follows a distinctive Indian pattern:

Vehicles per hour (normalized, urban arterial)

100% │                    ╭─╮
 90% │               ╭───╯ ╰──╮
 80% │         ╭────╯         ╰──╮
 70% │       ╭─╯                  ╰─╮
 60% │     ╭─╯                      ╰──╮
 50% │    ╭╯                            ╰─╮
 40% │   ╭╯                               ╰╮
 30% │  ╭╯                                  ╰╮
 20% │ ╭╯                                    ╰╮
 10% │╭╯                                      ╰╮
  0% │╯                                        ╰──
     └───────────────────────────────────────────────
      0  2  4  6  8  10 12 14 16 18 20 22 24
                     Hour of Day

Key observations:

Morning peak: 8:30-10:30 AM. Not 7-9 AM as in Western cities. Indian office hours typically start at 9:30-10:00 AM. School drop-off creates a sub-peak at 7:30-8:30 AM.

No distinct lunch dip. Western traffic data shows a clear valley between morning and evening peaks. Indian traffic stays elevated through the afternoon. Commercial activity (deliveries, errand runs, shop workers) fills the midday.

Evening peak: 5:30-8:30 PM. Broader and higher than the morning peak. Office exits, school pickups, evening shopping, and social trips all overlap. This is when accident rates spike.

Late-night commercial surge: 11 PM-2 AM. Truck traffic peaks after midnight. Many Indian cities restrict heavy vehicle entry during daytime. Trucks wait at city limits and enter after 11 PM. This creates a secondary peak that is invisible to any analysis that only looks at daytime data.

Minimum: 3-5 AM. Traffic drops to 5-8% of peak. But this is not zero. Night-shift workers, early morning delivery vehicles, long-distance trucks, and — importantly — wrong-way drivers and drunk drivers are overrepresented in this window.

Two-Wheeler Behavior by Time

Two-wheelers show a different temporal pattern than cars:

  • Pre-dawn (4-6 AM): Newspaper delivery, milk delivery. Predominantly single-rider motorcycles.
  • Morning rush (8-10 AM): Heavy two-wheeler traffic. Triple-riding is most common in this window (two adults + child going to school/work).
  • Afternoon (12-3 PM): Food delivery riders dominate two-wheeler traffic. Identifiable by insulated bags.
  • Evening rush (5-8 PM): Peak two-wheeler volume. Lane splitting reaches maximum density.
  • Night (9 PM-12 AM): Two-wheeler percentage drops sharply. Remaining riders show higher violation rates (no helmet, no lights).

Weekend vs. Weekday

| Metric | Weekday | Weekend | Change |
| --- | --- | --- | --- |
| Peak hour volume | 4,200 veh/hr | 2,800 veh/hr | -33% |
| Two-wheeler share | 48% | 38% | -21% |
| Car share | 21% | 32% | +52% |
| Morning peak start | 8:30 AM | 10:30 AM | +2 hrs |
| Evening peak end | 8:30 PM | 10:00 PM | +1.5 hrs |
| Commercial truck volume | 100% | 65% | -35% |

Weekends are fundamentally different. Car share increases because families drive together for leisure. Two-wheeler share drops because commuters stay home. The morning peak shifts later. And overall volume drops by a third.

Any system that uses a fixed model for all days will either over-alert on weekends (applying weekday thresholds to lower traffic) or under-alert on weekdays. Our system maintains separate behavioral baselines for weekdays, Saturdays, and Sundays.
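
The separate baselines can be as simple as a lookup keyed by day type. The peak volumes below come from the table above; the 30% anomaly tolerance and function names are illustrative placeholders, not our production configuration.

```python
from datetime import date

# Per-day-type behavioral baselines (values from the weekday/weekend table).
BASELINES = {
    "weekday":  {"peak_veh_per_hr": 4200, "two_wheeler_share": 0.48},
    "saturday": {"peak_veh_per_hr": 2800, "two_wheeler_share": 0.38},
    "sunday":   {"peak_veh_per_hr": 2800, "two_wheeler_share": 0.38},
}

def day_type(d: date) -> str:
    # date.weekday(): Monday = 0 ... Sunday = 6
    return {5: "saturday", 6: "sunday"}.get(d.weekday(), "weekday")

def is_volume_anomaly(d: date, veh_per_hr: float, tolerance: float = 1.3) -> bool:
    """Flag volume exceeding this day type's peak baseline by 30%."""
    return veh_per_hr > BASELINES[day_type(d)]["peak_veh_per_hr"] * tolerance
```

With a single fixed baseline, 4,000 veh/hr looks the same on any day; with per-day-type baselines it is unremarkable on a weekday but well above the Saturday peak.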

Violation Patterns

Our pipeline flags events that represent potential violations. Over 1,000 hours, we flagged 890,000 events. After sampling and manual verification (3% sample, ~27,000 events reviewed), we estimate a precision of 84% — meaning 84% of flagged events are genuine violations.
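
At that sample size the estimate is statistically tight. A sketch of the math, using a standard binomial normal-approximation interval (the helper name is ours, not a library call):

```python
import math

def precision_estimate(reviewed: int, confirmed: int, z: float = 1.96):
    """Point estimate and ~95% normal-approximation CI for precision,
    from a manually verified sample of flagged events."""
    p = confirmed / reviewed
    half_width = z * math.sqrt(p * (1 - p) / reviewed)
    return p, (p - half_width, p + half_width)

# ~27,000 reviewed events, 84% confirmed as genuine violations:
p, (lo, hi) = precision_estimate(27000, round(27000 * 0.84))
# The interval is roughly +/- 0.4 percentage points at this sample size.
```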

Most Common Violations

| Violation Type | % of Total | Precision |
| --- | --- | --- |
| Signal jumping (red light) | 28.4% | 91% |
| Wrong-way driving | 18.2% | 87% |
| No helmet (two-wheeler) | 16.7% | 82% |
| Illegal parking / stopping | 12.3% | 79% |
| Lane violation / wrong lane | 8.9% | 74% |
| Over-speeding | 6.1% | 88% |
| Triple-riding | 4.8% | 81% |
| Using phone while driving | 2.3% | 68% |
| No seatbelt | 1.4% | 72% |
| Other | 0.9% | |

Signal jumping is the most detected violation because it has the clearest signal: a vehicle’s trajectory crosses a stop line while the signal state is red. The signal state comes from either a hardware integration or visual signal detection.
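
The core check reduces to a line-crossing test gated by signal state. This is a simplified sketch: real stop lines are polylines in ground coordinates, and the names here are illustrative rather than our actual API.

```python
# Red-light flagging sketch: a track crosses the stop line (modeled as
# y = stop_y) while the signal state is red.
def crossed_stop_line(prev_y: float, curr_y: float, stop_y: float) -> bool:
    return prev_y < stop_y <= curr_y

def flag_signal_jump(track_points, stop_y, signal_state_at) -> bool:
    """track_points: [(timestamp, y), ...]; signal_state_at(t) -> 'red'|'green'."""
    for (t0, y0), (t1, y1) in zip(track_points, track_points[1:]):
        if crossed_stop_line(y0, y1, stop_y) and signal_state_at(t1) == "red":
            return True
    return False

# A vehicle crossing y=100 at t=2.5 while the signal is red:
points = [(0.0, 60.0), (1.0, 80.0), (2.5, 120.0)]
jumped = flag_signal_jump(points, 100.0, lambda t: "red" if t > 2 else "green")
```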

Wrong-way driving is the second most common. This surprised us. 18.2% of all violations are vehicles traveling against traffic flow. In absolute terms, that is approximately 160,000 wrong-way events across 1,000 hours. Many of these are short-duration (a motorcycle traveling 20 meters against traffic to reach a turn) but each one is a collision risk.

Phone usage detection has the lowest precision (68%). Detecting a small phone in a driver’s hand through a CCTV image at 30+ meters is at the limit of what current resolution and models can achieve. This is a problem where higher-resolution cameras or DrikNetra-style multi-frame enhancement could make a significant difference.

Violation Hotspots

Not all locations are equal. We found that 15% of camera locations generated 62% of all violations. These are hotspot locations — specific intersections, stretches, and zones where violations cluster.
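
Concentration like this is straightforward to quantify from per-camera violation counts. A sketch with synthetic heavy-tailed counts (the real per-camera distribution is in our data, not shown here):

```python
def top_share(counts, top_fraction: float) -> float:
    """Fraction of all violations produced by the busiest `top_fraction`
    of camera locations."""
    ordered = sorted(counts, reverse=True)
    k = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

# Synthetic counts for 120 cameras: a small set of hotspots dominates.
counts = [2000] * 18 + [150] * 102
share = top_share(counts, 0.15)  # share contributed by the top 15%
```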

Common hotspot characteristics:

  • Missing or non-functional signals. Intersections where the traffic signal is present but dark or flashing amber generate 3.2x more violations than functional signal intersections.
  • Poor visibility of enforcement. Locations without visible police presence or enforcement cameras show 2.8x higher violation rates.
  • Road geometry encouraging shortcuts. Medians with gaps, service road junctions, and U-turn points are wrong-way driving hotspots.
  • Near schools and hospitals. Counter-intuitively, areas near schools have higher violation rates during peak hours as parents rush and take shortcuts.

This spatial analysis is something that detection alone cannot provide. Detecting a red-light violation is Level 1. Understanding that this intersection has a 3x violation rate because its signal is non-functional is Level 4 — reasoning about why violations happen, not just that they happen.

Reasoning, not detection.

Edge Cases and the Long Tail

The most interesting findings come from the long tail — events that are rare but reveal fundamental limitations of detection-only approaches.

The Jugaad Vehicle Problem

“Jugaad” is a Hindi word meaning improvised innovation. On Indian roads, it refers to vehicles that are not manufactured — they are assembled from parts of other vehicles. A tractor engine on a wooden frame with truck tires. A motorcycle engine powering a three-wheeled cargo platform.

Jugaad vehicles appear in roughly 0.3% of frames, concentrated on specific road types: NH highways near industrial areas and rural-urban boundaries. No existing vehicle taxonomy includes them. No pre-trained model detects them. They are a genuine open-set detection problem.

Our system currently classifies most jugaad vehicles as “unknown_vehicle” — which is better than misclassifying them as a standard vehicle type. But it is not a solution. These vehicles often violate registration and safety norms. Detecting them specifically has enforcement value.
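
The fallback itself is a small piece of logic: when no known class is confident, emit the open-set label instead of forcing a choice. The class list and confidence floor below are illustrative, not our production values.

```python
KNOWN_CLASSES = {"car", "truck", "bus", "auto_rickshaw", "e_rickshaw",
                 "motorcycle", "bicycle", "tractor"}

def classify_open_set(scores: dict, floor: float = 0.55) -> str:
    """Return the best known class, or 'unknown_vehicle' when the model
    is not confident enough in any known class."""
    best = max(scores, key=scores.get)
    if best not in KNOWN_CLASSES or scores[best] < floor:
        return "unknown_vehicle"
    return best

# A jugaad vehicle spreads probability across classes: no confident winner.
label = classify_open_set({"tractor": 0.34, "truck": 0.31, "motorcycle": 0.20})
```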

Animal Crossings

We logged 42,000 animal-on-road events across 1,000 hours. The breakdown:

| Animal | Events | Avg Duration | Typical Behavior |
| --- | --- | --- | --- |
| Cow | 28,400 | 45 seconds | Stationary or slow-walking. Often sits in lane. |
| Dog | 8,900 | 12 seconds | Crosses rapidly. Unpredictable direction changes. |
| Buffalo | 2,800 | 60 seconds | Herd crossings. Blocks entire road. |
| Horse | 1,200 | 30 seconds | Usually with rider or cart. |
| Camel | 480 | 40 seconds | Regional (Rajasthan cameras only). |
| Other | 220 | varies | Goats, donkeys, elephants (rare). |

Cows account for 67% of animal-on-road events. The average cow-on-road event lasts 45 seconds — during which vehicles must navigate around a stationary or slow-moving obstacle in the travel lane. This is a daily reality on Indian roads that no Western traffic dataset captures.

A detection system sees “cow.” A reasoning system sees “cow stationary in lane 2, vehicles diverting to lane 1, creating merge conflict at downstream intersection.” The difference matters for incident prediction and traffic management.
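
A toy version of that gap: the same "cow" detection maps to different situations once dwell time and motion enter the picture. Thresholds and labels here are illustrative.

```python
def classify_animal_situation(cls: str, seconds_in_lane: float,
                              speed_mps: float) -> str:
    """Turn a raw animal detection plus its track history into a situation."""
    if speed_mps < 0.2 and seconds_in_lane > 30:
        return "stationary_obstruction"   # e.g. a cow settled in the lane
    if speed_mps < 1.0:
        return "slow_moving_obstruction"
    return "animal_crossing"

# Same detector output class of problem, very different situations:
settled = classify_animal_situation("cow", seconds_in_lane=45, speed_mps=0.0)
darting = classify_animal_situation("dog", seconds_in_lane=5, speed_mps=3.0)
```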

Band Baarat and Ceremonial Processions

We captured 127 band baarat (wedding procession) events and 43 funeral procession events. These are uniquely Indian traffic phenomena.

A band baarat occupies an entire lane (sometimes the entire road), moves at walking speed (3-5 km/h), includes dancers, horses, decorated vehicles, and loud amplified music. It can last 30 minutes at a single camera location and creates a rolling road block.

Detection identifies the individual elements: people, horses, decorated vehicles. But without reasoning about the collective behavior — the slow speed, the road occupation, the blocked traffic behind — the system cannot generate a meaningful alert. It is not 200 pedestrians jaywalking. It is a wedding procession that will block this road for 30 minutes and requires traffic diversion.

This is why reasoning, not detection, is the path forward. A detection model sees objects. A reasoning system sees situations, understands context, and generates actionable intelligence.

Wrong-Side Driving at Night

Our most concerning finding: wrong-way driving events are 4.7x more frequent between 11 PM and 5 AM compared to daytime hours. And the duration is longer — nighttime wrong-way events average 340 meters of wrong-side travel, compared to 45 meters during daytime.

Daytime wrong-way driving is typically short and intentional — a motorcycle cutting across to reach a nearby turn. Nighttime wrong-way driving is often sustained and potentially unintentional — a driver on a divided highway who entered from the wrong side or missed a U-turn.

These nighttime events are the highest-risk traffic situations in our dataset. A vehicle traveling at 60 km/h on the wrong side of a divided highway at 2 AM, when oncoming traffic is also traveling at 60+ km/h, creates a closing speed of 120+ km/h. The time between visual contact and collision can be under 3 seconds.
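
The closing-speed arithmetic is worth making explicit:

```python
def time_to_collision(speed_a_kmh: float, speed_b_kmh: float,
                      sight_distance_m: float) -> float:
    """Seconds from visual contact to impact for two head-on vehicles."""
    closing_mps = (speed_a_kmh + speed_b_kmh) * 1000 / 3600  # km/h -> m/s
    return sight_distance_m / closing_mps

# Two vehicles at 60 km/h each, 100 m of night-time visibility:
ttc = time_to_collision(60, 60, 100)  # ~3 seconds to react
```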

This is a scenario where prediction (Level 5 reasoning) becomes life-saving. If the system detects wrong-way entry within the first 2 seconds and alerts the vehicle (via connected infrastructure) or alerts traffic management, there is time to prevent a head-on collision.

What the Data Tells Us About Building Better Systems

Six months of analysis reinforced several convictions:

1. Detection is table stakes. Knowing what objects are in a frame is necessary but wildly insufficient. The value is in understanding what those objects are doing, why, and what they will do next.

2. Temporal context is essential. Single-frame analysis misses everything that matters: how long has that vehicle been stopped? Is this cow walking across the road or settling in? Is traffic volume normal for this time of day? Systems that analyze frames independently discard the most valuable signal.

3. The long tail is where the value is. Common events (cars driving normally) are boring. Rare events (wrong-way driving, animal crossings, ceremonial processions, vehicle breakdowns) are where AI adds value that human monitoring cannot match. Humans cannot watch 200 cameras simultaneously. AI can.

4. Context changes everything. A motorcycle in a lane is normal. A motorcycle in a lane traveling against traffic is a violation. A motorcycle in a lane traveling against traffic at 2 AM on a divided highway is a life-threatening emergency. Same object, same detection, three completely different responses. Only reasoning produces the correct response.

5. Indian traffic is not chaotic — it is complex. There are patterns. They are just different patterns than Western traffic models expect. Two-wheelers follow specific lane-splitting behaviors. Auto-rickshaws have predictable stopping patterns. Even cows have preferred resting spots. The patterns exist. They just require Indian data to learn.

Following the Data

We will continue processing and analyzing Indian traffic video at scale. Every hour of footage improves our understanding of what Indian roads actually look like — not what traffic surveys say they look like, not what planning documents assume, but what actually happens at 2 AM on a Tuesday at an intersection in East Delhi.

50 million cameras, zero intelligence. We are changing that, one hour of footage at a time.

Follow our research on the blog and on Twitter. We publish findings regularly. If you operate traffic cameras in India and are interested in analyzing your footage, reach out — we are always looking for more data to learn from.