Technology has transformed modern vehicles. Advanced driver assistance systems, automated braking, lane-keeping technology, and fully autonomous vehicles promise safer roads and fewer accidents.
But even the most advanced technology isn’t perfect.
Driverless vehicles, also known as autonomous vehicles, are now operating in several U.S. cities. Companies like Waymo have logged tens of millions of miles using vehicles that operate without a human driver behind the wheel.
While early safety data suggests autonomous systems may reduce certain types of crashes, accidents involving driverless vehicles still occur. When they do, determining who is responsible can be far more complicated than in a traditional car accident.
If you or a loved one is injured in a crash involving a driverless vehicle, understanding how liability works is essential.
Are Driverless Vehicles Safer Than Human Drivers?
Autonomous vehicle technology is designed to eliminate some of the most common causes of crashes: human error, distraction, and impaired driving.
Several recent studies suggest that in some environments, certain autonomous systems may experience lower crash rates than human drivers.
For example:
- A study analyzing 7.1 million fully autonomous miles found roughly an 80% reduction in injury-reported crashes compared to human drivers.
- A larger analysis covering more than 50 million autonomous miles also reported statistically significant reductions in injury-related crashes.
- Insurance data comparing millions of autonomous miles to traditional driving has shown lower rates of property damage and bodily injury claims.
These findings suggest that autonomous technology may improve safety in many scenarios.
However, improved statistics do not mean zero risk.
Driverless vehicles still face challenges in complex and unpredictable real-world conditions, such as unusual pedestrian behavior, construction zones, poor weather, or unexpected road hazards.
Real-World Incidents Involving Driverless Vehicles
Despite encouraging safety data, several high-profile incidents have raised questions about the limitations of autonomous systems.
The 2018 Uber Fatal Crash (Arizona)
In Tempe, Arizona, an autonomous test vehicle operated by Uber struck and killed a pedestrian crossing the street at night.
Investigators later determined that the vehicle’s software detected the pedestrian but misclassified the object multiple times. The system’s emergency braking function had also been disabled during autonomous testing.
The case sparked national debate about how autonomous vehicles should be tested and regulated.
The 2023 Cruise Pedestrian Incident (San Francisco)
In October 2023, a pedestrian struck by a human-driven vehicle was thrown into the path of a driverless vehicle operated by Cruise.
The autonomous vehicle then ran over the victim and dragged her a short distance before stopping.
Regulators later stated that key details about the incident were not initially included in company reports. The event led to regulatory scrutiny and temporary operational suspensions.
Civil Lawsuits Involving Autonomous Systems
Autonomous vehicle technology can also introduce new types of risks.
In one recent lawsuit, a cyclist alleged that a vehicle’s automated “safe exit” system failed to properly detect a bike lane before allowing a passenger to open the door, causing serious injuries.
These cases highlight an important legal reality:
Even if a vehicle is autonomous, multiple parties may still share responsibility.
Who Is at Fault in a Driverless Car Accident?
Determining fault in an autonomous vehicle crash is often more complicated than in a typical car accident.
Potentially responsible parties may include:
1. The Autonomous Vehicle Company
If a crash is linked to a software error, sensor failure, or system design flaw, the company operating the autonomous system may be liable under product liability law.
2. The Vehicle Manufacturer
If a mechanical component, such as a braking system, steering mechanism, or sensor, malfunctions, the automaker may share responsibility.
3. A Human Safety Driver or Passenger
Some autonomous systems still require a human driver or operator who may be responsible for intervening if something goes wrong.
4. Another Human Driver
In many cases, crashes involving autonomous vehicles are actually triggered by human drivers behaving unpredictably.
5. Third Parties
Poor road design, construction zones, malfunctioning traffic signals, or other external hazards can also contribute to accidents.
Because of these factors, driverless vehicle crashes often require technical investigation, digital data analysis, and expert testimony.
How Do You File a Claim After a Driverless Vehicle Accident?
Step 1: Identify the Vehicle Operator
Was the vehicle:

- Fully autonomous?
- Using driver-assist technology?
- Operating as part of a ride-hailing service?
Each scenario can involve different insurance coverage and legal responsibilities.
Step 2: Preserve Evidence
Autonomous vehicles record large amounts of operational data, including:
- Camera footage
- Sensor data
- Vehicle decision logs
Preserving this information quickly can be critical to determining what happened.
Step 3: Identify the Responsible Insurance Policy
Autonomous fleets often carry commercial liability coverage, which may apply to injuries caused by the vehicle.
Step 4: Investigate Product Liability Issues
In some cases, the crash may involve defective design, inadequate safety systems, or software failures.
These claims often require engineering and accident reconstruction experts.
How Can Carabin Law Help?
Technology like autonomous vehicles may reduce some crashes caused by human error, but even advanced systems are not flawless. When accidents involving driverless vehicles occur, determining fault can be complex and may involve multiple companies, insurers, and technical experts.
If you or a loved one is injured in a crash involving an autonomous vehicle, it’s important to understand your rights and have the accident thoroughly investigated.
Call Carabin Law today. Every case matters. Every client counts.