Senators grill Waymo and Tesla on robotaxi safety — what’s actually at stake

A US Senate hearing this week put two very different visions of “self-driving” on the same stage: Waymo’s tightly geofenced robotaxi service and Tesla’s mass-market driver-assistance stack that’s sold (and updated) to hundreds of thousands of owners. Senators pressed both companies on safety, legal liability, remote operation, and the geopolitical anxiety that the US could “lose” autonomous vehicles to China.

If you only skim the headlines, the hearing can sound like a familiar Washington ritual: lawmakers ask stern questions, executives promise to be safe, and nothing moves. But buried in the testimony and the senators’ lines of attack are three real policy fights that will determine what shows up on public roads next:

  1. What counts as “safe enough” for driverless deployment, and who has the authority to say so.
  2. Who pays when something goes wrong—the passenger, the manufacturer, the fleet operator, or some combination.
  3. Whether autonomy becomes an industrial policy issue, where supply chains, data flows, and national security concerns shape technology choices as much as engineering does.

This explainer unpacks those fights, why Waymo and Tesla keep talking past each other, and what a workable national framework would need to include.

Two companies, two definitions of “self-driving”

One reason the AV debate never settles is that the words are overloaded.

Waymo: driverless… but inside a carefully defined box

Waymo operates a commercial robotaxi service that is designed around geofencing: the system is trained, validated, and monitored within specific operational design domains (ODDs)—the cities, neighborhoods, road types, and conditions where it is supposed to behave predictably.

That approach has obvious constraints (the car doesn’t go everywhere), but it gives regulators and the public something concrete to evaluate: a fleet, a service area, and a set of operating rules that can be measured.

Waymo also presents its case in the language of safety engineering: frameworks, dashboards, and comparisons to baseline human driving in the same cities. On its safety page, the company highlights results it says show substantial reductions in serious crashes versus an “average human driver” over the same distance in its operating environments, plus third‑party insurance analysis that found fewer bodily injury and property damage claims over tens of millions of miles.

Tesla: “self-driving” as a product feature that ships everywhere

Tesla’s autonomy pitch is the opposite: the company’s driver-assistance capabilities are packaged as consumer features—Autopilot and Full Self-Driving (Supervised)—that run on a large and diverse fleet, often without strict geographic limits.

In the hearing, senators used that difference to draw a line between companies that restrict where autonomy is allowed versus a company that sells a generalized system and expects drivers to supervise it.

A practical way to phrase the dispute:

  • Waymo wants the freedom to deploy vehicles without traditional controls (like steering wheels and pedals) in limited places where the company can demonstrate performance.
  • Tesla wants a regulatory environment that accommodates software-defined vehicles and rapid over-the-air iteration, with the argument that old rules assume a slower, hardware-centric world.

Both perspectives are partly right—and that’s why the rules are hard.

Why “94 percent of crashes are human error” is both true and misleading

Hearing announcements and AV marketing often cite a statistic like: “94 percent of crashes are attributable to human error.” You can see that framing in the Senate committee’s own description of the hearing, alongside the claim that fully autonomous vehicles could remove that error from driving.

The statistic points to something real: human drivers are distracted, impaired, aggressive, sleepy, and inconsistent.

But as a policy argument it can be misleading. The figure traces back to a NHTSA survey that identified driver error as the final "critical reason" in the crash chain, not the sole cause. More importantly, it implies a simple subtraction: remove humans, remove crashes. In practice you're not subtracting a cause; you're substituting a driver.

That substitution creates new questions:

  • Can the system perceive rare edge cases reliably?
  • How does it behave in the “messy middle” of human driving: informal negotiation at crosswalks, double-parked vehicles, temporary signage, and confusing construction zones?
  • What is the safe fallback when the system is uncertain?

And, critically: even if autonomous systems reduce the average crash rate, the remaining failures may look very different from human failures. That matters for trust and for regulation.
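The subtraction-versus-substitution point can be made concrete with some back-of-the-envelope arithmetic. All numbers below are assumed for illustration; they are not from the hearing or from either company:

```python
# Illustrative numbers only (assumed): crash rates per million miles
# for a hypothetical city.
human_rate = 4.0              # crashes per million miles, human drivers
human_error_share = 0.94      # share of those crashes involving human error

# Naive subtraction: "remove human error, remove those crashes."
naive_av_rate = human_rate * (1 - human_error_share)

# Substitution view: the automated driver has its own failure modes.
# Suppose it avoids 90% of human-error crashes but adds machine-specific
# crashes (perception misses, edge cases) at some new rate.
avoided = human_rate * human_error_share * 0.90
machine_specific = 0.5        # assumed new crashes per million miles
realistic_av_rate = human_rate - avoided + machine_specific

print(f"naive: {naive_av_rate:.2f}, substitution: {realistic_av_rate:.2f}")
```

Even with generous assumptions, the substitution view lands well above the naive estimate, because the new driver's own failure rate never goes to zero.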

What senators were really asking about safety

The hearing’s safety questions clustered around three themes.

1) Specific incidents are becoming proxies for system-level trust

Waymo was pressed about high-visibility incidents such as failures to stop correctly around school buses during pick-up and drop-off situations, and about a recent crash in which a Waymo robotaxi struck a child near an elementary school in Santa Monica.

In that Santa Monica incident, Waymo says its vehicle detected the child emerging from behind a stopped SUV, braked hard, and shed significant speed before contact. In a transparency post, Waymo framed the event as a demonstration of benefit: the system reduced impact speed compared to what the company's model suggests a fully attentive human driver would have achieved in the same moment.

That kind of argument—“we still hit someone, but less badly than a human would have”—is going to become more common. It is also going to be emotionally unsatisfying to the public. For policy makers, it forces a choice between two ways of thinking:

  • Absolute framing: any collision is a failure that shouldn’t happen.
  • Comparative framing: the relevant question is whether autonomy reduces the overall harm compared to the status quo.

A mature regulatory system has to live with the comparative framing, while still treating every serious incident as a chance to find systemic weaknesses.
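Why does a "less bad" collision matter physically? Impact severity scales roughly with the square of speed, so shedding speed before contact matters a lot. The numbers below are assumed for illustration, not from the Santa Monica incident:

```python
# Pedestrian injury risk rises steeply with impact speed, and kinetic
# energy grows with the square of speed. Speeds here are assumed.
def kinetic_energy_ratio(v_impact_mph: float, v_initial_mph: float) -> float:
    """Fraction of the original kinetic energy remaining at impact."""
    return (v_impact_mph / v_initial_mph) ** 2

# Braking from an assumed 25 mph down to 10 mph before contact leaves
# only 16% of the original kinetic energy in the collision.
remaining = kinetic_energy_ratio(10, 25)
print(f"{remaining:.0%} of initial kinetic energy at impact")  # 16%
```

This is the physics behind the comparative framing: a crash at reduced speed is a categorically different event than one at full speed, even though both count as "a collision."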

2) Capacity constraints at the regulator matter as much as the rules

Several senators emphasized an uncomfortable fact: even if Congress writes a law, the agency that enforces it needs staff, expertise, and political support.

During the hearing, lawmakers referenced reporting that the National Highway Traffic Safety Administration (NHTSA) lost significant staffing, including within the office focused on vehicle automation safety. Regardless of what you think of the politics around that, the operational reality is straightforward: thin oversight invites both delay and disaster.

Under-resourced regulators tend to oscillate between two bad modes:

  • They move too slowly to provide clear pathways for safe innovation.
  • They fail to catch preventable hazards early, which leads to headline-grabbing failures and backlash.

3) “Safety” includes design choices that don’t sound like safety

Senators also targeted choices like sensors (for example, the decision to rely more heavily on cameras versus radar) and the way companies communicate supervision requirements.

These are not just engineering debates—they shape the failure modes of the system and the expectations of the humans around it. If a driver-assistance system is named, marketed, or demoed in a way that implies it can drive itself, you are effectively increasing risk by increasing complacency.

A national framework will probably need to treat certain kinds of “deceptive confidence” as a safety issue in itself.

The liability question: who is responsible in a driverless crash?

If safety is about preventing harm, liability is about allocating the cost of harm.

In conventional driving, the default is simple: if a human driver makes a mistake, liability tends to follow that person (and their insurance). Autonomous systems break that assumption.

The hearing raised two closely related issues: arbitration and who accepts blame when the system is at fault.

Arbitration clauses: safety without accountability isn’t credible

Binding arbitration can keep disputes out of open court, limiting discovery and precedent. Senators from both parties signaled discomfort with the idea that a robotaxi company could shield itself from public accountability through fine print.

From a technology-policy perspective, arbitration is not just a consumer-rights issue—it’s a feedback loop issue. Public litigation (for all its flaws) is one of the ways safety problems become visible.

A future AV law could:

  • require clear, plain-language disclosures about arbitration;
  • prohibit arbitration for certain categories of injury claims;
  • or condition AV exemptions (like operating without traditional controls) on stronger accountability terms.

“We’ll accept liability” is a promise that needs structure

Witnesses suggested that their companies would accept liability when their technology is at fault.

That sounds good, but it’s incomplete unless the law clarifies:

  • what counts as the “technology being at fault”;
  • how fault is determined when software updates change behavior over time;
  • how data is preserved and shared after incidents;
  • and how passengers, pedestrians, and other drivers can access that evidence.

In other words: autonomy needs something like an aviation-style incident process, but adapted to the scale and privacy implications of road transport.

Remote operators: the hidden humans in “driverless”

One of the most revealing moments in the discussion was about remote assistance.

Robotaxis sometimes encounter situations that are safe but ambiguous: a blocked lane, a police officer giving hand signals, a confusing construction pattern, or a dense crowd. In those cases, fleets may rely on remote operators to provide guidance.

Senators raised concerns about:

  • where those operators are located (domestic versus abroad);
  • latency and reliability of communications;
  • cybersecurity;
  • and the labor implications of shifting human oversight jobs offshore.

Remote assistance creates a policy tension:

  • It can improve safety by helping a system resolve uncertainty.
  • It can also mask system limitations, allowing companies to claim “driverless” while still depending on human judgment in edge cases.

A sensible framework doesn’t have to ban remote assistance. But it should require transparency about when and how it’s used, and it should set standards for secure communications and auditability.
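What would "auditability" for remote assistance look like in practice? One option is requiring every intervention to be logged as a structured, append-only record. The schema below is a hypothetical sketch, not any company's actual format:

```python
# A sketch of an auditable remote-assistance event record. All field
# names and values are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class RemoteAssistEvent:
    vehicle_id: str
    operator_id: str   # pseudonymous; operator location class logged separately
    started_at: str    # ISO 8601, UTC
    reason: str        # why the vehicle requested help
    action: str        # what guidance the operator provided
    authority: str     # "advisory" (vehicle still decides) vs "direct control"

event = RemoteAssistEvent(
    vehicle_id="wv-1042",
    operator_id="op-7",
    started_at=datetime.now(timezone.utc).isoformat(),
    reason="police officer directing traffic with hand signals",
    action="confirmed detour route around closed lane",
    authority="advisory",
)

# Serialize for an append-only audit log a regulator could inspect.
print(json.dumps(asdict(event), indent=2))
```

The `authority` field captures the distinction senators were circling: guidance that the vehicle may reject is a very different safety posture than direct remote control.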

The “China” angle: autonomy as industrial policy and national security

The hearing repeatedly returned to China—both as a competitive threat and as a supply-chain concern.

Waymo faced questions about using a Chinese-made vehicle platform for a next-generation robotaxi, with the company emphasizing that the vehicles are stripped of software and that Waymo installs its own autonomy stack in the US, with no data sharing outside the country.

Even if that’s technically correct, lawmakers are operating with a broader worry:

  • Vehicle platforms are no longer just metal and mechanical parts; they are computers on wheels.
  • The boundary between “hardware” and “software” is porous.
  • Supply chains become potential leverage points.

The US is already moving toward restricting certain kinds of vehicle software and connectivity tied to China. In that context, “use this chassis, but replace the software” is not a universal political answer.

This is where AV policy collides with trade and security policy. We may end up with rules that effectively say: if you want to operate a driverless fleet at scale, your platform and your data path must meet strict provenance requirements.

That would be costly. It could also be stabilizing, because it turns a vague anxiety (“China is winning”) into enforceable, auditable requirements.

What a national AV framework would need to include

The hearing showed why the patchwork of state rules isn’t enough—but also why a single federal law can’t just be “let’s allow self-driving cars.”

A workable framework would likely need at least these pieces.

Clear definitions: driver assistance vs automated driving vs driverless service

Regulation should separate:

  • Driver assistance (a human is responsible at all times),
  • Automated driving (the system drives under defined conditions, with rules for handoff), and
  • Driverless operation (no human driver present, with service-level obligations).

Without those definitions, marketing and public understanding will continue to blur the lines.

ODD discipline: where the system is designed to work

For vehicles that operate without a driver, ODD constraints are not bureaucracy—they are a safety control. A federal regime should require that companies:

  • declare their ODD;
  • demonstrate performance within it;
  • and have a defined process for expanding it.
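The "declare your ODD" step could be made machine-readable, so that the declaration is testable rather than a prose filing. The sketch below is hypothetical; no regulation currently specifies these fields:

```python
# A hypothetical machine-readable ODD declaration. Field names and
# values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ODDDeclaration:
    service_area: list[str]      # named geofenced zones
    road_types: list[str]        # e.g. surface streets, freeways
    max_speed_mph: int
    weather_excluded: list[str]  # conditions that suspend service
    fallback: str                # behavior when leaving the ODD

phoenix_odd = ODDDeclaration(
    service_area=["downtown", "airport_corridor"],
    road_types=["surface_street"],
    max_speed_mph=45,
    weather_excluded=["dense_fog", "ice"],
    fallback="pull over and request remote assistance",
)

def within_odd(decl: ODDDeclaration, zone: str, weather: str) -> bool:
    """Check whether a trip request falls inside the declared ODD."""
    return zone in decl.service_area and weather not in decl.weather_excluded

print(within_odd(phoenix_odd, "downtown", "clear"))      # True
print(within_odd(phoenix_odd, "downtown", "dense_fog"))  # False
```

A declaration in this form gives a regulator something to verify: performance claims can be evaluated against the declared boundaries, and an expansion request is a diff against the current declaration rather than a fresh negotiation.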

Tesla-like approaches that rely on generalized supervision may fit into a different regulatory bucket than Waymo-like services.

Data, transparency, and incident reporting

To earn trust, autonomy can’t be a black box. The framework should define:

  • what data must be recorded (and for how long),
  • what data must be shared with regulators,
  • how privacy is protected,
  • and how the public learns about systemic issues.

This is especially important when systems update frequently over the air.
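One concrete implication of frequent over-the-air updates: incident data is only useful if it is tied to the exact software build that was running. A minimal sketch, with hypothetical field names:

```python
# A hypothetical incident record linking preserved data to the software
# build in effect at the time of the event.
from dataclasses import dataclass

@dataclass(frozen=True)
class IncidentRecord:
    incident_id: str
    vehicle_id: str
    software_build: str   # exact build identifier running at the time
    sensor_log_uri: str   # where the preserved raw data lives
    retention_days: int   # how long the data must be kept

rec = IncidentRecord(
    incident_id="inc-2026-0117",
    vehicle_id="wv-1042",
    software_build="stack-9.4.1+a1b2c3",
    sensor_log_uri="s3://example-bucket/logs/inc-2026-0117",
    retention_days=730,
)

# Grouping incidents by build lets a regulator spot regressions
# introduced by a specific update.
print(rec.software_build)
```

Without the build linkage, a fleet's safety record becomes a moving target: the vehicle that crashed last month may be running materially different software today.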

Enforcement capability: funding and expertise for the regulator

Congress can mandate rules, but the regulator must be able to interpret and enforce them.

If lawmakers want speed, they have to pay for competence: hiring, training, testing facilities, and a modern understanding of software-heavy vehicles.

Liability defaults that match the technology

The law should create predictable liability rules so victims aren’t forced into a maze.

One possible default for true driverless services: the fleet operator is presumptively responsible, with room to shift liability upstream (to a component supplier) when evidence supports it.

For supervised driver assistance, the default could remain closer to today’s model, but with penalties for misleading claims that undermine supervision.

Cybersecurity and remote-operations standards

Remote assistance and connected vehicles increase the attack surface. Minimum standards for encryption, authentication, access logging, and response processes should be part of any serious AV law.

Why Congress keeps stalling (and why that might change)

Autonomous vehicle legislation has been “almost happening” for years. The reasons are not mysterious:

  • The benefits are probabilistic and long-term; the harms are vivid and immediate.
  • The technology landscape is fragmented: robotaxi fleets, consumer driver-assistance, trucking automation, delivery robots.
  • The federal-state division of power is messy: federal rules govern vehicle safety standards, while states manage licensing, traffic enforcement, and much of on-road operation.

But the hearing suggests two pressures that could finally force movement:

  1. Commercial reality: companies want to scale across state lines, and the patchwork is expensive.
  2. Geopolitical framing: when lawmakers see a technology as a competition with China, they become more willing to act—though not always wiser.

A credible bill would have to do something difficult: encourage innovation without granting blanket permission, and enforce safety without pretending that zero risk is achievable.

Bottom line

The Senate hearing wasn’t just political theater. It exposed the core problem with “self-driving” policy in 2026: the US is trying to regulate a spectrum of technologies using language that collapses them into one idea.

Waymo and Tesla can both talk about “autonomy” while building fundamentally different products with different safety strategies and different social tradeoffs. A national framework needs to recognize that difference, set clear accountability rules, and fund real oversight—otherwise we’ll keep bouncing between local patchwork, high-profile incidents, and stalled legislation.

