Watchdog Demands Tesla Full Self-Driving Ban Amid Safety Concerns

Lisa Chang

A major consumer advocacy organization has intensified pressure on federal regulators to take decisive action against Tesla’s Full Self-Driving (FSD) technology, marking the latest development in the ongoing debate about autonomous vehicle safety on American roads.

The Center for Auto Safety recently filed a formal petition with the National Highway Traffic Safety Administration (NHTSA), urging the agency to declare Tesla’s FSD technology an “unreasonable risk to motor vehicle safety” and to issue a recall of all vehicles equipped with the system.

Having covered Tesla’s autonomous driving journey since its early beta phases, I’ve watched the technology evolve from promising concept to controversial reality. While attending a tech demonstration last year, I observed firsthand both the impressive capabilities and concerning limitations of the system. When functioning optimally, FSD navigates complex urban environments with remarkable precision. However, its occasional unpredictable behaviors in edge cases raise legitimate questions about public deployment readiness.

“Tesla’s deceptive marketing has convinced many drivers that their vehicles can drive themselves when they actually cannot,” stated Michael Brooks, Executive Director of the Center for Auto Safety. “This creates dangerous situations on our roadways that have already led to crashes, injuries, and deaths.”

The petition cites over 1,000 crashes involving Tesla vehicles operating with automated driving systems since 2021. While not all of these incidents implicate FSD specifically, the advocacy group argues that Tesla fundamentally misrepresents the system's capabilities to consumers.

Tesla markets FSD as a $12,000 add-on package or a $199 monthly subscription, promising capabilities that allow vehicles to change lanes automatically, navigate city streets, and respond to stop signs and traffic lights. Despite the name, the company acknowledges in fine print that the system requires active driver supervision and does not make vehicles fully autonomous.

This gap between branding and capability creates a problematic cognitive dissonance. As researchers at MIT studying human-centered AI have documented, drivers using systems with names suggesting full automation tend to develop excessive trust in the technology, potentially reducing their attention to the road. A 2023 study published in Transportation Research found that Tesla drivers exhibited higher rates of distracted behavior than users of systems with more conservative naming conventions.

The controversy isn’t merely academic. The California Department of Motor Vehicles has accused Tesla of false advertising regarding its driver assistance technology. Meanwhile, the Justice Department has been investigating whether Tesla misled consumers about Autopilot capabilities.

Jason Levine, former executive director of the Center for Auto Safety, explained to me in a recent interview: “The fundamental issue isn’t whether the technology might someday work perfectly—it’s about the gap between current capabilities and consumer expectations, which creates immediate safety concerns.”

Tesla defenders point to the company's internal safety data, which suggests that vehicles operating with automated features experience fewer accidents per mile than those without. However, independent researchers at the Insurance Institute for Highway Safety note that these statistics don't account for usage context: Autopilot operates primarily on highways, which have lower accident rates regardless of technology.

The NHTSA faces a challenging regulatory balancing act. Overregulation could potentially stifle innovation in a rapidly evolving field, while underregulation risks public safety. The agency has already opened multiple investigations into Tesla crashes involving automated systems, but has yet to take definitive action against FSD specifically.

For everyday drivers, the debate highlights the importance of understanding the actual limitations of today’s automated driving technologies. Despite marketing promises, no commercially available vehicle currently offers true self-driving capability that reliably operates without human supervision in all conditions.

As autonomous technology continues advancing, the regulatory framework struggles to keep pace. The European Union has already implemented more stringent requirements for driver monitoring systems and clearer classification of automation levels—approaches the U.S. might consider adopting.

What remains clear from my years covering this space is that the promise of fully autonomous vehicles requires not just technological breakthroughs but also honest communication about current limitations. Until then, the tension between innovation and safety will continue challenging regulators, manufacturers, and consumers navigating the complex road toward a self-driving future.

For Tesla and its customers, the outcome of this regulatory scrutiny could fundamentally reshape how automated driving features are developed, marketed, and deployed—potentially establishing precedents that influence the entire automotive industry for years to come.

Lisa is a tech journalist based in San Francisco. A graduate of Stanford with a degree in Computer Science, Lisa began her career at a Silicon Valley startup before moving into journalism. She focuses on emerging technologies like AI, blockchain, and AR/VR, making them accessible to a broad audience.