Tesla Keeps Crashing Into Things. Waymo Doesn't. Turns Out LiDAR Actually Matters

There's been a decade-long fight in the autonomous vehicle world about whether self-driving cars need LiDAR sensors or if cameras alone can do the job.

Here's how you can tell who won that argument: Waymo's LiDAR-equipped robotaxis are driving passengers around San Francisco without human drivers. Tesla's camera-only system has killed multiple people and still requires constant human supervision.

But the debate rages on, mostly because one very loud billionaire keeps insisting that cameras are enough and that LiDAR is a "crutch" for companies that can't do proper AI.

Let's cut through the bullshit and look at what actually happens when you try to build self-driving cars with and without LiDAR. Spoiler: Physics doesn't give a damn about your tech philosophy.

The Core Problem: Cameras vs Human Eyes Isn't a Fair Fight

The "cameras are enough" argument usually goes like this: Humans drive with just two eyeballs, so why do self-driving cars need fancy laser sensors? Just stick some RGB cameras on the car and train a neural network. Done.

Except that's complete nonsense for a couple of reasons.

First, human vision is backed by millions of years of evolution. Your visual cortex does an insane amount of preprocessing before your conscious brain even sees an image. You've got specialized hardware for detecting edges, motion, depth, and objects - plus a lifetime of experience that tells you "that's a concrete barrier" or "that truck is about to cut me off."

A camera sensor is just a dumb grid of pixels. It doesn't know shit about what it's looking at until you run it through a neural network.

Second, cameras suck in bad conditions. Rain? Fog? Direct sunlight? Darkness? Camera-based systems struggle with all of it. Meanwhile, LiDAR brings its own light source - it fires laser pulses and times the reflections - so the darkness and sun glare that completely blind an RGB camera barely register as problems.
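
The time-of-flight math behind that is almost embarrassingly simple. Here's a back-of-the-envelope sketch in Python - the round-trip time is made up, the physics isn't:

```python
# Time-of-flight ranging: distance = (speed of light x round trip) / 2.
C = 299_792_458  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to whatever reflected the pulse back."""
    return C * round_trip_s / 2

# A pulse that comes back after 100 nanoseconds hit something ~15 m out.
print(f"{tof_distance(100e-9):.2f} m")  # -> 14.99 m
```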

So when you're trying to build a system that needs to be more reliable than human drivers - not just as good as them, but better - reducing the amount of reliable data you're working with is a weird choice.

How Object Detection Actually Works

Whether you use cameras or LiDAR, the fundamental challenge is the same: Take sensor data, identify objects in the environment, figure out what they are, and decide what to do about them.

Modern autonomous systems use convolutional neural networks (CNNs) trained on massive datasets of traffic videos. Companies like Waymo and Tesla employ people to watch thousands of hours of driving footage and manually tag objects: car, pedestrian, bicycle, traffic light, stop sign, concrete barrier, etc.

These tagged videos become training data for the CNN, which learns to recognize similar objects in new footage.
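
If you've never seen what that training data looks like, here's a hypothetical sketch of one labeled frame. Real pipelines (Waymo's Open Dataset, for instance) use far richer schemas, but the shape of the idea is the same:

```python
# Hypothetical label record for one annotated frame. Field names are
# invented for illustration; production datasets carry much more metadata.
frame_labels = {
    "frame_id": "drive_042_frame_001337",
    "objects": [
        {"class": "pedestrian",     "bbox": [412, 208, 448, 310]},  # x1, y1, x2, y2 px
        {"class": "traffic_light",  "bbox": [610,  95, 628, 140]},
        {"class": "car",            "bbox": [130, 240, 360, 400]},
    ],
}
```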

The problem? CNNs trained on camera images give you probabilities, not certainties.

Your neural network might say "I'm 72% confident there's a pedestrian in this spot" or "I think there's maybe a concrete barrier ahead but I'm only 29% sure." Now what? Do you brake hard for a 29% probability? Do you ignore it and hope the camera is wrong?
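
Here's that dilemma as a toy Python sketch. The detections and the threshold are invented, but every camera-only stack has some version of this decision buried in it somewhere:

```python
# Toy detector output: whatever you do next is a bet, not a measurement.
detections = [
    {"class": "pedestrian",       "confidence": 0.72},
    {"class": "concrete_barrier", "confidence": 0.29},
]

BRAKE_THRESHOLD = 0.5  # arbitrary: set it low and you phantom-brake,
                       # set it high and you plow into real obstacles

for det in detections:
    action = "BRAKE" if det["confidence"] >= BRAKE_THRESHOLD else "ignore"
    print(f"{action}: {det['class']} at {det['confidence']:.0%} confidence")
```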

LiDAR doesn't give you probabilities. It gives you measurements. "There is a solid object 15 meters ahead. It's 2 meters tall. It's not moving." That's actionable data you can use to make decisions that won't kill people.
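
A minimal sketch of what that looks like in code, assuming NumPy and a toy point cloud (three returns standing in for thousands):

```python
import numpy as np

# Toy LiDAR point cloud: rows of (x, y, z) returns in meters, with x
# pointing forward from the sensor. Values are invented for illustration.
points = np.array([
    [15.1, -0.4, 0.1],
    [15.0,  0.5, 0.8],
    [14.9,  0.1, 2.0],
])

distance = points[:, 0].min()    # nearest return: ~14.9 m ahead
width    = np.ptp(points[:, 1])  # side-to-side extent: ~0.9 m
height   = np.ptp(points[:, 2])  # bottom to top: ~1.9 m
print(f"{distance:.1f} m ahead, {width:.1f} m wide, {height:.1f} m tall")
```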

Tesla's Camera-Only Approach: A History of Crashes

Tesla's Autopilot system started with Hardware 1 (HW1), which included one front-facing camera and a radar sensor behind the lower grille, plus 12 ultrasonic sensors around the vehicle.

Initially, Tesla prioritized the camera for object detection, using the radar as a backup. This worked great until 2016, when Autopilot failed to detect a white semi-trailer against a bright sky and drove straight into it, killing the driver.

Tesla's response? Update the firmware to give the radar equal weight with the cameras. That probably would have prevented that specific crash.

But the pattern continued:

  • 2018: Autopilot drove a Tesla into a concrete highway barrier, killing the driver
  • 2019: A Tesla on Autopilot ran a red light and rear-ended a fire truck
  • 2024: Full Self-Driving mode accelerated into a motorcyclist instead of stopping, killing them

And then in 2021, Tesla did something wild: They removed the radar sensor entirely.

The new "Tesla Vision" system relied exclusively on cameras. The result? A significant uptick in crashes. In 2022, Tesla removed the ultrasonic sensors too, eliminating short-range object detection entirely.

Hardware 4 (HW4) reintroduced a forward-facing radar - a high-resolution unit codenamed Phoenix, with roughly a 300-meter range - but not in all models. The Model Y still uses camera-only perception.

Translation: Tesla spent years proving that cameras alone aren't reliable enough, removed the sensors that helped, watched crashes increase, and then quietly added some sensors back while pretending the camera-only approach still works.

Waymo's LiDAR Approach: Actually Works

Waymo uses a completely different approach: LiDAR sensors, radar, cameras, and extensive sensor fusion to build a detailed 3D model of the vehicle's surroundings.

That roof-mounted sensor package you see on Waymo vehicles isn't there for show. It creates a detailed point cloud showing exact distances to every object around the car, down to which direction a pedestrian is facing and whether a cyclist is throwing a hand signal.

The cameras fill in details like color and reflectivity that LiDAR can't capture. The radar handles weather conditions that challenge optical sensors. Everything gets fused together to create a comprehensive understanding of the environment.
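
As a deliberately oversimplified sketch - this is not Waymo's actual pipeline, just the shape of the idea - a late-fusion step might assign each sensor the job it's best at:

```python
# Oversimplified late fusion: each sensor contributes what it measures best.
# Field names and structure are invented for illustration.
def fuse(lidar_obj: dict, camera_obj: dict, radar_obj: dict) -> dict:
    return {
        "position_m":   lidar_obj["position_m"],    # geometry: direct LiDAR measurement
        "size_m":       lidar_obj["size_m"],
        "class":        camera_obj["class"],        # semantics: color, class, signals
        "velocity_mps": radar_obj["velocity_mps"],  # Doppler velocity, weather-robust
    }

track = fuse(
    {"position_m": (12.0, 1.5), "size_m": (0.6, 1.8)},
    {"class": "cyclist_signaling_left"},
    {"velocity_mps": 4.2},
)
print(track)
```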

The result? Waymo operates Level 4 autonomous vehicles in San Francisco with no human driver at all. Passengers sit in the back while the car handles everything.

Are Waymo vehicles perfect? No. In 2024, one Waymo car hit a utility pole at low speed during a pullover maneuver when its firmware incorrectly assessed the space between the pole and the road.

But that's one low-speed collision with a stationary object. Compare that to Tesla's track record of driving into semi-trailers, concrete barriers, fire trucks, and motorcyclists at highway speeds.

LiDAR doesn't make autonomous vehicles perfect. It just makes them significantly less likely to kill people.

Why Neural Networks Can't Replace Sensor Quality

The "cameras are enough" argument assumes you can compensate for lower-quality sensor data with better AI. Just throw more computing power at the problem and train a bigger neural network, right?

Waymo actually tried this approach. In 2018, they built ChauffeurNet, a recurrent neural network (RNN) trained on massive amounts of real and synthetic driving data to imitate human drivers.

The conclusion? Even with enormous amounts of data, pure imitation learning wasn't robust enough on its own. Deep learning has a place, but you need explicit reasoning rules to handle the "long tail" of edge cases that don't exist in your training data.

You can't put every conceivable driving scenario into a training dataset. A neural network can learn patterns from millions of examples, but it can't learn the reasoning behind decisions or adapt when it encounters something it's never seen before.

That's where human expertise comes in: defining explicit rules that use solid data about distances, speeds, and object dimensions to make decisions when faced with novel situations.

And here's the thing - those rules work way better when you have accurate distance information from LiDAR than when you're working with a CNN's best guess about what a camera is seeing.
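
A toy version of one such rule, with invented thresholds: when the classifier shrugs, fall back on what the LiDAR actually measured.

```python
# Invented rule for illustration: a confident classifier is trusted, but an
# unconfident one defers to LiDAR geometry. Thresholds are made up.
def treat_as_obstacle(cnn_confidence: float, lidar_hit: bool,
                      lidar_height_m: float) -> bool:
    if cnn_confidence >= 0.80:
        return True    # the network is sure: trust it
    if lidar_hit and lidar_height_m > 0.3:
        return True    # unsure what it is, but it's solid and tall
    return False

# The 29%-confidence barrier from earlier: the camera shrugs, the LiDAR doesn't.
print(treat_as_obstacle(cnn_confidence=0.29, lidar_hit=True, lidar_height_m=2.0))  # True
```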

The Real Difference: Certainty vs Probability

Let's get specific about why LiDAR actually matters for autonomous vehicles.

When your camera-plus-CNN system looks at the road ahead and says "29% probability of a large object," what do you do? Brake hard just in case? Ignore it because the probability is low? Hope the next frame gives you better data?

When LiDAR looks at that same spot, it tells you: "Solid object. 12 meters ahead. 1.5 meters wide. 0.8 meters tall. Not moving." That's not a guess. That's a measurement.

You know exactly where the object is, how big it is, and whether it's moving. You can calculate whether you have room to swerve around it. You can determine the exact braking distance needed to stop before hitting it.

This becomes critical at highway speeds. If you're doing 70 mph and need to decide whether to brake hard or swerve, "probably something up ahead" isn't good enough. You need certainty.
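
Run the numbers. Assuming a 0.3-second system latency and 6 m/s² of hard braking on dry pavement - both illustrative values - 70 mph leaves almost no margin for guesswork:

```python
# Rough stopping-distance arithmetic at 70 mph. Latency and deceleration
# are illustrative assumptions, not measured figures.
speed   = 31.3   # m/s, ~70 mph
latency = 0.3    # seconds before braking actually begins
decel   = 6.0    # m/s^2, hard braking on dry pavement

stopping = speed * latency + speed**2 / (2 * decel)
print(f"{stopping:.0f} m")  # ~91 m of road consumed before you stop
```

If a LiDAR return says the object is 80 meters out, you know you're already past the point of a clean stop. A 29% maybe tells you nothing of the kind.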

LiDAR provides that certainty. Cameras don't.

But What About Radar? Isn't That Good Enough?

Radar and LiDAR work on similar principles - both emit signals and measure reflections to determine distances. So why not just use radar, which is cheaper and works better in bad weather?

Resolution.

Radar uses longer wavelengths than LiDAR, which means lower resolution. LiDAR can create detailed 3D images showing exact shapes and orientations of objects. Radar just tells you something's there and roughly how far away it is.
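
How much longer? Typical automotive radar operates at 77 GHz, while a common automotive LiDAR uses a 905 nm infrared laser. A quick Python comparison:

```python
# Wavelength gap between typical 77 GHz automotive radar and a common
# 905 nm automotive LiDAR. Constants are real; sensor choices vary by vendor.
C = 299_792_458              # speed of light, m/s

radar_wavelength = C / 77e9  # ~3.9 mm
lidar_wavelength = 905e-9    # 905 nm

print(f"{radar_wavelength / lidar_wavelength:,.0f}x")  # ~4,302x longer
```

For a comparable aperture, angular resolution scales with wavelength, so that factor translates directly into how much finer LiDAR can resolve shape.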

For Waymo, that detail matters. Their LiDAR system can detect which direction a pedestrian is facing, whether a cyclist is signaling a turn, and the exact contours of objects in the road.

Radar is absolutely better than nothing - which is why Tesla quietly added it back to some vehicles after removing it. But it's not as good as LiDAR for building the comprehensive 3D model you need for true autonomous driving.

So the answer to "Is LiDAR necessary?" is: It depends on whether you want Level 4 autonomy that actually works, or Level 2 "assistance" that requires constant human supervision and occasionally kills people.

The Cost Argument Is Dead

For years, the defense of Tesla's camera-only approach was that LiDAR was too expensive to put in consumer vehicles. Waymo could afford roof-mounted sensor packages, but regular people buying Teslas couldn't.

That argument doesn't hold up anymore.

LiDAR costs have dropped dramatically. What required $75,000+ sensors a few years ago now works with systems costing a few hundred dollars. Multiple companies manufacture automotive-grade LiDAR that's practical for consumer vehicles.

Volvo and other manufacturers are already integrating LiDAR into production vehicles. It's not a cost barrier anymore - it's a philosophical choice.

Tesla could add LiDAR. They choose not to, apparently because admitting they were wrong about cameras being sufficient would be embarrassing.

That's a weird reason to keep building systems that crash into things.

What This Means For Autonomous Vehicle Deployment

Here's where this debate actually matters: Companies are making decisions right now about which approach to use for autonomous vehicles that will soon be driving on public roads.

If they follow Tesla's camera-only philosophy, we're going to see more crashes. More people killed. More incidents where the vehicle's sensor package simply couldn't detect an obstacle until it was too late.

If they follow Waymo's sensor fusion approach with LiDAR, radar, and cameras, we get systems that are significantly safer. Not perfect - no autonomous system is perfect - but much less likely to drive into stationary objects or fail to stop for pedestrians.

The frustrating part is that this isn't a technical debate anymore. The data is in. LiDAR-equipped systems are measurably safer than camera-only systems.

But we're still having the argument because Tesla's approach is cheaper to implement and one very influential CEO keeps insisting LiDAR is unnecessary.

The Bottom Line

Can you build an autonomous vehicle using only cameras? Technically, yes. Neural networks can extract a lot of information from camera images if you train them on enough data.

Should you build an autonomous vehicle using only cameras? Absolutely not. Not if your goal is to actually save lives rather than just ship a product.

LiDAR provides reliable distance measurements that cameras can't match. Radar adds weather resistance. Fusing multiple sensor types creates redundancy and fills in each sensor's blind spots.

Waymo's approach - LiDAR, radar, cameras, and extensive sensor fusion - produces vehicles that can drive themselves in complex urban environments without human supervision.

Tesla's approach - cameras with occasional radar support - produces vehicles that crash into obstacles their sensors couldn't reliably detect, require constant human supervision, and are classified as Level 2 autonomy rather than true self-driving.

The fight over LiDAR necessity is over. One approach works. The other kills people. Choose accordingly.

Real talk: If you own a Tesla with Autopilot or FSD, treat it like cruise control with lane keeping, not actual self-driving. Keep your hands ready, watch the road, and don't trust it around stationary objects, especially in bright conditions or at highway speeds. The system is impressive but it's not reliable enough to bet your life on.