Not-Yet-Full Self Driving on Tesla (And How to Make it Better)

We have had Full Self Driving (FSD) Beta on our Tesla Model Y for some time. I wrote a previous post on how much the car's autodrive reduces the stress of driving and want to update it for the FSD experience. The short of it is that the car goes from competent driver to making beginner’s mistakes in a split second.


One class of situations with which FSD really struggles is non-standard intersections. Upstate New York is full of these, with roads meeting at odd angles (e.g. small local roads crossing the Taconic Parkway). A common failure mode is taking a corner too tightly at first, then overcorrecting and partially crossing the median. Negotiating with other cars at four-way stops, which are also abundant upstate, is hilariously entertaining, by which I mean scary as hell.

The most frustrating part of the FSD experience, though, is that it makes the same mistakes in the same locations and there is no way to give it feedback. This is a huge missed opportunity on Tesla's part. FSD should be designed so that the car is very clear when it is uncertain and asks for help, and so that it accepts feedback after making a mistake. Right now FSD comes off as a cocky but terrible driver, which induces fear and frustration. If instead it acted like a novice eager to learn, it could elicit a completely different emotional response. That in turn would provide Tesla with a huge amount of actual training data!
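To make the idea concrete, here is a minimal sketch of what a confidence-gated loop with feedback capture could look like. Everything here is hypothetical (the names `Maneuver`, `FeedbackLog`, the 0.8 threshold, and the prompt mechanism are illustrative assumptions, not Tesla's actual system): the planner exposes its own uncertainty, hands off and asks for help when unsure, and records the human's correction as a location-tagged training example.

```python
# Hypothetical sketch: confidence-gated handoff with feedback logging.
# All names and thresholds are illustrative, not Tesla's actual API.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # below this, flag uncertainty and ask the driver


@dataclass
class Maneuver:
    description: str   # e.g. "left turn at odd-angle intersection"
    confidence: float  # planner's own estimate, 0.0-1.0


@dataclass
class FeedbackLog:
    examples: list = field(default_factory=list)

    def record(self, maneuver: Maneuver, correction: str) -> None:
        # Each (situation, correction) pair becomes a training example,
        # so the same mistake in the same location need not recur.
        self.examples.append((maneuver.description, correction))


def ask_driver(maneuver: Maneuver) -> str:
    # Placeholder for an in-car prompt; a real system would collect the
    # driver's actual correction (steering input, voice, etc.).
    return f"driver correction for: {maneuver.description}"


def drive_step(maneuver: Maneuver, log: FeedbackLog) -> str:
    if maneuver.confidence < CONFIDENCE_THRESHOLD:
        # Act like a novice: signal uncertainty instead of plowing ahead.
        correction = ask_driver(maneuver)
        log.record(maneuver, correction)
        return "handed_off"
    return "executed"
```

The key design choice is that low confidence triggers both a handoff *and* a logged correction, turning every scary moment into training data rather than just a disengagement.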

In thinking about AI progress and designing products around it, there are two failure modes at the moment. One is to dismiss the clear progress that’s happening as just a parlor trick not worthy of attention. The other is to present systems as if they were already at human or better-than-human capability and hence take humans out of the loop (which is true in some closed domains but not yet generally).

It is always worth remembering that airplanes don’t fly the way birds do. It is similarly unlikely that machines will drive or write or diagnose the way humans do. The opportunity for them to outdo us at these activities exists exactly because they have access to modes of representing knowledge that are difficult for humans (e.g. large graphs of knowledge). Put differently: just as the AI field made the mistake of dismissing the potential of neural networks again and again, we are now entering a phase that needlessly dismisses ontologies and other explicit knowledge representations.

I believe we are poised for further breakthroughs from combining techniques, in particular making it easier for humans to teach machines. And autonomous vehicles are unlikely to be fully realized until we do.

Posted: 5th December 2022
Tags: ai, tesla, autonomous vehicles, human in the loop
