I don't know, I don't get that feeling. What makes you think that? I don't know much about Elon, but I listened to one of his interviews with Rogan, and he struck me as extremely optimistic, but also grounded and not arrogant.
(I do agree partial-self-driving just seems like a terrible idea. I guess crash stats can reveal if this is true or not, but are perhaps not available.)
Autopilot and FSD beta are not the same. The latter is currently available to maybe a couple dozen testers who are clearly very well informed about the capabilities of the system, as well as the changes in each update. If you really don't believe it, watch their hours and hours of (frankly boring) videos analyzing the behavior of the system in complex situations while still staying on top of safety.
It does remain to be seen how well Tesla will trust the general public with this level of improved autonomy. As you get closer and closer to the uncanny valley where things just appear to work, you get into the more tricky situations that truly befuddle humans and machines alike.
NHTSA scrutinizes crashes that involve anything close to Autopilot and FSD quite heavily. Aside from one or two incidents they've had complaints about, none of them has risen to the point where they had to put their foot down. Admittedly, Tesla were a big bunch of jerks about how they handled the situation, but still, these were isolated incidents with clear misuse on the driver's part.
I agree with you that Musk is overly optimistic (no shit, he's been saying this would be ready in 2018, and it's unclear if it will be in 2021). But he's also quite well informed about the facts on the ground, and is clearly aiming for the moonshot-winner-take-all prize by skipping Lidar and high-precision mapping. That might be a gamble, but it need not be an inherently dangerous one, depending on how Tesla handles the super-grey areas around the uncanny valley, where the system appears to work but really isn't worth risking your life on. To some, it's already there, as you can see from idiots sleeping in their Teslas while on Autopilot. But again, outside of a couple of incidents over years and millions of miles, the rate of catastrophic failure (accidents) has been surprisingly low.
I completely underestimated the role of professional safety drivers for autonomous vehicles. I thought it was "just a guy" sitting in the car for good measure, but it turns out that the majority of drivers are not fit for the job even after lengthy training; see e.g. [1] (a great podcast in general).
Also, all autonomous driving companies employ safety drivers - except one.
> NHTSA scrutinizes crashes that involve anything close to Autopilot and FSD quite heavily.
I wouldn't put too much hope into the NHTSA regulating Autopilot. It took a two-year legal battle to get the data behind their 2017 analysis of Autopilot; it turned out the data was provided entirely by Tesla, and worse, once confounders were removed, it actually showed a higher crash rate for Autopilot.
If you take a non-American view of Autopilot, European agencies did scrutinize the crashes more closely and, as a result, have restricted the use of Autopilot.
If you are interested in the topic of autonomous driving I recommend the Autonocast podcast.
Elon will get into some pretty bizarre bouts on Twitter. I realize this is common for celebrities, but that whole "diver is a pedophile" thing was truly wtf.
If you go read the court testimony, it isn't as strange as it seems on its face. The diver started the tiff, and the insult Musk sent in return was said to be common vernacular in South Africa, where he grew up.
His covid comments last year were beyond the pale. The low point for me was when Shannon Woodward, who played a scientist on TV and to my knowledge isn't one, had to explain to him that tests are indeed not a big pharma conspiracy.
There is a lesson in sales and trust hidden in there. No matter how good your device is, an inexperienced (and worse when famous) person can sow distrust in it instantly.
Wouldn't surprise me if the next machine is just 4 machines glued together to make it Elon Musk proof.
"Had to" and "she just replied on Twitter with some odd assumptions" are very different things. I think anyone would agree that a test that is wrong half the time is not a good test.