Even if cameras can do it, it feels wrong not to use tech that can do it better than vision alone. Even if it costs more, it should still be used, because these are machines that can kill people, and life is too valuable. If it can't be done as well as we possibly can, then maybe it shouldn't be done at all yet.
The corporation's risk-assessment department has calculated that it's more cost-effective to deny liability and fight the consequences in court than to spend the extra money up front?
It's because Musk is a combination of stubborn and cheap. That's it - he wants to collect FSD subscriptions or that $10k purchase price without spending any extra money.
Perhaps he can figure it out. It won't be in the next few years.
Some people live in brick houses and others in trailers or worse. It’s just not how the reality of the world works. The reality is we’re lucky to even have the luxury of a warm trailer relative to the chaos and pain of nature.
On a different note, TIL that Tesla uses the raw camera sensor data to build an occupancy network, instead of using images and object detection, which makes it feel to me like Tesla isn't really using vision.
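For anyone unfamiliar with the term: an occupancy network doesn't ask "is that a car or a pedestrian?", it asks "is this chunk of 3D space occupied by *something*?" Here's a toy sketch of that idea (my own illustration, nothing to do with Tesla's actual network or training pipeline):

```python
import numpy as np

def occupancy_grid(points, cell_size=0.5, extent=10.0):
    """Quantize 3D points (N, 3) into a boolean voxel grid.

    Instead of labeling objects, we just mark which cells around the
    vehicle (at the origin) are occupied, whatever they contain.
    """
    n = int(2 * extent / cell_size)          # cells per axis
    grid = np.zeros((n, n, n), dtype=bool)
    idx = np.floor((points + extent) / cell_size).astype(int)
    # keep only points that fall inside the grid extent
    valid = np.all((idx >= 0) & (idx < n), axis=1)
    grid[tuple(idx[valid].T)] = True
    return grid

# two nearby points in front of the "car" land in the same 0.5 m cell
pts = np.array([[2.0, 0.0, 0.0], [2.1, 0.1, 0.0]])
grid = occupancy_grid(pts)
print(grid.sum())  # -> 1 occupied cell
```

The upside is that an unrecognized object (a mattress, a crane arm) still shows up as "occupied"; the downside is you've thrown away the semantics that classic image-space object detection gives you.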
Then again, Musk says Tesla's cameras do photon counting to account for snow, fog, rain...
Except that 1) isn't possible with the cameras they're using, and 2) even if it was, wouldn't be possible in an "open" environment, like "outside a vehicle".
-- Volvo is upgrading the central computer on all 2025 EX90s for free.
-- The company has spent over a year trying to squash software bugs in the EX90, but owners are still reporting serious issues and glitches.
-- One owner told InsideEVs that her EX90 has been a "dumpster fire inside a train wreck."
Sadly, Morgan Stanley suggests that the release of this information may invoke the "Osborne Effect." Osborne Computer, decades ago, released information about a great new future model. This resulted in customers refraining from buying the current Osborne model, leading to steep losses and bankruptcy. Not to say bankruptcy will happen to Rivian, but many customers may refrain from buying the current R1 model and instead opt for the future R2 model, which may result in a bad quarter or two for Rivian. FWIW, Rivian seems to have amazing technology in the works, but investors may be in for a bumpy ride until a successful R2 rollout ... according to Morgan Stanley (which may be conflicted by their business with Tesla).
>>Morgan Stanley suggests that the release of this information may invoke the "Osborne Effect." Osborne Computer, decades ago, released information about a great new future model. This resulted in customers refraining from buying the current Osborne model, leading to steep losses and bankruptcy.
Sadly that effect never hit Tesla, despite the promises of FSD "next year" ... for the last 10 years ...
It’s literally called the FSD computer, and the software is called “Supervised.” So if anything they’re directly claiming it’s a hardware accomplishment while the software is the lesser part.
Their production stack is explicitly multi-sensor: LiDAR is the primary source for metric 3D geometry and localization, while cameras are mainly for semantics. Waymo documents the Waymo Driver as LiDAR plus cameras plus radar.
Waymo uses both LiDAR and radar to collect precise data on distance and speed. If it's foggy (commonly the case in SF), those two let the service continue without interruption.
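The "speed" part is worth spelling out: radar doesn't infer speed from frames over time the way a camera would, it reads it directly off the Doppler shift of the return. Back-of-envelope sketch (my own numbers, not anything from Waymo; 77 GHz is just a typical automotive radar band):

```python
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # assumed automotive radar carrier frequency, Hz

def radial_velocity(doppler_shift_hz):
    # v = f_d * c / (2 * f_c); the factor of 2 is because the wave
    # travels out to the target and back, doubling the shift
    return doppler_shift_hz * C / (2 * F_CARRIER)

# a ~5.1 kHz Doppler shift corresponds to ~10 m/s closing speed
print(round(radial_velocity(5133.3), 2))  # -> 10.0
```

Fog droplets are far smaller than the radar wavelength (~4 mm at 77 GHz), which is why this measurement keeps working when cameras are staring into gray soup.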
A Tesla in those conditions would hopefully refuse to drive itself, but for some reason I suspect it would just drive badly and pretend it was doing something safe.
I was going to say the opposite: that unlike back in the Osbourne days, consumers today understand that there will always be “something better” announced soon, and they’re used to making purchase decisions anyway.
* were positioned for stereopsis like the human visual system
* had 6 degrees of freedom of motion like the human visual system
* were hyper-adaptive to lighting conditions like the human visual system
* had a significantly higher density of pixels per degree of arc in the focus region like the human visual system
* and were backed by a system capable of intuiting object inertia like the human visual system.
Tesla does none of those.