Re: None

Saturday, September 23, 2023 9:56:43 PM

Post# of 86625
Engineering whistleblower explains why safe Full Self-Driving can't ever happen

TheStreet reported last week that the artificial intelligence models powering self-driving cars carry a number of vulnerabilities, not the least of which is the industry's lack of standardized testing platforms for independently verifying that a model is safe.

Safe, human-level self-driving, however, isn't just around the corner, according to Navy veteran and engineer Michael DeKort. The costs in human lives, time, and money, he says, are too high for truly safe self-driving ever to be achieved.

The issue for DeKort, the engineer who exposed Lockheed Martin's subpar safety practices in 2006, is that artificial general intelligence (an AI with human-level intelligence and reasoning capabilities) does not exist. Instead, the AI behind self-driving cars learns through extensive pattern recognition.

Human drivers, he said, are constantly scanning their environment. When they see something, whether a group of people about to cross an intersection or a deer at the side of the road, they react without needing to register every detail of the potential threat (its color, for example).

"The problem with these systems is they work from the pixels out. They have to hyperclassify," DeKort told TheStreet. Pattern recognition, he added, is just not feasible, "because one, you have to stumble on all the variations. Two, you have to re-stumble on them hundreds if not thousands of times because the process is extremely inefficient. It doesn't learn right away."

"You can never spend the money or the time, or sacrifice the lives to get there," he said. "You have to experience to learn and you have to experience over and over again."

Self-driving cars would have to log billions to hundreds of billions of miles under their current methods to demonstrate a fatality rate in line with that of human drivers, roughly one per 100 million miles, a 2016 RAND study found. RAND also found that as self-driving cars improve, accurately assessing their performance gets harder, because the remaining failures are concentrated in rare edge cases.

Tesla's beta version of FSD, according to Elon Musk, has covered some 300 million miles; by RAND's calculations, the company would have to scale that mileage up 100 to 1,000 times to demonstrate a system as safe as a human driver. Still, as Musk himself inadvertently demonstrated in a recent demo, drivers can't yet take a nap while their Tesla drives them somewhere; they must be ready to take control at a moment's notice.
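
To see where mileage figures on that scale come from, here is a minimal back-of-the-envelope sketch in Python. It treats fatalities as rare Poisson events and asks how many fatality-free miles a fleet must log to match the human rate at a given confidence level. The one-per-100-million-miles rate and the 300-million-mile figure come from the article; the 95% confidence level and the zero-failure Poisson model are illustrative assumptions, not RAND's exact methodology.

```python
import math

# Human benchmark from the article: ~1 fatality per 100 million miles.
HUMAN_FATALITY_RATE = 1 / 100e6   # fatalities per mile
CONFIDENCE = 0.95                 # assumed confidence level, for illustration

# If a fleet drives n fatality-free miles and the true rate is r, the
# probability of observing zero fatalities is exp(-r * n). To claim with
# the chosen confidence that the true rate is no worse than r, we need
# exp(-r * n) <= 1 - CONFIDENCE, i.e. n >= -ln(1 - CONFIDENCE) / r.
miles_needed = -math.log(1 - CONFIDENCE) / HUMAN_FATALITY_RATE
print(f"Fatality-free miles to match the human rate: "
      f"{miles_needed / 1e6:,.0f} million")   # ~300 million miles

# Demonstrating a rate *better* than human, or tolerating any observed
# fatalities, inflates the requirement into the billions of miles,
# which is the range the article cites.
TESLA_FSD_MILES = 300e6           # Musk's ~300 million beta miles (article)
for factor in (100, 1_000):
    total = TESLA_FSD_MILES * factor
    print(f"FSD mileage scaled up {factor}x: {total / 1e9:,.0f} billion miles")
```

Note that the zero-failure case is the most optimistic one: any fatality observed during the demonstration, or any attempt to show the system is safer than humans rather than merely comparable, pushes the required mileage well past this lower bound.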

Tesla is currently facing a series of investigations into the safety of its FSD software.
