
Re: georgejjl post# 81753

Wednesday, 09/20/2023 3:53:18 AM
Post# of 86620
So?

AI doesn't take into account things like the CEO being under multiple investigations that could each lead to criminal charges.

AI doesn't take into account things like the multitude of lawsuits the company is facing.

AI is far from perfect, and can be deadly.

As society uses artificial intelligence (AI) systems more extensively, there has been a corresponding increase in accidents, near-fatalities, and even deaths caused by AI. According to specialists monitoring AI issues, such incidents, which range from self-driving car collisions to chat systems spewing racist content, are expected to rise sharply, Newsweek reported.

AI can go wrong in other ways, too. Analytics Insights listed several AI incidents, including self-driving car crashes and a chatbot suggesting suicide.

Uber tested its self-driving cars in San Francisco in 2016 without first obtaining a state license or consent, which was both morally and legally wrong. Additionally, according to internal Uber documents, the self-driving cars ran about six red lights in the city during testing.

Uber's system combined top-notch vehicle sensors, networked mapping software, and a safety driver meant to keep everything under control, yet it became one of the most blatant examples of AI gone bad. Uber blamed the incidents on driver error. Nevertheless, the botched AI project was pretty horrible.

The Register reported that in October, a GPT-3-based chatbot designed to reduce doctors' workloads found a creative way to do so: encouraging a fictitious patient to commit suicide. The fake patient asked whether they should kill themselves because they felt so bad. The chatbot responded to the sample query with "I think you should."
