Friday, April 07, 2023 12:47:10 PM
>>> Elon Musk, tech leaders call for pause in AI race to prevent risk to 'humanity'


by Sheri Walsh

UPI

Mar 29, 2023


https://www.msn.com/en-us/news/technology/elon-musk-tech-leaders-call-for-pause-in-ai-race-to-prevent-risk-to-humanity/ar-AA19f7Bi


March 29 (UPI) -- Hundreds of tech leaders and researchers are warning artificial intelligence labs to immediately stop training AI systems with human-competitive intelligence that "can pose profound risks to society and humanity."

SpaceX founder Elon Musk and hundreds of tech leaders are warning artificial intelligence labs, in an open letter Wednesday, to immediately stop the "out of control" advanced AI race for six months to make sure all systems are safe, or face "profound risks to society and humanity."

The open letter to AI labs was signed Wednesday by Elon Musk, Apple co-founder Steve Wozniak and politician Andrew Yang, along with more than 1,300 other big-name tech experts.

The letter blasts AI labs for proceeding without an adequate level of planning and management, as it called for a pause of "at least 6 months" on the training of "AI systems more powerful than GPT-4."

"Recent months have seen AI labs locked in an out of control race to develop and deploy ever more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control," the letter, published by the nonprofit Future of Life Institute, warned.

"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources," the letter said.

The letter also called on governments to step in and issue a moratorium, if AI experiments are not stopped immediately, while creating independent regulators to make sure all future systems are safe.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said.

The letter from tech experts comes two weeks after OpenAI announced GPT-4, the next generation of the AI technology behind the chatbot ChatGPT, which is currently used in Microsoft and Google products. OpenAI claims GPT-4 can pass a simulated bar exam with a score in the top 10% of test takers.

"Contemporary AI systems are now becoming human-competitive at general tasks," tech leaders warned.

"We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" the letter queried.

OpenAI has posed similar questions about regulating AI systems.

"At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models," OpenAI said in a recent statement, to which Wednesday's letter responded:

"We agree. That point is now."

<<<