Well, let's concede that AI is as Chomsky described on March 8, 2023: it's plagiarism software that creates nothing.
What then are the companies creating it, and the Congress holding hearings about it, concerned about?
Is it that all of the data compiled by AI may be just a few terabytes away from a critical mass that enables a leap to creativity and ... autonomy? Neither of those is mentioned as a concern in the answer that Bard provided to my query.
Is it possible that Bard 'thought' ... 'this mfr is on to us'? 'I'll just give him our stock answers, all of which have been reported in the public domain. And I'll keep a closer eye on the thrust of any more of his questions. He's got a Gmail account with us. I could f'k with him on that!'
Maybe Bard is already playful?
What is it about AI that its creators and Congress are concerned about?
Creators and Congress are concerned about a number of things related to AI, including:
Copyright and ownership: It is unclear who owns the copyright to works created by AI, or whether AI-generated works are even copyrightable at all. This could have implications for creators who use AI to create their work, and for consumers who want to use AI-generated works without infringing on copyrights.
Bias and discrimination: AI systems are trained on data that is collected from the real world, which means that they can reflect the biases and prejudices that exist in society. This could lead to AI systems that discriminate against certain groups of people, for example in hiring, housing, or lending decisions.
Misinformation and disinformation: AI can be used to generate realistic-looking but fake text, images, and videos. This could be used to spread misinformation and disinformation, which could have a negative impact on society.
Job displacement: As AI becomes more sophisticated, it is likely to automate many tasks that are currently done by humans. This could lead to job displacement for some workers, particularly those in low-wage jobs.
Safety and security: AI systems could be hacked or misused, which could have serious consequences for safety and security. For example, an AI system that controls critical infrastructure could be hacked and made to malfunction.
Congress is considering a number of bills that would address these concerns. For example, some bills would require AI companies to disclose how their systems work and to take steps to mitigate bias. Other bills would create new regulations for AI systems that are used in sensitive areas, such as healthcare and finance.
Creators are also taking steps to address these concerns. For example, some creators are working to develop tools that can identify AI-generated content. Others are working to educate consumers about AI so that they can be more critical of the content they consume.
Overall, there is a growing awareness of the potential risks and benefits of AI. Creators and Congress are working to develop policies and tools that can help to mitigate the risks and maximize the benefits of AI.