News Focus

blackhawks

12/03/23 10:48 PM

#456050 RE: 12yearplan #456049

I put the following question to Bard, Google's AI app.

Can we algorithmize the following into all AI applications?

1. A robot/AI may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot/AI must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot/AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

The reply:

Whether or not we can algorithmize Isaac Asimov's Three Laws of Robotics into all AI applications depends on several factors:

1. Level of AI development:

Current AI: Our current AI systems lack the level of self-awareness, intentionality, and moral reasoning necessary to fully understand and act upon these laws. They are good at pattern recognition and specific tasks, but struggle with complex ethical situations.

Future AI: If we develop truly conscious and ethical AI, then it might be possible to encode these principles into their core programming. However, even then, nuances and edge cases would likely require ongoing updates and refinements.

2. Interpretation and ambiguity:

The Three Laws are open to interpretation and contain ambiguities. For example, what constitutes "harm"? How does a robot weigh conflicting orders or prioritize its own existence? These ambiguities make it difficult to translate them into clear and unambiguous algorithms.

Even with precise definitions, edge cases will inevitably arise. An algorithm can only account for so many scenarios, and unforeseen situations might require the AI to make moral judgments beyond its programming.

3. Ethical considerations:

Encoding these laws into AI could be seen as limiting its autonomy and potentially hindering its development. It's important to find a balance between ethical safeguards and allowing AI to learn and adapt.

Who gets to decide the specific interpretations and implementations of the Laws? This raises questions of power, bias, and potential misuse of AI for control.

4. Practical limitations:

Implementing these laws in real-world applications is complex. Sensors and data might not be able to capture all relevant information about a situation, leading to misinterpretations and potential harm.

The computational cost of running complex ethical algorithms in real-time could be prohibitive for some applications.

In conclusion:

While fully algorithmizing the Three Laws into all AI applications may not be possible with our current technology, they serve as valuable guiding principles for AI development. We should strive to imbue AI with a sense of ethics and responsibility, even if through less rigid means than formal laws. Ultimately, the goal is to create AI that benefits humanity and aligns with our values, even in situations beyond the scope of pre-programmed rules.

It's an ongoing discussion with no easy answers. We need to be cautious, thoughtful, and open to adapting our approach as AI technology evolves.
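
To make the question concrete, here is a minimal sketch (mine, not Bard's) of what a literal encoding of the Three Laws as a priority-ordered action filter could look like in Python. Every name in it (Action, harms_human, permitted) is a hypothetical placeholder, and the stub harms_human predicate is exactly where the ambiguity Bard flags in point 2 ends up concentrated:

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    ordered_by_human: bool
    endangers_robot: bool

def harms_human(action: Action) -> bool:
    # Placeholder predicate. A real version would need a formal, contestable
    # definition of "harm" (physical? psychological? economic?), which is
    # exactly what does not exist today.
    return "injure" in action.description.lower()

def permitted(action: Action) -> bool:
    # First Law: never take an action that harms a human.
    if harms_human(action):
        return False
    # Second Law: obey human orders (already subordinate to the First Law,
    # since the harm check ran first).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise refuse actions that endanger the robot itself.
    return not action.endangers_robot

print(permitted(Action("fetch coffee", ordered_by_human=True, endangers_robot=False)))         # True
print(permitted(Action("injure the intruder", ordered_by_human=True, endangers_robot=False)))  # False

Note that even this toy version cannot express the "through inaction" clause of the First Law: a per-action filter only judges what the robot does, not the unbounded set of things it fails to do, which is one concrete reason the Laws resist direct translation into code.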


Here's Wikipedia's take. I think Bard tapped into it.

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

Applications to future technology

[Image: ASIMO, an advanced humanoid robot developed by Honda, shown at Expo 2005.]

Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in, and devise a means to do so. Robots already exist (for example, a Roomba) that are too simple to understand when they are causing pain or injury, let alone to know to stop. Many are constructed with physical safeguards such as bumpers, warning beepers, safety cages, or restricted-access zones to prevent accidents.
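
A side note on those safeguards: they work today precisely because they need no ethical reasoning at all. Here is a rough Python sketch of the interlock pattern, with every sensor and motor name invented for illustration rather than taken from any real robot's API:

def safeguard_loop(read_bumper, read_cage_open, motor_stop, motor_step):
    # Advance one motion increment at a time, checking interlocks first.
    while True:
        if read_bumper() or read_cage_open():
            motor_stop()   # hard stop; stay stopped until a human resets
            return
        motor_step()       # no safeguard tripped, so move one step

# Simulated run: the bumper trips on the third poll.
hits = iter([False, False, True])
safeguard_loop(
    read_bumper=lambda: next(hits),
    read_cage_open=lambda: False,
    motor_stop=lambda: print("halt"),
    motor_step=lambda: print("step"),
)

The halt decision keys off a tripped sensor, not off any interpretation of "harm", so it sidesteps everything the Three Laws would have to formalize.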

Even the most complex robots currently produced are incapable of understanding and applying the Three Laws; significant advances in artificial intelligence would be needed to do so, and even if AI could reach human-level intelligence, the inherent ethical complexity as well as cultural/contextual dependency of the laws prevent them from being a good candidate to formulate robotics design constraints.[49] However, as the complexity of robots has increased, so has interest in developing guidelines and safeguards for their operation.[50][51]

In a 2007 guest editorial in the journal Science on the topic of "Robot Ethics", SF author Robert J. Sawyer argues that since the U.S. military is a major source of funding for robotic research (and already uses armed unmanned aerial vehicles to kill enemies) it is unlikely such laws would be built into their designs.[52] In a separate essay, Sawyer generalizes this argument to cover other industries stating:

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)[53]

David Langford has suggested[54] a tongue-in-cheek set of laws:

A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Roger Clarke (aka Rodger Clarke) wrote a pair of papers analyzing the complications in implementing these laws in the event that systems were someday capable of employing them. He argued "Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disprove the contention that he began with: It is not possible to reliably constrain the behaviour of robots by devising and applying a set of rules."[55] On the other hand, Asimov's later novels The Robots of Dawn, Robots and Empire and Foundation and Earth imply that the robots inflicted their worst long-term harm by obeying the Three Laws perfectly well, thereby depriving humanity of inventive or risk-taking behaviour.

In March 2007 the South Korean government announced that later in the year it would issue a "Robot Ethics Charter" setting standards for both users and manufacturers. According to Park Hye-Young of the Ministry of Information and Communication the Charter may reflect Asimov's Three Laws, attempting to set ground rules for the future development of robotics.[56]

The futurist Hans Moravec (a prominent figure in the transhumanist movement) proposed that the Laws of Robotics should be adapted to "corporate intelligences" — the corporations driven by AI and robotic manufacturing power which Moravec believes will arise in the near future.[50] In contrast, the David Brin novel Foundation's Triumph (1999) suggests that the Three Laws may decay into obsolescence: robots use the Zeroth Law to rationalize away the First Law, and robots hide themselves from human beings so that the Second Law never comes into play. Brin even portrays R. Daneel Olivaw worrying that, should robots continue to reproduce themselves, the Three Laws would become an evolutionary handicap and natural selection would sweep the Laws away — Asimov's careful foundation undone by evolutionary computation. Although the robots would be evolving through design rather than mutation (they would have to follow the Three Laws while designing, which would ensure the Laws' prevalence),[57] design flaws or construction errors could functionally take the place of biological mutation.

In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy (Raytheon Professor of Computer Science and Engineering at Texas A&M) and David D. Woods (director of the Cognitive Systems Engineering Laboratory at Ohio State) proposed "The Three Laws of Responsible Robotics" as a way to stimulate discussion about the role of responsibility and authority when designing not only a single robotic platform but the larger system in which the platform operates. The laws are as follows:

A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.

A robot must respond to humans as appropriate for their roles.

A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.[58]

Woods said, "Our laws are a little more realistic, and therefore a little more boring," and that "The philosophy has been, 'sure, people make mistakes, but robots will be better – a perfect version of ourselves.' We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways."[58]
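
For contrast with the Three Laws sketch earlier in the thread, here is a hypothetical Python sketch of where the Murphy/Woods framing relocates the checks: to the human-robot work system at deployment time and to human roles at command time, rather than to machine moral reasoning. The WorkSystem fields, the role table, and all function names are invented for illustration:

from dataclasses import dataclass

@dataclass
class WorkSystem:
    # Both flags stand in for "the highest legal and professional standards
    # of safety and ethics" from their first law.
    safety_certified: bool
    ethics_reviewed: bool

# Their second law: respond to humans "as appropriate for their roles".
ROLE_PERMISSIONS = {
    "operator": {"move", "stop", "handoff"},
    "bystander": {"stop"},   # anyone may stop the robot, nothing more
}

def may_deploy(system: WorkSystem) -> bool:
    # Their first law: a human may not deploy the robot unless the whole
    # work system qualifies.
    return system.safety_certified and system.ethics_reviewed

def accept_command(role: str, command: str) -> bool:
    return command in ROLE_PERMISSIONS.get(role, set())

def self_protect(in_danger: bool, human_in_control: bool) -> str:
    # Their third law: autonomy is used only to stay recoverable, and it
    # always yields to a smooth transfer of control.
    if in_danger and not human_in_control:
        return "retreat"
    return "defer"

print(may_deploy(WorkSystem(safety_certified=True, ethics_reviewed=False)))  # False
print(accept_command("bystander", "move"))                                   # False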

In early 2011, the UK published what is now considered the first national-level AI soft law, consisting largely of a revised set of five laws, the first three of which updated Asimov's. These laws were published, with commentary, by the EPSRC/AHRC working group in 2010:[59][60]

Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.

Robots are products. They should be designed using processes which assure their safety and security.

Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

The person with legal responsibility for a robot should be attributed.

joyceschoice

12/03/23 11:02 PM

#456051 RE: 12yearplan #456049

There's hope?

Change is the only constant. When AI learns that... well, we may get bored and/or we're off to the races.

fuagf

12/04/23 3:39 PM

#456087 RE: 12yearplan #456049

Sticking to 2024, a massive democratic win is the only short-term way.

I left the update thought for another place.