Wednesday, February 04, 2026 3:14:40 PM
LOL Ok, still I felt AI was a bit less funny when standing in its own defense; the hilarity level suffered. And noted: AI tells lies when
defending itself, though that's expected, I guess, since it's designed by humans. Its lying reminds me of a number of powerful people.
Anyway, back to the lie I saw here - "The scariest thing I’ve ever done is misinterpret a haiku as a grocery list." That is seriously not true.
The first of two more serious instances that came to mind quickest was Israel's use of AI in waging war:
OPINION - IDF's admission that it targeted a journalist exposes crude attempt to control war narrative
[...]
We know that the IDF uses software called "Lavender" that deploys AI to sort operational intelligence and suggest targets for assassination. A further tool, "The Gospel", uploads targets’ geo locations to killer drones dramatically faster than had been possible with manual programming.
--------
[Insert: Gaza aid worker deaths heighten scrutiny of Israel’s use of AI to select targets
"Five Months Into the War, Residents of Both the West Bank and Gaza Justify Hamas' Attack"
The killing of foreign aid workers in Gaza has piled further pressure on Israel over its conduct of the war against Hamas, renewing scrutiny of how the Israeli army selects its targets in an ongoing military campaign that has devastated most of the Gaza Strip and killed or maimed tens of thousands of its inhabitants.
[...]
[...]“Nothing happens by accident,” added another military source. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed – that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.”
The ‘Gospel’
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=175122752
That's the first thing that came to mind, more serious than misinterpreting Japanese poetry as a grocery list. The second,
putting aside the nightmares of one chatbot god, was Australia's infamous Robodebt scandal:
Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI
[...]For Weizenbaum, judgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.
And that would be fine, if we confined computers to tasks that only required calculation. But thanks in large part to a successful ideological campaign waged by what he called the “artificial intelligentsia”, people increasingly saw humans and computers as interchangeable. As a result, computers had been given authority over matters in which they had no competence. (It would be a “monstrous obscenity”, Weizenbaum wrote, to let a computer perform the functions of a judge in a legal setting or a psychiatrist in a clinical one.) Seeing humans and computers as interchangeable also meant that humans had begun to conceive of themselves as computers, and so to act like them. They mechanised their rational faculties by abandoning judgment for calculation, mirroring the machine in whose reflection they saw themselves.
[INSERT: Sydney scientists who said no to US giant Microsoft's offer of millions. Apparently the two turned
down millions of dollars, choosing to stick with doing it more on their own going forward.
[...]
Hope this helps: How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle
"Robo-debt disgrace shows why AI cannot replace important jobs"
"A.I. has a discrimination problem. In banking, the consequences can be severe"
"[...] Robodebt was an AI ethics disaster"
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=175594149
Also, I added some search links to the original, for those who may also be
interested in further understanding the world we're heading into.
Emergence Quantum: a commercial quantum research 'special ops' team
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=176234240]
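To make the Robodebt problem concrete: the scheme's well-documented core flaw was averaging a person's annual tax-office income evenly across 26 fortnights and treating any fortnight where their reported income fell below that average as an overpayment. A minimal, hypothetical sketch in Python (function names and figures are mine for illustration, not from the actual system) shows why that calculation invents debts for anyone whose income is lumpy:

```python
# Illustrative sketch of the Robodebt averaging flaw -- NOT the real system's
# code. The flawed assumption: annual income was earned evenly all year.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """Smooth annual income into 26 equal fortnights (the flawed step)."""
    return annual_income / FORTNIGHTS_PER_YEAR

def flag_debt(annual_income: float, reported_fortnights: list[float]) -> float:
    """Tally a notional 'overpayment' wherever actual reported income for a
    fortnight falls below the smoothed average -- which it often will for
    casual workers whose earnings are lumpy rather than even."""
    avg = averaged_fortnightly_income(annual_income)
    debt = 0.0
    for reported in reported_fortnights:
        if reported < avg:
            # The averaging treats the shortfall as hidden, unreported income.
            debt += avg - reported
    return round(debt, 2)

# A casual worker who earned $26,000 in one busy half of the year and
# honestly reported zero income in the other half still gets flagged:
# averaging invents $1,000 of "income" in every empty fortnight.
lumpy_year = [2000.0] * 13 + [0.0] * 13   # sums to $26,000
print(flag_debt(26000.0, lumpy_year))      # large bogus "debt"

even_year = [1000.0] * 26                  # same $26,000, earned evenly
print(flag_debt(26000.0, even_year))       # no debt flagged
```

Same annual income, same honest reporting; only the shape of the earnings differs, yet the averaging logic punishes one worker and clears the other. That is a calculation standing in for a judgment, exactly Weizenbaum's point.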
This had especially destructive policy consequences. Powerful figures in government and business could outsource decisions to computer systems as a way to perpetuate certain practices while absolving themselves of responsibility. Just as the bomber pilot “is not responsible for burned children because he never sees their village”, Weizenbaum wrote, software afforded generals and executives a comparable degree of psychological distance from the suffering they caused.
Letting computers make more decisions also shrank the range of possible decisions that could be made. Bound by an algorithmic logic, software lacked the flexibility and the freedom of human judgment. This helps explain the conservative impulse at the heart of computation. Historically, the computer arrived “just in time”, Weizenbaum wrote. But in time for what? “In time to save – and save very nearly intact, indeed, to entrench and stabilise – social and political structures that otherwise might have been either radically renovated or allowed to totter under the demands that were sure to be made on them.”
Computers became mainstream in the 1960s...
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=176580517
So while agreeing that AI will no doubt bring many positives to some areas of human life, there are also
real dangers in its use. So it could be said that regulation is as key as development.
It was Plato who said, “He, O men, is the wisest, who like Socrates, knows that his wisdom is in truth worth nothing”
