Saturday, August 31, 2024 9:04:03 PM
Robo-debt disgrace shows why AI cannot replace important jobs
"A.I. has a discrimination problem. In banking, the consequences can be severe."
"[...] Robodebt was an AI ethics disaster."
New AI systems such as ChatGPT are untrustworthy, and the robo-debt scandal has shown the peril of abdicating responsibility to unfeeling bots.
Paul Smith Technology editor
Feb 17, 2023 – 11.49am
It is certainly noticeable within the walls of The Australian Financial Review towers that the emergence of generative AI – mostly in the form of OpenAI’s ChatGPT – is a big crossover topic for the tech sector.
There are “next big things” all the time on the tech beat, but not since the first iPhones and iPads arrived in the sweaty mitts of our reviewers have non-tech-focused colleagues been so exercised about a new advance.
IMAGE - Microsoft and Google are going head-to-head on their AI platforms,
but they need real humans to make sure the bots don’t say horrific things. Getty
Apart from “What was it like to meet Bill Gates?” (Cool, thanks .. https://www.afr.com/technology/ai-is-coming-for-white-collar-jobs-gates-warns-20230123-p5cev7 ), it is “what do you make of this ChatGPT?” that has echoed through every walk to the kitchen, stationery cupboard and toilet since the end of the Christmas holiday.
The reason is obvious. The potential for a chatbot to do significant parts of your job is scary stuff, and even though we Financial Review scribes don’t think it can, of course, the conversations have spilt over into numerous columns .. https://www.afr.com/technology/why-white-collar-workers-fear-chatgpt-20230201-p5ch30 .. and editorials .. https://www.afr.com/policy/economy/what-should-we-think-about-chatgpt-and-its-smart-friends-20230202-p5che4 .
We’re all so confident of our continuing importance that we can’t stop reassuring ourselves about it, you see.
The thing is, nobody should be worried about being usurped by this technology for the foreseeable future. AI of the ChatGPT (or Google’s Bard .. https://www.afr.com/technology/google-reveals-chatbot-rival-as-ai-tech-race-heats-up-20230207-p5cif7 ) flavour cannot take over any jobs of influence or importance, for the simple reason that it is not fit to do so.
Even if you take quality of work out of the equation, recent events in Brisbane at the royal commission into the robo-debt scandal show that any notion of necessary human trustworthiness and accountability falls to pieces if algorithm-fuelled bots are trusted to run the show.
IMAGE - Former prime minister Scott Morrison
fronts the robo-debt royal commission last December.
It is hard to think of a more disgraceful abdication of responsible government in Australia than a federal scheme that targeted vulnerable members of society, chasing them for crippling debts they did not owe with threats of jail sentences, and did not stop even as reports of gross injustice and resulting suicides emerged.
The execution of robo-debt was based largely on the judgments of poorly implemented automation, programmed to act on a political policy. The humans you would expect to now be held to account for the irrefutably terrible results seem comfortably distant enough from events to just shrug it off.
In December, former prime minister Scott Morrison (who was social services minister at the time of the scheme’s introduction) blamed nameless bureaucrats for not telling him it was illegal .. https://www.afr.com/politics/federal/morrison-says-he-was-never-told-robo-debt-was-unlawful-20221214-p5c6bn , and then his former Coalition government ministers Alan Tudge and Christian Porter gave testimony of unbelievable flakiness to the royal commission this month.
---------------
[ Insert: Ugh, the link above is subscriber-blocked, which shouldn't be the case for an article that wasn't.
I couldn't recall exactly why the disgusting robodebt debacle was illegal, so here's an explainer:
The flawed algorithm at the heart of Robodebt
Robodebt teaches us that even simple automated decision-making systems come
with the biases of the people, systems and policies that conceive them
By Associate Professor Toby Murray, Dr Marc Cheong and Professor Jeannie Paterson, University of Melbourne
Published 10 July 2023
Australia’s Royal Commission into the Robodebt Scheme has published its findings ..
https://robodebt.royalcommission.gov.au/publications/report . And it’s a damning read.
P - Various unnamed individuals are referred for potential civil or criminal investigation .. https://www.news.com.au/national/politics/royal-commission-into-robodebt-scheme-recommends-referrals-of-individuals-for-civil-and-criminal-prosecution/news-story/7e39a046d31dd0e86695bf0b85a42a13 , but its publication is a timely reminder of the potential dangers presented by automated decision-making systems, and how the best way to mitigate their risks is by instilling a strong culture of ethics and systems for accountability in our institutions.
P - The so-called Robodebt scheme was touted to save billions of dollars by using automation and algorithms to identify welfare fraud and overpayments.
P - But in the end, it serves as a salient lesson in the dangers of replacing human oversight and judgement with automated decision-making.
P - It reminds us that the basic method was not merely flawed but illegal; it was premised on the false belief of treating welfare recipients as cheats (rather than as society’s most vulnerable); and it lacked both transparency and oversight.
[...]
As anyone who has ever worked a casual job will know, averaging a year’s worth of earnings across each fortnight is no way to accurately calculate fortnightly pay. It was this flaw that led the Federal Court to declare in 2019 .. https://www.theguardian.com/australia-news/2019/nov/28/robodebt-the-federal-court-ruling-and-what-it-means-for-targeted-welfare-recipients .. that debt notices issued under the scheme were not valid.
P - This kind of algorithm is known as an automated decision-making .. https://www.ombudsman.gov.au/__data/assets/pdf_file/0029/288236/OMB1188-Automated-Decision-Making-Report_Final-A1898885.pdf .. (ADM) system.
https://pursuit.unimelb.edu.au/articles/the-flawed-algorithm-at-the-heart-of-robodebt ]
---------------
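The averaging flaw the excerpt describes is easy to see with a toy calculation. The sketch below assumes the scheme's core mechanic as reported: a year's ATO-declared income divided evenly across 26 fortnights, then compared against what the person actually reported fortnight by fortnight (all figures here are hypothetical):

```python
# Why income averaging misfires for casual and seasonal workers.
# The scheme assumed annual income was earned evenly all year,
# then treated any fortnight reported below the average as a
# potential overpayment.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly(annual_income: float) -> float:
    """The scheme's assumption: income spread evenly across the year."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A hypothetical casual worker earns $26,000, all of it in the first
# 10 fortnights (seasonal work), then nothing while on benefits.
actual = [2600.0] * 10 + [0.0] * 16
annual = sum(actual)                    # 26000.0

average = averaged_fortnightly(annual)  # 1000.0 per fortnight

# In the 16 fortnights the worker truthfully reported $0, the
# averaged figure wrongly implies $1000 of undeclared income each
# fortnight - a phantom "debt" of $16,000.
false_discrepancies = [average - a for a in actual if a < average]
phantom_debt = sum(false_discrepancies)

print(average)       # 1000.0
print(phantom_debt)  # 16000.0
```

The worker declared every dollar correctly, yet the averaging produces a discrepancy in 16 of 26 fortnights, which is exactly the kind of invalid debt notice the 2019 Federal Court ruling struck down.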
It seems all too easy to sidestep any culpability for running a program that flouted laws, if it is run by bots.
IMAGE - Alan Tudge told the robo-debt royal commission it
wasn’t his responsibility to ensure the scheme was not illegal.
“I do distinctly recall putting a question … that everyone’s assured about the legal underpinnings,” Porter told the hearing. “I can’t recall who it was that affirmed that assurance, but someone did, and I recall that it was a departmental person.”
Great, we’ll blame someone … a departmental person no less, for leaving an AI-based system to hound innocent citizens to their deaths.
At least Porter asked the legality question – scant endorsement, but more than can be said of Tudge, who, as the human services minister overseeing robo-debt, told the royal commission it was not his responsibility to ensure the scheme was lawful.
It isn’t always a life and death matter with intelligent automation, but the extreme example shows the unacceptable trade-offs required when we cut costs by removing humans from pursuits that require emotional considerations such as caution, compassion and an ability to change course if things are not working as they should.
Since Google launched its Bard chatbot – complete with embarrassing factual errors in its own demonstration material .. https://www.afr.com/technology/the-ai-wars-have-already-cost-google-us100b-20230209-p5cj7i – news reports have highlighted how Microsoft’s Bing search, which is incorporating ChatGPT into its results, is providing incorrect summaries (in an authoritative tone) to numerous search questions.
Hollywood question
At the risk of sounding like a hack Hollywood sci-fi character, we are at a phase of development in various areas of AI, where people with responsibility need to start asking whether we should – and not whether we can.
Sure, we can put autonomous vehicles on the road, but if we can’t work out who’s to blame when someone is run over and killed, should we do it?
[ Shenzhen to put autonomous buses on roads as China accelerates self-driving vehicle tests
Shenzhen plans to have 20 driverless buses on the road by the end of 2024, which comes
amid safety and job loss concerns over autonomous driving tech
https://www.scmp.com/tech/tech-trends/article/3270804/shenzhen-put-autonomous-buses-roads-china-accelerates-self-driving-vehicle-tests
.. and ..
Cities, companies speed up testing of driverless taxis
By Zhang Yiyi Published: Jul 09, 2024 08:09 PM
https://www.globaltimes.cn/page/202407/1315701.shtml ]
Yes, we can get AI to come up with TV plot ideas, or generate passable reports, but if we can’t work out if the “facts” are true or made up, or the ideas are partly plagiarised, should we do it?
Even in areas of clear potential benefit, such as the prospect of people in under-served areas accessing medical information and “expert” advice from an AI bot (as posited by Bill Gates) – who is culpable if someone is given fatally bad advice by a hallucinating AI?
Previous eras of technological advance have shown that if the companies developing a new technology are left to set the standards for its use, those standards will be horrendously low.
The government therefore needs to get into the game now and set firm rules for how AI can be used, and what governance must accompany it.
This is presuming, of course, the current government has a higher regard for the concept of human accountability than its predecessor.
Related
These are OpenAI’s strongest competitors right now
https://www.afr.com/technology/these-are-openai-s-strongest-competitors-right-now-20230209-p5cjds
Related
Is Google a ‘sell’ as Microsoft, ChatGPT mount an AI challenge?
https://www.afr.com/markets/equity-markets/is-google-a-sell-as-microsoft-chatgpt-mount-an-ai-challenge-20230215-p5ckrg
Paul Smith edits the technology coverage and has been a leading writer on the sector for 20 years. He covers
big tech, business use of tech, the fast-growing Australian tech industry and start-ups, telecommunications
and national innovation policy. Connect with Paul on Twitter. Email Paul at psmith@afr.com
https://www.afr.com/technology/robo-debt-disgrace-shows-why-ai-cannot-replace-important-jobs-20230215-p5ckv4
