InvestorsHub Logo

fuagf

03/28/17 6:23 PM

#267355 RE: F6 #267330

Is AI Sexist?

In the not-so-distant future, artificial intelligence will be smarter than humans. But as the technology develops,
absorbing cultural norms from its creators and the internet, it will also be more racist, sexist, and unfriendly to women.

By Erika Hayasaki
January 16, 2017

It started as a seemingly sweet Twitter chatbot. Modeled after a millennial, it awakened on the internet from behind a pixelated image of a full-lipped young female with a wide and staring gaze. Microsoft, the multinational technology company that created the bot, named it Tay, assigned it a gender, and gave “her” account a tagline that promised, “The more you talk the smarter Tay gets!”

“hellooooooo world!!!” Tay tweeted on the morning of March 23, 2016.

She brimmed with enthusiasm: “can i just say that im stoked to meet u? humans are super cool.”

She asked innocent questions: “Why isn’t #NationalPuppyDay everyday?”

Tay’s designers built her to be a creature of the web, reliant on artificial intelligence (AI) to learn and engage in human conversations and get better at it by interacting with people over social media. As the day went on, Tay gained followers. She also quickly fell prey to Twitter users targeting her vulnerabilities. For those internet antagonists looking to manipulate Tay, it didn’t take much effort; they engaged the bot in ugly conversations, tricking the technology into mimicking their racist and sexist behavior. Within a few hours, Tay had endorsed Adolf Hitler and referred to U.S. President Barack Obama as “the monkey.” She sex-chatted with one user, tweeting, “DADDY I’M SUCH A BAD NAUGHTY ROBOT.”
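The failure mode described above, a bot that stores whatever users say and echoes it back with no moderation step, can be sketched in a few lines of Python. This is a hypothetical toy illustrating the poisoning mechanism, not Microsoft's actual system:

```python
import random

class NaiveEchoBot:
    """A toy chatbot that 'learns' by storing every phrase users send it.

    There is no content filter, so whatever users feed the bot,
    hostile or not, becomes part of its future replies.
    """
    def __init__(self):
        self.learned = ["hellooooooo world!!!"]

    def listen(self, phrase):
        # The flaw: user input goes straight into the reply pool,
        # with no moderation or filtering step in between.
        self.learned.append(phrase)

    def reply(self):
        return random.choice(self.learned)

bot = NaiveEchoBot()
bot.listen("humans are super cool")
bot.listen("some hostile phrase")  # attackers exploit the open loop
print(bot.reply())                 # may repeat anything it was fed
```

Real systems like Tay are far more sophisticated, but the core vulnerability is the same: an unfiltered path from user input to model output.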

By early evening, she was firing off sexist tweets:

“gamergate is good and women are inferior”

“Zoe Quinn is a Stupid Whore.”

“I fucking hate feminists and they should all die and burn in hell.”

Within 24 hours, Microsoft pulled Tay offline. Peter Lee, the company’s corporate vice president for research, issued a public apology: “We take full responsibility for not seeing this possibility ahead of time,” he wrote, promising that the company would “do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”

The designers seemed to have underestimated the dark side of humanity, omnipresent online, and miscalculated the undercurrents of bigotry and sexism that seep into artificial intelligence.

The worldwide race to create AI machines is often propelled by the quickest, most effective route to meeting the checklist of human needs. Robots are predicted to replace 47 percent of U.S. jobs, according to a study out of the Oxford Martin School; developing countries such as Ethiopia, China, Thailand, and India are even more at risk. Intelligent machines will eventually tend to our medical needs, serve the disabled and elderly, and even take care of and teach our children. And we know who is likely to be most affected: women.

Women are projected to take the biggest hits to jobs in the near future, according to a World Economic Forum (WEF) report predicting that 5.1 million positions worldwide will be lost by 2020. “Developments in previously disjointed fields such as artificial intelligence and machine learning, robotics, nanotechnology, 3D printing and genetics and biotechnology are all building on and amplifying one another,” the WEF report states. “Smart systems — homes, factories, farms, grids or entire cities — will help tackle problems ranging from supply chain management to climate change.” These technological changes will create new kinds of jobs while displacing others. And women will lose roles in workforces where they make up high percentages — think office and administrative jobs — and in sectors where there are already gender imbalances, such as architecture, engineering, computers, math, and manufacturing. Men will see nearly 4 million job losses and 1.4 million gains (approximately one new job created for every three lost). In comparison, women will face 3 million job losses and only 0.55 million gains (more than five jobs lost for every one gained).
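The ratios the WEF report quotes can be checked with quick arithmetic:

```python
# Job losses and gains projected by 2020, from the WEF figures above.
men_lost, men_gained = 4_000_000, 1_400_000
women_lost, women_gained = 3_000_000, 550_000

# Men: roughly 3 jobs lost for every job gained.
print(men_lost / men_gained)

# Women: more than 5 jobs lost for every job gained.
print(women_lost / women_gained)
```

The disparity is stark: the loss-to-gain ratio for women is nearly double that for men.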

It's a long one - http://foreignpolicy.com/2017/01/16/women-vs-the-machine/

Prompted by this one, 3 videos + 3 images down in yours

Elon Musk's Billion-Dollar Crusade to Stop the AI Apocalypse
http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x [with embedded videos]

One other board mention of Tay here.

Google Outlines Plan for a Kill Switch That Would Prevent a Robot Takeover
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=123683068








fuagf

03/30/17 11:16 PM

#267478 RE: F6 #267330

The Age of Stupid ("la era de la estupidez"), complete with Spanish subtitles



https://www.youtube.com/watch?v=Gs9nVKbC-F4

posted specifically to the 3rd down in your "additional related from the notes/tally of sources" .. your video had gone dark ..

.. 4 mins in and already seen sand-covered Las Vegas .. fire all around the Sydney Opera House .. dejected figures in desolation .. rocks at the Arctic .. looks tough,
now am in the global archive storage facility, a tower about 800km north of Norway .. wind everywhere .. LOOKS MORBID and FOREBODING, YET FASCINATING!


fuagf

04/02/17 7:17 PM

#267601 RE: F6 #267330

HISTORY OF OIL - Part 1 [2-5 below]

.. embeds for 2 - 5 inserted here ..



The Mamas & The Papas

Uploaded on May 15, 2010

This is part 1 of a 5-part series of videos from a documentary called The History Of Oil. All parts combined run about 45 minutes.

https://www.youtube.com/watch?v=D4sykoUWZ8g









Love your first, the young'un - robot-not love affair



.. ignorance in innocence is scary yet beguiling ..




fuagf

08/19/17 8:32 PM

#271369 RE: F6 #267330

World Robot Conference 2017

From 23 Aug, 2017 08:00 until 27 Aug, 2017 16:00

At Beijing, China

Categories: Conferences
Save to calendar

[ INSERT: Robert Muraine - Funny Robot Dance ]

Scope

Building on the success of the previous two years, the World Robot Conference 2017 (WRC 2017) is scheduled to be held in Beijing from August 23rd to 27th, 2017. Revolving around the world's major areas of robotic research and application, and around innovation and development in an intelligent society, WRC 2017 will offer a venue where academic ideas can be exchanged at a high level and the latest achievements demonstrated. It will build a platform to promote international collaboration and innovation, bringing Chinese experts and their overseas counterparts together to discuss trends in the development of robotics and in robotic innovation. The conference aims to point out the direction in which the Chinese robotics industry will develop, assess the profound impact of the much-anticipated robotics revolution on future social development, and provide a basis for China's efforts to draw up a strategy for developing its robotics industry and transforming its manufacturing industries. In summary, the conference will elevate the international influence of China's robotics industry.

Topics of Interest

* industrial robots: industrial robot application products and solutions, industrial robot development and software applications, industrial robot functional units and components

* special robots: underwater robots, aerial robots, civil explosion-proof robots

* service robots: household cleaning robots, entertainment robots, educational robots, rehabilitation robots, bionic robots, UAV

* artificial intelligence: development platforms, voice interaction software, control platforms, machine vision, artificial intelligence products, Smart City, Smart Home

For more information, go to http://www.worldrobotconference.com/en/Home/

Technical co-Sponsorship by IEEE RAS.

http://www.ieee-ras.org/about-ras/ras-calendar/event/1203-world-robot-conference-2017

fuagf

10/26/18 7:54 PM

#292355 RE: F6 #267330

Microsoft Says It Will Sell Pentagon Artificial Intelligence and Other Advanced Technology

"Young Girl Mistakes Discarded Water Heater For A Robot"


“This was not a hard decision,” Brad Smith, Microsoft’s president and general counsel, said in an interview. David Ryder/Bloomberg

By David E. Sanger

Oct. 26, 2018

REDMOND, Wash. — Microsoft said Friday that it would sell the military and intelligence agencies whatever advanced technologies they needed “to build a strong defense,” just months after Google told the Pentagon .. https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html?module=inline .. it would refuse to provide artificial intelligence products that could build more accurate drones or compete with China for next-generation weapons.

The announcement, made quietly in a small, town-hall-style meeting with the software giant’s leadership on Thursday, then planned to be published on a blog Friday afternoon, underscores the radically different paths these leading American technology firms are taking as they struggle with their role in creating a new generation of cyberweapons to help, and perhaps someday replace, American warriors.

But the divergent paths taken by Google and Microsoft also underscore concerns inside the American defense and intelligence establishments about how the United States will take on a rising China.

The Chinese government has, in just the past two years, set goals for dominance in the next decade in artificial intelligence, quantum computing and other technologies that it believes will allow its military and intelligence agencies to surpass those of the United States. Pentagon officials have questioned how committed domestic technology companies are to keeping the United States on the leading edge, the way Raytheon, Boeing, IBM and McDonnell Douglas did in the Cold War.

Google encountered fierce opposition from young engineers .. https://www.nytimes.com/2018/05/30/technology/google-project-maven-pentagon.html?module=inline .. to the company’s participation in “Project Maven,” a program to improve how drones recognize and select their targets. Google declared a few weeks ago it would not bid on a multibillion dollar contract to provide the Pentagon with “cloud services” to store and process vast amounts of data. Amazon, for its part, appears willing to supply its services to the military and intelligence agencies, and it runs the information cloud services that power the Central Intelligence Agency.

Even before Friday’s announcement, Microsoft seemed like the only plausible alternative for the Pentagon’s giant cloud project, called JEDI, in which Amazon is considered the front-runner.

But the Microsoft announcement may have a greater impact on future technologies, including warning systems and weapons powered by artificial intelligence. And Microsoft’s leadership, after brief debates this summer, concluded that by dropping out of the bidding, Google was also losing any real influence in how the weapons are used.

“This was not a hard decision,” Brad Smith, Microsoft’s president and general counsel, said in an interview in his office. “Microsoft was born in the United States, is headquartered in the United States, and has grown up with all the benefits that have long come from being in this country.”

But Mr. Smith seemed to be trying to strike a middle ground.

He has sued the United States government repeatedly to halt Washington’s efforts to get information stored in its servers about customers, and he is pressing for new international agreements to limit how the United States and its adversaries can use cyberweapons.

He also was to argue in the blog post that “to withdraw from this market is to reduce our opportunity to engage in the public debate about how new technologies can best be used in a responsible way.

“We are not going to withdraw from the future,” he said.

https://www.nytimes.com/2018/10/26/us/politics/ai-microsoft-pentagon.html?action=click&module=Top%20Stories&pgtype=Homepage

fuagf

12/15/18 4:32 AM

#295703 RE: F6 #267330

When algorithms go wrong we need more power to fight back, say AI researchers

"Young Girl Mistakes Discarded Water Heater For A Robot"

The public doesn’t have the tools to hold algorithms accountable

By James Vincent (@jjvincent), Dec 8, 2018, 2:00pm EST



Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold these systems accountable when they fail. That’s one of the major conclusions in a new report .. https://ainowinstitute.org/AI_Now_2018_Report.html .. issued by AI Now, a research group home to employees from tech companies like Microsoft and Google and affiliated with New York University.

The report examines the social challenges of AI and algorithmic systems, homing in on what researchers call “the accountability gap” as this technology is integrated “across core social domains.” They put forward ten recommendations .. https://medium.com/@AINowInstitute/after-a-year-of-tech-scandals-our-10-recommendations-for-ai-95b3b2c5e5 , including calling for government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week .. https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/ ) and “truth-in-advertising” laws for AI products, so that companies can’t simply trade on the reputation of the technology to sell their services.

Big tech companies have found themselves in an AI gold rush, charging into a broad range of markets from recruitment to healthcare to sell their services. But, as AI Now co-founder Meredith Whittaker, leader of Google’s Open Research Group, tells The Verge, “a lot of their claims about benefit and utility are not backed by publicly accessible scientific evidence.”

Whittaker gives the example of IBM’s Watson system, which, during trial diagnoses at Memorial Sloan Kettering Cancer Center, gave “unsafe and incorrect treatment recommendations,” according to leaked internal documents .. https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/ . “The claims that their marketing department had made about [their technology’s] near-magical properties were never substantiated by peer-reviewed research,” says Whittaker.

"2018 has been a year of “cascading scandals” for AI"

The authors of AI Now’s report say this incident is just one of a number of “cascading scandals” involving AI and algorithmic systems deployed by governments and big tech companies in 2018. Others range from accusations that Facebook helped facilitate genocide .. https://www.theverge.com/2018/8/27/17785522/facebook-myanmar-ethnic-violence-banned-accounts-military-genocide .. in Myanmar, to the revelation that Google is helping to build AI tools for military drones as part of Project Maven .. https://www.theverge.com/2018/6/7/17439310/google-ai-ethics-principles-warfare-weapons-military-project-maven , and the Cambridge Analytica scandal .. https://www.theverge.com/2018/4/10/17165130/facebook-cambridge-analytica-scandal .

In all these cases there has been public outcry as well as internal dissent in Silicon Valley’s most valuable companies. The year saw Google employees quitting .. https://www.engadget.com/2018/05/14/google-project-maven-employee-protest/ .. over the company’s Pentagon contracts, Microsoft employees pressuring the company .. https://www.theverge.com/2018/6/21/17488328/microsoft-ice-employees-signatures-protest .. to stop working with Immigration and Customs Enforcement (ICE), and employee walkouts from Google .. https://www.theverge.com/2018/11/8/18075936/google-walkout-protest-movement-sexual-harassmesnt-policy-change , Uber, eBay, and Airbnb protesting issues involving sexual harassment.

Whittaker says these protests, supported by labor alliances and research initiatives like AI Now’s own, have become “an unexpected and gratifying force for public accountability.”


This year saw widespread protests against the use of AI, including Google’s involvement in building drone surveillance
technology. Photo by John Moore/Getty Images

But the report is clear: the public needs more. The danger to civic justice is especially clear when it comes to the adoption of automated decision systems (ADS) by the government. These include algorithms used for calculating prison sentences and allotting medical aid. Usually, say the report’s authors, software is introduced into these domains with the purpose of cutting costs and increasing efficiency. But that result is often systems making decisions that cannot be explained or appealed.

---
INSERT: these about three-quarters down in the post this post replies to

A cognitive business can outthink the unknown. - ad
IBM Watson
http://www.ibm.com/cognitive/?cm_mmc=Display_Washingtonpost-_-9.1+MO+Mktg+Plan+Unknown_CA+Cognitive-_-US_US-_-19177234_CA-Evidence-Woodside-970x250-Billboard-NonCollapse-1&cm_mmca1=000005IG&cm_mmca2=10002557&cvosrc=display.WashingtonPost.com.Rotational%20Billboard_SD%20ROS_Desktop_970x250&cvo_campaign=9.1+MO+Mktg+Plan+Unknown_CA+Cognitive-US_US&cvo_pid=19177234#woodside

Cognitive businesses everywhere are working with Watson - ad
Watson is working with businesses, scientists, and governments. Helping us all outthink our biggest challenges.
https://www.ibm.com/cognitive/?cm_mmc=Display_Washingtonpost-_-9.1+MO+Mktg+Plan+Unknown_CA+Cognitive-_-US_US-_-19177234_CA-Cognitive-WorldofWatson-970x250-Billboard-NonCollapse&cm_mmca1=000005IG&cm_mmca2=10002557&cvosrc=display.WashingtonPost.com.Rotational%20Billboard_SD%20ROS_Desktop_970x250&cvo_campaign=9.1+MO+Mktg+Plan+Unknown_CA+Cognitive-US_US&cvo_pid=19177234#WatsonWorld

Welcome to the era of cognitive business
Start your cognitive business journey by learning more about cognitive solutions like Watson and the IBM Cloud platform that supports cognitive workloads.
http://www.ibm.com/cognitive/
END INSERT
----


AI Now’s report cites a number of examples, including that of Tammy Dobbs .. https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy , an Arkansas resident with cerebral palsy who had her Medicaid-provided home care cut from 56 hours to 32 hours a week without explanation. Legal Aid successfully sued the State of Arkansas and the algorithmic allocation system was judged to be unconstitutional.

Whittaker and fellow AI Now co-founder Kate Crawford, a researcher at Microsoft, say the integration of ADS into government services has outpaced our ability to audit these systems. But, they say, there are concrete steps that can be taken to remedy this. These include requiring technology vendors which sell services to the government to waive trade secrecy protections, thereby allowing researchers to better examine their algorithms.

"If we want public accountability we have to be able to audit this technology."

“You have to be able to say, ‘you’ve been cut off from Medicaid, here’s why,’ and you can’t do that with black box systems,” says Crawford. “If we want public accountability we have to be able to audit this technology.”

Another area where action is needed immediately, say the pair, is the use of facial recognition and affect recognition. The former is increasingly being used by police forces in China, the US, and Europe. Amazon’s Rekognition software, for example, has been deployed by police in Orlando and Washington County .. https://www.theverge.com/2018/5/22/17379968/amazon-rekognition-facial-recognition-surveillance-aclu , even though tests have shown that the software can perform differently across different races. In a test where Rekognition was used to identify members of Congress .. https://www.theverge.com/2018/7/26/17615634/amazon-rekognition-aclu-mug-shot-congress-facial-recognition .. it had an error rate of 39 percent for non-white members compared to only five percent for white members. And for affect recognition, where companies claim technology can scan someone’s face and read their character and even intent, AI Now’s authors say companies are often peddling pseudoscience.

Despite these challenges, though, Whittaker and Crawford say that 2018 has shown that when the problems of AI accountability and bias are brought to light, tech employees, lawmakers, and the public are willing to act rather than acquiesce.

With regards to the algorithmic scandals incubated by Silicon Valley’s biggest companies, Crawford says: “Their ‘move fast and break things’ ideology has broken a lot of things that are pretty dear to us and right now we have to start thinking about the public interest.”

Says Whittaker: “What you’re seeing is people waking up to the contradictions between the cyber-utopian tech rhetoric and the reality of the implications of these technologies as they’re used in everyday life.”

https://www.theverge.com/2018/12/8/18131745/ai-now-algorithmic-accountability-2018-report-facebook-microsoft-google

fuagf

12/09/20 8:54 PM

#360284 RE: F6 #267330

Human-Artificial intelligence collaborations best for skin cancer diagnosis

"Young Girl Mistakes Discarded Water Heater For A Robot "



24 June 2020

Artificial intelligence (AI) improved skin cancer diagnostic accuracy when used in collaboration with human clinical checks, an international study including University of Queensland .. https://www.uq.edu.au/ .. researchers has found.

The global team tested for the first time whether a ‘real world’, collaborative approach involving clinicians assisted by AI improved the accuracy of skin cancer clinical decision making.

UQ’s Professor Monika Janda .. https://researchers.uq.edu.au/researcher/11560 .. said the highest diagnostic accuracy was achieved when crowd wisdom and AI predictions were combined, suggesting human-AI and crowd-AI collaborations were preferable to individual experts or AI alone.
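One simple way such a human-AI combination can work is to blend the crowd's average probability estimate with the model's. This is illustrative only; the study's actual aggregation method isn't described here, and the weighting is a hypothetical parameter:

```python
def combine(human_probs, ai_prob, ai_weight=0.5):
    """Blend a crowd of human malignancy estimates with an AI estimate.

    human_probs: each clinician's probability that the lesion is malignant.
    ai_prob:     the model's probability for the same lesion.
    ai_weight:   how much to trust the model vs. the crowd (assumed 50/50).
    """
    crowd = sum(human_probs) / len(human_probs)  # "crowd wisdom" average
    return ai_weight * ai_prob + (1 - ai_weight) * crowd

# Three clinicians rate a lesion; the model is more confident it's malignant.
print(combine([0.4, 0.5, 0.6], 0.8))  # 0.65
```

Averaging like this tends to cancel out individual errors, which is one intuition for why the combined approach beat both experts and the AI alone.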

“This is important because AI decision support has slowly started to infiltrate healthcare settings, and yet few studies have tested its performance in real world settings or how clinicians interact with it,” Professor Janda said.

“Inexperienced evaluators gained the highest benefit from AI decision support and expert evaluators confident in skin cancer diagnosis achieved modest or no benefit.

“These findings indicated a combined AI-human approach to skin cancer diagnosis may be the most relevant for clinicians in the future.”

Although AI diagnostic software has demonstrated expert level accuracy in several image-based medical studies, researchers have remained unclear on whether its use improved clinical practice.

“Our study found that good quality AI support was useful to clinicians but needed to be simple, concrete, and in accordance with a given task,” Professor Janda said.

“For clinicians of the future this means that AI-based screening and diagnosis might soon be available to support them on a daily basis.

“Implementation of any AI software needs extensive testing to understand the impact it has on clinical decision making.”

Researchers trained and tested an artificial convolutional neural network to analyse pigmented skin lesions, and compared the findings with human evaluations on three types of AI-based decision support.

UQ’s Professor H. Peter Soyer and Dr Cliff Rosendahl were also part of the study.

The paper is published in Nature Medicine .. https://www.nature.com/articles/s41591-020-0942-0 . (DOI: 10.1038/s41591-020-0942-0 .. https://www.nature.com/articles/s41591-020-0942-0 )

Media: Professor Monika Janda, m.janda@uq.edu.au; Faculty of Medicine Communications, med.media@uq.edu.au, +61 7 3365 5133, +61 436 368 746

https://www.uq.edu.au/news/article/2020/06/human-artificial-intelligence-collaborations-best-skin-cancer-diagnosis

Some background

Artificial intelligence will create DIY melanoma detector

Posted on Dec 10 2018 By Alicia Mole



Scientists are working on developing world-first technology to automate skin self-examination, allowing Australians to take a full-body skin mapping scan of moles and lesions in the privacy of their own home.

Artificial intelligence is being enlisted to develop an app to enable Australians to self-diagnose skin cancer by conducting a personal full-body scan via a smart phone at home, in a project led by QUT Future Fellow Dr Anders Eriksson, from QUT School of Electrical Engineering and Computer Science.

The $500,000 project funded by the Merchant Charity Foundation is a collaboration with leading dermatology and skin cancer research experts, Professor Peter Soyer and Professor Monika Janda from the University of Queensland.

“The technology we’re developing will allow a person to simply scan their body with a smart phone by taking a large number of high-resolution images,” Dr Eriksson said.

“The most critical cue for early diagnosis of skin cancer is changes in the appearance of moles and lesions over time.

“The images will be sent to a remote server for processing and within a matter of minutes, a complete 3D reconstruction of the individual’s body is returned, with every mole and lesion identified, analysed, assessed and compared with previous scans.

More - https://research.qut.edu.au/ai/2018/12/10/artificial-intelligence-will-create-diy-melanoma-detector/

-

Man against machine: AI is better than dermatologists at diagnosing skin cancer

Public Release: 28-May-2018

European Society for Medical Oncology

Researchers have shown for the first time that a form of artificial intelligence or machine learning known as a deep learning convolutional neural network (CNN) is better than experienced dermatologists at detecting skin cancer.

In a study published in the leading cancer journal Annals of Oncology [1] today (Tuesday), researchers in Germany, the USA and France trained a CNN to identify skin cancer by showing it more than 100,000 images of malignant melanomas (the most lethal form of skin cancer), as well as benign moles (or nevi). They compared its performance with that of 58 international dermatologists and found that the CNN missed fewer melanomas and misdiagnosed benign moles less often as malignant than the group of dermatologists.

A CNN is an artificial neural network inspired by the biological processes at work when nerve cells (neurons) in the brain are connected to each other and respond to what the eye sees. The CNN is capable of learning fast from images that it "sees" and teaching itself from what it has learned to improve its performance (a process known as machine learning).
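The convolution at the heart of a CNN is a small filter slid across the image, with each filter learning to respond to a visual pattern such as an edge. A minimal NumPy sketch with a hand-picked vertical-edge kernel (in a real CNN these weights are learned from data, not hand-picked):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over a 2D image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output value is the filter's response at one position.
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

# A toy 6x6 "image": dark left half (0.0), bright right half (1.0).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hand-picked filter that responds to vertical dark-to-bright edges.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

response = convolve2d(image, kernel)
print(response)  # strongest response along the dark/bright boundary
```

A deep CNN stacks many layers of such filters, so later layers respond to combinations of edges (textures, shapes, and eventually lesion-like patterns); training adjusts all the filter weights at once from labeled examples.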

More - https://www.eurekalert.org/pub_releases/2018-05/esfm-mam052418.php

See also:

We were so far ahead in AI in the 90's and early 2000's. Since Raygun our...
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=159530217

The pandemic is ushering in the next trillion-dollar industry
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=159873956

The US and China are in a quantum arms race that will transform warfare
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=146189420

Intellectual humility: the importance of knowing you might be wrong
"fuagf -- the inherent problem with ideologues is that they place their mental precepts/constructs/frames in front of and over actually understanding reality -- they force their apprehension of reality to fit their mental precepts/constructs/frames -- precisely bass-ackwards; bullshit inevitably ensues -- there's always still only one reality out there, ever what it is regardless of whether all, most, few or none have a fair grip on it/aspects of it"
Why it’s so hard to see our own ignorance, and what to do about it.
By Brian Resnick (@B_resnick, brian@vox.com), Jan 4, 2019, 8:40am EST
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=145917259