How can AI and automation address the risk of inequalities and be used correctly?

Source: https://www.artificial-solutions.com/blog/automation-ai-addressing-inequality

November 9, 2021

Over the past 18 months, COVID-19 has drastically accelerated the pace at which businesses incorporate automation into their digital transformation strategies; however, the trend to automate had already gathered significant pace before the onset of the pandemic. One of the most popular digital initiatives within the scope of automation is leveraging advanced Conversational AI (CAI) interfaces (https://www.artificial-solutions.com/conversational-ai) that allow enterprises to deliver personalized interactions while simultaneously gaining the operational efficiency of end-to-end automation.

We know what this means: better access to services; the ability to engage with customers in a natural and human-like way, at any time and in the channel of their choice; and the opportunity to interact with Conversational AI and speech technologies that respond to human language with all the convenience that automation provides.

However, care must be taken to ensure that the experiences automation provides are delivered ethically and without bias. The potential for automation to drive inequality affects different areas of Artificial Intelligence (AI), including Conversational AI. CAI uses natural language to facilitate communication between humans and machines. CAI has a vast number of real-world applications, from intelligent assistants to voice-controlled homes, in-car experiences, and automated drive-thru systems.

Despite recent advances in the space, the ability to communicate naturally (understanding context, sharing common ground with our interlocutors, asking for clarification, exercising common sense, etc.) is one that automated systems have not yet mastered: dialog systems are still not fully automated. Sure, new pre-trained transformer models like OpenAI's GPT-3 may be able to power very natural small-talk conversations, but can they troubleshoot issues with your Internet connection in a fully automated fashion, end to end? And even if they could: would corporations entrust their brand reputation to purely AI-driven systems?

Who could have guessed that Tay, Microsoft's unsupervised Twitter bot (https://www.bbc.com/news/technology-35890188), would develop an extremely offensive personality after only 16 hours of exposure to real Tweets?

But that was 2016. Today, raising awareness about this problem, as well as developing plans and programs around inclusion in AI, are major touchpoints within the CAI life cycle.

It is through dedicated programs and informed decisions that our users can come to feel included. This is why, for example, Apple's Siri no longer defaults to a female voice (https://techcrunch.com/2021/03/31/apple-adds-two-siri-voices/?guccounter=1). In fact, Apple's decision to give Siri users the option to choose their preferred voice at set-up time is a great example of an informed decision made against preconceived stereotypes.

The renowned novelist Nathaniel Hawthorne once said: “Words (…) how potent for good and evil they become.” The truth is that language, as natural as it feels to all of us, is inherently human, and so it carries our biases too. And, unfortunately, this is also true for conversational and speech technologies.

Whether the bias originates in the data fed into the automated processes behind our conversational applications or in the humans who create them (the designers, the developers, and the project team in general), paying attention to these biases and having a strategy around them is the key to preventing them from permeating the products we create.

As far as CAI development goes, humans are still instrumental. Designers, developers and copywriters all influence their systems’ voices. Because of this, it is crucial that creators get access to the tools and resources that will help them build their applications in an inclusive manner.

This can be done by giving developers control over the areas of human-machine conversation where inequality is most likely to surface, such as the approval of training data, or by leveraging resources for offensive language detection.
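
To make that concrete, here is a minimal Python sketch of what a training-data approval gate with an offensive-language screen could look like. It is illustrative only and not Artificial Solutions' implementation: the block list, function names, and review-queue structure are hypothetical placeholders.

```python
# Illustrative only: a tiny approval gate that screens candidate training
# utterances before they are added to a conversational model's data set.
# The block list and data structures are hypothetical placeholders.

OFFENSIVE_TERMS = {"slur1", "slur2"}  # stand-in for a curated lexicon or toxicity classifier


def flag_offensive(utterance: str) -> bool:
    """Return True if the utterance contains any term from the block list."""
    tokens = utterance.lower().split()
    return any(token in OFFENSIVE_TERMS for token in tokens)


def approve_training_data(candidates):
    """Split candidate utterances into approved data and a human review queue."""
    approved, review_queue = [], []
    for utterance in candidates:
        if flag_offensive(utterance):
            review_queue.append(utterance)  # a human reviewer decides, not the pipeline
        else:
            approved.append(utterance)
    return approved, review_queue


approved, review_queue = approve_training_data([
    "how do i reset my router",
    "you are a slur1",
])
print(len(approved), "approved,", len(review_queue), "sent to human review")
```

In practice the keyword match would be replaced or complemented by a trained toxicity classifier, but the design point stands: flagged data is routed to human reviewers rather than silently ingested.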

It is through actions like these that designers and developers can prevent systems from becoming biased, as well as create dedicated conversational strategies for responding to racist remarks, violent comments or sexist implications coming from users.
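
As a sketch of what such a conversational strategy might look like, the snippet below (building on the hypothetical `flag_offensive` check above) answers flagged user turns with increasingly firm, de-escalating responses and ends the conversation after repeated offenses. The wording of the responses and the three-strike threshold are assumptions for illustration, not a documented policy.

```python
# Illustrative dialog policy: respond firmly to flagged input instead of
# mirroring the user's tone, and end the conversation after repeated offenses.

RESPONSES = [
    "I won't respond to that kind of language, but I'm happy to help with your question.",
    "Please keep the conversation respectful so I can keep assisting you.",
    "I'm ending this conversation now. You can start a new one at any time.",
]


class OffensiveInputPolicy:
    """Tracks offenses within one conversation and picks the next response."""

    def __init__(self) -> None:
        self.offense_count = 0

    def respond(self) -> str:
        # Pick the next response in the escalation ladder, capped at the last one.
        reply = RESPONSES[min(self.offense_count, len(RESPONSES) - 1)]
        self.offense_count += 1
        return reply

    @property
    def should_end_conversation(self) -> bool:
        return self.offense_count >= len(RESPONSES)
```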

As a matter of fact, when it comes to CAI and speech technologies, purposely curating the training and learning processes increases the chances that our conversational systems will become more inclusive and diverse. Unsupervised automation will inevitably be skewed by the patterns and trends seen in the data.

For instance, a voice interface may have trouble understanding a specific accent if that accent is underrepresented in the data used to train the voice recognizer. So it is ultimately the data that needs to be representative of the diversity we expect to find among our end audience, our users. But it is also the developers' ethical obligation to research their users' characteristics and to amend the system based on their reactions to the product, something that is usually achieved through user testing (or human factors testing).
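
One simple, concrete step in that direction is auditing how accents (or any other speaker attribute) are distributed in the training data before the recognizer is trained. The sketch below assumes a hypothetical manifest of labelled audio clips; the field names and the 10% threshold are illustrative choices, not a standard.

```python
from collections import Counter

# Hypothetical manifest: each training sample is labelled with the speaker's accent.
manifest = [
    {"audio": "clip_0001.wav", "accent": "US-general"},
    {"audio": "clip_0002.wav", "accent": "Scottish"},
    {"audio": "clip_0003.wav", "accent": "US-general"},
    # ... thousands more entries in a real data set
]


def underrepresented_accents(samples, expected_accents, min_share=0.10):
    """Return accents whose share of the data falls below min_share,
    including accents that are missing from the data entirely."""
    counts = Counter(sample["accent"] for sample in samples)
    total = len(samples)
    report = {}
    for accent in expected_accents:
        share = counts.get(accent, 0) / total if total else 0.0
        if share < min_share:
            report[accent] = share
    return report


gaps = underrepresented_accents(manifest, ["US-general", "Scottish", "Indian-English"])
print(gaps)  # e.g. {'Indian-English': 0.0} -> collect more data or rebalance before training
```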

In this regard, research and human factors testing both lead the way towards inclusion. Biases are inherently invisible to those who bear them, so external, diverse and representative voices need to be present before, during and after the development process, so that we can ensure our systems carry respect, diversity and inclusion forward.

In sum, let's make sure our teams are informed, aware of the biases in their data, their designs and their implementations, and trained to prevent them, because it is ultimately this awareness that paves the road to true diversity, inclusion and equality in AI.

Here's our commitment to making CAI and the wider industry more inclusive: Artificial Solutions implements Gavriella Schuster's #BeCOME framework to expand work on inclusion and gender equality.