Military AI’s Next Frontier: Your Work Computer

Spycraft developed by defense contractors is now being sold to employers to identify labor organizing. Regulators must step up to protect workers’ privacy.

Gabriel Grill and Christian Sandvig
Wired
June 22, 2023

It’s probably hard to imagine that you are the target of spycraft, but spying on employees is the next frontier of military AI. Surveillance techniques familiar to authoritarian dictatorships have now been repurposed to target American workers.

Over the past decade, a few dozen companies have emerged to sell your employer subscriptions for services like “open source intelligence,” “reputation management,” and “insider threat assessment”—tools often originally developed by defense contractors for intelligence uses. As deep learning and new data sources have become available over the past few years, these tools have become dramatically more sophisticated. With them, your boss may be able to use advanced data analytics to identify labor organizing, internal leakers, and the company’s critics.

It’s no secret that unionization is already monitored by big companies like Amazon. But the expansion and normalization of tools to track workers has attracted little comment, despite their ominous origins. If these tools are as powerful as their sellers claim—or even heading in that direction—we need a public conversation about the wisdom of transferring these informational munitions into private hands. Military-grade AI was intended to target our national enemies, nominally under the control of elected democratic governments, with safeguards in place to prevent its use against citizens. We should all be concerned that the same systems are now widely deployable by anyone able to pay.

FiveCast, for example, began as an anti-terrorism startup selling to the military, but it has turned its tools over to corporations and law enforcement, which can use them to collect and analyze all kinds of publicly available data, including your social media posts. Rather than just counting keywords, FiveCast brags that its “commercial security” and other offerings can identify networks of people, read text inside images, and even detect objects, images, logos, emotions, and concepts inside multimedia content. Its “supply chain risk management” tool aims to forecast future disruptions, like strikes, for corporations.

Network analysis tools developed to identify terrorist cells can thus be used to identify key labor organizers so employers can illegally fire them before a union is formed. The standard use of these tools during recruitment may prompt employers to avoid hiring such organizers in the first place. And quantitative risk assessment strategies conceived to warn the nation against impending attacks can now inform investment decisions, like whether to divest from areas and suppliers that are estimated to have a high capacity for labor organizing.
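To make the mechanism concrete, here is a minimal, hypothetical sketch of how graph centrality can surface “key” people in a communication network. The names, edges, and metric are invented for illustration; this is not any vendor’s actual code or method.

```python
# Hypothetical sketch: centrality analysis on a toy communication graph.
# All names and edges are invented for illustration.
import networkx as nx

# Nodes are workers; edges are observed interactions, e.g., replies
# or mentions harvested from public posts.
G = nx.Graph()
G.add_edges_from([
    ("ana", "ben"), ("ana", "cho"), ("ana", "dee"),
    ("ben", "cho"), ("dee", "eli"), ("eli", "fay"),
])

# Betweenness centrality measures how often a node lies on shortest
# paths between other nodes. High scores mark brokers who connect
# otherwise separate groups -- the structural signature of an organizer.
scores = nx.betweenness_centrality(G)
for worker, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{worker}: {score:.2f}")
# "ana" and "dee" score highest (0.60 each): they bridge the two clusters.
```

Note that swapping in degree or eigenvector centrality would change who is flagged, which is part of the problem: the “risk” depends as much on the analyst’s modeling choices as on any real behavior.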

It isn’t clear that these tools can live up to their hype. For example, network analysis methods assign risk by association, which means that you could be flagged simply for following a particular page or account. These systems can also be tricked by fake content, which is easily produced at scale with new generative AI. And some companies offer sophisticated machine learning techniques, like deep learning, to identify content that appears angry, on the assumption that anger signals complaints that could result in unionization. Yet emotion detection has been shown to be biased and based on faulty assumptions.
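The guilt-by-association failure mode is easy to demonstrate. Below is a toy, hypothetical scoring rule of the kind described above; every account name, follow list, and threshold is invented. It shows how following a single account can flag a worker.

```python
# Hypothetical sketch of risk-by-association scoring.
# Accounts, follow lists, and the threshold are all invented.

# Seed risk an analyst has assigned to accounts labeled "risky".
BASE_RISK = {"union_news_page": 0.9}

# Publicly visible follow lists.
FOLLOWS = {
    "worker_a": ["union_news_page", "local_sports", "cooking_tips"],
    "worker_b": ["local_sports", "cooking_tips"],
}

def association_risk(user: str) -> float:
    """One-hop propagation: a user's risk is the highest risk of any
    account they follow. One follow is enough to cross a threshold."""
    return max((BASE_RISK.get(acct, 0.0) for acct in FOLLOWS[user]),
               default=0.0)

for user in FOLLOWS:
    status = "FLAGGED" if association_risk(user) > 0.5 else "ok"
    print(user, status)
# worker_a is flagged merely for following a news page, regardless of
# intent -- the false-positive failure mode described in the text.
```

And because the seed labels and follow lists are trivial to manipulate, the same scheme can be gamed with fabricated accounts and generated content, as noted above.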

But these systems’ capabilities are growing rapidly. Companies are advertising that they will soon include next-generation AI technologies in their surveillance tools. New features promise to make exploring varied data sources easier through prompting, but the ultimate goal appears to be a routinized, semi-automatic, union-busting surveillance system.

What’s more, these subscription services work even if they don’t work. It may not matter if an employee tarred as a troublemaker is truly disgruntled; executives and corporate security could still act on the accusation and unfairly retaliate against them. Vague aggregate judgments of a workforce’s “emotions” or a company’s public image are presently impossible to verify as accurate. And the mere presence of these systems likely has a chilling effect on legally protected behaviors, including labor organizing.

The corporations purveying these services are thriving in a context of obscurity and regulatory neglect. Defenses of workplace surveillance are made of the thinnest tissue. Industry apologists proclaim that their software, sold to help employers “understand the labor union environment,” isn’t anti-union. Instead, they brand themselves as selling “corporate awareness monitoring” and prominently mention that “every American is protected by federal, state, and local laws to work in safe conditions.” It’s apparently not the manufacturer’s fault if a buyer uses this software to infringe on a legally protected right to organize or protest.

Surveillance firms also deflect criticism by claiming that they only use publicly available information, like social media data and news articles. Even when this is true, their argument ignores the consequences of the everyday use of military-grade surveillance against the citizens of a free society. The same tools that track the movements of Russian tanks in Ukraine should not be given to your supervisor to track you. Intelligence software providers seem to hope that letting bosses look deeply into the lives of their workers will become so standard that employers get to do so continuously, as a proactive measure. As capabilities have improved and the costs of gathering and analyzing data have plummeted, we now face a future in which any middle manager can mobilize the resources of their own CIA.

This kind of surveillance industry is fundamentally incompatible with a democracy. Companies that deploy such tools must be forced to disclose this use publicly so that existing laws can be enforced. And new regulations are urgently needed. Last year, the National Labor Relations Board announced that it would seek to outlaw “intrusive” and “abusive” labor surveillance, an important step. In addition, workers and unions should be testifying in legislative hearings about the future regulation of AI and workplace surveillance. We need specific rules that state which uses of AI, data sources, and methods are permissible.

These technologies are already being sold and deployed across the globe and used for cross-border surveillance. In the best case, an active regulator would become a global leader in responsible AI and work to establish international standards for workplace technologies. Without this work, multinational companies with global supply chains will find it easy to flout or finesse country-specific protections.

Ultimately, our society may conclude that regulation is not enough: It’s the existence of this market that should be illegal. We may declare that there should be no place where anyone can buy an AI’s profile of your associations, intentions, emotions, and thoughts. Your life outside of work should be protected from your employer by default.
---------------------------

Gabriel Grill is a researcher at the Center for Ethics, Society, and Computing at the University of Michigan.
Christian Sandvig is Director of the Center for Ethics, Society, and Computing and the McLuhan Collegiate Professor of Information at the University of Michigan.

