AI has bad ramifications for workers



K - Cap K - Michigan 7 2022 CPWW



Abdelrahman ‘22 (Maha, before joining the Centre of Development Studies in 2007, Dr Abdelrahman worked as an Associate Professor of Sociology at the American University in Cairo, “The Indefatigable Worker: From Factory Floor to Zoom Avatar”, page 79-80, ML)
AI is but a recent technology in an overarching historical project in which a regime of labor surveillance and control over workers’ bodies has been evolving to respond to new developments in capitalism and to emerging new technologies, regimes of expertise and measurement capacities. These interventions, which are intended to reduce fatigue and to adapt the living machine of the worker’s mind and body to the rules of a dead machine without fatigue, have always been presented as technical, scientific and totally free of politics and ideology. One of the most fascinating features of this long project of surveillance and control has been its assumption of the worker’s body as one which is free of gender, sexual, racial or other power relations. More significantly, ostensibly putting the well-being of the worker at the heart of this regime has made it difficult for workers to reject these efforts which claim to help them cope with fatigue and stress. This paradigm of caring has also partly worked to obscure the conditions which create workers’ fatigue in the first place, making these conditions ever more difficult to challenge.

The capitalist nature of AI development inevitably trades off with ethics.


Schwab ’20 [Katharine; deputy editor of Fast Company's technology section; 10-5-2020; The biggest barrier to humane, ethical AI: Capitalism itself; Fast Company; https://www.fastcompany.com/90558020/ai-ethics-money-facial-recognition-fei-fei-li; 7-6-2022; SK]
Over the last several years, a growing chorus of academics, activists, and technologists have decried the ways in which artificial intelligence technology could engender bias, exacerbate inequity, and violate civil rights.
But while these voices are getting louder, they still butt up against systems of power that value profit and the status quo over ensuring that AI is built in a way that isn’t harmful to marginalized people and society writ large.
In a panel discussion for Fast Company’s 2020 Innovation Festival, experts in ethical AI explained what they’re up against in trying to change the way that large companies and institutions think about building and deploying AI.
For Timnit Gebru, the technical colead of the Ethical Artificial Intelligence Team at Google, one challenge is that she has to work against the incentive structures inherent to capitalism. For publicly traded companies such as Google, constantly increasing profit is the highest good. “You can’t set up a system where the only incentive is to make more money and then just assume that people are going to magically be ethical,” she said.
When it comes to face recognition, the most controversial AI technology right now, Gebru explains that it took a global protest movement against police brutality for the host of large companies including Amazon, IBM, and Microsoft that build the technology to reconsider what they were deploying. Even so, Amazon only agreed to a one-year moratorium on selling its technology to police. (In contrast, Google decided not to sell facial recognition algorithms way back in 2018, and CEO Sundar Pichai has indicated support for EU legislation to temporarily ban the technology.)
Gebru advocates for changing the way AI is built through building “pressure from all sides,” including from internal advocates such as herself, other tech workers, outside activists, everyday people, journalists, regulators, and even shareholders.
“Internally, you can advocate for at least something that’s not so controversial, which is better documentation,” Gebru said. “It means you just have to test your system better, make it more robust. Even then if you’re asking for more resources to be deployed, why should they do that if they think what people have been doing so far has been working well?”
Another challenge is the sheer amount of money available for people building AI systems. Even if large companies stay away from selling face recognition to police to avoid a public relations disaster, smaller upstarts such as the controversial Clearview AI will step in to fill the void. When money is on the line, it becomes more difficult to make decisions in the interest of society rather than to pad a company’s bottom line.
“The reality is there’s just a lot of easy money to be made in AI,” said Olga Russakovsky, an assistant professor of computer science at Princeton University who focuses on computer vision. “I think there’s a lot of very legitimate concerns being raised, and I’m very grateful these concerns are starting to come to the forefront, to the center of these conversations. But there’s easy money and that has been the case for the past at least 10 years. I think it’s hard to resist that . . . and then have some of these deeper and harder conversations.”

The AI economy perpetuates monopolies and the commodification of data through AI capitalism's incessant drive for growth.


