The biggest barrier to humane, ethical AI: Capitalism itself

[Source images: sinisamaric1/Pixabay; geralt/Pixabay; Sharon McCutcheon/Unsplash]

By Katharine Schwab | 4 minute read

Over the last several years, a growing chorus of academics, activists, and technologists has decried the ways in which artificial intelligence technology could engender bias, exacerbate inequity, and violate civil rights.

But while these voices are getting louder, they still butt up against systems of power that value profit and the status quo over ensuring that AI is built in a way that isn’t harmful to marginalized people and society writ large.

In a panel discussion for Fast Company’s 2020 Innovation Festival, experts in ethical AI explained what they’re up against in trying to change the way that large companies and institutions think about building and deploying AI.

For Timnit Gebru, the technical colead of the Ethical Artificial Intelligence Team at Google, one challenge is that she has to work against the incentive structures inherent to capitalism. For publicly traded companies such as Google, constantly increasing profit is the highest good. “You can’t set up a system where the only incentive is to make more money and then just assume that people are going to magically be ethical,” she said.

When it comes to face recognition, the most controversial AI technology right now, Gebru explained that it took a global protest movement against police brutality for a host of large companies that build the technology, including Amazon, IBM, and Microsoft, to reconsider what they were deploying. Even so, Amazon only agreed to a one-year moratorium on selling its technology to police. (In contrast, Google decided not to sell facial recognition algorithms back in 2018, and CEO Sundar Pichai has indicated support for EU legislation to temporarily ban the technology.)

Gebru advocates for changing the way AI is built through building “pressure from all sides,” including from internal advocates such as herself, other tech workers, outside activists, everyday people, journalists, regulators, and even shareholders.

“Internally, you can advocate for at least something that’s not so controversial, which is better documentation,” Gebru said. “It means you just have to test your system better, make it more robust. Even then if you’re asking for more resources to be deployed, why should they do that if they think what people have been doing so far has been working well?”

Another challenge is the sheer amount of money available to people building AI systems. Even if large companies stay away from selling face recognition to police to avoid a public relations disaster, smaller upstarts such as the controversial Clearview AI will step in to fill the void. When money is on the line, it becomes more difficult to make decisions in the interest of society rather than to pad a company’s bottom line.

“The reality is there’s just a lot of easy money to be made in AI,” said Olga Russakovsky, an assistant professor of computer science at Princeton University who focuses on computer vision. “I think there’s a lot of very legitimate concerns being raised, and I’m very grateful these concerns are starting to come to the forefront, to the center of these conversations. But there’s easy money and that has been the case for the past at least 10 years. I think it’s hard to resist that . . . and then have some of these deeper and harder conversations.”


These harder conversations focus on the insidious ways that bias informs the datasets that scientists use to train automated systems and the ways in which these systems are deployed to punish the poor and marginalized.

“Face surveillance is one example,” said Gebru. “If it doesn’t work well, it’s still bad. If it works perfectly well, it’s still bad, depending on how it’s used.”

Another important focus is the fact that the people who are building AI right now are largely white and male. To tackle the problem of who is building these systems, Gebru cofounded Black in AI, an international organization that aims to increase the representation of Black people in the field through workshops, mentorship programs, online communities, and advocacy.

Similarly, Russakovsky and Fei-Fei Li, the Sequoia Professor in the Computer Science Department at Stanford University and codirector of Stanford’s Human-Centered AI Institute, founded an organization called AI4All, which runs AI-focused high school summer camps designed for women and people from underrepresented backgrounds. This past summer, AI4All hosted 16 different summer camps.

“My personal dream is to have this program AI4All touch on every state of America and really create the next generation in an ongoing way that is way more diverse, inclusive, that could change the human landscape of AI and how this tech is deployed,” Li said.

Even as more researchers and scientists speak out about the dangers of developing this powerful technology without guardrails, it’s incredibly difficult to push for structural change. But that hasn’t stopped these women from trying.

“Very soon there won’t be just AI companies versus no-AI companies,” said Li. “AI is so prevalent . . . that almost every industry is going to use it. So every industry, every company needs to be thinking about this and actually acting now.”


ABOUT THE AUTHOR

Katharine Schwab is the deputy editor of Fast Company's technology section. Email her at kschwab@fastcompany.com and follow her on Twitter @kschwabable

