
How Toxic Bias Infiltrates Computer Code

A look at the dark underbelly of modern algorithms

The newly released documentary Coded Bias, from Shalini Kantayya, takes the viewer on a tour of the ways modern algorithms can undermine justice and society, and of how they are actively doing so at the present moment.

Coded Bias highlights many under-discussed issues regarding data and its use by governments and corporations. While its prescriptions for government use of data are well considered, corporate use of data raises many additional issues that the film skirts entirely.

As the film points out, these algorithms are presented to us as if they were a form of intelligence. But they are actually just math, and that math can, intentionally or unintentionally, encode biases. In fact, as Bradley Center fellows Robert J. Marks and George Montañez point out, bias is literally what makes machine learning algorithms work.
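To make that concrete, here is a minimal sketch in Python. Everything in it is hypothetical, invented for illustration: the historical hiring records, the "neighborhood" feature, and the predict rule. It shows how a model fit to biased past decisions simply replays them. There is no intelligence here, only arithmetic over old cases.

```python
# Hypothetical historical hiring decisions: ((test_score, neighborhood), hired?)
# Note the pattern baked into the data: applicants from neighborhood 1 were
# rejected even when their scores matched or beat those from neighborhood 0.
history = [
    ((85, 0), True), ((70, 0), True), ((60, 0), False),
    ((85, 1), False), ((70, 1), False), ((90, 1), False),
]

def predict(applicant):
    """Decide by copying the single most similar past case (1-nearest-neighbor)."""
    def distance(case):
        (score, hood), _ = case
        return abs(score - applicant[0]) + 50 * abs(hood - applicant[1])
    _, decision = min(history, key=distance)
    return decision

# Two applicants with identical test scores, differing only in neighborhood:
print(predict((85, 0)))  # True
print(predict((85, 1)))  # False -- the old bias is now "the algorithm's" output
```

Nothing in the code mentions any protected group, yet the model discriminates anyway, because the bias lives in the training data rather than in any line a reviewer could point to.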

Now, the bias is not necessarily a problem in itself. The problem is that machine learning is very good at hiding biases within a cloud of statistics, which often prevents even a model's creators from determining the basis of its judgments. This flies directly in the face of the goals of public governance. Transparency, accountability, the right to appeal: all of these depend on a decision-making structure that is clear and understandable. But with machine learning, the decision-making process is hidden behind a statistical inference machine that even its creators don't understand.
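As a rough illustration of that opacity (synthetic data and arbitrary features; none of this comes from the film), the sketch below trains a toy logistic-regression "decision-maker" from scratch. What matters is what the finished model consists of: a bare list of twenty numbers. There is no rule or clause in it that an affected citizen could inspect, question, or appeal.

```python
import math
import random

random.seed(0)

# Synthetic training set: 200 "applicants", each reduced to 20 opaque
# engineered features, paired with hypothetical yes/no outcomes.
data = [([random.gauss(0, 1) for _ in range(20)], random.choice([0, 1]))
        for _ in range(200)]

weights = [0.0] * 20
for _ in range(200):  # plain gradient-descent training
    for features, label in data:
        z = sum(w * x for w, x in zip(weights, features))
        pred = 1 / (1 + math.exp(-z))  # logistic "probability of yes"
        for i, x in enumerate(features):
            weights[i] += 0.01 * (label - pred) * x

# The entire basis for every future decision is this list of numbers:
print([round(w, 2) for w in weights])
```

And this is the simplest model in common use. A modern neural network replaces those twenty numbers with millions or billions, which is why even its creators cannot say what any one of them means.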

Another area the film tackles is surveillance. The documentary notes that many governments are tapping into their newfound capacity for expansive monitoring, not only of what you do online but also, using facial recognition, of what you do in public. Worse, governments sometimes treat a refusal to submit to pervasive surveillance as hard evidence of wrongdoing, even though the surveillance itself may violate, in the United States, Fourth Amendment protections against unreasonable searches.

The more difficult problems concern corporations, especially those whose public influence makes their role in society more similar to that of a government than to that of a typical business. When a company controls what you see and hear, who sees and hears you, and what people see and hear about you, calling that corporation just an ordinary business stretches the category. I can choose not to eat at a restaurant I don't like, not to shop at a store that I think promotes wrong values, or not to use a plumber who has treated me poorly. But when large technology corporations step in, participation often becomes mandatory, and sometimes we don't even know what we are a part of.

Toward the end, the film calls for an "FDA for algorithms." The idea is that, before an algorithm could be deployed on collected data, it would need approval from a public entity certifying that it does not encode unacceptable bias. That is a perfectly reasonable expectation for algorithms in the public sector, but not necessarily for the private sector. For one thing, such an agency might make some people overly confident that an approved algorithm is free of bias when, in fact, no algorithm can be free of bias.

Perhaps a better option for the public is simply a clearer recognition that algorithms cannot achieve justice; they can only automate bias.

As for corporate use of data, I agree that there are many valid concerns about the data corporations are collecting and how they are using it. However, I have yet to find a proposed solution that successfully balances the privacy expectations of individuals, the needs of technology innovators, and the flexibility corporations need in handling their own data. And corporations require freedoms too, for the simple reason that a corporation is merely a conglomeration of people.


You may also wish to read:

Can a computer algorithm be free of bias? Bias is inevitable, but it should be recognized and admitted. (Robert J. Marks)


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
