
How Do We Know the Machine Is Right If No One Knows How It Works?

We don’t, and that’s a problem, says Oxford philosopher John Zerilli

Oxford philosopher John Zerilli, author of A Citizen’s Guide to Artificial Intelligence (2021), asks us to consider how machine learning, the most widely used type of AI, might be deciding our lives without our knowing it:

There are many reasons not to take job rejections personally, but there’s one in particular you might not consider: you might have been screened out by an algorithm that taught itself to filter candidates by gender, surname or ethnicity – in other words, by factors that have nothing to do with your ability to do the job. Even if you’re unfazed by the spectre of runaway robots enslaving humanity, this little tale shows how the ascendancy of machine learning (ML) comes with risks that should vex even the most sanguine observer of technology.

John Zerilli, “Should we be concerned that the decisions of AIs are inscrutable?” at Psyche

The problem isn’t so much that you don’t know but that maybe no one does.

As anyone in the field will tell you, the machinations of ML systems can be inherently difficult to interpret, particularly those of deep neural networks, a special class of ML systems that boast exceptional performance. In the argot of the ML community, deep neural networks are black boxes – devices whose inner workings are bafflingly complex and opaque, even to the initiated. But does this opacity really matter?

John Zerilli, “Should we be concerned that the decisions of AIs are inscrutable?” at Psyche

Some industry pros, Zerilli says, argue that this inscrutability doesn’t really matter so long as we are getting better results than we would get from human decision-making (the reliabilist position). The trouble is, with the machine’s workings hidden, we have no way of knowing whether that is actually the case.

Human bias is often detectable. Toxic bias, coded into unthinking programs, is typically inscrutable. Take, for example, an algorithm that assesses the risks posed by prisoners applying for parole:

When it labels a prisoner as ‘high risk’, neither the prisoner nor the parole board can be truly satisfied until they have some grasp of the factors that led to it, and the relative weights of each factor. Why? Because the assessment is such that any answer will necessarily be imprecise. It involves the calculation of probabilities on the basis of limited and potentially poor-quality information whose very selection is value-laden.

John Zerilli, “Should we be concerned that the decisions of AIs are inscrutable?” at Psyche

Will people think that the justice system runs more fairly if no one really knows why decisions were made?
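
To see what a satisfying answer would even look like, consider a minimal sketch in Python of a transparent, points-style risk score. The factors and weights below are entirely hypothetical; the point is that with a model like this, any “high risk” label can be itemized factor by factor, which is precisely the breakdown a black-box network cannot supply.

```python
# A hypothetical, hand-built parole risk score. Every factor and weight
# is visible, so a "high risk" label can be traced back to its inputs.
WEIGHTS = {
    "prior_offenses": 2.0,             # made-up weight: more priors raise the score
    "age_under_25": 1.5,               # made-up weight: youth raises the score
    "years_since_last_offense": -0.5,  # made-up weight: time clean lowers it
}

def risk_score(prisoner):
    """Total score: the weighted sum of the prisoner's factors."""
    return sum(WEIGHTS[k] * prisoner[k] for k in WEIGHTS)

def explain(prisoner):
    """Itemize each factor's contribution, largest magnitude first."""
    parts = {k: WEIGHTS[k] * prisoner[k] for k in WEIGHTS}
    return sorted(parts.items(), key=lambda kv: -abs(kv[1]))

prisoner = {"prior_offenses": 3, "age_under_25": 1, "years_since_last_offense": 4}
print("score:", risk_score(prisoner))
for factor, contribution in explain(prisoner):
    print(f"  {factor}: {contribution:+.1f}")
# A deep network yields only the score; there is no comparable
# factor-by-factor breakdown to hand the prisoner or the parole board.
```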

Discussing a recent report on how algorithmic bias occurs in health care, technology correspondent Casey Ross writes,

The research to identify bias — based in the Center for Applied Artificial Intelligence at the University of Chicago’s Booth School of Business — was established after an initial study uncovered racial bias in a widely used algorithm developed by the health services giant Optum to identify patients most in need of extra help with their health problems. They found that the algorithm, which used cost predictions to measure health need, was routinely giving preference to white patients over people of color who had more severe problems. Of the patients it targeted for stepped-up care, only 18% were Black, compared to 82% who were white. When revised to predict the risk of illnesses instead of cost, the percent of Black patients flagged by the algorithm more than doubled.

Casey Ross, “‘Nobody is catching it’: Algorithms used in health care nationwide are rife with bias” at Stat News

In this case, the researchers were lucky: they found out why the algorithm had performed as it did. It was using the cost of treatment as a proxy for the severity of illness. Taking the question of who would pay out of the picture resulted in more equitable predictions of need. But many algorithms continue to spit out results, and no one knows why.
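
The mechanism the researchers uncovered is easy to reproduce in miniature. The sketch below uses hypothetical data, not the Optum model: two groups have identical distributions of health need, but one historically incurs lower costs for the same need (say, because of poorer access to care). Flagging the top 20 percent of patients by cost underselects that group; flagging by severity does not, even though neither ranking ever looks at group membership.

```python
# A hypothetical miniature of the proxy problem (not the Optum system).
# Both groups have the same distribution of health need, but Group B
# historically spends less for the same need.
import random

random.seed(0)

def make_patient(group):
    severity = random.uniform(0, 10)          # true health need, same for both groups
    spending = 1.0 if group == "A" else 0.6   # Group B incurs lower cost per unit of need
    cost = severity * spending + random.gauss(0, 0.5)
    return {"group": group, "severity": severity, "cost": cost}

patients = [make_patient(g) for g in ("A", "B") for _ in range(5000)]

def flag_top(patients, key, frac=0.2):
    """Flag the top `frac` of patients, ranked by `key`, for extra care."""
    cutoff = sorted((p[key] for p in patients), reverse=True)[int(len(patients) * frac)]
    return [p for p in patients if p[key] >= cutoff]

for key in ("cost", "severity"):
    flagged = flag_top(patients, key)
    share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
    print(f"ranked by {key}: {share_b:.0%} of flagged patients are from Group B")
```

Run as written, the cost ranking leaves Group B a small minority of those flagged, while the severity ranking flags the two groups roughly equally, echoing what the Chicago researchers found when the Optum algorithm was revised.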

Zerilli concludes,

My guess is that we’ll all very much want to know why our super-clever ML systems decide as they do, regardless of how prescient they prove to be, and how legal they are. Ultimately this is because the potential for decisions to mistreat us isn’t governed by the tides that determine accuracy and error. Explanations in some form are probably here to stay.

John Zerilli, “Should we be concerned that the decisions of AIs are inscrutable?” at Psyche

Perhaps a new discipline will form: the algorithm sleuth, a professional who finds out exactly how an algorithm is arriving at its biased results and recommends a fix.


You may also wish to read:

AI researcher sounds alarm: AI “emotion detectors” are faulty science. An industry worth over $30 billion uses ERT on school children and potential hires, often without knowledge or consent. The science behind the claim that AI can recognize six basic universal human emotions is coming under fire amid claims of race bias.

How bias can be coded into unthinking programs. MIT researcher Joy Buolamwini started the project as a trivial “bathroom mirror” message. A US federal study of 189 facial recognition software platforms found that the majority performed differently for different demographics, indicating unintended bias. (Heather Zeiger)

and

How toxic bias infiltrates computer code. A look at the dark underbelly of modern algorithms. A new film makes the point that algorithms cannot achieve justice; they can only automate bias. (Jonathan Bartlett)


