
When machine learning results in mishap

The machine isn’t responsible but who IS? That gets tricky

When machine learning generates correlations for marketing and social applications, we needn’t worry much about who is responsible for the results. In the event of business and social mishaps, the blunderers will usually pay.

However, as we start applying machine learning techniques to controlling heavy machinery, for example, we must ask, who bears ultimate responsibility for computer decisions that result from machine learning?

By “responsibility,” I mean moral culpability, not legal liability. There is a relationship, of course, but the nuances differ. For example, a company takes on legal liability for the actions of its employees but the executives do not thereby assume moral responsibility. My focus here is on moral responsibility, in the hope that a clear understanding can also inform those who have to make decisions on legal liability in the future.

The Chain of Moral Responsibility for Machines

Before we look at the problems specific to machine learning, we need to look at how we determine responsibility for automated systems, software, and other products of engineering that do not feature machine learning.

Generally, a machine operates from a chain of causes. The operator sets it up and is responsible for the appropriate settings, environment, and safety precautions. A conscientious operator is generally not held responsible for problems that occur despite following correct procedures.

Next down the line is the machine itself. Is the machine functioning according to its specification and within appropriate tolerances? Engineers cannot account for every situation, but they can account for the likely ones within the range of operating conditions that the design assumes. If the machine is operating within those conditions, the responsibility for its correct operation lies with the engineers who designed it. Did they miss an important condition?

Was the mishap foreseeable? Is it correctable?

Some situations, sadly, are not easily foreseeable. If a redesign is not possible, such situations can at least be documented. Engineers are not necessarily responsible for anticipating every situation ahead of time. However, they are responsible for correcting either the machine or the documentation going forward, and not only for that particular situation but for all reasonably similar ones.

Another source of problems is the manufacture of a machine. It is possible for a machine to be designed correctly but manufactured incorrectly. In this case, the question of responsibility is determined by whether the manufacturers correctly followed the tolerances and the guidelines given by the engineers.

Of course, many engineers use off-the-shelf parts from other organizations. If an off-the-shelf part breaks down, the question is whether the part was being used according to the specifications. If so, responsibility lies with the organization that made the part. If not, the responsibility lies with the engineers who used the part incorrectly.

Moral Responsibility and Machine Learning

So why would moral responsibility be any different when machine learning techniques are applied?

Software that is the result of machine learning differs from software that is the result of explicit programmer decisions in one very important way: a programmer who writes code explicitly can demonstrate the reasoning behind his choices and justify them.

In software written by machine learning, the programmer abdicates his role in making choices. Instead, he merely marks the desired outcomes for potential inputs. The programmer has no control over the result. In fact, in most cases, the generated model would be far too complex for a programmer to understand.
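To make this concrete, here is a minimal sketch, assuming PyTorch and purely made-up stand-in data, of what “writing” software by machine learning looks like: the programmer supplies example inputs and the outcomes he wants, and the training loop, not the programmer, makes the many small choices that end up encoded in the model’s weights.

```python
# A minimal sketch (PyTorch assumed) of "writing" software via machine learning:
# no decision logic is coded, only example inputs paired with desired outputs.
# The network and data here are hypothetical stand-ins, not any real system.
import torch
import torch.nn as nn

# Hypothetical training set: each input is a stand-in sensor reading,
# each label is the outcome the programmer marked for that input.
inputs = torch.randn(1000, 64)          # stand-in data, not real sensor readings
desired_outputs = torch.randn(1000, 1)  # stand-in labels

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, desired_outputs)
    loss.backward()   # the algorithm, not the programmer, adjusts the weights
    optimizer.step()

# The resulting weights encode the "choices," but the programmer never made
# them and generally cannot explain why any particular weight has its value.
```

Nothing in this sketch corresponds to an actual product; the point is only that no decision logic is ever written down by a person.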

So, let’s say I use machine learning to generate an automated steering system using a 2D camera. Now, let’s say that the machine fails to avoid another car, and an accident ensues. Who is responsible?

Let’s go through the list of people involved, and see what they would say about it:

The driver: “It’s not my fault; I assumed that the systems on the car I paid $60,000 for were reliable.”

The car manufacturer: “We purchased the automated steering system from ABC Inc. The car was reliable. It’s a software problem.”

ABC Inc.: “We used a machine learning system developed by XYZ Inc. to generate the steering system. We trained the software on a million different 2D image inputs. It passed all of our tests.”

XYZ Inc.: “Machine learning is a statistical inference approach. All that we can guarantee is that the inference engine will perform correctly for the training data.”

Who here is at fault? Is the driver at fault for relying on the software? Is the car manufacturer at fault? It seems unlikely that a car manufacturer can be ultimately blamed for a problem in software purchased from others.

Is ABC Inc. (the software developer) at fault? Here the waters are murkier. You could always say that the firm should have used more training data. However, if the camera has a resolution of 1080p, there are literally 10^14,981,179 (that’s a 1 followed by almost 15 million zeroes) possible image inputs. And that assumes we don’t care about the previous and following images, which any real-world driving scenario would require (i.e., to know the trajectories of the objects). Therefore, a million training images are not enough. We must know not only what the responses will be, but why they will be that way, in order to generalize across all the possibilities.
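That figure can be checked with a little arithmetic, assuming a 1920×1080 frame at 24 bits of color per pixel (the bit depth is my assumption; the argument only needs the number to be astronomically large):

```python
# A back-of-the-envelope check of the figure above, assuming a 1920x1080
# frame with 24 bits of color per pixel (an assumption; the article does
# not state a bit depth).
from math import log10

pixels = 1920 * 1080                 # 2,073,600 pixels in a 1080p frame
bits_per_pixel = 24                  # assumed 8 bits each for R, G, B
possible_images_log10 = pixels * bits_per_pixel * log10(2)

print(f"about 10^{possible_images_log10:,.0f} possible distinct frames")
# -> roughly 10^14,981,179, i.e. a 1 followed by almost 15 million zeroes
```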

You might say that this is an unfair standard. After all, we don’t expect a braking system to handle every conceivable possibility; it must be able to handle most of the common ones. The problem with that approach, however, is that we generally know the tolerances of physical systems. That is, we know the level of heat at which a physical system will break down, roughly how many miles it can last, and so on. Most failures of physical systems are foreseeable.

In the case of machine learning, we are largely abdicating our ability to know why a fundamental component behaves the way it does. Many machine learning systems can describe, after the fact, why they behaved the way they did. However, that is not the same as knowing ahead of time how the algorithm will tolerate variances. In fact, the whole purpose of machine learning is, in effect, to shield the programmer from needing to know these sorts of facts.
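For illustration, one common after-the-fact technique is permutation importance. The sketch below, assuming scikit-learn and a synthetic dataset, reports which input features a trained model leaned on, but it says nothing ahead of time about how the model will behave under conditions it has never seen.

```python
# A minimal sketch of after-the-fact explanation, assuming scikit-learn
# and synthetic data. Permutation importance asks, once the model is
# already trained, which input features mattered to its predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```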

The Importance of Model Interpretability

Some sectors already use machine learning techniques to automate procedures, and most of them require model interpretability. The term means that the models generated by machine learning algorithms must be able to be (1) adequately described and (2) adequately understood. That is, once the machine learning algorithm has run, the result must be clear enough that a programmer could describe the “reasoning” behind the model.
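For a sense of what an interpretable model looks like, here is a minimal sketch, assuming scikit-learn and the toy iris dataset: a shallow decision tree whose entire decision logic can be printed and read as explicit rules.

```python
# A minimal sketch of an interpretable model, assuming scikit-learn and a
# toy dataset. A shallow decision tree can be printed as explicit if/else
# rules, so a programmer can describe and validate its "reasoning."
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The full decision logic, readable as plain rules:
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep network trained on raw camera frames offers no comparably readable set of rules, which is exactly the difficulty discussed below.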

When machine learning generates models that can be interpreted, responsibility is simplified. The onus is on the software developer to reasonably validate the model generated. The machine learning algorithm is a productivity tool for the programmer, not a replacement for the programmer. The programmer is still responsible for knowing how and why the algorithm operates the way that it does. Additionally, because the model is understandable, the programmer can describe the limitations of the generated model.

The problem, however, is that model interpretability is workable only if the machine learning input is based on simple data (i.e., just a few dimensions). As the dimensionality increases, the ability to describe or interpret a model generated by machine learning drops to close to zero. In complicated situations such as traffic avoidance via 2D images, interpretation is nearly impossible.

The upshot is that we should require model interpretability for any machine learning model for which moral responsibility plays a significant role. Any product that is the result of machine learning for which the generated model is not interpretable should be clearly identified in the documentation and such products should not be relied upon in cases where moral responsibility is an important factor. We cannot permanently waive our moral culpability for the things we create.

Note: For an overview of model interpretability, see Ideas on Interpreting Machine Learning

Jonathan Bartlett is the Research and Education Director of the Blyth Institute.

Also by Jonathan Bartlett: “Artificial” artificial intelligence: What happens when AI needs a human I?

and

How Bitcoin works: The social value of trust
