If you don’t know, you’re in good company. Kalev Leetaru, a technology entrepreneur and regular Forbes contributor, points out in a recent piece that most engineers who use AI don’t know either. While it may not matter that we don’t know, it matters a good deal that they don’t.
Leetaru suggests that engineers fail to dig down to the mechanism, at least in part, because they get enamored with the seeming “magic” of Deep Learning:
“Deploying one’s first machine learning algorithm can in many ways be like experiencing magic for the first time. Somehow, without any coding, this piece of software has ‘learned’ the underlying patterns of its training data and applied its new knowledge to achieve quite reasonable results on novel input data.

Much like watching a baby take its first steps, the wonderment and magic of this experience lies not in the quality of the results, but rather in the amazement of the moment.”

– Kalev Leetaru, “Why Is There So Little Understanding Of Deep Learning’s Limitations?” at Forbes
That’s part of the cause. There is another factor: The Internet’s culture of sharing code.
When businesses first adopted computers, they had to license the software they relied on. Much business software from leading technology companies such as Microsoft, IBM, and Oracle is still sold through software licenses. But the software powering much of the Internet is different: It is shared more than it is licensed.
Modern AI applications also have shared roots. Sometimes developers obtain code that another team developed, such as Google’s TensorFlow. Or they use AI through a service, such as those offered by Microsoft, Amazon, and Google.
Why does sharing matter? Because software developers can be as lazy as the rest of us, Leetaru notes:
“… they practice their trade merely by plucking pre-made canned algorithms off the shelf, pointing them to a directory of training and testing data and following the instructions to twist a few knobs until the accuracy level is high enough, before deploying to production and moving on to the next project.”

– Kalev Leetaru, “Why Is There So Little Understanding Of Deep Learning’s Limitations?” at Forbes
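The workflow Leetaru describes really can be that short. Here is a minimal sketch of it in Python, using scikit-learn and its bundled iris dataset purely as stand-ins (neither is named in the article) for any off-the-shelf library and any directory of training data:

```python
# A sketch of the "canned algorithm off the shelf" workflow:
# no understanding of the model's mechanism is required at any step.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Point the algorithm at some training and testing data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pluck a pre-made algorithm off the shelf and twist a few knobs.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Check that the accuracy level is "high enough", then deploy and move on.
accuracy = model.score(X_test, y_test)
print(f"accuracy: {accuracy:.2f}")
```

A dozen lines, no insight into why the model works or where it will fail — which is exactly the habit the article criticizes.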
If it were their own code, they would dig in to determine the cause when an odd behavior (that is, a bug) arises. Instead, they treat it as “an unavoidable retraining requirement.”
Because they’ve chosen to not deeply learn their deep learning systems—continuing to believe in the “magic”—the limitations of the systems elude them. Failures “are seen as merely the result of too little training data rather than existential limitations of their correlative approach” (Leetaru). This widespread lack of understanding leads to misuse and abuse of what can be, in the right venue, a useful technology.
Anyone can borrow a drawer full of tools. Knowing when to use a tool and why and knowing its limitations separates the craftsperson from the novice. Sadly, too many AI engineers work as novices rather than using their full humanity to make good, informed decisions. That puts us all at risk.
Also by Brendan Dixon: If You Think Common Sense Is Easy to Acquire… Try teaching it to a state-of-the-art self-driving car. Start with snowmen.
Featured image: Content to leave the work to others/press master, Adobe Stock