Photo by Austin Neill on Unsplash

Can the Air Force Create Thinking Planes?

Smart drones? They are working on general artificial intelligence (GAI)

There is a plan afoot to create general artificial intelligence, that is, the sort of intelligence shown by a person who is trying to sell you something or find a brand new way to get around the rules:

In a paper published last week, a member of the US Air Force describes a model for artificial general intelligence (AGI). The author of the paper, A Model for General Intelligence, is Paul Yaworsky of the Information Directorate of the US Air Force Research Laboratory. There have been many efforts in the past to model intelligence in machines, but with little progress toward real cognitive intelligence like that of humans.

Currently, the way AI systems work is not understood completely. Also, AI systems are good at performing narrow tasks but not at complex cognitive problems. General intelligence aims to cover the gap between lower-level and higher-level work in AI – to try to make sense of the abstract, general nature of intelligence. Prasad Ramesh, “The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence” at Packt

Yaworsky’s open access paper “proposes a hierarchical model to help capture and exploit the order within intelligence.”

But it’s unclear just what that means:

At the risk of spoiling the ending for you, this paper proposes a hierarchy for understanding intelligence – a roadmap for machine learning developers to pin above their desks, if you will – but it doesn’t have any algorithms buried in it that’ll turn your Google Assistant into Data from Star Trek.

What’s interesting about it is that there currently exists no accepted or understood route to GAI. Yaworsky addresses this dissonance in his research. Tristan Greene, “The US Air Force is working on general artificial intelligence” at The Next Web

Eric Holloway, currently a captain in the United States Air Force, offers to help us sort it out:

I think they’ll create something that is along the lines of combining expert systems and machine learning, which has been done before. IBM’s Watson is this kind of thing.

The likely way this will turn out is they’ll realize human-in-the-loop is unavoidable for any useful system, so it’ll spin off into something like the existing field of human computation. Perhaps with the rationale that they will rely on humans-in-the-loop until they can figure out how to automate that part, like a lot of AI startups these days. Bonus points if the AI ‘learns’ to mimic human decision makers.
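
For readers who want a concrete picture of the combination Holloway describes, here is a minimal, hypothetical sketch: a hand-written expert-system rule layer answers what it can, a stand-in machine-learning scorer handles the rest, and low-confidence outputs are escalated to a human in the loop. Nothing here comes from Yaworsky’s paper; the rule names, toy scoring function, and confidence threshold are all illustrative assumptions.

```python
# Hypothetical sketch of an expert-system + machine-learning hybrid with a
# human-in-the-loop fallback. All names, rules, and thresholds are invented
# for illustration; nothing here is taken from Yaworsky's paper.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    label: str
    source: str        # "rules", "model", or "human"
    confidence: float

# 1. Expert-system layer: explicit, hand-written rules tried in order.
RULES: list[Callable[[dict], Optional[str]]] = [
    lambda obs: "friendly" if obs.get("transponder") == "IFF_OK" else None,
    lambda obs: "ignore" if obs.get("speed_kts", 0) < 30 else None,
]

def apply_rules(obs: dict) -> Optional[Decision]:
    for rule in RULES:
        label = rule(obs)
        if label is not None:
            return Decision(label, "rules", 1.0)
    return None

# 2. Machine-learning layer: a toy scorer standing in for any trained model.
def model_predict(obs: dict) -> Decision:
    score = min(obs.get("speed_kts", 0) / 600.0, 1.0)
    label = "investigate" if score > 0.5 else "monitor"
    return Decision(label, "model", abs(score - 0.5) * 2)

# 3. Human-in-the-loop: anything the automation is unsure about is escalated.
def decide(obs: dict, confidence_floor: float = 0.7) -> Decision:
    decision = apply_rules(obs) or model_predict(obs)
    if decision.confidence < confidence_floor:
        return Decision("escalate_to_operator", "human", decision.confidence)
    return decision

print(decide({"transponder": "unknown", "speed_kts": 450}))
```

The division of labor is the point Holloway is making: explicit rules where the knowledge is explicit, a learned model where it is not, and a human wherever confidence runs out.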

At any rate, it will not be a Terminator.

Mind Matters: It’s not clear how the machine would recognize a new problem, one that wasn’t coded in, or initiate a new solution to it. Wouldn’t that be an easily exploited weakness?

Eric Holloway: AI is going to always be blind to the unknown unknowns out there. We humans can at least adapt.

MM: I sometimes wonder whether AI can be likened to a clever animal. That is, a dog can be very smart – in training as a seeing-eye dog, perhaps – but he doesn’t know that on this trip to the vet, he is going to be neutered. It’s not a question of whether we tell him or not, or whether he wants it or not. He can’t even have a concept of what it means.

EH: I think there is an analogy to animals, but AI is even stupider than an animal. That’s because AI cannot have true teleology, that is, goals. Animals do have goals, through instinct.

MM: I see what you mean. To the extent that he understands his situation, a dog can definitely want something, independently, and adapt himself to getting it.

But now that you mention it, something has always puzzled me: Animals want to live. That governs much of their behavior and occupies their intelligence. The reason a cat can often outwit a human is that, while his intellectual abilities are minuscule by comparison, his focus on his own problems is total. How would we get that focus from something that isn’t alive? I realize one can program a machine to “want” something in the sense of installing a program that seeks it. But somehow it doesn’t seem quite the same thing.
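
(An aside, before Holloway’s answer: to make concrete what “installing a program that seeks it” amounts to, here is a minimal, hypothetical sketch. The “want” is nothing more than a number the loop was written to reduce, and both the number and the loop come from the programmer.)

```python
# Hypothetical sketch: a programmed "want" is an externally supplied number
# that a loop is written to reduce. The goal originates with the programmer.

def seek(target: float, position: float, step: float = 1.0, max_iters: int = 100) -> float:
    """Move position toward target; stop when the remaining error is negligible."""
    for _ in range(max_iters):
        error = target - position
        if abs(error) < 1e-6:            # the goal is "satisfied"
            break
        move = min(step, abs(error))     # never overshoot the target
        position += move if error > 0 else -move
    return position

print(seek(target=7.0, position=0.0))    # prints 7.0
```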

EH: Those are actually major problems in AI, mimicking those “signs of life,” like homeostasis, that is, the ability to be an adaptive, self-correcting system by nature. I don’t believe that such a property is computable. This is pointed out by Karl Friston, the “neuroscience genius with the keys to AI” (Wired). He calls it minimizing “free energy,” but it’s the same idea.
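
As a concrete aside: the outward behavior of homeostasis, a feedback loop nudging a variable back to a set point, is easy to compute, as in the hypothetical thermostat-style sketch below. Holloway’s claim is about what such a sketch leaves out: nothing in the code chooses or maintains a set point of its own.

```python
# Hypothetical sketch: the computable face of homeostasis is ordinary
# negative feedback. The set point is given from outside; nothing in the
# code maintains or defends a set point of its own.

def homeostat(reading: float, set_point: float = 37.0, gain: float = 0.3) -> float:
    """Return a correction that nudges the reading back toward the set point."""
    return gain * (set_point - reading)

state = 33.0
for _ in range(10):
    state += homeostat(state)   # each pass closes 30% of the remaining gap
print(round(state, 2))          # approximately 36.89 after ten corrections
```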

MM: You mentioned “computable.” That’s the part I wonder about. Say the cat wants you to let him into the garage because he can smell mice in there. You don’t want to do that because you know you will wake up in the middle of the night, hearing him wailing. And then you will have to get up to let him in or you can’t get back to sleep. He may or may not foresee all that himself but he doesn’t care. Finally, he wears down your patience—maybe you are trying to do tech support on the phone—so you just let him into the garage. He is not as smart as you are but he won anyway. It was a conflict between two intelligences, one handicapped by distractions. I am not sure how it is computable.

EH: It isn’t. Both parties exhibit intention, and intentionality is not computable. Turing machines are only reactive, by definition.

MM: That raises an interesting problem. Perhaps a general artificial intelligence created by a defense force would not be loyal to the people it was defending, the way a dog would be. Interesting territory for science fiction…

See also: The “Superintelligent AI” Myth The problem that even the skeptical Deep Learning researcher left out (Brendan Dixon)

and

Software Pioneer Says General Superhuman Artificial Intelligence Is Very Unlikely The concept, he argues, shows a lack of understanding of the nature of intelligence

