In a thought-provoking essay, San José State University philosopher Anand Vaidya asks: should it be okay to dismantle Star Trek's robotic crew member Data for research purposes, as proposed in the Star Trek: The Next Generation episode "The Measure of a Man"? Some of the Trek brass seemed to think so:
As real artificial intelligence technology advances toward Hollywood's imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn't interfere with their pursuing their goals.
Anand Vaidya, "If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs" at The Conversation (October 27, 2020)
The question is complicated by a number of factors:
➤ Vaidya admits, "Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence." No, and there are good reasons for thinking that no machine ever will. Machines only compute, and many aspects of thinking are non-computable. The human mind does not function like a computer. Even the computer industry is beginning to recognize that.
So it’s not clear that Data, as portrayed in Star Trek, could really exist.
➤ He goes on to cite the Turing test for machine intelligence as a useful guide to AI intelligence:
Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it's an AI or a human being? If he can't, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.
Anand Vaidya, "If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs" at The Conversation (October 27, 2020)
Unfortunately, the Turing Test is not especially useful for determining intelligence either. Clever answers can be generated via programming by a clever person. That doesn’t make the machine intelligent, let alone conscious.
And what the hearer thinks is happening is not strictly relevant: hearers might also be fooled by a clever bird, which is conscious but does not truly understand what it has been taught to repeat. The Lovelace test, which requires evidence of genuine creativity that is not inherent in the programming, is a much better test. It has never been passed.
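The point that clever answers can come straight from a clever programmer is easy to demonstrate. Here is a minimal, hypothetical ELIZA-style sketch (all rules and names invented for illustration): it produces fluent-sounding replies purely by matching canned patterns, with no understanding at all behind them.

```python
import re

# Illustrative canned rules written by a human programmer.
# Any "cleverness" in the replies belongs to the rule author,
# not to the machine running them.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
]

def reply(text: str) -> str:
    """Return the first matching scripted reply, else a stock deflection."""
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(reply("I feel lonely today"))    # Why do you feel lonely today?
print(reply("What is consciousness?")) # Please go on.
```

A conversation partner hearing "Why do you feel lonely today?" might credit the machine with empathy, yet the program is only echoing the input through a template, which is precisely why passing as human in conversation is weak evidence of intelligence, let alone consciousness.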
In the end, Prof. Vaidya argues that truly intelligent machines should be treated like persons. But then, of course, if they were truly intelligent, they would be persons. So the serious question isn’t really whether they would be persons but whether they would even be possible.
And at that point, a curious disconnect arises.
Here’s another scene from that Star Trek hearing to determine whether Data should be dismantled for research:
Well, it’s obvious to the observer that Data has the inner mental states of a human being (whatever one thinks about the argument offered that humans too are machines). So it’s unclear why there is much controversy about whether Data should have rights.
The practical reality is that Data most likely isn’t possible. Some science fiction fans may have a harder time with that fact than with the prospect of dismantling him. But life is full of unexpected outcomes.
You may also enjoy:
Are the aliens we never see obeying Star Trek’s Prime Directive? The Directive is, don’t interfere in the evolution of alien societies, even if you have good intentions
Star Trek: On second thought: Some serious quibbles. For example, why is Picard obsessed with Commander Data? Of course Picard had a good relationship with Data throughout Star Trek: The Next Generation, but I don't recall that relationship being as close as Star Trek: Picard makes it out to be. Unless I'm missing something critical, Picard's obsession with the (now visibly older) Data, while not completely out of character, could benefit from some explanation. (Adam Nieri)