Why We Must Free AI from the Constraints of Hollywood Tropes

In 1989’s Star Trek: The Next Generation episode “The Measure of a Man,” Starfleet conducts a legal hearing to decide whether the android Lt. Cmdr. Data is sentient, and thus possesses human rights, or is simply an object that is the property of Starfleet.

The script delineates three criteria for sentience: intelligence, consciousness, and self-awareness. Data’s intelligence is taken as a given, while the script treats consciousness as too nebulous a concept to determine whether Data possesses it.

The argument thus hinges on Data’s self-awareness, which Picard (acting as Data’s counsel) argues can be inferred from his behavior. Data has exhibited self-aware behavior, so he must be self-aware, right?

A single hour of television can only take us so far, leaving us with the essential question: can we determine self-awareness (and hence sentience) from behavior? Or are we simply fooling ourselves? Perhaps Data (and every other AI, now and in the future) is at best programmed to mimic self-aware behavior.

Beware the AI Trope

There is another participant in the Star Trek argument: we the audience. We’ve come to know and love Data as a character on a favorite TV show, so of course he’s sentient.

Human beings play robots, androids, and other AI characters all the time. All such characters are inherently human simply by virtue of that fact. Even AI characters without human form, like HAL or the computer from WarGames, have human voices.

It’s no wonder, then, that ‘robot (or other AI) is actually human underneath’ has become a Hollywood trope.

The problem: as with other tropes, ‘AI is human underneath’ has jumped the boundaries of show business, coloring how we think about AI in the real world.

We assign intelligence (a human trait) to AI. We expect it to learn (a human activity). We even believe AI can reason. In other words, we take it for granted that the AI we have today is simply a precursor to Lt. Cmdr. Data. All we need to do to create truly sentient AI is to program it better.

Tropes, however, aren’t reality. They are narrative conveniences that help audiences understand fictional stories. Sometimes they align with the real world, but not always.

In the case of AI, confusing the reality of the software with the trope leads to misunderstandings that are occasionally dangerous. We expect AI to have human qualities it lacks, and we’re surprised when it exhibits qualities that aren’t human at all.

For example, we have come to expect AI to have some measure of common sense, an understanding of causality, or the ability to change focus from one topic to another – capabilities that are simple for humans but still largely out of reach of today’s AI.

Perhaps more dangerous are the non-human qualities that AI exhibits in spades. For example, AI is particularly susceptible to the ‘garbage in, garbage out’ principle: feed an AI model noisy, biased, or otherwise poor data, and you’ll get correspondingly poor results.

Furthermore, it’s surprisingly easy to misdirect AI. Say you’re training a model to recognize cats in images by feeding it large numbers of cat pictures. A mere handful of dogs mislabeled as cats can skew the entire effort.
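To see the principle in action, here’s a minimal sketch using scikit-learn on synthetic data as a stand-in for real cat and dog pictures. The dataset, the noise rates, and the model are all illustrative assumptions, not anyone’s production pipeline:

```python
# Illustrative sketch: how a fraction of mislabeled training examples
# ("dogs labeled as cats") can degrade a simple classifier.
# Synthetic data stands in for real images; all numbers are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class problem: class 1 = "cat", class 0 = "dog".
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for noise_rate in (0.0, 0.05, 0.15):
    y_noisy = y_train.copy()
    # Flip a fraction of "dog" labels to "cat" to simulate mislabeling.
    dogs = np.where(y_noisy == 0)[0]
    rng = np.random.default_rng(0)
    flips = rng.choice(dogs, size=int(noise_rate * len(dogs)),
                       replace=False)
    y_noisy[flips] = 1

    # Train on the corrupted labels, evaluate on clean test labels.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise_rate:4.0%} -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

On an easy synthetic problem like this the accuracy drop is modest; with real images, deeper models, and deliberately targeted mislabeling, the damage can be considerably worse.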

We’re surprised by the limitations and failings of AI because we’ve set our expectations too high. After all, we expect the AI to be human because humans play androids on TV.

Furthermore, we may be obscuring what AI is really good at because we’re expecting the wrong strengths. We expect common sense, and thus we’re appalled when AI makes a boneheaded move, like mistaking a white truck for the sky.

We also don’t fully realize that AI is really good at highlighting bias in data sets, so we’re aghast when our HR resume-filtering bot favors white men. But if we used the same software with the express purpose of uncovering bias in our data sets, we’d be quite happy with such a result.
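To make that flipped purpose concrete, here’s a minimal sketch of using a screening model’s output as a bias detector rather than as a filter. The DataFrame, its columns, and the decisions in it are hypothetical stand-ins, not any real HR product:

```python
# Illustrative sketch: turning a screening model into a bias audit.
# Instead of asking "whom should we hire?", we ask "do the model's
# recommendations differ by group?" All data here is hypothetical.
import pandas as pd

# Hypothetical output of a resume-screening model: one row per
# candidate, with the model's pass/fail decision and a group label.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group, plus the ratio between the lowest and
# highest rates (the basis of the "four-fifths rule").
rates = decisions.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Warning: selection rates suggest bias worth investigating.")
```

The four-fifths rule is a rough screen US regulators use for disparate impact; the point is simply that the same predictions that look like a scandal in a hiring pipeline look like a finding in a bias audit.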

Mimicry vs. Intelligence

Virtual assistants, whether voice-based interactive voice response systems or text-based chatbots, are a popular application of AI today. The best assistants are the ones that can conduct realistic human interactions – the more realistic, the better.

There’s no way, however, that we’d go so far as to consider the computer voice that answers our call to the cable company to be sentient. The best we can expect is for it to accurately mimic human behavior.

Mimicry, after all, has been a goal of AI since Alan Turing proposed his famous test in 1950. All an AI has to do to pass the test is mimic human behavior sufficiently well to fool a human interrogator.

The irony, of course, is that mimicry wasn’t the point of Turing’s test at all. For him it was merely a means to an end, based on the assumption that only intelligent software would be able to successfully mimic the behavior of a human participant.

The Turing test, however, had an unfortunate unintended consequence: it has driven generations of programmers and AI researchers to create ever more convincing mimicry of human behavior. True to form, today’s deepfakes and virtual assistants are fooling more people every day. It won’t be long until no one can tell the difference.

If we extrapolate our skill at creating AI that excels at mimicry into the future, will we ever bridge the vast divide between the computer code of today and true sentience? Or will the first real Lt. Cmdr. Data be nothing more than a box of chips and computer programs that excels at fooling everyone around it into believing it is human?

Don’t fall for the trope. Data was certainly human, but he was fictional. Fictional AIs are all human under the covers. The real world simply doesn’t work like that.

The Intellyx Take

Perhaps humans will eventually invent AI that is truly sentient. Or perhaps not. There’s no way I can say, of course. But I can conclude with reasonable certainty that simply improving AI’s ability to mimic human behavior isn’t the path to creating true sentience.

The more fundamental question: is sentience what we want from AI anyway? Do we want robots with human rights? Any Asimov fan knows that we’re opening a massive can of worms if we go that route.

Perhaps mimicry is our long-term goal. True, the best virtual assistants will be the ones that excel at mimicking human behavior – but what about the rest of the AI story?

I’m sure some people are shivering with anticipation, waiting for the day that sexbots are perfectly humanlike. But the point of a sexbot is not to have someone with human rights. On the contrary, the whole point of such an AI is to be able to treat it as property. Otherwise you might as well date a real human, am I right?

The truth of the matter is that, in spite of the burgeoning sexbot industry, many of the important use cases for AI have little or nothing to do with mimicking human behavior. Generally speaking, we should want AI – indeed, all software – to be good at things that humans aren’t.

We are far from reaching agreement on this point. The goal for self-driving cars, for example, is to drive like humans (only better, we hope). But once we free ourselves from the AI tropes of Hollywood and the curse of the Turing test, we might find that AI will finally reach its true purpose – without having to behave like humans at all.

© Intellyx LLC. Intellyx publishes the Intellyx Cloud-Native Computing Poster and advises business leaders and technology vendors on their digital transformation strategies. Intellyx retains editorial control over the content of this document.