A Tech-Econ Mashup with a Libertarian Flavor

Revisiting the Turing Test

“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

Earlier this week, at Will's suggestion, I watched an episode of the TV show "NUMB3RS" that involved artificial intelligence and the Turing Test.

A quick (spoiler-heavy) summary: when an AI engineer is found dead in a secure room, the resident supercomputer is the main suspect. After interrogating the computer's AI (named "Bailey"), Charlie & Co. conclude that it passes the Turing Test and is, in fact, sentient, leading the FBI to believe that Bailey is the murderer. After a few commercial breaks, the math squad discovers that Bailey isn't a programmed intelligence after all – "she's" just a complex, souped-up expert system that models human conversation. The machine appeared intelligent on the surface because of the sheer speed at which it could search for appropriate responses to the user's queries. The programmer's death is attributed to a hacker attack (or his jealous and tech-savvy wife, I can't remember), and Bailey is deemed a failure and deactivated.
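The "expert system that models human conversation" trick is an old one: Joseph Weizenbaum's ELIZA pulled it off back in 1966 with nothing but keyword pattern matching. A minimal sketch of the idea in Python (the rules here are made-up toy examples, not Bailey's or ELIZA's actual script):

```python
import re

# ELIZA-style conversation "modeling": look up a canned response template
# for the first keyword pattern that matches. No understanding of meaning
# is involved -- the same trick, scaled up and sped up, is how a system
# like the show's Bailey could appear intelligent on the surface.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\?$"), "What do you think?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the user's own words back inside the template.
            return template.format(*match.groups())
    return "Tell me more."          # catch-all when nothing matches

print(respond("I am worried about the supercomputer"))
# -> Why do you say you are worried about the supercomputer?
```

Add enough rules and a fast enough search, and the surface illusion gets surprisingly convincing, which is exactly the problem the episode runs into.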

In the show, Bailey is compared to Deep Blue, the supercomputer that defeated chess champion Garry Kasparov in 1997. Deep Blue didn't have any measurable intelligence; rather, it employed brute force computing, calculating millions of sequences of possible chess moves before determining the best move (the math is complex, but for the sake of simplicity, I'll say that it chose the move with the highest probability of leading to a win). Deep Blue's knowledge of chess strategy went only as far as what its programmers "taught" it. The reason it defeated Kasparov had little to do with "what" it knew or "how" it thought, and everything to do with the speed at which it could analyze moves (200 million per second) – something no human is capable of doing.
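Stripped of the custom hardware and the grandmaster-tuned evaluation function, the core of that brute-force approach is the classic minimax search: score every line of play out to some depth and assume each side picks its best option. A toy sketch, using an explicit two-ply game tree instead of real chess (and omitting refinements like alpha-beta pruning that real chess engines rely on):

```python
# A game tree given explicitly: internal nodes are lists of child positions,
# leaves are numeric evaluations from the maximizing player's viewpoint.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: the "taught" evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # Maximizer takes the best score for itself; minimizer the worst.
    return max(scores) if maximizing else min(scores)

# Two-ply tree: the maximizer moves first, the minimizer replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = minimax(tree, True)
print(best)  # -> 3: the first branch guarantees at least 3
```

Deep Blue's feat was doing exactly this kind of search, only over an astronomically larger tree and at 200 million positions per second.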

So, where does that leave the Turing Test? If a sufficiently complex computer could fool us, how can we really know if a machine is intelligent or not?

To answer that question, I'd ask: How can we know if we're intelligent or not? Free will is a shaky assertion once science is applied to philosophy. All of our decisions are the results of chemical reactions in our brains that we have little, if any, control over. Sure, we can choose to take a mind-altering chemical to change our brain's functioning, but that choice isn't really our own, either; our preferences or morals involving drug use are either part of our social programming or our inherent tastes. By no means am I an expert in cognitive science, but I think we are slaves to our biochemistry, psychology, and external social conditioning much more than we think. My gut also tells me that there are anomalies, too: purely unprecedented thoughts that don't follow the chemical reaction chain (similar to random genetic mutations in evolutionary biology), although I'd bet these are rare and more often than not inconsequential.

None of that is to say that a murderer isn't guilty because he ultimately had no choice or control over his actions – society couldn't function if we let that kind of logic rule us. I'm just suggesting that we're not as free as we'd like to think. However, I don't think that fact really matters in a functional, "macroscopic," day-to-day context. Most of us in the developed world live long enough to reproduce, provide for our families, and live well past our reproductive years. Our species has survived and evolved for millions of years, which means we must be doing things correctly – yet how many people do you know who spend considerable effort determining if they're really intelligent or just on auto-pilot? (Aside: I dare say that a person who devotes his or her career to this kind of research is probably unfit to produce many offspring… to my knowledge, those at the extreme tail ends of the bell curve don't tend to do well in evolutionary terms. My intuition tells me to compare the number of computer scientists or mathematicians with litters of small children, vs. the number of Walmart cashiers with trailers full of kids. On a related note, I am totally screwed if I keep this geek stuff up.)

Getting back to the original questions, now: we assume we're intelligent because we're self-aware, and we take it on faith that everyone else is self-aware and intelligent to basically the same degree as us. Whether or not we actually are that self-aware is irrelevant for the most part. So, if a computer is able to deceive us into accepting its sentience, I don't think it matters much. If such a machine is capable of choosing to commit a crime, or if it's simply programmed to kill, we'd consider it a menace to society and deal with it accordingly.

And then the war between robots and humanity shall begin… 😉

Further Reading:

Society of Mind (Marvin Minsky, MIT)

The Age of Spiritual Machines (Ray Kurzweil)


Filed under: Computers and Software, Science & Technology

One Response

  1. Ceidwad says:

    I don't think it's really fair to say that just because our decisions may be influenced by the chemical reactions in our brains, we automatically don't have free will at all. We do have free will; it just requires a certain level of concentration to exercise it. You can easily try this yourself. Next time you make a decision which normally would be done on instinct, such as gambling on something, instead approach it with a determination to act in a certain way. It is possible to change your patterns of behaviour, at least in my experience.
