A Tech-Econ Mashup with a Libertarian Flavor

The Line between Artificial and Human

Daniel Roth over at Wired recently posed the question of whether humanoid robots may someday be granted “human” rights.

I’ve seen videos of the incineration of T.M.X. Elmo (short for Tickle Me Extreme); they made me feel vaguely uncomfortable. Part of me wanted to laugh—Elmo giggled absurdly through the whole ordeal—but I also felt sick about what was going on. Why? I hardly shed a tear when the printer in Office Space got smashed to bits. Slamming my refrigerator door never leaves me feeling guilty. Yet give something a couple of eyes and the hint of lifelike abilities and suddenly some ancient region of my brain starts firing off empathy signals. And I don’t even like Elmo. How are kids who grow up with robots as companions going to handle this?

Robot designers and toymakers are starting to debate this question. With advanced robotics becoming cheaper and more commonplace, the challenge isn’t how we learn to accept robots, but whether we should care when they’re mistreated. And if we start caring about robot ethics, might we then go one insane step further and grant them rights?

My guess is that, as both robotics and artificial intelligence become more sophisticated, the Turing Test will have an important part to play in the future of biological-artificial life relations. For those of you who don’t feel like wikisurfing: the Turing Test is a thought experiment for judging machine intelligence, proposed by Alan Turing (the “father of modern computer science,” who helped crack the Enigma cipher during WWII). The test is simple: a human “judge” sits at a computer terminal and types a conversation (on any topic the judge prefers) with an artificial intelligence on the other end. If the judge believes he’s talking to a human, the A.I. passes the test, having proven itself sufficiently “humanlike” to effectively be considered conscious and capable of thought.
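The setup above can be sketched in a few lines of code. This is a minimal illustration, not a real test: the `bot_reply` function below is a made-up stand-in (in a genuine trial it would be wired to an actual A.I., or to a hidden human in the control condition), and the judge’s verdict is left out entirely.

```python
import random

def bot_reply(message):
    # Hypothetical stand-in for the respondent; a real Turing Test would
    # connect this to an A.I. (or, for the control, a hidden human).
    canned = [
        "Interesting. Tell me more.",
        "Why do you say that?",
        "I hadn't thought of it that way.",
    ]
    return random.choice(canned)

def turing_session(judge_questions):
    """One judging session: the judge asks free-form questions and later
    guesses, from the transcript alone, whether the respondent is human."""
    transcript = []
    for question in judge_questions:
        answer = bot_reply(question)
        transcript.append((question, answer))
    return transcript

transcript = turing_session([
    "What's your favorite color?",
    "What did you do yesterday?",
])
for q, a in transcript:
    print(f"Judge: {q}\nRespondent: {a}")
```

The point of the sketch is structural: everything the judge learns comes through the transcript, which is exactly why the test measures persuasion rather than internal machinery.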

What’s notable about the Turing Test is that passing it means persuading a human mind, rather than meeting technical specifications determined by some board of academic experts. It reveals more about us as human judges than it does about A.I. architecture. An artificial entity’s state of consciousness is a function of our own human perception, rather than of its own technological sophistication. My own belief is that any entity capable of passing a Turing Test deserves my respect and compassion, regardless of any technical questions about the origin and genuineness of its “thoughts” or “feelings.”

I’ve always been fascinated with artificial intelligence and the human response to it. I’m an unabashed Ray Kurzweil fan (and a singularity “enthusiast”), and I love pretty much any science fiction that involves intelligent artificial life. Some of my favorites include Star Trek: TNG’s Data (tied with the fawxy Capt. Jean-Luc Picard for my favorite character) – the episode “The Measure of a Man,” in which Data’s status as an autonomous being is called into question, is particularly compelling; both “The Second Renaissance” and “Matriculated” from The Animatrix; Lester del Rey’s “Helen O’Loy,” a terrific short story about a man who falls in love with his own creation; and of course, H.A.L. 9000, who gets the award for the most menacing artificial life form in science fiction cinema.

If you’re eager to tinker around with your own artificial intelligence, PandoraBots offers free “trainable” chatbots with customizable HTML. Check out my chatbot, “Batman.”
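If you’d rather see the bare mechanics first, the oldest trick in the chatbot book is ELIZA-style pattern matching: scan the input for a pattern, then echo part of it back inside a canned template. The sketch below is a toy in that spirit, written from scratch; it is not the PandoraBots API, and the rules are invented for illustration.

```python
import re

# Ordered (pattern, response-template) rules, checked top to bottom.
# The final catch-all guarantees the bot always has something to say.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
    (r".*", "Tell me more."),
]

def reply(message):
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            # Splice any captured text into the template.
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I feel uneasy about burning Elmo"))
# -> Why do you feel uneasy about burning Elmo?
```

A handful of rules like these can feel surprisingly lifelike for a sentence or two, which is exactly the empathy reflex the post opens with; it falls apart the moment the conversation requires memory or actual understanding.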


Filed under: Science & Technology

5 Responses

  1. therealjarfa says:

    Disclaimer: not a philosophy or compsci major, may have no idea what I am talking about.
    I’ve never thought the Turing Test was that great a criterion for judging consciousness. After all, a dog could never pass that test, yet I believe that it is conscious and has several basic rights. On the other hand, you could program a computer to have a large repertoire of responses to chat inputs from a human. Yet I do not think that would necessarily signal self-awareness, and with it the right to compassion from humans. I simply fail to see what is so special about being able to mimic human conversational skills.

  2. Libby says:

    Hey Mr. Arfa, long time no see!

    Factoid: for all our advances in clock cycle speed and processing architecture, we still really suck at teaching computers how to mimic human conversation. Natural Language Processing (NLP) has proven to be quite the pickle. You can’t just program vocabulary and language rules into a machine and have it be able to carry on a coherent conversation, correctly distinguish between meanings for words that are used in many different contexts, or follow a topic beyond a few sentences. I’m not sure what the current approach(es) is/are, but I’m pretty sure it involves machines capable of learning – which is pretty amazing on its own, I think. So, without a doubt, a computer that someday can model human language as well as a native speaker will be a very sophisticated machine. And if a machine can learn something as complicated as human speech, it’s only a hop, skip, and jump for another researcher to begin teaching machines other things such as ethics, philosophy, critical reasoning, art, etc.
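    To see why word-sense disambiguation is such a pickle, here’s a toy sketch of the naive approach: look every word up in a sense inventory and always take the most common sense, ignoring context. (The sense lists here are made up for illustration.)

```python
# Hypothetical mini sense inventory: each word maps to its senses,
# most common sense listed first.
SENSES = {
    "bank": ["financial institution", "edge of a river"],
    "pitch": ["throw of a ball", "frequency of a sound", "sales talk"],
}

def naive_sense(word):
    # Always picks the first listed sense; context is ignored entirely,
    # so "bank" in "river bank" gets the same reading as in "bank loan".
    return SENSES.get(word, [word])[0]

print(naive_sense("bank"))
# -> financial institution
```

    The failure is built in: without looking at the surrounding sentence, “we fished from the bank” and “the bank approved the loan” get the same reading, which is why modern approaches have to learn from context rather than from a fixed dictionary.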

    Seriously, watch the Star Trek episode I mentioned in the post. At least read the Star Trek Wiki summary.

  3. Will Luther says:

    I do not think the Turing Test is an acceptable criterion for “‘human’ rights.” For one, I am sure there are some humans that would not pass a Turing Test. More importantly, though: robots (even if they are the product of reproductive robots) are property. And granting them “‘human’ rights” necessarily revokes the property right of the robot’s owner.

  4. […] Earlier this week, at Will’s suggestion, I watched an episode of the tv show “NUMB3RS” that involved artificial intelligence and the Turing Test. […]

