Wednesday, December 20, 2006

It's Alive! ...Or Not

This is a pretty interesting article about robotics, intelligence and inherent rights.
Robots and machines are now classed as inanimate objects without rights or duties but if artificial intelligence becomes ubiquitous, the report argues, there may be calls for humans’ rights to be extended to them.

It seems they think that around 2056 we will have seen the birth of self-aware synthetic intelligence. While their report raises some tricky ethical and moral questions, I have to wonder just how optimistic they are being with their estimates. Remember, we were supposed to have conquered this realm by now.

I suppose I cannot fault them for their predictions (or for looking past those predictions to the impact on society). There is a school of thought that intelligence is emergent. That is, interactions between simple mechanisms become more and more complex as the scale increases. At some threshold of scale, the complex behavior appears to self-organize, producing interactions that cannot be predicted by extrapolating from the behavior of the fundamental parts. Typically, this is called Strong Emergence (it's hard to see how Weak Emergence could give rise to intelligence). And as we can all testify, our computers, phones, and entertainment systems are becoming more complex every day. So there is something to be said for this becoming a possibility, given the seemingly geometric increases we see happening.
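The classic toy illustration of this idea is Conway's Game of Life. Every cell obeys one trivial rule (survive with two or three live neighbors, be born with exactly three), yet the grid produces structures, like the "glider" below, whose behavior you would never guess just from reading the rule. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 neighbors,
    # or 2 neighbors and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "glider": five cells that crawl diagonally across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four generations the whole pattern reappears shifted by (1, 1) --
# nothing in the one-cell rule mentions motion at all.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing here claims this *is* intelligence, of course; it's just the cleanest demonstration that behavior at one scale need not be readable off the rules at the scale below.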

Materialists (in the broadest sense) say our brains are purely algorithmic in nature and the mind is simply an artifact of an extremely complex machine. Forget free will; there is no such thing here. Everything is ruled by stimulus and response, even our innermost thoughts. Hell, even this little essay has been determined by past experience and external stimuli, and every word could have been accurately predicted given a complete mathematical model of my brain.

This view is usually called computationalism (Searle's "Biological Naturalism," despite the similar-sounding pedigree, actually rejects it). At its core, it's a rejection of the duality of mind and body. Given a sufficiently complex model, a simulation would in essence no longer be a model but an independent, self-aware consciousness. Of course, building this kind of system would make accurate weather modeling (still out of our grasp) seem simple by comparison. In essence, we would be building a brain.

There's another idea in the materialist camp: that the form is much more complex than we can ever hope to understand, that intelligence is intrinsically wedded to quantum mechanics. Roger Penrose goes on about this in some of his writings. With a clever application of Gödel's Incompleteness Theorems and the halting problem (the impossibility of deciding, in general, whether a given program will ever finish running), he posits that while consciousness ultimately arises from structure, it is beyond what we will ever be able to deduce, because we can never have a complete and consistent model. The ability to understand, much less create, self-awareness will forever be out of reach thanks to Gödel. This idea is very controversial and I'm not touching it with a ten-foot pole. Plus the math is way beyond me (and I'm talking many light-years beyond).
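For what it's worth, the halting problem itself doesn't need heavy math; Turing's argument is a one-paragraph self-reference trick. Here's a sketch of it in code (the names `make_trouble`, `always_no`, and `trouble` are mine, just for illustration):

```python
def make_trouble(halts):
    """Given a claimed halting-checker `halts(prog)` -> True iff prog()
    eventually finishes, build the program that defeats it."""
    def trouble():
        if halts(trouble):
            while True:   # claimed to halt -> loop forever
                pass
        return "halted"   # claimed to loop -> halt immediately
    return trouble

# Any concrete checker is wrong about its own trouble program. For example,
# a checker that always answers "never halts" is refuted on the spot:
always_no = lambda prog: False
trouble = make_trouble(always_no)
print(trouble())  # "halted" -- so always_no was wrong about trouble
```

Whatever a checker answers about its own `trouble`, the answer is wrong, so no general checker can exist. Penrose's leap from this to consciousness is the controversial part, not the theorem.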

I will touch on simulation, though. Barring any major breakthroughs in neuroscience, our understanding of chaos, and fast analog, multi-state computers (or quantum computers), simulations will most likely remain pale imitations of Homo sapiens sapiens.

But if a simulation is convincing enough to pass a Turing Test, we start moving out of the shallows of epistemology and into the deep, scary waters of existentialism. Here, we question the validity of robotic rights. Would the concept even apply to pure simulations? Or would we be using these simulated beings as a mirror, trying to put limits and restrictions on our seemingly inherent brutality and callousness?

These are questions that will most likely never be answered. Nor should they be, really. It is the searching -- the blind groping -- that makes it worthwhile. It's what we trip over and discover, in our own ineffable, blundering way, while trying to answer the unanswerable. What we discover there -- those are the real treasures of humanity.

However, there are greater, more pressing questions that we should begin to ask ourselves before they become moot. How will we relate to a created intelligence? What common ground can we have, given such wildly different environments?

And even more fundamental: Would we even be able to recognize the existence of an intelligence that would be so completely different from our own?