A few weeks ago, I blogged about the past half-century's startling advances in computer competence. Referring obliquely to the Turing test, I mused, "Makes me wonder: Just how long will it be before we can no longer tell our computers from ourselves?"
A week later, as fate would have it, I showed up in a classroom on Stanford's quad for a discussion between UC-Berkeley philosopher John Searle, PhD, and Stanford artificial-intelligence expert Terry Winograd, PhD, concerning a similar-sounding but subtly deeper question: "Can a computer have a mind?"
Failed philosophy major that I am (see confession), I refrained from raising my hand while Searle recapped his famous "Chinese room" argument. "I don't understand a word of Chinese," he told the audience. But arm him with sufficiently detailed instructions for matching any incoming combination of Chinese characters to an appropriate outgoing one, he claimed, and he could fool a remote observer into concluding otherwise. (Philosophers are always "claiming" something or other. How nostalgic!)
Sure, machines might be able to "think" in the sense of manipulating symbols, said Searle. But when it comes to consciousness, such "thoughts" do not a mind make. Syntax (the manipulation of symbols, nothing but ones and zeroes in this case) isn't the equivalent of semantics (the effect of those manipulations on our consciousness: in a word, "meaning").
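For the programmers in the room, Searle's setup caricatures neatly in a few lines of Python. This is only a toy sketch; the rulebook entries below are my own hypothetical stand-ins for his "sufficient instructions":

```python
# Searle's Chinese room as pure symbol manipulation: the program
# matches incoming character strings to outgoing ones without any
# grasp of what either side means.

# Hypothetical rulebook; a real one would need vastly more entries.
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message: str) -> str:
    """Return the counterpart symbols for the input symbols.

    Syntax only: nothing here represents, let alone understands,
    the meaning of the characters being shuffled around.
    """
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗?"))  # Fluent-looking output, zero comprehension
```

Make the table arbitrarily large and the remote observer may well be fooled; Searle's point is that no entry in it ever touches meaning.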
"We still don't know how the brain creates consciousness," Searle said, arguing that to fully understand subjectivity, it will be necessary not merely to simulate brain function but to duplicate it. (A street map is not the same as the city it's a map of.) That's a comforting constraint for carbon-based throwbacks such as myself, who would like to feel our dominance is assured, at least for a while, by the excruciating nested complexity of the biological components-within-components-within-components of the human brain.
Aha! The Devil is in the details. (The Tom Südhofs of the world are busily working those out as I write this.) Score one for biology: A ones-and-zeroes-based gizmo, which can't even sprout body hair, may never acquire that precious thing called "consciousness." At least, not on its own.
But what if nanotech and biotech team up?
Once upon a time, before coming to Stanford, I wrote an article titled "The 21st Century Meets the Tin Woodsman" and subtitled: "Can Joe Six-Pack compete with Sid Cyborg?" Consider a scenario wherein computation- and communication-enabled nanoparticles, ingested in a pill, float through the blood-brain barrier and seat themselves at each of the quadrillion or so nerve-cell-to-nerve-cell contact points in a person's central nervous system:
With nanobots monitoring every critical neural connection's involvement in a thought or emotion or experience, you'll be able to back up your brain - or even try on someone else's - by plugging into a virtual-reality jack. The brain bots feed your synapses the appropriate electrical signals and you're off and running, without necessarily moving. If nanotechnology gets traction, all bets are off, because whoever's packing those brain bots will be infinitely more intelligent than mortal meat puppets like me ... I hope our sleek semiconducting successors like pets, because, while the mammalian herding instinct ensures that many of us will go along for the ride, characteristic human obstinacy ensures that many will not.
Call me obstinate. To the best of my knowledge, I'm still 100 percent human. But in ten or twenty years, at the rate things are going, how will I be sure that you are, too?
Previously: "Step by step, Südhof stalked the devil in the details, snagged a Nobel"; "Half-century climb in computer's competence colloquially captured by Nobelist Michael Levitt"; and "Brains of different people listening to the same piece of music actually respond in the same way"
Photo by Javi