Professor Susan Greenfield, in the BBC TV series Brain Story
fleshy brain does, and that controlled your muscles and body in exactly the same way too.
Such a “robot brain” is not a programmed digital computer, any more than your fleshy brain is. There is no symbol-shuffling going on inside it. So even if Searle’s Chinese Room experiment (see pp.134–5) does establish that no programmed symbol-shuffling computer can understand, it does not establish that a robot equipped with a synthetic replica of the brain would not understand. Yet it seems Searle must deny that this robot-brained individual understands. For it is made out of the “wrong” materials.
There remains a temptation, of course, to say that, though this mechanical wonder might simulate thinking, understanding, and feeling, it is “just” a machine. But notice that this machine will itself deny that it is “just” a machine. After all, if its robot-brain architecture is exactly the same as your own, then so will be its output, with the result that its behavior will be exactly the same as well. Like you, it will insist that it has thoughts and feelings. It will also be able to tell us all about what it is like to enjoy sensations or to experience love, hate, indifference, or a deep sense of longing (or at least it will be able to do so just as well as you).
If you are still convinced that there remains an essential quality to having a mind that this man-made robot must necessarily lack, then consider one last scenario. Suppose that, over the course of a year, we gradually replaced your own neurons with robot neurons one by one. If these robot neurons do exactly the same physical job as your fleshy neurons (and let’s just stipulate that they do), then this gradual replacement will not have any effect on your brain’s functioning or on how it interacts with the rest of your body. So your outward behavior will remain entirely unaffected by the process.

It is estimated there are about one hundred billion neurons networked together in an adult human brain. Some human neurons are several feet long.
If Searle is right and possessing a mind depends on what sort of material your brain is composed of, then the effect of gradually replacing your fleshy neurons with inorganic robot-equivalents should be the gradual removal of your mind.
But how plausible is this? Presumably, your mind is something you are aware of.
Given your awareness of the mind you possess, you would notice if you began to lose it. And, were you aware of losing it, that is something you would mention.
Yet as your organic neurons were replaced by robot neurons, there would be no change at all to your behavior. You would not report any “loss.” How could you, for your behavior must remain exactly the same?
Could it be that this essential
“something” that so many of us feel sure that we are inwardly aware of, and that we suppose any robot must necessarily lack, is ultimately an illusion?
THE FUTURE OF COMPUTING
In 1965, Intel founder Gordon Moore predicted that the computing power that can be fitted into a given space will double every two years. “Moore’s law,” as it has come to be known, has turned out to be roughly correct, although it is predicted that silicon-based computers will reach their limit in about 2015. This is largely due to overheating caused by packing circuitry onto ever-smaller pieces of silicon. To break through this “silicon barrier,” new approaches to computing are being developed, including processors constructed on an atomic level that function in a completely different way to silicon chips and promise greater processing power.

Silicon-based computers can handle complex tasks, and can even be programmed to “learn” new approaches to problems.

Huge advances in computing have led to modern desktop computers that easily outperform the room-sized scientific machines of the 1960s.
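As a rough illustration of the arithmetic behind Moore’s law, the short sketch below (an illustrative back-of-envelope calculation, not from the original text) counts the doublings implied between Moore’s 1965 prediction and the projected 2015 “silicon barrier”:

```python
# Back-of-envelope sketch of Moore's law: computing power in a given
# space doubles every two years. How many doublings fall between 1965
# and the projected 2015 limit, and what overall growth does that imply?

start_year = 1965
end_year = 2015
doubling_period = 2  # years per doubling, per the prediction

doublings = (end_year - start_year) // doubling_period
growth_factor = 2 ** doublings

print(doublings)      # 25
print(growth_factor)  # 33554432
```

Twenty-five doublings compound to a growth factor of over 33 million, which helps make vivid why a modern desktop so easily outperforms a room-sized machine of the 1960s.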
The question of God’s existence is central to debates in philosophy of religion. These debates usually revolve around a particular notion of God, one that arises from the Western philosophical tradition, although it has some links with ideas in other world religions. The “God of the philosophers” is defined by two closely related and fundamental concepts. First, God is the ultimate reality, the
background against which everything else exists. Second, God is perfection.
St. Augustine (see pp.256–7) wrote that to think of God is to “attempt to conceive something than which nothing more excellent or sublime exists”—or could exist, add other philosophers.
The concepts of God as perfection and as ultimate reality are linked by the idea, deriving from Plato (see pp.244–7), that perfection and reality are intimately connected. Simply put, what is perfect is more real than what is not. Perfection