How Did Alan Turing Propose to Test Whether a Computer Can Think?

Artist Stephen Kettle’s stacked slate sculpture of Alan Turing. (Photo: Steve Parker, Flickr, CC)

Editors’ Note: This is the third in a series of four essays, written by Jack Copeland, spotlighting Alan Turing, who is considered the father of modern computing and whose work breaking German codes changed the course of World War II.

How could researchers tell if a computer—whether a humanoid robot or a disembodied supercomputer—is capable of thought? This is not an easy question. For one thing, neuroscience is still in its infancy. Scientists don’t know exactly what is going on in our brains when we think about tomorrow’s weather, or plan out a trip to the beach—let alone when we write poetry, or do complex mathematics in our minds. But even if we did know everything there is to know about the functioning of the brain, we might still be left completely uncertain as to whether entities without a human (or mammalian) brain could think. Imagine that a party of extraterrestrials find their way to Earth and impress us with their mathematics and poetry. We discover they have no organ resembling a human brain; inside they are just a seething mixture of gases, say. Does the fact that these hypothetical aliens contain nothing like human brain cells imply that they do not think? Or is their mathematics and poetry proof enough that they must think—and so also proof that the mammalian brain is not the only way of doing whatever it is that we call thinking?

Of course, this imaginary scenario about aliens is supposed to sharpen up a question that’s much nearer to home. For alien, substitute computer. When computers start to impress us with their poetry and creative mathematics—if they don’t already—is this evidence that they can think? Or do we have to probe more deeply, and examine the inner processes responsible for producing the poetry and the mathematics, before we can say whether or not the computer is thinking? Deeper probing wouldn’t necessarily help much in the case of the aliens—because ex hypothesi the processes going on inside them are nothing like what goes on in the human brain. Even if we never managed to understand the complex gaseous processes occurring inside the aliens, we might nevertheless come to feel fully convinced that they think, because of the way they lead their lives and the way they interact with us. So does this mean that in order to tell whether a computer thinks, we only have to look at what it does—at how good its poetry is—without caring about what processes are going on inside it?

That was certainly what Alan Turing believed. He suggested a kind of driving test for thinking, a viva voce examination that pays no attention at all to whatever causal processes are going on inside the candidate—just as the examiner in a driving test cares only about the candidate’s automobile-handling behavior, and not at all about the nature of the internal processes that produce the behavior. Turing called his test the “imitation game,” but nowadays it is known universally as the Turing test.

Turing’s test works equally well for computers or aliens. It involves three players: the candidate and two human beings. One of the humans is the examiner, or “judge,” and the other is the “foil,” or comparator. The idea of the test is that the judge must try to figure out which of the other two participants is which, human or non-human, simply by chatting with them. The session is repeated a number of times, using different judges and foils, and if the judges are mistaken often enough about which contestant is which, the computer (or alien) is said to have passed the test. Turing stipulated that the people selected as judges “should not be expert about machines.”
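To make the structure of the test concrete, here is a minimal sketch in Python. The judge and player interfaces (questions, reply, identify_machine) are hypothetical stand-ins of mine, and the pass criterion shown, judges doing no better than chance, is an illustrative simplification rather than anything Turing laid down precisely.

```python
import random

def imitation_game(judge, machine, foil, sessions=10):
    """Minimal sketch of the test's structure. In each session the judge
    chats, by text alone, with the machine and a human foil hidden behind
    the labels "A" and "B", then guesses which label hides the machine."""
    mistakes = 0
    for _ in range(sessions):
        labels = {"A": machine, "B": foil}
        if random.random() < 0.5:  # seat the contestants in random order
            labels = {"A": foil, "B": machine}
        # Q & A only: no peeping, no instruments, just typed conversation.
        transcripts = {
            label: [(q, player.reply(q)) for q in judge.questions()]
            for label, player in labels.items()
        }
        guess = judge.identify_machine(transcripts)  # returns "A" or "B"
        if labels[guess] is not machine:
            mistakes += 1
    # Illustrative pass criterion: the judges do no better than chance.
    return mistakes / sessions >= 0.5
```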

Turing imagined conducting these conversations via an old-fashioned teleprinter, but nowadays we would use email or text messages. Apart from chatting, the judges must be kept strictly out of contact with the contestants—no peeping is allowed. Nor, obviously, are the judges allowed to measure the candidates’ magnetic fields, or their internal temperatures, or their processing speeds. Only Q & A is permitted, and the judges must not bring any scientific equipment along to the venue. Justifying his test, Turing said: “The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include” (and his own list of suitable fields for testing included mathematics, chess, poetry, and flirting). Turing added drolly, “We do not wish to penalize the machine for its inability to shine in beauty competitions,” making the point that his question-and-answer test excludes irrelevant factors.

The judges may ask questions as wide-ranging and penetrating as they like, and the computer is permitted to use “all sorts of tricks” to force a wrong identification, Turing said. So smart moves for the computer would be to reply “No” in response to “Are you a computer?” and to follow a request to multiply one huge number by another with a long pause and an incorrect answer—but a plausibly incorrect answer, not simply a random number. In order to fend off especially awkward questioning, the computer might even pretend to be from a different (human) culture than the judge. In fact, it is a good idea to select the test personnel so that, from time to time, the judge and the foil are themselves from different cultures. Here is Turing’s own example, dating from 1950, of the sort of conversation that could occur between a judge and a computer that successfully evades identification:
Judge: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?

Machine: It wouldn’t scan.

Judge: How about “a winter’s day”? That would scan all right.

Machine: Yes, but nobody wants to be compared to a winter’s day.

Judge: Would you say Mr. Pickwick reminded you of Christmas?

Machine: In a way.

Judge: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

Machine: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

Turing was a little cagey about what would actually be demonstrated if a computer were to pass his test. He said that the question “Can machines pass the test?” is “not the same as ‘Do machines think,’” but, he continued, it “seems near enough for our present purpose, and raises much the same difficulties.” In one of his philosophical papers, he even cast doubt on the meaningfulness of the question “Can machines think?” saying (rather rashly) that the question is “too meaningless to deserve discussion.” However, he himself indulged in such discussion with gusto. In fact he spoke very positively about the project of “programming a machine to think” (his words), saying “The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.”

Turing was also cagey about how long he thought it would be before a computer passes the test. He said (in 1952) that it would be “at least 100 years” before a machine stood any chance of passing his test with no questions barred. This was a sensibly vague prediction, making it clear that Turing appreciated the colossal difficulty of equipping a computer to pass the test. Unfortunately, though, there is an urban myth that Turing predicted machines would pass his test by the end of the twentieth century—with the result that he has been unfairly criticised not only for being wrong, but also for being “far too optimistic about the task of programming computers to achieve a command of natural language equivalent to that of every normal person,” as one of his critics, Martin Davis, put it. Given Turing’s actual words (“at least 100 years”), this criticism is misguided.

There is another widespread misunderstanding concerning what Turing said. He is repeatedly described in the (now gigantic) literature about the Turing test as having intended his test to form a definition of thinking. However, the test does not provide a satisfactory definition of thinking, and so this misunderstanding of Turing’s views lays him open to spurious objections. Turing did make it completely clear that his intention was not to define thinking, saying “I don’t really see that we need to agree on a definition at all,” but his words were not heeded. “I don’t want to give a definition of thinking,” he said, “but if I had to I should probably be unable to say anything more about it than that it was a sort of buzzing that went on inside my head.”

Someone who takes Turing’s test to be intended as a definition of thinking will find it easy to object to the definition, since an entity that thinks could fail the test. For example, a thinking alien might fail simply because its responses are distinctively non-human. However, since Turing didn’t intend his test as a definition, this objection misses the point. Like many perfectly good tests, Turing’s test is informative if the candidate passes, but uninformative if the candidate fails. If you fail an academic exam, it might be because you didn’t know the material, or because you had terrible flu on the day of the exam, or for some other reason—but if you pass fair and square, then you have unquestionably demonstrated that you know the material. Similarly, if a computer passes Turing’s test then the computer thinks, but if it fails, nothing can be concluded.

One currently influential criticism of the Turing test is based on this mistaken idea that Turing intended his test as a definition of thinking. The criticism is this: a gigantic database storing every conceivable (finite) English conversation could, in principle, pass the Turing test (assuming the test is held in English). Whatever the judge says to the database, the database’s operating system just searches for the appropriate stored conversation and regurgitates the canned reply to what the judge has said. As philosopher Ned Block put it, this database no more thinks than a jukebox does, yet in principle it would succeed in passing the Turing test. Block agrees that this hypothetical database is in fact “too vast to exist”—it simply could not be built and operated in the real world, since the total number of possible conversations is astronomical—but he maintains that, nevertheless, this hypothetical counterexample proves the Turing test is faulty.
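The machine Block imagines is easy to caricature in code. Here is a toy Python version (the class name and the stored exchange are my own illustrative inventions, not Block’s):

```python
class BlockheadBot:
    """Sketch of Block's hypothetical machine: every reply is retrieved
    from a table keyed on the entire conversation so far. Nothing is
    computed beyond the lookup, hence the jukebox comparison."""

    def __init__(self, canned_conversations):
        # Maps a tuple of all utterances so far (judge and machine
        # alternating) to the next canned reply.
        self.table = canned_conversations
        self.history = ()

    def reply(self, judge_says):
        self.history += (judge_says,)
        answer = self.table[self.history]  # pure retrieval, no thought
        self.history += (answer,)
        return answer

# Toy table with a single stored exchange. Block's point is that a table
# covering every possible English conversation is "too vast to exist".
bot = BlockheadBot({("Are you a computer?",): "No."})
print(bot.reply("Are you a computer?"))  # -> No.
```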

It’s true that the database example would be a problem if the Turing test were supposed to be a definition of thinking, since the definition would entail that this monster database thinks, when obviously it does not. But the test is not supposed to be a definition and the database example is in fact harmless. Turing’s interest was the real computational world, and the unthinking database could not pass the Turing test in the real world—only in a sort of fairyland, where the laws of the universe would be very different. In the real world, there might simply not be enough atoms in existence for this huge store of information to be constructed; and even if it could be, it would operate so slowly—because of the vast numbers of stored conversations that must be searched—as to be easily distinguishable from a human conversationalist. In fact, the judge and the foil might die before the database produced more than its first few responses.
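A bit of back-of-envelope arithmetic makes the point vivid. The figures below are illustrative assumptions of mine, but any remotely realistic choices give the same verdict:

```python
# Purely illustrative arithmetic; the vocabulary and length figures are
# assumptions, not Block's. Suppose a conversation is a sequence of 500
# words drawn from a 10,000-word vocabulary:
vocabulary_size = 10_000
conversation_length = 500
possible_conversations = vocabulary_size ** conversation_length  # 10**2000

atoms_in_observable_universe = 10 ** 80  # standard rough estimate
print(possible_conversations > atoms_in_observable_universe)  # True
```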

Another famous (but misguided) criticism of the Turing test is by philosopher John Searle. Searle is one of AI’s greatest critics, and a leading exponent of the view that running a computer program can never be sufficient to produce thought. His objection to the Turing test is simply stated. Let’s imagine that a team in China, say, produces a computer program that successfully passes a Turing test in Chinese. Searle ingeniously proposes an independent method for testing whether running the program really produces thought. This is to run the program on a human computer and then ask the human, “Since you are running the program—does it enable you to understand the Chinese?” Searle imagines himself as the human computer. He is in a room provided with many rulebooks containing the program written out in plain English; and he has an unlimited supply of paper and pencils. As with every computer program, the individual steps in the program are all simple binary operations that a human being can easily carry out using pencil and paper, given enough time.

In Searle’s Turing test scenario, the judge writes his or her remarks on paper, in Chinese characters, and pushes these into the room through a slot labeled INPUT. Inside the room, Searle painstakingly follows the zillions of instructions in the rulebooks and eventually pushes more Chinese characters through a slot labeled OUTPUT. As far as the judge is concerned, these symbols are a thoughtful, intelligent response to the input. But when Searle, a monolingual English speaker, is asked whether running the program is enabling him to understand the Chinese characters, he replies “No, they’re all just squiggles and squoggles to me—I have no idea what they mean.” Yet he is doing everything relevant that an electronic computer running the program would do: The program is literally running on a human computer.
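The mechanics of the room can be caricatured in a few lines of Python. This rulebook is deliberately trivial (Searle’s imagined program would be vastly more complex, and the sample sentences and fallback reply here are my own inventions), but it shows the key point: the executor manipulates symbols without interpreting them.

```python
# Toy "rulebook": match the incoming symbols, copy out the prescribed
# reply. The executor follows the rules mechanically and never needs to
# know what any of the symbols mean.
RULEBOOK = {
    "你好。": "你好！",          # "Hello." -> "Hello!"
    "你懂中文吗？": "我当然懂！",  # "Do you understand Chinese?" -> "Yes, I certainly do!"
}

def chinese_room(input_slip):
    # Push a slip through INPUT, apply the rules, push the result
    # through OUTPUT. No understanding is required at this step.
    return RULEBOOK.get(input_slip, "请再说一遍。")  # "Please say that again."

print(chinese_room("你懂中文吗？"))  # -> 我当然懂！
```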

This is Searle’s famous “Chinese Room” thought experiment. He says the thought experiment shows that running a mere computer program can never produce thought or understanding, even though the program may pass the Turing test. However, there is a subtle fallacy. Is Searle in his role as human computer the right person to tell us whether running the program produces understanding? After all, there is another conversationalist in the Chinese Room—the program itself, whose replies to the judge’s questions Searle delivers through the output slot. If the judge asks (in Chinese) “Please tell me your name,” the program responds “My name is Amy Chung.” And if the judge asks “Amy Chung, do you understand these Chinese characters?” the program responds “Yes, I certainly do!”

Should we believe the program when it says “Yes, I am able to think and understand”? This is effectively the very same question that we started out with—is a computer really capable of thought? So Searle’s gedankenexperiment has uselessly taken us round in a circle. Far from providing a means of settling this question in the negative, the Chinese Room thought experiment leaves the question dangling unanswered. There is nothing in the Chinese Room scenario that can help us decide whether or not to believe the program’s pronouncement “I think.” It certainly does not follow from the fact that Searle (beavering away in the room) cannot understand the Chinese characters that Amy Chung does not understand them.

Alan Turing’s test has been attacked by some of the sharpest minds in the business. To date, however, it stands unrefuted. In fact, it’s the only viable proposal on the table for testing whether a computer is capable of thought.

Questions for Discussion:

Will computers pass the Turing test in our lifetimes?

Could a computer really think?

Does a flawless simulation of thinking count as thinking?

Is there any systematic way of unmasking the computer in the Turing test (without infringing the rules of the test)?

Could IBM’s ‘Watson’ pass the Turing test?

Is there a difference between being able to think and being conscious?

Is there a difference between being intelligent and being able to think?

If computers do eventually pass the Turing test, what will follow for the human race?

An AI expert once predicted that, if we are lucky, the superintelligent computers of the future may keep us as pets. How real is the danger that Artificial Intelligences will take over?

Are human beings soft cuddly computers?

If the ability to think is not exclusive to humans, is there any quality that would distinguish human beings from the products of technology?

Does Searle’s Chinese Room thought experiment show that thinking is more than number-crunching and symbol-crunching?

If everything counts as a computer, then the human brain is a computer, and so it’s trivially true that computers can think. Is the brain really a computer? Is there anything that isn’t a computer, or does every object have some level of description relative to which it is a computer?

If computers can think, then can neuroscience tell us anything about thinking?

10 Responses

  1. James Laird says:

    In the future, I believe that computers will develop the ability to think. Presently, however, computers only emulate/simulate previous human thinking.

    Human thinking involves billions of neural interactions that occur generally in a *parallel* manner, which thereby results in new emergent forces exerted by our thoughts. I’m thinking that’s the whole idea of emergence – something new that comes into existence due to the simultaneous activity of *lots* of lower-level components (i.e., billions of neurons firing in a coordinated manner). There are no “thoughts” existing inside computers today. Computers simply run high-speed serial processes that are pre-programmed by previous human thinking.

    All of the activity inside a computer is controlled solely by the four fundamental forces of physics (4FFOP). There isn’t any new life therein, which is different than human thinking – human thoughts exert new emergent forces – new life. We know that’s true, since we experience mental causation to be true (i.e., one thought in your mind has the ability to affect or influence another thought within your mind, and the intelligent interaction of your thoughts cannot be controlled solely by the 4FFOP if mental causation is true, unless you believe that the source of human intelligence is somehow innate to the 4FFOP).

    In the future when computers are comprised of many processors that coordinate simultaneously in parallel, thereby processing millions/billions of instructions at a time, I believe computers will develop new emergent entities/properties that are similar to human thoughts, and those new entities will exert new emergent forces (i.e., new life) in a manner similar to human thoughts. That hypothesis is consistent with what we observe in nature, since the living things all around us are complex organisms with millions/billions of events happening simultaneously which thereby cause higher level living forces to emerge. Happy holidays, tmsolf.org

    • Jack Copeland says:

      James, Why do you say that a massively parallel computational architecture is capable of producing ‘new emergent forces’ and ‘new life’, but that a programmed serial architecture cannot do these things? Can the serial machine not in principle give a perfect simulation of the computations being done by the parallel machine? And if so, why would the computations in one architecture be unable to cause ‘new emergent forces’ when the equivalent computations in a different architecture are able to do this?

  2. James Laird says:

    Jack,

    I agree with you that it’s possible for a serial machine to give a perfect simulation of the computations performed by a parallel machine. Having said that, please consider the following ideas:

    1. It’s possible for a parallel system to have emergent properties that are different in nature than a serial system.

    2. When a serial system *simulates* the emergent properties of a parallel system, the results are due to preprogramming – the control is solely from the 4FFOP as the encoded software is running. You mentioned in your last comment that a programmed serial machine should be able to produce “new emergent forces” and “new life”. What I’m saying is that a preprogrammed serial system is only capable of simulating “new emergent forces” and simulating “new life”. It doesn’t actually have the parallel processes happening in real time that are required in order to produce non-simulated new forces and new life.

    3. I think it’s important to keep in mind that a physical human brain isn’t about computations, and therefore I don’t think it’s fair to compare the serial processes (i.e., computations) produced by a computer to the parallel processes (i.e., thoughts) produced by a human brain.

    4. There’s something about the nature of life (I don’t claim to understand it, I’m just claiming that it’s happening), wherein new forces tend to emerge when *lots* of coordinated activity happens from subcomponents. That supports my claim that serial processes don’t produce the same emergent properties as parallel systems.

    In summary, a human brain isn’t like a machine; instead, it’s completely different – it’s alive. (The activity that occurs within a brain isn’t predeterministic in nature like computer algorithms. Instead, new forces emerge in human brains, as I argued in my previous comment.)

  3. James Laird says:

    Jack,

    I had an additional thought… Here’s an example that illustrates how parallel systems may have different emergent properties than serial systems.

    Imagine 100 billion water molecules combined in a small 3-space, at a temperature of 31 degrees Fahrenheit. The property of “ice” emerges.

    Now imagine that a single water molecule is moving at high speed sequentially through each of the positions held by the individual water molecules located in the ice lattice described above. There is only one molecule in this second scenario, and a perfect simulation of the ice lattice is created by using a serial process. The property of “ice” doesn’t emerge.

    • Jack Copeland says:

      James,

      Interesting.

      Focussing on your:

      1. It’s possible for a parallel system to have emergent properties that are different in nature than a serial system.

      I guess this becomes false (would you agree?) if we add a few words along the following lines:

      1′. It’s possible for a parallel system to have emergent properties–in virtue solely of the computations that it performs–that are different in nature than a serial system.

      So, if I understand you correctly, when you emphasise the importance of parallelism for the emergence of mental properties, you are not really taking sides in a debate about parallel computation versus sequential computation. You are saying in effect that computation may not matter very much for the emergence of mental properties, and that what does matter much more is some relatively unknown physico-chemical processing that isn’t essentially computational at all (although is probably simulable by computer) and which must take place in a highly parallel manner.

      I think that’s an interesting conjecture but it still leaves me wondering why parallelism is so important. Another question concerns simulation. If it is agreed that a sequential computer can give a perfect simulation of the activity of your highly parallel bio-device, then why can’t the sequential machine be said to be thinking too? Why isn’t thinking one of those cases where a perfect simulation of X-ing simply is X-ing? Thought ain’t ice!

  4. James Laird says:

    Jack,

    I’m thinking that we need to reach agreement on what the word “simulation” means. To me, it means the process of creating a new entity that’s similar in nature to a preexisting entity. The new entity is only similar, it’s not identical in *every* way to the preexisting entity. In that sense of the word “simulation”, when preexisting entity “A” is simulated by entity “B”, A will have some emergent properties that B won’t have.

    So when someone uses the term “perfect simulation”, I’m thinking that there needs to be some qualifying parameters. For example, you could say that B is a perfect simulation of A if properties X, Y, and Z of entity B perfectly match properties X, Y, and Z of entity A.

    As of today, I don’t believe mankind has the ability to create a perfect simulation of a living system wherein *all* properties of B identically match all properties of A.

    Here’s where I’m going with this… When you talk about a serial machine performing a perfect simulation of a human brain, I’m thinking that the computer must be capable of exerting new living forces (i.e., forces which aren’t simply a direct sum of preexisting forces) in a manner similar to what happens inside a physical brain. I wouldn’t call it a “perfect simulation” unless the computer can do that. To date, mankind hasn’t produced computers that are capable of exerting living forces (that I’m aware of).

    Someday in the future, when computers are able to process a *huge* number of instructions simultaneously in parallel, I believe new life will emerge (i.e., living forces) and computers will no longer be simulating thinking, they’ll be doing real thinking. Higher level entities emerge in a human brain (e.g., your thoughts) and those entities exert new emergent forces that are caused by (but not determined by) billions of chemical reactions that occur simultaneously in a coordinated manner. None of those individual chemical reactions are the sole source of where your intelligence comes from. Your intelligence comes from something that emerges at a higher level, the *pattern* level of neurological activity, and in a similar manner, when computers develop the ability to think, their thinking won’t be associated with a single instruction that’s executed by a single processing node – their thinking will emerge from the collective activity of billions of instructions happening simultaneously. It’s a process that somehow has to do with life (and I’m not claiming to understand how that works).

    • Jack Copeland says:

      Hi James,

      Couple quick questions.

      Why is simultaneity so important to your account? You talk about the ‘collective activity of billions of instructions happening simultaneously’ being at the core of thought. Why do the computational operations have to happen simultaneously? Why can’t they happen in a temporal neighborhood? And if the answer is that occurrence within a neighborhood would be just as good as absolute simultaneity, so far as your hypothesis is concerned, then doesn’t that raise the possibility of an equivalent sequential simulation, with the computationally equivalent sequential operations all happening fast enough to fall inside that temporal neighborhood?

      Your phrase ‘new emergent forces that are caused by (but not determined by) billions of chemical reactions’ caught my eye. How does ‘caused but not determined’ work? And why is it important to your account that the upwards causation not be deterministic? Is this to try to avoid the ever-present threat that the new ’emergent’ forces will simply be downwards reducible to activity at the lower level?

      I have some comments about the nature of simulations too, but these will have to wait for a few hours–I’m heading off to the New Zealand Alps for the day…

      • James Laird says:

        Jack,

        I tried to post a response last night, but I think my text was a little too long and it didn’t go through.

        I’m taking off for some Christmas travel now, but I’ll keep an eye out for your next essay on BQO.

        Happy holidays!

  5. James Laird says:

    Jack,

    The New Zealand Alps sound awesome! I walked the Milford Track several years ago, and I’ve seen some of the beauty that New Zealand has to offer.

    If millions of simulated temporal neighborhoods are connected together and they’re all active at the same time in a coordinated manner, then I would think that a higher form of life would begin to emerge from that system (e.g., the Internet). I know I’m really vague on the reasons why I think parallel processes are so important, and I wish I had more of an explanation to offer along those lines. When I observe reality around me, it seems obvious that there’s a fundamental requirement for *lots* of things to be happening simultaneously in a coordinated manner in order for new properties to emerge. Isn’t that a reasonable belief? When I see only a few things happening in parallel, or when I see events happen in a serialized manner, I just don’t see lots of new emergent properties.

    The idea of something being “caused by, but not determined by” is what I think the fundamental of “life” is all about. I believe that human thoughts are a perfect example of that; our thoughts are caused by billions of neurons firing in a coordinated manner, but the forces exerted *by* our thoughts aren’t predeterministic in nature; they aren’t controlled *solely* by the 4FFOP in a bottom-up manner. If it’s okay with the BQO folks, I’d like to insert an argument that I’ve written that supports the idea that human thoughts exert new emergent forces. I believe the argument is relevant to the computers vs. human brains discussion we’re having on this thread and it will provide additional value.

    Jack, if you believe that the following argument holds any water, perhaps you’ll be more convinced that in order for a serial machine to “perfectly simulate” human thinking, the computer must be capable of exerting new emergent forces. (If you have time, you might enjoy visiting tmsolf.org; it’s a website I’ve written and it explains several of the points I’m making herein and it’s more comprehensive.) Okay, here’s the argument:

    Do Human Thoughts Exert New Emergent Forces?

    Here are two arguments supporting the idea that new forces are an emergent property of human thoughts. These arguments suggest that the interaction between two thoughts within a physical brain isn’t controlled *solely* by the four fundamental forces of physics (4FFOP). By showing that new forces are an emergent property of human thoughts, these arguments also show that free will (in the strong sense) exists.

    I don’t think anyone would disagree with the following two statements: 1. Intelligence is associated with human thoughts. 2. One thought is able to affect, influence, or interact with another thought within a physical human brain (i.e., mental causation is true).

    Argument #1: When two thoughts interact with one another inside a physical brain (i.e., mental causation), the *intelligence* associated with one thought is able to interact with the *intelligence* associated with another thought. If the intelligent interaction of two thoughts is controlled solely by the 4FFOP, then mental causation must be false. In other words, if all of the control happens strictly from the 4FFOP, then one thought doesn’t truly affect another thought – the interaction is simply an illusion and all of our apparent logic (i.e., intelligence) is sourced directly from the 4FFOP. Since every human consistently *experiences* mental causation to be true, isn’t there sufficient reason to believe that our thoughts exert new emergent forces?

    Argument #2: Here’s a different way to state the same argument: A person may argue that the intelligent interaction between two thoughts is controlled solely by the current neural net wiring of a physical brain and the 4FFOP, but there’s a flaw in that line of thinking. The intelligence associated with a human thought exists primarily at the “pattern level” of neurological activity, not at the individual neuron level. Therefore, in order for interaction to occur between the intelligence associated with one thought and the intelligence associated with another thought, there must be interaction at the pattern level of neurological activity; there must be forces exerted *from* the pattern level, and those forces aren’t simply a summation of the 4FFOP for two reasons: First, because that would mean there is *no interaction* at the pattern level (i.e., mental causation must be false), and instead, the intelligence is innate to the 4FFOP. Second, because the forces exerted from the “pattern level” emerge in a different field than the 4FFOP; therefore, the intelligent forces cannot be a summation of the 4FFOP since forces located in different fields don’t add directly with one another. The forces exerted by our thoughts are caused by lower-level neural activity, but the forces exerted by our thoughts aren’t predetermined; they’re new emergent forces exerted from the pattern level.

    Imagine two ocean waves traveling across the surface of the ocean directly toward one another. The two waves run into each other and there’s interaction. What forces control the manner in which those two waves interact? I believe it’s reasonable to state that the 4FFOP control all of the activity associated with the interaction of those two waves. In addition, I believe it’s reasonable to state that there isn’t any intelligence associated with the interaction of the two ocean waves.

    Isn’t it fair to say that there’s a fundamental difference between the interaction of two ocean waves and the interaction of two human thoughts? Human thoughts don’t just “run into one another” inside of a physical brain thereby causing conclusions to be reached based solely upon the 4FFOP. There’s *intelligence* associated with the interaction of thoughts, and as argued above, that intelligence isn’t innate to the 4FFOP.

    The same principle regarding emergent intelligence explained above applies to human learning: Where does the intelligence come from that’s associated with the forces that change a person’s neural net wiring on the fly while they’re learning something new? There *must* be intelligence associated with those forces; otherwise the changes made to a person’s neural wiring would be random in nature and the person wouldn’t learn anything.

    In summary: Instead of believing that human intelligence is somehow innate to the 4FFOP (and mental causation is therefore false), isn’t it more reasonable to believe that intelligence is an emergent property caused by billions of neurons firing in a coordinated manner inside a human brain? In addition, if human thoughts exert new emergent forces, isn’t that a sufficient reason to believe humans have free will, and are therefore capable of exerting control?

    • Jack Copeland says:

      James,

      I climbed up onto a ridge between two lakes, one black, one brilliant blue. Just gazing at New Zealand is restorative!

      I hope you are enjoying your holiday travels and that you manage to read this sometime.

      I said I would talk a bit about the idea of simulation. Simulation is indexed to a computational level. The levels of a device stretch on up from the basic computational operations wired into the hardware (the raw machine operations, you could call them), up on through all those levels that are built upon the raw machine operations. The basic computational operations that are available at a higher level may bear little resemblance to the raw machine operations, even though in some sense the upper level operations are ’emergent from’ the raw machine operations. (An upper level could even offer parallelism, even though (say) the raw machine operations are entirely sequential.) An example of a high level that most people are familiar with is the environment of MS Word. The menu operations available at this level (search, change case, italicise, and so on) bear little resemblance to the raw machine operations (of your laptop, say) that they are emergent from.

      So: to say that some selected level L associated with some hardware device D is being simulated by some other device S is to say that, at some level, S is carrying out (a superset of) the same computational operations that are occurring at L in D. So D and S share a level, you might say, and during the simulation, the same computations are going on at each of those shared levels. But if you look beneath those levels, everything could be very different in the two devices. The raw machine operations of D, from which the operations at level L are emergent, might be utterly different from the raw machine operations of device S.
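      Here is a toy illustration in Python (a sketch only, not essential to the argument): a strictly sequential machine reproducing, step for step, one update of a device whose cells all change simultaneously.

      ```python
      # A toy case: a device whose cells all update simultaneously, each as
      # a function of its neighbours (the update rule is an arbitrary
      # stand-in).
      def parallel_step(cells):
          # Double buffering: read only the old state while writing the new
          # one, so this one-at-a-time loop produces exactly the state that
          # a genuinely simultaneous update would. At this shared level the
          # sequential and parallel devices perform the same computation.
          old = list(cells)
          n = len(old)
          return [(old[i - 1] + old[i] + old[(i + 1) % n]) % 2 for i in range(n)]

      print(parallel_step([1, 0, 0, 1]))  # -> [0, 1, 1, 0]
      ```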

      Turning to your first argument, you say ‘Since every human consistently *experiences* mental causation to be true, isn’t there sufficient reason to believe that our thoughts exert new emergent forces?’. I don’t think so. What we all experience might be illusory. Taking a parallel case, what gives the free will debate such urgency is that, even though most people have the experience of choosing freely, that is insufficient to show that we have free will, because we might be subject to a collective illusion (an illusion with evolutionary value perhaps). I personally argue that we do have free will, and that mental causation is real, but pointing out that everyone experiences these things is no way to convince an opponent that free will and mental causation are real.

      Moving on to your second argument. You claim that ‘mental causation must be false’ unless there are ‘new emergent forces’ that are not reducible to basic physics. I think that mental causation is real, but I believe this claim of yours can be counter-modelled in the computational levels picture that I just sketched. The upper computational levels are real enough, and they do contain genuinely new computational operations that ’emerge’ from the raw machine operations; but in the end the functioning of the computational device (including at the raw computational level) is entirely reducible to physics. That’s my position: upper level causation is real even though it might well be reducible to causation at a lower level. To illustrate: I think it’s true to say that typing ‘New Zealand’ into the Word search box causes my machine to highlight certain character strings in my document–even though my machine’s raw computational operations do not contain a ‘search’ operation, nor any reference to alphabetical characters, nor to boxes, highlighting, or documents. It’s the same with mental causation. Contrary to your claim, the reducibility of mental causation to, say, interneuronal reactions, does not seem to entail that ‘mental causation must be false’.