There are at least four different suggestions about the kind of support that The Turing Test is supposed to provide for the attribution of intelligence. First, there is the suggestion that The Turing Test provides conditions that are both logically necessary and logically sufficient for the attribution of intelligence. Second, there is the suggestion that The Turing Test provides logically sufficient—but not logically necessary—conditions for the attribution of intelligence. Third, there is the suggestion that The Turing Test provides criterial (that is, defeasible) support for the attribution of intelligence. Fourth—and perhaps not importantly distinct from the previous claim—there is the suggestion that The Turing Test provides more or less strong probabilistic support for the attribution of intelligence.
We shall consider each of these suggestions in turn. It is doubtful whether there are very many examples of people who have explicitly claimed that The Turing Test is meant to provide conditions that are both logically necessary and logically sufficient for the attribution of intelligence. Perhaps Block is one such case.
However, some of the objections that have been proposed against The Turing Test only make sense under the assumption that The Turing Test does indeed provide logically necessary and logically sufficient conditions for the attribution of intelligence. Many more of the objections only make sense under the assumption that The Turing Test provides necessary and sufficient conditions for the attribution of intelligence, where the modality in question is weaker than the strictly logical, e.g., nomic or physical necessity.
Consider, for example, those people who have claimed that The Turing Test is chauvinistic; and, in particular, those people who have claimed that it is surely logically possible for there to be something that possesses considerable intelligence, and yet that is not able to pass The Turing Test. Examples: Intelligent creatures might fail to pass The Turing Test because they do not share our way of life; intelligent creatures might fail to pass The Turing Test because they refuse to engage in games of pretence; intelligent creatures might fail to pass The Turing Test because the pragmatic conventions that govern the languages that they speak are so very different from the pragmatic conventions that govern human languages.
None of these considerations can constitute an objection to The Turing Test unless The Turing Test is taken to deliver necessary conditions for the attribution of intelligence. French, for one, does not suppose that The Turing Test provides necessary conditions. Rather—as we shall see later—French supposes that The Turing Test establishes sufficient conditions that no machine will ever satisfy. That is, in French's view, what is wrong with The Turing Test is that it establishes utterly uninteresting sufficient conditions for the attribution of intelligence.
There are many philosophers who have supposed that The Turing Test is intended to provide logically sufficient conditions for the attribution of intelligence. That is, there are many philosophers who have supposed that The Turing Test claims that it is logically impossible for something that lacks intelligence to pass The Turing Test. Often, this supposition goes with an interpretation according to which passing The Turing Test requires rather a lot. There are well-known arguments against the claim that passing The Turing Test—or any other purely behavioral test—provides logically sufficient conditions for the attribution of intelligence.
Consider, for example, Ned Block's Blockhead. If we agree that Blockhead is logically possible, and if we agree that Blockhead is not intelligent (does not have a mind, does not think), then Blockhead is a counterexample to the claim that the Turing Test provides a logically sufficient condition for the ascription of intelligence. After all, Blockhead could be programmed with a look-up tree that produces responses identical with the ones that you would give over the entire course of your life, given the same inputs.
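Block's scenario can be sketched in a few lines of code: a mindless look-up from entire conversation histories to canned replies. The table entries below are, of course, invented for illustration:

```python
# Minimal sketch of a Blockhead-style responder: behavior produced by
# nothing more than a look-up table keyed on the entire conversation
# history so far. Table contents are illustrative only.

LOOKUP = {
    (): "Hello.",
    ("Hello.",): "How are you today?",
    ("Hello.", "Fine, thanks. Can machines think?"):
        "That depends on what you mean by 'think'.",
}

def blockhead_reply(history):
    """Return the canned reply for exactly this history, if tabulated."""
    return LOOKUP.get(tuple(history), "I don't follow.")

print(blockhead_reply([]))          # Hello.
print(blockhead_reply(["Hello."]))  # How are you today?
```

The point of the sketch is that nothing here processes meaning: the "conversation" is entirely pre-computed, which is why the table for a whole human lifetime would have to be astronomically large.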
There are perhaps only two ways in which someone who claims that The Turing Test offers logically sufficient conditions for the attribution of intelligence can respond to Block's argument. First, it could be denied that Blockhead is a logical possibility; second, it could be claimed that Blockhead would be intelligent (have a mind, think).
In order to deny that Blockhead is a logical possibility, it seems that what needs to be denied is the commonly accepted link between conceivability and logical possibility: it certainly seems that Blockhead is conceivable, and so, if properly circumscribed conceivability is sufficient for logical possibility, then it seems that we have good reason to accept that Blockhead is a logical possibility.
Since it would take us too far away from our present concerns to explore this issue properly, we merely note that it remains a controversial question whether properly circumscribed conceivability is sufficient for logical possibility; for further discussion of this issue, see Crooke.

On the second option: Blockhead may not be a particularly efficient processor of information; but it is at least a processor of information, and that—in combination with the behavior that is produced as a result of the processing of information—might well be taken to be sufficient grounds for the attribution of some level of intelligence to Blockhead.
For further critical discussion of the argument of Block, see McDermott.

If no true claims about the observable behavior of an entity can play any role in the justification of the ascription of a given mental state to that entity, then there are no grounds for attributing that kind of mental state to the entity. The claim that, in order to be justified in ascribing a mental state to an entity, there must be some true claims about the observable behavior of that entity that alone—i.e., in the absence of any other claims about the entity—suffice to justify the ascription, is a piece of philosophical behaviorism. It may be—for all that we are able to argue—that Wittgenstein was a philosophical behaviorist; it may be—for all that we are able to argue—that Turing was one, too.
However, if we go by the letter of the account given in the previous paragraph, then all that need follow from the claim that the Turing Test is criterial for the ascription of intelligence (thought, mind) is that, when other true claims (not themselves couched in terms of mentalistic vocabulary) are conjoined with the claim that an entity has passed the Turing Test, it then follows that the entity in question has intelligence (thought, mind).
Note that the parenthetical qualification that the additional true claims not be couched in terms of mentalistic vocabulary is only one way in which one might try to avoid the threat of trivialization. The difficulty is that the addition of the true claim that an entity has a mind will always produce a set of claims that entails that that entity has a mind, no matter what other claims belong to the set!

Many people have supposed that there is good reason to deny that Blockhead is a nomic or physical possibility; Tipler, for example, argues to this effect. While there might be ways in which the details of Tipler's argument could be improved, the general point seems clearly right: the kind of combinatorial explosion that is required for a look-up tree for a human being is ruled out by the laws and boundary conditions that govern the operations of the physical world.
But, if this is right, then, while it may be true that Blockhead is a logical possibility, it follows that Blockhead is not a nomic or physical possibility. And then it seems natural to hold that The Turing Test does indeed provide nomically sufficient conditions for the attribution of intelligence: given everything else that we already know—or, at any rate, take ourselves to know—about the universe in which we live, we would be fully justified in concluding that anything that succeeds in passing The Turing Test is, indeed, intelligent (possessed of a mind, and so forth).
There are ways in which the argument in the previous paragraph might be resisted. At the very least, it is worth noting that there is a serious gap in the argument that we have just rehearsed. Perhaps—for all that has been argued so far—there are nomically possible ways of producing mere simulations of intelligence. But, if that's right, then passing The Turing Test need not be so much as criterial for the possession of intelligence: it need not be that, given everything else that we already know—or, at any rate, take ourselves to know—about the universe in which we live, we would be fully justified in concluding that anything that succeeds in passing The Turing Test is, indeed, intelligent (possessed of a mind, and so forth).
McDermott calculates that a look-up table for a participant who makes 50 conversational exchanges would require an astronomically large number of nodes. It is tempting to take this calculation to establish that it is neither nomically nor physically possible for there to be a "hand simulation" of a Turing Test program, on the grounds that the required number of nodes could not be fitted into a space much, much larger than the entire observable universe.
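A back-of-the-envelope calculation conveys the scale of the problem. Assuming, purely for illustration, a million distinct sentences at each conversational turn over 100 turns, the look-up tree dwarfs the roughly 10^80 atoms in the observable universe:

```python
# Back-of-the-envelope version of the combinatorial-explosion point.
# Assumption (purely illustrative): 10**6 distinct sentences are
# available at each turn, and 50 exchanges means 100 alternating turns.

branching = 10**6
turns = 100
leaves = branching ** turns            # 10**600 leaf nodes in the tree

atoms_in_observable_universe = 10**80  # standard order-of-magnitude figure

print(leaves == 10**600)                      # True
print(leaves > atoms_in_observable_universe)  # True
```

Even with far more conservative assumptions about the branching factor, the tree size remains vastly beyond any physically realizable store, which is the nub of the nomic-impossibility claim.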
When we look at the initial formulation that Turing provides of his test, it is clear that he thought that the passing of the test would provide probabilistic support for the hypothesis of intelligence. There are at least two different points to make here. First, the prediction that Turing makes is itself probabilistic: Turing predicts that, in about fifty years from the time of his writing, it will be possible to programme digital computers to make them play the imitation game so well that an average interrogator will have no more than a seventy per cent chance of making the right identification after five minutes of questioning.
Second, the probabilistic nature of Turing's prediction provides good reason to think that the test that Turing proposes is itself of a probabilistic nature: a given level of success in the imitation game produces—or, at any rate, should produce—a specifiable level of increase in confidence that the participant in question is intelligent (has thoughts, is possessed of a mind). Since Turing doesn't tell us how he supposes that levels of success in the imitation game correlate with increases in confidence that the participant in question is intelligent, there is a sense in which The Turing Test is greatly underspecified.
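One way such a correlation could be made precise is to treat each successful run as evidence and update one's confidence by Bayes' rule. The prior and likelihoods below are purely illustrative, not anything that Turing specifies:

```python
# Illustrative Bayesian reading of the test: each passed run updates
# the probability of the hypothesis "the machine is intelligent".
# All numbers are assumptions chosen for illustration.

def update(prior, p_pass_if_intelligent, p_pass_if_not):
    """Posterior probability of 'intelligent' after one passed run."""
    num = prior * p_pass_if_intelligent
    return num / (num + (1 - prior) * p_pass_if_not)

belief = 0.01  # a skeptical prior
for _ in range(5):  # five passed long runs against skilled interrogators
    belief = update(belief, p_pass_if_intelligent=0.9, p_pass_if_not=0.05)

print(belief > 0.99)  # True: confidence rises sharply with repeated success
```

The sketch shows only the shape of the underspecified correlation: which likelihoods are appropriate for a given interrogator and run length is exactly what Turing leaves open.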
Clearly, a machine that is very successful in many different runs of the game that last for quite extended periods of time and that involve highly skilled participants in the other roles has a much stronger claim to intelligence than a machine that has been successful in a single, short run of the game with highly inexpert participants. That a machine has succeeded in one short run of the game against inexpert opponents might provide some reason for increase in confidence that the machine in question is intelligent: but it is clear that results on subsequent runs of the game could quickly overturn this initial increase in confidence.
That a machine has done much better than chance over many long runs of the imitation game against a variety of skilled participants surely provides much stronger evidence that the machine is intelligent. Given enough evidence of this kind, it seems that one could be quite confident indeed that the machine is intelligent, while still—of course—recognizing that one's judgment could be overturned by further evidence, such as a series of short runs in which it does much worse than chance against participants who use the same strategy over and over to expose the machine as a machine.
The probabilistic nature of The Turing Test is often overlooked. But this interpretation of The Turing Test is vulnerable to the kind of objection lodged by Bringsjord: even on a moderately long single run with relatively expert participants, it may not be all that unlikely that an unintelligent machine serendipitously succeeds in the imitation game.
In our view, given enough sufficiently long runs with different sufficiently expert participants, the likelihood of serendipitous success can be made as small as one wishes.

Some of the literature about The Turing Test is concerned with questions about the framing of a test that can provide a suitable guide to future research in the area of Artificial Intelligence. The idea here is very simple.
Suppose that we have the ambition to produce an artificially intelligent entity. What tests should we take as setting the goals that putatively intelligent artificial systems should achieve? Should we suppose that The Turing Test provides an appropriate goal for research in this field? In assessing these proposals, there are two different questions that need to be borne in mind. First, there is the question whether it is a useful goal for AI research to aim to make a machine that can pass the given test administered over the specified length of time, at the specified degree of success.
Second, there is the question of the appropriate conclusion to draw about the mental capacities of a machine that does manage to pass the test administered over the specified length of time, at the specified degree of success. Opinion on these questions is deeply divided. Some people suppose that The Turing Test does not provide a useful goal for research in AI because it is far too difficult to produce a system that can pass the test.
Other people suppose that The Turing Test does not provide a useful goal for research in AI because it sets a very narrow target and thus places unnecessary restrictions on the kind of research that gets done. Some people think that The Turing Test provides an entirely appropriate goal for research in AI; while others think that there is a sense in which The Turing Test is not really demanding enough, and suppose that The Turing Test needs to be extended in various ways in order to provide an appropriate goal for AI.
We shall consider some representatives of each of these positions in turn. Some people have claimed that The Turing Test doesn't set an appropriate goal for current research in AI because we are plainly so far away from attaining this goal. Amongst these people there are some who have gone on to offer reasons for thinking that it is doubtful that we shall ever be able to create a machine that can pass The Turing Test—or, at any rate, that it is doubtful that we shall be able to do this at any time in the foreseeable future. Perhaps the most interesting arguments of this kind are due to French; at any rate, these are the arguments that we shall go on to consider.
Cullen sets out similar considerations. First, if interrogators are allowed to draw on the results of research into, say, associative priming, then there is data that will very plausibly separate human beings from machines. For example, there is research that shows that, if humans are presented with series of strings of letters, they require less time to recognize that a string is a word in a language that they speak if it is preceded by a related word in that language, rather than by an unrelated word or by a string of letters that is not a word in the language.
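The sort of probe French envisages might be sketched as follows; all word pairs, timing norms, and tolerances are invented for illustration:

```python
# Hypothetical sketch of a priming-based probe: time lexical decisions
# and compare them with human norms. A system that shows no priming
# speed-up for related word pairs is flagged as non-human.
# Every number and word pair below is invented for illustration.

HUMAN_NORM_MS = {
    ("doctor", "nurse"): 520,  # primed: related word precedes the target
    ("bread", "nurse"): 610,   # unprimed: unrelated word precedes it
}

def looks_human(observed_ms, tolerance_ms=60):
    """True if every observed recognition time is near the human norm."""
    return all(abs(observed_ms[pair] - norm) <= tolerance_ms
               for pair, norm in HUMAN_NORM_MS.items())

human_like = {("doctor", "nurse"): 540, ("bread", "nurse"): 590}
machine_like = {("doctor", "nurse"): 600, ("bread", "nurse"): 605}  # flat times

print(looks_human(human_like))    # True
print(looks_human(machine_like))  # False
```

The sketch assumes, as French does, that the interrogator can obtain reliable timing data; as noted below, the set-up of The Turing Test makes that assumption questionable.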
Provided that the interrogator has accurate data about average recognition times for subjects who speak the language in question, the interrogator can distinguish between the machine and the human simply by looking at recognition times for appropriate series of strings of letters. Or so says French. It isn't clear to us that this is right. After all, the design of The Turing Test makes it hard to see how the interrogator will get reliable information about response times to series of strings of symbols.
The point of putting the computer in a separate room and requiring communication by teletype was precisely to rule out certain irrelevant ways of identifying the computer. If these requirements don't already rule out identification of the computer by the application of tests of associative priming, then the requirements can surely be altered to bring it about that this is the case. Perhaps it is also worth noting that administration of the kind of test that French imagines is not ordinary conversation; nor is it something that one would expect that any but a few expert interrogators would happen upon.
So, even if the circumstances of The Turing Test do not rule out the kind of procedure that French here envisages, it is not clear that The Turing Test will be impossibly hard for machines to pass. Second, French appeals to "rating games", in which participants are asked to rate, for example, how good pens are as weapons, or how good grand pianos are as wheelbarrows; he claims that machines will be unmasked by their performance in such games. For, in the first kind of case, the ratings that humans make depend upon large numbers of culturally acquired associations which it would be well-nigh impossible to identify and describe, and hence which it would arguably be well-nigh impossible to program into a computer. And, in the second kind of case, the ratings that people actually make are highly dependent upon particular social and cultural settings and upon the particular ways in which human life is experienced.
And there would also be widespread agreement amongst competent speakers of English in the developed world that pens rate higher as weapons than grand pianos rate as wheelbarrows. Again, there are questions that can be raised about French's argument here. It is not clear to us that the data upon which the ratings games rely is as reliable as French would have us suppose. What if the grand piano has wheels? What if the opponent has a sword or a sub-machine gun? It isn't obvious that a refusal to play this kind of ratings game would necessarily be a give-away that one is a machine.
Moreover, even if the data is reliable, it is not obvious that any but a select group of interrogators will hit upon this kind of strategy for trying to unmask the machine; nor is it obvious that it is impossibly hard to build a machine that is able to perform in the way in which typical humans do on these kinds of tests. There are other reasons that have been given for thinking that The Turing Test is too hard and, for this reason, inappropriate in setting goals for current research into artificial intelligence.
In general, the idea is that there may well be features of human cognition that are particularly hard to simulate, but that are not in any sense essential for intelligence or thought, or possession of a mind. The problem here is not merely that The Turing Test really does test for human intelligence; rather, the problem here is the fact—if indeed it is a fact—that there are quite inessential features of human intelligence that are extraordinarily difficult to replicate in a machine.
If this complaint is justified—if, indeed, there are features of human intelligence that are extraordinarily difficult to replicate in machines, and that could and would be reliably used to unmask machines in runs of The Turing Test—then there is reason to worry about the idea that The Turing Test sets an appropriate direction for research in artificial intelligence. However, as our discussion of French shows, there may be reason for caution in supposing that the kinds of considerations discussed in the present section show that we are already in a position to say that The Turing Test does indeed set inappropriate goals for research in artificial intelligence.
There are authors who have suggested that The Turing Test does not set a sufficiently broad goal for research in the area of artificial intelligence. Amongst these authors, there are many who suppose that The Turing Test is too easy. We go on to consider some of these authors in the next sub-section.
But there are also some authors who have supposed that, even if the goal that is set by The Turing Test is very demanding indeed, it is nonetheless too restrictive. Objections to the notion that the Turing Test provides a logically sufficient condition for intelligence can be adapted to the goal of showing that the Turing Test is too restrictive. Consider, for example, Gunderson. Gunderson has two major complaints to make against The Turing Test. First, he thinks that success in Turing's Imitation Game might come for reasons other than the possession of intelligence.
But, second, he thinks that success in the Imitation Game would be but one example of the kinds of things that intelligent beings can do and—hence—in itself could not be taken as a reliable indicator of intelligence. Gunderson compares Turing to a vacuum cleaner salesman who touts his machine as "all-purpose" merely because it performs one task, sucking up dust, very well. According to Gunderson, Turing is in the same position as the vacuum cleaner salesman if he is prepared to say that a machine is intelligent merely on the basis of its success in the Imitation Game.
There is an obvious reply to the argument that we have here attributed to Gunderson, viz. that, in order to carry out a conversation, one needs to have many different kinds of cognitive skills, each of which is capable of application in other areas. Apart from the obvious general cognitive competencies (memory, perception, and so on), conversation draws on a wide range of more specific abilities. It is inconceivable that there be a machine that is startlingly good at playing the Imitation Game, and yet unable to do well at any other tasks that might be assigned to it; and it is equally inconceivable that there is a machine that is startlingly good at the Imitation Game and yet that does not have a wide range of competencies that can be displayed in a range of quite disparate areas.
To the extent that Gunderson considers this line of reply, all that he says is that there is no reason to think that a machine that can succeed in the Imitation Game must have more than a narrow range of abilities; we see no reason to accept this assertion. More recently, Erion has defended a position that has some affinity to that of Gunderson. In our view, at least when The Turing Test is properly understood, it is clear that anything that passes The Turing Test must have the ability to solve problems in a wide variety of everyday circumstances, because interrogators will use their questions to probe these—and other—kinds of abilities in those who play the Imitation Game.
There are authors who have suggested that The Turing Test should be replaced with a more demanding test of one kind or another. It is not at all clear that any of these tests actually proposes a better goal for research in AI than is set by The Turing Test. However, in this section, we shall not attempt to defend that claim; rather, we shall simply describe some of the further tests that have been proposed, and make occasional comments upon them.
One preliminary point upon which we wish to insist is that Turing's Imitation Game was devised against the background of the limitations imposed by then current technology. It is, of course, not essential to the game that tele-text devices be used to prevent direct access to information about the sex or genus of participants in the game. We shall not advert to these relatively mundane kinds of considerations in what follows.
Harnad claims that a better test than The Turing Test will be one that requires responses to all of our inputs, and not merely to text-formatted linguistic inputs; he calls this the Total Turing Test. That is, according to Harnad, the appropriate goal for research in AI has to be to construct a robot with something like human sensorimotor capabilities. It is an interesting question whether the test that Harnad proposes sets a more appropriate goal for AI research.
In particular, it seems worth noting that it is not clear that there could be a system that was able to pass The Turing Test and yet that was not able to pass The Total Turing Test. This point against Harnad can be found in Hauser, and elsewhere. Bringsjord, Bello, and Ferrucci propose what they call the Lovelace Test. They say that an artificial agent A, designed by human H, passes the Lovelace Test just in case three conditions are jointly satisfied: (1) the artificial agent A produces output O; (2) A's outputting O is not the result of a fluke hardware error, but rather the result of processes that A can repeat; and (3) H (or someone who knows what H knows and who has H's resources) cannot explain how A produced O by appeal to A's architecture, knowledge-base, and core functions.
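Read schematically, the three conditions form a simple conjunction. The following toy predicate is our own reading, not the authors' formalization:

```python
# Toy formalization of the Lovelace Test's three jointly required
# conditions (a schematic reading, not the proposers' own notation).

def passes_lovelace(produced_output: bool,
                    repeatable: bool,
                    designers_can_explain: bool) -> bool:
    """A passes iff it produced O, the production was repeatable (not a
    fluke hardware error), and the designers cannot explain how O was
    produced from A's architecture, knowledge-base, and core functions."""
    return produced_output and repeatable and not designers_can_explain

print(passes_lovelace(True, True, designers_can_explain=False))  # True
print(passes_lovelace(True, True, designers_can_explain=True))   # False
```

Stating the conditions this baldly makes the objection below vivid: everything turns on what counts as the designers being able to "explain" the output.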
Against this proposal, it seems worth noting that there are questions to be raised about the interpretation of the third condition. If a computer program is long and complex, then no human agent can explain in complete detail how the output was produced: why did the computer produce this particular output, rather than some other? But if we are allowed to give a highly schematic explanation—the computer took the input, did some internal processing, and then produced an answer—then it seems that it will turn out to be very hard to support the claim that human agents ever do anything genuinely creative.
After all, we too take external input, perform internal processing, and produce outputs. What is missing from the account that we are considering is any suggestion about the appropriate level of explanation that is to be provided. It is quite unclear why we should suppose that there is a relevant difference between people and machines at any level of explanation; but, if that's right, then the test in question is trivial.
One might also worry that the proposed test rules out by fiat the possibility that creativity can be best achieved by using genuine randomising devices. Schweizer claims that a better test than The Turing Test will advert to the evolutionary history of the subjects of the test. When we attribute intelligence to human beings, we rely on an extensive historical record of the intellectual achievements of human beings. On the basis of this historical record, we are able to claim that human beings are intelligent; and we can rely upon this claim when we attribute intelligence to individual human beings on the basis of their behavior.
According to Schweizer, if we are to attribute intelligence to machines, we need to be able to advert to a comparable historical record of cognitive achievements. So, it will only be when machines have developed languages, written scientific treatises, composed symphonies, invented games, and the like, that we shall be in a position to attribute intelligence to individual machines on the basis of their behavior. Against Schweizer, it seems worth noting that it is not at all clear that our reason for granting intelligence to other humans on the basis of their behavior is that we have prior knowledge of the collective cognitive achievements of human beings.
Perhaps the best known attack on the suggestion that The Turing Test provides an appropriate research goal for AI is due to Hayes and Ford, who make a number of controversial claims. Some of these claims seem straightforwardly incorrect. Consider, for example, the claim that The Turing Test cannot be a good test if there is any detectable difference between machines and human beings. There might be all kinds of irrelevant differences between a given kind of machine and a human being—not all of them rendered undetectable by the experimental set-up that Turing describes—but The Turing Test will remain a good test provided that it is able to ignore these irrelevant differences.
On the other hand, as we noted at the end of Section 4, The Turing Test can be administered over many runs, with many different interrogators. This change preserves the character of The Turing Test, but gives it scope for greater statistical sophistication. While there are many other criticisms that can be made of the claims defended by Hayes and Ford, it should be acknowledged that they are right to worry about the suggestion that The Turing Test provides the defining goal for research in AI. There are various reasons why one should be loath to accept the proposition that the one central ambition of AI research is to produce artificial people.
However, it is worth pointing out that there is no reason to think that Turing supposed that The Turing Test defined the field of AI research, and there is not much evidence that any other serious thinkers have thought so either. Turing himself was well aware that there might be non-human forms of intelligence. However, all of this remains consistent with the suggestion that it is quite appropriate to suppose that The Turing Test sets one long-term goal for AI research: one thing that we might well aim to do eventually is to produce artificial people.
If—as Hayes and Ford claim—that task is almost impossibly difficult, then there is no harm in supposing that the goal is merely an ambit goal to which few resources should be committed; but we might still have good reason to allow that it is a goal. There are many different objections to The Turing Test which have surfaced in the literature during the past fifty years, but which we have not yet discussed. We cannot hope to canvass all of these objections here.
Clearly enough, Searle is here disagreeing with Turing's claim that an appropriately programmed computer could think. There is much that is controversial about Searle's argument; we shall just consider one way of understanding what it is that he is arguing for. The basic structure of Searle's argument is very well known. Thus, what we are invited to suppose is a logical possibility is not so very different from what Block invites us to suppose is a logical possibility. However, the argument that Searle goes on to develop is rather different from the argument that Block defends.
So, there is a possible world—doubtless one quite remote from the actual world—in which a digital computer simulates intelligence but in which the digital computer does not itself possess intelligence.
But, if we consider any digital computer in the actual world, it will not differ from the computer in that remote possible world in any way which could make it the case that the computer in the actual world is more intelligent than the computer in that remote possible world. So far, the argument that we have described arrives at the conclusion that no appropriately programmed computer can think. While this conclusion is not one that Turing accepted, it is important to note that it is compatible with the claim that The Turing Test is a good test for intelligence.
In order to turn Searle's argument—at least in the way in which we have developed it—into an objection to The Turing Test, we need to have some reason for thinking that it is at least nomically possible to simulate intelligence using computers.
If it is nomically impossible to simulate intelligence using computers, then the alleged fact that digital computers cannot genuinely possess intelligence casts no doubt at all on the usefulness of the Turing Test, since digital computers are nomically disqualified from the range of cases in which there is mere simulation of intelligence.
In the absence of reason to believe this, the most that Searle's argument yields is an objection to Turing's confidently held belief that digital computing machines will one day pass The Turing Test. Here, as elsewhere, we are supposing that, for any kind of creature C, there is a version of The Turing Test in which C takes the role of the machine in the specific test that Turing describes. This general format for testing for the presence of intelligence would not necessarily be undermined by the success of Searle's Chinese Room argument. There are various responses that might be made to the argument that we have attributed to Searle.
One kind of response is to dispute the claim that there is no intelligence present in the case of the Chinese Room. If enough details of this kind are added, then it becomes quite unclear whether we do want to say that we still haven't described an intelligent system. Another kind of response is to dispute the claim that digital computers in the actual world could not be relevantly different from the system that operates in the Chinese Room in that remote possible world.
If we suppose that the core of the Chinese Room is a kind of giant look-up table, then it may well be important to note that digital computers in the actual world do not work with look-up tables in that kind of way. Doubtless there are other possible lines of response as well. However, it would take us out of our way to try to take this discussion further. One good place to look for further discussion of these matters is Braddon-Mitchell and Jackson. There are radically different views about the measurement of intelligence that have not been canvassed in this article.
Our concern has been to discuss Turing and its legacy. But, of course, a more wide-ranging discussion would also consider, for example, research on the measurement of intelligence using the mathematical and computational resources of Algorithmic Information Theory, Kolmogorov Complexity Theory, Minimum Message Length (MML) Theory, and so forth. For an introduction to this literature, see Hernandez-Orallo and Dowe, and the list of references contained therein.
We would like to acknowledge the help of the Editors of the Encyclopedia, Jose Hernandez-Orallo, and two anonymous referees. The advice that we have received has led to numerous improvements. We look forward to receiving further suggestions for improvements from those who've read what we have written.
In the Discourse, Descartes says: If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others.
For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs.
For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act. (Translation by Robert Stoothoff.) Although not everything about this passage is perfectly clear, it does seem that Descartes gives a negative answer to the question whether machines can think; and, moreover, it seems that his giving this negative answer is tied to his confidence that no mere machine could pass The Turing Test: no mere machine could talk and act in the way in which adult human beings do.
Turing and the Imitation Game

Turing describes the following kind of game. About this game, Turing says: I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.
We also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental. Finally, we wish to exclude from the machines men born in the usual manner.
It is difficult to frame the definitions so as to satisfy these three conditions. One might for instance insist that the team of engineers should all be of one sex, but this would not really be satisfactory, for it is probably possible to rear a complete individual from a single cell of the skin (say) of a man. Turing himself observes that these results from mathematical logic might have implications for the Turing test: There are certain things that [any digital computer] cannot do.
If it is rigged up to give answers to questions as in the imitation game, there will be some questions to which it will either give a wrong answer, or fail to give an answer at all, however much time is allowed for a reply. If, as Lucas and Penrose argue, the human intellect is not subject to this constraint, then by asking such a question q a human interrogator could determine whether the responder is a computer or a human; thus a computer C may fail the Turing test.
No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants. "It can do whatever we know how to order it to perform" (cited by Hartree, p. ).

Some Minor Issues Arising

There are a number of much-debated issues that arise in connection with the interpretation of various parts of Turing, and that we have hitherto neglected to discuss.
Assessment of the Current Standing of The Turing Test

Given the initial distinction that we made between different ways in which the expression The Turing Test gets interpreted in the literature, it is probably best to approach the question of the assessment of the current standing of The Turing Test by dividing cases. It would take at least thirty five-story main university libraries to hold this many books. We know from experience that we can access any memory in our brain in about seconds, so a hand simulation of a Turing Test-passing program would require a human being to be able to take off the shelf, glance through, and return to the shelf all of these million books in seconds.
If each book weighs about a pound (0.45 kilograms), and a human uses energy at a normal rate of about 100 watts, the power required is the bodily power of 3 × 10^15 human beings, about a million times the current population of the entire earth. A typical large nuclear power plant has a power output of 1,000 megawatts, so a hand simulation of the human program requires a power output equal to that of some 300 million large nuclear power plants. As I said, a man can no more hand-simulate a Turing Test-passing program than he can jump to the Moon.
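The arithmetic behind this estimate can be checked with a short sketch. The per-person wattage and plant output below are assumed round figures for illustration (roughly those suggested by the surrounding text), not values confirmed by the source:

```python
# Back-of-the-envelope check of the hand-simulation power estimate.
# Assumptions (not from the source): ~100 W metabolic rate per person,
# a 1,000-megawatt output for a typical large nuclear power plant.
simulators = 3e15              # 3 x 10^15 human beings (figure from the text)
watts_per_person = 100.0       # assumed metabolic rate, in watts

total_power = simulators * watts_per_person   # total power, in watts

plant_output = 1_000 * 1e6     # assumed 1,000 MW plant, converted to watts
plants_needed = total_power / plant_output

print(f"total power:   {total_power:.1e} W")   # 3.0e+17 W
print(f"plants needed: {plants_needed:.1e}")   # 3.0e+08, hundreds of millions
```

On these assumptions the simulation would draw about 3 × 10^17 watts, the output of roughly 300 million such plants, which is consistent with the scale claimed in the passage above.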
In fact, it is far more difficult.

Alternative Tests

Some of the literature about The Turing Test is concerned with questions about the framing of a test that can provide a suitable guide to future research in the area of Artificial Intelligence. Among the controversial claims that Hayes and Ford make, there are at least the following: Turing suggested the imitation game as a definite goal for a program of research.
Turing intended The Turing Test to be a gender test rather than a species test. The task of trying to make a machine that is successful in The Turing Test is so extremely difficult that no one could seriously adopt the creation of such a machine as a research goal. No null effect experiment can provide an adequate criterion for intelligence, since the question can always arise whether the judges looked hard enough and raised the right kinds of questions.
But, if this question is left open, then there is no stable endpoint of enquiry. Null effect experiments cannot measure anything: The Turing Test can only test for complete success. The perspective of The Turing Test is arrogant and parochial: it mistakenly assumes that we can understand human cognition without first obtaining a firm grasp of the basic principles of cognition.
The Turing Test does not admit of weaker, different, or even stronger forms of intelligence than those deemed human.

Bibliography

Abramson, D.
Block, N.
Boolos, G.
Bowie, L.
Braddon-Mitchell, D.