Wednesday, August 13, 2014

It has been a long time since a paradox appeared here on the blog, so it is high time to fix that. Today's topic follows naturally from the recent posts about free will as it relates to thinking machines: I will talk about Searle's "Chinese room argument."
If you do not believe that the human mind is a supernatural or magical process, you will probably admit that a sufficiently powerful computer equipped with the appropriate software could be artificially intelligent. As is customary in such cases, the exact meaning of the word "intelligence" is a matter of dispute. Since I cannot very well go through every definition of intelligence that exists, I will accept for the purposes of this post one of the widely accepted criteria for recognizing intelligence, namely the Turing test. Alan Turing proposed the test in 1950 as a way to answer objectively the question of whether machines can think:

[The test is played by] three people: a man (A), a woman (B) and an interrogator (C) [...]. The interrogator stays in a room apart from the other two. The object [of the test] is for the interrogator to find out which of the two is the man and which is the woman. He knows them by the labels X and Y, and at the end he says either "X is A and Y is B" or "X is B and Y is A". The interrogator may put questions to A and B such as: "Will X please tell me the length of his or her hair?" Suppose X is actually A; then A must answer. A's task is to try to cause C to make the wrong identification. [...] The task of B, on the other hand, is to help the interrogator. [...] Now ask yourselves: what will happen when a machine takes over the role of A in [the test]? Will the interrogator decide wrongly as often under these circumstances as when [the test is played by] a man and a woman? These questions replace our original question, "Can machines think?" [1]
The original Turing test was later reformulated in various ways and all manner of variations were invented. The core remains the same: instead of asking whether machines think, or whether they are intelligent, let us ask whether a machine can fool a human in purely verbal communication by pretending to be a human.
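To make the structure of the test concrete, here is a minimal sketch in Python of how the imitation game could be simulated. The interrogator and player objects and their ask/answer/guess methods are hypothetical, invented purely for illustration; Turing of course described a game played by people, not a program.

```python
import random

def imitation_game(interrogator, player_a, player_b, rounds=5):
    """Minimal sketch of Turing's imitation game.

    `interrogator`, `player_a` and `player_b` are hypothetical objects:
    the interrogator has ask(label, transcript) and guess(transcript),
    the players have answer(question). None of this is a real API."""
    # Hide the players behind the labels X and Y in random order.
    labels = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:
        labels = {"X": player_b, "Y": player_a}

    transcript = []
    for _ in range(rounds):
        for label, player in labels.items():
            question = interrogator.ask(label, transcript)
            answer = player.answer(question)
            transcript.append((label, question, answer))

    # The interrogator names the label it believes hides player A.
    guess = interrogator.guess(transcript)
    return labels[guess] is player_a
```

Turing's proposal amounts to swapping player_a for a machine while keeping everything else fixed, and asking whether the interrogator's success rate changes.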
The Turing test can be criticized for its subjectivity: it relies on a human interrogator, and the result may depend strongly on who that interrogator is. Some people are obviously unsuited to the role. The ELIZA program from the mid-sixties reportedly persuaded some people that it was human, and some refused to believe otherwise even when they were told they had been communicating with a machine. [2] Nevertheless, a person of at least average intelligence and knowledge is hard to fool with any currently existing program. ELIZA is slightly better than the similar program mluv.exe, which we played with on the 286s in elementary school, but even if you try to hold a halfway sensible conversation with it, its emptiness shows very quickly. [3] Still, the Turing test remains an acceptable way to determine whether machines think.
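The pattern-matching trick behind ELIZA-style programs is easy to show. The following is a toy sketch, not the original DOCTOR script: a handful of made-up reflection rules that turn the user's statement back into a question, plus a canned fallback when nothing matches.

```python
import re

# A few made-up ELIZA-style reflection rules. The real ELIZA used a much
# larger rule set; this only illustrates the pattern-matching idea.
RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

def reply(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # canned fallback when nothing matches

print(reply("I am tired of these paradoxes"))
# -> Why do you say you are tired of these paradoxes?
print(reply("What is the capital of France?"))
# -> Please go on.
```

The first exchange looks superficially attentive; the second exposes the emptiness, because there is nothing behind the patterns.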
The above-mentioned philosopher John Searle is of a different opinion. According to him, "can machines think?" and "can machines fool a person in the Turing test?" are two separate questions. A machine may simulate thinking, and therefore pass the Turing test, yet this will not be "real" thinking. The machine will answer the questions, but it will not understand their content. Its intelligence is just a clever illusion. The argument goes as follows:
Suppose there is a program X that can communicate fluently in Chinese [4], well enough to pass the Turing test. In principle, nothing prevents us from taking its source code and printing it out on paper. If we wish, we can rewrite the source code as instructions in a human language (any language other than Chinese) and hire a slave who knows that language (but not Chinese) to carry out the instructions. We lock the slave in a room together with a copy of our Chinese algorithm and a good stack of blank paper and unused pens, so that he has something to do the auxiliary calculations with. Outside the door of the room, Chinese speakers come, write a question on a piece of paper, and slip it under the door. The slave is instructed to treat the slipped-in questions as input to the Chinese algorithm. He begins to mechanically carry out the printed instructions he has at hand, and in time this process leads him to trace out certain characters, which he slips back under the door. We assumed that program X can hold a sensible conversation. Since the slave locked in the room executes the same algorithm, his output cannot differ from the output of X. So when the Chinese speaker picks up the paper pushed back out to him, he reads on it a meaningful answer to his question. From the Chinese speaker's point of view, it looks as though the Chinese room contains something that understands Chinese.
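The key step is that the answers depend only on the rules being followed, not on who or what follows them. Here is a toy sketch of that point; the rule book below is a made-up lookup table standing in for program X, which in reality would be an enormous program rather than a few canned replies.

```python
# Toy illustration: the output depends only on the rule book, not on
# whoever executes it. RULE_BOOK is a made-up stand-in for program X.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "这是个好问题。",  # "Can you think?" -> "Good question."
}
FALLBACK = "请再说一遍。"              # "Please say that again."

def answer_by_computer(question):
    # Program X running on a machine.
    return RULE_BOOK.get(question, FALLBACK)

def answer_by_slave(question):
    # The slave in the room: compares the slip of paper with each rule,
    # character by character, without understanding any of it.
    for pattern, response in RULE_BOOK.items():
        if question == pattern:
            return response
    return FALLBACK

assert answer_by_computer("你好吗？") == answer_by_slave("你好吗？")
```

Whether the rules are run by a CPU or traced out by hand on paper, the characters that come back under the door are the same.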
But where, asks Searle, is the understanding of Chinese hidden? The only nontrivial part of the whole mechanism is the slave; the rest is just a pile of paper and pens. Yet the slave, by assumption, does not understand Chinese; he merely follows the instructions mechanically. It is clear, says Searle, that we have constructed something that certainly does not understand Chinese, even though it passes the test. The Chinese room creates an illusion of understanding, like the program ELIZA, only more perfect; but in reality it does not think. After all, can pens and paper understand anything?
