Article summary of "Computing Machinery and Intelligence" by Turing


The imitation game

If you want to answer the question "Can machines think?", you first have to define the terms "machine" and "think". If you do that using everyday language, however, the answer ends up depending on how the words are commonly used, as if it could be settled by a statistical survey. That is not the intention.

A new form of the problem can be described in terms of the imitation game. It is played by three people: a man (A), a woman (B) and an interrogator (C), who may be of either sex. The interrogator stays in a room apart from A and B and has to determine, by asking questions, which of the two is the man and which is the woman. A's task is to cause C to make the wrong identification, while B's task is to help C identify them correctly. The interrogator cannot hear their voices or see them, and ideally does not see their handwriting either (the answers are written or typewritten).

The question of the article thus becomes: when a machine takes the part of A, will the interrogator decide wrongly as often as when the game is played between a man and a woman? In other words, will the computer be as good at misleading C as a real person is?
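To make the setup concrete, the sketch below is a purely illustrative reconstruction (not part of the paper; the names ask_question, answer_a, answer_b and identify are hypothetical placeholders). It shows the structure of the game as a text-only exchange, so that voice, appearance and handwriting play no role.

```python
# Illustrative sketch of the imitation game as a text-only protocol.

def imitation_game(ask_question, answer_a, answer_b, identify, n_questions=5):
    transcript = []
    for _ in range(n_questions):
        q = ask_question(transcript)           # interrogator C poses a question
        transcript.append(("C", q))
        transcript.append(("A", answer_a(q)))  # A tries to cause a wrong identification
        transcript.append(("B", answer_b(q)))  # B tries to help C
    return identify(transcript)                # C's final verdict: which label is which
```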

Criticism of the new form of the problem

Is this question worth investigating? The new formulation has the advantage that it leaves out characteristics of people that are difficult to imitate but irrelevant to thinking. To investigate thinking, you do not have to "dress up" the machine as a human; for example, you do not have to imitate human skin.

One criticism of this formulation is that the odds are weighted against the machine. If a man had to imitate a computer, he would quickly be exposed by arithmetic questions (for example, he would answer too slowly). According to the author, however, there is no reason to believe that the machine cannot play the imitation game satisfactorily.

A final point of criticism is that the machine may adopt a different strategy than imitating a man's behavior. However, according to the author, there is no reason to believe that imitating a man is not the best strategy.

The machines in the game

In the article, the following is meant by machine: an electronic computer or a digital computer. These are the only machines that participate in the imitation game.

Digital computers

Digital computers are intended to carry out operations that a human computer can do. A human computer must abide by fixed rules and may not deviate from them. You can imagine that these rules are written down in a book with an unlimited number of pages, and that the book is different for each task. A digital computer itself can be divided into three parts:

  1. Storage (the store): comparable to the paper on which the human computer does his calculations, or on which the book of rules is printed. If the calculations are done in the head, the store corresponds to memory. Information in the store is divided into small packets.

  2. Executive unit: carries out the individual operations involved in a calculation. What counts as a single operation varies from machine to machine.

  3. Control: checks that the rules in the book are followed in the right order.

The digital computer must be able to hold an order and repeat it, so that a new order does not constantly have to be given. Compare the following example: Bart is ill and has to take a pill every day at 5 pm. His mother can remind him every day, but she can also put up a note telling him to take the pill every day; when Bart is better, the note is removed. A computer handles standing instructions in the same way.

If you want to imitate the human computer as precisely as possible, you must ask how the operation is carried out and then translate the answer into an instruction table. Constructing such instruction tables is called programming.
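A toy sketch of this division of labour (my own illustration; the operations and names are invented, not Turing's notation) might look as follows: the store holds the data and the instruction table, the execute function plays the role of the executive unit, and the loop at the end acts as control, applying the instructions in order.

```python
# Toy illustration of store, executive unit and control.

store = {"x": 0}                       # storage: information kept in small packets
program = [                            # the "book of rules": an instruction table
    ("add", "x", 3),
    ("add", "x", 4),
    ("show", "x", None),
]

def execute(op, name, value):          # executive unit: one elementary operation
    if op == "add":
        store[name] += value
    elif op == "show":
        print(store[name])

for instruction in program:            # control: apply the instructions in order
    execute(*instruction)              # finally prints 7
```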

The storage of digital computers is generally limited. However, it is not difficult to imagine computers that have infinite storage. We refer to these computers as infinite capacity computers.

The idea of a digital computer is not new: between 1828 and 1839 Charles Babbage worked on his Analytical Engine. The Analytical Engine was entirely mechanical rather than electrical. The nervous system and all computers built since the Analytical Engine are electrical. However, since chemical activity is just as important in the nervous system, and storage in some computers is acoustic rather than electrical, the use of electricity should be seen as only a superficial similarity.

Universality of digital computers

The digital computers discussed earlier belong to the class of discrete-state machines. These machines move by sudden jumps from one definite state to another. Strictly speaking, no machine really works this way (everything actually moves continuously), but in many cases it is useful to treat machines as if they did (think of a light that is either on or off).

An abstract example: consider a machine consisting of a wheel, together with a lever that acts as a brake. The state of the machine is the position of the wheel: position 1, 2 or 3 (q1, q2 and q3). A light shows which position the wheel is in. The input signal is the position of the lever (i0 or i1). Given the last state and the input signal, the next state can be written in a table:

                 Last state
                 q1     q2     q3
Input   i0       q2     q3     q1
        i1       q1     q2     q3

You can write down the output (namely the light you see) as follows:

State      q1     q2     q3
Output     o0     o0     o1

In other words, if the machine's last state was q1 and the lever is at i1, the machine stays in q1 and the output is o0.

Only a limited number of states is possible, so all possible future behaviour of the machine can be predicted. This is in line with Laplace's view that all future states of the universe can be predicted, provided you know its complete state at one moment. In practice, however, the smallest deviation in the initial state can have enormous consequences; a discrete-state machine has the property that this cannot happen.
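As an illustration (the state and signal names follow the tables above; the code itself is only a sketch), the wheel machine can be written down as two lookup tables, and because those tables are finite, every future state can be worked out in advance:

```python
# The wheel machine as two lookup tables.

next_state = {
    ("i0", "q1"): "q2", ("i0", "q2"): "q3", ("i0", "q3"): "q1",
    ("i1", "q1"): "q1", ("i1", "q2"): "q2", ("i1", "q3"): "q3",
}
output = {"q1": "o0", "q2": "o0", "q3": "o1"}   # which light is shown in each state

def step(state, signal):
    new = next_state[(signal, state)]
    return new, output[new]

# All future behaviour can be computed ahead of time:
state = "q1"
for signal in ["i1", "i0", "i0"]:
    state, light = step(state, signal)
    print(signal, "->", state, light)   # i1 -> q1 o0, then i0 -> q2 o0, then i0 -> q3 o1
```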

If a machine is to play the imitation game, it must be able to take the part of either A or B so convincingly that it becomes difficult for the interrogator to tell the difference. For this, the machine must have sufficient storage capacity, work fast enough, and be reprogrammed depending on which role it has to perform.

A universal machine is a digital computer that can imitate any discrete-state machine. The advantage is that a single machine can perform all kinds of computations; there is no need to build a new machine for each task.
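A hedged way to picture universality: the routine below does not have the wheel machine built in; it accepts the transition and output tables of any discrete-state machine as data, so one and the same program can imitate them all (here reusing the tables from the previous sketch).

```python
# One general routine that imitates any discrete-state machine supplied as data.

def imitate(transition, outputs, state, signals):
    for signal in signals:
        state = transition[(signal, state)]
        yield state, outputs[state]

# Reusing the wheel machine's tables from the previous sketch:
# list(imitate(next_state, output, "q1", ["i0", "i0"]))  ->  [("q2", "o0"), ("q3", "o1")]
```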

Contrary views on the main question

According to the author, within about fifty years computers will be programmable in such a way that after five minutes of questioning an average interrogator has no more than a 70% chance of making the right identification. Although the author considers the question "Can machines think?" too meaningless to deserve discussion, he expects that in the future there will be a general consensus that machines can indeed think. He also wants to emphasize that scientists do not simply jump from established fact to established fact; there is also conjecture. As long as a clear distinction is made between proven facts and conjectures, he believes that conjectures make a valuable contribution to science.

Nine contrary views are now discussed:

1. Theological objection

Thinking is a function of the immortal soul. Because God has given an immortal soul to every human being, but not to animals or machines, no animal or machine can think. The author would find this argument more convincing if animals were classed together with humans rather than set apart. Moreover, the argument implies a restriction of God's omnipotence: there are some things that even God is generally accepted to be unable to do, but is it not very restrictive to say that He could not give an animal a soul if He saw fit?

2. The "head-in-the-sand-objection"

The consequences of thinking machines would be dreadful; let us just hope it cannot happen. Although this argument is rarely expressed so openly, it does influence people who think about this topic. Man likes to believe he is the superior species in the world, and thinking machines threaten that position. This attitude (the assumed superiority of man) is probably also why the theological objection receives so much support.

3. The mathematical objection

There are several mathematical results that can be used to show that the power of discrete-state machines is limited. The best known is Gödel's theorem: in any sufficiently powerful logical system, statements can be formulated that can neither be proved nor disproved within that system, unless the system itself is inconsistent. To apply such results to machines, the logical system must be described in terms of a machine and the machine in terms of a logical system; here one thinks of a digital computer with infinite capacity. The consequence is that such a machine has limitations in the imitation game: to some questions it will give the wrong answer, or no answer at all. The mathematical objection takes this as showing that machines suffer from limitations to which the human intellect is not subject.

Suppose a machine gives the wrong answer; then, according to this objection, people feel superior. The author thinks little weight should be given to the fact that machines occasionally answer wrongly: after all, people give wrong answers often enough themselves. Moreover, such a triumph is only over one particular machine at one particular moment; there is no question of triumphing over all machines simultaneously.

4. The argument of consciousness

Only when a machine can write a poem because of thoughts and emotions it has felt, and knows that it has written it, could we agree that machine equals brain. Taken to its extreme, this argument implies that you can only be sure that someone thinks by being that someone yourself (the solipsist position). We could then never find out whether a machine thinks, because we would have to be the machine. A kind of imitation game, the viva voce, is often used to find out whether someone actually understands something or has merely learned it by rote. According to the author, most people who support this argument would sooner abandon it than be forced into the extreme solipsist position.

5. Arguments of various incapacities

The claim is that machines can do some things, but that you will never get a machine to do X, where X stands for all sorts of things: being kind, being friendly, being funny, learning from experience, enjoying ice cream, making mistakes, and so on. No real support is offered for these claims, which, according to the author, mostly rest on scientific induction: a person has seen a limited number of machines in his life and draws general conclusions from them, and the machines he has seen did not do such things. A few of the claims are elaborated below:

  1. A machine cannot make mistakes: we must first look at what "making mistakes" means. Errors of functioning are due to mechanical or electrical faults that cause the machine to do something it was not designed to do; in a philosophical discussion we talk about abstract machines, and these cannot have such faults. Errors of conclusion can only arise when some meaning is attached to the machine's output: for example, a machine that types out calculations can type a false statement, and in that sense make a mistake.

  2. A machine cannot be the subject of its own thought: you can only show that a machine is the subject of its own thought if you can show that it has thoughts about some subject matter at all. For example, if a machine is working on an equation, one could say that the equation is at that moment the subject of its thought.

  3. A machine has no diversity in its behaviour: this amounts to saying that a machine does not have enough storage capacity. The claims listed above are, for the most part, connected with the notion of consciousness.

6. The objection of Lady Lovelace

Lady Lovelace wrote of the Analytical Engine that it "has no pretensions to originate anything. It can do whatever we know how to order it to perform." Hartree adds that it may become possible to make a machine think, but that this seemed impossible at the time. One form of Lady Lovelace's objection is that a machine can never really do anything new; another form is that a machine can never surprise us.

7. The argument of continuity in the nervous system

The nervous system is certainly not a discrete-state machine (on the contrary, it is continuous), so, the objection runs, you can never reproduce the behaviour of the nervous system with a discrete-state machine. It is true that discrete is not the same as continuous, but according to the author this difference gives no advantage in the imitation game.

8. The argument of informality of behavior

It is not possible to draw up a set of rules that anticipates every conceivable situation. The objection runs: if every person had a definite set of rules by which he regulated his behaviour, he would be no better than a machine; but there are no such rules, so people cannot be machines. A distinction must be made between two expressions:

  1. Rules of conduct: for example, knowing that you must stop when a traffic light is red. You can follow such rules and be consciously aware of doing so.

  2. Laws of behaviour: these concern what a person's body does. When you are pinched, your body responds, whether you want it to or not.

We have never finished investigating all laws of behaviour. That is why you can never say definitively "there are no such laws, so we can never be machines": there are no circumstances under which we could be sure that the search for such laws has been thorough enough.

9. The argument of extra-sensory perception (ESP)

The last argument concerns extra-sensory perception (ESP) and terms such as telepathy, clairvoyance, precognition and psychokinesis. These ideas run counter to our usual scientific views, yet there is said to be evidence (for telepathy, for example). Suppose the imitation game is played with a computer and a telepathic person. The interrogator can ask: "What suit does the card in my hand belong to?" Out of 400 cards, the telepathic person might give the right answer, say, 130 times, while the computer, guessing at random among four suits, gets it right about 104 times (close to the 100 expected by chance). In this way the interrogator can tell which is the computer. If the interrogator has psychokinetic powers, the computer may give the right answer more often than probability would suggest; if the interrogator is clairvoyant, he can make the identification without asking questions at all. In short, with ESP almost anything can happen.

Learning machines

Building on Lady Lovelace's objection, you could say that you put an idea into a machine, it responds briefly, and then falls silent again. Another comparison is with an atomic pile. Suppose the pile is of sub-critical size: an idea presented to the machine corresponds to a neutron entering the pile. The neutron causes a disturbance that eventually dies away. If the size of the pile is increased beyond critical, however, the incoming neutron sets off a chain reaction that keeps growing until the whole pile is involved. Something similar exists for minds. Most minds are sub-critical, but a small fraction is super-critical: an idea presented to such a mind gives rise to a whole "theory", consisting of secondary and further ideas. The question then becomes: can a machine be made super-critical?

Various operations of the mind can be explained in purely mechanical terms, and we tend to say that these do not correspond to the "real" mind. You can compare the mind with an onion: to get to the "real" mind, you would have to peel off the layers one by one. The question is whether you then eventually arrive at the "real" mind, or whether in the end nothing is left.

Compared to nerve cells, modern machines that mimic the nervous system already work about a thousand times faster, so speed is not the problem. What must be found is the way in which machines should be programmed to play the game. To achieve this, we have to look at three components:

  1. The initial state of the brain (around birth)

  2. The education to which the brain is subject

  3. Other experiences, apart from education, to which the brain is subject

The idea is that if you start by imitating the brain of a child, then through education it will eventually come to resemble the brain of an adult. A child's brain contains relatively little built-in programming. In this way the problem is divided into two parts: the child programme on the one hand, and the education process on the other.

The comparison with evolution is as follows:

Structure of the child machine     =    hereditary (genetic) material
Changes to the child machine       =    mutations
Judgement of the experimenter      =    natural selection
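A minimal sketch of this analogy (my own construction; the single numeric parameter and the scoring function are purely hypothetical): changes to the "child machine" play the role of mutations, and the experimenter's judgement plays the role of natural selection by keeping only the changes that score better.

```python
# Mutation and selection reduced to a toy hill-climbing loop.

import random

def judgement(machine):
    # Hypothetical score given by the experimenter; higher is better.
    return -abs(machine - 42)

child = 0.0                                  # initial structure of the child machine
for _ in range(1000):
    mutant = child + random.gauss(0, 1)      # a mutation
    if judgement(mutant) >= judgement(child):
        child = mutant                       # selection: keep the better structure

print(round(child))                          # ends up near 42
```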

The experimenter can ensure that mutations and the process of natural selection proceed much faster and more efficiently than in nature. The machine must be able to learn through conditioning. But if, apart from reward and punishment signals, there is no other channel of communication, the amount of information reaching the machine can never exceed the total number of rewards and punishments given; by the time the pupil had finally learned to repeat a single word, it would be black and blue from punishment. So there must also be an unemotional channel of communication, for example a (symbolic) language in which orders can be given. The system must then contain definitions and assertions, consisting of facts, formulas, propositions given by an authority figure, and so on. The teacher can then say to the machine: "Do your homework now."
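The following sketch (again my own, not from the paper) illustrates learning purely by conditioning: the teacher communicates nothing except reward or punishment, so the information reaching the machine is limited to those signals.

```python
# Learning purely by conditioning: only reward (+1) or punishment (-1) is communicated.

import random

preferences = {"yes": 1.0, "no": 1.0}          # the child machine's initial tendencies

def act():
    actions, weights = zip(*preferences.items())
    return random.choices(actions, weights=weights)[0]

def condition(action, signal):
    # signal is +1 for a reward, -1 for a punishment; nothing else reaches the machine
    preferences[action] = max(0.1, preferences[action] + 0.5 * signal)

for _ in range(20):
    a = act()
    condition(a, +1 if a == "yes" else -1)      # the (hypothetical) teacher wants "yes"

print(preferences)                              # "yes" is now strongly preferred
```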

The imperatives that a machine carries out can include imperatives that regulate the order in which the rules of the logical system are applied. Think of rules such as: "If one method has been shown to be faster than another, never use the slower method." Such statements can be learned either from authority figures or by the machine itself (through scientific induction).

The apparent paradox of the learning machine is that the machine learns rules that it must obey, yet these rules can themselves change during learning. This is resolved by noting that the rules that change are only of temporary validity.

As a result, the teacher will often not know exactly what is "going on inside" the machine. The argument that a machine can only do what we tell it to do then no longer holds.

A random element in a learning machine can have advantages when searching for a solution. If you search systematically, you may spend a long time in a region where no solutions exist at all. With a random element there is a greater chance of hitting on a good answer sooner (see the sketch below). The systematic method is not possible in evolution anyway: you cannot keep track of which genetic combinations have already been tried.
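A small illustration of this point (the numbers and the "solution" property are made up): a systematic search must first wade through a large region containing no solutions, while a random search is not tied to that region.

```python
# Systematic versus random search for a solution (illustrative numbers only).

import random

def is_solution(n):
    return n >= 900_000            # the first 900,000 candidates are all dead ends

def systematic_search(limit=1_000_000):
    for n in range(limit):         # must wade through the empty block first
        if is_solution(n):
            return n, n + 1        # solution and number of candidates examined

def random_search(limit=1_000_000):
    tries = 0
    while True:
        tries += 1
        n = random.randrange(limit)
        if is_solution(n):         # 10% of candidates qualify, so ~10 tries expected
            return n, tries

print(systematic_search())         # (900000, 900001)
print(random_search())             # e.g. (953214, 7)
```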

The author expects that machines will eventually be able to compete with humans in purely intellectual fields, but the question remains which type of machine is best suited to start with.
