They say that computer intelligence will pass the Turing test one of these days. Will this mean that computers have become rational?
Of course not. It will mean the solution of a much more modest task: a high-quality imitation of the human mind. Developers of artificial intelligence systems do not yet know how to teach a computer to be aware of itself, how to reproduce that amazing phenomenon, the sense of "I am", which is familiar to every person regardless of IQ. Not that the secret of human consciousness has been uncovered so far, either…
However, once the Turing test is passed, self-awareness will most likely be declared the main task and the basic criterion of the computer's "humanization".
So, let's continue the theme of relations between people and robots. The question of whether robots will ever become conscious of themselves, and what could serve as evidence of that transformation, goes far beyond purely scientific disputes. As early as 2007, the South Korean government was discussing a so-called ethical charter regulating the relationship between people and robots, developed by local futurologists. It included not only Isaac Asimov's well-known laws of robotics, but also clauses on protecting robots from abuse, on regulating marriage and sexual relations between humans and robots, and so on.
Let us also recall the 2006 forecast by the British research companies Ipsos MORI and Outsights and the Institute for the Future (USA), commissioned by the British government. The researchers concluded that it would take 20 to 50 years for robots to be granted not only civil rights but also duties, such as voting in elections, paying taxes and performing military service.
Of course, to obtain such rights and obligations, robots must have freedom of choice. And only one who is aware of itself can choose freely. Without this condition it is not only senseless but also dangerous (for people) to speak of equality between humans and robots. Certain economic and political forces could exploit robots that have not reached the stage of self-awareness but are recognized as self-aware and therefore equal to humans. For example, a financial and technological clique could shift the distribution of votes in elections and "push through" laws unpopular among human voters with the help of "pseudo-rational" robots under its control. Such a plot is nothing new for our planet: how many times in the past have dictators come to power through democratic elections by playing on the subconscious instincts of the crowd (by this word I mean a plurality of people unaware of their true interests and therefore easy to manipulate)…
So, sooner or later we will teach a robot to communicate and behave like a rational creature. Perhaps its developer will assure us very convincingly: "Gentlemen! My artificial neural network has long been no inferior to the human brain in accumulated experience and associative power. And yesterday I successfully integrated a self-awareness chip into this network!" How do we make sure that this robot is really aware of its "self" the way a real person is? It is a non-trivial task, considering that even the transfer of human consciousness onto a storage medium, if it is ever possible, does not give one-hundred-percent proof.
Let's assume that an "informational mould" of your personality has been transferred to a storage medium. For some time your "self" acted within the body of a robot, and then, enriched with lived experience, returned to its usual biological medium. Maybe you will enthusiastically convince others: "Oh, yes! I was aware of myself in the body of the robot! It was so unusual." However, having calmed down, you, as a conscientious researcher, will realize that during the experiment your personality underwent a double "conversion". Consequently, there is a non-zero probability that the awareness of your "self" in the body of the robot was an illusion of post-perception. In other words:
1. In fact, your "self" disappeared at the moment of transfer to the storage medium, and you personally fell into a coma.
2. The robot spoke and acted according to its own machine algorithms, based on the received "bare" information about your personality.
3. Along the way, the robot accumulated new information about external and internal events. External events were recorded by its sensors, while the events of its "inner world" were logical and emotional responses to external stimuli. Oh yes, a robot can behave like an emotional creature (when subprograms such as "fear", "surprise" and "happiness" are set up to fire in response to certain stimuli) while being absolutely unaware of itself.
4. When the newly accumulated information was transferred back to your brain at the end of the experiment, you automatically interpreted it as experience really lived through by your "self" (because the human brain knows no other way to handle such information). In other words, you had the illusion that you had been aware of yourself in the body of the robot.
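The stimulus-response "emotions" from point 3 can be sketched in a few lines. This is a deliberately crude illustration of the argument, not a claim about how real robots are built; all names here are invented for the example.

```python
# The robot's "inner world" as a mere lookup table: each stimulus triggers
# a pre-programmed emotional subprogram. No self-awareness is involved.
EMOTION_SUBPROGRAMS = {
    "loud_noise": "fear",
    "unexpected_gift": "surprise",
    "task_completed": "happiness",
}

def react(stimulus: str) -> str:
    """Return the emotional response wired to a stimulus (a table lookup)."""
    return EMOTION_SUBPROGRAMS.get(stimulus, "neutral")

log = [react(s) for s in ["loud_noise", "task_completed", "rain"]]
print(log)  # ['fear', 'happiness', 'neutral']
```

An outside observer watching `log` sees an "emotional" creature; inside, there is only the table.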
By the way, if technologies for converting consciousness ever become a reality, the legitimacy of the arguments presented above could be confirmed by a simple experiment. One could take a "personality mould" of a robot and transfer it wholly or partially into a human brain (I will point out, just in case, that this robot and this person have "never met"). If the person has the illusion that he "has been living" in the body of this robot, then the error of perception in the double conversion of consciousness is indeed taking place.
To be fair, I should add that if consciousness-transfer technology ever becomes widespread, it may lead to public acceptance of robots' self-awareness in spite of the paradox described above. Why? For the same reason that people recognize the existence of other people.
Indeed: how do you know that I am not a figment of your imagination? That I really exist? That I also have my "Self", which also sometimes wonders, in the "solitary confinement" of my body, whether an external world and other people exist?
All of us have asked this question at some point. And we can never get an answer (at least until people learn to merge their consciousnesses). Nevertheless, we believe that other people exist. I think we believe it not because we have no other choice; I believe it is the effect of comparing our images of the world.
Everyone automatically builds a personal picture of the world and unconsciously "tests" it for contradictions (this is necessary for survival). The words and actions of other people have a special status here: they reflect their autonomous pictures of the world and help a person refine his own (that is, survive). Every person unconsciously compares his worldview with those of others and, of course, finds differences. When a difference proves useful, he adopts another's experience and worldview. However, while constantly revealing differences, no one finds fundamental contradictions (contradictions at the level of the laws of nature) between different pictures of the world. This is his proof that the outside world and other people exist. After all, if other people did not exist in reality and life were like a dream, we would constantly run into absurd contradictions in our images of the world (and wake up!).
Here is a simpler example. I cannot be sure that I see the colors of objects "correctly", as "other people" see them. I cannot even be sure that I am not color-blind compared with other people, that all the colors available to me are not really a set of "grayscale" shades from the perspective of others. However, the comparison of world pictures, constantly carried out on an unconscious level, allows me to conclude that my color perception is all right.
Likewise, with the mass transfer of consciousnesses onto storage media, comparing our experiences with the experiences of others could lead people to recognize de facto the ability of robots to be aware of themselves.
The problem is that even if the conversion of consciousness is possible in principle, it will not be implemented before the appearance of truly intelligent robots (and probably much later). Therefore, we need a simpler test, like the classic Turing test.
How to penetrate into the inner world of a robot?
The standard interpretation of the Turing test is this: "A person interacts with one computer and one human. Based on the answers to his questions, he must determine whether he is talking to a person or to a computer program. The computer program's task is to mislead the person into making the wrong choice."
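The classic setup can be sketched as a short simulation: a judge questions a hidden human and a hidden program, then guesses which label belongs to the human. The participants below are toy stand-ins invented for this sketch, not a real testing framework.

```python
import random

def run_classic_test(ask_judge, human_reply, program_reply, questions, rng=random):
    """Return True if the judge correctly identifies the human (program fails)."""
    # Hide the two interlocutors behind anonymous labels.
    pairing = {"A": human_reply, "B": program_reply}
    labels = list(pairing)
    rng.shuffle(labels)
    transcript = {
        label: [(q, pairing[label](q)) for q in questions] for label in labels
    }
    guess = ask_judge(transcript)  # judge names the label it thinks is human
    truth = next(l for l in labels if pairing[l] is human_reply)
    return guess == truth

# Toy participants: the program gives itself away by echoing questions,
# and the judge picks the label with the fewest echoes as the human.
human = lambda q: "an answer drawn from lived experience"
program = lambda q: f"you asked: {q}"
judge = lambda tr: min(tr, key=lambda l: sum("you asked" in a for _, a in tr[l]))

print(run_classic_test(judge, human, program, ["What is it like to be cold?"]))
# True: the judge spots the echoing program
```

The "simulation race" the next paragraph describes is exactly the arms race of making `program` harder for `judge` to catch.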
As you can see, a simple edit of the test ("…to mislead the person into believing that his interlocutor is aware of itself…") merely generates another round of the "simulation race", the next stage of competition between developers and testers: which of them will be smarter?
I believe the solution is to turn the test 180 degrees: let the robot test a human, and watch how it does so. Such a rotation of the test might look like this:
An artificial intelligence (AI) questions a person and analyzes the answers, trying to determine whether its interlocutor is self-aware or is an unconscious computer program. At the end of the experiment the AI announces its verdict and explains how it came to its conclusions. The course of its argumentation is evaluated by a human "jury", which decides whether the AI is "truly rational". In case of doubt, the jury repeats the experiment as many times as necessary.
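The two-phase structure of this reversed test can be sketched as follows. The interfaces are illustrative assumptions, not a specification: the AI produces a verdict together with its reasoning, and the jury scores the reasoning, repeating the round when in doubt.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    subject_is_aware: bool   # the AI's conclusion about its interlocutor
    reasoning: list          # the argument trail the jury will evaluate

def reversed_turing_test(ai_interrogate, jury_evaluate, subject, max_rounds=3):
    """ai_interrogate(subject) -> Verdict; jury_evaluate(Verdict) -> 'rational',
    'not rational', or 'doubt'. Returns the jury's final ruling about the AI."""
    for _ in range(max_rounds):
        verdict = ai_interrogate(subject)
        ruling = jury_evaluate(verdict)
        if ruling != "doubt":   # the jury judges the quality of the AI's
            return ruling       # argumentation, not the verdict's label
    return "doubt"
```

Note that, as argued below, the ruling does not depend on whether `subject_is_aware` correctly labels the subject; only the `reasoning` is on trial.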
As you can see, the test is divided into two phases. For the purposes of the test it does not matter whether the AI labels the person a human or a robot. It has long been an open secret that people entangled in the past and the future often behave unconsciously in the present, so a person participating in the test in a certain frame of mind can quite easily be perceived as a soulless robot. The essence of the test is not to catch our fellow human at a moment of lost connection with his "Self"; it is to catch the artificial intelligence on exactly this point.
Try to figure out whether an interlocutor posing as an astronaut has really been to space if you have never been there yourself. Try to check another person's professionalism if you have never practiced his profession. No matter how logical and consistent you are, your interlocutors will quickly realize that you have no experience of the matter you are asking about. Likewise, an AI that has no consciousness will betray itself as soon as it begins to look in a person for something unfamiliar to it from its own experience.
The participation of a "jury" in the experiment has not only practical but also symbolic significance: in essence, an ordinary jury also decides whether to let some creature live among people.
Allowing an artificial intelligence to test a person's rationality is also symbolic. If we want to involve robots in our human games, we have to give them the choice of whether or not to take us for rational beings.
Thank you for your comments and "likes"!
PS If you like this post, tell Google about it!