Science 2.0. Threats and Prospects of Artificial Intelligence
Transcript of the video: recorded and translated by Alexei Samsonovich
Alexei Samsonovich is a specialist in artificial intelligence, a Research Assistant Professor from the Krasnow Institute of George Mason University.
AK: Today our guest is Alexei Samsonovich, Research Assistant Professor at the Krasnow Institute of George Mason University. Such a paradox: all of this is not in a Moscow or Leningrad suburb but, on the contrary, in America, and even in Virginia, if I am not mistaken, judging by your business card. Alexei, hello.
A: Hello.
AK: Please, in two words, even in one word: Krasnow is not a random name, even if not the best-known one in America, is it? Surely he must have some connection to what you are doing now? Or do we not know who this Krasnow was?
A: The connection is rather remote. He was an engineer who designed houses at the beginning of the previous century, and when he died, he left his money designated for the development of science. The Institute was founded on his money.
AK: What a bright person. Now, back to what Alexei Samsonovich is doing. Speaking very generally, without going into details, it is artificial intelligence, am I right?
A: Right. Or rather, I work at the intersection of artificial intelligence, neuroscience, and cognitive science. In a nutshell.
AK: Alexei pre-loaded us with fancy terminology before the program started. Therefore, I propose that we start our conversation by agreeing on this terminology, and then talk about artificial intelligence and neuroscience. The first thing Alexei Samsonovich told us was that, in his opinion, now is the time for artificial intelligence. In a moment you will tell us in detail why. And the second thing was also an intriguing combination of words: “biologically inspired cognitive architectures”. As I understand, this book - your book? Are you one of the authors?
A: I co-edited this book.
AK: ..is called “Biologically Inspired Cognitive Architectures”. So let us agree on the terms.
A: The term “artificial intelligence” was coined in ’55..
AK: In 1955.
A: Yes, in 1955, when it was proposed to build an artificial system with the same cognitive capabilities as those that humans possess, including the abilities to use and translate natural language, solve mathematical problems, and more generally, to do any cognitive tasks that humans are capable of doing.
AK: Excuse me, why in 1955? Did the idea just suddenly come to somebody’s mind? Or was there an objective necessity? Why did people suddenly say, in 1955, “what if we make a machine that can translate and think”? Why?
DI: Not necessarily a machine.
A: First, the objective precondition was the emergence of computers. Besides, some mathematical logic had been developed by that time. It seemed to people that only one step separated them from an artificial system that could replace humans.
DI: Could start playing chess..
A: Play chess at the level of a world champion... There was a prediction that this would happen within 10 years; it happened somewhat later..
AK: but happened.
A: but it happened, while other predictions were not fulfilled. For example, scene recognition, which at the time seemed a toy problem, has not been solved to this day: a machine still cannot visually distinguish an accident from normal traffic in a city. In other words, the task turned out to be much more difficult than initially believed. Then, after many promises, efforts, and investments that did not produce the expected outcome, disappointment set in. That is why it is now very difficult to convince the same funding agencies of the need to invest even more in this field. At the same time, ironically, I think that right now is exactly the time to take this challenge seriously. The goal is to understand how the brain works.
DI: How the brain works physiologically?
A: Not only physiologically, but also at the cognitive level. To understand how concepts are represented in the brain, how information is processed, how emotions emerge, why a human sets certain goals, how a human personality develops; in general, how is it that a human is capable of being educated? For instance, a school student who reads a book and attends lectures masters a great deal on his own. No computer today is capable of this.
DI: Simpler: a baby is born, and within two years it can speak.
A: Exactly. No computer is capable of doing this today, but in a short while computers will become capable of doing this.
DI: Humans will build computers that will be capable of doing this? Or, a computer will do this by itself? Like a computer is sitting on a desk, and then suddenly it can do this?
A: Of course, it cannot suddenly undergo a metamorphosis by itself. However, eventually computers will be creating their own kind and improving themselves. Today some people are talking about the Singularity. I mean, predictions are being made by some scientists and fiction writers that..
DI: Not only fiction writers.
AK: What Singularity? Stephen Hawking is not a fiction writer, and I have heard him speak about singularities several times..
A: There is a man named Vernor Vinge, who is simultaneously a professor in California and a science fiction writer. He believes that in the near future a gigantic explosion will happen that has no analogs in the whole history of mankind, or more generally, in the entire history of the world since the Big Bang. The idea is that a new level of artificial intelligence will emerge, one able to develop itself, and it will therefore exceed the human level. After that, humans will no longer be necessary. Intelligent machines will emerge in huge numbers; they will program themselves, design and build new machines, and so on. His main point is that we will not be able to understand it, and this is what distinguishes this new explosion from all previous catastrophes and revolutions. The difference is that previously people could imagine, before the event, what in principle was going to happen. Now we are not capable of imagining what is going to happen. That is his opinion.
DI: In our program “Science 2.0”, this idea is used almost as a regular reference: modern science deals with things that a human cannot imagine. It refers to many fields of science.
A: But today’s science is still being advanced by people, and therefore at least somebody is capable of imagining the next steps beforehand: at least those people who take those steps. In our case, however, the step will be made by machines, not by humans, and it may be difficult for humans to..
AK: What grounds does this Vernor Vinge have to speculate that something like this will happen?
A: The grounds? For example, Moore’s law: when you plot the growth of computer processor speed and memory over time,
AK: then it becomes a vertical line.
DI: A singularity.
A: Also, when you plot the level of tasks solvable by those computers against human abilities, you see that the extrapolated curve goes no one knows where. It grows faster than an exponential; therefore, there must be an explosion. By the way, this is faulty logic.
DI: The exponential goes like this?
A: The exponential goes like this, and what we have grows faster.
DI: More vertical, I understand.
AK: Vertical, in fact.
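The “faster than exponential” intuition behind this argument can be made precise with a simple calculation (an illustration added here, not Vinge’s own derivation): exponential growth stays finite at every finite time, whereas super-exponential (hyperbolic) growth reaches infinity in finite time.

```latex
% Exponential growth never blows up:
\dot{x} = kx \;\Rightarrow\; x(t) = x_0 e^{kt}, \quad \text{finite for all } t.
% Hyperbolic (super-exponential) growth does:
\dot{x} = kx^2 \;\Rightarrow\; x(t) = \frac{x_0}{1 - k x_0 t},
% which diverges at the finite time
t_* = \frac{1}{k x_0} \quad \text{-- a genuine mathematical singularity.}
```

The fallacy Samsonovich alludes to is in the extrapolation itself: nothing guarantees that a curve fitted to past data keeps following the same law all the way to its singular point.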
AK: And what is the time frame before it happens?
DI: How long do we have left?
AK: 20 years, I’ve heard in one version.
DI: Still have enough time.
A: Vernor Vinge gives it 20 to 30 years. He says he will be very surprised if it does not happen within 25 years.
AK: And how old is he?
A: He is over 60, looks rather aged.
AK: Slowly rubs his hands, gets ready to greet artificial intelligence...
A: Actually, he is not alone. There is the whole community there...
DI: There is the movie The Matrix..
A: The movie The Matrix, yes. There is also Ray Kurzweil, who says approximately the same thing. And there is Ben Goertzel, a co-editor of this book, who pursues roughly the same line..
AK: As an expert in artificial intelligence, do you share their concerns?
A: Not exactly. That is, I do not think there will be an explosion, but I think a big breakthrough is not only possible but inevitable. I mean, if it does not happen in one part of the world, it will happen in another. And, of course, I think it is undesirable for this breakthrough to happen in an uncontrolled way; for example, as a runaway process on the free market. If the market is suddenly flooded with a huge number of tools capable of doing more than humans can anticipate, capable of developing themselves, and so on, then nobody will be able to restrain the process. An avalanche will start, and in this scenario Vernor Vinge may be exactly right that nobody will be able to imagine the consequences of the catastrophe. Therefore, it seems to me that the whole process needs to be taken under control in advance by some state government.. In reality, it is easy to keep a machine under control: when you unplug it, it stops.
AK: So far, yes.
DI: And what architecture do you see necessary for the breakthrough to happen?
A: I believe that the breakthrough will be based on metacognitive architectures. There is today a not yet very popular direction of research in artificial intelligence called cognitive architectures. Some people in Russia call it production programming, as I learned recently. This is the approach originally started by Allen Newell in America. The idea was to simulate human thinking, not at the level of neurons, but at a higher level of abstraction, where a set of rules is introduced (also called productions), which themselves act as objects in some virtual environment. They fire when certain conditions emerge, signaling that the given rule should be engaged. In principle, this process can be called non-algorithmic. It is just a complex dynamical system, which is nevertheless endowed with semantics..
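The rule-firing cycle described above can be sketched in a few lines of code. This is a minimal, illustrative toy (the rule contents and function names are invented for this sketch, not taken from any particular architecture): each production fires when its condition matches working memory, adding new facts, and the loop runs until nothing new can fire.

```python
# Minimal production-system sketch: a production (condition, action) fires
# when its condition is contained in working memory, adding new facts.

def run_productions(working_memory, productions, max_cycles=10):
    """Fire matching productions until quiescence or the cycle limit."""
    wm = set(working_memory)
    for _ in range(max_cycles):
        fired = False
        for condition, action in productions:
            # Fire only if the condition holds and the action adds something new
            if condition <= wm and not action <= wm:
                wm |= action
                fired = True
        if not fired:  # no rule matched: the system has settled
            break
    return wm

# Illustrative rules, echoing the traffic-scene example from the interview
productions = [
    (frozenset({"car_stopped", "car_damaged"}), frozenset({"accident"})),
    (frozenset({"accident"}), frozenset({"call_for_help"})),
]
result = run_productions({"car_stopped", "car_damaged"}, productions)
# result accumulates the derived facts "accident" and "call_for_help"
```

Note that no line of this loop prescribes *which* conclusion will be reached; the outcome emerges from the interaction of rules and data, which is the sense in which such a system behaves as a dynamical system rather than a fixed algorithm.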
DI: In simple words, please?
A: OK, in simple words. We all have in our heads a set of concepts, or templates. As a result, we are capable of recognizing what we already know. If you see something totally new, you will have difficulty understanding what it is you are seeing.
DI: I will simply not see it.
A: You will simply not see it. That is, when you see a familiar thing, a template for that thing is activated in your long-term memory; or a schema of that thing. Then this schema starts interacting with other schemas, and as a result of this interaction thoughts emerge and flow, and you perceive them as your stream of consciousness.
DI: Is it perception or thinking?
A: It is thinking. Of course, it includes perception, cognition, decision making, action control, and also memory formation. All of these are processes built on the same common principles and substrates. We are currently developing a cognitive architecture based on schemas as its primary elements. These schemas are grouped into mental states. Every mental state is like a snapshot of your consciousness: a snapshot of what you are thinking at a given moment. The idea is that several such mental states may be active in your mind simultaneously, and they may interact with each other. Therefore you can, for example, imagine what your colleague is currently thinking, or look at yourself from a third-person perspective. All this is very important. This multiplicity of active mental states is a distinguishing feature of our architecture.
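The “several simultaneously active mental states” idea can be sketched as a data structure: each state is a labeled snapshot of active schemas, attributed to a perspective (my own, my colleague’s, myself seen from outside). The class and label names below are illustrative assumptions for this sketch, not the actual implementation of the architecture discussed.

```python
# Sketch: a mind holding several simultaneously active mental states,
# each a perspective-labeled snapshot of currently active schemas.

from dataclasses import dataclass, field

@dataclass
class MentalState:
    perspective: str                              # whose viewpoint this snapshot is
    schemas: set = field(default_factory=set)     # schemas active in that state

class Mind:
    def __init__(self):
        self.states = {}                          # perspective -> MentalState

    def activate(self, perspective, schema):
        """Activate a schema within the mental state for a given perspective."""
        state = self.states.setdefault(perspective, MentalState(perspective))
        state.schemas.add(schema)

    def imagine(self, perspective):
        """Inspect the situation from another perspective, e.g. a colleague's."""
        return self.states.get(perspective, MentalState(perspective)).schemas

mind = Mind()
mind.activate("I-Now", "writing_report")          # my own current state
mind.activate("He-Now", "reading_report")         # a model of the colleague's state
# Both mental states are active at once and can be queried independently
```

The point the sketch makes is structural: because states are first-class objects rather than one global context, the system can hold and compare several of them at once, which is what enables third-person self-inspection.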
DI: So, you say that your architecture differs from others because you allow for the co-existence of several parallel processes?
A: Mental states. Elements of those mental states are known by different names: productions, operators, chunks, frames, schemas, or just objects.
AK: Chunks sounds funny. I think I am missing a chunk, so I need to go and have dinner.
DI: Right now?
AK: No, later. So, one mental process is a chunk?
A: A chunk is again a template containing elements that can be bound to a certain context and to the information available in that context. The same can be said of a frame, a schema, and so on. Furthermore, if you take ordinary object-oriented programming, you will see that its notion of an object implies the same features. So this is a very general notion. What is the difference between the system we are talking about and a standard object-oriented programming language like C++? The difference is that our system works by itself. As you know, code in C++ has to be written manually and is then executed exactly as written. In contrast, our system in some sense generates its own code and then executes it. In this sense it is autonomous and capable of thinking on its own.
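The idea of a template whose elements get bound to the available context can be shown concretely. The sketch below is an assumed, minimal rendering of a “schema with bindable slots” (all names are invented for illustration); chunks and frames follow the same pattern.

```python
# Sketch: a schema is a template with named slots; an instance is created
# by binding every slot to information available in the current context.

class Schema:
    def __init__(self, name, slots):
        self.name = name
        self.slots = slots                        # slot names the schema requires

    def bind(self, context):
        """Return a bound instance if the context fills every slot, else None."""
        if all(s in context for s in self.slots):
            return {s: context[s] for s in self.slots}
        return None

eat = Schema("eat", ["agent", "food"])

# A rich enough context binds the schema; extra context is simply ignored
binding = eat.bind({"agent": "student", "food": "lunch", "place": "cafeteria"})
# binding == {"agent": "student", "food": "lunch"}

# An impoverished context fails to bind, so the schema stays inactive
no_binding = eat.bind({"agent": "student"})
# no_binding is None
```

The contrast with hand-written C++ is that here nobody writes the bound instances in advance: which schemas become active, and with what bindings, is decided at run time by whatever the context happens to contain.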
AK: And this program, let us name it again..
DI: Is this program written already?
AK: You say this program allows for..
DI: This is a program that can adjust itself, using its own prototypes, and does so in such a way that it looks as if it were thinking.
A: For the sake of the argument I called it a program, but this is what we call a cognitive architecture.
AK: Understood. Now look, we have been discussing this question during our entire program, I mean, our TV program. Where is the boundary between an artificial intelligence and a mere computer program that is not yet an artificial intelligence? I guess you had something in mind when you gave the example of the 1950s dream of building a machine that plays chess better than a human. And here we are: that dream is now reality. Maybe that chess program is already an artificial intelligence? Or not yet?
A: That example is a product of the field of science that is today called artificial intelligence, although the goal of the field per se is much bigger, of course. It includes the creation of a functional equivalent of human consciousness. The main difficulty lies exactly in understanding the final goal. In other words, the challenge for us is to formulate precisely what it is that we want to achieve in the end.
AK: In the end, we want to free ourselves from work. Because all jobs must be performed by some artifacts.
A: (laughing) This is what people want to achieve in the end, you are absolutely right. We do not know what will happen in reality, however. It is possible that the initial phase will be exactly like this: when artifacts replace people, people will start breathing freely. Many problems in the world will be solved.
DI: What problems?
A: For example, the problems of hunger and of resources. Because it is a matter of who is dealing with those problems. You see, if we offload all our tasks to machines, they will be able to do much more than we do.
DI: There is also another problem: free time.
A: Yes, free time.
DI: Will the machines solve this problem for us as well?
A: If machines do our work, then we will have free time.
DI: And what next? You see, free time is not a benefit, but the opposite. It needs to be utilized.
AK: At first, we will have a good sleep. And what next?
DI: Then we will get drunk. And what next?
A: Today, one of the main driving forces of research in artificial intelligence is computer games, the computer games industry, and entertainment in a broader sense: computer programs used to create movies, for example. Many features available to people on the Internet are also implemented with artificial intelligence tools.
DI: Is there a teleological hypothesis behind all this? Or is this question irrelevant?
A: Today there are many discussions about philosophy of consciousness and the question of whether it can be reproduced in a computer..
DI: The purpose? Goal setting? What for? When a man starts thinking about himself, he inevitably asks, “Why do I live, for what purpose?” This happens on all levels, from a great philosopher to the man in the street. There is always the question “Why am I?” and also “Who am I?” Then the subject is usually helped out with an answer. There are standard schemes for this help: cultural, religious, social and family, and so on. Now you say: there will be an explosion, or something big will happen. Doesn’t this prompt the question “why”?
A: So that people can live a better life. I believe that artificial intelligence will help humanity solve many of its problems. And this will perhaps be the least dangerous revolution that can happen in our time. I mean, take genetic engineering, for example: it involves many great dangers. Take nuclear energy, take any field of science: quantum physics that may discover other worlds...
DI: All the dangers in those sciences are, of course, very beautiful and horrible; however, compared to the danger you have just described, they are nothing but child’s play. Mere nonsense. Can you imagine people who produce nothing?
A: In fact, all the arguments here point in exactly the opposite direction, because artificial intelligence is a tool of creation, not destruction, in contrast with other possible products of science..
AK: I do not know..
DI: So, what about the peaceful atom?
AK: Sorry, stop: we are running out of time. What I understood from Alexei’s words is that artificial intelligence is a tool of creation. What it creates for people is free time. It will produce it in incredible industrial quantities. And from then on we will feel absolutely clueless about what to do with this free time, because it means we will have nothing to do. Nothing at all.
DI: What is then the sense of human existence? What is the purpose of man?
A: The purpose of man is to create something greater than his own kind.
DI: Oh, I see. Then that’s it. OK, suppose the man creates it, and what then?
AK: Sleep, sleep, sleep.
DI: Then all people will be wearing kippahs and praying. Good idea.
A: I also wanted to tell you about the scientific society.
AK: Tell, only briefly.
A: We recently formed a Biologically Inspired Cognitive Architectures Society. To join it, everybody is welcome to send an application through our web site. The address is “bicasociety.org”.
AK: Aha! After what you have told us, I do not think we can promise you a lot of people. After this program, the idea just sounds kind of scary. Better yet, here is what is really important, my friends: if you encounter an artificial intelligence, remember that it must have a plug.
DI: 3 pins or 2 pins?
AK: Does not matter. Any plug. This is a fundamentally important feature, so that you can unplug it. Just pull it out.
DI: And then stick that fork into sprats?
AK: That we will do together. That is what AI will guarantee to us. Just pull the plug out of the electric outlet. As long as we have the plug, we are safe. Alexei Samsonovich was with us today, a Research Assistant Professor from the Krasnow Institute for Advanced Studies of George Mason University, that is in Virginia, USA. Thank you very much. And welcome to the Biologically Inspired Cognitive Architectures Society. Thank you.
A: Thank you.