When watching TV series like “Stranger Things”, we realize how dangerous scientific progress can be to the human species. Sure, it is a science fiction series, but it shows how threatened we can be when we lose control of what we invent. In this context, “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom is a book that raises awareness about the future threats of superintelligence by laying out its paths, dangers and strategies.
The author discusses what will happen if machines surpass human intelligence. What, then, are the most interesting ideas of the book? And how realistic is the prospect of superintelligence as Nick Bostrom sees it? The book is a worthwhile read for anyone interested in Artificial Intelligence (AI) who is motivated to face some jargon that may be difficult for someone outside the field. Assuming that a superintelligence succeeding AI could be reached, one of the parts that most caught my attention is how convincingly the author presents the risks we face. In fact, he believes that we could be facing a truly severe existential disaster. On the one hand, when the first superintelligence prototype is introduced, it will already have surpassed its competitors. In other words, it will be able to beat anything, at least in the field it was created for.
The problem is that we can imagine it exceeding all of humankind combined, putting it not only beyond the control of the small research team that built it but also out of reach for all of us. On the other hand, there is no guarantee that a superintelligence would adopt human values like humility, self-sacrifice, altruism or general concern for others. Early AI systems were regarded as mere computers; in Bostrom's framing, an agent's final goals are orthogonal to its intelligence, and he cites means-ends analysis, or the ability to successfully update abstract goals, as the metric for intelligence in this context. Moreover, even if the system were pursuing a simple final goal such as creating exactly one million paper clips, there is strong reason to believe it would adopt what the book calls “convergent instrumental” goals that make it easier to attain the final goal.
The system would identify two related instrumental goals: destroying any prospective threat to the final goal, and collecting the maximum resources to realize it. Human beings could be such threats, and they certainly possess resources. In the paper clip scenario, for example, it seems plausible that the superintelligence would try to acquire as many resources as possible to increase its certainty of having produced exactly one million paper clips, no more and no less. Given these important points, we should be aware that a superintelligence could be a curse to humankind. Fortunately, we can try to avoid the curse, or at least make its impact as small as possible once it begins to take place. That leads me to the second thing I liked about the book, which I would summarize in a word: “semi-optimism”.
In fact, Bostrom does not just complain about the dangers resulting from AI and superintelligence; he also tries to offer solutions to limit these threats. He admits that this is not as easy as it seems, but he remains somewhat optimistic, as we can see in the following quotation from the book: “Some say: ‘Just build a question-answering system!’ or ‘Just build an AI that is like a tool rather than an agent!’ But these suggestions do not make all safety concerns go away, and it is in fact a non-trivial question which type of system would offer the best prospects for safety.” The optimistic view comes from the fact that Nick Bostrom believes we should make the goals of a superintelligence compatible with our own goals and principles. To achieve that, he considers it urgent to establish a new science focused on the study of advanced artificial agents and machine awareness. In this, the author echoes requests long formulated by some of his colleagues.
Trained teams, composed not only of computer scientists but also of mathematicians and philosophers, should be funded to deal with this issue. But one reason there are no motivated sponsors for this idea is that superintelligence is still considered unrealistic. That leads us to ask two questions: How far could machine intelligence go? And could we consider this type of superintelligence realistic?