I. COVERT PREPARATION

One of the most thought-provoking concepts, in my opinion, was presented in the sixth and eighth chapters, concerning the Covert Preparation Phase. Although the idea of an “AI takeover” is not uncommon in pop culture, reading about such an event in a book as well-written and logical as Superintelligence drew more attention to the gravity of such a scenario. Going through the four phases depicted on the timescale of the takeover scenario, the Covert Preparation Phase seemed highly realistic and eerie. It is often assumed that while developing artificial general intelligence, scientists would easily notice and control such a development before things got out of hand. However, as mentioned earlier in the book, we might reach a point in the future where we have assembled all the digital (and probably physical) structures necessary to form true general intelligence, except for a single final element. That elusive element is abstract: it could be an algorithm, a certain code snippet, or any such modular addition. That addition could be made unknowingly by an individual, kick-starting an AI takeoff. Regardless of the speed of the takeoff, the Covert Preparation Phase plays a very important role in the takeover scenario. As the AI reaches the recursive improvement stage, where its own modifications might not be completely understood by humans, it might start contemplating its takeover behind the scenes, covertly. If we assume that at that point the machine is already beyond our level of intelligence, it would also realize what actions would be taken if it revealed its intentions to its creators; it would probably avoid setting off the alarms placed to control it. The critical point to consider here is whether the AI’s takeover intentions begin before it has surpassed the level of intelligence needed to know that it would (or could) be stopped if it revealed them.
This point is exactly why the Covert Preparation Phase will be the linchpin of the takeover. If executed successfully, the speed of the takeoff will not matter significantly unless it is extremely slow, which is unlikely. This brings the discussion to the skills, or “Superpowers,” that the AI will and should acquire during its Covert Preparation Phase. I found the selection of tasks presented in Table 8 of the book very well-chosen and well-defined. An important property of this particular collection of skill sets is that its elements are indirectly complementary. As the author mentions later, acquiring one of these superpowers could allow the AI to acquire the others in a cascaded fashion. This increases the plausibility of a true takeover scenario and the formation of a singleton, since the process seems less rigid than previously imagined. The sequence of events does not need to develop these skills in a particular order; rather, it can start with any of the six and obtain the others through it. The example scenario in this chapter should be taken with a grain of salt. A superintelligent AI might be as dangerous and sinister as presented in certain parts of the book; however, it could also be benevolent. The point to take from all this is that we should not rule out any possibility, and that the AI research community should raise awareness about the topic, as it has the potential to affect our entire species negatively.

II. PLAUSIBILITY OF SUPERINTELLIGENCE

Previously, the thought of creating a truly intelligent digital entity sounded like an event restricted to science fiction. Until recently, many believed that Artificial Intelligence would (and could) only reach expert level at completing particular tasks.
AI-complete problems seemed as though they would indefinitely plague the field of research, limiting its successes to dummy machines that are only good, or extremely good, at exactly what they were programmed to do and nothing more. This opinion has changed drastically in recent years. Nick Bostrom dedicates an entire chapter of Superintelligence to describing the different paths that could be taken to achieve artificial general superintelligence. It would be difficult for skeptics not to be convinced that at least one of the paths mentioned could eventually lead to superintelligence. However, the realization that artificially creating such a system is highly plausible can begin before ever reading about the paths. The abstract definition that most of us have of “intelligence” makes it difficult to imagine very non-abstract forms of science reproducing it. How can precise, specific sciences like mathematics and computer science produce an entity that can exhibit all the various behaviours we directly or indirectly relate to intelligence? A line of code executes exactly what it is supposed to; therefore, covering the detailed behaviour of an average human in every possible situation they could be placed in would require an enormous amount of information. Once we narrow down the definition of intelligence, things start to get interesting. If we start viewing intelligence as a collection of simple actions accumulated throughout our evolution, the problem seems slightly less complex. Nature did not have the data we can currently collect, yet over a span of approximately 4.5 billion years, its random trial-and-error approach led to the creation of all the flora and fauna on Earth, including humans, the self-proclaimed intelligent beings. Here, Nature can be thought of as a very naive programmer with no idea what it is trying to create, yet with enough time, it managed to produce intelligence.
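Nature’s blind trial and error can be caricatured in a few lines of code. The sketch below is purely illustrative and entirely my own construction (the bitstring “genome,” the fitness function, and all parameters are assumptions, not anything from the book): random mutation plus a keep-if-no-worse rule, with no understanding of the goal, still climbs to the optimum given enough time.

```python
import random

def evolve(genome_len=20, generations=2000, seed=0):
    """Blind trial and error: flip a random bit, and keep the mutation
    only if it does not reduce fitness (here, the count of 1-bits)."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(genome_len)]
    fitness = sum(genome)
    for _ in range(generations):
        candidate = genome[:]
        candidate[rng.randrange(genome_len)] ^= 1  # random "mutation"
        if sum(candidate) >= fitness:              # naive selection
            genome, fitness = candidate, sum(candidate)
    return fitness

# The "programmer" never inspects the goal, yet selection alone
# drives the genome toward the all-ones optimum.
```

The design point is that neither the mutation step nor the selection rule encodes any knowledge of what a “good” genome looks like; accumulated small improvements do all the work, which is the intuition behind viewing intelligence as an accumulation of simple evolved actions.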
Fortunately, we have a huge advantage over Nature. Human programmers have data. Lots of data. Humans have created an entire world of digital content, which is readily accessible to any digital entity. With recent advances in the field of machine learning, we could potentially fast-forward the process of trial and error and take far less than 4.5 billion years to create an artificially intelligent agent. That still leaves the argument that the AI we create through these methods will only be as smart as the collective intelligence of humanity, but not superintelligent. This brings in two paths mentioned in the book, namely genetic selection and whole brain emulation. If machine learning is limited by the content it is given, we could still achieve superintelligence through these two methods, which inherently rely on what we already define as intelligent: the human brain. In the first method, a team of superhuman scientists is genetically bred to be intelligent enough to crack the puzzle of creating a superintelligent AI. The second method would involve a possibly average human brain being completely scanned and simulated, then run at speeds orders of magnitude higher than those of a biological brain. As stated in the book, these two methods in particular will inevitably become feasible at some point in the future. More importantly, neither of them requires us to clearly understand intelligence or reproduce it from scratch. We would be taking the blueprints that Nature gave us and simply improving or accelerating them. If the field of Artificial Intelligence research stalls for any reason, these two methods will remain viable options for achieving superintelligence, regardless of whether we understand exactly how we got there.
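To get a feel for what “orders of magnitude” faster means for an emulation, a back-of-the-envelope calculation helps. The speedup factor below is a hypothetical assumption chosen for illustration, not a figure from the book:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def subjective_years(wall_clock_seconds, speedup):
    """Subjective time experienced by an emulation running `speedup`
    times faster than a biological brain, in subjective years.
    The speedup value is a hypothetical assumption, not a known figure."""
    return wall_clock_seconds * speedup / SECONDS_PER_YEAR
```

At an assumed million-fold speedup, a single wall-clock day corresponds to roughly 2,740 subjective years of thinking time, which is why raw acceleration alone, with no change in the underlying mind, could already count as a form of superintelligence.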