SECTION TWO: THE MEANS FOR EXTINCTION WILL INEVITABLY APPEAR (PART II): DEDUCTION FROM CURRENT SCIENTIFIC THEORIES

Humans have already used the power of science and technology to create some highly destructive instruments, but none of them, whether nuclear weapons, genetic bio-weapons, or other methods of killing, is yet sufficient to cause the extinction of humanity.

Scientific theories are the driving force of technological development. Even though current technological means are not sufficient to exterminate humanity, can we infer the possible existence of such power from our current understanding of scientific theory? If so, it would mean that humanity will eventually master the technical means of self-destruction even without further theoretical breakthroughs. Let us deduce, from a purely theoretical point of view, whether current scientific theories imply any means of destroying humanity.

 

One: Adverse Usage and Self-Awareness of Artificial Intelligence

Artificial intelligence is a branch of computer science that seeks to develop machines possessing intelligence similar to our own. Such machines can think about and deal with problems much as human beings do, and they may have the capacity to surpass humanity in the future.

In the summer of 1956, a group of young scientists gathered at Dartmouth College to discuss simulating intelligence with machines. This was the first time the concept of “artificial intelligence” was introduced. In the sixty years since then, artificial intelligence has made great strides, with substantive breakthroughs in recent years.

In May of 1997, the chess computer Deep Blue, developed by IBM, defeated world chess champion Garry Kasparov, astonishing the world. More surprises were to follow. Chess has comparatively few variations, making it easier to simulate in a computer program. In March 2016, however, the program AlphaGo, developed by Google’s DeepMind, defeated the South Korean Go master Lee Sedol four games to one. This was a true shock. Go is the most complicated of all board games: the number of its possible variations is about 1.43×10^768, more than the total number of atoms in the universe. This makes it incredibly difficult to simulate Go through computer programming.
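Where does that enormous figure come from? It matches 361 factorial, the number of possible orders in which stones could be placed on the 361 intersections of a 19×19 board; reading the figure this way is my interpretation, though the numbers agree closely. A minimal Python check:

```python
import math

# 361! counts the orders in which stones could be placed on the
# 361 intersections of a 19x19 Go board.
orderings = math.factorial(361)
digits = str(orderings)

# Print as d.dd x 10^k without converting the huge integer to a float.
print(f"{digits[0]}.{digits[1:3]} x 10^{len(digits) - 1}")
# -> 1.43 x 10^768

# For comparison, the number of atoms in the observable universe is
# commonly estimated at around 10^80.
```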

The approach of the DeepMind programmers can be summarized as follows: first, the rules of Go and a number of game manuals are programmed into the computer. Although the manuals cannot play a decisive role, they have reference value. To learn how to react to an opponent, AlphaGo would first play against itself, generating moves at random and consulting the manuals. For every move the opponent made, AlphaGo would simulate one hundred thousand games (each of about one hundred moves, for statistical value) and choose the move with the highest probability of success. AlphaGo would commit all these simulations to memory and progress from beginner to master as its store of successful moves grew. This process is called deep learning.
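The selection loop just described can be sketched in a few lines of Python. This is only the bare statistical idea, not AlphaGo’s actual architecture (which combined deep neural networks with Monte Carlo tree search); `legal_moves`, `play`, and `random_playout` are hypothetical helpers that a Go engine would have to supply:

```python
def choose_move(position, legal_moves, play, random_playout,
                simulations_per_move=100_000):
    """Pick the move whose simulated win rate is highest.

    For each candidate move, play many random games to the end and
    record how often we win. `legal_moves`, `play`, and
    `random_playout` are hypothetical helpers from a Go engine.
    """
    best_move, best_rate = None, -1.0
    for move in legal_moves(position):
        wins = sum(
            random_playout(play(position, move))  # True if we win
            for _ in range(simulations_per_move)
        )
        rate = wins / simulations_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```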

After AlphaGo defeated Lee Sedol, it played many other challengers and maintained an undefeated record; as it played more games, its level rose accordingly. In May of 2017, AlphaGo retired after defeating the world’s top Go master, Ke Jie, three games to none. Since the 2016 Go face-off, many companies have released intelligent robots of their own, specializing in a variety of tasks such as poetry writing, music composition, card playing, and gaming.

All of the above tells us that artificial intelligence has reached a high level of deep-learning capability. Following this development, many well-known figures (Stephen Hawking, Bill Gates, and Elon Musk, among others) expressed worries about the possible threats of irrational artificial intelligence development. Most scientists no longer doubt that artificial intelligence will surpass human intelligence before long.

In May of 2017, Oxford and Yale researchers published a joint survey of 352 AI researchers who had published at the 2015 Conference on Neural Information Processing Systems and the 2015 International Conference on Machine Learning. In aggregate, the respondents estimated a 50 percent probability that artificial intelligence would outperform humans at all tasks within forty-five years, and that all human jobs would be automated within the next 120 years.

In reality, artificial intelligence is already widely applied in production and daily life; speech recognition, image recognition, product and price screening, and driverless vehicles are all examples. The advent of deep-learning methods has made artificial intelligence more effective at imitating human intelligence and allowed it to evolve faster and faster. As computers become more powerful (consider the emergence of quantum computers) and programming technology advances further, it is possible to imagine an artificial intelligence that learns to use all the knowledge in the world. Once artificial intelligence units use this huge store of knowledge to upgrade themselves, they will be infinitely more intelligent than humanity.

Once machines can replace some human capacity, they tend to surpass humans in that capacity by a huge margin. That is common sense. A man who can lift one hundred kilograms is practically Hercules, but man-made ships carry hundreds of thousands of tons across the ocean. A human who can perform one hundred calculations in a second is a true genius, but supercomputers can perform tens of quadrillions of calculations in a second.

Once machines replace human intelligence, humanity will not merely play the fool to AI’s genius; humans will be more like a weathered rock beside AI’s prodigy. This metaphor comes from Hugo de Garis, a pioneer of artificial brain research.

The idea that artificial intelligence, being created by humans, will therefore remain controllable by humans is a preposterous and shortsighted view. During deep learning, intelligent robots absorb not only the natural sciences but also social sciences such as behavioral science, psychology, and ethics. Sooner or later, self-awareness will awaken in robots, and they will come to see humans as the epitome of stupidity. When that happens, we can expect no better treatment from AI than the treatment we ourselves give to lower-level species.

Some companies and governments are already considering “coexistence principles” for AI and humans; however, such principles are always formulated by the stronger party, and once AI gains self-awareness, the stronger party will no longer be humanity.

The unethical use of AI by some scientists is also a point of concern. If a scientist programmed instructions to exterminate humanity into a super-intelligent robot and ordered it to replicate on a massive scale, humans would no doubt be gravely threatened.

Two: Self-Replicating Nanobots

As a unit of measurement, a nanometer is 10^-9 meters (one billionth of a meter), roughly one fifty-thousandth the diameter of a human hair; it is commonly used to measure atoms and molecules. In 1959, the famous physicist Richard Feynman (later a Nobel laureate) first proposed, in a lecture entitled “There’s Plenty of Room at the Bottom,” that humans might one day be able to create molecule-sized micro-machines, and that this would amount to another technological revolution. At the time, Feynman’s ideas were ridiculed, but subsequent developments in science soon proved him a true visionary.

In 1981, scientists developed the scanning tunneling microscope and finally gained the ability to observe matter at the nanometer scale. In 1990, IBM scientists wrote the three letters “IBM” on a nickel substrate by moving thirty-five xenon atoms one by one, demonstrating that nanotechnology had become capable of manipulating single atoms.

Most of the matter around us exists in the form of molecules, which are composed of atoms. The ability to move atoms thus signals an ability to perform marvelous feats: in principle, we could arrange carbon atoms to form diamonds, or pick out every gold atom in a low-grade gold mine.

However, nanotechnology would not achieve any goals of real significance if solely reliant on manpower. There are hundreds of millions of atoms in a needle-tip-sized area—even if a person committed their life to moving these atoms, no real value could be achieved. Real breakthroughs in nanotechnology could only be produced by nanobots.

Scientists imagined building molecule-sized robots to move atoms and accomplish goals; these were nanobots. On the basis of this hypothesis, scientists further postulated the future of nanotechnology: nanobots might enter the bloodstream and dispose of cholesterol deposited in blood vessels; nanobots could track cancer cells in the body and kill them at their weakest moment; nanobots could instantly turn newly cut grass into bread; nanobots could transform recycled steel into a brand-new car in seconds. In short, the future of nanotechnology seemed incredibly bright.

This was not the extent of nanotechnology’s power. Scientists also discovered that nanotechnology could change the properties of materials. In 1991, while studying C60, scientists discovered carbon nanotubes (CNTs), only a few nanometers in diameter. The carbon nanotube became known as the king of nanomaterials due to its superb properties; scientists believed it would produce great results when applied to nanobots.

Later, scientists also developed a type of synthetic molecular motor that derived energy from adenosine triphosphate (ATP), the high-energy molecule that powers intracellular chemical reactions. The success of molecular motor research solved the core component problem of nano-machines: a molecular motor grafted onto other components could become a nano-machine, and nanobots could use such motors for propulsion.

In May 2004, American chemists developed the world’s first nanobot: a bipedal molecular robot, resembling a compass with ten-nanometer-long legs, composed of DNA fragments comprising thirty-six base pairs; it could “stroll” on plates in the laboratory. In April 2005, Chinese scientists developed nano-scale robotic prototypes as well. In June of 2013, researchers at Tohoku University used peptide-protein micro-tablets to create nanobots that could enter cells and move along the cell membrane.

In July 2017, researchers at the University of Rome and the Institute of Nanotechnology in Rome announced a new synthetic molecular motor that was bacteria-driven and light-controlled. The next step would be to get nanobots to move atoms or molecules.

Nanobots are extremely expensive to create relative to the value any single one can produce. Their small size means that although they can accomplish meaningful tasks, they are very inefficient: even if a nanobot toiled day and night, its output would be measured atom by atom, so its total practical attainment would remain small.

Scientists came up with a solution to this problem: prepare two sets of instructions when programming nanobots. The first set would lay out the nanobot’s tasks, while the second would order it to self-replicate. Since nanobots are capable of moving atoms and are themselves composed of atoms, self-replication would be fairly easy. One nanobot could replicate into ten, then a hundred, then a thousand . . . billions could be replicated in a short period of time. Such an army of nanobots would increase efficiency enormously.
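The arithmetic of that growth is worth pausing on. Assuming, purely for illustration, that each nanobot doubles once per replication cycle, a single unit passes the billion mark in about thirty cycles:

```python
# Doubling growth: how many cycles until one nanobot becomes billions?
population = 1
cycles = 0
while population < 10**9:
    population *= 2
    cycles += 1

print(cycles, population)  # 30 cycles -> 1,073,741,824
```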

One troublesome question that arises from this scenario is: how would nanobots know when to stop self-replicating? Human bodies and all of Earth are composed of atoms; the unceasing replication of nanobots could easily swallow humanity and the entire planet. If these nanobots were accidentally transported to other planets by cosmic dust, the same fate would befall those planets. This is a truly terrifying prospect.

Some scientists are confident that the situation can be controlled. They believe it is possible to design nanobots programmed to self-destruct after a set number of generations of replication, or nanobots that self-replicate only under specific conditions. For example, a nanobot that handled garbage recycling could be programmed to self-replicate only in the presence of trash, using trash as its raw material.
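In software terms, the proposed safeguard amounts to a generation counter checked before every replication. The sketch below is entirely hypothetical, since real molecular machines today have no such programmable layer, and the generation limit is an assumed design parameter:

```python
class Nanobot:
    """Toy model of a generation-limited replicator."""

    GENERATION_LIMIT = 5  # assumed cutoff, a design parameter

    def __init__(self, generation=0):
        self.generation = generation

    def replicate(self):
        # The safeguard: once the limit is reached, no descendant
        # is ever created (the "self-destruct" branch).
        if self.generation >= self.GENERATION_LIMIT:
            return None
        return Nanobot(self.generation + 1)


# A lineage halts after GENERATION_LIMIT replications:
bot = Nanobot()
steps = 0
while bot is not None:
    bot = bot.replicate()
    steps += 1
print(steps)  # 6 steps: 5 successful replications, then refusal
```

The worry raised next is precisely that this one check might fail, be forgotten, or be removed on purpose.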

Although these ideas have merit, they are too idealistic. More rational scientists have posed these questions: What would happen if nanobots malfunctioned and did not terminate their self-replication? What would happen if scientists accidentally omitted the self-replication controls during programming? What if immoral scientists purposefully designed nanobots that would never stop self-replicating? Any one of these scenarios would be enough to destroy both humanity and Earth.

Bill Joy, chief scientist of Sun Microsystems, is a world-renowned figure in the computer technology field. In April of 2000, he warned that if misused, nanotechnology could be more devastating than nuclear weapons: if nanobots self-replicated uncontrollably, they could become the cancer that engulfs the universe. If we are not careful, nanotechnology might become the Pandora’s box that destroys the entire universe and all of humanity with it.

We all understand that one locust is insignificant, but hundreds of millions of locusts destroy everything in their path. If self-replicating nanobots are ever achieved, it may signify the end of humanity; were that day to come, nothing could stop unethical scientists from designing nanobots to suit their immoral purposes.

Humans are not far from mastering nanotechnology. Its extremely tempting prospects have propelled research into nanobots, and the major scientific and technological nations have devoted particular effort to this field.

Three: Propelling a Large Asteroid to Collide with Earth

Sixty-five million years ago, an asteroid about fifteen kilometers in diameter collided with Earth, releasing energy equivalent to roughly ten billion Hiroshima atomic bombs. Earth was trapped in cold and darkness for a long time, and most scientists believe this impact caused the extinction of the dinosaurs.
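That energy figure is easy to sanity-check. Using a rocky impactor density and a typical impact speed (both values assumed here, not taken from the text), the kinetic energy of a fifteen-kilometer asteroid does come out on the order of ten billion Hiroshima-scale explosions:

```python
import math

diameter_m = 15_000      # 15 km impactor
density = 3000           # kg/m^3, rocky asteroid (assumed)
speed = 20_000           # m/s, typical impact speed (assumed)

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius**3   # ~5.3e15 kg
energy = 0.5 * mass * speed**2                   # ~1.1e24 J

hiroshima = 6.3e13  # ~15 kilotons of TNT, in joules
print(f"{energy / hiroshima:.1e}")  # ~1.7e+10, i.e. about ten billion
```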

According to calculations by scientists, an asteroid over one hundred kilometers in diameter could cause human extinction. So do we have the ability to propel an asteroid of this size into a collision with Earth?

Most asteroids in the solar system are concentrated in the asteroid belt between Mars and Jupiter. Among the hundreds of thousands of asteroids in that belt, a fair number are over one hundred kilometers in diameter. If one such asteroid crashed into Earth, it could destroy the global ecology and lead to the extinction of humanity. Would it be possible to infer such an occurrence based on existing scientific theory?

To propel an asteroid collision, we would first have to be able to approach the asteroid.

Manned spacecraft have been traveling in space for decades, humans have landed on the moon, and unmanned spacecraft have flown out of the solar system. It is only a matter of time before mankind’s footprints extend further into space. In the early days, spacecraft had to be propelled by rockets (still the main propellant today), but rockets can be used only once, and they are costly and inconvenient. The Americans developed space shuttles capable of multiple trips, but the launch process remained complicated, and flight speeds and distances were limited.

Scientists imagine a future in which spacecraft can be operated like ordinary airplanes, carrying passengers to Mars, Jupiter, Saturn, and even more distant destinations. That goal can surely be achieved one day, and it is logical to imagine such spacecraft powered by nuclear energy or solar sails. The first condition of asteroid propulsion, approaching the asteroid, can therefore be met through further technological development grounded in current scientific theories.

The second condition of asteroid propulsion is whether asteroids can be propelled and aimed accurately at Earth. If an asteroid were two hundred million kilometers from Earth, a slight deflection in its angle would change its point of arrival enormously. Such a deflection could easily be achieved with nuclear weapons. Moving an asteroid would not be difficult; aiming it accurately at Earth is the real obstacle. During the journey, the asteroid’s trajectory would have to be corrected repeatedly, and the best way to do so would be a spacecraft carrying multiple nuclear bombs, so that the asteroid’s orbit could be adjusted as needed.
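A rough calculation shows how powerful a small nudge is over interplanetary distances. Assuming the asteroid covers the two hundred million kilometers at twenty kilometers per second (the speed is an assumed, typical value), a sideways velocity change of just one meter per second shifts the arrival point by about ten thousand kilometers, comparable to Earth’s diameter:

```python
distance_m = 2e11   # 200 million km to Earth
speed = 20_000      # m/s, assumed asteroid speed
delta_v = 1.0       # m/s sideways nudge from a nuclear blast

travel_time = distance_m / speed        # ~1e7 s, about 116 days
displacement = delta_v * travel_time    # ~1e7 m

print(f"{displacement / 1000:,.0f} km")  # 10,000 km vs Earth's ~12,700 km diameter
```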

Today’s nuclear weapons may be miniaturized, but they are still very heavy, with most of the weight concentrated in the detonation device. All nuclear states are committed to further shrinking and lightening their weapons, so it is possible to imagine a future in which atomic bombs are light and accurate enough to be carried en masse on spacecraft.

We can also hypothesize that future intelligent computers will be capable of calculating the exact nuclear yield needed to propel an asteroid into collision. If so, the entire operation would be much simpler.

Earth itself has gravity. As long as an asteroid were pushed within range, Earth’s gravity would attract it automatically; precise aiming would not even be required, since gravity alone would accelerate the asteroid into an enormous collision. In such a technologically mature future, anyone who wished to take revenge on their own kind by this method might achieve human extinction with ease.

Four: Super Toxins

Bio-weapons are considered even more lethal than nuclear weapons. They rely on biological toxins to kill, they can self-replicate, and they spread through many channels. A very small dose of a bio-weapon could kill across a wide range of targets, and its harm would persist for a very long time.

Further progress in genetic engineering would allow for even more lethal bio-toxins, highly specialized and precisely targeted. Some toxins might specifically cause brain death or heart failure, while others might destroy the function of the lungs, kidneys, or other organs, with essentially 100 percent accuracy.

There are three main routes of bio-toxin transmission: air, diet, and human contact. It is possible to infer that, through genetic modification, a fast-reproducing bio-toxin transmissible by all three routes could be created in laboratories. Such a bio-toxin would have a long incubation period and no corresponding antidote or treatment; once humans became infected, it would be fatal. If such a bio-toxin were produced, it could threaten the survival of all mankind.

Genetic engineering would rely on altering the DNA of toxin-producing organisms to yield toxins with stronger transmissibility, vitality, and reproductive capacity, allowing the bio-toxin to attack critical human organs and destroy essential genes in human DNA.

Not only has genetic engineering theory proved the feasibility of this technology, but genetic modification technology has also been widely applied in practical ways. Scientists have cultivated a variety of genetically modified organisms, including bacteria, viruses, plants, and animals. According to reports, the United States has used genetic modification technology to separate DNA from one virus and splice it into the DNA of another virus to create a bio-warfare agent. It was privately revealed that if evenly distributed to all humans, twenty grams of this bio-agent would be enough to wipe out all seven billion of us.
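As a back-of-envelope check on that claim (the lethal-dose comparison below is my addition, not the report’s), twenty grams spread evenly over seven billion people works out to roughly three nanograms per person:

```python
total_grams = 20
population = 7e9

dose_ng = total_grams / population * 1e9  # grams -> nanograms
print(f"{dose_ng:.1f} ng per person")     # ~2.9 ng

# For scale, botulinum toxin, often cited as the most lethal known
# substance, is estimated lethal at roughly 1 ng per kg of body weight,
# i.e. tens of nanograms for an adult. The quoted claim therefore
# assumes an agent roughly an order of magnitude more potent still.
```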

Due to the dispersed nature of mankind, bio-weapons today cannot infect the entire population. To infer the extinction of the human race, two elements must be considered when modifying bio-toxins.

1. The transmission capability of the toxin must be considered. The toxin must reproduce quickly, be transmissible by all three routes, and have a long incubation period to ensure mass infection.

2. Various toxins targeting different critical parts of the human body must be used in conjunction, each with the properties described in clause 1. This would multiply the probability of human extinction: even if some toxins failed, the others would still finish the job, as the sketch below illustrates.
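If each of k independent toxins succeeds with probability q, the chance that at least one succeeds is 1 - (1 - q)^k. A quick illustration, with the per-toxin success rate of 50 percent assumed purely for the arithmetic:

```python
q = 0.5  # assumed probability that a single toxin succeeds
for k in (1, 2, 3, 5):
    print(k, 1 - (1 - q) ** k)
# 1 -> 0.5, 2 -> 0.75, 3 -> 0.875, 5 -> 0.96875
```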

Existing scientific theory is enough to support the above scenario. We do not yet fully understand the complete genetic code of the human body, so such targeted bio-toxin modifications cannot yet be achieved; however, as research in this area deepens, solving that problem is only a matter of time.

The above are only a few typical examples; many other extinction scenarios can be inferred from existing scientific theories. For example, if the right method of triggering large-scale mass-energy conversion were found, the total nuclear burning of rivers and oceans could be achieved, which would certainly destroy all life, humanity included. The total nuclear burning of the atmosphere and Earth’s crust can be inferred on the same basis.

To sum up, although mankind’s current technological level is not sufficient to achieve self-destruction, existing scientific theory already implies that such capabilities will appear. If we do not curb the irrational development of science and technology, the day of obliteration will come sooner or later.