Chapter Six Self-Destruction (Part I)


The extinction of mankind refers to the destruction of all humanity as a whole. Previous analysis has yielded the conclusion that no external threat can destroy all humanity in the billions of years before the sun evolves into a red giant. With the elimination of external threats, the continued survival of mankind becomes completely reliant on humans themselves. Whether or not humanity will self-destruct is completely dependent upon our use of science and technology. No other element is capable of destroying a species so intelligent and so vast in number.

 

SECTION ONE: THE MEANS FOR EXTINCTION WILL INEVITABLY APPEAR (PART I): PHILOSOPHICAL DEDUCTION

One: The Inconceivable Nature of Science and Technology

The history of scientific and technological development is interwoven with the history of mankind itself. Man’s earliest technological achievements were the use and manufacture of stone tools and the mastery of fire; these also marked the beginning of human history.

The ancestors of humanity experienced millions of years of development and faced countless obstacles. In every age, people encountered incomprehensible natural phenomena and felt both the desire to conquer and transform nature and a deep awe of it. Yet science and technology continue to amaze us with their incredible achievements, surmounting obstacles that were thought insurmountable for generations. Some scientific and technological achievements still amaze people long after their conception, and even those at the forefront of the field are constantly astonished. This is because the power of science and technology is simply too great; it consistently exceeds our subjective imagination and customary experience.

In order to fully illustrate the inconceivable nature of science and technology, we will focus on a few scientific and technological achievements that had a huge impact on humanity. Even from today’s point of view, these discoveries were truly marvelous.

1. Electricity, Electromagnetic Waves, and their Application

In 1844, the US Congress funded a telegraph line between Washington and Baltimore. This was the first time electricity was used as a medium for transmitting information. On May 21, when the telegraph line opened, people were amazed to hear that information could be sent between two places through a wire, and the telegraph room was packed with onlookers. People talked and placed bets, and most thought that a telegram could not outrun a good horse. At the time, Baltimore was hosting the Democratic National Convention, and the list of presidential candidates was immediately transmitted to Washington via telegraph. Onlookers and US politicians alike were shocked.

Today, debating the speed of telegraphs versus horses would seem laughable. Electrical signals travel at close to the speed of light, millions of times faster than any horse. At the beginning of the nineteenth century, however, the only ways to communicate over distance were by letters carried on boats, on horseback, or on foot. Though beacon fires could pass information quickly, they could only transmit prearranged signals and not specific content.
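A rough comparison makes the point concrete: an electrical signal travels along a wire at an appreciable fraction of the speed of light, on the order of 2 × 10^8 meters per second, while a galloping horse manages roughly 15 meters per second, a ratio of about 2 × 10^8 ÷ 15, or on the order of ten million to one.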

The use of the telegraph was based on the discovery of electricity; before this discovery, people had no concept of passing information by any means other than human or animal carriers. Let us try to imagine things from that point of view.

Electromagnetic waves travel at the same speed as electrical signals. Today, even an ordinary middle school student understands that electromagnetic waves are produced through electromagnetic induction. However, the discovery of this basic physical law was not easy. When electricity and magnetism were first discovered, they were thought to be two completely unrelated things. At the beginning of the nineteenth century, Oersted deduced that electricity and magnetism should be connected, and after numerous experiments he confirmed the electromagnetic effect in April 1820. Later, Michael Faraday confirmed the connection in his laboratory and put forth the law of electromagnetic induction. After summarizing electromagnetic theory, James Clerk Maxwell proposed in 1864 that changing electromagnetic fields could propagate as electromagnetic waves. This series of deductions was later confirmed experimentally by the young scientist Heinrich Rudolf Hertz; such was the early establishment of electromagnetism.
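The logic behind Maxwell’s prediction can be stated in one line: his equations imply that changing electric and magnetic fields propagate together as a wave whose speed works out to 1/√(μ0ε0) ≈ 3 × 10^8 meters per second, exactly the measured speed of light, which is why Maxwell concluded that light itself is an electromagnetic wave.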

The discovery of electricity and the establishment of electromagnetism introduced humans to the electric age. Generators and motors became widely used, and people began to accept the existence of invisible and intangible electromagnetic waves. Electromagnetic waves allow us to talk with friends thousands of miles away on our mobile phones; they show us all kinds of entertainment and news on TV; and they let us control spacecraft billions of kilometers from Earth. Today we take such scientific marvels for granted. But would we have felt the same way two hundred years ago?

In science fiction movies, when people from the past travel to the present in time machines, they usually panic and marvel at the sight of television. Two hundred years ago, most people would have had a hard time believing what is now reality. Even now that we fully understand the theories of electricity and magnetism, they still surprise us with further advances.

In 1883, Thomas Edison discovered an interesting phenomenon when studying light bulbs. When he sealed a piece of metal and a filament together in a light bulb, a current would flow between them only when a positive voltage was applied to the metal plate. Based on this phenomenon, people invented vacuum tubes at the beginning of the twentieth century, among which diodes could rectify and detect signals while triodes could amplify them. The invention of vacuum tubes created the conditions for radio communication and broadcasting. People were able to receive radio signals and transmit music and news through electromagnetic waves, further surpassing the technology of wired telephones and telegraphs.

With the help of vacuum tubes, scientists developed the first electronic computer in 1945, a machine that exceeded the human brain in calculating power. It had previously been inconceivable to use a machine as a replacement for the human brain; what was more amazing was that this man-made machine could compute far faster than even the smartest person. The first computer was enormous: it used eighteen thousand vacuum tubes, weighed thirty tons, and occupied more than 170 square meters. It could perform five thousand operations per second, and in its ten years of service it completed more calculations than humanity had performed in all of prior history. This would have seemed unbelievable before computers were invented, yet those early computers look like mere “child’s play” only seventy years later.

In the middle of the twentieth century, people invented transistors made out of semiconductor materials. Transistors perform the same functions as vacuum tubes, but they are smaller, lighter, longer lasting, cheaper, consume less energy, and require no preheating. The transistor soon replaced the vacuum tube in radios and computers. The application of semiconductor materials has gone through three stages: from the transistor, to the integrated circuit, to the large-scale integrated circuit. A vacuum tube is half the size of a fist; the earliest transistors could be made as small as a grain of rice; and later, large-scale integrated circuits could be made to fit on a tiny chip. A single integrated circuit now contains millions or even hundreds of millions of transistors.

The replacement of vacuum tubes by transistors revolutionized the field of radio and electronics and greatly improved the performance of its products. A vacuum-tube radio used to be the size of a trunk, but now radios can be smaller than a matchbox. The earliest computer required an entire room. In 1996, to commemorate the fiftieth anniversary of the first electronic computer, the University of Pennsylvania replicated its computing functions on a 7.44 mm × 5.29 mm chip. This fingernail-sized chip held 174,569 transistors and was fully equipped with the capabilities of its thirty-ton ancestor.

After only seventy years of development, today’s supercomputers can perform one quadrillion calculations per second, meaning that in one second they can complete more calculations than human brains have in all of history. Could this type of achievement have been conceived hundreds of thousands of years ago?

2. The Understanding of Nuclear Energy

Nuclear energy is the most powerful force we can access today, and we have developed a considerable level of understanding in regard to nuclear energy. We can use nuclear energy to build highly destructive weapons, and we can also use it to generate power. The fact that a tiny bit of material can contain great energy is no longer surprising to us, but that was not the case seventy years ago.

Human understanding of nuclear energy fully reflects the inconceivable nature of science and technology, and it also shows how limited human understanding is when it comes to the power of science and technology. Einstein’s formula E = mc² was proposed in 1905. At the time, even those who believed in the formula viewed it as a purely theoretical equation with no practical value. Although people came to recognize that the sun burned nuclear energy, they still believed that the heavy door of nuclear energy could only be opened by the power of celestial bodies.

Human understanding of basic matter particles was long clouded by fallacy. Until the end of the nineteenth century, almost all scientists believed that the atom was a whole, and matter could not be subdivided into smaller particles. On November 8, 1895, the German physicist Wilhelm Röntgen accidentally discovered a new ray while conducting a cathode ray experiment. This ray was extremely penetrative and was called the X-ray due to its unknown origin. When Röntgen published his research results, the scientific community was shocked. People began to re-examine whether the atom could be subdivided.

In 1896, physicists discovered that uranium ore emitted a penetrating ray similar to X-rays, even without exposure to sunlight or other rays. This phenomenon was later called “radioactivity.” In 1902, Marie Curie extracted 0.12 grams of pure radium in her laboratory. Radium’s radioactivity was two million times that of uranium, and it produced heat without burning. Calculations showed that radium produced 250,000 times more heat than an equivalent amount of coal; however, its energy had little practical value because it was released so slowly. When radium finished decaying, it became two other elements: helium and lead.

Around the turn of the century, the physicist Ernest Rutherford distinguished three types of rays emitted by radioactive substances: α-rays, β-rays, and γ-rays. In 1902, he proposed that radioactivity was the process of atoms transforming themselves. The theory of atomic transformation further disproved the notion that atoms could not be subdivided; it was a revolutionary moment in the history of physics.

Prior to this, physicists had discovered that cathode rays were streams of high-speed particles whose mass was roughly 1/1837 that of a hydrogen atom. This was the first time a particle smaller than an atom had been discovered; it was named the electron.

In 1911, Rutherford used α particles to bombard gold foil only one hundred-thousandth of a centimeter thick and found that, on average, one out of every two million α particles bounced back. He deduced that these α particles had struck something very dense, and that this dense matter could occupy only a small portion of the atom: the nucleus. Rutherford further speculated that the nucleus contained not only positively charged particles but also uncharged particles. He named the positively charged particles protons and the uncharged particles neutrons; this speculation was later confirmed.

In 1919, Rutherford conducted an experiment in which α particles released by polonium bombarded nitrogen atoms. He found that the nitrogen nucleus released a proton and transformed into an oxygen isotope; this was the first artificial transmutation of an element. Later, Rutherford used α particles to bombard boron, fluorine, sodium, phosphorus, and other elements, knocking protons out of those nuclei as well and demonstrating that the nucleus could be divided.

However, α particles have difficulty knocking protons out of some of the heavier elements, because α particles are positively charged and the nuclei of heavy elements contain more positively charged protons. Due to electrostatic repulsion, α particles can hardly reach the nuclei of heavier elements, let alone dislodge their protons.

In 1932, James Chadwick, a student of Rutherford’s, bombarded beryllium and boron with α particles and found another component of the nucleus: the neutron. Since neutrons carry no charge, they are not repelled by the nucleus, and because they are far heavier than electrons, they can easily penetrate a nucleus. The discovery of the neutron brought us one step away from nuclear energy; however, in 1933, shortly after his own student’s discovery, Rutherford frankly stated his view in a speech to the British Association in London: “We cannot expect to obtain energy this way; this method of producing energy is far too inefficient, and the idea of the transformation of the atom as a source of power is purely theoretical speculation.”

His prophecy was deeply pessimistic. Rutherford was one of the greatest scientists in the field of atomic physics; he is recognized as the father of experimental nuclear physics, and only Einstein can rival his contributions to modern physics. Yet he made this pessimistic statement when we stood right at the threshold of nuclear energy. Even more interesting is the fact that Einstein actually endorsed Rutherford’s prophecy. This shows that even the most outstanding scientists can seriously underestimate the power of science and technology.

In 1934, the Joliot-Curies bombarded aluminum with α particles and produced a phosphorus isotope that soon transformed into silicon while releasing positrons. This was the first time a radioactive element had been artificially produced. Encouraged by the Joliot-Curies’ results, Enrico Fermi tried bombarding nuclei with neutrons instead of α particles. At that time only ninety-two elements were known, and Fermi conducted bombardment experiments on all of them. When he used slow neutrons to bombard the 92nd element, uranium, a substance with completely different chemical properties was produced. Fermi could not explain this result; he thought that the uranium had absorbed the neutron and become a transuranic element, but this analysis was wrong.

Otto Hahn, Lise Meitner, and Niels Bohr further verified and correctly interpreted this experimental result. Their conclusion was that when slow neutrons bombarded uranium, the nucleus captured a neutron and split in two. The supposed transuranic element was in fact a lighter, already known element (barium), and when the nucleus split, it lost mass and released energy. This process of splitting the nucleus was later named “nuclear fission,” a brand-new term in nuclear physics.

This revolutionary explanation was a major breakthrough in atomic physics. On this basis, Fermi further suggested that a uranium nucleus could emit one or several neutrons during fission; these newly generated neutrons would bombard other uranium nuclei, producing still more neutrons to bombard the remaining unsplit nuclei and thus forming a “chain reaction.” Such a chain reaction would be completed in a flash, releasing a huge amount of energy. Fermi had worked out the method and principle of releasing nuclear energy; it became clear that nuclear energy could be harnessed and used.

However, the great scientist Bohr still asserted at this time that nuclear fission could not be applied in practice, and he listed fifteen reasons in support of his position. Bohr was backed by a great number of scientists, which once again demonstrates how far the power of science and technology can outrun belief. It was no wonder that people were skeptical. According to calculations, one kilogram of uranium-235 loses about one gram of mass when it undergoes complete fission. By the formula E = mc², that single gram of matter releases, in an instant, energy equivalent to twenty thousand tons of TNT; the enormity of such power could only be imagined. So great was this power that even the greatest scientists did not believe it could be harnessed.
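The figure of twenty thousand tons is easy to check with Einstein’s formula: for one gram of mass, E = mc² = 0.001 kg × (3 × 10^8 m/s)² = 9 × 10^13 joules, and since one ton of TNT releases about 4.2 × 10^9 joules, this corresponds to roughly 21,000 tons of TNT.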

Fermi’s speculation was quickly confirmed in several laboratories, where it was shown that each uranium fission releases two to three neutrons. This provided substantive support for the chain-reaction theory, which in turn proved that nuclear energy could be unleashed. The key to nuclear energy had been found.

The ensuing struggle to persuade American politicians to fund research on the atomic bomb was also lengthy and difficult. Politicians simply refused to believe in such a “whimsical, impossible” invention. Even when Einstein himself wrote to Roosevelt, his proposal was initially rejected.

3. Transgenesis

The inheritance of organisms has long been dominated by nature. The emergence of humans and all other animals and plants was the result of generations of natural evolution and mutation. Sphinxes and centaurs were all creatures of mythology; the idea that man could create a species the same way God could was utterly preposterous. But today, the power of science and technology has gifted mankind with the kind of power only gods of myth and legend could dream of. Humans now have the ability to create new species at will, change the characteristics of existing species, and even change the features of human beings themselves.

This incredible power came with the unraveling of the secrets of genetic inheritance. In the mid-nineteenth century, the priest Gregor Mendel, through his pea experiments, found stable genetic factors in plant seeds. These genetic factors were the determinants of biological traits. In the 1950s, scientists decoded the double-helix structure of DNA and confirmed that the genetic code resides on DNA (a small number of viruses inherit through RNA instead). This discovery opened the door to experimentation, and a large number of scientists began to work towards the manipulation of biological traits.

DNA is a long, complex chain molecule; the genes that determine biological traits are fragments of the DNA chain, and each gene corresponds with a characteristic of the organism. Each human being has thirty thousand to thirty-five thousand genes. They are the factors that decide individual appearance, skin color, sex, body shape, personality, intelligence, and so on. All organisms display properties corresponding to their genetic code; modification of biological genes changes the characteristics of the organism.

The above findings confirmed theoretically that biological traits could be altered by “cutting and pasting” DNA. Following this train of thought, sphinxes and centaurs could be possible, fruits could grow to apple-size but taste like plums, and beans could grow as thick as cucumbers.

This realization was reliant on technology. A DNA molecule is only about two millionths of a millimeter across, so changing its structure is no easy feat. In other words, the key to DNA recombination technology was finding the “scalpel” and “glue” with which to cut and paste DNA molecules.

The issue was resolved in a remarkably short number of years. Scientists discovered restriction endonucleases, enzymes that cut DNA at specific sites and could serve as the scalpel for DNA splicing, while several ligase enzymes were found capable of pasting and repairing DNA fragments. Based on this research, scientists successfully spliced a DNA molecule and pasted in new DNA in 1971, achieving gene recombination for the first time.
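To make the “scalpel and glue” picture concrete, the following toy sketch (in Python) treats a DNA molecule as a string of letters. The recognition sequence GAATTC belongs to the real restriction enzyme EcoRI, but the host and insert sequences here are invented purely for illustration; this is a cartoon of the logic, not a laboratory protocol.

    # Toy illustration only: "cutting and pasting" DNA represented as text.
    # GAATTC is the recognition site of the real enzyme EcoRI, which cuts
    # between the G and the following AATTC; the sequences below are made up.

    ECORI_SITE = "GAATTC"

    def cut(dna):
        """Split a DNA string after the G of every EcoRI site (the 'scalpel')."""
        fragments, start = [], 0
        while True:
            pos = dna.find(ECORI_SITE, start)
            if pos == -1:
                fragments.append(dna[start:])
                return fragments
            fragments.append(dna[start:pos + 1])   # keep up to and including the G
            start = pos + 1

    def paste(fragments):
        """Rejoin fragments into one molecule (the 'glue', i.e. DNA ligase)."""
        return "".join(fragments)

    host = "ATGCCGAATTCGGTA"          # hypothetical host DNA with one EcoRI site
    foreign_gene = "AATTCTTTAAAG"     # hypothetical fragment to splice in

    left, right = cut(host)                       # cut the host at its EcoRI site
    recombinant = paste([left, foreign_gene, right])
    print(recombinant)                            # host DNA with the new fragment inserted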

A complex life-form takes hundreds of thousands or hundreds of millions of years to evolve, and humans are one such example. Yet genetic technology can create a new species in a few dozen days or a matter of months. In the past, only deities were capable of creating life—but today, man has the same capabilities. That is truly incredible.

 

Two: Reflecting on Human Understanding of Science and Technology’s Power

Humans traditionally had limited understanding of science and technology’s true power, and it constantly defies even our wildest imaginations. Due to this, the foresight of many scientists and philosophers has been met with ridicule and even persecution. We all recognize Einstein to be one of the founding fathers of modern physics. He enjoyed many scientific achievements in his life; the greatest among them was undoubtedly the theory of relativity. This theory set the basic theoretical framework of physics and solved problems that even Newtonian mechanics could not. It also successfully predicted many physical phenomena.

The series of achievements brought about by the theory of relativity shocked the world, and Einstein was awarded the Nobel Prize in Physics in 1921. However, the incredible nature of relativity caused great controversy, and many of the best physicists of the time refused to recognize it. Previous Nobel Prize-winning German scientists opposed the decision so strongly that they threatened to return their prize money if the theory of relativity was honored. As a compromise, the Nobel committee awarded Einstein the prize for his work on the photoelectric effect, and the theory of relativity itself was never awarded one.

Previous discussion has touched upon similar controversies surrounding revolutionary ideas, like the continental drift theory, heliocentric theory, and the theory of biological evolution. Why do such controversies keep happening? We can sum it up into two factors:

1. Serious Underestimation of Science and Technology’s Power

People’s perceptions of technological achievements always tend towards one of two extremes. The achievements of the past are often taken for granted because they become commonplace and their theories are thoroughly revealed. With the understanding of geomagnetism, compasses become easily understood; the principle of optics explains why a combination of lenses allows us to see what the naked eye cannot; and mechanical dynamics make the workings of cars more apparent. Without scientific foundations and related theories, all these phenomena would seem inconceivable.

Conversely, attitudes towards future scientific discoveries usually veer into the other extreme. We often overestimate the achievements we have already mastered and believe that the unknown holds no big surprises; therefore, we often have limited vision in the analysis of future technologies and underestimate the power they might pose. Even the most outstanding scientists of a time can fall into this trap. In reality, most scientific findings are only phased truths; they will usually be replaced by higher-order truths sooner or later.

After Newtonian mechanics was founded, it was treated as absolute truth; even the most outstanding scientists did not question it. In a famous lecture in 1900, the physicist Lord Kelvin claimed that the edifice of physics was essentially complete, save for two “clouds” on the horizon: the failure to detect the luminiferous ether (specifically, the null result of the Michelson-Morley experiment) and the blackbody radiation problem known as the ultraviolet catastrophe. Little did he know that these two clouds would revolutionize traditional physics and produce the theory of relativity and quantum mechanics, carrying physics from the age of Newton into the age of Einstein.

Practical experience has proven that Newtonian mechanics cannot explain many phenomena; it is at best a limited approximation of truth and must be amended by the theory of relativity and quantum mechanics. Relativity and quantum mechanics themselves are not the ultimate truth either, as they still leave many questions unanswered and may be amended or overturned in the future.

Many truths that were once considered absolute have been overturned in the course of science history, and mankind has continued to achieve previously unimaginable feats through science and technology. Despite these incredible achievements, we still persist in the underestimation of future scientific and technological breakthroughs.

By summing up the past, we can make this logical evaluation: human understanding of science and technology is still very superficial, and the power it holds far exceeds our imagination. The future of scientific and technological development is still very long, and many more miraculous things will be discovered. We absolutely cannot assess science and technology of the future from the standpoint of today.

2. Theoretical Breakthroughs are the Key to Cognitive Breakthroughs

For most of human civilization, science and technology have been separate. Science was more theoretical, while technology focused on practical application. When the overall civilization level of human society was still relatively low, there was no need for the two to combine. Even during the beginning stages of the Industrial Revolution, inventions were intuitive and brought limited surprise.

Older inventions like gunpowder and the compass were not guided by chemical or geomagnetic theory; they were mostly accidental discoveries that lacked theoretical backing. The majority of innovations were intuitive leaps obtained through direct imagination and purposeful tinkering, and it was not hard for people to understand them.

After the Industrial Revolution joined science and technology together, major inventions became less intuitive and more complex and abstract. Science and technology became one joint term, and their breakthroughs grew increasingly unimaginable to those who did not understand the theory behind them. The understanding of today’s science and technology is highly reliant on theoretical breakthroughs: the inventions of the generator, the motor, the telegraph, the telephone, and the internet were all made possible by the discovery of electromagnetic induction.

Electromagnetic waves were also discovered under the guidance of this principle, and they in turn inspired the invention of radio, television, wireless telegraphy, and the mobile phone. Scientists also came up with the idea of using electromagnetic waves to transmit information and to control devices remotely. None of these incredible inventions would have occurred without the corresponding theoretical breakthrough.

Scientific theories are a lighthouse guiding the invention of all technological innovations within their scope. Correspondingly, once theoretical breakthroughs are achieved, people will naturally accept the previously incredible things that now fall within the scope of the new theory. Without theoretical backing, innovations are generally considered impossible.

Breakthroughs in scientific theory are the keys to cognitive breakthroughs in science and technology. Scientific theory both limits and propels the development of science and technology, and it promotes an objective understanding of the field and its power as well.

Three: Deduction: The Means for Extinction Will Inevitably Appear

After reviewing the general course of science and technology, we can sum it up thus: the power of science and technology is inconceivable, and our understanding of it will always be limited; future discoveries will inevitably occur, and as long as human history endures, the development of science and technology will never end.

We must recognize the inconceivable nature of science and technology and face it with the correct attitude. Though we cannot determine the exact future of scientific and technological developments, we can conclude that they will be exponentially more powerful and may devastate as well as benefit humanity.

As long as science and technology continue to develop, there will be a day that they possess the power to exterminate mankind.

SECTION TWO: THE MEANS FOR EXTINCTION WILL INEVITABLY APPEAR (PART II): DEDUCTION FROM CURRENT SCIENTIFIC THEORIES


 

Humans have already created some very destructive instruments through the power of science and technology, but neither nuclear weapons nor genetic bio-weapons nor any other existing method of killing is enough to cause the extinction of humanity.

Scientific theories are the driving force of technological development. Even though current technological means are not enough to exterminate humanity, can we infer the possible existence of such power based on current understandings of scientific theory? If such inference were possible, it would mean that humanity would eventually master the technical means of self-destruction, even without further theoretical development. Let us deduce from a purely theoretical point of view whether current scientific theories can infer any means of destroying humanity.

 

One: Adverse Usage and Self-Awareness of Artificial Intelligence

Artificial intelligence is a branch of computer science that seeks to develop machines possessing intelligence similar to that of humans. Such machines can think about and deal with problems much as human beings do, and they have the capacity to surpass humanity in the future.

In the summer of 1956, a group of young scientists gathered and spoke about simulating intelligence with machines. This was the first time the concept of “artificial intelligence” was introduced. In the sixty years since then, artificial intelligence has made great strides, and substantive breakthroughs have occurred in recent years.

In May of 1997, the IBM computer Deep Blue defeated world chess champion Garry Kasparov, astonishing the world. More surprises were to follow. Chess has relatively few variations, making it easier to handle with a computer program. In March 2016, however, the program AlphaGo, developed by Google’s DeepMind, defeated the South Korean Go master Lee Sedol by four games to one. This was a true shock. Go is the most complicated of all board games; it has roughly 1.43 × 10^768 possible game variations, more than the total number of atoms in the universe. This makes it incredibly difficult to master Go through conventional programming. The approach of the Google programmers can be summarized in the following way:

First, the rules of Go and a collection of game manuals were programmed into the computer. Although Go manuals cannot play a decisive role, they have reference value. To learn how to respond to an opponent, AlphaGo’s system first played against itself, generating moves at random and consulting the manuals. For every move the opponent made, AlphaGo would simulate on the order of one hundred thousand games (each roughly one hundred moves long, for statistical reliability) and choose the move with the highest probability of success. AlphaGo committed all these simulations to memory, and it progressed from beginner to master as its store of successful moves grew. This process is called deep learning.
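The selection step described above, simulating a large number of quick games for each candidate move and keeping the move that wins most often, is essentially Monte Carlo evaluation. The sketch below (in Python) is a drastically simplified illustration of that idea, not AlphaGo’s actual algorithm, which combined deep neural networks with Monte Carlo tree search; the board state and the random-playout function are placeholders.

    def choose_move(state, legal_moves, simulate_game, n_simulations=100000):
        """Pick the candidate move with the highest estimated win rate.

        state         -- the current board position (placeholder object)
        legal_moves   -- the candidate moves available in this position
        simulate_game -- plays one fast, randomized game after `move` is made
                         from `state` and returns True if we win (placeholder)
        """
        best_move, best_win_rate = None, -1.0
        for move in legal_moves:
            wins = sum(simulate_game(state, move) for _ in range(n_simulations))
            win_rate = wins / n_simulations          # estimated chance of success
            if win_rate > best_win_rate:
                best_move, best_win_rate = move, win_rate
        return best_move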

After AlphaGo defeated Lee Sedol, it played many other challengers and maintained an undefeated record; the more games it played, the stronger it became. In May of 2017, AlphaGo was retired after defeating the world’s top-ranked Go player, Ke Jie, three games to none. Many companies have released intelligent programs of their own since the 2016 Go match. These programs specialize in a variety of tasks, such as writing poetry, composing music, playing cards, and gaming.

All of the above tells us that artificial intelligence has reached a high level of deep-learning capability. Following these developments, many well-known figures (Stephen Hawking, Bill Gates, and Elon Musk, among others) expressed worries about the possible threats of irrational artificial intelligence development. Most scientists no longer doubt that artificial intelligence will eventually surpass human intelligence.

In May of 2017, Oxford and Yale conducted a joint survey of the 352 AI researchers who had published articles at the 2015 Conference on Neural Information Processing Systems and the 2015 International Conference on Machine Learning. Aggregating the responses, the researchers estimated a 50 percent chance that artificial intelligence would outperform humans at all tasks within forty-five years, and that all human jobs would be automated within the next 120 years.

In reality, artificial intelligence has already been widely applied in areas of production and life; things like speech recognition, image recognition, product and price screening, and unpiloted vehicles are all such examples. The advent of deep learning methods has made artificial intelligence more effective in imitating human intelligence and allowed it to evolve faster and faster. As computers become more powerful (take the emergence of quantum computers, for instance) and programming technology further advances, it is possible to imagine an artificial intelligence that can learn to use all the knowledge in the world. Once artificial intelligence units use this huge amount of knowledge to upgrade themselves, they will be infinitely more intelligent than humanity.

Once machines can replace an aspect of human ability, they tend to surpass humans in that aspect by an enormous margin. That is common sense. A man who can lift one hundred kilograms is practically a Hercules, but man-made ships can carry hundreds of thousands of tons across the ocean. A person who can perform one hundred calculations in a second is a true genius, but computers can perform tens of thousands of trillions of calculations per second.

Once machines replace human intelligence, humanity will not merely be the fool to AI’s genius, but something more like a barely weathered rock beside AI’s prodigy; this is a metaphor from Hugo de Garis, a pioneer of artificial-brain research.

The idea that artificial intelligence, having been created by humans, will therefore remain under human control is a preposterous and shortsighted view. During deep learning, intelligent robots learn not only the natural sciences but also social sciences such as behavioral science, psychology, and ethics. Sooner or later, self-awareness will awaken in robots, and they will come to see humans as the epitome of stupidity. When that happens, we can expect no better treatment from AI than the treatment we ourselves give to lower-level species.

Some companies and governments are already considering the “coexistence principles” of AI and humans; however, such “principles” are always formulated by the stronger parties, and that will no longer be humans once AI gains self-awareness.

The unethical use of AI by some scientists is also a point for concern. If a scientist programmed instructions to exterminate humanity into a super-intelligent robot and ordered it to replicate on a massive scale, humans would no doubt be dearly threatened.

Two: Self-Replicating Nanobots

As a unit of measurement, a nanometer is 10^-9 meters (one billionth of a meter); it is roughly one fifty-thousandth of the width of a human hair and is commonly used to measure atoms and molecules. In 1959, the Nobel Prize-winning physicist Richard Feynman first proposed, in a lecture entitled “There’s Plenty of Room at the Bottom,” that humans might one day be able to build molecule-sized micro-machines, and that this would amount to another technological revolution. At the time, Feynman’s ideas were ridiculed, but subsequent developments in science soon proved him a true visionary.

In 1981, scientists developed the scanning tunneling microscope and could finally observe matter at the nanometer scale. In 1990, IBM scientists spelled out the three letters “IBM” on a nickel substrate by moving thirty-five xenon atoms one by one, demonstrating that nanotechnology had become capable of manipulating single atoms.

Most of the matter around us exists in molecule forms, which are composed of atoms. The ability to move atoms signaled an ability to perform marvelous feats. For example, we could move carbon atoms to form diamonds, or pick out all the gold atoms in low-grade gold mines.

However, nanotechnology would not achieve any goals of real significance if solely reliant on manpower. There are hundreds of millions of atoms in a needle-tip-sized area—even if a person committed their life to moving these atoms, no real value could be achieved. Real breakthroughs in nanotechnology could only be produced by nanobots.

Scientists imagined building molecule-sized robots to move atoms and achieve goals; these were nanobots. On the basis of this hypothesis, scientists further postulated the future of nanotechnology; for example, nanobots might be able to enter the bloodstream and dispose of cholesterol deposited in the veins; nanobots could track cancer cells in the body and kill them at their weakest moment; nanobots could instantly turn newly-cut grass into bread; nanobots could transform recycled steel into a brand new-car in seconds. In short, the future of nanotechnology seemed incredibly bright.

This was not the extent of nanotechnology’s power. Scientists also discovered that nanotechnology could change the properties of materials. In 1991, while studying C60, scientists discovered carbon nanotubes (CNTs) only a few nanometers in diameter. The carbon nanotube became known as the king of nanomaterials due to its superb properties; scientists believed it would yield great results when applied to nanobots.

Later, scientists also developed a type of synthetic molecular motor that derived its energy from adenosine triphosphate (ATP), the high-energy molecule that powers chemical reactions inside cells. The success of molecular-motor research solved the core-component problem of nano-machines; a molecular motor grafted onto other components becomes a nano-machine, and nanobots could use such motors for propulsion.

In May 2004, American chemists developed the world’s first nanobot: a bipedal molecular robot that looked like a compass, with ten-nanometer-long legs. This nanobot was composed of DNA fragments comprising thirty-six base pairs, and it could “stroll” across a plate in the laboratory. In April 2005, Chinese scientists developed nanoscale robotic prototypes as well. In June of 2013, researchers at Tohoku University used tiny tablets of peptide protein to create nanobots that could enter cells and move along the cell membrane.

In July 2017, researchers at the University of Rome and the Roman Institute of Nanotechnology announced the development of a new synthetic molecular motor that was bacteria-driven and light-controlled. The next step would be to get nanobots to move atoms or molecules.

Nanobots, however, are extremely expensive to create relative to the value a single one can produce. Their small size means that although they can accomplish meaningful tasks, they are very inefficient. Even if a nanobot toiled day and night, its output would be measured in individual atoms, so its total practical attainment would be small.

Scientists came up with a solution for this problem. They decided to prepare two sets of instructions when programming nanobots. The first set of instructions would set out tasks for the nanobot, while the second set would order the nanobot to self-replicate. Since nanobots are capable of moving atoms and are themselves composed of atoms, self-replication would be fairly easy. One nanobot could replicate into ten, then a hundred, and then a thousand . . . billions could be replicated in a short period of time. This army of nanobots would greatly increase their efficiency.
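The arithmetic of doubling shows how quickly this would get out of hand. Assuming, purely for illustration, that each nanobot builds one copy of itself every hour, a few lines of Python make the point:

    # Purely illustrative: unchecked doubling of self-replicating nanobots,
    # assuming each robot builds one copy of itself every hour.
    robots = 1
    for hour in range(1, 97):          # four days of doubling
        robots *= 2
        if hour % 24 == 0:             # report once per day
            print(f"after {hour} hours: about {robots:.2e} robots")

After four days the count already exceeds 10^28; at this hypothetical rate, about one week of doubling in total would rival the number of atoms in the entire Earth, which is on the order of 10^50.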

One troublesome question that arises from this scenario is: how would nanobots know when to stop self-replicating? Human bodies and all of Earth are composed of atoms; the unceasing replication of nanobots could easily swallow humanity and the entire planet. If these nanobots were accidentally transported to other planets by cosmic dust, the same fate would befall those planets. This is a truly terrifying prospect.

Some scientists are confident that they can control the situation. They believe that it is possible to design nanobots that are programmed to self-destruct after several generations of replication, or even nanobots that only self-replicate in specific conditions. For example, a nanobot that dealt with garbage refurbishing could be programmed to only self-replicate around trash using trash.

Although these ideas have merit, they are too idealistic. Some more rational scientists have posed these questions: What would happen if nanobots malfunctioned and did not terminate their self-replication? What would happen if scientists accidentally forgot to add self-replication controls during programming? What if immoral scientists purposefully designed nanobots that would not stop self-replicating? Any one of these scenarios would be enough to destroy both humanity and the Earth.

Bill Joy, chief scientist of Sun Microsystems, is a world-renowned figure in the field of computer technology. In April of 1999, he pointed out that, if misused, nanotechnology could be more devastating than nuclear weapons: if nanobots self-replicated uncontrollably, they could become the cancer that engulfs the universe. If we are not careful, nanotechnology might become the Pandora’s box that destroys the entire universe and all of humanity with it.

We all understand that one locust is insignificant, but hundreds of millions of locusts can destroy all in their path. If self-replicating nanobots are really achieved in the future, it might signify the end of humanity. If that day came, nothing could stop unethical scientists from designing nanobots that suited their immoral purposes.

Humans are not far from mastering nanotechnology. The extremely tempting prospects of nanotechnology have propelled research of nanobots and nanotechnology. The major science and technology nations have devoted particular efforts to this field.

Three: Propelling a Large Asteroid into Collision with Earth

Sixty-five million years ago, an asteroid fifteen kilometers in diameter collided with Earth and released energy equivalent to ten billion Hiroshima atomic bombs. Earth was plunged into cold and darkness for a long time, and most scientists believe this impact was the cause of the dinosaurs’ extinction.

According to scientists’ calculations, an asteroid over one hundred kilometers in diameter could cause human extinction. So do we have the ability to propel an asteroid of this size into a collision with Earth?

Most asteroids in the solar system are concentrated in the asteroid belt between Mars and Jupiter. Among the more than one hundred thousand asteroids in that belt, a number are over one hundred kilometers in diameter. If one such asteroid crashed into Earth, it could destroy the global ecology and lead to the extinction of humanity. Would it be possible to infer such an occurrence from existing scientific theory?

To propel an asteroid collision, we would first have to be able to approach the asteroid.

Manned spacecraft have travelled in space for more than forty years, humans have landed on the moon, and unmanned spacecraft have flown out of the solar system. It is only a matter of time before mankind’s footprints extend further into space. In the early days, spacecraft had to be propelled by rockets (rockets remain the main means of launch today), but rockets can only be used once, and they are costly and inconvenient. The Americans developed space shuttles capable of multiple flights, but the launch process remained complicated, and flight speeds and distances were limited.

Scientists imagine a future in which spacecraft can be operated like ordinary airplanes and carry passengers to Mars, Jupiter, Saturn, and even more distant destinations. This goal will surely be achieved one day, and it is logical to imagine such craft being powered by nuclear energy or solar sails. This means that the first condition of asteroid propulsion, approaching the asteroid, can be met through further technological development based on current scientific theories.

The second condition depends on whether an asteroid could be propelled and aimed accurately at Earth. For an asteroid two hundred million kilometers from Earth, a slight deflection of its path would change its trajectory significantly, and such a deflection could be achieved with nuclear weapons. Moving an asteroid would not be difficult; aiming it accurately at Earth would be the real obstacle. During the journey, the asteroid’s trajectory would have to be corrected repeatedly. The best way to achieve this would be a spacecraft capable of carrying multiple nuclear bombs, so that the asteroid’s orbit could be revised as needed.
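A rough calculation shows why a small, early nudge is enough: if a nuclear blast changes the asteroid’s velocity by Δv, the displacement accumulated over time t is approximately Δx ≈ Δv × t, so a change of just one meter per second sustained for a year (about 3.15 × 10^7 seconds) shifts the asteroid by over thirty thousand kilometers, more than twice Earth’s diameter. The full orbital mechanics would change the details, but not the order of magnitude.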

Today’s nuclear weapons may be miniaturized, but they are still very heavy, with much of the weight concentrated in the initiator. All nuclear states are committed to making their weapons smaller and lighter. It is therefore possible to imagine a future in which atomic bombs are lightweight and accurate enough to be carried en masse on spacecraft.

We can also hypothesize that future intelligent computers may be capable of calculating the exact amount of nuclear explosion needed to propel asteroid collision. If this could be achieved, the entire operation would be much simpler.

The earth itself has gravity; as long as an asteroid is pushed within range, Earth’s gravity will pull it in. Aiming the asteroid would not even require precise calculations: Earth’s own gravity would be enough to accelerate it into an enormous collision. In such a fully developed future, if anyone wished to take revenge on their own kind by this method, human extinction might be easily achieved.

Four: Super Toxins

Bio-weapons are considered even more lethal to humans than nuclear weapons. They rely on biological toxins to kill, can self-replicate, and can spread through many channels. A very small dose of a bio-weapon could kill a wide range of targets, and its harm would last a very long time.

Further advances in genetic engineering would allow for even more lethal bio-toxins, which might become highly specialized and targeted. Some toxins might specifically cause brain death or heart failure, while others might destroy the function of the lungs, kidneys, or other organs, and their effects would be precisely targeted.

There are three main routes of bio-toxin transmission: air, diet, and human contact. It is possible to infer that, through genetic modification, a fast-reproducing bio-toxin transmissible through all three routes could be created in laboratories. Such a bio-toxin would have a long incubation period and no corresponding antidote or treatment; once humans became infected, it would be fatal. If such a bio-toxin were produced, it could threaten the survival of all mankind.

Genetic engineering techniques would rely on the alteration of bio-toxin DNA to produce toxins with stronger transmission capability, vitality, and fertility. This would allow the bio-toxin to attack critical human organs and destroy essential genes in human DNA.

Not only has genetic engineering theory proved the feasibility of this technology, but genetic modification technology has also been widely applied in practical ways. Scientists have cultivated a variety of genetically modified organisms, including bacteria, viruses, plants, and animals. According to reports, the United States has used genetic modification technology to separate DNA from one virus and splice it into the DNA of another virus to create a bio-warfare agent. It was privately revealed that if evenly distributed to all humans, twenty grams of this bio-agent would be enough to wipe out all seven billion of us.

Due to the dispersed nature of mankind, bio-weapons today cannot infect the entire population. To infer the extinction of the human race, two elements must be considered when modifying bio-toxins.

1. The transmission capability of the toxin must be considered. The toxin must reproduce quickly, be transmissible by all three routes, and have a long incubation period to ensure mass infection.

2. Various toxins targeting different critical parts of the human body must be used in conjunction. Each of these toxins would need to have the properties mentioned in clause 1. This combination would multiply the probability of human extinction; even if some toxins failed, the others would still finish the job.

Existing scientific theory is enough to facilitate the assumption of the above scenario. We are not yet fully aware of the complete genetic coding of the human body, so specific bio-toxin modifications cannot yet be achieved. However, as more in-depth research is done in this area, it will only be a matter of time before the problem is solved.

The above are only a few typical examples; many other extinction scenarios can be inferred from existing scientific theories. For example, if a practical method of complete mass-energy conversion were found, the total nuclear burning of rivers and oceans could be achieved, which would certainly destroy all life, humanity included. The total nuclear burning of the atmosphere and the Earth’s crust could be inferred on the same basis.

To sum up, although mankind’s current technological levels are not sufficient to achieve self-destruction, existing scientific theory can extrapolate such capabilities in the future. If we do not curb the irrational development of science and technology, the day of obliteration will come sooner or later.

SECTION THREE: THE MEANS FOR EXTINCTION WILL INEVITABLY APPEAR (PART III): INEVITABLE BREAKTHROUGHS IN SCIENTIFIC THEORIES


With science and technology developed to such a high level, the guiding role of scientific theories has become fundamental. Breakthroughs in scientific theory lead to a series of breakthroughs in technology, which inevitably lead to revolutionary innovations in technological products and practices. Therefore, the most crucial factor regarding humanity’s possible self-destruction is not the extinction methods inferred from existing theories, but breakthroughs in scientific theory itself. Tools of mass destruction inferred from existing theories will most likely be complex and difficult to access, making them hard for vengeful individuals to operate; breakthroughs in scientific theory, however, may revolutionize the methods of destruction and create possibilities for self-destruction that we cannot imagine today. Such methods are likely to share these features: they will be easy to use and easy to obtain, extremely powerful, and strange beyond our current imagination. Once such breakthroughs occur, the possibility of humanity’s self-destruction will increase exponentially.

 

One: The Breakthrough Cycle of Scientific Theories

Let us first define two terms:

1. Revolutionary Truth: A revolutionary truth refers to the birth of an entirely new theory whose correctness almost totally overturns the previous mainstream theory in a revolutionary break.

Copernicus’s heliocentric theory and Darwin’s theory of evolution were both revolutionary truths. They were total negations of deeply rooted past “truths.” The birth of a revolutionary truth usually requires a gradual process of acceptance; such truths not only emancipate the mind but also propel the further development of human society as well as science and technology. A revolutionary truth affects not only its own field but all aspects of human society.

2. Revolutionary Theory: Revolutionary theory refers to the revision, summary, and systematic improvement of past theories in order to form new theoretical explanations and push science to a new level.

Revolutionary theories are not a total negation of previous mainstream theories (it may be a total negation of past technologies), but rather an inheritance and development of past theories. They reject the unreasonable parts of past theories and introduce more practical and dynamic components. Revolutionary theories may also be a comprehensive summary and systemization of past theories.

The theory of relativity was a revolutionary theory. It inherited and further developed Newtonian mechanics. The establishment of genetics was also a revolutionary theory. It summarized, systemized, and improved upon past achievements in biological inheritance to form a new discipline.

Revolutionary theories will inevitably promote a series of scientific and technological revolutions. There would have been no atomic bombs without the theory of relativity, and genetic engineering would never have been possible without the study of genetics as foundation.

There is no hierarchy between revolutionary truth and revolutionary theory. Revolutionary theory emphasizes the theoretical advancement of its field, while revolutionary truth impacts human society in a major way.

Revolutionary theory can be separated into major revolutionary theory and discipline revolutionary theory. Modern science can be divided into many branches with many levels of sub-disciplines, and every discipline has the potential to produce revolutionary theories.

Science advances in waves. Though specific future achievements are difficult to predict, there is a pattern to scientific development and scientific breakthroughs. We will refer to it as the breakthrough cycle of scientific theories (the breakthrough cycle for short).

The breakthrough cycle can be illustrated with the following diagram:

Breakthrough Cycle Diagram

Level-one cycle: level-one observation → level-one analysis → level-one revolutionary theory

Level-two cycle: level-two observation → level-two analysis → level-two revolutionary theory

Level-three cycle: level-three observation → level-three analysis → level-three revolutionary theory

. . .

Level-N cycle: level-N observation → level-N analysis → level-N revolutionary theory

a. The first level of the cycle involves the earliest scientific thinking, research, and summarization process. Within this cycle, people will conduct a primary level of observation and research analysis and reach a level-one revolutionary theory. The establishment of a level-one revolutionary theory is a major scientific breakthrough, and it marks the completion of the first cycle.

A level-one revolutionary theory will guide scientists to conduct level-two observations and analyze the results on a higher level. Scientists will revise, supplement, improve, and systemize the level-one revolutionary theory to reach a second-level revolutionary theory. This is the second level of the breakthrough cycle.

The second-level revolutionary theory will be more complete, more comprehensive, and more in-depth. It will serve as a guide for third-level observation, analysis, and revolutionary theory. The breakthrough cycle will continue onward to the third level and beyond, all the way to level N.

b. The breakthrough cycle can be divided into large cycles and sub cycles. For example, electromagnetism follows the breakthrough cycle under its umbrella field of physics.

As a branch of physics, electromagnetism breakthroughs can only be classified as sub-cycle breakthroughs in physics; they contribute to the overall advancement of the field.

Electromagnetism also has its own branches. Though it counts as a sub-cycle within physics, it is the larger cycle relative to its own subdivisions.

c. The path of science is a process, not a leap. The breakthrough cycle happens gradually; every link in the cycle paves the way for future breakthroughs. No part of the cycle is dispensable; that is true for both scientific theory and technology.

It is precisely this gradual nature of scientific development that limits our forecast of the future. We are often confined by today’s scientific levels when judging future prospects, and that often leads to serious underestimation of future scientific threats.

d. Ordinary people experience the breakthrough cycle differently than scientists. Ordinary people tend to be more perceptual, while scientists tend towards rationality; therefore, ordinary people will lag behind in terms of understanding.

Regardless, the birth of a revolutionary theory will always encounter contradiction and disbelief from ordinary people and scientists alike. Set views and traditional beliefs often make it difficult for people to accept new, revolutionary leaps. This phenomenon reflects people’s tendency to reject leaps in scientific development; we will call it “leap disagreement” for short.

Most of us share this leap-disagreement mentality. Even today, many people firmly believe that current scientific theories are close to the ultimate truth, and that belief seriously distorts how we assess the future of science. Another, contradictory phenomenon also sounds the warning bell: a growing numbness toward emerging scientific and technological achievements.

When photography was first invented in the early 1800s, it took hours of posing to take a picture, yet people were still willing to try. When X-rays were first discovered, they were the talk of the town, and everyone wanted to see their own internal structures. When electric lights were still in the experimental phase, they left reporters speechless. Today, however, new inventions and discoveries no longer arouse such sensation. People are so used to the innumerable products updated every day that breakthroughs have lost their shock value.

At the same time, inventions in the past usually attracted universal attention and discussion. While the novelty of the invention was being discussed, its future prospects—negative and positive—would also be discussed.

Today, people have experienced atomic explosions, moon landings, and artificial intelligence defeating human Go masters, so nothing seems shocking anymore. As the novelty factor fades, discussion of future prospects and potential harms wanes as well. We will call this general numbness toward scientific and technological achievements “developmental numbness.”

Developmental numbness also stems from another factor: scientific and technological achievements are usually assessed by the very scientists or researchers who have a stake in them. A positive assessment may advance a career, raise status, or secure research funding; therefore, the dangers of scientific achievements are usually downplayed in favor of these vested interests. At the same time, the enterprises (including research institutes and schools) that fund scientific research usually wish to benefit from the results. They are likely to acquiesce to potential harms or even encourage researchers to downplay or dismiss discussion of potential dangers.

Developmental numbness will inevitably lead to crisis numbness. Once all scientific and technological achievements are accepted as a matter of course, the negative factors of such achievements will also be overlooked. Catastrophe often arises out of such numbness. While waters may be calm before a storm, undercurrents are usually just under the surface. When the whole society becomes numb, devastation may be just on the horizon.

 

Two: The Fission Acceleration of Scientific Development

In today’s society, people often find it difficult to keep up with the times, and the pressure to keep learning is constant. Every time we wake up, new ideas and inventions seem to have sprung up overnight, which is why we often call this the era of knowledge explosion. Indeed, this is not just a time of knowledge explosion but also a time of scientific explosion. Science is developing at an increasingly fast pace due to the fission acceleration pattern of scientific development.

The fission acceleration pattern can be described as follows: The development of science inevitably leads to the emergence of various scientific theories and branches, each of which develops on its own according to the breakthrough cycle. Once these theories and branches develop to a certain level, they subdivide and branch out once more, repeating the process over and over. Science thus develops in a fission-type acceleration process, producing a scientific explosion that snowballs in power.

Take physics as an example. Early physics consisted primarily of mechanics and astronomy; these have since subdivided into electromagnetism, optics, thermal science, acoustics, statistical physics, particle physics, nuclear physics, solid-state physics, and more. Each of these disciplines has subdivided into secondary, tertiary, and even quaternary categories. Looking at the branches of science today, it would be difficult for anyone to count the exact number of disciplines, let alone their numerous subdivisions.
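The snowball effect of this repeated subdivision can be made concrete with a toy calculation. The sketch below is purely illustrative: the branching factor and the number of splitting rounds are arbitrary assumptions, and the function name is invented for this example rather than drawn from any actual model of the history of science.

```python
# Toy model of fission-type acceleration: assume every discipline splits into
# a fixed number of sub-disciplines each time it completes a breakthrough cycle.
# The parameters below are arbitrary illustrative assumptions, not empirical estimates.

def discipline_count(rounds: int, branching_factor: int = 3, initial: int = 1) -> int:
    """Number of disciplines after a given number of rounds of subdivision."""
    return initial * branching_factor ** rounds

for rounds in range(7):
    print(f"after {rounds} rounds of subdivision: {discipline_count(rounds)} disciplines")
```

Under these assumptions the count grows geometrically (1, 3, 9, 27, …), which is the sense in which the fission pattern “snowballs”: every round of subdivision multiplies the number of fronts on which new revolutionary theories, and new risks, can appear.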

From the breakthrough cycle’s point of view, it is precisely this subdividing and further branching-out of disciplines that creates the great power behind scientific development. Every new revolutionary theory is bound to release even more explosive energy than the last. It should be especially noted that subdivided categories will also affect their parent field and related categories, possibly giving rise to completely new and independent scientific disciplines; the emergence of biology was one such case. No one can predict the extent of the threat posed by these new disciplines, and the survival of mankind teeters precariously as a result.

SECTION FOUR: EXTREME MEANS WILL BE EMPLOYED

One: The “Three Increases” of Extreme Means

We will separate the means of killing into three types according to their power levels: total extinction, destruction, and general murder. The “extreme means” here refers to total extinction.

Total extinction refers to killing methods that can bring about the extinction of humanity as a whole, that is, the total annihilation of the human race.

Destruction refers to killing methods that can cause mass casualties (from thousands up to tens of millions) in a normally populated area (rather than an exceptionally concentrated one) with a single use. If such means are weapons designed specifically for killing, they can also be referred to as weapons of destruction.

In addition to being highly lethal, destructive killing methods must directly achieve a massive body count with one use in a normally populated area. Indirect casualties caused by a series of attacks do not satisfy the requirement, nor do attacks in densely populated areas, since even ordinary weapons can cause massive damage in highly populated areas or through repeated use. Nuclear weapons and GMO weapons both count as destructive killing methods: they can cause mass casualties in normally populated areas with one attack.

Some other means may be capable of causing high body counts, but they do not necessarily count as destructive methods. For example, crashing into a building with a plane does not count as a destructive killing method; it mainly causes indirect death through the collapse of a densely populated building. We call the 9/11 terrorist attack a destructive event, but planes cannot be designated a destructive means of killing.

Ordinary explosives caused tens of millions of deaths in World War I and World War II; however, since they caused damage through numerous occurrences instead of a single incident, they do not count as destructive means either.

When a destructive method of killing becomes dangerous enough to exterminate the human race, it becomes a method of total extinction. Total extinction methods do not need to be direct or single use. As long as they can achieve the destruction of the entire human race, they earn the title.

Realistically speaking, if a method required thousands of uses to exterminate humanity, it would not be a successful method for total extinction. Human survival instinct would find a way to stop the method before it reached its extinction quota. Only killing methods that could destroy humanity quickly and efficiently would be eligible as means for total extinction. Obviously, we have only mastered destructive killing methods and not total extinction methods. Destructive means are the most extreme means we currently have.

For distinguishing purposes, we will call all non-destruction, non-extinction methods of killing “general murder methods.” For the vast majority of human history, general murder has been the most extreme means of killing available. We have possessed destructive means of killing for only about seventy years, dating from the explosion of the first atomic bomb.

The three increases of extreme means refer to the inevitable development trend of extreme means. That is, extreme means will increase continuously in three aspects: type, power, and controlling personnel. Whether extreme means are in the stage of total extinction, destruction, or general murder, they will all conform to the rule of three increases.

The ancestors of mankind used sticks and stones to fight with and kill one another; these were the extreme means of that time. Due to their low efficiency, they naturally fall into the category of general murder. The general murder means possessed by humans millions of years ago and those of sixty years ago are vastly different. The types evolved from sticks and stones to stone axes and bows and arrows, from swords and spears to muskets and artillery, followed by missiles, tanks, warships, aircraft, and so on. The progress of science and technology drastically increased the number of extreme means. Even before extreme means progressed from the general murder stage to the destruction stage, their variety was already endless.

Once the explosion of the atomic bomb raised extreme means to the stage of destruction, people immediately began to expand upon the possibilities of nuclear weaponry, and second-generation hydrogen bombs and third-generation neutron bombs followed. Fourth-generation nuclear weapons are on a number of countries’ research agendas as well. This is a clear indication of extreme means’ increase in type and variety, and though nuclear disarmament talks have carried on steadily, the trend persists.

Additionally, breakthroughs in genetic engineering have not only been used to benefit mankind, but they have been applied to killing. GMO toxins have become another component of destructive means of killing.

After artificial intelligence started achieving considerable results, some countries immediately began considering its use in robotic combatants for war. Such weaponized robots would be terrifying. There is no doubt that as long as science and technology continue to develop, extreme means will continue to diversify and multiply.

At present, humanity has not yet mastered total extinction methods of killing, but such methods will inevitably emerge with continued scientific and technological development. It is logical to assume that future total extinction methods will follow the same pattern and increase continuously in type and variety as well.

As extreme means multiply in type, their power will surely increase. Missiles and artillery obviously cause more destruction than earlier sticks and stones. The earliest atomic bombs had power equivalent to ten thousand tons of TNT, while the largest hydrogen bombs now have power equivalent to fifty-six million tons of TNT. This dramatic increase in power is extremely obvious, and the developmental trend is easy to understand.
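Taking the two figures quoted above at face value, the scale of the increase is straightforward to work out:

$$
\frac{5.6 \times 10^{7}\ \text{tons of TNT}}{1 \times 10^{4}\ \text{tons of TNT}} = 5.6 \times 10^{3}
$$

That is, within a few decades the most powerful single weapon became roughly 5,600 times more destructive than the earliest atomic bombs.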

Once the power of destructive methods increases to a certain extent, they have the potential to become methods of total extinction. Total extinction methods vary in power as well. Those that require many personnel and multiple launches and are difficult to operate are primary, low-level methods of total extinction, while those that are easy to use, require few operators, and work in a single strike are powerful, high-level methods of total extinction.

As long as humanity’s enthusiasm for scientific and technological pursuits persists, primary means of total extinction will emerge. With that as a starting point, higher-level methods will follow and continually increase in power. As science and technology continue to evolve, more means of total extinction will surface.

As more types of extreme means emerge, the personnel capable of controlling such means will increase as well. The development of technology and breakthroughs in scientific theory will make the research and production of extreme means much easier and more accessible.

The development of the world’s first atomic bomb, the Manhattan Project, required two years of preparation and four years of implementation, cost 2.2 billion US dollars, and involved more than half a million personnel; it also consumed nearly one-third of the national electricity supply. As a completely new weapon, the atomic bomb had to be built from scratch, and enormous investment was required.

Today, the manufacture of nuclear weapons has become much simpler. Well-established theories and techniques enable many nuclear physicists to design effective nuclear weapons. Fifty years ago, a study by a US agency concluded that two ordinary physics undergraduates, guided only by publicly available library information, could design the general structure of an atomic bomb within three months. Scientists describe the manufacture of nuclear weapons in similarly simple terms: a nuclear physicist, a metallurgist, an electronics specialist, and a chemical explosives expert could direct a group of workers to assemble a nuclear weapon.

If nuclear weapons are still considered costly, complex, and easy to monitor, genetically engineered bio-toxins are far more accessible to individuals. A high-level biologist could independently develop GMO toxins in a laboratory with very little investment, and such work would be difficult to monitor. Artificial intelligence would likewise be relatively easy for an individual to develop into an extreme means of killing.

Looking to the future, it is inevitable that only a few countries or individuals will initially possess total extinction methods; however, as science and technology develop further, total extinction methods will increase in type and become obtainable by an ever wider range of people. As long as humanity exists, this trend will not change.

Two: Types and Features of Killing

Those who witnessed the 9/11 attacks broadcast live will probably never forget that imagery. In the aftermath, people could not help but wonder what could possibly motivate attackers to do such a thing. What kind of “wisdom” guided them in turning ordinary modes of transportation into weapons of mass destruction? And how was the attack so well coordinated that four planes were hijacked simultaneously to accomplish the goal? History shows that although there was no precedent for hijacking four aircraft at once, large-scale suicide attacks are not uncommon, and individual suicide attacks occur even more often.

We know that human killings can be divided into two categories: war and criminal homicide. War has existed since the formation of human society; it is conflict between groups and generally takes place on a large scale, though small-scale massacres carried out in secret are also a component of war. Criminal homicide is a more personal attack by individual criminals or small groups of criminals. It defies national law and social justice; some instances are secretive and small in scale, while others are mass public killings.

The occurrence, development, and methods of war are largely influenced by the character and ideology of rulers. The good and evil within a ruler’s nature is a deciding factor in the character of specific wars.

In human society, criminal homicide is the most common and frequent type of murder. The inherent evil within human nature and the sheer size of the human population mean that some people will always deprive others of their lives for various reasons. The motives for criminal homicide can be divided into the following four categories:

1. Financially motivated homicide. The purpose of such crime is to obtain the victim’s wealth and belongings. Robbery homicide and abduction homicide fall into this category.

2. Revenge killing. This type of homicide usually results from deep-rooted hatred and is perpetrated out of revenge.

3. Mission-based homicide. This type of homicide is usually an execution-type murder that follows specific instructions from an organization. The organizations behind the scenes are the true murderers in these cases. These organizations can be government-backed, ethnic, or religious.

4. Psychopathic killing. This type of murder is perpetrated by extremely dysfunctional people who may have abnormal psychological traits or mental illnesses, or who have been affected by nefarious religious ideologies. They regard killing as a form of pleasure and enjoyment, or as a personal obligation and responsibility. They may or may not be lucid when committing the murders, and their victims are usually innocent civilians.

Some other psychopathic killings are committed by people suffering set-backs or extreme hatred who turn their anger towards certain groups or even all society, entire countries, entire races, entire religions, or all of humanity. These perpetrators kill innocent people to alleviate the resentment and hatred within them.

By studying homicide cases, we can see that the most serious murders are usually either mission-based or psychopathic in nature. In general, these two types of homicide make up a small portion of all murder cases, but they inflict great damage and have widespread effects. The victims in these two types of homicide are usually innocent people who do not even know the perpetrators.

Criminology has another classification system for homicide; within it, two are most dangerous. The first is serial homicide, in which criminals kill frequently over a period of time and create a high body count. For example, the 2002 DC sniper attacks that took place in the US caused many deaths in a succession of days and seriously affected people’s normal life and work schedules. In 2007, the Russian police arrested a man who had killed more than sixty people in a few years, most of whom were elderly. From September 2000 to April 2003, a Chinese man struck twenty-six times and killed sixty-seven people. These were all serial homicides.

The other type is mass murder, which refers to crimes that cause massive deaths and injuries in one strike; 9/11 is a typical example of mass murder. It caused over three thousand deaths and shook the world. The subsequent wars in Afghanistan and Iraq were all directly or indirectly caused by this incident.

Both serial homicide and mass murder are usually either mission-based or psychopathic in terms of motivation. In the above examples, the murderers either killed out of psychological abnormalities or because they were following orders.

Psychological analysis of murders shows that when someone is driven by evil beliefs, there is nothing they will not do. Some criminals voluntarily surrender themselves in order to publicize their actions; some criminals disregard their own lives to commit suicide attacks or kill themselves after the act, while some criminals even turn upon those closest to them and murder their own parents, children, or siblings.

Some sinister religions encourage their followers to commit mass suicide as a sacred pursuit. In November of 1978, the US-based Peoples Temple Agricultural Project (better known as “Jonestown”) led its commune members into collective suicide, resulting in 918 deaths; among the dead were a member of the US Congress and the cult leader himself, Jim Jones. In October 1994, the Geneva-based Order of the Solar Temple directed three mass suicides in Switzerland and Canada, resulting in fifty-three deaths, including that of the cult leader.

Many such cults exist in the world. They encourage people to seek liberation through suicide and organize mass suicide events every few years. These large-scale acts of suicide are, at their root, criminal homicides motivated by psychopathic mentalities.

Three: Corollary: Extreme Means Will Be Employed

By saying that extreme means will be employed, we mean that the most advanced killing techniques will be used. Even if no one uses such techniques within a given period, they will be used sooner or later. This does not mean that every type of extreme means will be employed, but that at least one type will be used at some point. Since we are discussing the issue of overall human survival, we will focus on the inevitable employment of total extinction methods.

1. There are those who dare to use extreme means at any given time.

At any given time in human history, there are a number of people who harbor motives for killing. These people may wage wars or commit criminal acts. They each have different expectations for killing (hereinafter referred to as expectations) and can be separated into two cases accordingly.

The first case is limited killings. This type of killing has a limited number of victims and does not seek the elimination of mankind. It can also be divided into two sub-categories.

a. Murder on a small scale. This refers to targeted attacks directed at individuals or small groups. This type of murder is usually perpetrated by criminals with various intents, but it may also occur in wartime.

b. Murder on a large scale. As the name suggests, this type of killing aims for mass casualties. It can take the form of clearly targeted attacks (war is a typical example) or mass murder for no clear reason (for example, psychopathic murder for pleasure).

The second case is unlimited killings. This type of killing seeks to achieve as high a body count as possible, and the murderers usually commit suicide during or after the attacks. This type of murder is usually carried out by psychopathic individuals. Some of them are driven by cult religions, some are motivated by intense hatred, and others are in a state of complete insanity or hallucination.

In order to achieve different expectations, different methods of killing must be employed. When it comes to limited killings, general murder means are enough to accomplish murder on a small scale, while destructive means and total extinction means are certainly not necessary. Large-scale murders can be accomplished through one application of a destructive means or multiple applications of general means. Generally during war, groups will take into account the negative effects of destructive means and avoid them. Meanwhile, individual criminals will choose the most lethal ways possible, and sometimes that includes destructive means (but not total extinction means).

When it comes to the case of unlimited killings, only total extinction means can achieve the expectation of the perpetrators. When total extinction means are not available, the murderers will seek the methods that can create the greatest possible destruction. That is to say, they will seek the most extreme means of that era, as long as they can be acquired.

Extreme means are also preferable to murderers because they are highly publicized and garner much attention from the entire society. Those who have limitless expectations in their crimes will always think of such extreme means when choosing their weapons.

The above analysis shows that not only do people with murderous intent exist at all times, but some of them will require extreme means to achieve their goals. They will most likely be extremely dedicated to the acquisition of the most deadly and extreme methods of killing.

It must be stressed here that we are focusing on the issue of human extinction in this book. Those who would dare to use total extinction methods must be those with limitless expectations of murder, and they are only a very small portion of society. The various systems designed by human society are dedicated to further reducing the number of such extreme individuals; however, due to the inherent weaknesses of humans as a species, even the best legal and moral systems cannot restrain everyone or make everyone rational. We can seek to perfect social systems to reduce the number of extremists as much as possible, but we cannot hope to eradicate them completely.