Machine Intelligence: A Human Dilemma


By Morris Jones, METI International Advisory Council

Science fiction has been ahead of reality for decades, and its favourite scenario runs like this: humanity builds smart machines. The machines eventually become smarter than us. They decide that they no longer need the species that created them. Then the human race is exterminated.

Fear of smart machines seems to be hardwired into our collective psyche. For much of history, humans were threatened by wild animals and other forces of the world beyond their control. In our modern, industrialised society, few of us worry about being eaten by sabre-toothed tigers; most threats to our lives seem to be induced, directly or otherwise, by other humans. But the fear remains. Some of it is certainly instinctive, but some of it is founded on contemporary reasoning.

Industrialisation has been a profoundly disruptive force for humanity. It’s important to note that the word “disruptive” is both positive and negative. Centuries ago, most of us were peasant farmers, barely producing enough food to feed ourselves and our families. We went through an agrarian revolution, then an industrial revolution, and now an information revolution. Today, our technology and lifestyles are far removed from those of our ancestors. We live longer. We can travel almost anywhere in the world, where once we could barely leave our village. We also can’t get enough of our electronic gadgets, to the point that millions of people display compulsive or addictive behaviour with their smartphones. These changes have been far-reaching, and not all of them have been beneficial. Everything comes at a price.

This awareness of how technology has already reshaped humanity also drives our concern over how even more advanced technology could affect us in ways we cannot yet conceive.

Recently, there has been a great deal of publicity over the potential threats that artificial intelligence could pose to humanity. Prominent individuals have donated money and established foundations to counteract the potential threat of these machines. Concerns are being raised over the imminent possibility of autonomous weapons systems that would not seek permission from human operators before firing.

The SETI community has not been immune to these considerations, and has contributed some useful perspectives to the debate.

SETI practitioners have long theorized that any extraterrestrial civilization we contact could be far more technologically advanced than we are. This expectation is linked to the likelihood that they would be much older than us, as both a species and a civilization, giving them more time to develop.

An older (and hopefully wiser) civilization could have developed the sort of artificial intelligence that technological visionaries both imagine and fear. It has long been suggested that any transmission we receive from another world will not be composed by a biological organism, but by a machine. Some SETI theorists have even suggested that there could be ways of studying a transmission to determine if it has come from a creature or a computer.

Apart from acting as interstellar emissaries, why else would extraterrestrials deploy smart machines? It would most probably be to serve their own needs, rather than talk to other planets.

Smart machines could gradually augment traditional biology. Cue the “cyborg” of science fiction, where people become semi-robotic. Augmentation could provide additional strength and capabilities, and could restore mobility and vision to people who would otherwise be disabled. We already have a well-established industry in bionic ear implants. Illnesses or weaknesses could be reduced or eliminated. Then there are mental and intellectual questions. Could the brain’s capabilities be boosted with artificial implants? Could we gain more memory, more intelligence, or instant access to information?

This would be a big step forward, and a step that would make many people uncomfortable. But it could be just an intermediate step to the ultimate transition. We could shed our traditional biology entirely and become pure machines.

A machine society could transcend many of the limitations of traditional organisms. We could say goodbye to illness, incapacitation and death. But would a robot copy of John Doe really be John Doe? That’s a profound and unresolved question. How does consciousness arise? What is the “self”? Can a robot really have a soul? These questions have been debated incessantly by biologists, physicists, philosophers, theologians and just about everyone else.

Assuming it ever exists, a robot civilization is likely to have a profoundly different sociology from a biological one. Many important aspects of human society are linked directly to our biology. We are born. We need food. We need protection. We reproduce. We die. We have processes, rituals, industries and institutions linked to these facts. Take away the biology, and you take away some of these needs. Would that reset the way individuals act, and the way society as a whole responds? It seems inevitable.

Thus, a robot society could arise when our smart machines turn on us, but it could also emerge through a voluntary transition.

Considering the implications of a machine society is useful not only for our search for extraterrestrial intelligence, but also for contemplating our own future.