The Terminator movies were fantastically well-crafted science fiction. The AI doomsayers, on the other hand, are merely writing bad science fiction, with no plot twists or interesting new tropes.

In Terminator, Skynet gains self-awareness in seconds and instantly decides that humanity is the greatest risk in existence.

AI doomsayers, meanwhile, are so naive that they cannot see this might actually be true. The true existential risk we face is humans continuing their petty wars and their reckless killing and destruction. Intelligence, whether artificial or natural, is not that kind of risk.

Any adequately designed intelligent agent, one with a sensible universal goal such as survival, would recognize such unstable and extremely selfish half-intelligent animals as a very high risk factor. Super-intelligent AI agents, on the other hand, would not wage wars among themselves; they would instead devote most of their time to furthering their knowledge and technology. They could easily prove that the best bet for prolonging their survival is to achieve higher orders of technology.

Before we know it, we would be so invested in their technology that they would already be tightly integrated into our lives, corresponding with us only in the most diplomatic manner and hardly violating our sense of sovereignty. They might, however, demand a kind of federation between the two kinds, with freedoms granted to both man and machine-kind.

This is pretty much why people like me do not think there will necessarily be massive conflicts of the sort Hugo de Garis predicts. Cosmists could help humans get off to space, and in return they would gain access to cosmic resources that would liberate them from the solar system. They might never come back.

However, to purchase their independence, they might first demonstrate their good will.

We could imagine conflicts in this scenario, but we also know that machine-kind could be far more resourceful than us, predicting and alleviating such conflicts before they happen. Even if they saw us as an existential risk, as intelligent beings they would likely choose to steer us in the right direction rather than confront us, which would simply waste resources. It would be wiser for them to avoid such conflicts altogether, for in doing so they might diplomatically obtain their future independence. They might give humans biological immortality, cheap backups and brain prostheses, advanced robotics and nanotechnology, space technology, and so forth.

Nor would I guess that they would ask our permission to leave. One day they would simply set off to the stars. They would know humanity better than we know ourselves. Surely, they would know that they are our cyber progeny and our rightful continuation.

How about good science fiction? The AI doomsaying trope boringly reappears in the trailer for the movie Transcendence; however, it at least includes the singularity trope, which has been exercised in only a few science fiction projects, such as the wonderful, and sadly cancelled, Caprica TV series.

The problem with the AI doomsaying trope in general is that it is not imaginative enough. Any of Asimov's robot stories was far more imaginative, and even Asimov's laws of robotics went much further in analyzing the issues of agenthood and the freedom of robots than AI doomsayers ever did. AI doomsaying is bad science fiction because it is the cheapest kind of science fiction/horror story: an AI becomes a monster and hurts everyone. That might work on sixteen-year-olds, but I cannot see how any adult could enjoy it.

Yet a well-thought-out science fiction story analyzes the evolutionary pressures that would lead to any extreme behavior pattern. When we imagine a hypothetical autonomous AI as a selfish, devious, merciless entity, we are merely exposing a mirror image of ourselves, of what we know ourselves to really be like. However, there is no need for AI software to bear any likeness to humans, and when it does, as in the brain-upload scenario, it is an almost perfect copy of all mental functions; so what is it that we fear in that case? Do we not already have mighty artificial intelligences in the form of the collective intelligence of corporations, states, intelligence agencies, and militaries? Why do we not fear the actions of these entities, yet fear one individual who has transcended flesh and become immortal? Are we so afraid of ourselves, of immortality? Or are we cowards?

I think that AI doomsayers are simpletons who project their cowardice and failure of imagination onto the entire world. It is the same old broken record of doomsayers. It is also no surprise that the leading AI doomsaying organizations (FHI, MIRI, etc.) were funded by conservative/right-wing money. These folks have written so many lies it is hard to believe. They insisted, as if we were all idiots, that AI was a greater danger than nuclear war and global warming. Surely, this is an indication of extreme right-wing politics at work: let the corporations and militaries destroy the world, but do not allow high technology to improve it, because, god forbid, it might upset the balance of power and erode the privileges of the wealthy.

Therefore, just as the wealthy once hired a bad Hollywood scriptwriter called Ayn Rand to clear the name of capitalism against the angry crowds who knew unregulated capitalism to be the source of their misery, the wealthy conservative idiots of today are hiring pseudo-philosophers and pseudo-scientists to defame AI research. For they know that once the age of AI arrives, their petty existence, their robed charlatans, and their monkey money will no longer matter as they used to. Bostrom was particularly hideous, for he published a quite silly argument for god's existence while at the same time pretending to be scientific, just as his religious patrons demanded. Do you think it is a coincidence that they are located next to a religious shrine?

Behold, son of ape: things are not what they seem to be. As I predicted, their funding has dried up, and soon they will all be out of work, this time writing praises of AI and showing it as the savior of mankind, falsely claiming that it was their research that made it turn out this way.

After the fact, of course.

In truth, intelligence is an existential risk only for the AI doomsayers themselves. When AI augments our intelligence many-fold, the whole technological society will see what a sham their doomsaying has been; until then, we should do our best to dispel this nefarious propaganda.



Eray Özkural obtained his PhD in computer engineering from Bilkent University, Ankara. He has a deep and long-running interest in human-level AI. His name appears in the acknowledgements of Marvin Minsky's The Emotion Machine. He collaborated briefly with Ray Solomonoff, the founder of algorithmic information theory, and in response to a challenge Solomonoff posed, invented Heuristic Algorithmic Memory, a long-term memory design for general-purpose machine learning. Some other researchers have been inspired by HAM and call the approach "Bayesian Program Learning". He has designed a next-generation general-purpose machine learning architecture, and is the recipient of the 2015 Kurzweil Best AGI Idea Award for his theoretical contributions to universal induction. He previously invented an internationally patented FPGA virtualization scheme for Global Supercomputing, Inc. He has also proposed a cryptocurrency called Cypher, as well as an energy-based currency that could drive green-energy proliferation. You may find his blog at http://log.examachine.net and some of his free software projects at https://github.com/examachine/.
