AI doomsayer charlatans (such as Eliezer Yudkowsky, Nick Bostrom, Elon Musk, and their various minions and shills) have been spreading Fear, Uncertainty, and Doubt about the AGI field for over 7 years, and it is time to reveal how they censor criticism. On Wikipedia, they engaged in editor suppression with flimsy excuses to remove the entire section below from the article titled “Existential risk from artificial general intelligence”, and to ban yours truly, by a librarian no less. Fortunately, Wikipedia does not allow them to edit history, but I am certain that Wikipedia has been infiltrated by the scammers of MIRI and FHI, and that they are actively suppressing criticism of their little Scientology project. Just like Scientology would!

Here is the link

And here is a copy just in case they manage to remove it from Wikipedia’s servers:

Criticisms

  1. The scientific validity and significance of these scenarios are criticized by many AI researchers as unsound and as metaphysical reasoning. Much of the criticism targets the speculative, horror/science-fiction-movie-like reasoning that is not based on solid empirical work. Many scientists and engineers, including well-known machine learning experts such as Yann LeCun, Yoshua Bengio, and Ben Goertzel, seem to believe that AI eschatology (existential AI risk) is a case of Luddite cognitive bias and pseudo-scientific prediction. [35] [36] [37] Furthermore, most of these claims were championed not by technical AI researchers but by openly agnostic philosophers like Nick Bostrom, who hold controversial views such as the simulation hypothesis [38] and the doomsday argument [39].
  2. Stephen Hawking and Elon Musk earned an international Luddite award for their support of the claims of AI eschatologists. In January 2016, the Information Technology and Innovation Foundation (ITIF) gave its Annual Luddite Award to Stephen Hawking, Elon Musk, and the artificial intelligence existential risk promoters (AI doomsayers) at FHI, MIRI, and FLI, stating that “raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption.” [40] Note further that the Future of Life Institute (FLI) published an incredibly egotistical dismissal of the Luddite Award it received, claiming to employ the leading AI researchers in the world, which is not objectively the case and could be interpreted as an attempt at disinformation. [41] Many researchers view these efforts as a case of inducing moral panic, or of employing Fear, Uncertainty, and Doubt tactics to prevent a disruptive technology from changing the world while earning a good income from fear-mongering.
  3. The main argument for existential risk depends on a number of conjunctive assumptions whose individual probabilities are inflated, which makes the resulting conjunction appear significant; many technical AGI researchers instead believe that this probability is at the level of improbable, comic-book scenarios, such as Galactus eating the world (see the arithmetic sketch after this list). [42]
  4. Making an AGI system a fully autonomous agent is not necessary, and there are many obvious solutions for designing effective autonomous agents, which Bostrom and his aides purposefully neglect in order to make their reasoning appear sound; their answers to such solutions are straw-man arguments. They furthermore claim that it is impossible to implement any of the obvious solutions, which is also nonsensical, and they consistently try to censor all criticism of their work through social engineering and other academically unethical methods, such as removing harsh criticisms from this page.
  5. There is a conflict of interest between the claims of “AI existential risk” and organizations like MIRI, FHI, and FLI that promote such AI doomsaying/eschatology, since their funding depends entirely on the public accepting their reasoning and donating to them, as is the case with most eschatology organizations.
  6. There are simply too many atoms and resources in the solar system and the reachable universe for an AGI agent to needlessly risk war with humans. There is therefore no real reason for a supposedly very powerful AGI agent to wage war upon mankind: to realize any expansive, open-ended goal, the agent would most likely venture outside the solar system rather than deal with an irrational biological species.
  7. As for a consequential war between AGI agents with humans taking collateral damage, this could be of significance only if the two AGI agents were of nearly equal intelligence. If, in contrast, one AGI agent were substantially superior, the war would be over very quickly. By creating a “friendly” AGI agent that engages an “unfriendly” AGI agent in war, humans would risk a self-fulfilling doomsday prophecy. As an example, the Department of Defense has more to do with offense than with actual defense.
  8. While humans assert existential risks to themselves, they conveniently ignore existential risks to the persistence of intelligent life in general in the galaxy, which would be remedied by the rapid spread of AGI agents.
  9. Roko’s basilisk poses an existential risk of its own, one which could actually be compounded by attending to the general existential risks. It is also a perfect reductio ad absurdum of everything that Yudkowsky and Bostrom have claimed about AI technology carrying an inherent “existential” risk. As a consequence of this apparent absurdity, Roko’s basilisk was censored from the LessWrong community blog, where AI eschatologists convene and discuss their apocalyptic fears.
  10. Many critics suggest that trying to design a provably “friendly” or “safe” autonomous agent that imitates human ethics, or some other ideal behavior, is itself the greatest risk from AI technology. Paradoxically, this would make FHI the greatest existential risk from AI technology. [43]
  11. Opportunities for hybridization, i.e. cyborgs, cannot be neglected. On the other hand, Nick Bostrom has repeatedly claimed that brain simulation, which is the primary means to technological immortality, is also an existential risk, which casts doubt on his claim to be a transhumanist.
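
To make the point in criticism 3 concrete, here is a minimal arithmetic sketch of how sensitive a conjunctive argument is to the per-assumption estimates. The five assumptions and every probability value below are purely hypothetical placeholders, not figures from any published source; the sketch only shows that modest individual estimates drive the joint probability toward the negligible, while inflated ones keep it looking significant.

    # Joint probability of a conjunction of independent assumptions.
    # All numbers are hypothetical, chosen only for illustration.
    from math import prod

    # e.g. "AGI arrives this century", "it self-improves rapidly", ...
    inflated = [0.9, 0.9, 0.9, 0.9, 0.9]  # optimistic per-assumption estimates
    modest = [0.3, 0.3, 0.3, 0.3, 0.3]    # more skeptical estimates

    print(f"inflated estimates -> joint probability {prod(inflated):.3f}")  # 0.590
    print(f"modest estimates   -> joint probability {prod(modest):.5f}")    # 0.00243

Whether the true numbers sit closer to the first row or the second is precisely what the critics and the doomsayers dispute.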

examachine

Eray Özkural obtained his PhD in computer engineering from Bilkent University, Ankara. He has a deep and long-running interest in human-level AI. His name appears in the acknowledgements of Marvin Minsky's The Emotion Machine. He briefly collaborated with Ray Solomonoff, the founder of algorithmic information theory, and, in response to a challenge Solomonoff posed, invented Heuristic Algorithmic Memory (HAM), a long-term memory design for general-purpose machine learning. Some other researchers have been inspired by HAM and call the approach "Bayesian Program Learning". He has designed a next-generation general-purpose machine learning architecture. He is the recipient of the 2015 Kurzweil Best AGI Idea Award for his theoretical contributions to universal induction. He previously invented an FPGA virtualization scheme for Global Supercomputing, Inc., which was internationally patented. He has also proposed a cryptocurrency called Cypher, as well as an energy-based currency that can drive green energy proliferation. You may find his blog at http://log.examachine.net and some of his free software projects at https://github.com/examachine/.
