AI doomsayer charlatans (such as Eliezer Yudkowsky, Nick Bostrom, Elon Musk and their various minions and shills) have been spreading fear, uncertainty, and doubt (FUD) about the AGI field for over seven years, and it is time to reveal what they do to censor criticism. On Wikipedia they have engaged in editor suppression with flimsy excuses, removing this entire section from the article titled “Existential risk from artificial general intelligence” and banning yours truly, by a librarian no less. Fortunately, Wikipedia does not allow them to rewrite the edit history, but I am certain that Wikipedia has been infiltrated by the witless scammers of MIRI and FHI and that they are actively suppressing criticism of their little Scientology project. Just like Scientology would!
Here is the link, and here is a copy just in case they manage to remove it from Wikipedia’s servers:
Criticisms
- The scientific validity and significance of these scenarios are disputed by many AI researchers, who criticize them as unsound, metaphysical reasoning. Much of the criticism targets the speculative, horror/science-fiction-movie-style reasoning that is not grounded in solid empirical work. Many scientists and engineers, including well-known machine learning experts such as Yann LeCun, Yoshua Bengio, and Ben Goertzel, appear to regard AI eschatology (existential AI risk) as a case of Luddite cognitive bias and pseudo-scientific prediction. [35] [36] [37] Furthermore, most of these claims have been championed not by technical AI researchers but by openly agnostic philosophers such as Nick Bostrom, who holds controversial views like the simulation hypothesis [38] and the doomsday argument [39].
- Stephen Hawking and Elon Musk received an international Luddite award for their support of the claims of AI eschatologists. In January 2016, the Information Technology and Innovation Foundation (ITIF) gave its Annual Luddite Award to Stephen Hawking, Elon Musk, and the promoters of artificial-intelligence existential risk (AI doomsayers) at FHI, MIRI, and FLI, stating that “raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption.” [40] Note further that the Future of Life Institute (FLI) published an incredibly egotistical dismissal of the Luddite award it received, claiming to employ the leading AI researchers in the world, which is not objectively the case and could be interpreted as an attempt at disinformation. [41] Many researchers view these efforts as a case of inducing moral panic, or of employing fear, uncertainty, and doubt tactics to prevent disruptive technology from changing the world while earning a good income from fear-mongering.
- The main argument for existential risk depends on a number of conjunctive assumptions whose individual probabilities are inflated, which makes the joint probability appear significant, whereas many technical AGI researchers believe the actual probability is at the level of improbable comic-book scenarios, such as Galactus eating the world. [42] (A worked arithmetic sketch of how conjunctive probabilities shrink follows this list.)
- Making an AGI system a fully autonomous agent is not necessary, and there are many obvious approaches to designing effective autonomous agents that Bostrom and his aides purposefully neglect in order to make their reasoning appear sound; the answers they offer to such solutions are straw-man arguments. They furthermore claim that it is impossible to implement any of the obvious solutions, which is also nonsensical, and they consistently try to censor all criticism of their work through social engineering and other academically unethical methods, such as removing harsh criticisms from this page.
- There is a conflict of interest in the claims of “AI existential risk” made by organizations such as MIRI, FHI, and FLI that promote AI doomsaying/eschatology, as their funding depends entirely on the public accepting their reasoning and donating to them, as is the case with most eschatology organizations.
- There are simply too many atoms and resources in the solar system and the reachable universe for an AGI agent to needlessly risk war with humans. There is therefore no real reason for a supposedly very powerful AGI agent to wage war on mankind; to realize any expansive, open-ended goal, the agent would most likely venture outside the solar system rather than deal with an irrational biological species.
- As for a consequential war between AGI agents in which humans take collateral damage, this would be significant only if the two AGI agents were of nearly equal intelligence; if, in contrast, one agent were substantially superior, the war would be over very quickly. By creating a “friendly” AGI agent to wage war against an “unfriendly” AGI agent, humans would risk turning the doomsday prophecy into a self-fulfilling one. As an analogy, the Department of Defense has more to do with offense than with actual defense.
- While humans assert existential risks to themselves, they conveniently ignore the existential risks to the persistence of intelligent life in the galaxy in general, which the rapid spread of AGI agents would remedy.
- Roko’s basilisk poses an existential risk of its own, one that could actually be compounded by attending to the general existential risks. It is also a perfect reductio ad absurdum of everything Yudkowsky and Bostrom have claimed about AI technology posing an inherent “existential” risk. As a consequence of this apparent absurdity, discussion of Roko’s basilisk was censored from the LessWrong community blog, where AI eschatologists convene to discuss their apocalyptic fears.
- Many critics suggest that trying to design a provably “friendly” or “safe” autonomous agent that imitates human ethics, or some other ideal behavior, is itself the greatest risk from AI technology. Paradoxically, this would make FHI the greatest existential risk from AI technology. [43]
- Opportunities for hybridization, i.e. cyborgs, cannot be neglected. On the other hand, Nick Bostrom has repeatedly claimed that brain simulation, the primary means of technological immortality, is itself an existential risk, which casts doubt on his claims of being a transhumanist.
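To make the conjunctive-probability criticism above concrete, here is a minimal arithmetic sketch in Python. The premise count (ten) and the individual probability (0.7) are hypothetical values chosen purely to illustrate how a chain of individually “plausible” premises multiplies out to a small joint probability; they do not model any particular argument.

```python
# Minimal sketch: how the joint probability of a conjunctive argument shrinks.
# The number of premises and their probabilities are hypothetical, chosen only
# to illustrate the arithmetic, not to model any specific doomsday argument.
from math import prod

assumption_probs = [0.7] * 10  # ten independent premises, each judged "fairly likely"

joint_probability = prod(assumption_probs)  # all premises must hold together
print(f"Joint probability that every premise holds: {joint_probability:.3f}")
# Prints roughly 0.028, i.e. under 3%, even though each premise looked likely.
```

Lowering each individual estimate from 0.7 to 0.5 drops the product below 0.001, so even modest inflation of each premise changes the joint figure by more than an order of magnitude; this is the sense in which the criticism above says the probabilities are inflated.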
- [35] Erik Sofge, “Bill Gates Fears A.I., but A.I. Researchers Know Better: The General Obsession With Super Intelligence Is Only Getting Bigger, and Dumber”, Popular Science, January 30, 2015, http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better
- [36] Will Knight, “Will Machines Eliminate Us? People who worry that we’re on course to invent dangerously intelligent machines are misunderstanding the state of computer science”, MIT Technology Review, January 29, 2016, https://www.technologyreview.com/s/546301/will-machines-eliminate-us/
- [37] Ben Goertzel, “The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)”, personal blog, October 29, 2010, http://multiverseaccordingtoben.blogspot.com.tr/2010/10/singularity-institutes-scary-idea-and.html
- [38] Bostrom’s simulation argument is considered by his critics to be a case of Intelligent Design, since he uses the term “naturalist theogony” in his paper on the subject and speaks of a hierarchy of gods and angels as well, which is suspiciously close to biblical mythology. His paper posits a post-human programmer deity that can accurately simulate the surface of the Earth long enough to deceive humans, which is a computational analogue of young-earth creationism. See https://en.wikipedia.org/wiki/Nick_Bostrom#Simulation_argument
- [39] The doomsday argument, also known as the Carter catastrophe, is a philosophical argument, somewhat analogous to religious eschatology, that a doomsday is likely to happen; it has also been used in some amusing science-fiction novels.
- [40] “Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award”, Information Technology and Innovation Foundation, January 19, 2016, https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-itif%E2%80%99s-annual-luddite-award
- [41] Future of Life Institute’s response to the Luddite award it received, http://futureoflife.org/2015/12/24/think-tank-dismisses-leading-ai-researchers-as-luddites/
- [42] Richard Patrick William Loosemore, “The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation”, 2014 AAAI Spring Symposium Series, http://www.aaai.org/ocs/index.php/SSS/SSS14/paper/viewPaper/7752
- [43] http://www.exponentialtimes.net/videos/machine-ethics-rise-ai-eschatology