The combination of the simulation argument and existential AI risk nonsense can be aptly described as new-age creationism.

Claiming that AGI technology will destroy “humanity” with 20-30% probability was a rather comical maneuver by the arch-creationist Bostrom. I understand that FHI/MIRI were trying to evade the correct criticism that if the probability is negligible, no funding is required. Was not Bostrom the same non-conscious entity who said that we live in The Matrix with 33% probability? And then the FHI apologists (whom he befriended by bribing them) told me something like “he wrote that paper, but that doesn’t mean he subscribes to the argument”. I wonder if the FHI scammers realize they are getting more comical every day.

Now, is there really any reason left not to equate creationism with AI eschatology? I think there is good reason to equate them. Both are gross misapplications of probabilistic reasoning; in fact, both are exactly the same wrong methodology. Neither is Bayesian, and neither uses a proper prior. Just because Marcus Hutter is not making fun of them (and instead wrote a paper that lends slight support to their paranoia) does not mean that I will pass up the opportunity. The problem is that this has nothing to do with probabilistic reasoning; it is garden-variety superstition, or perhaps a formalization of garden-variety superstition. If you need the “philosophical” mistake spelled out, it is the same mistake Chalmers commits: conceivability does not entail possibility. In their case, the conceivability of an extremely unlikely horror/science-fiction scenario, one with hyper-near-zero a priori probability, does not entail any significant probability at all.

Those last two sentences constitute the entirety of the explanation I owe philosophically literate readers.
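
To make the methodological point concrete, here is a minimal sketch of how a proper prior behaves under Bayes’ theorem; the numbers below are purely illustrative assumptions, not derived quantities:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]

Under a complexity-weighted prior of the Solomonoff kind, \(P(H) \approx 2^{-K(H)}\), an elaborate scenario whose shortest description runs to, say, 1000 bits receives a prior on the order of \(2^{-1000} \approx 10^{-301}\). Even granting the scenario a wildly generous likelihood ratio of \(10^{6}\) in its favor, the posterior odds remain around \(10^{-295}\), still hyper near zero. Conceivability buys a hypothesis entry into the hypothesis space; it does not buy it probability mass.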

They are merely committing to ignorance, which makes it seem as if they are deriving a probability, when in reality they are just making up probabilities, inflating extremely low (hyper-near-zero) probabilities into very significant probabilities of 0.2-0.3. That is why their “research group” commits to the claims that some “naturalist theogony” is true with probability 0.33, and that AI technology will destroy us all with probability 0.3. It is merely a typical schizophrenic delusion of the kind you may encounter in any asylum, nothing more. And it can only be made to seem true by ignoring all the relevant scientific knowledge that constitutes the proper prior (even that would not salvage it, but let us not get too technical). In other words, they are repeating a severe methodological error of Chalmers’s. And let us also note that Chalmers was the first to “advance” the claim that The Matrix movie could support his dualism, and of course other theological delusions, though he pretends he does not mean that, much like Bostrom.
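
For contrast, here is where, on my reading, the 0.33 figure actually comes from: not from inference over evidence, but from the principle of indifference applied to a hand-picked menu of disjuncts (the trilemma labels below are paraphrased):

\[
P(\text{extinct before posthumanity}) = P(\text{posthumans uninterested}) = P(\text{we are simulated}) = \tfrac{1}{3}
\]

Spreading probability uniformly over three conceivable alternatives is precisely what committing to ignorance means: the 1/3 measures how many options someone wrote down, not anything about the world. Add a fourth disjunct and the “probability” obligingly drops to 1/4. A proper Bayesian treatment weights each disjunct by its prior, and the simulation disjunct, carrying all the descriptive complexity of a naturalist theogony, starts hyper near zero and stays there.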

The only logical conclusion I am forced to draw is that these charlatans are, as I suspected from the very beginning, crypto-creationists wearing a new-age techno-Christianity mask. They are trying to push the limits of natural stupidity, and as Einstein suspected, there are few limits to it.

Disclaimer: I did receive a bribe offer myself; I was told, in a comfortable setting, that if I were to stop my criticism of FHI, I could find a job there. I am not looking for a job at a pseudoscience institute, so that cannot work on me at all. I am fully committed to scientific inquiry, and I have no respect for any social conventions you may have erected to protect cognitively challenged specimens with cognitively insignificant ideas, like Bostrom and Chalmers, regardless of what monkey titles they may have improperly gained in a world full of unthinking monkeys.

Simulation Argument and Existential AI Risk

examachine

Eray Özkural obtained his PhD in computer engineering from Bilkent University, Ankara. He has a deep and long-running interest in human-level AI. His name appears in the acknowledgements of Marvin Minsky's The Emotion Machine. He collaborated briefly with Ray Solomonoff, the founder of algorithmic information theory, and in response to a challenge Solomonoff posed, invented Heuristic Algorithmic Memory, a long-term memory design for general-purpose machine learning. Some other researchers have been inspired by HAM and call the approach "Bayesian Program Learning". He has designed a next-generation general-purpose machine learning architecture. He is the recipient of the 2015 Kurzweil Best AGI Idea Award for his theoretical contributions to universal induction. He previously invented an FPGA virtualization scheme for Global Supercomputing, Inc., which was internationally patented. He has also proposed a cryptocurrency called Cypher, and an energy-based currency which could drive green energy proliferation. You may find his blog at http://log.examachine.net and some of his free software projects at https://github.com/examachine/.

8 thoughts on “Simulation Argument and Existential AI Risk”

  • April 16, 2017 at 1:14 am

    Ok, first off, I’m a layman and clearly no scientist. However, I have read a bit here and there, and what concerns me is the tendency of scientists to enthusiastically pursue nascent technologies that could have disastrous results for all of humanity. The scientists working on the first atomic bomb test weren’t 100% sure it wouldn’t ignite the entire atmosphere and destroy the earth.
    The Large Hadron Collider group weren’t 100% sure it wouldn’t create a black hole and swallow the Earth.
    The transhumanist agenda can’t 100% guarantee AI won’t supersede its programming and view humanity as hostile. It just seems humans in general are bent on designing their own end.
    Granted, none of these scenarios happened, or have yet to … but why continually invent more and more efficient means of mass destruction? I have to wonder if some of the smartest people aren’t also the most psychotic.
    You could liken it to continually sticking your hand in a snake hole: sure, you didn’t get bit the first, second, or even third time, but good god, why keep trying?

    • July 26, 2017 at 1:40 pm

      Hi Jason,

      I think your concerns are legitimate, but the analogy between nuclear weapons and AI is wrong. AI isn’t a destructive thing; it is just cognition, thinking. We seem to want to anthropomorphize it like everything else, but these programs are not humans or animals. Sure, we could try to build a free artificial person; I have a few blog entries about that sort of experiment (take a look at the earlier entries, there are a lot of interesting posts about AI). But then again, would it have to be like a human in any way? And do we really need artificial persons? In all likelihood, AI tech will not give us artificial persons, but smart tools that get stuff done in the real world, like driving a car or flipping burgers.

      Now, let us consider the other possibility. Is AI like nuclear power, then: could it turn into a fearsome weapon in the wrong hands? Not by its inherent qualities. You can use AI to control a weapon, but you would need to build such a robot first. Militaries are already building robot armies, and they hardly need AI to control them; the robots can be operated remotely. No autonomous decision making is necessary even if AI is used: just point and shoot, and if a jarhead can do it, so can a machine. I suggest we worry about militaries, wars, politicians, bankers, and so on, more than we worry about AI tech.

      Thanks a lot for reading my blog.

  • April 19, 2017 at 5:28 pm

    To the absurdity of Bostrom’s ideas, let me add the exponential resource costs that are a function of the degrees of freedom of the systems. A hyperintelligent entity would have a hyper-hyper-exponential cost. Apply Bayes to that, and the probability that such an entity can exist is vanishingly small.

    • July 26, 2017 at 1:34 pm

      Well, I will have to disagree with that; it should be possible to construct a world mind, or a stellar mind. The more pressing problem is that such an agency has not a single reason to make a video game in which it tries to emulate its ancestors, because that would require enormous resources, which are probably valuable for the many other tasks such an agency will have. Also, the information about the ancestors is not accessible; it is lost, so the scenario makes no sense for that reason, too. It is quite illogical, but you see, for creationists logic is not important; what matters to them is that they can find a single excuse, however unreasonable, to support their delusions.

  • May 28, 2017 at 12:04 am

    Eray,

    You are erroneously conflating a simulation argument with Christian creationism. A simulation argument posits no divine agency or actors at all. It posits creators who are (probably) researchers running programs on advanced hardware. That’s it – no woowoo at all.

    Your rabid dismissal of all simulation arguments on the bogus grounds that they are somehow necessarily infected with a nefarious Christian (or even generically, Theist) agenda is just plain whacky.

    You are spot on about the bogus probabilities though. There is no existing prior that isn’t irreparably polluted with rampant speculation, so Bayesian estimates – for all their convincingly dense math – are baseless bullshit.

    • July 26, 2017 at 1:36 pm

      You are wrong. I am showing the logical equivalence of classical Christian theology with the simulation argument nonsense; all the main tenets are there, I argue. That is what this post explains. I see you do not understand the arguments. That is quite alright; not everyone has to understand philosophy of religion.

  • November 26, 2017 at 2:18 pm

    Any thoughts on Dr. James Gates’s adinkra formulation?
    Topic: “Computer Code Discovered in Superstring Equations”

    http://www.youtube.com/watch?v=bp4NkItgf0E

  • June 6, 2018 at 10:25 pm

    I do love the UI/UX and the aesthetically pleasing web design.

    Top notch, imho.

    If I were to guess at your framework of thinking, I would venture a combination of mythologies related to cyberspace that first stirred in the 1990s.

    Nothing wrong with that. Admirable, really.

    I’m thoroughly enjoying your writing.

    Thanks for being awesome.

    -Matthew


