The combination of the simulation argument and existential AI risk nonsense can be aptly described as new-age creationism.

Claiming that AGI technology will destroy "humanity" with 20-30% probability was a rather comical maneuver by the arch-creationist Bostrom. I understand that FHI/MIRI were trying to evade the right criticism: if the probability is negligible, funding is not required. Was not Bostrom the same non-conscious entity who claimed that we live in The Matrix with 33% probability? And then the FHI apologists (whom he befriended by bribing them) told me something like "he wrote that paper, but that doesn't mean he subscribes to the argument". I wonder if the FHI scammers realize they are getting more comical every day.

Now, is there really any reason not to equate creationism with AI eschatology any more? I think there is good reason to equate them. Both are gross misapplications of probabilistic reasoning; in fact, both follow exactly the same flawed methodology. Neither is Bayesian, and neither uses a proper prior. Just because Marcus Hutter is not making fun of them (and instead wrote a paper that lends slight support to their paranoia) does not mean that I will not enjoy the opportunity. The problem is that this has nothing to do with probabilistic reasoning; it is garden-variety superstition, or perhaps a formalization of garden-variety superstition. If you need the "philosophical" mistake explained, it is the same mistake Chalmers commits: conceivability does not entail possibility. In their case, the conceivability of an extremely unlikely horror/science-fiction scenario (one with a hyper-near-zero a priori probability) does not entail any significant probability at all.

Those last two sentences constitute the entirety of the explanation I owe philosophically literate readers.

They are merely appealing to ignorance, which makes it look as though they are deriving a probability, when in reality they are making probabilities up, inflating extremely low (hyper-near-zero) probabilities into very significant probabilities of 0.2-0.3. That is why their "research group" ends up claiming that some "naturalist theogony" is true with probability 0.33, and that AI technology will destroy us all with probability 0.3. It is merely a typical schizophrenic delusion of the kind you may encounter in any asylum, nothing more. And it can only be made to look plausible by ignoring all relevant scientific knowledge, which is precisely what constitutes the proper prior -- and even that does not salvage it, but let's not get too technical. In other words, they are repeating Chalmers's severe methodological error. And let us also note that Chalmers was the first to "advance" the claim that The Matrix movie could explain his dualism -- and, of course, other theological delusions, though he pretends he does not mean that, much like Bostrom.
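To make the methodological point concrete, here is a minimal sketch (my own illustration, not anything published by FHI/MIRI) of how a proper prior behaves under Bayes' rule. The numbers are assumed purely for illustration; the point is only that a hyper-near-zero prior cannot be inflated to 0.2-0.3 by conceivability alone.

def posterior(prior, likelihood_ratio):
    # P(H|E) from the prior odds multiplied by the likelihood ratio P(E|H)/P(E|not H)
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

prior = 1e-12            # an assumed "hyper near zero" a priori probability
evidence_strength = 1e3  # even granting evidence 1000x more likely under the doom hypothesis
print(posterior(prior, evidence_strength))  # roughly 1e-9, nowhere near 0.2-0.3

The only way to arrive at 0.2-0.3 from such a starting point is to quietly replace the scientifically informed prior with a made-up one, which is exactly the complaint above.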

The only logical conclusion I am forced to draw is that these charlatans, as I suspected from the very beginning, are crypto-creationists wearing a new-age techno-Christianity mask. They are trying to push the limits of natural stupidity; as Einstein suspected, there are few limits to it.

Disclaimer: I did receive a bribe offer myself; I was told, in a comfortable setting, that if I were to stop my criticism of FHI, I could find a job there. I am not looking for a job at a pseudoscience institute, so that cannot work on me at all. I am fully committed to scientific inquiry, and I have no respect for any social conventions you may have erected for the protection of cognitively challenged specimens with cognitively insignificant ideas, like Bostrom and Chalmers, regardless of what monkey titles they may have improperly gained in a world full of unthinking monkeys.
