This film is merely bad science fiction, nothing more. Sure, we could build such bots, but the scenario is absurd. Would banning any kind of weapon control system prevent terrorists from using it? That alone is a basic error of the film. It says nothing useful; it merely attacks a straw man about autonomous military drones, namely the claim that autonomous AI agents would make far more effective, humane, or preferable weapons, a claim I do not believe anyone has made in earnest. In other words, it is an attempt to associate AI with “unethical weapons”. We already know that weapons are unethical; saying they could be used for an evil purpose changes nothing. The film wishes to go one step further, casting AI-equipped military drones as a class of weapons analogous to nuclear, chemical, or biological weapons, weapons that should be illegal under the Geneva Conventions or their equivalent, and hoping to make a big fuss about it all. It then draws a false dichotomy, implying that anyone who disagrees with the scenario is defending the terrible, evil AIs the filmmakers imagined, as is typical of their world-saving charade.

Who actually cares about such means? The military, terrorists, criminals: the kind of people who already use them. AI does not meaningfully improve the destructive capability of such weapons. If you, as a terrorist or a military commander, wished to kill a massive number of people, you would not care about collateral damage; you would launch a Hellfire missile from a drone, as the US army does. By that measure, the US army is far more “terrorizing” with its technologically more advanced drones, because they kill indiscriminately. This counter-example defeats the main premise of the film. I know that this is Stuart Russell’s favorite example of the supposedly unavoidable dangers of AI, but it is a ridiculous scenario that conveys no new or useful ideas, wasting everyone’s brain power. People without the requisite imagination might do better to stick to their fields of expertise: let James Cameron shoot science-fiction/horror scenarios involving robots, and let Russell stick with computer science.

The trouble is that this is merely a science-fiction/horror scenario; it is, by definition, AI scaremongering. Well done, Prof. Russell. Unfortunately, your artistic effort at fiction is rather pointless and misguided. I can understand that you would like to demonize AI by suggesting that it could be used in weapon technology, but in case you have not noticed, AI is a general-purpose technology; it could be used in anything. Such autonomous weapons already exist: guided missiles, for instance, are autonomous weapons, though it seems you believe that the moment you call a face-recognition circuit an “AI” it suddenly becomes evil. Cruise missiles, however, are perfectly moral, right? Because they are not tainted with “AI”. They are used by good people, our heroic, God-fearing militaries. That is how sensible your scenario is: it is detached from reality, like everything else published by AI doomsayers. It simply makes no sense, and the saddest part is that your dearest scaremongers will never realize or admit why.

Your severe cognitive dissonance forces you to craft straw-man scenarios and then superstitiously live by them. That sort of superstition is of course essential to the pseudo-scientific AI doomsayer cults that zany conmen like Anders Sandberg rally around. Alas, I digress. Bad science fiction tells us nothing about the actual risks of AI technology. Perhaps you would like to stop pretending that when we make robots autonomous, they suddenly become much more “evil” than their remote-controlled or slightly more primitive counterparts running a previous generation of control software. We already know that military drones are dangerous. That is precisely why they are built: to terrorize people and to destroy lives and property. That is the very purpose of a military, to kill and to destroy; in case you have not noticed, the military has seldom made ethical choices about human lives. It is the military technologists, the military companies, the military commanders, and the militaries themselves that are evil, not AI technology. And there lies the problem: nobody believes that adding more AI routines to already highly dangerous and destructive drones is going to make them “right”, yet your only hope is to make it seem that adding AI routines to such drones turns them into a morally reprehensible class of weapons they supposedly were not before.

I have tried to explain this clearly, so that everyone can see the reductio ad absurdum and the wrong assumptions. The film presents nothing of relevance, because you would find it very hard to explain why you approve of any kind of military drone in the first place in any intellectual debate not organized by your AI doomsayer cultists. You can say anything in a film; that is the beauty of fiction, but it is no substitute for an intellectual thesis. You are wrong precisely because you have run out of intellectual options, and are instead seeking to manufacture a speculative “public relations” moment through the idiocratic mechanism of YouTube and paid publicists. I suppose the vernacular term is “beating a dead horse”.

https://youtu.be/9CO6M2HsoIA

Slaughterbots – AI scare PR gone wrong

examachine

Eray Özkural obtained his PhD in computer engineering from Bilkent University, Ankara. He has a deep and long-running interest in human-level AI. His name appears in the acknowledgements of Marvin Minsky's The Emotion Machine. He collaborated briefly with Ray Solomonoff, the founder of algorithmic information theory, and in response to a challenge Solomonoff posed, invented Heuristic Algorithmic Memory (HAM), a long-term memory design for general-purpose machine learning. Other researchers have been inspired by HAM and call the approach "Bayesian Program Learning". He has designed a next-generation general-purpose machine learning architecture. He is the recipient of the 2015 Kurzweil Best AGI Idea Award for his theoretical contributions to universal induction. He previously invented an FPGA virtualization scheme for Global Supercomputing, Inc., which was internationally patented. He has also proposed a cryptocurrency called Cypher, and an energy-based currency that could drive green-energy proliferation. You may find his blog at http://log.examachine.net and some of his free software projects at https://github.com/examachine/.

4 thoughts on “Slaughterbots – AI scare PR gone wrong”

  • November 16, 2017 at 12:50 am

    Isn’t the obvious evolution of the military towards non-lethal weapons? Drexler draws a future without wars, since any war would be over within a day with zero deaths.

    • November 16, 2017 at 3:24 pm

      Well, given that our societies aren’t necessarily getting smarter, unfortunately that might take a long time. My point is that most killing is already automated; there is already a great deal of distance between sophisticated aerial weapons and their operators. I will update the essay later with some interesting information that colleagues pointed out.

  • November 18, 2017 at 10:25 pm

    Agreed, trying to ban the technology will not prevent non-state and clandestine use. The US army can kill an equal number of people with indiscriminate drones and missiles, but mass killings cause political blowback. Small UAVs with shaped charges and face recognition are easier to transport and hide than missiles, can be targeted and designed by small groups, and are also suitable for false-flag ops. The film performs a useful service in raising awareness of these technologies for the wider public.

    • November 24, 2017 at 7:24 pm

      I don’t think it does. It tries to show smart munitions as a tool terrorists can easily obtain and use. Such weapons would be illegal. And if any such drones are used by the CIA, etc., that is just political tomfoolery and has nothing to do with actual uses of smart munitions. In other words, it portrays military applications of AI as if they would cause mass proliferation of very dangerous weapons used by every terrorist and shadowy government agency. That is just foolish speculation, nothing more. It does not help raise any awareness; at best it confuses people. If such advanced weapons are so easy to deploy, why don’t we see them everywhere already?

      Saying smart munitions will be used for ethnic cleansing? That is more foolishness: what would they use them for, to kill more black people in the USA? These charlatans would be more concerned with cops killing black people if they really meant it, but they don’t. Is it a weapon of mass destruction? No, it is not. The film is just someone’s foolish imagination, someone who does not understand military strategy, current weapons, terrorism, covert operations, or weapons sales and use. Maybe those clueless idiots at FHI, like the clown called Anders Sandberg.

      You do not need face recognition to control a drone to kill a single person if you really want to, and you certainly do not need AI to control drones with massive killing capability. But is there any opposition to drones here? No. Any real opposition to smart weapons like guided missiles, drones, and smart bombs? No. So this is, by and large, just foolishness and an illogical attempt at creating emotional outrage. All of it merely tries to inflate the dangers of AI, their usual schtick. Everything shown consists of highly illegal actions that also violate international treaties, and would do so with any technology. In other words, this is just a red herring. What’s the relevance? None.


