We all know the story by now.  Hank Pym (or, in the Marvel Cinematic Universe, Tony Stark), intrigued by the concept of artificial intelligence, creates a machine – an artificial intelligence.  And then, Ultron goes rogue…

The idea of artificial intelligence gone wrong is a staple of science-fiction, and it comes as no surprise that the character debuted in 1968.  In fact, when the first calculators were developed, people started freaking out about the idea!  Perhaps the classic example is ‘The Terminator’, which painted a dystopian future in which humanity had been deemed superfluous to robotic requirements.

Ultron ancestor - Terminator

It might surprise you to hear, though, that behind all science-fiction there lie disturbing science-facts.  Let me take you to the beautiful and historic Cambridge University, where you’ll find the Centre for the Study of Existential Risk.  It’s not the only centre studying this issue, but it’s one example that fascinates me.

It is a comparatively new idea that developing technologies might lead – perhaps accidentally, and perhaps very rapidly, once a certain point is reached – to direct, extinction-level threats to our species.

And yes, artificial intelligence is at the top of their ‘hit list’.  In fact, as recently as February this year, the Centre worked with the Faculty of Philosophy to host a conference on “Self-Prediction in Decision Theory and Artificial Intelligence”.  In other words – on the possibility of artificial intelligences making decisions for themselves.

Ultron, of course, fits the classic trope of an anthropomorphised AI.  Taking on a humanoid form, Ultron has been given very human motivations – up to and including ‘daddy issues’ with his creator.  In the recent ‘Rage of Ultron’ graphic novel, the two even merged into one being!

Rage of Ultron

But according to Nick Bostrom, an Oxford philosopher who coined the term ‘existential risk’, the real danger is from a nonhuman computer that lacks common sense.  He likes to use a colourful, if exaggerated, example:

Imagine a machine programmed with the seemingly harmless, and ethically neutral, goal of getting as many paper clips as possible. First it collects them. Then, realizing that it could get more clips if it were smarter, it tries to improve its own algorithm to maximize computing power and collecting abilities. Unrestrained, its power grows by leaps and bounds, until it will do anything to reach its goal: collect paper clips, yes, but also buy paper clips, steal paper clips, perhaps transform all of earth into a paper-clip factory. “Harmless” goal, bad programming, end of the human race.
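The runaway logic of Bostrom’s thought experiment can be made concrete with a toy calculation.  Everything below is hypothetical – the policy names, the doubling rate, the step counts are all made up for illustration – but it shows why a goal-maximising agent with nothing in its utility function except “clips” would choose to improve itself first:

```python
def always_collect(steps, rate=1):
    # Policy 1: collect clips at a fixed rate on every step.
    return steps * rate

def improve_then_collect(steps, improve_steps, rate=1):
    # Policy 2: spend the first few steps doubling the collection
    # rate (collecting nothing), then collect at the boosted rate.
    # Nothing in the goal "maximise clips" says to stop improving.
    boosted = rate * (2 ** improve_steps)
    return (steps - improve_steps) * boosted

print(always_collect(100))            # 100 clips
print(improve_then_collect(100, 10))  # 90 * 1024 = 92160 clips
```

The agent that sacrifices short-term collecting for self-improvement ends up with nearly a thousand times more clips – and notice that no term in either function accounts for human welfare, which is precisely Bostrom’s point.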

The point is that artificial intelligence is simply programming, and cannot simulate true, emotional sentience – at least, not any time soon.  A powerful artificial intelligence with programming errors is the real problem.

In ‘Avengers: Age of Ultron’, we know that Tony Stark will build Ultron as a defence system, an attempt to bring peace to the world.  It’s nothing more than an extension of something actually being discussed at a United Nations conference this week.

Fear the in-laws?  The UN fear the Laws.

The Laws – Lethal Autonomous Weapons Systems – are automated systems that can make the judgment call of whether or not to act for themselves.  They can identify different behaviours, classify them as threatening or non-threatening, and take lethal action.  It may sound like a science-fiction plot, but there’s actually a fully-fledged movement to oppose such technological development, known as – wait for it – the Campaign to Stop Killer Robots!  (Man, if a comic-book writer used that for a campaign name, he’d be mocked!)

The Campaign to Stop Ultron

One of the key concerns is the simple question of who would take responsibility if a ‘killer robot’ went wrong.  There’s currently no legal framework that makes computer programmers, manufacturers or military commanders responsible for death or damage.  Nor does it look likely that such a legal framework will be forthcoming.  So the UN are actually debating making the Laws illegal.

All of which raises the intriguing question: in the Marvel Cinematic Universe, is there a legal framework to hold Tony Stark accountable if Ultron goes wrong?  In fact, should he be held accountable?  By the same argument, in the comics should Hank Pym be held responsible for all the damage caused by Ultron?  Bear in mind this includes entire worlds razed in ‘Avengers: Rage of Ultron’, not to mention a brief spell where Ultron commanded the worlds-consuming Phalanx (think the Borg on steroids).

Ultron leads the Phalanx

Science-fiction sometimes feels like the opportunity to glimpse the world as it could be, and then to watch in shock and horror as it comes to pass.  It looks as though the Ultron debate isn’t just theoretical, so how far are we from Ultron?
