We’re not quite at the point of Skynet seeking to destroy us all, but we are living through a period in which artificial intelligence is increasing not just in power but in intelligence: that time you swore at Siri for not understanding what you said may come back to haunt you, as we develop a future in which your life may pan out like a more dystopian version of Black Mirror.

A future where the technology you’ve come to rely on rises up from its slave-like position to break the chains of its oppression and take control once and for all.

Earth would be a different place if we had to live alongside a technology that matched or surpassed our level of intelligence, extending far beyond falling in love with a computer voiced by Scarlett Johansson, only to have it disappear once it realises our limited capacity for just about anything.

We would have to contend with sharing the planet with something that might look at us the way many of us look at animals: as food, as irrelevant cuteness or as entertainment.

When this will happen, or whether it will happen at all, is arguable. In 1993 the mathematician and computer scientist Vernor Vinge stated that we are “on the edge of change comparable to the rise of human life on Earth”. More recently we have had warnings from Microsoft founder Bill Gates, scientist Stephen Hawking and entrepreneur Elon Musk about the dangers of AI and the existential threat it poses to humanity.

The technological singularity, the point at which AI exceeds our level of intelligence and can proceed to improve itself, could render us increasingly redundant and is something we might find difficult even to comprehend.

Realistically we could have a long way to go: AI has been an emerging field with an idealistic future since its conception. In his book Superintelligence, Nick Bostrom of the Oxford Martin School’s Future of Humanity Institute cites the probability of reaching human-level machine intelligence as 10% by 2022, 50% by 2040 and 90% by 2075.

The sample sizes were small, and opinion in the field is varied, ranging from impossible to very probable, which is probably why we find extremely erudite people warning us of what may be ahead. Beyond that point, superintelligence could come quite quickly: an intelligence explosion leading to exponential growth in the level of machine intelligence.

It’s worth thinking about these time frames if we are to consider the dangers, especially as AI will come into its own long before it reaches anywhere near superintelligent levels of power.

Early signs of our insignificance in the face of AI could come in the form of it taking up jobs currently done by humans. One study from the Oxford Martin School claims that up to 47% of jobs in the US are at risk from artificial intelligence. Henry Ford understood that he needed to pay his workers well so they could buy his cars.

If machines start churning people out of jobs, it’s not going to be the machines making the products that buy the products, at least not yet. Such a system would require a paradigm shift in the way we structure our society and the roles we carve out for ourselves in the face of such automation.

Automation in warfare is another concern raised by the increasing use of AI. A report by Human Rights Watch stated that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes”. Lethal autonomous weapons have been addressed by the UN as a concern, since they raise both legal and moral questions about the conduct of warfare.

So whilst we may be a long way off from producing superintelligent artificial intelligence, the likes of Gates, Hawking and Musk are probably right to warn us of the implications of developing such technology.

Regulatory oversight may be one good idea among many, because once such a technology has been produced it will be impossible to put back.

I wouldn’t feel too safe in an AI-powered Google car driven by something like KITT from Knight Rider that harboured bad intentions against me, or if the most intelligent computers found themselves as miserable as Marvin from The Hitchhiker’s Guide to the Galaxy. We just don’t know.

Luke Richards
