Artificial intelligence
AI already surrounds our lives, delivering great product experiences. It can recognize human speech, spot fraud or drive a car. Azeem Azhar explains what lies behind the AI boom and why it concerns us all.
The idea of “thinking machines” or artificial intelligence (AI) seems ubiquitous now. We’re surrounded by systems that can mostly understand what we mean when we talk to them, sometimes recognize images, and occasionally recommend a movie we might want to watch. These developments are the result of 60 years of fluctuating interest, progress and respectability. Today, it’s undeniable that awareness of AI is at an all-time high; so, is it finally time for AI to pay dividends? The short answer is “yes”. But why now? And why does it matter?
The term AI was first coined in 1956, when science had tamed the atom and the space age was about to begin; AI was seen as similarly tractable, leading Marvin Minsky, a cognitive scientist at MIT, to predict that “Within a generation […] the problem of creating ‘artificial intelligence’ will substantially be solved”.
Sadly, AI was much harder than rocket science, and while rockets landed on the Moon, belief, interest and funding for AI research crashed back to Earth, leading to the first of many “AI winters”.
What the original AI researchers tried to create was AGI, or artificial general intelligence: a computer as smart and flexible as a human, able to perform any intellectual task that a human being can. Meanwhile, science fiction writers were stoking fears about ASI, or artificial superintelligence: computers smarter than humans. These ASIs would either be man’s ultimate nemesis, such as Skynet from the Terminator film series, or our guardian angels, like the Minds of Iain M. Banks’ Culture novels.
The AI we see today is ANI, or artificial narrow intelligence. ANI specializes in just a single area, so an AI like Google’s AlphaGo can beat the world champion at Go, but would be beaten by a three-year-old at recognizing photographs. Baidu, the Chinese Internet giant, has developed an AI able to transcribe speech better than a human, but it’s incapable of opening a door, let alone playing chess. This is because AGI is hard. Things most humans find difficult, such as math, financial market strategy or language translation, are child’s play for a computer, while things humans find easy, such as vision, movement and perception, are staggeringly difficult for them. Computer scientist Donald Knuth summed it up by saying “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’” That is only because our skills have been optimized by millions of years of evolution.
You can go an awfully long way with ANIs, however, and they’re now everywhere in our world. If you use an iPhone, your usage and preferences are modeled by an AI known as a deep neural network running 24×7 on your phone. If you take photos on an Android device, Google Photos uses AI techniques to recognize your chums and describe your pictures. The world’s largest technology firms have recognized that AI builds better products; better products mean happier users; happier users mean higher profits.
ANI systems can recognize human speech, describe images, spot fraud, profile customers, reduce power consumption or drive a car. So it’s no surprise that Apple, Google, Facebook, IBM, Twitter and Amazon have been busily buying up the top AI startups and hiring talent at a ferocious rate. In September 2016 Apple had a whopping 281 open roles for specialists in machine learning – an important sub-discipline of AI. Google counts more than 7,000 employees involved in AI or machine learning (or about 1 in 8). So, what’s behind the current boom? The accelerants can be roughly divided into three categories:
1. Underlying technologies
Computing is now both powerful and cheap enough to carry out the complex mathematics driving the algorithms that underpin AI systems. Moore’s Law, which predicts the doubling of available computing power every 18 months, has helped, but so have new technology architectures, like the GPU (graphics processing unit), whose use for general-purpose computation NVIDIA pioneered in the late 2000s. Using GPUs, computations that once took days now take just minutes, and that speed is still doubling every 18 months. The scale of change has been literally astronomical: today, ten times more transistors are made every second than there are stars in the Milky Way galaxy. My Apple Watch has more than double the processing power of 1985’s fastest supercomputer, the Cray-2, which back then cost nearly 20 million USD. Amazon rents out computing power equivalent to one hundred Cray-2s for less than 3 USD an hour.
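To get a feel for what doubling every 18 months compounds to, here is a rough back-of-the-envelope sketch in Python; the 30-year span (roughly Cray-2 to today) and the 18-month doubling period are illustrative assumptions rather than measured figures.

# Back-of-the-envelope Moore's Law compounding (illustrative assumptions only).
def moores_law_speedup(years: float, doubling_period_years: float = 1.5) -> float:
    """Return the compounded performance factor after `years` of doubling
    every `doubling_period_years` (18 months by default)."""
    return 2 ** (years / doubling_period_years)

# Roughly the span from the Cray-2 (1985) to a modern wrist-worn chip.
print(f"{moores_law_speedup(30):,.0f}x")  # ~1,048,576x over 30 years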
Alongside Moore’s Law has been an explosion of data. AIs are a bit like children: They need to be trained, but they’re comparatively slow learners, requiring lots of data to learn to recognize even simple things like the letter ‘A’. Fortunately, 2.5 billion gigabytes (or 2.5 exabytes) of data are now generated every day – more than 90 percent of all the data ever created has been generated in the last two years.
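The oft-quoted “90 percent in the last two years” figure is really just a property of sustained exponential growth. The sketch below assumes, purely for illustration, that yearly data production grows by a factor of about 3.2; under that assumption the most recent two years dominate the cumulative total.

# Minimal sketch: fast exponential growth means most data is recent.
# The 3.2x annual growth factor is an assumption for illustration only.
growth_per_year = 3.2
years = 20
produced = [growth_per_year ** y for y in range(years)]  # relative volume per year
total = sum(produced)
last_two = sum(produced[-2:])
print(f"Share created in the last two years: {last_two / total:.0%}")  # ~90%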
2. Business and technology
The backdrop to this underlying technology has been the increasing computerization of business, prompting tech investor Marc Andreessen to coin the phrase “software is eating the world”. This insatiable appetite comes from the realization that every business problem is now behind a digital interface. And as application programming interfaces (or APIs) have become the norm for those digital interfaces, it has become easier for automated systems, like AI, to access and control them. Uber, for example, showed that running a transportation system was really a route optimization and liquidity management problem.
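As a minimal sketch of what “behind a digital interface” means for an automated system, the snippet below calls a hypothetical dispatch API; the endpoint, token and fields are invented for illustration and are not any real Uber or vendor API.

# Minimal sketch of software driving a business process through an API.
# Endpoint, token and fields are hypothetical, not a real service.
import requests

def request_route_plan(pickups: list[dict], token: str) -> dict:
    """Ask a (hypothetical) dispatch service to optimize a set of pickups."""
    response = requests.post(
        "https://api.example-dispatch.com/v1/route-plans",
        headers={"Authorization": f"Bearer {token}"},
        json={"pickups": pickups, "objective": "minimize_total_wait"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. an ordered list of stops per driver

Because the whole process is addressable like this, an optimization or learning system can call it millions of times a day without a human in the loop.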
3. Feedback loops
The sum of the first two accelerants is an AI lock-in loop, in which greater investment begets greater results, which in turn beget greater investment. And that is exactly what we are seeing: massive increases in performance taking AI systems over a tipping point, beyond which they can deliver a genuinely delightful product experience. And this is the point at which we care about AI: when it does better than a human. Speech recognition systems that are nearly as good as a human are just frustrating; we won’t use them unless we have to. Self-driving cars half as good as an average human driver are a no-go. But once self-driving cars are better, as Tesla’s data suggest they now are, we cross a boundary.
And this is where we are with ANIs. Across an increasingly wide range of domains, artificial narrow intelligence meets or exceeds human performance. And as it does so, the delight we as consumers get from these services draws us to those with the artificial smarts. So, it’s a bright shining future, then? Well, perhaps.
Luminaries such as Stephen Hawking and Elon Musk worry that super-intelligent AIs may ultimately threaten our survival as a species. That’s a subject for another essay, but the growth of AI presents some real challenges. First, the creation of natural monopolies that would be harder to escape than Google’s search monopoly or Facebook’s social monopoly: “data network effects” favor those with the most data, so will the rest of us become mere sharecroppers? Second, accountability for algorithmic decisions: will these decisions be based on fair, balanced, diverse data, or on biased data and shortcuts that discriminate against women, the poor and minorities? Third, the redundancy of many previously human functions: as AI systems develop and improve more and more skills, will they cost people their jobs, and with them their self-respect and social standing?
It now feels as if AI has turned a corner and already powers parts of our everyday lives. But, like some tragic Greek hero, AI is doomed to become invisible the moment it succeeds. We underplay the remarkable scientific achievements of computers that can transcribe speech or drive a car. Indeed, as many in the industry have noted, “We stop calling it AI when it starts to work,” unfairly moving the goalposts whenever AI looks likely to score. By many previous measures AI has not just scored, but has already won the game.