These days we hear a lot about Artificial Intelligence (AI), but many folks I speak to seem to have little appreciation for what it’s all about – and why it’s so important. If you’re curious, here is my take on the past, present and future of this discipline.
Where we’re coming from
Artificial Intelligence has gone through a number of phases in the past seven decades. In the fifties there was the vision, articulated by Alan Turing, that the newly invented electronic computer would attain the capabilities of human-like thought. Then, in the sixties and seventies, computers became powerful enough to tackle some specific challenges like playing chess, conversing with a human, or understanding the world. There was high hope and great hype, which crashed in the seventies and eighties when it became clear that both the hardware and the algorithmic methods available then were inadequate in any but the most limited domains, and that generalization to real “intelligence” was nowhere in sight. The nineties saw progress in artificial neural network research, in which I was engaged as an Intel researcher. Finally, the new millennium combined these advances with the immensely powerful hardware now available and with the availability of Big Data to produce a slew of AI applications that are truly astounding – though not in the manner that Turing had imagined.
In fact, having observed AI’s progress and its applications for years, I find it at once exhilarating and alarming, successful and lacking… a very confusing position. Which, come to think of it, can be used to describe biological intelligence just as well.
So what is this all about?
Where we are now
The ultimate goal of AI research is to create a computer program that can act like a human being across a very wide range of domains. This is what is needed to pass the Turing test, and it is an incredibly hard challenge – one we are nowhere near meeting yet. In that sense, Turing (who had predicted success by the end of the 20th century) was wrong.
But today most computer scientists aren’t losing sleep over passing the Turing test. They prefer to attack more modest goals, and to do it in a way that can earn their companies lots of money. Yes – much leading-edge work in AI is found today not in academia but in corporations like Google, Facebook, Amazon and eBay. All of these – and others – are pushing the state of the art forward to solve specific problems that previously only a human could solve, and in that they are extremely successful. Every day sees new AI applications that would have been considered miraculous 20 years ago.
And so, the world champions in chess, Jeopardy! and Go are AI programs from IBM and Google; and so, we have near-perfect speech recognition on every smartphone; and face recognition; and so on and so forth.
Not that anyone stays impressed for long: we move ahead, adopting the implied definition “Real AI is what we haven’t yet been able to do this year”.
How AI does its thing
Here’s the amazing fact: we have no idea how. Back in the sixties, if you wanted a program that could identify a fast racehorse in a photo, you’d need to analytically specify what a fast horse looks like (good luck with that!), then write a program to seek that look by going over the photo in a manner you specified. In the sixties that would’ve been practically impossible due to hardware limitations, but had it been possible, the programmer would have known exactly how the program did its thing, since they wrote the program! To quote Lady Ada Lovelace’s 1843 observation, “[The computer] can only do whatever we know how to order it to perform”.
Today, by contrast, we have AI programs that can work wonders, but we have no idea how they do it. We don’t need to specify the physical attributes of our winning horse; all we need to do is train the program on lots and lots of data – oodles of photos of horses and their associated race outcomes. The task becomes one of statistical classification: here is a pile of fast-horse photos, here is a pile of slower ones – now figure out how to classify a new photo into one of these piles. We can train programs to do this very well – neural networks are especially good at it – but we have no idea what it is about a photo that makes the program flag it as a winner. The photos are fed in as strings of ones and zeros, or pixel values, so the computer doesn’t even know it is dealing with a two-dimensional image. It certainly took much progress in computer science to learn how to build algorithms that can learn like that; but now that we have such an algorithm, it isn’t telling us how it does the classification.
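To make this concrete, here is a minimal sketch of that “pile of fast horses, pile of slower horses” setup, written in Python with scikit-learn’s MLPClassifier (a small neural network). The horse “photos” below are random pixel arrays I made up purely for illustration – stand-ins for real images and race outcomes, so the network will hover around chance accuracy. The point is only to show the shape of the workflow: feed in labeled pixel strings, let the network learn its own weights, then ask it to classify a new photo.

```python
# A minimal sketch: train a small neural network to sort "photos" into
# fast-horse / slow-horse piles. The data here is random, standing in for
# real photos and race outcomes, so don't expect better-than-chance accuracy.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 500 "photos", each flattened to 16x16 = 256 pixel values.
# The computer only ever sees these flat strings of numbers.
X = rng.random((500, 16 * 16))
y = rng.integers(0, 2, size=500)   # 1 = fast horse, 0 = slower horse

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# We choose the network's architecture, but the weights it ends up with
# are learned from the data, not rules we wrote down.
model = MLPClassifier(hidden_layer_sizes=(64, 16), max_iter=300, random_state=0)
model.fit(X_train, y_train)

print("accuracy on held-out photos:", model.score(X_test, y_test))

# Classify a brand-new "photo". Nothing in the trained model tells us
# which features of the pixels drove the decision.
new_photo = rng.random((1, 16 * 16))
print("predicted pile:", "fast" if model.predict(new_photo)[0] == 1 else "slower")
```

With real photos, the same few lines would actually learn something useful – yet the trained model would still be an opaque tangle of numeric weights, which is exactly the point.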
And these programs are doing an outstanding job. They can predict world events like epidemics or political unrest by analyzing and correlating millions of past news articles. They can interact in natural language with users on e-commerce web sites. They can recognize faces, even if the person has grown a beard in the meantime. They can analyze X-rays and mammograms and identify life-threatening conditions. And lots more – often without telling us how! All they ask is that we provide them with the aptly named Big Data – which we can easily do in this age of global connectivity and pervasive digital data collection.
What I find amazing is not just the achievements but the rate of progress, which keeps accelerating. Every week brings new capabilities – not just to scientific journals but to our own devices. Tools like Google Search are quietly adding features – searching by image, identifying a tune, surfacing what will help you even when you hadn’t asked for it in the query… and it is natural for Google to be doing this, because it has perhaps more Big Data than anyone.
So where is all this going?
How should I know? Ask an AI!
Seriously… there are two paths to keep in mind.
First, in specific applications like the ones mentioned above, there is no doubt that progress will continue, at a growing speed, propelled by Moore’s law and the huge financial incentives driving commercial development. The old saying that “computers and robots can replace manual labor but will never replace humans in intellectual tasks” sounds increasingly hollow. It seems likely that in 20 or 30 years computers will be able to replace humans at practically any task – be it driving cars (already here!), teaching students, creating art, or designing and programming more powerful computers. This could be a good thing or a bad thing, depending on how humankind handles it; in any event it will bring massive changes and dislocations that will transform our world.
Second, and rather less certain, we may see progress toward “General Artificial Intelligence” – machines that can pass the Turing test, and that may even develop consciousness at some point. What will happen then is a subject of debate; one projected scenario is the “technological singularity” predicted years ago by Ray Kurzweil (who is currently at Google, not surprisingly). In this scenario the exponential growth in computing power culminates in a rapid “superintelligence explosion”, which he extrapolates will happen around mid-century. What lies beyond that singularity can’t be predicted at all – it may involve a fruitful symbiosis between humans and machines, or the end of humanity as we know it. This is what prompted Elon Musk, as well as the late Stephen Hawking, to warn that AI is an extremely dangerous discipline.
What you should do about this
Whichever path materializes, you can see that AI is far more than just another computer technology. It is the ultimate technological disruptor, and will change everything. Some find this scary and threatening; others find it exciting; nobody can deny that it will be interesting.
For your part, I strongly advise keeping an eye on the latest developments in this space, because not knowing about the path AI is taking (and taking us on) will leave you blind to what may be the most important thing happening in your lifetime.