AI: Still not clever enough to be allowed out on its own

Barely a day goes by now without a Robots Taking Jobs story. If that wasn’t bad enough, once they’ve taken all our jobs they will eventually take over the world. They might even wipe us out or enslave us, though why they would bother is beyond me.

All good clean entertainment but in most news pieces there is little or no attempt to explain terms like artificial intelligence and machine learning. As Matt Ballantine pointed out a few weeks ago, some of the robot stories are pure hype.

Imagery matters. Imagery shapes the agenda. And there’s a whole load of crap, clichéd stock imagery that time-pressed and underpaid online editors attach to their copy without really thinking.

So just what is artificial intelligence and can machines really learn?

There’s no generally agreed definition but there is a useful explanation of the various terms here, summarised below, which is a good starting point.

Artificial Intelligence – a broad term referring to computers and systems that are capable of essentially coming up with solutions to problems on their own. The solutions aren’t hardcoded into the program; instead, the information needed to get to the solution is coded and AI uses the data and calculations to come up with a solution on its own.
Machine Learning takes the process one step further by offering the data necessary for a machine to learn and adapt when exposed to new data. Machine learning is capable of generalizing information from large data sets, then detecting and extrapolating patterns in order to apply that information to new solutions and actions. Obviously, certain parameters must be set up at the beginning of the machine learning process so that the machine is able to find, assess and act upon new data.
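To make the distinction concrete, here is a minimal sketch, with invented data, of what “learning from data” means in practice. Nothing about the categories is hardcoded; the rule the program applies is derived entirely from the examples it is given:

```python
# A minimal "learning from data" sketch: a nearest-mean classifier.
# The rule is derived from examples, not written into the program.

def train(examples):
    """examples: list of (value, label). Returns the mean value per label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Assign the label whose learned mean is closest to the new value."""
    return min(model, key=lambda label: abs(model[label] - value))

# Learn the distinction between "small" and "large" purely from data.
model = train([(1, "small"), (2, "small"), (9, "large"), (11, "large")])
print(predict(model, 3))   # small
print(predict(model, 10))  # large
```

Feed it different examples and it learns a different rule; that, in miniature, is the generalisation the definition above describes.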

Essentially, what we call artificial intelligence (AI) has come about because we now have vast amounts of digital data and machines with massive computing power. They can trawl this data within seconds, enabling them to do things that, for humans, would require intelligence.

Here’s a simple example. A few weeks ago, a friend of mine posted a picture of himself in an old church and, having stripped out any identifying tags, asked his friends to guess where he was. It wasn’t difficult. I knew he was on a trip to York. I knew that most people who go to York visit the cathedral first. It was a simple matter of getting a map, looking at the nearby churches and doing an image search until I found the right one. I found it on the third attempt.

Google have developed a programme which can do this. It can identify a location from a photograph, without needing GPS metadata. It would be able to find my friend’s location just by recognising the pixels and matching the photographs. It doesn’t need to know that he’s in York. It doesn’t need to know he’s in England. It doesn’t even need to know that it’s looking for a church. It can trawl millions of photographs at such speed that it makes redundant the human intelligence I needed to apply to solve the problem. It’s not actually thinking, but it can process data at such a rate that it achieves things that would require a lot of thinking for a human to do.
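The mechanics can be sketched with a toy nearest-neighbour match. Everything below is invented for illustration, and real systems work on learned image features at vastly larger scale, but the principle of matching a query against a huge reference set, with no understanding of what the picture shows, is the same:

```python
# Toy sketch: locating a photo by brute-force pixel comparison against a
# reference set. The program never "knows" it is looking at a church.

def pixel_distance(a, b):
    """Sum of absolute per-pixel differences between two images."""
    return sum(abs(x - y) for x, y in zip(a, b))

def locate(query, reference_db):
    """Return the location whose reference image is closest to the query."""
    return min(reference_db,
               key=lambda loc: pixel_distance(query, reference_db[loc]))

# Four-pixel "images" of known locations (invented data, for illustration).
reference_db = {
    "York Minster":     [200, 180, 90, 60],
    "Durham Cathedral": [40, 50, 200, 210],
}
print(locate([198, 182, 95, 58], reference_db))  # York Minster
```

Scale the reference set up to millions of images and add better distance measures, and you get something that looks like geographical knowledge without any being present.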

Furthermore, AI can recognise patterns in data and learn from them. Neural networks enable machines to cluster and classify data so that they can, for example, recognise faces and identify objects. They can also establish correlations and therefore make predictions based on past data.

Machine learning, too, involves the application of huge amounts of data. As Bernard Marr explains:

A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain.

Essentially it works on a system of probability – based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
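Marr’s description, make a decision, receive feedback on whether it was right, modify the approach for the future, can be illustrated with the simplest possible learner, a textbook perceptron. This is a generic sketch, not the mechanism behind any particular system mentioned here:

```python
# A classifier with a feedback loop: when told its decision was wrong,
# it adjusts its weights so it behaves differently next time.

def predict(weights, bias, features):
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

def train(data, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in data:
            error = label - predict(weights, bias, features)  # the feedback
            if error:  # wrong decision: modify the approach
                weights = [w + lr * error * f
                           for w, f in zip(weights, features)]
                bias += lr * error
    return weights, bias

# Learn a simple rule: label is 1 when the first feature dominates.
data = [([2, 1], 1), ([3, 0], 1), ([0, 2], 0), ([1, 3], 0)]
weights, bias = train(data)
print(predict(weights, bias, [4, 1]))  # 1
print(predict(weights, bias, [1, 4]))  # 0
```

No rule was ever written down; the weights simply drifted, correction by correction, towards something that works on the data it saw. That is all the “learning” amounts to.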

There was a lot of fuss last year about a system designed to judge beauty contests which turned out to be racist. Or, at least, that is how some people interpreted it. I came across the story when someone on Twitter accused the people who developed Beauty.AI of programming racism into it. They didn’t, of course, but the truth is even more interesting.

Beauty.AI learnt its understanding of beauty from millions of images. The trouble is, most of those images were white. The programme therefore assumed that lighter skin was one of the criteria by which it should judge contestants and didn’t pick any dark-skinned people as winners. The machine itself wasn’t being racist. It was simply reflecting the data it had been given.

Microsoft’s AI chatbot, Tay, ran into similar problems when it began tweeting racist comments. Again, it hadn’t been programmed to be racist but it had been instructed to replicate the speech patterns of people with whom it engaged. It only took a few targeted tweets from a band of dedicated fascists, or mischief-makers pretending to be fascists, and before long, poor Tay was denying the holocaust, praising Hitler and going on about building a wall and making Mexico pay for it. Eventually it signed off sounding tired and emotional.

Of course, the programme wasn’t actually racist. It only appeared so because it was imitating the speech of the people with whom it had interacted. Like Beauty.AI, it was just doing as it was told and learning from the data it had been given.

We find it entertaining to endow artificial intelligence with human characteristics but really it is simply machines crunching massive amounts of data at incredible speed. I say simply, but the sheer power of these machines means they will be able to perform tasks which currently require a significant level of human intelligence.

A couple of weeks ago I chaired a panel on the future of work, made up of some very distinguished experts. One of them, Sarah O’Connor from the Financial Times, told us how she pitted herself against an AI programme called Emma in a competition to write a commentary on the latest employment figures. Both pieces were submitted to editor Malcolm Moore who then had to decide which one to run. This short video tells the story.

In the end Sarah won. The machine produced copy much more quickly than she did but it wasn’t as good. It lacked Sarah’s insight and ability to make wider connections.

Emma was indeed quick: she filed in 12 minutes to my 35. Her copy was also better than I expected. Her facts were right and she even included relevant context such as the possibility of Brexit (although she was of the dubious opinion that it would be a “tailwind” for the UK economy). But to my relief, she lacked the most important journalistic skill of all: the ability to distinguish the newsworthy from the dull. While she correctly pointed out the jobless rate was unchanged, she overlooked that the number of jobseekers had risen for the first time in almost a year.

Interestingly, Emma also appeared to blame poor wage growth on immigration. Again, this simply reflects the data the programme was accessing and its aggregation of previously written UK labour market commentary.

As Sarah went on to point out, Emma isn’t going to take her job but it could save her a lot of time. By pulling out the relevant data and creating a starter commentary, it would give Sarah more time to add the creative insights that make an article informative and thought-provoking. Machines might take over the more routine and tedious bits of people’s jobs, leaving them to do something more interesting.

There is evidence that this is starting to happen in a number of professions. Last week the FT reported on law firms using AI to do some of the mundane work that used to be done by junior lawyers, such as trawling through Land Registry documents and pulling out information from title deeds.

Bertalan Mesko reckons AI will make him a better doctor. It won’t literally tell doctors how to treat people but it can collate, and give doctors rapid access to, vast amounts of medical information on which to base their diagnoses. Last year, doctors in Japan used IBM’s Watson computer to cross-reference a patient’s condition against 20 million oncological records and discovered that the patient had a rare form of leukaemia. Using its ability to mine data and find patterns, a machine can provide a diagnosis that is quite often right.

Machines, then, can produce intelligent outcomes without actually being intelligent in the same way that humans are. They can do things that look intelligent to us because we need intelligence to do them, simply by processing huge amounts of data very quickly and by being able to recognise patterns within that data.

Will we ever develop artificial general intelligence, which would enable machines to think in the way humans can? Opinion is divided. Some scientists believe the human mind is too complex to replicate. Nigel Shadbolt, professor of AI at Southampton University, says:

Brilliant scientists and entrepreneurs talk about this as if it’s only two decades away. You really have to be taken on a tour of the algorithms inside these systems to realise how much they are not doing.

The machines, he says, might look clever but we are a long way from making them intelligent:

[I]t is easy to imagine we have endowed our AI systems with general intelligence. If you watch the performance of IBM’s Watson as it beats reigning human champions in the popular US TV quiz show you feel you are in the presence of a sharp intelligence. Watson displays superb general knowledge – but it has been exquisitely trained to the rules and tactics of that game and loaded with comprehensive data sources from Shakespeare to the Battle of Medway. But Watson couldn’t play Monopoly. Doubtless it could be trained – but it would be just another specialised skill.

We have no clue how to endow these systems with overarching general intelligence. DeepMind, a British company acquired by Google, has programs that learn to play old arcade games to superhuman levels. All of this shows what can be achieved with massive computer power, torrents of data and AI learning algorithms. But our programs are not about to become self-aware. They are not about to apply a cold calculus to determine that they and the planet would be better off without us.

What of “emergence” – the idea that at a certain point many AI components together display a collective intelligence – or the concept of “hard take-off”, a point at which programs become self-improving and ultimately self-aware? I don’t believe we have anything like a comprehensive idea of how to build general intelligence – let alone self-aware, reflective machines.

Others, though, say that it is only a matter of time. A human brain is simply a series of atoms so, sooner or later, we will be able to replicate it. A paper by Oxford University’s Future of Humanity Institute noted:

[P]redictions on the future of AI are often not too accurate and tend to cluster around ‘in 25 years or so’, no matter at what point in time one asks.

As if to prove the point, their survey of 550 AI experts, carried out in 2013, concluded:

[T]he results reveal a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50.

As Andrew Ng, chief scientist at Chinese web search giant Baidu and associate professor at Stanford University, said:

Those of us on the frontline shipping code, we’re excited by AI, but we don’t see a realistic path for our software to become sentient.

There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.

But even if machines don’t learn to think in the near future, their sheer power may cause them to do things their creators didn’t anticipate. Machines learn from data, but the scale and complexity of that data mean that humans can’t possibly know what conclusions the machines will draw. It’s not until they start discriminating on the grounds of skin colour, making racist remarks, creating super weapons in a computer game or concluding from data that asthma plus pneumonia means a lower risk of death that the people who programmed them realise something might be wrong. It is possible to make a computer do something you didn’t mean it to just by making a mistake in code. When you are telling it to learn for itself by trawling a mass of data, there will inevitably be some unintended consequences.

Furthermore, data produced by humans may also reflect long-standing social prejudices so, while we may think a machine is impartial, if it is basing its decisions on what has happened previously it will replicate the bias of the past. If prevailing social attitudes associate lighter skin with beauty then the machine will do so too. Likewise, if we train machines to select job candidates based on examples of people who have been good performers in the past, they will simply replicate the biases of an organisation’s human recruiters and managers. By ascribing objectivity to machines we might further entrench existing prejudices.
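A toy sketch of how this happens, with entirely invented data: the model below never mentions group membership anywhere in its logic, yet it faithfully reproduces the bias baked into the historical decisions it was trained on.

```python
# A "hiring" model trained on past decisions replicates past bias,
# even though no bias is written into the code itself.
from collections import defaultdict

def train(history):
    """history: list of (features, hired). Learns P(hired | features)."""
    counts = defaultdict(lambda: [0, 0])  # features -> [hired, total]
    for features, hired in history:
        counts[features][0] += hired
        counts[features][1] += 1
    return {f: hired / total for f, (hired, total) in counts.items()}

def recommend(model, features):
    return model.get(features, 0.0) > 0.5

# Past decisions: equally qualified candidates, but group "b" was
# rarely hired (invented data, for illustration).
history = [
    (("qualified", "a"), 1), (("qualified", "a"), 1), (("qualified", "a"), 1),
    (("qualified", "b"), 0), (("qualified", "b"), 0), (("qualified", "b"), 1),
]
model = train(history)
print(recommend(model, ("qualified", "a")))  # True
print(recommend(model, ("qualified", "b")))  # False: past bias, replicated
```

The model is doing exactly what it was asked, predicting what the organisation did before, which is precisely why calling its output “objective” is dangerous.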

As Nigel Shadbolt says, the potential dangers in AI are not the stuff of apocalyptic science fiction. What we should be worried about, he says, is far more mundane:

[T]here is the danger that arises from a world full of dull, pedestrian dumb-smart programs.

We might also want to question the extent and nature of the great processing and algorithmic power that can be applied to human affairs, from financial trading to surveillance, to managing our critical infrastructure. What are those tasks that we should give over entirely to our machines?

Anyone who has wept with frustration trying to find a contact phone number on a corporation’s website when its hard-coded processes can’t answer a slightly unusual query will see the potential danger. An assumption that clever systems are comprehensive and objective could result in frustration for users, unfair decisions or even serious harm. The threat isn’t from robots running amok but from an alignment of unforeseen circumstances and small mistakes, amplified by the power and reach of connected machines. The usual perfect storm but running at breakneck speed.

No-one can be sure where artificial intelligence will take us and what it will enable us to do. It is likely to have a huge impact on work and employment over the next couple of decades. But Professor Shadbolt’s term ‘dumb-smart’ is a useful reminder that, at the moment, it’s not actually that clever and we are still not clever enough to anticipate what it might do with our instructions. AI therefore still requires human supervision and vigilance. It’s not intelligent enough to be allowed out on its own and perhaps it never will be.


7 Responses to AI: Still not clever enough to be allowed out on its own

  1. Dipper says:

    Years ago I looked into decision making and computers and came across some articles from the late 1960’s and early 1970’s (sadly cannot find the references for these). The basic idea was that the new generation of mainframe computers and the new technology of linear optimisation meant we could now solve all our problems. We simply had to quantify the parameters and constraints of every problem, feed them into the mainframe et voila! the answer. With just a few mainframe computers, mankind could dispense with politicians and solve all our problems objectively and optimally.

    One of the first examples of how this wasn’t going to work was choosing the site for London’s third airport. Hillingdon in NW London was quite a popular choice, but there was a disused church there. No-one could agree whether the numerical value to assign to the value of the church was zero (it was unused – no-one wanted it) or infinite (the cultural value of a church could not be measured in money).

    Whenever I hear these arguments for how AI was going to take over the world I drift back to mainframe computers and a disused church in Hillingdon.

  2. Dipper says:

    … and if AI was really going to work then surely fund management would consist of banks of computers and about three people running them. This industry consists purely of information and algorithms both soft (“sell in May and go away”) and hard (neural networks). Needless to say there are hundreds of thousands of people in this industry.

    In my time doing quant stuff in finance we spent a lot of time trying to find the perfect algorithm. We often found promising candidates but these would fail whenever a meddling politician made some dumb announcement. I heard a few presentations from people running neural networks and these were always the same. “We set up a neural network. We trained it on a period of data. We ran it and did the trades. We made some small money, but then something changed and we started losing money, and then we stopped it.” Quite often they would look into the innards of the model and say they had found one parameter “which looked like momentum” and another “which looked like interest rate differential”. Needless to say at that point it is no longer a neural network but a model your intern can put together in Excel. Neural Networks are for people who would like to be clever, but aren’t.

    There is no magic. It is just data, calculations, and statistics.

  3. Dipper says:

    … and an obvious test of AI is for it to predict its own future.

  4. bill40 says:

    If I were a journalist at the Mail, Sun or Express I’d be very worried about Emma. Emma can spout unproven racist nonsense much faster than them, hell she could take all their jobs right now!

  5. TheWickedChild says:

    I really enjoyed the blog. A couple of comments as discussed:
    Firstly, a technical point: I don’t think machine learning is beyond AI. Sort of the other way round for me. Machine learning can be very simple, as it can just rely on code to optimise a couple of variables based on a data stream. There’s a limit to the simplicity of AI. Both AI and machine learning require an aspect of data.
    On issues with AI, you use the infamous case of the beauty.AI judge. But what (I think) is not noted in this case is that it maybe was more ‘accurate’ than we wanted to admit. In most cultures, lighter skin tone is classed as more attractive due to young (childless) females having lighter skin on average.
    Did the AI pick this up, is it accurately reflecting us? What about between cultures (in West)?
    Hmm, problematic: who are we to judge the AI, eh? So was there an issue with this AI? Probably yes, but it could be argued that the AI was being accurate and that we humans couldn’t handle the truth. The deep issue this is uncovering is that we don’t know what we want from AI. Do we want it to provide an accurate analytical output, or do we expect this to be mediated with values that reflect our own? A big weakness we have with the AI, therefore, is often not a failure to understand the technology, it’s a failure to understand ourselves.
    On application in complex applications (not requiring a value judgment!), I believe you are underestimating the near-term potential. Again, the issue is in the difficulty in envisaging the change. Half of the battle is in reimagining the work to suit the talents of the AI, as currently, things are set-up for humans. However, we work very differently now to when I started my career: it seems likely that we can/will do the same to change to suit AI. This will be a similar transition to factories where having machines work mostly unaided was, at first, odd and disconcerting, but now is the default.
    Interestingly, at work I’m talking your line and bigging up the humans, where our MD just wants to cut the humans out of the loop: that’s his aim. However, I think there will be more closed-loop AI systems in the near future than people expect. One part of my study is looking at how humans react to information, and how we can improve that response. I’m not going to lie, it’s not always encouraging. Machines don’t have to be perfect to be very competitive: we’re only human.
    Cheers TwC

  6. Simon says:

    Some random thoughts:
    The machine couldn’t beat an expert journalist, but I bet 95% of the general population would have produced something worse.
    No matter how powerful these computers are, they are still an order of magnitude or three less powerful than a human brain.
    When we are able to explain sentience in terms of the actions of neurons, then we will be able to confidently state whether or not a machine possesses it.
    People have been redrawing the boundaries of what makes them as a species special since Darwin. I expect this to continue until the last meat based Human dies in its enclosure 🙂

  7. Pingback: Weekly Musings – May 21, 2017 | Talent Vanguard
