In the last 18 months, a painting created by artificial intelligence (AI) was sold at Christie's for €400,000, a writing bot at tech magazine TNW out-performed its journalist colleagues, and Google announced that it had achieved quantum supremacy in a computation that would have taken a classical computer 10,000 years. AI seems to be coming of age.
In spite of all the hype, it is not quite as it appears, and the above examples actually highlight some of the challenges with the current state of AI. The painting sold at Christie's, called Portrait of Edmond Belamy, was made by an AI algorithm, but the data input and the final selection were made by humans from the French art collective Obvious. They saw the role of AI in much the same way as other artists view a paintbrush: a tool to produce the work.
Journalists at TNW, meanwhile, pre-programmed sets of phrases into a bot called Satoshi Nakaboto, which collected data about Bitcoin and turned it into short articles. Unlike its human counterparts, Satoshi did not need to pause to eat or sleep.
The machine's performance was measured in views, but view counts are not necessarily a good gauge of success. Journalists spend time crafting insightful long-form articles that may attract fewer views but offer a much deeper level of engagement.
Google’s claims of quantum supremacy have also been disputed, with IBM stating that Google had exaggerated the speed claims, and that the calculation could have been achieved conventionally in a couple of days.
So, AI is not yet doing the job it promises, in spite of significant investment in the technology. Instead, the vast sums of money spent on development have led to a 'fake it 'til you make it' approach. Organisations such as Facebook, SpinVox, Clara and Expensify have all been found playing Wizard of Oz, employing people to carry out tasks that their AI was supposed to be performing.
Putting aside the deception, though, maybe it is not such a bad thing that humans remain part of the process. Amazon, for example, had to abandon an AI recruitment tool because it penalised female candidates, a consequence of historical prejudice in the data it was trained on. There is a risk that this kind of data bias could also apply to the employee benefits sector.
AI offers the potential to do amazing things, but we are not there yet. And while we are not that far from aeroplanes being able to fly themselves, most of us would still prefer to board one with a pilot.
We want the reassurance of a highly skilled individual on hand in case something goes wrong, so perhaps we should apply the pilot analogy to other areas of AI. Where we are making important decisions that impact on both individual and societal futures, we need skilled, knowledgeable people to ensure the machines make the right choices.
Mark Brill is senior lecturer in future media at Birmingham City University