Everyone is worried about Artificial Intelligence. From writers in Hollywood to computer programmers, people are asking what recent advances in Generative AI will mean for the future of work, our society and the wider world. Is there nothing machines will not be able to do?
By Professor Carl-Benedikt Frey, Dieter Schwarz Associate Professor of AI & Work, Oxford Internet Institute & Director, Future of Work Programme, Oxford Martin School, and
Professor Michael Osborne, Professor of Machine Learning, Department of Engineering Science and co-Director, Oxford Martin AI Governance Initiative.
We have spent a decade researching the impacts of AI. Ten years ago, we wrote a paper estimating that some 47% of US jobs could in principle be automated, as AI and mobile robotics expanded the scope of tasks that computers can perform.
Our estimates were based on the premise that, while computers might eventually be able to do most tasks, humans would continue to hold the comparative advantage in three key domains: creativity, complex social interactions, and interaction with unstructured environments (such as your home).
However, it is important to acknowledge that there has been meaningful progress in these domains, with Large Language Models (LLMs) such as GPT-4 capable of producing human-like text responses to a very wide range of queries. In the age of Generative AI, a machine might even write your love letters.
Yet there are several reasons for caution. First, if GPT-4 does write your love letters, your in-person dates will become even more important. The bottom line is that, as virtual social interactions are increasingly aided by algorithms, the premium on in-person interactions, which cannot be replicated by machines, will become even greater.
Second, although AI can produce a letter in the style of Shakespeare, this is only because Shakespeare's works already exist and an AI can be trained on them. AI is generally good at tasks with clear data and a clear goal, such as maximising the score in a video game, or maximising similarity to the language of Shakespeare. But if you want to create something genuinely new, rather than rehashing existing ideas, for what should you optimise? Answering the question of the true goal is where much human creativity resides.
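To make the distinction concrete, here is a minimal sketch in Python (the scoring function and texts are entirely hypothetical, chosen only for illustration): similarity to an existing corpus is easy to write down as an objective a machine can maximise, whereas "genuinely new and worthwhile" is not.

```python
# Illustrative sketch: optimising similarity to existing work is easy to
# specify; optimising for genuine novelty is not. All names are hypothetical.

from collections import Counter

def similarity_to_corpus(text: str, corpus: str) -> float:
    """Crude proxy objective: fraction of words shared with an existing corpus.
    A machine can climb this score; defining 'genuinely new and good' as a
    comparable number is the hard, human part."""
    text_words = Counter(text.lower().split())
    corpus_words = Counter(corpus.lower().split())
    overlap = sum((text_words & corpus_words).values())
    return overlap / max(1, sum(text_words.values()))

shakespeare = "shall i compare thee to a summers day thou art more lovely"
candidate = "shall i compare thee to a winters night"

print(similarity_to_corpus(candidate, shakespeare))  # well-defined: 0.75

# By contrast, there is no obvious function to maximise here:
# def novelty_and_worth(text): ...  # what should this even measure?
```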
Third, as we noted in our 2013 paper, there are many jobs that can be automated, but Generative AI - a subfield of AI - is not yet an automation technology. It needs prompting from a human, and it needs a human to select, fact-check and edit the output.
Finally, Generative AI generates content that mirrors the quality of its training data: garbage in, garbage out. And these algorithms require training on expansive datasets, such as large segments of the internet, as opposed to smaller, refined datasets curated by experts. Consequently, LLMs are inclined to produce text that aligns with the average, rather than the extraordinary, portions of the internet. Average input yields average output.
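A toy analogue of this point, with entirely made-up numbers: a model fitted by least squares to data that is mostly mediocre will, by construction, predict something close to the average of that data rather than its best examples.

```python
# Toy analogue of 'average input yields average output': the squared-error-
# minimising constant prediction over a dataset is its mean, so a corpus that
# is mostly average pulls output towards the average. All data are synthetic.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'quality scores' for web text: mostly average, a few gems.
web_scores = np.concatenate([
    rng.normal(loc=5.0, scale=1.0, size=950),  # average content
    rng.normal(loc=9.0, scale=0.5, size=50),   # rare expert content
])

prediction = web_scores.mean()  # the least-squares constant prediction
print(f"model's output quality: {prediction:.2f}")        # ~5.2, near average
print(f"best content in corpus: {web_scores.max():.2f}")  # ~10, rarely matched
```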
What implications does this hold for the future of employment? For one thing, the latest generation of AI will continue to require human intervention. What is more, workers with less specialised skills stand to gain disproportionately, as they can now generate content that meets the 'average' benchmark.
Could the hurdles outlined above be overcome soon, paving the way for widespread automation of creative and social tasks as well? In the absence of a major breakthrough, we think it is unlikely. First, the data already ingested by LLMs is likely to comprise a considerable fraction of the internet, making it unlikely that training data can be significantly expanded to power further progress. Furthermore, there are legitimate grounds to expect a surge of inferior AI-crafted content on the Web, progressively degrading its quality as a source of training data.
Second, while we have become accustomed to Moore's Law - the observation that the number of transistors on an integrated circuit doubles approximately every two years - many anticipate this trend will lose momentum around 2025, owing to physical constraints.
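To give a sense of what slowing down would forgo, the arithmetic of the law itself is simple: a doubling every two years compounds to roughly a 32-fold increase per decade. A quick sketch:

```python
# Back-of-envelope Moore's Law arithmetic: transistor counts double roughly
# every two years, so growth compounds as 2**(years / doubling_period).

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years` under periodic doubling."""
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 32.0   -- one decade of Moore's Law
print(growth_factor(20))  # 1024.0 -- two decades
```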
Third, it is estimated that energy accounted for a large fraction of GPT-4's $100 million training cost - even before the price of energy went up. With the climate challenge looming large, there are questions over whether this approach can continue.
What is needed, in other words, is AI that is capable of learning from smaller, curated datasets, drawing upon expert samples, rather than the average population. But when such innovation will come is notoriously hard to predict. What we can do is create better incentives for data-saving innovation.
Consider this: around the turn of the 20th century, there was a genuine contest - would electric vehicles or the combustion engine prevail in the burgeoning car industry? At first, both contenders were on a par, but massive oil discoveries tipped the balance in favour of the combustion engine. Now imagine that we had levied a tax on oil back then: we might have shifted the balance in favour of the electric car, sparing us plenty of carbon emissions. In similar fashion, a tax on data would create incentives for innovation to make AI less data-intensive.
Going forward, as we have argued elsewhere, many jobs will be automated, but not because of the latest wave of Generative AI. In the absence of major breakthroughs, we expect the bottlenecks we outlined in our 2013 paper to continue to constrain automation possibilities for the foreseeable future.