Author
- Akhil Bhardwaj, Associate Professor (Strategy and Organisation), School of Management, University of Bath

Legend has it that William Tell shot an apple from his young son's head. While there are many interpretations of the tale, from the perspective of the theory of technology, a few are especially salient.
First, Tell was an expert marksman. Second, he knew his bow was reliable but understood it was just a tool with no independent agency. Third, Tell chose the target.
What does all this have to do with artificial intelligence? Metaphorically, AI (think large language models or LLMs, such as ChatGPT) can be thought of as a bow, the user is the archer, and the apple represents the user's goal. Viewed this way, it's easier to work out how AI can be used effectively in the workplace.
To that end, it's helpful to consider what is known about the limitations of AI before working out where it can - and can't - help with efficiency and productivity.
First, LLMs tend to produce outputs that are not tethered to reality. A recent study showed that as much as 60% of their answers can be incorrect. Premium versions even deliver incorrect answers more confidently than their free counterparts.
Second, some LLMs are closed systems - that is, they do not update their "beliefs". In a world that is constantly changing, the static nature of such LLMs can be misleading. In this sense, they drift away from reality and may not be reliable.
What's more, there is some evidence that interactions with users lead to a degradation in performance. For example, researchers have found that LLMs become more covertly racist over time. Consequently, their output is not predictable.
Third, LLMs have no goals and are not capable of independently discovering the world. They are, at best, just tools to which a user can outsource their exploration of the world.
Finally, LLMs do not - to borrow a term from the 1960s sci-fi novel Stranger in a Strange Land - "grok" (understand) the world they are embedded in. They are far more like jabbering parrots that give the impression of being smart.
Think of how LLMs mine data for statistical associations between words, which they then use to mimic human speech. The AI does not know what those statistical associations mean. It does not know that the crowing of the rooster does not cause the sunrise, for example.
Of course, an LLM's ability to mimic speech is impressive. But the ability to mimic something does not mean it has the attributes of the original.
Lightening the workload
So how can you use AI more effectively? One thing it can be useful for is critiquing ideas. People often prefer not to hear criticism, and they feel a loss of face when their ideas are picked apart - especially when it happens in public.
But an LLM-generated critique is a private matter, and it can be useful. I did this for a recent essay and found the critique reasonable. Pre-testing ideas in this way can also help you avoid blind spots and obvious errors.
Second, you can use AI to crystallise your understanding of the world. What does this mean? Well, because AI does not understand the causes of events, asking it questions can force you to engage in sense-making. For example, I asked an LLM about whether my university (Bath) should widely adopt the use of AI.
While the LLM pointed to efficiency advantages, it clearly did not understand how resources are allocated. For example, administrative staff who are freed up cannot be redeployed to make high-level strategic decisions or teach courses. AI has no experience of the world, so it cannot grasp that.
Third, AI can help with mundane tasks such as editing and writing emails. But here, of course, lies a danger - users will use LLMs to write emails at one end and to summarise them at the other.
You should consider when a clumsily written personal email might be a better option (especially if you need to persuade someone about something). Authenticity is likely to start counting more as the use of LLMs becomes more widespread. A personal email that uses the right language and appeals to shared values is more likely to resonate.
Fourth, AI is best used for low-stakes tasks where there is no liability. For example, it could be used to summarise a lengthy customer review, answer customer questions that are not related to policy or finance, generate social media posts, or help with employee inductions.
Consider the opposite case. In 2022, a chatbot used by Air Canada misinformed a passenger about the airline's bereavement fares - and the passenger sued. A tribunal held the airline liable for the bad advice. So always think about liability issues.
Fans of AI often advocate using it for everything under the sun. Yet frequently, AI comes across as a solution looking for a problem. The trick is to consider very carefully whether there is a case for using AI and what the costs involved might be.
Chances are, the more creative or unusual your task is, and the more understanding of how the world works it requires, the less likely AI is to be useful. In fact, outsourcing creative work to AI can take away some of the "magic". AI can mimic humans - but only humans "grok" what it is to be human.
Akhil Bhardwaj does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.