Lessons on AI Risks: Insights from Five Giants

The progress of artificial intelligence (AI) has been relentless. OpenAI's latest model, o3, has recently broken records yet again, raising urgent questions about safety and the future of humanity.

Author

  • Simon Rogerson

    Professor Emeritus in Computer Ethics, De Montfort University

One place we can turn for help is to great thinkers from the past. They explored beyond the obvious in their worlds and often looked into the future, foreseeing a time when machines would have AI-like capabilities.

The English 19th-century mathematician and writer Ada Lovelace is sometimes recognised as the first computer programmer for her work with the polymath Charles Babbage on his "analytical engine". This was a general-purpose mechanical computer which was never completed, but whose design mirrored that of computers built decades later.

Charles Babbage's analytical machine. Wikimedia, CC BY

Her 1842 notes to Babbage, exploring the potential of his proposed device, foresaw something akin to AI in the future. "It might act upon other things besides number," she said, suggesting that such a machine could one day express relationships between pitched sounds in order to "compose elaborate and scientific pieces of music of any degree of complexity or extent".

This requires pattern recognition across a vast array of sound and music data - exactly what large language models are doing today by generating music from text prompts.

All the same, Lovelace was sceptical about the machine's thinking capabilities, arguing it would still depend on humans to originate whatever it came up with. Indeed, AI models today are still not really thinking, so much as assembling sentences based on mathematical probabilities derived from training on trillions of human words from the internet.

Lovelace pointed to such limitations to "guard against the possibility of exaggerated ideas that might arise as to the powers of the analytical engine". However, she also emphasised the "collateral influences" this machine could have beyond its bare output. Her example is that it could shed new light on science, but the wider implication is that such devices must never be underestimated.

The Turing test

Lovelace's argument also raised another implicit question: what happens if and when machines do become the originators, once sentience is no longer science fiction? This question inspired another English mathematician and thinker about a century later, Alan Turing.

Turing's 1949 "imitation game" , later known as the Turing test, sought to determine whether a computer could think in a way comparable to a human. It remained a key test of AI until it was considered surpassed by OpenAI's ChatGPT in 2022.

Turing actually thought this would happen sooner, predicting in his famous 1950 paper:

I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Alan Turing (1912-54). Wikimedia, CC BY

He wasn't especially pessimistic about what crossing this Rubicon would mean, arguing in the same paper in favour of trying to create a machine that simulated a child's mind rather than an adult's. He thought this could be "easily programmed", implying we had little to fear from such endeavours.

Equally, he wasn't blind to the potential for humans to end up subordinated by thinking machines. In a public lecture in 1951, he remarked: "If a machine can think, it might think more intelligently than we do, and then where should we be?"

Turing's biographer, Christof Teuscher, described him as an "Orwell of science". It's interesting to contrast his views with those of George Orwell himself, who, despite never pondering AI, did talk about the dangers of machines more generally in The Road to Wigan Pier (1937).

If you are prepared to indulge the substitution of "AI" for "machines", the passage offers interesting possibilities about what Orwell might have made of today's technological arms race:

The sensitive person's hostility to [AI] is in one sense unrealistic, because of the obvious fact that [AI] has come to stay. But as an attitude of mind there is a great deal to be said for it …

Verbally, no doubt, we would agree that [AI] is made for man and not man for [AI]; in practice any attempt to check the development of [AI] appears to us an attack on knowledge and therefore a kind of blasphemy. And even if the whole of humanity suddenly revolted against [AI] and decided to escape to a simpler way of life, the escape would still be immensely difficult …

Mechanise the world as fully as it might be mechanised, and whichever way you turn there will be some [AI] cutting you off from the chance of working - that is, of living.

Norbert Wiener's ethics

This brings us to the American scientist and mathematician Norbert Wiener, recognised as the founder of computer ethics. His seminal work, The Human Use of Human Beings (1950), aimed to "warn against the dangers" of exploiting machines' potential.

Wiener foresaw a time when machines would talk to one another and improve over time by keeping track of their past performance.

Comparing such a machine to the old folk tale of a person finding a djinnee (genie) in a bottle and knowing it was better left there, he wrote:

The machine, like the djinnee, which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us.

Decades later, the English physicist Stephen Hawking had similar concerns. He wrote in 2016 that AI could be:

The biggest event in the history of our civilisation, but it could also be the last - unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many.

In his final months, he wrote:

I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.

These five giants of the past prompt us to think very carefully about AI. Lovelace identified a human tendency to first overrate the potential of a new technology, only to over-correct later by underestimating the reality. Wiener warned against the "selfish exploitation" of untested technological potential, which has surely contributed to the numerous catastrophic IT failures we have seen over the years.

Clearly the same thing could now happen with a much more powerful technology. It's likely that these writers would have looked at recent developments and seen fools rushing in where angels fear to tread.

The Conversation

Simon Rogerson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation. This material from the originating organisation/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).