The world is facing a "silver tsunami" - an unprecedented ageing of the global workforce. By 2030, more than half of the labour force in many EU countries will be aged 50 or above. Similar trends are emerging across Australia, the US and other developed and developing economies.
Author
- Sajia Ferdous
Lecturer in Organisational Behaviour, Queen's Business School, Queen's University Belfast
Far from being a burden or representing a crisis, the ageing workforce is a valuable resource - offering a so-called "silver dividend". Older workers often bring experience, stability and institutional memory. Yet, in the rush to embrace artificial intelligence (AI), older workers can be left behind.
One common misconception is that older people are reluctant to adopt technology or cannot catch up. This is far from the truth: it oversimplifies the complexity of their abilities, participation and interests in digital environments.
There are much deeper issues and structural barriers at play. These include gaps in access and opportunity, such as a lack of targeted training. Right now, AI training tends to be aimed at early or mid-career workers.
There are also confidence gaps among older people stemming from workplace cultures that can feel exclusionary. Data shows that older professionals are more hesitant to use AI - possibly due to fast-paced work environments that reward speed over judgment or experience.
There can also be issues with the design of tech systems, which are built primarily by and for younger users. Voice assistants often fail to recognise older voices, and fintech apps assume users are comfortable linking multiple accounts or navigating complex menus. This can alienate workers with legitimate security concerns or cognitive challenges.
And all of these issues are exacerbated by socio-demographic factors. Older people who live alone or in rural areas, have lower levels of education or work in manual jobs are significantly less likely to use AI.
Ageism has long shaped hiring, promotion and career development. Although age has become a protected characteristic in UK law, ageist norms and practices persist in many not-so-subtle forms.
Ageism can affect both young and old, but when it comes to technology, the impact is overwhelmingly skewed against older people.
So-called algorithmic ageism in AI systems - exclusion based on automation rather than human decision-making - often exacerbates ageist biases.
Hiring algorithms often end up favouring younger candidates. Graduation dates, employment gaps and even the language used in CVs can become proxies for age, filtering out experienced candidates without any human review. Digital interfaces that assume tech fluency are another example of exclusionary design.
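To see how this can happen without anyone ever looking at a candidate's age, consider a minimal, purely illustrative sketch (not any real recruiter's system; the names, cutoff and rule are hypothetical). The screening rule below never reads age, yet the graduation-year threshold quietly acts as an age proxy.

```python
from dataclasses import dataclass

# Hypothetical, simplified CV-screening rule for illustration only.
# It never reads a candidate's age, yet graduation year acts as an age proxy.

@dataclass
class CV:
    name: str
    graduation_year: int
    employment_gap_years: int

def passes_screen(cv: CV, min_grad_year: int = 2010, max_gap: int = 2) -> bool:
    """Reject CVs with 'old' graduation dates or long employment gaps."""
    return cv.graduation_year >= min_grad_year and cv.employment_gap_years <= max_gap

candidates = [
    CV("A", graduation_year=1995, employment_gap_years=0),  # roughly 30 years of experience
    CV("B", graduation_year=2018, employment_gap_years=0),
]

shortlisted = [cv.name for cv in candidates if passes_screen(cv)]
print(shortlisted)  # ['B'] - the more experienced candidate is filtered out, unseen by any human
```

In this toy example the rule looks "neutral", but the most experienced candidate never reaches a human reviewer - which is exactly the kind of automated exclusion described above.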
Tech industry workers are overwhelmingly young. Homogeneous thinking breeds blind spots, so products work brilliantly for younger people but can end up alienating other age groups.
This creates an artificial "grey digital divide", shaped less by ability and more by gaps in support, training and inclusion. If older workers are not integrated into the AI revolution, there is a risk of creating a divided workforce. One part will be confident with tech, data-driven and AI-enabled, while the other will remain isolated, underutilised and potentially displaced.
An 'age-neutral' approach
It's vital to move beyond the idea of being "age-inclusive", which frames older people as "others" who need special adjustments. Instead, the goal should be age-neutral designs.
AI designers should recognise that while age is relevant in specific contexts - such as restricted content like pornography - it should not be used as a proxy in training data, where it can lead to bias in the algorithm. In this way, design would be age-neutral rather than ageless.
Designers should also ensure that platforms are accessible for users of all ages.
The stakes are high. This is not just about economics, but about fairness, sustainability and wellbeing.
At the policy level in the UK, there is still a huge void. Last year, House of Commons research highlighted that workforce strategies rarely distinguish the specific digital and technological training needs of older workers - underscoring how this group is treated as an afterthought.
A few forward-thinking companies have backed mid- and late-career training programmes. In Singapore, the government's SkillsFuture programme has adopted a more agile, age-flexible approach. However, these are still isolated examples.
Retraining cannot be generic. Beyond basic digital literacy courses, older people need targeted, job-specific advanced training. The psychological framing of retraining is also critical: older people need to retrain or reskill not just for career or personal growth, but also to be able to participate more fully in the workforce.
It's also key for reducing pressure on social welfare systems and mitigating skill shortages. What's more, involving older workers in this way supports the transfer of knowledge between generations, which should benefit everyone in the economy.
Yet, currently, the onus falls on older workers themselves rather than on organisations and governments.
AI, particularly the generative models that can create text, images and other media, is known for producing outputs that appear plausible but are sometimes incorrect or misleading. The people best placed to identify these errors are those with deep domain knowledge - something that is built over decades of experience.
This is not a counterargument to digital transformation or adoption of AI. Rather, it highlights that integrating older people into digital designs, training and access should be a strategic imperative. AI cannot replace human judgment yet - it should be designed to augment it.
If companies, policies and societies exclude older workers from AI transformation processes, they are essentially removing the critical layer of human oversight that keeps AI outputs reliable, ethical and safe to use. An age-neutral approach will be key to addressing this.
Piecemeal efforts and slow responses could cause the irreversible loss of a generation of experience, talent and expertise. What workers and businesses need now are systems, policies and tools that are, from the outset, usable and accessible for people of all ages.
Sajia Ferdous does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.