Six Times Imperial Pushed the Boundaries of AI in 2024

This is how we continued to push the boundaries of artificial intelligence (AI), breaking new ground in fields ranging from healthcare to climate science.

Here are six ways AI research at Imperial made an impact in 2024.

AI-driven imaging transformed lung cancer diagnosis

Imperial researchers pioneered a technique that combined medical CT scans with AI to improve the diagnosis of lung cancer.

This non-invasive method classified cancer types and predicted patient outcomes, eliminating the need for traditional biopsies, which can be costly and uncomfortable and can delay treatment.

Using medical data from existing patients, the Imperial team developed an AI-powered deep learning assessment tool, which they call tissue-metabolomic-radiomic-CT (TMR-CT), to analyse tumour characteristics directly from imaging.

The approach, described in a study published in npj Precision Oncology, enabled earlier detection and more precise treatment decisions.
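The published TMR-CT tool combines imaging with tissue-metabolomic and radiomic features; as a rough, hedged sketch of the imaging side only, the code below shows how a small convolutional network might map a CT patch to cancer-subtype predictions. The architecture, input size and class count are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny CNN that maps a 2D CT patch to cancer-subtype
# logits. The real TMR-CT tool also uses tissue-metabolomic and radiomic
# features and a different architecture; those details are not shown here.

class PatchClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):   # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of CT patches, shape (N, 1, H, W), intensities pre-normalised
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
dummy_patch = torch.randn(4, 1, 64, 64)        # stand-in for real CT data
print(model(dummy_patch).shape)                # -> torch.Size([4, 2])
```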

AI boosted weather forecast accuracy for regional predictions

Data scientists from Imperial used local atmospheric data and data assimilation to enhance the accuracy of region-specific weather forecasting.

By integrating real-time observational data with machine learning models, they improved the accuracy of the existing U-STN global forecasting model to better reflect the UK's unique climate features.
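At its core, data assimilation nudges a model forecast toward real-time observations, weighting each source by its estimated uncertainty. The sketch below is a generic, single-variable Kalman-style update with invented numbers, not the study's U-STN pipeline, but it shows the blending step described above.

```python
# Generic data-assimilation step (not the study's U-STN pipeline): blend a
# model forecast with an observation, weighting each by its error variance.

def assimilate(forecast, obs, forecast_var, obs_var):
    """Kalman-style update for a single forecast variable."""
    gain = forecast_var / (forecast_var + obs_var)   # trust obs more when the model is uncertain
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Invented example: the model predicts 12.0 C for a UK grid cell (variance 4.0);
# a local weather station reports 10.5 C (variance 1.0).
temp, var = assimilate(forecast=12.0, obs=10.5, forecast_var=4.0, obs_var=1.0)
print(f"analysis: {temp:.2f} C (variance {var:.2f})")  # pulled strongly toward the station
```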

The study, presented at the NeurIPS 2023 conference, demonstrated how selective data integration could significantly enhance forecasting accuracy and reliability.

New research revealed whether copyrighted material was used in AI training data

New research by privacy experts from Imperial's Computational Privacy Group allowed content creators to check whether their work has been used to train AI models such as Large Language Models (LLMs).

These LLMs require massive amounts of data – be it images or text – which developers have previously obtained on shaky legal grounds, sometimes by ignoring licence and copyright restrictions.

Inspired by tried-and-true traditional copyright traps, the researchers created unique fictitious sentences that could be hidden within content, invisible to online readers, and then observed the LLM's behaviour on those sentences to determine whether it had been trained on that content.
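As a hedged illustration of the general idea rather than the group's exact methodology, one way to probe a model is to compare how "surprised" it is by the hidden trap sentences versus comparable unseen control sentences: markedly lower perplexity on the traps, across many pairs, suggests they appeared in the training data. The model name and sentences below are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative membership check, not the Computational Privacy Group's exact
# method: if a model assigns much lower perplexity to hidden "trap" sentences
# than to comparable unseen controls, that is evidence it saw the traps during
# training. The model name and sentences are placeholders.

model_name = "gpt2"                      # stand-in for the LLM under scrutiny
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def perplexity(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token negative log-likelihood
    return float(torch.exp(loss))

trap = "The violet lighthouse of Karmothe hummed quietly on the ninth of Octember."
control = "A silver observatory in Brintavia whispered softly on the third of Julvember."

print("trap perplexity:   ", perplexity(trap))
print("control perplexity:", perplexity(control))
# A large, consistent gap across many trap/control pairs would be the signal.
```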

It was a step toward ensuring AI is built and used in a responsible, transparent way and that content creators are compensated for their work.

AI tackled quantum chemistry challenges

In a collaboration between Imperial and Google DeepMind, scientists implemented deep neural networks – an AI method inspired by brain-like systems – to address a complex quantum chemistry problem.

Their study, published in Science in August, explored the use of AI to understand the behaviour of molecules transitioning to and from their 'excited states'.

'Excited states' occur when molecules are energised, such as through high heat or pressure, causing their electrons to rearrange into new configurations. These phenomena are fundamental to numerous chemical processes but notoriously difficult to model.


Lead researcher Dr David Pfau explained: "Representing the state of a quantum system is extremely challenging... This is exactly where we thought deep neural networks could help."
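The paper's approach builds on far more sophisticated neural-network wavefunctions, but a toy example can convey the underlying idea: a neural network parameterises a trial wavefunction, the expected energy is minimised, and excited states are obtained by keeping new states orthogonal to those already found. Everything below, including the choice of a 1D harmonic oscillator, is an illustrative assumption rather than the authors' method.

```python
import torch
import torch.nn as nn

# Toy illustration only (not the paper's method): a small network parameterises
# trial wavefunctions for a 1D harmonic oscillator (hbar = m = omega = 1).
# Energies are minimised on a grid; excited states are found by penalising
# overlap with states already computed. Exact energies are 0.5 and 1.5.

x = torch.linspace(-6.0, 6.0, 400).unsqueeze(1)
dx = (x[1] - x[0]).item()
V = 0.5 * x**2                                   # harmonic potential

def make_net():
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))

def normalised(net):
    psi = net(x)
    return psi / torch.sqrt(torch.sum(psi**2) * dx)

def energy(psi):
    """Energy expectation <psi|H|psi> for a normalised psi, via finite differences."""
    lap = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    kinetic = -0.5 * torch.sum(psi[1:-1] * lap) * dx
    potential = torch.sum(V * psi**2) * dx
    return kinetic + potential

found = []
for level in range(2):                           # ground state, then first excited state
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(4000):
        opt.zero_grad()
        psi = normalised(net)
        loss = energy(psi)
        for prev in found:                       # keep new state orthogonal to earlier ones
            loss = loss + 10.0 * (torch.sum(psi * prev) * dx) ** 2
        loss.backward()
        opt.step()
    found.append(normalised(net).detach())
    print(f"state {level}: E = {energy(found[-1]).item():.3f}  (exact: {level + 0.5})")
```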

AI-powered 'sound sieve' wins undergraduate innovation competition

Team Marigold – made up of undergraduates Leo Kremer (Dyson School of Design Engineering), Maria Guerrero Jimenez, and Mele Gadzama (Department of Physics) – received first place for their AI-enhanced sound filtering tool.

Their winning entry in the Faculty of Natural Sciences Make-A-Difference (FoNS-MAD) competition is an AI-driven Chrome extension that acts as a sound sieve.

Designed for people with misophonia, a condition that causes severe emotional responses to trigger sounds, the tool filters specific noises out of online media. The project, inspired by a team member's sister, refined its machine learning model through user testing and feedback from support groups.
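Team Marigold has not published implementation details, but a "sound sieve" of this kind could broadly follow the pattern sketched below: split the audio into short frames, score each frame with a trigger-sound classifier, and attenuate the frames that are flagged. The classifier here is a stand-in stub, and the frame size, threshold and attenuation are assumptions.

```python
import numpy as np

# Generic sketch of a "sound sieve" (not Team Marigold's implementation):
# score short audio frames with a trigger-sound classifier and attenuate the
# frames that are flagged. The classifier below is a stand-in stub.

SAMPLE_RATE = 16_000
FRAME = 1024                                   # ~64 ms frames

def trigger_probability(frame: np.ndarray) -> float:
    """Placeholder for a trained model; here, the high-frequency energy ratio."""
    spectrum = np.abs(np.fft.rfft(frame))
    high = spectrum[len(spectrum) // 2:].sum()
    return float(high / (spectrum.sum() + 1e-9))

def sieve(audio: np.ndarray, threshold: float = 0.5, attenuation: float = 0.05) -> np.ndarray:
    out = audio.copy()
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        if trigger_probability(audio[start:start + FRAME]) > threshold:
            out[start:start + FRAME] *= attenuation    # duck the flagged frame
    return out

# Synthetic demo: low-frequency "speech" with a burst of high-frequency "trigger" noise.
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
audio = 0.3 * np.sin(2 * np.pi * 220 * t)
burst = slice(16 * FRAME, 20 * FRAME)
audio[burst] += 0.6 * np.sin(2 * np.pi * 6000 * t[burst])
cleaned = sieve(audio)
print("peak in burst before:", round(float(np.abs(audio[burst]).max()), 2),
      "after:", round(float(np.abs(cleaned[burst]).max()), 2))
```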

"There's nothing like this out there. It will allow people with misophonia to engage with online media without constant struggles," the team said.

FoNS-MAD annually challenges students to develop impactful, low-cost technologies, providing eight weeks of funding, lab access and mentorship to turn ideas into solutions.

New AI stroke brain scan readings are twice as accurate as current method

Imperial researchers, in collaboration with the Universities of Munich and Edinburgh, have developed AI software that analyses brain scans to determine the onset time of strokes. Identifying the precise start time is critical as treatments vary in the first few hours post-stroke.

This information enables doctors to make decisions that maximise the patient's chance of reversal, ensuring no secondary damage is caused.

The model was trained on hundreds of medical scans where the stroke time was known. By also extracting additional information from the scans, such as texture, the AI can identify the start time of a stroke 50% more successfully than doctors using the standard visual technique, ultimately enabling faster and more accurate treatment of patients in emergencies.
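The published model and its features are not detailed here; the sketch below only illustrates the general recipe the paragraph describes, turning each scan region into a few simple texture statistics and regressing stroke onset time against them, using synthetic data and standard scikit-learn components as stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative recipe only (not the published model): turn each scan region
# into a handful of texture statistics and regress stroke onset time (hours)
# against them. The "scans" here are synthetic, purely to make this runnable.

rng = np.random.default_rng(0)

def texture_features(region: np.ndarray) -> np.ndarray:
    """Very simple texture descriptors for a 2D image region."""
    gy, gx = np.gradient(region.astype(float))
    return np.array([
        region.mean(), region.std(),
        np.percentile(region, 10), np.percentile(region, 90),
        np.abs(gx).mean() + np.abs(gy).mean(),     # edge/texture strength
    ])

# Synthetic dataset: onset time loosely drives the region's intensity and texture.
onset_hours = rng.uniform(0, 12, size=300)
regions = [rng.normal(100 + 2 * h, 10 - 0.5 * h, size=(32, 32)) for h in onset_hours]
X = np.stack([texture_features(r) for r in regions])

X_train, X_test, y_train, y_test = train_test_split(X, onset_hours, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("mean absolute error (hours):",
      round(float(np.abs(model.predict(X_test) - y_test).mean()), 2))
```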
