U of T AI Tech Captures Photons in Motion

Close your eyes and picture the iconic "bullet time" scene from The Matrix, the one where Neo, played by Keanu Reeves, dodges bullets in slow motion. Now imagine witnessing the same effect, but instead of speeding bullets, you're watching something that moves one million times faster: light itself.

Computer scientists from the University of Toronto have built an advanced camera setup that can visualize light in motion from any perspective, opening avenues for further inquiry into new types of 3D sensing techniques.

The researchers developed a sophisticated AI algorithm that can simulate what an ultra-fast scene - a pulse of light speeding through a pop bottle or bouncing off a mirror - would look like from any vantage point.


David Lindell, an assistant professor in the department of computer science in the Faculty of Arts & Science, says the feat requires the ability to generate videos where the camera appears to "fly" alongside the very photons of light as they travel.

"Our technology can capture and visualize the actual propagation of light with the same dramatic, slowed-down detail," says Lindell. "We get a glimpse of the world at speed-of-light timescales that are normally invisible."

The researchers believe the approach, which was recently presented at the 2024 European Conference on Computer Vision, can unlock new capabilities in several important research areas, including: advanced sensing capabilities such as non-line-of-sight imaging, a method that allows viewers to "see" around corners or behind obstacles using multiple bounces of light; imaging through scattering media, such as fog, smoke, biological tissue or turbid water; and 3D reconstruction, where understanding the behaviour of light that scatters multiple times is critical.

In addition to Lindell, the research team included U of T computer science PhD student Anagh Malik, fourth-year engineering science undergraduate Noah Juravsky and Professor Kyros Kutulakos, as well as Stanford University Associate Professor Gordon Wetzstein and PhD student Ryan Po.

The researchers' key innovation lies in the AI algorithm they developed to visualize ultrafast videos from any viewpoint - a challenge known in computer vision as "novel view synthesis."
