In nature, flying animals sense impending changes in their surroundings, including the onset of sudden turbulence, and quickly adjust to stay safe. Engineers who design aircraft would like to give their vehicles the same ability to predict incoming disturbances and respond appropriately. Indeed, disasters such as the fatal Singapore Airlines flight this past May, in which more than 100 passengers were injured after the plane encountered severe turbulence, could be avoided if aircraft had such automatic sensing and prediction capabilities combined with mechanisms to stabilize the vehicle.
Now a team of researchers from Caltech's Center for Autonomous Systems and Technologies (CAST) and Nvidia has taken an important step toward such capabilities. In a new paper in the journal npj Robotics, the team describes a control strategy they have developed for unmanned aerial vehicles, or UAVs, called FALCON (Fourier Adaptive Learning and CONtrol). The strategy uses reinforcement learning, a form of artificial intelligence, to adaptively learn how turbulent wind can change over time and then uses that knowledge to control a UAV based on what it is experiencing in real time.
"Spontaneous turbulence has major consequences for everything from civilian flights to drones. With climate change, extreme weather events that cause this type of turbulence are on the rise," says Mory Gharib (PhD '83), the Hans W. Liepmann Professor of Aeronautics and Medical Engineering, the Booth-Kresa Leadership Chair of CAST, and an author of the new paper. "Extreme turbulence also arises at the interface between two different shear flows-for example, when high-speed winds meet stagnation around a tall building. Therefore, UAVs in urban settings need to be able to compensate for such sudden changes. FALCON gives these vehicles a way to understand the turbulence that is coming and make necessary adjustments."
FALCON is not the first UAV control strategy to use reinforcement learning. However, previous strategies have not tried to learn the underlying model that truly represents how turbulent winds behave. Instead, they have been model-free methods, which focus on maximizing a reward function within a single environment; the resulting controllers cannot handle different settings, such as new wind conditions or vehicle configurations, without retraining.
"That's not so good in the physical world, where we know that situations can change drastically and quickly," says Anima Anandkumar , the Bren Professor of Computing and Mathematical Sciences at Caltech and an author of the new paper. "We need the AI to learn the underlying model of turbulence well so that it can take action based on how it thinks the wind will change."
"Advancements in fundamental AI will change the face of the aviation industry, enhancing safety, efficiency, and performance across a range of platforms, including passenger planes, UAVs, and carrier aircraft. These innovations promise to make air travel and operations smarter, safer, and more streamlined," says Kamyar Azizzadenesheli, a co-author from Nvidia.
As the FALCON acronym suggests, the strategy is based on Fourier methods, meaning that it relies on sinusoids, or periodic waves, to represent signals, in this case wind conditions. The waves provide a good approximation of standard wind motions while keeping the needed computation to a minimum. When extreme turbulence arises, the unsteadiness shows up as a noticeable shift in those frequencies.
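The principle can be illustrated with a short, self-contained sketch (not the authors' code; the sample rate, frequencies, and amplitudes below are made up for illustration): a wind-speed record is treated as a sum of sinusoids, and the onset of turbulence appears as a jump in the dominant frequency of its Fourier spectrum.

```python
import numpy as np

# Illustrative sketch only: represent a wind-speed record as a sum of sinusoids
# and watch the dominant frequency shift when unsteadiness sets in.
# All numbers below are assumptions, not values from the FALCON paper.
fs = 100.0                      # sample rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)    # 20 seconds of "measurements"

# Calm wind: a slow 0.5 Hz oscillation around a 5 m/s mean.
calm = 5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)

# Turbulent segment: the same base flow plus a stronger, faster component.
gusty = calm + 1.5 * np.sin(2 * np.pi * 4.0 * t)

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the strongest non-DC Fourier component."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(calm, fs))   # ~0.5 Hz: slow, steady motion
print(dominant_frequency(gusty, fs))  # ~4.0 Hz: the shift that flags turbulence
```

In a real controller the spectrum would be updated continuously from onboard sensor data, so a shift like the one above becomes a warning that a disturbance is arriving.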
"If you can learn how to predict those frequencies, then our method can give you some prediction of what is headed your way," says Gharib, who is also director of the Graduate Aerospace Laboratories at Caltech.
"Fourier methods work well here because turbulent waves are better modeled in terms of frequencies, with most of their energy lying in low frequencies," says co-lead author Sahin Lale (PhD '23), now a senior staff research engineer at Neural Propulsion Systems, Inc., who completed the work while at Caltech. "Using this prior knowledge simplifies both the learning and control of turbulent dynamics, even with a limited amount of information."
To test the effectiveness of the FALCON strategy, the researchers created an extremely challenging test setup in the John W. Lucas Wind Tunnel at Caltech. They used a fully equipped airfoil wing system as their representative UAV, outfitting it with pressure sensors and control surfaces that could make online adjustments to quantities such as the system's altitude and yaw. They then positioned a large cylinder with a movable attachment in the wind tunnel. When wind flowed over the cylinder, it created random, major fluctuations in the wind reaching the airfoil.
"Training a reinforcement learning algorithm in a physical turbulent environment presents all kinds of unique challenges," says Peter I. Renn (BS '19, PhD '23) co-lead author of the paper who is now a quantitative strategist at Virtu Financial. "We couldn't rely on perfectly clean signals or simplified flow simulations, and everything had to be done in real time."
After about nine minutes of learning, the FALCON-assisted system was able to stabilize itself in this extreme environment.