Click here. Open this attachment. Log into your account. People get dozens of emails daily asking them to perform these kinds of tasks, but all it takes is one from a malicious actor to risk exposing personal information. These phishing attempts can lead to identity theft, fraud and installation of dangerous computer viruses. Beyond those personal risks, however, phishing emails can also be used to target government agencies in a way that could divulge classified information and put national security at risk.
"People everywhere, including the federal government, click on things they shouldn't click on, which gives bad actors access to things they shouldn't have access to," says Holly Tucker, Mellon Foundation Chair in the Humanities and professor of French. This spring, she led an undergraduate immersion program to investigate those threats, involving seven students who conducted original research on phishing attempts among their peers, including examining the increasing use of generative artificial intelligence to conduct more sophisticated attacks.
The fellowship is associated with the Institute of National Security, which will launch inside the School of Engineering this September, led by retired Gen. Paul Nakasone, the former commander of U.S. Cyber Command and director of the National Security Agency. The purpose of the institute is to address emerging security concerns in a 21st-century environment, harnessing Vanderbilt's expert faculty, strategic partnerships and interdisciplinary approach. That mix of talent and strategy is intended to make Vanderbilt the premier institution for accelerating innovation, advising officials and preparing students to be the next generation of national security leaders.
"The idea behind the institute is that we listen to our national security partners and what they want us to engage in, and then we put together academic special forces to respond," Tucker says. Those "academic special forces" go beyond computer science or international studies to include expertise in all aspects of science, social science and the humanities. "The problems we are facing as a nation require all kinds of different approaches, and Vanderbilt is unique in its small footprint and massive brain. We're about to get different voices to the table to ask different questions."
The immersion program was a "win-win" that allowed students and faculty to conduct research directly applicable to keeping students on campus safe, while at the same time providing behavioral insights that could help the NSA and other agencies better protect the nation. Students not only learned how to conduct research, but also met with an impressive roster of top national security officials, including FBI Director Christopher Wray; Ambassador-at-Large for Cyberspace and Digital Policy Nate Fick; and retired Air Force Lt. Gen. Charles "Tuna" Moore, former deputy director of U.S. Cyber Command, who is now a distinguished visiting professor at Vanderbilt.
"Right now, cybersecurity folks say the training on this stuff is abysmal," Tucker says. "We can teach people to be careful of a URL or look for misspellings, but so much of this stuff is very subtle. What do we need to know about the practice of phishing discernment so we can better educate folks?"
Tucker's expertise is 17th- and 18th-century French history, and for the past several years, she has integrated generative AI technology into a class in which students roleplay personas in the French Revolution. Despite her interests in technology and historic military conflicts, she was surprised last year when members of the nascent institute asked her to moderate a panel and a fireside chat with Nakasone for a summit on national security and emerging threats. "I thought they were crazy. Why were they asking me?" she remembers.
Upon reflection, however, she realized she did have a lot to say about new technology and international conflict. After all, the development of gunpowder and rifle technology in the 16th century dramatically changed battlefield tactics, and easily concealable handguns became a national security risk in the 17th century. "What's fascinating to me is seeing the way societies respond to emerging technologies, not just on an individual level, but also how they affect the global balance of power," she says.
When she was invited to join the institute, she didn't hesitate to lead the immersion program, along with a diverse team that included Vanderbilt senior cybersecurity analyst Max Lieb, AI researcher Jess Phelan, MS'23, and Ph.D. students in computer science and philosophy Carlos Olea, MS'24, and Cameron Pattison, Class of 2028. Her humanities background, Tucker says, has prepared her well to explore the subject. "Phishing emails are not always obvious," she says. "They often play on emotions or create a sense of urgency. Humanities are all about the practice of subtle interpretation, when something isn't so black and white."
The seven students selected for the program did all of the programming and study design, learning to jump the hurdles of conducting ethical research on human subjects. Along the way, they had fun drumming up excitement among other Vanderbilt students for a game called "Get Phished," setting up fishbowls full of Swedish Fish and sending out a student in a giant shark costume, eventually hooking some 500 participants.
For the game, students were shown "real" emails alongside the types of phishing emails that commonly target undergraduates: offering employment, asking them to click on attachments or requesting personal information. Some emails of each type were written by humans, while others were created with a Gen AI model similar to ChatGPT. The researchers found that students were fooled by the phishing emails as often as 1 in 5 times. Consistent with past research, they found that people who were more confident in their abilities were the least accurate in discerning phishing attempts. They also found that students were significantly less able to identify AI-generated phishing emails, failing 29 percent of the time, compared with 17 percent for human-generated ones.
The team is in the process of publishing the research and is developing a larger study in cooperation with the Peabody School's LIVE Initiative (Learning Incubator: a Vanderbilt Endeavor), which uses AI and other technology to aid learning. The eventual goal is to deploy a learning program on Vanderbilt's email servers. The program has been invaluable in helping students better understand national security issues and see research in action.
"I appreciated learning about the seriousness of phishing attacks and how they can lead to disastrous cybersecurity attacks," says computer science major Elise Farley, BA'24, who is moving to Washington, D.C., to begin a career in technology consulting. Computer science student Kate Fischer, Class of 2026, has long been interested in a career in national security, and she found the extent of threats using generative AI particularly eye-opening: "I was able to gain many insights into the changing methods of global attacks that are facilitated by rapidly developing technologies."
Tucker is already putting together the cohort for the next immersion program in the fall: an undergraduate class on Gen AI and national security that will culminate in a student-led project. She is also obtaining security clearances for an upcoming sabbatical, during which she will be stationed at Maryland's Fort Meade for four to six months to investigate how to apply behavioral science research to cyber defense. After years of writing distinguished books on French history, she is happily surprised by the new turn her career has taken.
"My entire career at Vanderbilt has been one of being allowed to think outside the box and disciplinary limitations," Tucker says, adding that in a way it mirrors the scientific pioneers she's studied in the past. "In the 17th century, they were called natural philosophers, and they were doing sophisticated and interdisciplinary things to push knowledge forward," she says. "Someday people will look back on the 21st century and see that we were no different."