A new collaboration between researchers at UConn and the International University of Rabat (UIR) is tackling issues of language and AI

"Visualizing AI": An artist's illustration of AI large language models which generate text. (Google DeepMind, Unsplash)
Depending on whom you ask, artificial intelligence (AI) may be revered, feared, or just plain weird. To some, AI represents the dawn of a new golden age of technology and humanity. To others, so-called AI is not really that "intelligent" at all.
In order to have these disagreements productively, argues UConn Humanities Institute Director Anna Mae Duane, we first have to clear something up: are we even talking about the same thing?
"There's an issue of disciplinary language — when we're talking about AI, even when we're using the same words in the same language, we don't mean the same thing at all," says Duane. "What a philosopher means by 'intelligence' and what a computer programmer means by 'intelligence,' or 'learning' or 'training' or 'language,' are all very different things."
Duane has had a career-long penchant for collaborating with other scholars, across disciplines and continents. Under her leadership, the UCHI's latest venture is "Reading Between the Lines: An Interdisciplinary Glossary for Human-Centered AI," a partnership with the International University of Rabat (UIR) in Morocco.
This partnership is supported by a $25,000 grant from the Consortium of Humanities Centers and Institutes (CHCI).
It will include a series of podcasts with interdisciplinary experts weighing in on these critical AI conversations, culminating in a cross-campus, in-person symposium in fall 2025.
'L' is for Large Language Model
What we refer to as "AI" is usually a large language model (LLM), which works much as the name suggests: by absorbing vast amounts of linguistic data and learning to synthesize new text from that data. Examples of LLMs include ChatGPT and the built-in AI features of many apps.
But exactly what language are these models being trained on? Predominantly English, notes Duane.
This can result in issues when AI is used for non-English contexts. For example, Duane recalls a colleague at UIR who is developing an application to help seniors in need of arthritis care.
"What became clear was that because the AI she was using was trained on English, there were all sorts of mistranslations and misunderstandings," Duane says.
Beyond literal mistranslations, AI can also introduce cultural errors. Culturally informed care is critical to increasing access to healthcare for everyone; an LLM trained on mainstream American ideologies will be less useful in other cultural contexts.
This is just one unforeseen consequence of training LLMs on a diet of data dominated by one small corner of the world. Others are likely to emerge as AI is integrated into more industries and technologies.
But by establishing a strong scholarly basis for understanding these consequences, Duane thinks we can also help mitigate them.
"We're not helpless in how this turns out, including how we speak about it now," she says. "We don't have to do this sort of passive, 'Well, it's off and running…' thing."
Collaborating with an international university, where the primary languages spoken are French and Arabic, is an important step in building this understanding.
"This project is a bold step toward reimagining AI in ways that respect and reflect linguistic and cultural diversity," says Dr. Ihsane Hmamouchi, Vice-Dean at the International Faculty of Medicine at UIR. "What excites me most is our commitment to embedding patient stories and social realities into AI models. By doing so, we're not only challenging the structural biases of conventional systems but also paving the way for more equitable, human-centered digital healthcare solutions. It's about developing technology that listens as much as it computes."
Taking the Conversation Global
"One reason this became possible is because we've been putting together an interdisciplinary AI working group here, building that conversation," says Duane. "We have computer scientists and philosophers and historians and journalists, and we meet once a month via the Institute."
This working group was first supported by a UConn CLAS Multidisciplinary Research Grant. With the interdisciplinary groundwork already laid, the research team was able to then expand the conversation, growing what had previously been an "informal collaboration" with AI scholars at UIR.
It's a testament to the creative and scholarly potential that is unlocked when academics can freely share and build on one another's expertise.
"Here at UConn, we have this great synergy between people in several disciplines, and the capacity to really learn from each other's work, in ways that produce better research and better conversations than staying in our silos," Duane says. "We can't [stay in our silos], on something like AI. It's going to change everything about how we work and live."
In addition to Duane and Hmamouchi, the project's collaborators include Clarissa J. Ceglio, UCHI Associate Director of Collaborative Research and Associate Professor of Digital Humanities; Nasya Al-Saidy, UCHI Managing Director; Dan Weiner, Vice Provost of UConn Global Affairs; and Allison Cassaly, Global Initiatives Coordinator, UConn Global Affairs.