Artificial intelligence is increasingly being used in home health care - but home health care workers are generally unaware of it. Nor do they understand how AI works, why it may retain their information, or how it could replicate bias and discrimination in their workplace.
A team of Cornell researchers investigated the implications of AI tools for the work of frontline home health care workers, such as personal care aides, home health aides and certified nursing assistants, in a qualitative study. They'll present the work at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI '25), April 26 - May 1 in Yokohama, Japan.
"Our study takes the first steps in a broader agenda that seeks to elevate the voices of frontline stakeholders in the design and adoption of safe and ethical AI systems in home health care," said Nicola Dell, co-author of the paper and associate professor of information science at Cornell Tech. She is also associate professor at the Jacobs Technion-Cornell Institute and at the Cornell Ann S. Bowers College of Computing and Information Science.
The researchers' interviews with 22 home care workers, care agency staff and worker advocates revealed that home care workers have little understanding of AI technology, how it uses their data or why AI systems retain their information.
"Participants in the study recognized the significant efficiency gains AI tools can provide, especially in an industry facing labor shortages and increasing demand," said co-author Ian René Solano-Kamaiko, a doctoral student in computing and information science at Cornell Tech. "However, we saw that agency participants often assumed these systems were trustworthy simply because they improved operational outcomes, despite acknowledging they have no idea if these tools are operating fairly."
The home care workers in the study generally did not realize that AI is already being implemented in their work, particularly through algorithmic shift-matching systems used by agencies that employ them. Home care workers receive shift assignments from agencies through a matching process designed to balance their availability, qualifications and geographic location with the needs and location of patients.
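The study does not detail any particular agency's algorithm, but a minimal sketch makes the idea concrete. In the hypothetical Python below, every field name, weight and rule is an illustrative assumption, not a description of a real system:

```python
# A minimal, hypothetical sketch of how a shift matcher might score
# worker-shift pairs. All field names, weights and rules here are
# illustrative assumptions, not the systems agencies actually use.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    qualifications: set[str]   # e.g. {"CNA", "CPR"}
    available_hours: set[int]  # hours of the day the worker can cover
    zip_code: str

@dataclass
class Shift:
    required_qualifications: set[str]
    hours: set[int]
    patient_zip: str

def match_score(worker: Worker, shift: Shift) -> float:
    """Score one worker-shift pair; higher means a better match."""
    # Hard constraint: the worker must hold every required credential.
    if not shift.required_qualifications <= worker.qualifications:
        return 0.0
    # Fraction of the shift's hours the worker is free to cover.
    availability = len(shift.hours & worker.available_hours) / len(shift.hours)
    # Crude proximity proxy: same ZIP code as the patient or not.
    proximity = 1.0 if worker.zip_code == shift.patient_zip else 0.5
    return 0.6 * availability + 0.4 * proximity  # weights are arbitrary

def assign(workers: list[Worker], shift: Shift) -> Worker:
    """Offer the shift to the highest-scoring worker."""
    return max(workers, key=lambda w: match_score(w, shift))
```

Even in a toy version like this, the weights quietly decide who is offered work, and nothing a worker sees when a shift arrives reveals how the score was computed.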
"We found a significant knowledge gap: Agency staff were generally more aware of AI's use in home care, while most home care workers - those directly affected by these systems - had little knowledge of AI and were often unaware it was already being used in their work," Solano-Kamaiko said.
This knowledge gap is troubling given that algorithmic rankers, which are similar to the shift-matching systems used in home care, have been shown to discriminate against groups who share the same demographic characteristics as home care workers: women, people of color, immigrants and individuals with other marginalized identities.
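The paper points to this risk rather than to a specific mechanism, but one common way a ranker can discriminate, sketched below with invented data, is by weighting a feature that correlates with a demographic group, such as customer ratings that may absorb reviewer prejudice:

```python
# Invented example (not from the study) of how a ranker can reproduce bias:
# if customer ratings absorb reviewer prejudice against some group of
# workers, any ranker that weights those ratings inherits that prejudice.
workers = [
    # (worker, avg_customer_rating, completed_shifts)
    ("worker_1", 4.9, 120),
    ("worker_2", 4.5, 310),  # suppose lower ratings reflect biased reviewers,
    ("worker_3", 4.8, 200),  # not lower-quality care
]

def rank_key(record):
    _, rating, shifts = record
    experience = min(shifts, 200) / 200      # cap experience credit at 200 shifts
    return 0.8 * rating + 0.2 * experience   # arbitrary weights

# worker_2 ranks last - and is offered shifts last - despite the most experience.
for worker in sorted(workers, key=rank_key, reverse=True):
    print(worker[0])
```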
"While some participants acknowledged the risk of AI reinforcing existing inequalities, most were largely unaware of the potential for these technologies to reproduce racism, sexism and other forms of discrimination," Solano-Kamaiko said. "These findings underscore the urgent need for greater transparency, critical oversight and awareness around the use of AI in home care settings."
To better support home care workers in the future, the researchers emphasize the need for equitable, participatory governance structures to regulate AI. They argue these structures should include important stakeholders at all levels, including patients and home care workers.
"Participatory approaches to developing AI governance will need to be constructed with care to ensure they center problems and potential solutions from the perspectives of stakeholders who are not only on the margins, but whose voices are critically excluded in current discourse on AI governance," Solano-Kamaiko said.
To ensure these stakeholders have the AI knowledge needed to take part in such governance, the researchers also advocate for "stakeholder-first" approaches to AI education.
"Instead of focusing AI literacy on the technology itself, the stakeholder-first approach shifts the emphasis from the content to be learned to the contexts in which AI systems are applied," Solano-Kamaiko said. "This approach helps workers better understand and reason about the implications of AI in their specific contexts without requiring technical skills like programming."
The study was funded by the Innovation Resource Center for Human Resources.
Grace Stanley is a staff writer-editor for Cornell Tech.