AI Bias Impacts Hiring, Healthcare Decisions

University of Oklahoma

Generative AI tools like ChatGPT, DeepSeek, Google's Gemini and Microsoft's Copilot are transforming industries at a rapid pace. However, as these large language models become less expensive and more widely used for critical decision-making, their built-in biases can distort outcomes and erode public trust.

Naveen Kumar, an associate professor at the University of Oklahoma's Price College of Business, has co-authored a study emphasizing the urgent need to address bias by developing and deploying ethical, explainable AI. This includes methods and policies that ensure fairness and transparency and reduce stereotypes and discrimination in LLM applications.

"As international players like DeepSeek and Alibaba release platforms that are either free or much less expensive, there is going to be a global AI price race," Kumar said. "When price is the priority, will there still be a focus on ethical issues and regulations around bias? Or, since there are now international companies involved, will there be a push for more rapid regulation? We hope it's the latter, but we will have to wait and see."

According to research cited in their study, nearly a third of those surveyed believe they have lost opportunities, such as financial or job prospects, because of biased AI algorithms. Kumar notes that AI developers have focused on removing explicit biases, but implicit biases remain, and as LLMs grow more capable, detecting implicit bias will become harder. That difficulty underscores the need for ethical policies.

"As these LLMs play a bigger role in society, specifically in finance, marketing, human relations and even healthcare, they must align with human preferences. Otherwise, they could lead to biased outcomes and unfair decisions," he said. "Biased models in healthcare can lead to inequities in patient care; biased recruitment algorithms could favor one gender or race over another; or biased advertising models may perpetuate stereotypes."

While explainable AI and ethical policies are being established, Kumar and his collaborators call on scholars to develop proactive technical and organizational solutions for monitoring and mitigating LLM bias. They also suggest that a balanced approach should be used to ensure AI applications remain efficient, fair and transparent.

"This industry is moving very fast, so there is going to be a lot of tension between stakeholders with differing objectives. We must balance the concerns of each player—the developer, the business executive, the ethicist, the regulator—to appropriately address bias in these LLM models," he said. "Finding the sweet spot across different business domains and different regional regulations will be the key to success."

About the project

"Addressing bias in generative AI: Challenges and research opportunities in information management" is published in the journal Information & Management, DOI 10.1016/j.im.2025.104103. Kumar, who is an associate professor of management information systems at OU, co-authored the paper with Xiahua Wei of the University of Washington Bothell and Han Zhang of the Georgia Institute of Technology and Hong Kong Baptist University.
