Research: Decision-making Mystery Of AI Chatbots

A new study from Cornell SC Johnson College of Business explores the differences between decision-making processes in human and artificial intelligence. In the working paper, "Do AI Chatbots Provide an Outside View?" Stephen Shu, professor of practice at the Charles H. Dyson School of Applied Economics and Management, and his co-authors investigate the decision-making characteristics of AI chatbots.

The study revealed that AI chatbots, despite their computational prowess, exhibit decision-making patterns that are neither purely human nor entirely rational. The chatbots possess what is termed an "inside view" akin to humans, falling prey to cognitive biases such as the conjunction fallacy, overconfidence, and confirmation bias.

At the same time, AI chatbots offer what Shu terms an "outside view," complementing human decision-making in certain respects. They excel at considering base rates, are less susceptible to biases stemming from limited memory recall, and are insensitive to availability and endowment effect biases. For example, whereas humans tend to value an item more once they own it (the endowment effect), AI chatbots do not appear to exhibit this bias.

Read the full story on the Cornell SC Johnson College of Business news site, BusinessFeed.

/Public Release. This material from the originating organization/author(s) may be point-in-time in nature and edited for clarity, style, and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).