As bots - automated agents that perform tasks on behalf of humans - become more active in online communities, they profoundly change how humans interact with each other on those platforms. Bots designed to help users see more content increase the number of people users connect with, but they also decrease direct interactions between people.
Author
- John Lalor
Assistant Professor of IT, Analytics, and Operations, University of Notre Dame
In online communities, replies, likes and comments between users form a network of interactions. Analysis of these social networks shows patterns, such as who is connecting and who is popular or important in the community.
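The kind of interaction network described above can be sketched in a few lines of code. The reply data below is made up for illustration, and a simple degree count stands in for the more sophisticated network measures researchers actually use: each reply links two users, and the user with the most links is the most connected member of the community.

```python
from collections import Counter

# Hypothetical reply log: (author, replied_to) pairs from a community thread.
replies = [
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("bob", "alice"), ("alice", "carol"),
]

# Count each user's connections (degree) across all interactions.
degree = Counter()
for author, target in replies:
    degree[author] += 1
    degree[target] += 1

# The highest-degree user is the most connected ("popular") member.
most_connected, connections = degree.most_common(1)[0]
print(most_connected, connections)  # bob 4
```

In real analyses, degree is only one of several measures; researchers also look at who bridges otherwise separate groups and how tightly the network clusters.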
My colleagues Nicholas Berente and Hani Safadi and I analyzed the network structure of communities on Reddit, called subreddits, that had seen increased use of bots from 2005 to 2019. Our goal was to see whether the presence of bots affected how the human community members interacted with each other.
Based on recent research, we knew that we were looking for two types of bots: reflexive and supervisory bots.
Reflexive bots are coded to plug into a community's application programming interface (API). Depending on how they are coded, they either post content according to specific rules or search for specific content and reply based on their preprogrammed rules. Supervisory bots have more permissions in the community and can delete or edit posts, or even ban users, based on preprogrammed community moderation rules.
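A reflexive bot's rule loop can be sketched roughly as follows. The keywords, replies, and function names here are hypothetical, and a real bot would fetch posts through the platform's API rather than from a hard-coded list - this only illustrates the "match a rule, post a preprogrammed reply" pattern.

```python
# Hypothetical keyword-to-reply rules for a reflexive bot.
RULES = {
    "help": "See the community FAQ for common questions.",
    "source?": "Please link the original source when sharing claims.",
}

def reflexive_reply(post_text):
    """Return a preprogrammed reply if the post matches a rule, else None."""
    for keyword, reply in RULES.items():
        if keyword in post_text.lower():
            return reply
    return None

print(reflexive_reply("Can anyone help me with flair?"))
# → See the community FAQ for common questions.
```

A supervisory bot would follow the same rule-matching pattern but act with moderator permissions - deleting the post or banning the user instead of replying.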
We found that when there is more reflexive bot activity in a community - more bots posting content - there are more human-to-human connections. This means that the reflexive bots posting content enable people to find novel content and engage with other users they otherwise would not have encountered. However, this high bot activity leads to less back-and-forth discussion between users. If a user posts on a subreddit, it is more likely that a bot will reply or interject itself into the conversation than that two human users will engage in a meaningful back-and-forth discussion.
When there are supervisory bots moderating a community, we see less centralization in the human social network. This means that those key people who were important to the community have fewer connections than before. Without supervisory bots, these members would be the ones who establish and enforce community norms. With supervisory bots, this is less necessary, and those human members are less central to the community.
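To make "centralization" concrete: a common way to quantify it is Freeman's degree centralization, which is 1 for a star network (one member connected to everyone) and 0 when every member has the same number of connections. This toy computation is illustrative only and is not the measure used in the study.

```python
def degree_centralization(degrees):
    """Freeman degree centralization: 1 for a star network, 0 when all equal."""
    n = len(degrees)
    dmax = max(degrees)
    # Normalize by the maximum possible sum, which a star network achieves.
    return sum(dmax - d for d in degrees) / ((n - 1) * (n - 2))

# A star: one central member connected to four others -> fully centralized.
star = [4, 1, 1, 1, 1]
# A ring: every member has exactly two connections -> no central figure.
ring = [2, 2, 2, 2, 2]

print(degree_centralization(star))  # 1.0
print(degree_centralization(ring))  # 0.0
```

In these terms, our finding is that supervisory bots push the human network away from the star-like pattern and toward the more even one.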
Why it matters
Bots are prevalent across online communities, and they can process vast amounts of data very quickly, which means they can react and respond to many more posts than humans can.
What's more, as generative AI improves, people could use it to create more and more sophisticated bot accounts, and the platforms could use it to coordinate content moderation. Tech companies investing heavily in generative AI technologies could also deploy generative AI bots to increase engagement on their platforms.
Our study can help users and community leaders understand the impact of these bots on their communities. It can also help community moderators understand the impact of enabling automated moderation through supervisory bots.
What's next
Bots are rigid because of their rules-based nature, but they are likely to become more advanced as they incorporate new technologies such as generative AI. More research will be needed to understand how complex generative AI bots affect human-to-human interactions in online communities.
At the same time, automating platform moderation can lead to strange effects, because bots are more rigid in their enforcement and cannot deal with potential issues on a case-by-case basis. How generative AI changes moderator bots remains to be seen.
The Research Brief is a short take on interesting academic work.
John Lalor does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.