Meta's Factchecker Cuts Spark AI, Neurotech Concerns

Mark Zuckerberg's recent decision to remove factcheckers from Meta's platforms - including Facebook, Instagram and Threads - has sparked heated debate. Critics argue it may undermine efforts to combat misinformation and maintain credibility on social media platforms.

Authors

  • Fiona Carroll

    Reader in Human Computer Interaction, Cardiff Metropolitan University

  • Rafael Weber Hoss

    PhD Candidate in Intelligence Technologies and Digital Design, Cardiff Metropolitan University

Yet, while much attention is directed at this move, a far more profound challenge looms. The rise of artificial intelligence (AI) that processes and generates human-like language, as well as technology that aims to read the human brain, has the potential to reshape not only online discourse but also our fundamental understanding of truth and communication.

Factcheckers have long played an important role in curbing misinformation on various platforms, especially on topics like politics, public health and climate change. By verifying claims and providing context, they have helped platforms maintain a degree of accountability.

So, Meta's move to replace them with community-driven notes, similar to Elon Musk's approach on X (formerly Twitter), has understandably raised concerns. Many experts view the decision to remove factcheckers as a step backward, arguing that delegating content moderation to users risks amplifying echo chambers and enabling the spread of unchecked falsehoods.

Billions of people worldwide use Meta's platforms each month, giving the company enormous influence over public discourse. Loosening safeguards could exacerbate societal polarisation and undermine trust in digital communication.

But while the debate over factchecking dominates headlines, there is a bigger picture. Advanced AI models like OpenAI's ChatGPT or Google's Gemini represent significant strides in natural language understanding. These systems can generate coherent, contextually relevant text and answer complex questions. They can even engage in nuanced conversations. And this ability to convincingly replicate human communication introduces unprecedented challenges.

AI-generated content blurs the line between human and machine authorship, raising ethical questions about originality and accountability. The same tools that power helpful innovations can also be weaponised to produce sophisticated disinformation campaigns or manipulate public opinion.

These risks are compounded by other emerging technologies. Neural networks, the systems that underpin these AI models, are inspired by human cognition and mimic the way the brain processes language. This intersection between AI and neurotechnology highlights the potential for both understanding and exploiting human thought.

Implications

Neurotechnology encompasses tools that read and interact with the brain, with the goal of understanding how we think. Like AI, it pushes the limits of what machines can do. The two fields overlap in powerful ways.

For example, REMspace, a California startup, is building a tool that records dreams. Using a brain-computer interface, it lets people communicate through lucid dreaming. While this sounds exciting, it also raises questions about mental privacy and control over our own thoughts.

Meanwhile, Meta's investments in neurotechnology alongside its AI ventures are also concerning. Several other global companies are exploring neurotechnology too. But how will data from brain activity or linguistic patterns be used? And what safeguards will prevent misuse?

If AI systems can predict or simulate human thoughts through language, the boundary between external communication and internal cognition begins to blur. These advancements could erode trust, expose people to exploitation and reshape the way we think about communication and privacy.

Research also suggests that while this type of technology could enhance learning, it may also stifle creativity and self-discipline, particularly in children.

Meta's decision to remove factcheckers deserves scrutiny, but it's just one part of a much larger challenge. AI and neurotechnology are forcing us to rethink how we use language, express thoughts and even understand the world around us. How can we ensure these tools serve humanity rather than exploit it?

The lack of rules to manage these tools is alarming. To protect fundamental human rights, we need strong legislation and cooperation across different industries and governments. Striking this balance is crucial. The future of truth and trust in communication depends on our ability to navigate these challenges with vigilance and foresight.

The Conversation

Rafael Weber Hoss receives research funding from the Brazilian government.

Fiona Carroll does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
