Guidelines on 'Responsible Research and Development of Artificial Intelligence' adopted by the Senate of the Max Planck Society
Many of the technologies we rely on in everyday life today, such as machine learning and natural language processing, are based on decades of scientific advances in the field of artificial neural networks. However, much of today's cutting-edge AI research has emerged from the research labs of private tech companies. It is no coincidence that the 2024 Nobel Prize in Chemistry was awarded to two researchers from Google DeepMind.
In an article published in September 2024, the journal Nature criticized the fact that much of today's AI research no longer takes place in university laboratories but instead happens behind the closed doors of private companies. Industry now accounts for 96 percent of the largest and most powerful AI models. This raises a significant concern: companies are likely to prioritize profit, which could shape not only the types of AI products they aim to develop, but also the research questions they choose to pursue.
Against this backdrop, scientific research plays a crucial role: it can help steer AI development toward serving the common good. Notably, there is a clear lack of AI applications addressing the problems people face most urgently, such as climate change, biomedical challenges, and questions in the social sciences.
At a two-day internal Max Planck symposium on "AI and Research", over 200 participants gathered on-site in Berlin, while up to 250 joined remotely. Experts presented their research on and with AI, covering topics that ranged from AI applications in brain research and psychiatry to its use in astrophysics, geosciences, and social sciences.
The symposium also featured panel discussions on the further methodological development of AI research at the Max Planck Society (MPG), on fostering interdisciplinary collaboration, and on strategies for using AI more efficiently both in scientific work and in science support services.
The event marked the launch of a broader AI initiative at the Max Planck Society. In November 2024, the Senate of the Max Planck Society adopted the guidelines on "Responsible Research and Development of Artificial Intelligence". While AI systems developed exclusively for scientific purposes are exempt from the regulations of the European AI Act, it remains essential that ethical and professional standards are upheld in scientific research.
The document was developed in collaboration with researchers and serves as a supplement to the existing guidelines of the Max Planck Society, which provide the framework for the research and development of trustworthy AI. These include the Code of Conduct, the rules of good scientific practice, and the information and regulations on the responsible handling of research freedom and associated risks.