Pictured: Pekka Aula

Artificial intelligence in science: From opportunities to ethical challenges

“It is important for the scientific community to define clear guidelines for the use of AI, to increase transparency and to maintain a constant dialogue on how the technology can be utilized fairly and efficiently,” writes Secretary General Pekka Aula. “Scientific research is not detached from social values or ethical considerations.”

The recent advancements in artificial intelligence (AI), and especially the easy access to new AI applications and services, have opened up a plethora of new opportunities for scientific research. Machine learning and generative language models allow researchers to perform complex tasks faster and more efficiently than before.

A recent Nature survey of more than 1,600 researchers sheds light on the integration of AI in the scientific community.1 According to the survey, many scientists consider AI a valuable tool for data processing and speeding up computations, for example. However, there are obvious concerns about the risks of AI, such as the proliferation of misinformation and the rise of scientific fraud. Large language models, in particular, can produce inaccurate or misleading results in research if their outputs are not analysed critically.

“The role of AI can be limited to that of a research assistant, but it has the potential to be much more than that.”

The role of AI can be limited to that of a research assistant, but it has the potential to be much more than that. Science has enormous masses of data at its disposal, and AI has proven to be an effective tool for analysing them. For example, genetic sequence analysis and climate change modelling are areas of research in which the use of AI has yielded significant advances that were previously beyond the reach of scientists.

As the Nature survey also shows, the use of AI in science is not entirely unproblematic. The biggest risk is that knowledge based on AI will begin to shift the scientific consensus in a misleading direction. This danger becomes real if AI is trained on insufficient or biased data. Such misleading data could reinforce false impressions or stereotypes, for example.

“The ethics of scientific research requires that the critical review of AI is not limited to technical details alone.”

The ethics of scientific research requires that the critical review of AI is not limited to technical details alone. It is necessary to openly discuss how and to what extent AI is used, to whose advantage and with what consequences. It is up to the scientific community to ensure that AI is used responsibly and ethically to facilitate scientific progress while also respecting human dignity and individual rights.

This spring, we at the Finnish Academy of Science and Letters conducted several trials where we applied AI in a variety of tasks, such as communication and various forms of scientific advice. In addition, we organised our employees into learning pairs who meet regularly to discuss the risks and opportunities of AI and to share new ideas and potential applications. We also put together our own ethical guidelines for the use of AI.

The Nature survey also revealed, somewhat unsurprisingly, that researchers expect the importance of AI in research to increase. More than half of the respondents expect AI tools to be “very useful” or “essential” for researchers in their field in the coming years. It is important for the scientific community to define clear guidelines for the use of AI, to increase transparency and to maintain a constant dialogue on how the technology can be utilised fairly and efficiently. Scientific research is not detached from social values or ethical considerations. AI-modified science should also modify AI in return – ethically, purposefully and for the greater good. Science and humanity can thus continue to progress side by side.

Pekka Aula

Secretary General

The article has been published in the Yearbook 2023 of the Finnish Academy of Science and Letters. The Yearbook is freely available on our website.


1 Van Noorden, Richard & Perkel, Jeffrey M. 2023: AI and science: what 1,600 researchers think. Nature 621, pp. 672–675.