Twelve Swedish civil society organizations have collaborated to launch joint guidelines for the responsible use of artificial intelligence (AI). These guidelines, developed in conjunction with AI Sweden, are intended to serve as a toolkit for civil society to navigate the opportunities and pitfalls of AI technology, ensure ethical use, and counteract injustices.
"Questions regarding AI's impact on civil society and marginalized groups will become increasingly central," says Carl Norling Markai, Project Manager at AI Sweden.
An increasing number of sectors across society are being impacted by artificial intelligence, driving efficiency and creating value in ways that were previously unimaginable. At the same time, there is a risk that algorithms may reinforce existing social injustices. Given its focus on marginalized groups and sensitive issues, civil society has an even greater responsibility to ensure the responsible use of AI.
AI is everywhere, and we must take responsibility. Civil society organizations have historically played a crucial role in driving systemic change for the benefit of all. In the same spirit, we need to engage in the conversation around AI—exploring how it can support our work, create new opportunities, and enable us to innovate for the good of society.
Rodolfo Zúñiga
Head of Digital Safety and AI at Save the Children
These new guidelines serve as an important reminder that technological development must go hand in hand with societal values.
Our entry point is fundamentally a rights-based mission: to shape public opinion and ensure that AI does not contribute to further discrimination—whether at the individual or structural level.
Maria Jacobson
Communications Officer at the Anti-Discrimination Bureau West
The purpose of the guidelines is to provide civil society organizations with the practical knowledge and tools they need to implement AI in a way that strengthens their mission.

Carl Norling Markai, Project Manager at AI Sweden, emphasizes the importance of this work:
These guidelines are essential to ensuring that AI serves society in an inclusive and fair way. They empower civil society not only to benefit from AI, but also to play a critical role in scrutinizing its use—identifying and addressing risks related to biased data and a lack of fairness.
Carl Norling Markai
Project Manager at AI Sweden
He adds:
“This is an issue that will become increasingly important for politicians and policymakers involved in AI regulation and governance in Sweden.”
The guidelines were developed within the framework of the Civil Society Forum for Responsible AI – an initiative from AI Sweden with support from Google.org. AI Sweden convened and shaped the working group of twelve organizations, held meetings, and arranged workshops.
The forum has gathered a diversity of perspectives from organizations such as Funktionsrätt Sverige (Disability Rights Sweden), Riksidrottsförbundet (Swedish Sports Confederation), Svenska Röda Korset (Swedish Red Cross), Bris (Children's Rights in Society), and Rädda Barnen (Save the Children). This breadth has helped make the guidelines relevant and applicable across the entire sector. The guidelines compile frameworks and tools, identify the unique ethical challenges faced by civil society, and culminate in concrete recommendations for responsible and ethical AI implementation.
"The practical guidance on how to approach and apply AI in a responsible and constructive way has been extremely valuable. I believe that's what we, as an organization, will benefit from the most moving forward," says Martin Tägtström, CIO at the Swedish Red Cross.
The guidelines present principles for the responsible use of artificial intelligence (AI) within civil society organizations in Sweden. They address ethical AI use within civil society, highlight the opportunities and risks of AI technology related to fairness, transparency, and data protection, and provide practical advice and checklists.
The guidelines have been developed by the Civil Society Forum for Responsible AI, a working group consisting of twelve selected civil society organizations. The initiative for the Civil Society Forum for Responsible AI was taken by AI Sweden within the "AI for Impact" project, which is funded by Google.org.