Responsible use of AI
Responsible AI by design is a strategic necessity for using artificial intelligence (AI) to address major societal and business challenges and foster positive societal development. It unlocks value creation while preventing the underuse, misuse, and harmful use of AI. AI Sweden actively explores and contributes to the responsible development and use of AI in real-world use cases and helps organizations turn Responsible AI into action through collaboration and knowledge sharing.
Capabilities and practices
Responsible AI, in our view, is about enabling the adoption of AI to address societal and business challenges while safeguarding democratic values and quality of life.
At AI Sweden, we recognize that many organizations and leaders need support in navigating the complexity of AI and in building the necessary maturity, competence, and capabilities.
When supporting our partner organizations, we refer primarily to capabilities in three main areas: regulatory and compliance; ethics and sustainability; and AI security and reliability.
Responsible AI is not about building “perfect” AI solutions, checking boxes, or foreseeing all risks and harms that AI can cause. It means building the capabilities to make informed decisions about AI’s use and its local and global impacts, and creating the conditions for meeting our time’s most pressing challenges with the help of AI.
Francisca Lundborg
Head of Responsible AI at AI Sweden
AI Sweden promotes the responsible development and use of AI, characterized by the following cornerstones:
- informed leadership
- domain-specific knowledge combined with technical skills
- safe experimentation and active exploration
- multidisciplinary stakeholder engagement
Organizations must create the conditions to actually use AI for creating sustainable value and fostering positive societal development: competence, confidence, and trust are key. Turning the AI Act and Ethics Guidelines for Trustworthy AI into action is a crucial step to this end.
Conny Svensson
Director of AI Adoption at AI Sweden
As organizations mature on their AI journey, they realize that Responsible AI is not something that happens afterwards. On the contrary, when included from the start and throughout the AI life cycle, Responsible AI by design helps organizations innovate faster and more sustainably. And it does not need to be perfect from the start: Responsible AI should be understood as a process of safe experimentation, active exploration, and capability building over time.
AI Sweden's Responsible AI Function
To put principles into practice, we have established a Responsible AI Function that supports AI Sweden partners and the wider ecosystem with tools and guidance to help navigate the complexity of AI and implement it responsibly.
Members of the Responsible AI Function
Francisca Lundborg, Head of Responsible AI (parental leave)
Billy Jörgensen, Head of Regulatory and Leadership Initiatives
Tommy Schönberg, Head of Defense Innovation and Secure AI
Carl Norling Markai, Impact Initiative Developer (point of contact)
Collaborating for Responsible AI
By providing a neutral platform, the Responsible AI Function helps partners and the wider Swedish AI ecosystem collaboratively develop shareable, practical resources and best practices for the responsible development and use of AI. Current projects and initiatives:
The Responsible AI Knowledge Hub
A platform designed to help organizations identify and adopt tools and resources for developing and utilizing AI responsibly. The hub serves as a virtual space, facilitating knowledge sharing and supporting organizations in translating Responsible AI principles into operational practice. AI Sweden partners will find exclusive guides, playbooks, and more.
Secure AI
AI Sweden’s Secure AI initiative provides a unique national capability to identify and respond to technical vulnerabilities in AI systems, such as adversarial manipulation, data poisoning, and data leakage from models. We develop countermeasures that make AI trustworthy and resilient, help organizations preserve data integrity, and support compliance with the EU AI Act through concrete technical methods, tools, and evaluations.
Legal Expert Group
In this network, legal experts from AI Sweden’s partner organizations meet to discuss and share legal questions and challenges connected to artificial intelligence. The network holds meetings regularly, by invitation from AI Sweden. All participants in the network can suggest topics and questions to be discussed at upcoming meetings.
AI Act ready
A knowledge initiative aimed at Swedish organizations that want to gain a basic understanding of the EU AI Act. Experts from AI Sweden, Almi, PwC Sweden, Scania, and the Swedish Authority for Privacy Protection (IMY) share their knowledge and insights.
Recording from AI Sweden’s stage at Almedalen 2025: Carl Norling Markai hosts a discussion with Eric Leijonram (GD, IMY), Nicklas Mårtensson (Chair, Funktionsrätt Sverige), and Caroline Atelius (COO, Microsoft Sweden) – exploring how to put responsible AI into practice, including the tools, models, and decision-making processes needed.
Advancing Responsible AI together
Do you want to learn more about how AI Sweden helps partners interpret regulatory decisions? Are you interested in collaborating with AI Sweden and other partners on creating shareable resources that operationalize principles and requirements for Responsible AI? Is your organization a forerunner in Responsible AI practices and interested in sharing experiences with others, or are you facing challenges in putting principles into practice in concrete use cases?
If so, we’d love to hear from you and explore what we can do together. Get in touch with us by reaching out to Carl Norling Markai.
Exploring Responsible AI in concrete use cases
Throughout AI Sweden’s projects and initiatives, we actively explore what Responsible AI means in practice. Abstract principles, such as legal compliance, human agency, technical robustness, privacy, data management, and transparency, become concrete in use cases and must be understood and negotiated in real-world scenarios. Together with partners, we strengthen the Swedish ecosystem to sustainably shape and accompany the current unprecedented transformation of society and organizations.
Read more about some of our projects and how we embed Responsible AI in our work:
Courses and competence building: AI literacy is a key requirement for the responsible development and use of AI. AI Sweden offers both tailored education programs for different organizations and sectors and publicly available courses on AI topics for practitioners, data scientists, business specialists, and CxOs.