
Responsible use of AI

Responsible AI by design is a strategic necessity for using artificial intelligence (AI) to address major societal and business challenges and to foster positive societal development. It unlocks value creation while preventing the underuse, misuse, and harmful use of AI. AI Sweden actively explores and contributes to the responsible development and use of AI in real-world use cases and helps organizations turn Responsible AI into action through collaboration and knowledge sharing.

Capabilities and practices

Responsible AI, in our view, is about enabling the adoption of AI to address societal and business challenges while safeguarding democratic values and quality of life.

At AI Sweden, we recognize that many organizations and leaders need support in navigating the complexity of AI and in building the necessary maturity, competence, and capabilities. 

When supporting our partner organizations, we focus primarily on the capabilities of three main areas: Regulatory and Compliance, Ethics and Sustainability, and AI Security and Reliability.

 

Venn diagram where Responsible development & use of AI intersects Regulatory & Compliance, AI Security, and Ethics & Sustainability.

Responsible AI is not about building “perfect” AI solutions, checking boxes, or foreseeing all risks and harms that AI can cause. It means building the capabilities to make informed decisions about AI’s use and its local and global impacts, and creating the conditions for meeting our time’s most pressing challenges with the help of AI.

Francisca Lundborg

Head of Responsible AI at AI Sweden

AI Sweden promotes a responsible development and use of AI, characterized by the following cornerstones:

  • informed leadership
  • domain-specific knowledge combined with technical skills
  • safe experimentation and active exploration
  • multidisciplinary stakeholder engagement

Organizations must create the conditions to actually use AI for creating sustainable value and fostering positive societal development: competence, confidence, and trust are key. Turning the AI Act and Ethics Guidelines for Trustworthy AI into action is a crucial step to this end.

Conny Svensson

Director of AI Adoption at AI Sweden

As organizations mature on their AI journey, they will realize that Responsible AI is not something that happens afterwards. On the contrary, included from the start and throughout the AI life-cycle, Responsible AI by design helps organizations innovate faster and more sustainably. And yet, it does not need to be perfect from the start: Responsible AI should be understood as a process of safe experimentation, active exploration, and capability building over time.

AI Sweden's Responsible AI Function


In order to put principles into practice we have established a Responsible AI Function that supports AI Sweden partners and the ecosystem with tools and guidance to help navigate the complexity of AI and implement it responsibly.

Members of the Responsible AI Function

  • Francisca Lundborg, Head of Responsible AI (parental leave)

  • Billy Jörgensen, Head of Regulatory and Leadership Initiatives

  • Tommy Schönberg, Head of Defense Innovation and Secure AI

  • Carl Norling Markai, Impact Initiative Developer (point of contact)

Collaborating for Responsible AI

By providing a neutral platform, the Responsible AI Function helps partners and the wider Swedish AI ecosystem collaboratively develop shareable, practical resources and best practices for the responsible development and use of AI. Current projects and initiatives:

The Responsible AI Knowledge Hub

A platform designed to help organizations identify and adopt tools and resources for developing and utilizing AI responsibly. The hub serves as a virtual space, facilitating knowledge sharing and supporting organizations in translating Responsible AI principles into operational practice. AI Sweden partners will find exclusive guides, playbooks, and more.

Secure AI

AI Sweden’s Secure AI initiative provides a unique national capability to identify and respond to technical vulnerabilities in AI systems, such as adversarial manipulation, data poisoning, and data leakage from models. We develop countermeasures that make AI trustworthy and resilient, help organizations understand how to maintain the integrity of their data, and support compliance with the EU AI Act through concrete technical methods, tools, and evaluations.
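To give a flavor of what "adversarial manipulation" means in practice, here is a minimal, illustrative sketch of one well-known attack, the Fast Gradient Sign Method (FGSM), applied to a hypothetical two-weight logistic classifier. This is a textbook toy example under assumed model weights and inputs, not AI Sweden's tooling or methodology:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Fast Gradient Sign Method for a logistic model p = sigmoid(w @ x).

    The gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w;
    FGSM perturbs x a small step eps in the sign direction of that gradient.
    """
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Hypothetical model weights and a clean input with true label y = 1
w = np.array([1.0, -1.0])
x = np.array([0.3, 0.1])
y = 1.0

x_adv = fgsm(x, y, w, eps=0.15)
print(w @ x > 0)      # clean input: classified as class 1 (correct)
print(w @ x_adv > 0)  # perturbed input: the prediction flips
```

A perturbation of only 0.15 per feature is enough to flip this toy model's decision, which is why the initiative treats robustness to such manipulation as a first-class evaluation target.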

Legal Expert Group

In this network, legal experts from AI Sweden’s partner organizations meet to discuss and share legal questions and challenges connected to artificial intelligence. The network holds meetings regularly, by invitation from AI Sweden. All participants in the network can suggest topics and questions to be discussed at upcoming meetings.

AI Act ready

A knowledge initiative aimed at Swedish organizations that want to gain a basic understanding of the EU AI Act. Experts from AI Sweden, Almi, PwC Sweden, Scania, and the Swedish Authority for Privacy Protection (IMY) share their knowledge and insights.


AI Security involves recognizing that AI comes with inherent vulnerabilities, and it entails working to ensure that the overall AI system is robust to exploitation of these flaws, whether intentional or otherwise.

Tommy Schönberg

Head of Defense Innovation and Secure AI


Responsible AI is not static: if we approach it as a commitment and a process of becoming better over time, the first step is already taken.

Carl Norling Markai

Impact Initiative Developer at AI Sweden

Recording from AI Sweden’s stage at Almedalen 2025: Carl Norling Markai hosts a discussion with Eric Leijonram (GD, IMY), Nicklas Mårtensson (Chair, Funktionsrätt Sverige), and Caroline Atelius (COO, Microsoft Sweden) – exploring how to put responsible AI into practice, including the tools, models, and decision-making processes needed.

Advancing Responsible AI together

Do you want to learn more about how AI Sweden helps partners interpret regulatory decisions? Are you interested in collaborating with AI Sweden and other partners on creating shareable resources that operationalize principles and requirements for Responsible AI? Is your organization a forerunner in Responsible AI practices and interested in sharing experiences with others, or are you facing challenges in putting principles into practice in concrete use cases?

If so, we’d love to hear from you and explore what we can do together. Get in touch with us by reaching out to Carl Norling Markai.

Carl Norling Markai
Impact Initiative Developer
+46 (0)70-588 05 72

Exploring Responsible AI in concrete use cases

Throughout AI Sweden’s projects and initiatives, we actively explore what Responsible AI means in practice. Abstract principles, such as legal compliance, human agency, technical robustness, privacy, data management, and transparency, become concrete in use cases and need to be understood and negotiated in real-world scenarios. Together with partners, we strengthen the Swedish ecosystem to sustainably shape and accompany the current unprecedented transformation of society and organizations.

Read more about some of our projects and how we embed Responsible AI in our work:

  • The shared digital assistant for the public sector is a collaboration between Swedish authorities...
  • AI in breast cancer screening can lead to both time savings and improvements in screening quality...
  • In the latest episode of the AI Sweden Podcast, Johan Östman, researcher and project manager at AI...
  • At the initiative of AI Sweden, trade unions, employer organisations, and transition organisations...
  • At the intersection of creativity and emerging technology, this project brings together fashion...

Courses and competence building: AI literacy is a key requirement for the responsible development and use of AI. AI Sweden offers both tailored education programs for different organizations and sectors and publicly available courses on AI topics for practitioners, data scientists, business specialists, and CxOs.