
Secure AI

Artificial Intelligence is becoming a cornerstone of Swedish industry, the public sector, and society at large. But as adoption accelerates, so do the risks. Traditional cybersecurity focuses on protecting infrastructure; AI Security is about ensuring that AI systems themselves behave safely, robustly, and ethically — even under manipulation or attack.

AI Sweden’s Secure AI initiative provides a unique national capability to:

  • Identify and respond to vulnerabilities in AI systems, such as adversarial manipulation, data poisoning, and model leakage.
  • Develop and validate countermeasures that make AI trustworthy and resilient.
  • Help you maintain the integrity of the data you use to train AI systems or submit to AI models.
  • Support compliance with the EU AI Act through concrete methods, tools, and evaluations.
  • Build talent and capacity by engaging researchers, students, and industry experts.
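To make one of these threat classes concrete, here is a minimal, hedged sketch of data poisoning: an attacker injects a handful of mislabeled outliers into a training set and shifts a toy nearest-centroid classifier's decision boundary. All data, numbers, and the classifier itself are illustrative assumptions for this sketch, not part of any AI Sweden tooling.

```python
# Toy data-poisoning illustration: injected mislabeled outliers
# drag one class centroid and degrade a nearest-centroid classifier.
import random

random.seed(0)

def make_data(n=200):
    # Two 1-D classes: class 0 clustered near 0.0, class 1 near 1.0
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        data.append((random.gauss(float(label), 0.2), label))
    return data

def train_centroids(data):
    # "Training" is just the mean of each class
    return {cls: sum(x for x, y in data if y == cls) /
                 sum(1 for _, y in data if y == cls)
            for cls in (0, 1)}

def accuracy(centroids, data):
    correct = sum(min(centroids, key=lambda c: abs(x - centroids[c])) == y
                  for x, y in data)
    return correct / len(data)

def poison(data, n_bad=20):
    # Attacker injects a few outliers at x=5.0, mislabeled as class 0,
    # pulling the class-0 centroid toward class 1
    return data + [(5.0, 0)] * n_bad

train, test = make_data(), make_data()
clean_acc = accuracy(train_centroids(train), test)
poisoned_acc = accuracy(train_centroids(poison(train)), test)
print(f"clean: {clean_acc:.2f}, poisoned: {poisoned_acc:.2f}")
```

Even 20 injected points out of 220 are enough to move the decision boundary and misclassify a visible fraction of one class, which is why training-data integrity (the third bullet above) matters.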

We have been working with AI Security since 2021, and Secure AI stands as Sweden’s collaborative hub for ensuring that AI systems remain safe, reliable, and aligned with our values — even in the face of evolving threats.

Projects and talent programs


AI-Powered Honeypots

Honeypots are decoy computing systems that mimic real environments to trick attackers into interacting with them. There are two ways a honeypot can increase security: (1) by occupying attacker...
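As a toy illustration of the decoy idea (not AI Sweden's honeypot implementation, and without the AI component), a low-interaction honeypot can be as little as a fake service banner that logs every connection attempt; the banner string, in-memory log, and helper names below are all assumptions for this sketch.

```python
# Minimal low-interaction honeypot sketch: present a decoy SSH banner
# and record every connection attempt for later threat analysis.
import socketserver
import threading

LOG = []  # in a real deployment this would feed a log pipeline / SIEM

class HoneypotHandler(socketserver.BaseRequestHandler):
    BANNER = b"SSH-2.0-OpenSSH_8.9p1\r\n"  # decoy service banner

    def handle(self):
        # Greet like a real service, then record who connected
        # and the first bytes they sent
        self.request.sendall(self.BANNER)
        data = self.request.recv(1024)
        LOG.append({"peer": self.client_address[0], "first_bytes": data})

def start_honeypot(host="127.0.0.1", port=0):
    # port=0 lets the OS pick a free port; real honeypots would sit
    # on well-known ports such as 22 or 23
    server = socketserver.ThreadingTCPServer((host, port), HoneypotHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Calling `start_honeypot()` and connecting to `server.server_address` yields the banner and leaves one entry in `LOG` per probe; the AI-powered variant described above would go further and adapt its responses to keep attackers engaged.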

LeakPro: Leakage profiling and risk oversight for machine learning models

Many recent works have highlighted the possibility of extracting data from trained machine-learning models. However, these examples are typically performed under idealistic conditions and it is...
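As a hedged illustration of the attack class such audits quantify (a simple loss-threshold membership-inference test, not LeakPro's actual method), consider a deliberately overfit 1-nearest-neighbour model whose near-zero error on its own training points betrays membership; all data and names here are synthetic.

```python
# Toy loss-threshold membership-inference attack: an overfit model's
# error is ~zero on training members, revealing who was in the data.
import random

random.seed(1)

def target_fn(x):
    return 2.0 * x + 0.5  # ground truth the model tries to learn

def sample(n):
    # Noisy observations of the target function
    return [(x, target_fn(x) + random.gauss(0.0, 0.1))
            for x in (random.random() for _ in range(n))]

def predict(train, x):
    # 1-nearest-neighbour regressor: memorizes the training set,
    # exactly the kind of overfitting that leaks membership
    return min(train, key=lambda p: abs(p[0] - x))[1]

members = sample(100)     # points the model was "trained" on
nonmembers = sample(100)  # fresh points the model never saw

THRESHOLD = 1e-9  # guess "member" when the model's error is ~zero

def guess_member(x, y):
    return (predict(members, x) - y) ** 2 < THRESHOLD

hits = sum(guess_member(x, y) for x, y in members)
hits += sum(not guess_member(x, y) for x, y in nonmembers)
attack_accuracy = hits / 200
print(f"membership-inference accuracy: {attack_accuracy:.2f}")
```

An attack accuracy well above the 0.5 of random guessing signals leakage; auditing tools report this kind of gap under more realistic, non-idealized conditions.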

AI Security Graduate Program

The AI Security Graduate Program places talent in your organization to secure robust, ethical, and sustainable AI. Together with partners, AI Sweden and Nexer Tech Talent cultivate Sweden's...

Keynotes and publications on Secure AI

Selected keynotes and publications. Visit our listing of scientific publications and AI Sweden's YouTube channel for a comprehensive list.


Destabilizing a Social Network Model via Intrinsic Feedback Vulnerabilities

Rogers, Lane H., Emma J. Reid, and Robert A. Bridges. SafeThings Workshop, IEEE S&P 2025, arXiv preprint

[pdf]

Emerging Threats & Solutions in Protecting The Electric Grid

Dr. Mary Bell, DSU

(45 min)

YouTube

Can 'Lying Smartly' Preserve Privacy?

Dr. Robert (“Bobby”) Bridges, Innovation lead at AI Sweden

(63 min)

YouTube

Poisoning Attacks on Federated Learning for Autonomous Driving

Sonakshi Garg, Hugo Jönsson, Gustav Kalander, Axel Nilsson, Bhhaanu Pirange, Viktor Valadi, Johan Östman. In: SCAI 2024.

[pdf]

AI trends seminar 2025 - Impact on AI Security and trust

Rebecka Cedering Ångström, Principal researcher, Ericsson
Mauricio Muñoz, Senior research engineer, AI Sweden

(16 min)

YouTube

Related news

Johan Östman and Maja Haak presenting the LeakPro project in the Edge Lab
What can research in AI security contribute to the fight against organized crime? Johan Östman...
EU Minister Jessica Rosencrantz at AI Sweden with partners.
Sweden's and Europe's competitiveness, innovation, and AI security were among the topics of...
Johan Östman and Tim Isbister posing in front of AI Sweden poster
AI Sweden was represented with two scientific papers at this year's International Conference on...
Mauricio and Bobby
The recruitment of two international AI experts strengthens AI Sweden's capabilities. "I'm delighted...
Dr. Robert (Bobby) Bridges calculating and writing on a whiteboard
Can AI adapt honeypots to produce more threat intelligence by eliciting greater interaction from...
A large group of students
How can AI help in detecting suspicious ship movements at sea? How can AI help protect IT...

Get involved

AI Sweden partners, reach out to Mats Nordlund.

Not a partner? Learn more about partnering with AI Sweden.

Mats Nordlund
Director of AI Labs
+46 (0)70-398 08 37