Secure AI
Artificial Intelligence is becoming a cornerstone of Swedish industry, the public sector, and society at large. But as adoption accelerates, so do the risks. Traditional cybersecurity focuses on protecting infrastructure; AI Security is about ensuring that AI systems themselves behave safely, robustly, and ethically, even under manipulation or attack.
AI Sweden’s Secure AI initiative provides a unique national capability to:
- Identify and respond to vulnerabilities in AI systems such as adversarial manipulation, data poisoning, and model leakage.
- Develop and validate countermeasures that make AI trustworthy and resilient.
- Help you protect the integrity of the data you use to train AI systems or submit to an AI model.
- Support compliance with the EU AI Act through concrete methods, tools, and evaluations.
- Build talent and capacity by engaging researchers, students, and industry experts.
We have been working on AI Security since 2021, and Secure AI now stands as Sweden’s collaborative hub for ensuring that AI systems remain safe, reliable, and aligned with our values, even in the face of evolving threats.
Projects and talent programs
AI-Powered Honeypots
LeakPro: Leakage profiling and risk oversight for machine learning models
AI Security Graduate Program
Keynotes and publications on Secure AI
A selection of keynotes and publications. Visit our listing of scientific publications and AI Sweden’s YouTube channel for a comprehensive list.
Related news
Get involved
AI Sweden partners can reach out to Mats Nordlund.
Not a partner? Learn more about partnering with AI Sweden.