
AI Security Lab

As part of AI Sweden’s Edge Learning Consortium, and together with Dakota State University, the AI Security Lab aims to give developers and users of edge learning infrastructure the understanding needed to build robust and secure AI solutions.

"By utilizing edge devices for training AI, edge learning opens up a number of new possible surfaces for adversarial attacks. To reduce the risks and impact of attacks on edge learning systems, it is important to understand what vulnerabilities exist." said Dr. José-Marie Griffiths, President of Dakota State University, at the launch of the Edge Learning Consortium, Φ-lab @Sweden, and AI Security Lab.

The AI Security Lab will involve security researchers in the development of edge learning solutions, helping developers and users of edge learning infrastructure build robust solutions from the start.

Challenges

  • What new vulnerabilities could exist in an edge learning environment?
  • What would be preferred attack surfaces in an edge learning environment?
  • What types of threats should we be aware of? (e.g., stealing information, sabotaging the model, hijacking the system, etc.)
  • What are the strategies and technologies to detect attacks? (One illustrative detection sketch follows this list.)
  • What are the strategies and technologies to prevent attacks?
  • What are the strategies and technologies to thwart an ongoing attack?
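
As an illustration of the kind of detection strategy the lab could explore: in an edge learning setting, each device sends a model update back for aggregation, and a poisoned or hijacked update often looks statistically different from the honest ones. The minimal sketch below assumes flattened NumPy model updates and uses a hypothetical flag_suspicious_updates helper that flags updates whose L2 norm is a strong outlier relative to the median. It is only an illustrative example, not a description of the lab's actual methods.

import numpy as np

def flag_suspicious_updates(updates, z_threshold=3.5):
    """Return indices of client updates whose L2 norm is a statistical outlier.

    updates: list of 1-D NumPy arrays, one flattened model update per edge device.
    z_threshold: modified z-score cutoff (hypothetical default for illustration).
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    median = np.median(norms)
    mad = np.median(np.abs(norms - median)) or 1e-12  # guard against zero spread
    modified_z = 0.6745 * (norms - median) / mad
    return [i for i, z in enumerate(modified_z) if abs(z) > z_threshold]

# Example: nine well-behaved updates and one inflated (possibly poisoned) update.
rng = np.random.default_rng(0)
honest = [rng.normal(0, 0.01, size=1000) for _ in range(9)]
poisoned = [rng.normal(0, 1.0, size=1000)]
print(flag_suspicious_updates(honest + poisoned))  # flags index 9

Norm-based filtering is only one of many possible defenses, and more sophisticated attacks can mimic the statistics of honest updates; those gaps are exactly what the challenge questions above point to.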

The AI Security Lab is part of the Edge Learning Consortium.

Madison Cyber Labs at Dakota State University (DSU) and AI Sweden are the key sponsors of this lab. The first student exchange in 2022 attracted six students (three American and three Swedish); in 2023 the program doubled to twelve students (six plus six). The plan for 2024 is to double it once again. Read more about the exchange program.

Get involved

The Edge Learning Consortium is open to AI Sweden partners.
Get involved by contacting Beatrice Comoli.

Beatrice Comoli
Administrative Lead, Data Factory
+46 (0)70-146 09 64