LeakPro enables collaboration around sensitive data

Wednesday, January 29, 2025

In the latest episode of the AI Sweden Podcast, Johan Östman, researcher and project manager at AI Sweden, talks about LeakPro. The project’s goal is to better understand—and therefore be able to assess—the risk of models leaking training data.

If I were to describe our objective in one sentence, it would be to unlock collaborations around sensitive data.

Johan Östman

Research scientist at AI Sweden

LeakPro builds on conclusions drawn in a previous AI Sweden project. In the Regulatory Pilot Testbed, Region Halland and Sahlgrenska University Hospital worked together with the Swedish Authority for Privacy Protection (IMY) and AI Sweden. The aim then was to explore how federated learning could be used in healthcare. IMY concluded that it was impossible to assess the probability of models trained on personal data leaking training data. Consequently, the models themselves must be considered personal data and handled according to data protection regulations.

LeakPro aims to answer that central question: What is the probability of a certain model leaking training data? The organizations working to find this answer are AstraZeneca, Region Halland, Sahlgrenska University Hospital, Scaleout, Syndata, RISE, and AI Sweden.

Knowing the probability of leakage opens two doors. First, it enables operational decision-making. "Is the risk of training data leaking reasonable compared to the value the model can create?" Second, it allows developers to evaluate their models and further develop them to reduce the risk of leakage to an acceptable level.
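To make the leakage question concrete: the standard way to estimate a model's leakage risk is a membership inference attack, where an adversary tries to tell training samples ("members") apart from unseen samples. The sketch below is purely illustrative and is not LeakPro's actual code; it trains a small logistic regression on synthetic data and runs a simple loss-threshold attack, where attack accuracy near 0.5 suggests little leakage and higher values suggest more.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Synthetic binary classification data (illustrative only)."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)
    return X, y

X_train, y_train = make_data(200)   # "members": seen during training
X_test, y_test = make_data(200)     # "non-members": never seen

# Train a logistic regression with plain gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    g = p - y_train
    w -= 0.1 * (X_train.T @ g) / len(y_train)
    b -= 0.1 * g.mean()

def losses(X, y):
    """Per-example cross-entropy loss under the trained model."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Loss-threshold membership inference: guess "member" when the loss is
# below the median loss over all samples, since models tend to fit
# training points more closely than unseen ones.
all_losses = np.concatenate([losses(X_train, y_train), losses(X_test, y_test)])
guesses = all_losses < np.median(all_losses)
truth = np.concatenate([np.ones(200), np.zeros(200)])
accuracy = (guesses == truth).mean()
print(f"membership-inference attack accuracy: {accuracy:.2f}")
```

An operator could compare such an attack accuracy against an agreed-upon threshold when weighing leakage risk against the model's value; real assessments (and LeakPro itself) use considerably more sophisticated attacks than this threshold heuristic.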

"We want to build something relevant and useful, and to achieve that, we have to be at the forefront of academic research, utilizing the most sophisticated attacks available," says Johan Östman.

In the podcast episode, he elaborates on the project's background, how collaboration between actors from different sectors strengthens LeakPro, its goals, and what the journey entails.

You can listen to an excerpt here:

Listen to the full episode



Related AI Sweden news articles

Johan Östman and Fazeleh Hoseini, Research engineers at AI Sweden

When will an AI model reveal your sensitive data?

2024-06-05
AI models can leak training data; this is known. However, such leakage has mainly been observed in lab-like conditions that often favor the attacker. Today, there are few answers on what the risks look...
Mats Nordlund speaking during the AI in Automotive event October 1, 2024

AI Sweden forms an AI & Security Consortium

2024-10-16
AI Sweden is establishing an AI & Security Consortium, building on the success of the Edge Learning Consortium and leveraging both new and ongoing projects in the security domain.
From ships to chips: AI talent program tackles security challenges

2024-08-23
How can AI help in detecting suspicious ship movements at sea? How can AI help protect IT infrastructure making it more resistant to hacker attacks? And can we prevent data leakage from AI models...