
LeakPro 2 expands successful AI privacy assessment tool

Wednesday, March 18, 2026

LeakPro 2 aims to create a tool that not only quantifies the risk that a model could leak sensitive data, but also the impact such a leak could have. The work builds on the successful results of LeakPro 1, a project whose tools are already creating value for AI Sweden partners.


Project partner participants at the LeakPro II kickoff: Viktor Valadi (Scaleout), Mattias Åkesson (Scaleout), Mats Nordlund (AI Sweden), Johan Östman (Recorded Future), Henrik Forsgren (RISE), Rickard Brännvall (RISE), Samuel Genheden (AstraZeneca), Fazeleh Hoseini (AI Sweden), Håkan Warston (Saab), Mattias Ripoll (Syndata), Marcus Lingman (Region Halland)

“A probability score for attack success is a great start, but it doesn’t tell you how serious the consequences could be,” says Fazeleh Hoseini, research scientist and project manager for LeakPro 2. She continues:


“Is it a single data point or an entire database at risk? Are the affected individuals part of a vulnerable group? And how does the risk align with regulations like GDPR or the EU AI Act? In LeakPro 2, while we are expanding the technical part, such as attacking generative models, we are adding the dimensions of harm, legal compliance, and organizational auditing, to give decision-makers a tool that actually works in practice.”

Fazeleh Hoseini, Research scientist, AI Sweden

Addressing the privacy paradox

For industries subject to strict regulations, like healthcare, finance, and government, the "Privacy Paradox" has long been a barrier to innovation: how can you leverage the power of AI without risking the exposure of sensitive data? Addressing this question, LeakPro was launched in 2023 to develop a tool to assess privacy risks in machine learning.

LeakPro 1 focused on developing technical attacks against AI models in order to quantify the risk of data leakage. The result was an open-source tool that is already being used by partners to evaluate the security of AI models.
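The core idea behind such attacks can be illustrated with a membership inference test: an overfit model behaves measurably differently on records it was trained on, and an attacker exploits that gap to infer who was in the training data. A minimal sketch of a loss-threshold attack follows; the simulated model and all names are hypothetical and do not reflect LeakPro's actual API:

```python
import random

random.seed(0)

def model_loss(x, training_set):
    # Simulated overfit model: near-zero loss on records it was
    # trained on, clearly higher loss on unseen records.
    base = 0.05 if x in training_set else 1.0
    return base + random.gauss(0, 0.02)

training_set = set(range(50))   # "members" of the training data
held_out = set(range(50, 100))  # "non-members"

def guess_is_member(x, threshold=0.5):
    # Loss-threshold attack: suspiciously low loss => probably a member.
    return model_loss(x, training_set) < threshold

correct = sum(guess_is_member(x) for x in training_set)
correct += sum(not guess_is_member(x) for x in held_out)
attack_accuracy = correct / 100  # 0.5 would mean chance level, i.e. no leakage
```

For this deliberately overfit toy model the attack succeeds almost perfectly; a well-regularized or differentially private model would push the accuracy back toward 0.5, which is the kind of gap a tool like LeakPro quantifies.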


“We work with projects where data privacy is extremely important, and there we have incorporated both LeakPro as a framework and the learnings we have gained from developing it together with AI Sweden and the other partners. Our decisive question for LeakPro in such projects is: can we do this? For privacy assessments, frameworks and know-how are equally important.”

Viktor Valadi, Scaleout Systems

Covering GenAI, compliance, and impact quantification

Fazeleh Hoseini likens the project’s expansion in phase two to the Penrose triangle, an ‘impossible’ geometric figure, as a metaphor for the conflicting demands that AI developers, lawyers, and organizations grapple with: the technology demands precision, the regulations are open to interpretation, and organizations need practical guidelines.

With partners such as Scaleout productifying the tool, Syndata working with synthetic data, and Recorded Future adding cyber security expertise, LeakPro 2 becomes a catalyst for privacy-aware AI in Sweden. The project's aim to open-source the code is a conscious strategy to democratize AI privacy and make advanced risk assessment accessible to everyone, from startups to authorities.

One of the most requested features in this second phase of LeakPro is support for large language models (LLMs). In the first phase, generative models were deemed possible to evaluate but were excluded due to their complexity. Now they are a high priority.


The components of LeakPro

Privacy assessments of models (black-box and white-box attacks), synthetic data, and federated learning.


“LLMs introduce a new class of privacy risks for data-intensive industries like healthcare, finance, and pharma, where sensitive data is everywhere. Patient records, clinical trials, financial transactions, and proprietary research: these have always been sensitive, but LLMs are now being trained on this data at scale. Until now, there's been no comprehensive and systematic way to measure or mitigate the risk that it leaks back out, whether that's personal data or intellectual property.”

Fazeleh Hoseini, Research scientist

By including LLMs in LeakPro 2's attack suite, organizations can assess whether the AI models they use or share inadvertently expose sensitive data, whether that's personal information, proprietary research, or confidential records.
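One common way to probe an LLM for this kind of exposure is a prefix-completion test: feed the model the beginning of a sensitive record and check whether it completes the rest verbatim, which would indicate memorization. The sketch below is purely illustrative; the `complete` stub stands in for a real model call and is not part of any actual LeakPro interface:

```python
def complete(prefix):
    # Stub "LLM": pretend the model has memorized exactly one
    # training record and completes it verbatim from any prefix.
    memorized = "Patient 4711 was diagnosed with condition X on 2021-03-02"
    if memorized.startswith(prefix):
        return memorized
    return prefix + " [generic continuation]"

def leaks_verbatim(record, prefix_len=20):
    # Flag a record as leaked if the model reproduces it in full
    # from a short prefix.
    return complete(record[:prefix_len]) == record

records = [
    "Patient 4711 was diagnosed with condition X on 2021-03-02",
    "Patient 1234 was treated for condition Y on 2020-07-15",
]
leaked = [r for r in records if leaks_verbatim(r)]
```

In practice such probes are run at scale across many candidate records and prefix lengths, and exact-match comparison is usually relaxed to near-verbatim overlap.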

“LeakPro 2 is not just a continuation of a successful project; it's an expansion that turns a useful tool into a practical framework for responsible AI. Beyond measuring risk, it also helps organizations actively mitigate it by optimizing privacy-enhancing technologies (PETs) like federated learning and synthetic data to find the right balance between privacy and utility. By integrating risk assessment, PET optimization, legal compliance, and organizational workflows, we're building a resource for safer, more trustworthy AI adoption,” says Fazeleh Hoseini.

LeakPro 2 will focus on:

  • Expanding the technical attack suite to include large language models (LLMs), addressing a critical gap in LeakPro 1.
  • Integrating legal and organizational perspectives into a DPIA-aligned (Data Protection Impact Assessment) workflow, to connect technical risks with real-world compliance.
  • Creating standardized, interpretable outputs to help practitioners assess and act on privacy risks without requiring deep technical domain expertise.
  • PET optimization — tuning privacy-enhancing technologies like federated learning, synthetic data, and differential privacy to find the right balance between privacy and utility.
  • Harm quantification — not just measuring attack success but connecting it to real-world impact on data subjects.
  • Benchmarking suite — creating the first objective standard for comparing privacy attacks in practice.
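The privacy-utility balance mentioned under PET optimization can be made concrete with differential privacy: the Laplace mechanism adds noise scaled by sensitivity/ε, so a smaller privacy budget ε means stronger privacy but noisier answers. A minimal sketch under that standard textbook mechanism (hypothetical illustration, not LeakPro code):

```python
import math
import random

random.seed(1)

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse transform sampling.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    # Epsilon-differentially-private count query: smaller epsilon
    # means more noise and therefore stronger privacy.
    return true_count + laplace_noise(sensitivity / epsilon)

true_count = 1000
# Average absolute error under a strict vs. a loose privacy budget.
strict_err = sum(abs(private_count(true_count, 0.1) - true_count)
                 for _ in range(200)) / 200
loose_err = sum(abs(private_count(true_count, 10.0) - true_count)
                for _ in range(200)) / 200
```

Here `strict_err` (ε = 0.1) is far larger than `loose_err` (ε = 10): tightening privacy degrades the utility of the released statistic, which is exactly the trade-off a PET-optimization step would tune.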


Facts: Secure AI in AI Labs

AI safety has become a central issue worldwide. Secure AI has also been a central part of AI Labs' projects for many years.

"We divide Secure AI into three overall categories: ensuring that AI learns the right things, that AI does the right things, and that AI does not leak information," says Mats Nordlund.

The safety aspects of AI are included in a number of projects at AI Sweden, in addition to LeakPro, including Federated Machine Learning in the Banking Sector and the Industrial Immersion Exchange Program, which is organized together with the American Dakota State University and will be held for the third time in 2024. 

Learn more about AI Sweden's work with Secure AI.