Opening remarks
2025 was clearly a year of rapid developments in the AI world. Industry and the public sector are applying more solutions than ever before across sectors and domains to drive concrete results.
2025 was also the year when AI Sweden’s long-term collaborative efforts and strategic initiatives scaled their impact. Over the last few years, the partner network has expanded significantly. AI Sweden’s partners are not just learning together; they are actively collaborating on breakthrough innovations that deliver value across Swedish industry and society. That is why this year’s impact report specifically highlights five impact stories that truly showcase the power of collaboration and applied AI.
2026 will be a year of continued rapid development. Moreover, it is becoming increasingly evident that exponential AI development does not exist in a vacuum. Geopolitical considerations in particular call for Sweden to secure its own technological agency while intensifying strategic alliances both nationally and globally. This requires strategic investments in technological and human capabilities, not least securing access to talent. By building on the diverse strengths in the AI ecosystem, we can maximize the value and impact for Sweden.
Let us make 2026 the year of partnership to accelerate together.
Annika Elfström
Acting Managing Director at AI Sweden
SEK 143 million
In 2025, SEK 143 million was invested in AI Sweden. We also estimate that engagement and non-cash contributions, such as donated time and infrastructure, add a further SEK 200+ million in value.
50+ public organizations collaborating
In 2025, 55 municipalities, regions, and government agencies participated in and co-funded the project A shared digital assistant for the public sector (Svea), with another 65+ new partners joining in 2026 – making it Sweden’s largest collaborative AI initiative within the public sector.
10 million GPU hours
Granted to the EU project OpenEuroLLM, in which AI Sweden collaborates with 20 leading organizations to develop a family of high-performance, open-source language models for all official EU languages.
89 talents
89 talents from 12 Swedish and international universities joined AI Sweden’s talent programs in 2025, adding to an alumni community of 390+ trained talents.
170+ partners
AI Sweden’s partner network grew by 30% in 2025, now representing organizations across every major sector in Sweden.
8000+ people
More than 8,000 people participated across AI Sweden’s 100+ events and seminars, learning from peers and exchanging insights.
100+ podcast episodes
In December 2025, the AI Sweden Podcast, one of Sweden’s top tech podcasts, released its 100th episode. In 2025 alone, listeners spent 25,000 hours tuning in.
30+ publications
Whitepapers, reports, and peer-reviewed scientific papers co-authored with partners – including 3 papers accepted at leading AI conferences such as ICLR, NeurIPS, and SaTML.
20 countries
In 2025, we collaborated internationally across 20 countries through projects, partnerships, and talent exchanges.
So much happened in 2025! This report highlights a selection of achievements we are proud of, but it doesn’t stop there – have a look at the full picture and see for yourself.
Creating impact, together.
Behind the numbers lies a capacity built in collaboration with our partners. To show what this work enables in practice, we highlight a selection of cases below that demonstrate how joint efforts, shared resources, and applied AI create tangible value across sectors.
Scaling Sweden’s largest collaborative AI project within the public sector
Enabling collaboration around sensitive data through secure AI
Strengthening Europe’s digital sovereignty through trustworthy language models
Reducing industrial carbon emissions through AI-driven energy optimization
Best practices for AI implementation in Sweden
Svea: Scaling Sweden’s largest collaborative AI project within the public sector
The Swedish public sector faces a demographic challenge: a shrinking workforce must meet the needs of an aging population, while administrative tasks remain burdensome and time-consuming. To safeguard public welfare, large language models can be adopted to support routine tasks, freeing up time for more complex and value-creating work.
During 2025, the project A shared digital assistant for the public sector, also known as Svea, continued to scale across municipalities, regions, and government agencies. More than 50 organizations participated in and co-funded the Svea project, and another 65+ new partners are joining in 2026. Svea is more than a web-based chat interface: the project supports skill development, analyzes the application of legal frameworks, and investigates secure data sharing, all running on infrastructure in Sweden. The project is coordinated by AI Sweden and is Sweden’s largest collaborative AI initiative within the public sector.
Svea empowers public sector employees to increase the quality of their work while also saving many hours every week: time that can be spent on more interpersonal and valuable work, rather than administration.
Jonatan Permert
Project manager for the initiative
9 of 10 active users report both saving time and improving the quality of their work.
50+ public sector organizations engaged in the project.
15,000+ registered users across municipalities, regions, and government agencies.
500,000+ data points annotated by 700 public sector employees.
2+ weeks saved per active user per year. *
50 MSEK in estimated annual efficiency gains based on current usage. **
* Based on an average of ~1.9 hours saved per week among active users.
** Calculated from reported time savings among today’s active users.
Implementing Svea permanently would significantly reduce administrative workloads. Beyond efficiency, it would help build the foundations for a sovereign and trustworthy AI infrastructure in Sweden. By collaborating, the public sector can reduce its dependency on external systems and build vital digital competence.
We believe that the future of AI for the public sector must be built on infrastructure that is secure, transparent, and nationally grounded. That is why we are proud to provide the computing power that makes it possible to run Svea within Sweden, giving municipalities and government agencies a secure foundation for integrating AI into their operations.
Robert Lidberg
CEO & Co-founder, Airon
The project provides us with invaluable knowledge, networking opportunities, and above all, a significant increase in our AI maturity, both at an organizational and individual level.
Linda Burman
Innovation leader at Huddinge municipality
Organizations participating: AI Sweden, Ale kommun, Alingsås kommun, Arvika kommun, Burlövs kommun, Eda kommun, Flens kommun, Forshaga kommun, Grums kommun, Göteborgs stad, Göteborgsregionen, Hagfors kommun, Hammarö kommun, Huddinge kommun, Härryda kommun, Höganäs kommun, Inera, Jönköpings kommun, Karlstad kommun, Kil kommun, Kristinehamn kommun, Kungsbacka kommun, Kungälv kommun, Kävlinge kommun, Lerum kommun, Lidköping kommun, Lilla Edet kommun, Luleå kommun, Malmö stad, Mölndals kommun, Partille kommun, Region Halland, Region Skåne, Region Värmland, SKR-arbetsgivarpolitiska avdelningen, Skara kommun, Skövde kommun, Sollentuna kommun, Staffanstorp kommun, Stenungsund kommun, Stockholms stad, Storfors kommun, Sunne kommun, Svenljunga kommun, Svedala kommun, Swedac, Säffle kommun, Tibro kommun, Tjörns kommun, Torsby kommun, Trelleborgs Kommun, Upplands Väsby kommun, Uppsala kommun, Vara kommun and Öckerö kommun.
The project is co-funded by Vinnova and participating organizations.
In the initiative’s ongoing stage, Airon is the technology partner and contributes AI hardware. Airon is a Swedish supplier of computing power with a particular focus on data protection, integrity, and sustainability.
Intel was the technology partner in the first stage and contributed computing power based on the Gaudi 2 hardware. Intel's contribution was crucial for the initiative to be formed.
LeakPro: Enabling collaboration around sensitive data through secure AI
Secure AI adoption requires resolving critical challenges at the intersection of law and technology. A top priority for many organizations handling sensitive data is mitigating the risk of training data leakage from AI models.
In 2025, we concluded the LeakPro project together with our partners. A flagship initiative within our broader work on secure AI, the project combined advanced technical testing with legal and organizational perspectives to deliver a reusable open-source framework for evaluating privacy leakage in machine learning. The framework has been validated across several data modalities and in real-world settings in healthcare, pharma, and the public sector, and serves as a tool that gives lawyers and technical experts a joint understanding of risks.
In some cases, including evaluating the robustness of federated learning systems and assessing whether trained models could be open-sourced without exposing sensitive information, the results from LeakPro have informed real technical and organizational decisions at industry partners, such as Scaleout. In Scaleout’s case, parts of the framework are now included in customer offerings.
Beyond the technical platform itself, research emerging from secure AI initiatives such as LeakPro has gained international recognition, including the acceptance of four papers at some of the world’s leading AI conferences: ICLR, SaTML, and NeurIPS.
To make well-informed decisions that balance benefits against risks, we need a clear way to measure risk. From a legal point of view, there must be some form of consensus on managing and assessing risks for new AI solutions. Tools for evaluating the latest AI technologies are lacking, which is why a project like LeakPro is important to us.
Magnus Kjellberg
Head of the Center of Excellence for AI at Sahlgrenska University Hospital
The components of LeakPro:
Privacy assessments of models (black-box and white-box attacks); synthetic data; and federated learning
4 types of data leakage risks evaluated Covering different ways sensitive information could unintentionally be exposed in trained models, federated learning, and synthetic data.
4 data types supported The LeakPro framework is validated across tabular, image, time-series, and text-related settings.
3 research papers published: peer-reviewed papers that advance how privacy attacks are evaluated in practice.
9 MSc theses produced within the project.
3 PhD researchers engaged LeakPro used as part of ongoing doctoral research at different stages.
Industry uptake: parts of LeakPro are now integrated into Scaleout’s customer offering, Syndata is using LeakPro in two of their projects, and the framework is currently used in two other Vinnova-funded projects.
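To make the leakage risk concrete: the simplest form of membership inference attack – one of the attack classes frameworks like LeakPro evaluate – thresholds a model’s per-example loss, since models tend to fit their training data more closely than unseen data. The sketch below is a minimal, hypothetical illustration using scikit-learn; it does not use LeakPro’s actual API, and the toy data and names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setting: random labels in high dimension force the model to
# memorize its training set, which is what membership inference exploits.
X = rng.normal(size=(100, 100))
y = rng.integers(0, 2, size=100)
X_in, y_in = X[:50], y[:50]      # members (used for training)
X_out, y_out = X[50:], y[50:]    # non-members (never seen)

model = LogisticRegression(max_iter=1000).fit(X_in, y_in)

def nll(model, X, y):
    """Per-example negative log-likelihood of the true label."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_in, loss_out = nll(model, X_in, y_in), nll(model, X_out, y_out)

# Attack: guess "member" whenever the loss falls below a threshold.
threshold = np.median(np.concatenate([loss_in, loss_out]))
tpr = float(np.mean(loss_in < threshold))   # members correctly flagged
fpr = float(np.mean(loss_out < threshold))  # non-members wrongly flagged
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}, advantage={tpr - fpr:.2f}")
```

A gap between the true-positive and false-positive rates means an attacker can tell training data apart from unseen data – the kind of quantified signal that lets lawyers and engineers discuss a model’s privacy risk on common ground.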
LeakPro’s impact extends beyond individual technical pilots. By providing a validated framework to quantify privacy leakage risks, the project lowers the threshold for organizations to collaborate, reuse data, and deploy AI solutions. This becomes increasingly critical as European regulations tighten, especially in healthcare and other sensitive domains.
Looking ahead, LeakPro II has launched as a direct continuation of the project. It aims to scale and formalize the capabilities established in the first phase. While the first phase has already demonstrated how privacy risk assessment can support real operational decisions, LeakPro II will strengthen this into methods and tools that can be standardized across organizations and sectors, creating lasting ecosystem value.
The results of our researchers and scientists working in AI security have recently been published at NeurIPS and ICLR, two of the top conferences in AI research. This demonstrates that the environment and colleagues here at AI Sweden are world leaders in their fields. At a time when EU regulations are introducing new and strict requirements for transparency and security in high-risk systems, our research—driven together with our partners—delivers not only theory, but also concrete tools to meet the future regulatory landscape and build AI solutions that society can trust.
Mats Nordlund
Director of AI Labs at AI Sweden
Organizations participating: AI Sweden, RISE, Scaleout, Syndata, Sahlgrenska University Hospital, Region Halland, and AstraZeneca. IMY and Esam participated as part of the reference group.
The project was co-funded by Vinnova: Advanced Digitalisation.
OpenEuroLLM: Building Europe’s foundation for trustworthy language models at scale
As stricter requirements around transparency, legal compliance, and digital sovereignty increasingly limit the use of commercial models in regulated environments, the demand for sovereign European large language models continues to grow. Against this backdrop, the EU has outlined a roadmap to secure Europe’s technical and economic independence. OpenEuroLLM is a major European collaboration addressing these challenges by developing open and compliant language models aligned with European values – a critical foundation for scaling AI responsibly.
OpenEuroLLM brings together 20 leading research institutions, AI companies, and supercomputing centers to advance European AI capabilities. During 2025, the project became the first AI initiative to receive coordinated strategic access to compute across multiple EuroHPC systems, securing the infrastructure required to train next-generation European language models at scale. This is a major achievement and secures Sweden’s access to world-class compute.
20 partners From leading research institutions, AI companies, and supercomputing centers across Europe.
4 supercomputers Strategic access to Europe’s most powerful HPC systems – LUMI, Leonardo, Jupiter & MareNostrum5 – connecting Sweden to world-class resources.
A minimum of 10M GPU hours Granted to train large language models.
€34 M Total project budget.
AI Sweden plays a leading role in the project through our Natural Language Understanding (NLU) team. Our participation rests on a solid foundation of experience, including the development of the first large Nordic language model, GPT-SW3, and the ongoing work within TrustLLM, which focuses on reliable models for Germanic languages.
If the project succeeds in delivering larger trained models with full transparency, it could lower the barriers to AI adoption across sectors and industries in Europe. Beyond the models themselves, OpenEuroLLM establishes a sovereign AI infrastructure.
By developing these models openly, the transparency of data, training, and governance increases, allowing Swedish public and private sectors to build applications where they fully understand the data and training process, a prerequisite for compliance with the EU AI Act. At the same time, the project’s collaborative approach builds collective capacity, ensuring efficient use of compute and bringing together expertise that would be difficult to achieve through isolated national initiatives alone.
We must be able to use large language models in a great many societal sectors. Sometimes it's possible to use commercial solutions, but many times we will need open and transparent models developed in line with European values.
Nina Ökvist
Head of NLU (Natural Language Understanding), AI Sweden
Organizations participating: AI Sweden, Charles University, Institute of Formal and Applied Linguistics - Czechia (coordinator), Alliance for Language Technologies EDIC, (ALT-EDIC) - France, Eindhoven University of Technology - the Netherlands, ELLIS Institute Tübingen - Germany, Fraunhofer IAIS - Germany, Research Center Juelich - Germany, University of Helsinki - Finland, University of Oslo - Norway, University of Turku - Finland, University of Tübingen (Tübingen AI Center)- Germany, Silo GenAI (AMD Silo AI) - Finland (co-lead), Aleph Alpha Research - Germany, ellamind - Germany, LightOn - France, Prompsit Language Engineering - Spain, Barcelona Supercomputing Center - Spain, Cineca Interuniversity Consortium - Italy, CSC - IT Center for Science - Finland, SURF - the Netherlands.
The project is co-funded by the Digital Europe Programme under grant agreement No 101195233.
Energy: Decarbonizing industry with AI-enabled energy optimization
At Volvo Group’s Skövde plant, a small number of energy-intensive processes account for around 10% of the company’s total operational greenhouse gas emissions, making them a critical focus for decarbonization.
The Skövde Plant Approaching Carbon Elimination initiative builds on a prestudy in which Volvo Group, Skövde Energi, the University of Skövde, and AI Sweden explored how AI could support the transition toward low-carbon industrial production.
Central to the work was transforming the plant’s conventional digital system into an industrial AI system to enable smart energy utilization and production planning. This included developing simulation-based optimization models and an AI cloud platform for both the factory and the surrounding electricity grid. The results of the prestudy are now being carried forward by Volvo Group in a main implementation project.
If the Skövde Plant Approaching Carbon Elimination project is fully implemented with the help of the AI system developed in the prestudy:
Up to 88% projected CO2 reduction in targeted processes
340,000+ tonnes of CO2e estimated cumulative emissions avoided over the first 10 years of operation
The shift from fossil-fuelled to electric furnaces in the manufacturing process, expected to reduce greenhouse gas emissions by up to 88%, greatly increases the factory’s electricity demand. The AI system addresses this by coordinating the factory’s electricity use with the municipality’s overall demand. In practice, this means the plant avoids consuming large amounts of electricity at the same time as the surrounding community. The result is not only lower emissions, but also smoother coexistence with the surrounding municipalities’ need for electricity.
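As a hypothetical sketch of this coordination principle – not the project’s actual system – a flexible, energy-intensive job can be scheduled into the hours when the surrounding grid’s forecast demand is lowest. The function name and the demand figures below are illustrative assumptions.

```python
def schedule_flexible_load(community_demand, hours_needed):
    """Pick the hours with the lowest forecast community demand (MW)."""
    ranked = sorted(range(len(community_demand)),
                    key=lambda h: community_demand[h])
    return sorted(ranked[:hours_needed])

# Illustrative forecast of community demand (MW) for each hour of one day:
# low overnight, peaking in the morning and early evening.
forecast = [310, 295, 280, 275, 290, 340, 420, 500, 520, 510, 480, 470,
            460, 455, 465, 490, 530, 560, 540, 500, 450, 400, 360, 330]

# A furnace run needing six hours lands in the overnight trough.
hours = schedule_flexible_load(forecast, hours_needed=6)
print("Run the furnace during hours:", hours)
```

A real system would of course optimize against electricity prices, grid constraints, and production schedules jointly, but the core idea is the same: shift flexible consumption away from the community’s peak hours.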
With a modular approach, the project also ensures scalability to other Volvo Group sites, meaning it has potential to significantly reduce emissions across the entire organisation. Over its first ten years of operation, the project estimates a cumulative avoided impact of approximately 340,000 tonnes of CO2 equivalents.
Beyond technical performance, the project also demonstrates how close collaboration between industry, energy providers, academia, and AI infrastructure can unlock new ways of operating industrial systems as active participants in the energy ecosystem: not only as consumers, but as flexible, data-driven partners contributing to grid stability and decarbonization.
In addition to saved emissions, we also learned that the switch to an AI system could make the factory an active participant in the energy ecosystem. If this approach is scaled beyond the Skövde plant, it could change how energy-intensive industry and the power grid work together everywhere.
Tony Holmqvist
Chapter Lead at Volvo Group
Flexibility in both production and consumption is key to the efficient use of the electricity system. That means adjusting how much energy is consumed or produced depending on the current state of the grid. To make that possible, AI and machine learning are crucial tools.
Anna Svensson
Co-lead for AI Sweden’s work on energy
Organizations participating: AB Volvo, AI Sweden, Skövde Energi and University of Skövde.
The project was co-funded by Advanced Digitalisation - Vinnova
Data-driven organizations: Best practices for AI operationalization in Sweden
Many organizations struggle to move from pilots to large-scale implementation of AI. Complex ways of building, running, and maintaining AI systems, combined with long lead times, unclear roles and responsibilities, and high infrastructure requirements often slow adoption and limit value creation.
To tackle these challenges, leading Swedish organizations from industry, academia, and the public sector joined forces in the Data-Driven Organizations initiative to help formulate best practices for AI operationalization in Sweden. Over 20 months, project partners explored several use cases to better understand how to deploy AI solutions that are sustainable, compliant, and administratively manageable. This included evaluating hardware for different computational workloads, building secure and compliant infrastructure across development and production without physical separation of solutions, and exploring how organizations can handle large-scale model management – such as operating more than 1,000 models in production simultaneously.
Rather than each organization learning in isolation, Data-Driven Organizations has provided a shared starting point for how to structure AI infrastructure, governance, and operations. The practices identified and documented provide a shared foundation that lowers the barrier for organizations to move from pilot AI projects to full-scale, sustainable operations. By offering blueprints for scalable machine learning operations, governance frameworks, and practical lessons from real cases, Data-Driven Organizations enables learning across sectors and a stronger collective capability for AI adoption.
20 project partners in cross-sector collaboration between industry, public sector and academia
10+ whitepapers: produced by the project’s participating organisations, documenting tested practices for AI operationalization across sectors.
35 MSEK: Project budget
8 real-world use cases studied in the project
Thanks to our participation in the project, we have received support in optimizing capacity and resources within MLOps, while also gaining valuable insights from various sectors. It has been a very positive and educational experience for the entire team.
Lina Gårdemark
Data Engineer/Data Scientist at Region Halland
As a result of DDO, we now have a blueprint for how to scale MLOps on a small number of clusters and a shared GPU pool. This will increase the utilization of investments made. We also see great value in the whitepapers produced by the other participants and will work to adapt them to our needs.
Daniel Jakobsson
AI Strategist at Trafikverket
Sustainability doesn't have to be a fancy word you mention to sound green. It's really just about not being wasteful: choosing what you actually need and optimizing what you have. The barrier to AI is lower than most of us think - what you need is smart architecture, not just computational muscle.
Milena Miernik
Developer at Aixia
Participating organizations: AI Sweden, AIXIA, Hewlett Packard Enterprise, Linköping University, NetApp, Proact, Red Hat, Region Halland, RISE, Santa Anna IT Research Institute, Sahlgrenska University Hospital, Statistics Sweden, the Swedish Tax Agency, Stormgrid, the Swedish Transport Administration, Volvo Parts, Region Västra Götaland, IBM Svenska, Predli Consulting and Hopsworks.
This project was co-funded by Vinnova and participating partners.
Our approach
AI Sweden’s impact is created where shared ambition meets practical application. Rather than working on isolated projects and solutions, we focus on building the conditions that enable many organizations to move forward together. In practice, this means addressing the barriers that most often slow down AI adoption: access to data and infrastructure, legal and regulatory uncertainty, skills gaps, and the challenge of scaling pilots to full-scale implementation.
Together with our partners, we address these barriers by acting as a catalyst for change. We create shared conditions for progress, and thus facilitate the turn from individual ambition into collective capacity and greater impact. Want to join us in our mission? Read more about becoming a partner.
AI Sweden receives funding from Vinnova, the Swedish Agency for Economic and Regional Growth and the European Regional Development Fund, as well as from our 170+ partners — including Västra Götalandsregionen, who provide extended regional funding.
The 3 most-listened-to AI Sweden Podcast episodes of 2025
Scientific publications 2025
Subgraph Federated Learning via Spectral Methods, Javad Aliakbari (Chalmers University of Technology), Johan Östman (AI Sweden), Alexandre Graell i Amat (Chalmers University of Technology). Presented at NeurIPS 2025.
Abstract
We consider the problem of federated learning (FL) with graph-structured data distributed across multiple clients. In particular, we address the prevalent scenario of interconnected subgraphs, where interconnections between clients significantly influence the learning process. Existing approaches suffer from critical limitations, either requiring the exchange of sensitive node embeddings, thereby posing privacy risks, or relying on computationally-intensive steps, which hinders scalability. To tackle these challenges, we propose FedLap, a novel framework that leverages global structure information via Laplacian smoothing in the spectral domain to effectively capture inter-node dependencies while ensuring privacy and scalability. We provide a formal analysis of the privacy of FedLap, demonstrating that it preserves privacy. Notably, FedLap is the first subgraph FL scheme with strong privacy guarantees. Extensive experiments on benchmark datasets demonstrate that FedLap achieves competitive or superior utility compared to existing techniques.
Destabilizing a Social Network Model via Intrinsic Feedback Vulnerabilities. Rogers, Lane H., Emma J. Reid, and Robert A. Bridges. SafeThings Workshop, IEEE S&P 2025.
Abstract
Social influence plays a significant role in shaping individual sentiments and actions, particularly in a world of ubiquitous digital interconnection. The rapid development of generative AI has engendered well-founded concerns regarding the potential scalable implementation of radicalization techniques in social media. Motivated by these developments, we present a case study investigating the effects of small but intentional perturbations on a simple social network. We employ Taylor's classic model of social influence and tools from robust control theory (most notably the Dynamical Structure Function (DSF)), to identify perturbations that qualitatively alter the system's behavior while remaining as unobtrusive as possible. We examine two such scenarios: perturbations to an existing link and perturbations that introduce a new link to the network. In each case, we identify destabilizing perturbations of minimal norm and simulate their effects. Remarkably, we find that small but targeted alterations to network structure may lead to the radicalization of all agents, exhibiting the potential for large-scale shifts in collective behavior to be triggered by comparatively minuscule adjustments in social influence. Given that this method of identifying perturbations that are innocuous yet destabilizing applies to any suitable dynamical system, our findings emphasize a need for similar analyses to be carried out on real systems (e.g., real social networks), to identify the places where such dynamics may already exist.
Practical Bayes-Optimal Membership Inference Attacks, Marcus Lassila (Chalmers University of Technology), Johan Östman (AI Sweden), Khac-Hoang Ngo (Linköping University), Alexandre Graell i Amat (Chalmers University of Technology). Presented at NeurIPS 2025.
Abstract
We develop practical and theoretically grounded membership inference attacks (MIAs) against both independent and identically distributed (i.i.d.) data and graph-structured data. Building on the Bayesian decision-theoretic framework of Sablayrolles et al., we derive the Bayes-optimal membership inference rule for node-level MIAs against graph neural networks, addressing key open questions about optimal query strategies in the graph setting. We introduce BASE and G-BASE, tractable approximations of the Bayes-optimal membership inference. G-BASE achieves superior performance compared to previously proposed classifier-based node-level MIA attacks. BASE, which is also applicable to non-graph data, matches or exceeds the performance of prior state-of-the-art MIAs, such as LiRA and RMIA, at a significantly lower computational cost. Finally, we show that BASE and RMIA are equivalent under a specific hyperparameter setting, providing a principled, Bayes-optimal justification for the RMIA attack.
Felix Stollenwerk and Tobias Stollenwerk. 2025. Better Embeddings with Coupled Adam. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 27219–27236, Vienna, Austria. Association for Computational Linguistics.
Abstract
Despite their remarkable capabilities, LLMs learn word representations that exhibit the undesirable yet poorly understood feature of anisotropy. In this paper, we argue that the second moment in Adam is a cause of anisotropic embeddings, and suggest a modified optimizer called Coupled Adam to mitigate the problem. Our experiments demonstrate that Coupled Adam significantly improves the quality of embeddings, while also leading to better upstream and downstream performance on large enough datasets.
Kätriin Kukk, Danila Petrelli, Judit Casademont, Eric J. W. Orlowski, Michal Dzielinski, and Maria Jacobson. 2025. BiaSWE: An Expert Annotated Dataset for Misogyny Detection in Swedish. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 307–312, Tallinn, Estonia. University of Tartu Library.
Abstract
In this study, we introduce the process for creating BiaSWE, an expert-annotated dataset tailored for misogyny detection in the Swedish language. To address the cultural and linguistic specificity of misogyny in Swedish, we collaborated with experts from the social sciences and humanities. Our interdisciplinary team developed a rigorous annotation process, incorporating both domain knowledge and language expertise, to capture the nuances of misogyny in a Swedish context. This methodology ensures that the dataset is not only culturally relevant but also aligned with broader efforts in bias detection for low-resource languages. The dataset, along with the annotation guidelines, is publicly available for further research.
Browse all 2025 news articles