Using large language models to outsmart cyber attackers

Wednesday, October 16, 2024

Can AI adapt honeypots to produce more threat intelligence by eliciting greater interaction from attackers? That question will be investigated through an innovative collaboration between Volvo Group, Region Västra Götaland, Aixia, Scaleout Systems, Dakota State University, and AI Sweden.

“It is well-known that attackers are leveraging AI to enhance their capabilities. We aim to do the same for defense – in novel ways,” says Dr. Bobby Bridges, Mathematician and Innovation Leader at AI Sweden. 

In this Vinnova-funded project, the partners will develop a framework for AI security that combines large language models (LLMs) with federated learning.

“Our vision is to integrate smart honeypots into federated learning networks. We seek to create new knowledge on how LLMs can be used to design smart, adaptive honeypots,” says Bobby, who recently joined AI Sweden and will manage the project.

Dr. Robert (Bobby) Bridges
Mathematician and Innovation Leader at AI Sweden

Honeypots are a well-known tool in cybersecurity. They are decoy computers that attract attackers' attention, either to divert attacks from an organization's real systems or to let defenders learn more about the methods attackers use.
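To make the idea concrete, here is a minimal sketch in Python of what an LLM-driven honeypot could look like: a plain TCP listener that presents an SSH-style banner, logs every command an attacker types, and asks a language model to improvise a plausible shell response. The `llm_reply` hook is hypothetical (a canned stub here, so the sketch runs on its own), and the whole example illustrates the concept rather than the project's actual implementation.

```python
import socket
from datetime import datetime, timezone


def llm_reply(command: str) -> str:
    """Hypothetical hook: in an adaptive honeypot, a large language model
    would improvise a realistic shell response here, keeping the attacker
    engaged. This stub returns a canned answer so the sketch is runnable."""
    return f"bash: {command}: command not found\n"


def run_honeypot(host: str = "0.0.0.0", port: int = 2222) -> None:
    """Low-interaction decoy: log every command, reply via the LLM hook.
    NOTE: this toy speaks plain text after the banner; a real honeypot
    would implement the SSH protocol or sit behind a tool that does."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        print(f"Honeypot listening on {host}:{port}")
        while True:
            conn, addr = server.accept()
            with conn:
                conn.sendall(b"SSH-2.0-OpenSSH_8.9p1 Ubuntu\r\n$ ")
                while True:
                    data = conn.recv(1024)
                    if not data:
                        break
                    command = data.decode(errors="replace").strip()
                    # Threat intelligence: who connected and what they tried.
                    stamp = datetime.now(timezone.utc).isoformat()
                    print(f"{stamp} {addr[0]} ran: {command!r}")
                    conn.sendall(llm_reply(command).encode() + b"$ ")


if __name__ == "__main__":
    run_honeypot()
```

The appeal of the LLM lies in that hook: instead of a fixed script of fake responses, the decoy can adapt its replies to whatever the attacker tries, which is exactly the kind of increased interaction the project aims to study.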

"We have built expertise on large language models and federated learning over several years. Federated learning is a method that allows multiple stakeholders to collaborate in training models. The honeypot project lets us fuse these areas into practical applications within the security domain," says Dr. Mats Nordlund, Director of AI Labs at AI Sweden. He adds:

"Since 2021, AI and security have been part of the work in our Edge Learning Lab, following recommendations from our advisory board. We are now further accelerating our efforts in this area, under the umbrella AI & Security, as the intersection of AI and security is becoming increasingly critical for many of our partners. This includes new projects aimed at making AI itself more robust and secure, as well as exploring how AI can be leveraged to enhance security in other domains. Our work on adaptive honeypots is one example of the latter." 

Dr. Mats Nordlund
Director of AI Labs at AI Sweden
