
AI-Powered Honeypots

Honeypots are decoy computing systems that mimic real environments to trick attackers into interacting with them. A honeypot can improve security in two ways: (1) by tying up attacker resources; (2) by enticing attackers to reveal their tools. Can AI enhance honeypot deception to increase cybersecurity?

Image: AI-generated honeycomb pattern

Motivation

Cybersecurity is fundamentally a problem of asymmetry. Any system can be breached given enough time and resources; the goal of defense is to make that effort prohibitively expensive. While traditional tools excel at blocking known threats, they often fail against zero-day attacks—novel exploits that bypass standard detection. Attackers treat these zero-day tools as high-value assets, using them sparingly to avoid detection.

This project leverages cyber deception to disrupt this economic model. By deploying honeypots—decoy systems mimicking high-stakes targets—we incentivize attackers to "burn" their expensive exploits in a controlled environment. This protects real assets while allowing us to observe and analyze the attacker's tools, tactics, and procedures in real-time.

Background: The AI opportunity

The arms race is evolving. Generative AI and large language models (LLMs) are now being used to build automated, reasoning-capable hacking agents; notably, 2025 saw the first reports of an LLM-powered attacker operating in the wild. To counter this, our project focuses on empowering honeypots with AI.

While recent studies show LLMs can enhance honeypot interactivity, current solutions are often insufficient against skilled humans or unnecessarily complex for automated bots. Our research seeks to bridge this gap by determining exactly how and when to deploy AI to maximize realism and threat intelligence collection.
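To make the idea concrete, the sketch below shows the general shape of an LLM-backed honeypot shell discussed in this line of research: an LLM is prompted to stay in character as a server, and the full command/response transcript is logged as threat intelligence. The class, prompt, and canned responses are illustrative assumptions; a real deployment would replace the stubbed `fake_llm` function with an actual model API call.

```python
# Minimal sketch of an LLM-backed shell honeypot (hypothetical design).
# A real deployment would call an actual LLM API; here the model call is
# stubbed with canned responses so the control flow is runnable.

SYSTEM_PROMPT = (
    "You are a Linux server. Reply exactly as a bash shell would, "
    "with no explanations. Stay in character for every command."
)

def fake_llm(prompt: str, history: list[str]) -> str:
    """Stand-in for a real LLM call; returns plausible shell output."""
    canned = {
        "whoami": "root",
        "uname -a": "Linux web-prod-01 5.15.0-91-generic x86_64 GNU/Linux",
    }
    return canned.get(prompt.strip(), f"bash: {prompt.split()[0]}: command not found")

class HoneypotShell:
    """Keeps per-session history so responses stay self-consistent."""
    def __init__(self):
        self.history: list[str] = [SYSTEM_PROMPT]

    def handle(self, command: str) -> str:
        reply = fake_llm(command, self.history)
        # Log the full exchange: this transcript is the threat intelligence.
        self.history.append(f"$ {command}\n{reply}")
        return reply

shell = HoneypotShell()
print(shell.handle("whoami"))
```

The open question the project addresses is visible even in this toy: a canned or shallow model is transparent to a skilled human, while a fully generative one may be wasted effort against a simple bot.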

Research challenges

We are addressing the critical engineering and theoretical challenges required to move LLM honeypots from novelty to deployment:

  • Dynamic adaptation: Designing systems that use feedback loops to continuously update their configuration and "personality" to match the attacker’s sophistication.
  • Measurement: Developing quantitative metrics that capture whether, and to what degree, a deception event succeeded.
  • Federated learning: Solving the privacy challenges of letting distributed honeypots learn collectively, ensuring that shared model updates neither reveal instance-specific network data nor over-generalize from it.
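The collective-learning idea above is commonly realized with federated averaging, where each site trains on its own attacker interactions and shares only model parameters, never raw session data. The sketch below is a minimal illustration of that aggregation step under simplifying assumptions (the "model" is just a weight vector, and the per-site gradients stand in for what local data would produce); it is not the project's actual training pipeline.

```python
# Sketch of federated averaging (FedAvg) across honeypot instances.
# Each site shares only model parameters, never raw attacker sessions.

def local_update(weights, local_grad, lr=0.1):
    """One gradient step at a single honeypot. The gradient is a stand-in
    for what that site's local attacker-interaction data would produce."""
    return [w - lr * g for w, g in zip(weights, local_grad)]

def federated_average(site_weights, site_sizes):
    """Coordinator step: average per-site weights, weighted by how much
    local data each site trained on."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(dim)
    ]

# Three honeypots start from the same global model, train locally,
# then the coordinator averages their updates into a new global model.
global_model = [0.0, 0.0]
grads = [[1.0, 2.0], [3.0, 0.0], [1.0, 1.0]]  # per-site local gradients
sizes = [100, 50, 50]                          # per-site session counts
local_models = [local_update(global_model, g) for g in grads]
global_model = federated_average(local_models, sizes)
print(global_model)
```

Note that even though raw data never leaves a site, the shared weights themselves can leak information about local traffic, which is exactly the privacy challenge the bullet above names.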

Project goals

Our research aims to develop a robust, AI-enhanced honeypot system with the following specific objectives:

  • Learning: The system will autonomously improve its intelligence-gathering capabilities based on continuous attacker interactions.
  • Federated learning: We aim to enable honeypots across different organizations to learn from one another without leaking private, network-specific data.
  • Attack modeling: The project seeks to transform raw interaction data into updated models of attacker techniques to inform broader defensive strategies.

Expected outcomes

Upon completion, this project aims to deliver:

  • Systemization of knowledge: A consolidation of rapidly emerging research on LLMs in cyber deception to identify key growth areas.
  • Empirically proven configuration strategies: Validated features for configuring LLM honeypots, specifically contributing to the Beelzebub open-source project.
  • Public datasets: Attack-labeled datasets to support community research into LLM-powered attackers and defenses.
  • Automated tactic labeling: Supervised learning models designed for accurate classification of cyberattack tactics within honeypot data.
  • Generative modeling insights: Initial investigations into using generative AI for attack modeling.
  • Dissemination: Academic publications and presentations detailing the system architecture and findings.

(Beelzebub is a community-driven open‑source honeypot framework using AI to simulate attacker behavior; more info at https://github.com/mariocandela/beelzebub)
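To illustrate the tactic-labeling outcome above: the task is to map commands observed in honeypot sessions to tactic names from a framework such as MITRE ATT&CK. The project targets learned (supervised) classifiers; the keyword lookup below is only a toy that shows the input/output shape of the problem, and the keyword lists are illustrative assumptions.

```python
# Illustrative sketch of the tactic-labeling task: map honeypot commands
# to MITRE ATT&CK tactic names. A real system would use a trained
# classifier; this crude substring lookup only shows the problem shape.

TACTIC_KEYWORDS = {
    "Discovery": ["whoami", "uname", "ifconfig", "ps aux"],
    "Credential Access": ["passwd", "shadow", ".ssh"],
    "Persistence": ["crontab", "systemctl", "rc.local"],
}

def label_command(command: str) -> str:
    """Return the first matching tactic, or 'Unknown'.
    Substring matching is deliberately naive; real sessions need
    context-aware models, which is what the project develops."""
    for tactic, keywords in TACTIC_KEYWORDS.items():
        if any(k in command for k in keywords):
            return tactic
    return "Unknown"

session = ["whoami", "cat /etc/shadow", "crontab -e", "echo hi"]
print([label_command(c) for c in session])
```

Labeled outputs like these are what turn raw interaction logs into the attack-labeled datasets and updated attacker models listed among the expected outcomes.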

Metadata

Funding: Vinnova

Total project budget: 4 028 600 SEK

Project period: June 2024 - June 2026

Participants: AI Sweden, Volvo Group, Aixia AB, Västra Götalandsregionen (VGR), Scaleout Systems AB and Dakota State University

For more information, contact

Robert (Bobby) Bridges
Mathematician & Innovation Leader
+46 (0)70-003 25 35

Related news articles from AI Sweden


Using large language models to outsmart cyber attackers

2024-10-16
Can AI adapt honeypots to produce more threat intelligence by eliciting greater interaction from attackers? That question will be investigated through an innovative collaboration between Volvo Group...

Two international AI experts strengthen AI Sweden's efforts in security and automotive

2024-09-12
The recruitment of two international AI experts strengthens AI Sweden's capabilities. "I'm delighted to welcome both Dr. Robert (Bobby) Bridges and Mauricio Muñoz to AI Sweden. They bring years of...

From ships to chips: AI talent program tackles security challenges

2024-08-23
How can AI help in detecting suspicious ship movements at sea? How can AI help protect IT infrastructure making it more resistant to hacker attacks? And can we prevent data leakage from AI models...