
Deep-dive sessions for data scientists

Join our deep-dive sessions for data scientists! The deep-dive sessions are intended for experienced data scientists, and people in similar roles, who are implementing and applying AI in their work. The meetups are developed to provide a neutral arena for attendees to meet and discuss relevant topics, get inspiration, and learn from each other.


We will meet once every six weeks for 2-3 hours, depending on the topic. This site will be updated with dates and topics for the different sessions. During each session we will listen to one or two experts, and you will have time to discuss questions connected to the topic with both the other participants and the experts. The sessions are only available to AI Sweden's partners.

To get information about future Deep Dive sessions, register your interest.

The Deep Dive Sessions are currently paused, but you can find recordings and themes from past sessions below.

 

Why have predictive maintenance solutions failed to scale in the past? - 7 key takeaways mixed with general concepts from machine learning

The tremendous value-creation opportunity of predictive maintenance has generated hype and noise around the applicability of available solutions. But what differentiates this field from others, and why have advanced algorithms struggled to scale and generalize here? Seven key takeaways, mixed with general concepts from machine learning and data science, will be presented as a step toward creating clarity amid the noise.

Speaker
Karoly Szipka (PhD)

Organization
→ Ipercept

 

Theory and application of diffusion models

We discussed diffusion models from both a theoretical and a practical perspective and gave examples of how they can be, and are, used in applications outside art and content generation. Emphasis was put on understanding the internals of the models and how the method and training compare with the classic and most popular method for generating images, i.e., GANs. Even though diffusion models seem to solve many of the issues GANs have, they still suffer from several outstanding problems of their own. We discussed what these issues imply and how they impact the models' usability in practical applications.
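For context on the internals discussed in the session: the forward ("noising") process of a diffusion model can be sampled in closed form, and the model is trained to reverse it. The following NumPy sketch, with an illustrative linear noise schedule (the schedule values and toy data are assumptions, not from the session), shows how the forward process gradually destroys the signal:

```python
import numpy as np

# Illustrative linear noise schedule: beta_t grows from small to larger values
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative signal retention up to step t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (forward diffusion)."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
x0 = np.ones(8)              # a toy "image" of 8 pixels
x_mid = q_sample(x0, 500, rng)
x_end = q_sample(x0, T - 1, rng)
# By the final step alpha_bar is near zero, so x_T is close to pure Gaussian noise;
# training a network to undo these steps is what the reverse process learns.
```

The learned part of a real diffusion model is the reverse denoiser; this fragment only illustrates the fixed forward process it inverts.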

Speakers
Josef Lindman Hörnlund, Co-founder, Machine Learning Specialist, Modulai
Emil Larsson, Machine Learning Specialist, Lead, Modulai

Organization
→ Modulai

 

Handling the complexity of model deployment challenges 

During the session, we will do a deep dive into what happens after a model is trained and ready for deployment: the most common challenges a data science team faces before, during, and after deployment, and how to overcome them.

Fredrik Strålberg has several years of experience as a Data Scientist in Microsoft's global delivery organization, orchestrating everything from model training and MLOps to AI architecture. In his current role as a Cloud Solution Architect within AI at Microsoft, he works with organizations across industries, supporting them in achieving an enterprise standard for their data science workloads and infrastructure. Fredrik will share his learnings, experience, and insights connected to model deployment challenges, and we at AI Sweden are eager to hear the participants' thoughts and points of view.

Speaker
Fredrik Strålberg, 
Cloud Solution Architect within AI, Microsoft

→ Presentation

 

Attention, the heart of transformers, to improve information extraction

Attention is a key component in the field of Natural Language Processing, as it forms the base of transformers. In this session, we will first provide insight into the concept of attention by applying it in a simplified setting, discussing it in the context of information extraction and machine translation. Information extraction, which creates structured information from unstructured sources, uses attention to integrate external knowledge into the algorithm and improve performance. Once we have a notion of what attention is, we will give an introduction to the architecture of transformers and the basic models implementing this architecture.
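As a small taste of the session's topic, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside transformers. The matrices, dimensions, and random data are purely illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: each query attends over all keys,
    producing a weighted sum of the corresponding values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # blend of value vectors

# Toy example: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-aware vector per token
```

In a real transformer, Q, K, and V are learned linear projections of token embeddings, and many such attention heads run in parallel.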

Speaker
Severine Verlinden
AI Developer Language Technology, MSc

→ Presentation
→ Recorded session

 

Synthetic Data at Scale for Perception Systems

Synthetic data offers the potential for cheap and scalable solutions for perception systems in autonomous driving and other applications. We will discuss the underlying technologies used to create synthetic data and how they can be combined with real logged data for scalable machine learning solutions in perception systems. There will also be some technology demos, as well as breakout sessions around key questions and challenges.

Speakers
Prof. Devdatt Dubhashi

Professor, Division of Data Science and AI, Chalmers, and Chief Scientist, SDS

Anton Kloek
Data scientist, SDS

→ Recorded session

For more information, contact

Raquel Broman
Head of Training
+46 (0)76 883 86 74