
Deep dive sessions for data scientists

Join our Deep dive sessions for data scientists! These sessions are intended for experienced data scientists, or people in similar roles, who implement and apply AI in their work. The meetings are designed to offer a neutral arena for participants to meet, discuss relevant topics, get inspiration, and learn from each other.


We will meet once every six weeks for 2-3 hours, depending on the topic. This page will be updated with dates and topics for the different sessions. During each session we will listen to one or two experts, and you will have time to discuss questions related to the topic with both the other participants and the experts. The sessions are only available to AI Sweden's partners and are held in English.

To receive information about future Deep Dive sessions, register your interest.

The Deep Dive sessions are currently paused, but recordings and themes from past sessions can be found below (in English).

 

Why have predictive maintenance solutions failed to scale in the past? - 7 key takeaways mixed with general concepts from machine learning

The tremendous value-creation opportunity related to predictive maintenance has generated hype and noise about the applicability of available solutions. But what differentiates this field from others, and why have advanced algorithms struggled to scale and generalize? Seven key takeaways, mixed with general concepts from machine learning and data science, will be presented as a step toward creating clarity in the noise.

Speaker
Karoly Szipka (PhD)

Organization
→ Ipercept

 

Theory and application of diffusion models

We discussed diffusion models from both a theoretical and a practical perspective and gave examples of how they can be, and are, used in applications outside art and content generation. Emphasis was placed on understanding the internals of the models and how the method and its training compare with the classic and most popular method for generating images, i.e., GANs. Even though diffusion models seem to solve many of the issues GANs have, they still suffer from a number of outstanding problems. We discussed what these issues imply and how they affect usability in practical applications.
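As an illustration of the internals mentioned above, the sketch below shows the forward (noising) process that a DDPM-style diffusion model is trained to reverse. It is a minimal, generic example, not code from the session, and the noise-schedule values are common defaults rather than anything specific to the speakers' work.

import numpy as np

# Forward (noising) process of a DDPM-style diffusion model: Gaussian noise is
# added over T steps, and a neural network is later trained to reverse it.
T = 1000                              # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule (common default)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)       # cumulative products used in the closed form

def q_sample(x0, t, rng):
    # Closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))               # stand-in for an image
x_partly_noised = q_sample(x0, 500, rng)   # partially destroyed sample
x_fully_noised = q_sample(x0, T - 1, rng)  # close to pure Gaussian noise

Unlike a GAN's adversarial game between generator and discriminator, training a DDPM amounts to predicting the added noise at randomly chosen steps, which is part of why the two families behave so differently in practice.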

Speakers
Josef Lindman Hörnlund, Co-founder, Machine learning specialist, Modulai
Emil Larsson, Machine learning specialist, lead, Modulai

Organization
→ Modulai

 

Handling the complexity of model deployment challenges 

During the session, we will take a deep dive into what happens after a model is trained and ready for deployment: the most common challenges a data science team faces before, during, and after a model is deployed, and how to overcome them.

Fredrik Strålberg has several years of experience as a Data Scientist in Microsoft's global delivery organization, orchestrating everything from model training and MLOps to AI architecture. In his current role as a Cloud Solution Architect within AI at Microsoft, he works with organizations across industries, supporting them in achieving an enterprise standard for data science workloads and infrastructure. Fredrik will share his learnings, experience, and insights connected to model deployment challenges, and we at AI Sweden are eager to hear the participants' thoughts and points of view.
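To make the deployment step concrete, here is a minimal, hypothetical sketch of one common pattern: wrapping an already trained and serialized model in a small HTTP prediction service. The file name model.joblib and the flat feature list are illustrative assumptions, not part of the speaker's material.

from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # model trained and serialized elsewhere

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest):
    # Once a model sits behind an endpoint like this, the concerns discussed in
    # the session (versioning, monitoring, input validation, scaling) begin.
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

The service can be started locally with, for example, uvicorn main:app, assuming the code is saved as main.py next to the serialized model.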

Speaker
Fredrik Strålberg, 
Cloud Solution Architect within AI, Microsoft

→ Presentation

 

Attention, the heart of transformers, to improve information extraction

Attention is a key component in the field of Natural Language Processing, as it forms the basis of transformers. In this session, we will first provide insight into the concept of attention by applying it in a simplified setting, and discuss it in the context of information extraction and machine translation. Information extraction, which creates structured information from unstructured sources, uses attention to integrate external knowledge into the algorithm and improve performance. Once we have a notion of what attention is, we will give an introduction to the architecture of transformers and the basic models implementing this architecture.
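As a taste of the simplified setting, the sketch below implements scaled dot-product attention, the operation at the heart of transformers, in plain NumPy. It is a generic illustration rather than material from the session.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V have shape (sequence_length, d_model)
    d_k = K.shape[-1]
    # Similarity between every query and every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the keys turns the scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted average of the value vectors
    return weights @ V

# Toy example: three tokens with 4-dimensional embeddings attending to each other
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(X, X, X))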

Speaker
Severine Verlinden
AI Developer Language Technology, MSc

→ Presentation
→ Recorded session

 

Synthetic Data at Scale for Perception Systems

Synthetic data offers the potential for cheap and scalable solutions for perception systems in autonomous driving and other applications. We will discuss the underlying technologies to create synthetic data and how they can be used in combination with real logged data for scalable machine learning solutions in perception systems. Some technology demos will also be given and there will be breakout sessions around key questions and challenges.
 

Speakers
Prof. Devdatt Dubhashi
Professor, Division of Data Science and AI, Chalmers, and Chief Scientist, SDS

Anton Kloek
Data scientist, SDS

→ Recorded session

For more information, contact

Raquel Broman
Head of Training
+46 (0)76 883 86 74