Learn about best practices for feature engineering, model training, batch/online inference in Python, Spark, and SQL, and how a Feature Store can accelerate your time to value for ML.
Is it taking too long to go from a model in a notebook to one that adds to your business's bottom line? MLOps and the FTI pattern for architecting machine learning (ML) systems are quickly becoming the de facto way to build production ML systems with a Feature Store.
Join us to learn how to architect your ML systems as Feature pipelines, Training pipelines, and Inference pipelines (FTI pipelines) that are connected via a Feature Store and managed with MLOps best practices.
We will present industry examples of FTI pipelines for both batch ML systems and online ML systems in the context of the Hopsworks platform.
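To give a flavour of what the FTI decomposition looks like in code, here is a minimal sketch in Python of a feature pipeline, a training pipeline, and batch/online inference pipelines sharing one feature view. It assumes the hopsworks Python client and uses illustrative names (a "transactions" feature group, a "fraud_detection" feature view, a toy DataFrame, and a scikit-learn model); it is not the workshop material itself.

```python
# Sketch of the FTI pattern around a Feature Store, assuming the hopsworks Python client.
# All entity names and the toy data below are illustrative assumptions.
import pandas as pd
import hopsworks
from sklearn.linear_model import LogisticRegression

project = hopsworks.login()          # authenticates against a Hopsworks cluster
fs = project.get_feature_store()

# --- Feature pipeline: compute features and write them to a feature group ---
df = pd.DataFrame({
    "tx_id": [1, 2, 3, 4],
    "amount": [12.5, 250.0, 7.8, 980.0],
    "is_fraud": [0, 1, 0, 1],
    "ts": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-04"]),
})
fg = fs.get_or_create_feature_group(
    name="transactions", version=1,
    primary_key=["tx_id"], event_time="ts",
    online_enabled=True,
)
fg.insert(df)

# --- Training pipeline: read features via a feature view and train a model ---
fv = fs.get_or_create_feature_view(
    name="fraud_detection", version=1,
    query=fg.select(["amount", "is_fraud"]),
    labels=["is_fraud"],
)
X, y = fv.training_data()
model = LogisticRegression().fit(X, y.values.ravel())

# --- Inference pipelines: batch scoring and online lookups reuse the same features ---
batch_scores = model.predict(fv.get_batch_data())             # batch inference
online_features = fv.get_feature_vector(entry={"tx_id": 1})   # online inference lookup
```

The point of the decomposition is that each pipeline can be developed, scheduled, and scaled independently, while the Feature Store guarantees that training and inference read the same feature definitions.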
Agenda:
09:00: Registration & Coffee
09:30: Introduction & Principles for putting ML in Production
- Developing and architecting ML Systems for Production from Day 1
- FTI pipelines, MLOps Principles, and the Feature Store
10:40: Coffee and Fika
11:00: Sharing Practical Experience from building ML Systems
- How to Build Batch ML Systems
- How to Build Real-Time ML Systems
12:00 - 13:00: Lunch & Networking opportunity with your peers and Hopsworks experts