1. Jim Dowling
CEO / Co-Founder
Logical Clocks
Hopsworks
Data-Intensive AI with a Feature Store
(it’s open-source)
Data Engineering Melbourne Meetup
on Walpurgis Night 2020
@jim_dowling
2. Leadership & Offices
Stockholm
Box 1263,
Isafjordsgatan 22
Kista,
Sweden
London
IDEALondon,
69 Wilson St,
London,
UK
Silicon Valley
470 Ramona St
Palo Alto
California,
USA
Dr. Jim Dowling
CEO
Theo Kakantousis
COO
Prof. Seif Haridi
Chief Scientist
Fabio Buso
VP Engineering
Steffen Grohsschmiedt
Head Of Cloud
www.logicalclocks.com
Shraddha Chouhan
Head Of Marketing
8. Feature Engineering is about Transforming Data
from pyspark.ml.feature import Normalizer

scaledDF = spark.read.parquet("…")
l1_norm = Normalizer().setP(1.0).setInputCol("features").setOutputCol("l1_norm")
l1_norm.transform(scaledDF)
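What the Normalizer with p=1 does can be sketched in plain Python: each feature vector is divided by the sum of the absolute values of its components. This is a minimal, library-free illustration, not the Spark implementation.

```python
# Sketch of L1 normalization (p=1): divide each component by the
# vector's L1 norm, so the transformed magnitudes sum to 1.

def l1_normalize(vector):
    """Scale a feature vector to unit L1 norm."""
    norm = sum(abs(x) for x in vector)
    if norm == 0:
        return list(vector)  # leave zero vectors unchanged
    return [x / norm for x in vector]

print(l1_normalize([1.0, 2.0, 1.0]))  # [0.25, 0.5, 0.25]
```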
9. Feature Store Concepts
Feature Groups: e.g. the Titanic Passenger List and the Passenger Bank Account, combined via a join key (passenger).
Train/Test Datasets: built from selected features (name, Pclass, Sex, Balance, Survived).
File formats: .tfrecords, .npy, .csv, .hdf5, .petastorm, etc.
Storage: GCS, Amazon S3, HopsFS.
Features, FeatureGroups, and Train/Test Datasets are all versioned.
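Conceptually, the feature store builds a training dataset by joining feature groups on their shared join key. The following is a toy sketch of that join, with made-up rows; it is not the Hopsworks implementation.

```python
# Two FeatureGroups sharing a join key (the passenger), as plain dicts.
passenger_list = {  # "Titanic Passenger List" FeatureGroup
    "alice": {"Pclass": 1, "Sex": "F", "Survived": 1},
    "bob": {"Pclass": 3, "Sex": "M", "Survived": 0},
}
bank_account = {  # "Passenger Bank Account" FeatureGroup
    "alice": {"Balance": 2500.0},
    "bob": {"Balance": 140.0},
}

def join_feature_groups(left, right):
    """Inner-join two feature groups on their shared key."""
    return {
        key: {**left[key], **right[key]}
        for key in left
        if key in right
    }

train_df = join_feature_groups(passenger_list, bank_account)
print(train_df["alice"])
# {'Pclass': 1, 'Sex': 'F', 'Survived': 1, 'Balance': 2500.0}
```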
10. FeatureGroups are Ingested at Different Cadences
- A Streaming App pushes click features every 5 secs
- A Streaming App pushes CDC data every 30 secs
- A Pandas App pushes user profile updates every hour
- A Batch App pushes featurized weblogs data every day
The Online Feature Store serves low-latency (<10ms) features to Online Apps, which can also apply real-time feature transformations (<2 secs). The Offline Feature Store holds high-latency features at scale (TBs/PBs), backed by a SQL DW and S3/HDFS, and feeds Train and Batch Apps via SQL.
No existing database is both scalable (PBs) and low latency (<10ms). Hence, online + offline Feature Stores.
11. FeatureGroup Ingestion in Hopsworks
- User Clicks (Event Data) -> ClickFeatureGroup
- DB Updates (SQL DW) -> TableFeatureGroup
- User Profile Updates (SQL) -> UserFeatureGroup
- Weblogs (S3, HDFS) -> LogsFeatureGroup
- Real-time features (Kafka input -> Flink -> Kafka output) -> RTFeatureGroup
Batch and streaming sources all land in the Feature Store through the same DataFrame API; Online Apps read from the online store, while Train and Batch Apps read from the offline store.
Simplify ingestion to the Online/Offline Feature Stores by providing a general-purpose DataFrame API.
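The idea of one ingestion path feeding both stores can be sketched with plain Python containers. The names and rows below are illustrative; this is not the Hopsworks DataFrame API.

```python
# Conceptual sketch of dual ingestion: one write path updates both the
# online store (latest value per key, for low-latency lookups) and the
# offline store (full history, for training).

online_store = {}   # (feature_group, key) -> latest feature row
offline_store = []  # append-only history of all ingested rows

def ingest(feature_group, rows):
    """Write a batch of feature rows to both stores."""
    for key, features in rows.items():
        online_store[(feature_group, key)] = features         # upsert
        offline_store.append((feature_group, key, features))  # append

ingest("ClickFeatureGroup", {"user42": {"clicks_5s": 3}})
ingest("ClickFeatureGroup", {"user42": {"clicks_5s": 7}})

print(online_store[("ClickFeatureGroup", "user42")])  # {'clicks_5s': 7}
print(len(offline_store))                             # 2
```

The online store keeps only the freshest value per key, while the offline store retains every row, mirroring the low-latency vs. high-scale split on the previous slide.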
12. Register a Feature Group with the Feature Store
from hops import featurestore as fs
df = ...  # Spark or Pandas DataFrame
# Do feature engineering on df
# Register the DataFrame as a FeatureGroup
fs.create_featuregroup(df, "titanic_df")
14. Create Training Datasets using the Feature Store
from hops import featurestore as fs
sample_data = fs.get_features(["name", "Pclass", "Sex", "Balance", "Survived"])
fs.create_training_dataset(sample_data, "titanic_training_dataset",
                           data_format="tfrecords", training_dataset_version=1)
15. Online Feature Store: High Availability & Low Latency
The Online Feature Store runs on MySQL Cluster (NDB), replicated across availability zones (US-West-1a, US-West-1b, US-West-1c), with a Model deployed alongside each replica.
1. The Online Application builds a Feature Vector using the Online Feature Store (JDBC, ~2-20ms).
2. It sends the Feature Vector to a Model for prediction (~5-50ms).
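The two serving steps can be sketched with a stand-in model; the store contents and the linear "model" below are invented for illustration.

```python
# Sketch of online serving: 1) look up the precomputed feature vector
# by primary key, then 2) pass it to a model for a prediction.

online_feature_store = {
    "customer_17": [0.2, 0.9, 0.1],  # precomputed features, keyed by id
}

def predict(weights, features):
    """A linear stand-in for a deployed model."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0.5 else 0

features = online_feature_store["customer_17"]  # step 1: build feature vector
label = predict([1.0, 0.5, -0.2], features)     # step 2: send to model
print(label)  # 1
```

The lookup is a single primary-key read, which is what makes the <10ms online-store latency target realistic.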
17. Hopsworks
Applications, an API, and Dashboards sit on top; Data Sources sit underneath. In between:
- ORCHESTRATION: in Airflow
- BATCH: Apache Beam, Apache Spark
- STREAMING: Apache Beam, Apache Spark, Apache Flink
- HOPSWORKS FEATURE STORE
- DISTRIBUTED ML & DL: Pip, Conda, TensorFlow, scikit-learn, PyTorch, Jupyter Notebooks, TensorBoard
- FILESYSTEM & METADATA STORAGE: HopsFS
- MODEL SERVING: Kubernetes
- MODEL MONITORING: Kafka + Spark Streaming
The platform covers Data Preparation & Ingestion, Experimentation & Model Training, and Deploy & Productionalize, with Apache Kafka for data movement.
18. End-to-End ML Pipeline with a Feature Store
1. Feature Engineering: data from the Data Warehouse and Data Lake is transformed and written to the Offline Feature Store.
2. Feature Selection: selected features are materialized as Train/Test Data (S3, HDFS, etc.).
3. Training & Validation: Train Model plus Scoring & Validation run as tracked Experiments; validated models land in the Model Repository.
4. Serving: models are deployed from the Model Repository to Model Serving; the Online Application builds a Feature Vector from the Online Feature Store.
5. Prediction: Online and Batch Applications request predictions, with monitoring via Kafka.
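The five stages can be wired together as plain functions to show the data flow; all of the logic and data below are toy stand-ins for the boxes on this slide.

```python
# Toy sketch of the five pipeline stages as composed functions.

def feature_engineering(raw):            # 1. Feature Engineering
    return {k: {"len": len(v)} for k, v in raw.items()}

def feature_selection(features, names):  # 2. Feature Selection
    return {k: {n: row[n] for n in names} for k, row in features.items()}

def train(dataset):                      # 3. Training & Validation
    threshold = sum(r["len"] for r in dataset.values()) / len(dataset)
    return lambda row: int(row["len"] > threshold)

def serve(model, feature_vector):        # 4. Serving / 5. Prediction
    return model(feature_vector)

raw = {"a": "hello", "b": "hi"}
features = feature_engineering(raw)
dataset = feature_selection(features, ["len"])
model = train(dataset)
print(serve(model, {"len": 5}))  # 1
```

The point of the sketch: stages 1-2 produce reusable features, stage 3 consumes them, and stages 4-5 reuse the same feature definitions at serving time, which is exactly what the Feature Store sits in the middle to guarantee.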
19. More in Hopsworks
Multi-Worker Training for TensorFlow (using PySpark)
https://databricks.com/session/distributed-deep-learning-with-apache-spark-and-tensorflow
Maggy: Async HParam Tuning and Parallel Ablation Studies (using PySpark)
https://databricks.com/session_eu19/asynchronous-hyperparameter-optimization-with-apache-spark
Project-Based Multi-Tenancy
Implicit Provenance for ML Workflows
Instrument instead of rewrite (TFX, MLflow), enabled by a CDC API
Secure Sensitive data on a shared cluster:
Datasets, Hive DBs, Feature Stores, Kafka Topics all private to Projects – but can be shared.
Conda environment per project (sane Python dependency management in a cluster).
20. Trying out Hopsworks
Hopsworks Community
• Full featured, AGPL-v3 license model
Hopsworks Enterprise
• Kubernetes support: Model Serving, other services for robustness (Jupyter, more coming)
• Authentication (LDAP, Kerberos, OAuth2)
• GitHub support
Hopsworks.ai
• Managed SaaS platform (currently only on AWS)
21. Show us some love!
@hopsworks
http://github.com/logicalclocks/hopsworks