http://pipeline.ai
free gpu-based community edition: http://community.pipeline.ai
github: https://github.com/PipelineAI/pipeline
video/screenshare: https://youtu.be/eoGazZ6wa8g
Applying my Netflix experience to real-world problems in ML and AI, I will demonstrate a full-featured, open-source, end-to-end TensorFlow Model Training and Deployment System built on the latest advancements in Kubernetes, Istio, and TensorFlow.
In addition to training and hyper-parameter tuning, our model deployment pipeline will include continuous canary deployments of our TensorFlow Models into a live, hybrid-cloud production environment.
This is the holy grail of data science: rapid, safe experimentation with ML/AI models directly in production.
Following the Successful Netflix Culture that I lived and breathed (https://www.slideshare.net/reed2001/culture-1798664/2-Netflix_CultureFreedom_Responsibility2), I give Data Scientists the Freedom and Responsibility to extend their ML / AI pipelines and experiments safely into production.
Offline, batch training and validation is for the slow and weak. Online, real-time training and validation on live production data is for the fast and strong.
Learn to be fast and strong by attending this meetup.
High Performance Machine Learning with Kubernetes, Istio, and GPUs - San Francisco and Seattle Kubernetes Meetups
1. HIGH PERFORMANCE DISTRIBUTED TENSORFLOW
IN PRODUCTION WITH GPUS AND KUBERNETES!
CHRIS FREGLY
FOUNDER @ PIPELINE.AI
2. AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
3. AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
4. INTRODUCTIONS: ME
§ Chris Fregly, Founder & Engineer @PipelineAI
§ Formerly Netflix, Databricks, IBM Spark Tech
§ Advanced Spark and TensorFlow Meetup
§ Please Join Our 60,000+ Global Members!!
Contact Me
chris@pipeline.ai
@cfregly
Global Locations
* San Francisco
* Chicago
* Austin
* Washington DC
* Dusseldorf
* London
5. INTRODUCTIONS: YOU
§ Data Scientist, Data Engineer, Data Analyst, Data Curious
§ Want to Deploy ML/AI Models Rapidly and Safely
§ Need to Trace or Explain Model Predictions
§ Have a Decent Grasp of Computer Science Fundamentals
6. PIPELINE.AI IS 100% OPEN SOURCE
§ https://github.com/PipelineAI/pipeline/
§ Please Star this GitHub Repo!
§ VC’s Value GitHub Stars @ $1,500 Each (?!)
Git Repo Geo Heat Map: http://jrvis.com/red-dwarf/
8. RECENT PIPELINE.AI NEWS
Sept 2017, Dec 2017: (milestones shown as logos on the slide)
Jan 2018: Certified Google ML/AI Expert
Feb 2018: Public Beta!!
http://pipeline.ai
9. WHY HEAVY FOCUS ON MODEL SERVING?
Model Training
§ Batch & Boring
§ Offline in Research Lab
§ Pipeline Ends at Training
§ No Insight into Live Production
§ Small Number of Data Scientists
§ Optimizations Very Well-Known
§ 100's of Training Jobs per Day
Model Serving
§ Real-Time & Exciting!!
§ Online in Live Production
§ Pipeline Extends into Production
§ Continuous Insight into Live Production
§ Huuuuuuge Number of Application Users
§ Runtime Optimizations Not Yet Explored
§ 1,000,000's of Predictions per Sec
10. CLOUD-BASED MODEL SERVING OPTIONS
§ AWS SageMaker
§ Released Nov 2017 @ Re-invent
§ Custom Docker Images for Training/Serving (PipelineAI Images)
§ Distributed TensorFlow Training through Estimator API
§ Traffic Splitting for A/B Model Testing
§ Google Cloud ML Engine
§ Mostly Command-Line Based
§ Driving TensorFlow Open Source API (ie. Estimator API)
§ Azure ML
11. BUILD MODEL WITH THE RUNTIME
§ Package Model + Runtime into 1 Docker Image
§ Emphasizes Immutable Deployment and Infrastructure
§ Same Image Across All Environments
§ No Library or Dependency Surprises from Laptop to Production
§ Allows Tuning Model + Runtime Together
pipeline predict-server-build --model-name=mnist
--model-tag=A
--model-type=tensorflow
--model-runtime=tfserving
--model-chip=gpu
--model-path=./tensorflow/mnist/
Build Local
Model Server A
12. TUNE MODEL + RUNTIME TOGETHER
§ Model Training Optimizations
§ Model Hyper-Parameters (ie. Learning Rate)
§ Reduced Precision (ie. FP16 Half Precision)
§ Model Serving (Post-Train) Optimizations
§ Quantize Model Weights + Activations From 32-bit to 8-bit
§ Fuse Neural Network Layers Together
§ Model Runtime Optimizations
§ Runtime Config: Request Batch Size, etc
§ Different Runtime: TensorFlow Serving CPU/GPU, Nvidia TensorRT
13. SERVING (POST-TRAIN) OPTIMIZATIONS
§ Prepare Model for Serving
§ Simplify Network, Reduce Size
§ Reduce Precision -> Fast Math
§ Some Tools
§ Graph Transform Tool (GTT)
§ tfcompile
After Training
After
Optimizing!
pipeline optimize --optimization-list=[‘quantize_weights’,‘tfcompile’]
--model-name=mnist
--model-tag=A
--model-path=./tensorflow/mnist/model
--model-inputs=[‘x’]
--model-outputs=[‘add’]
--output-path=./tensorflow/mnist/optimized_model
Linear
Regression
70MB –> 70K (?!)
14. NVIDIA TENSOR-RT RUNTIME
§ Post-Training Model Optimizations
§ Specific to Nvidia GPUs
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
15. TENSORFLOW LITE RUNTIME
§ Post-Training Model Optimizations
§ Currently Supports iOS and Android
§ On-Device Prediction Runtime
§ Low-Latency, Fast Startup
§ Selective Operator Loading
§ 70KB Min - 300KB Max Runtime Footprint
§ Supports Accelerators (GPU, TPU)
§ Falls Back to CPU without Accelerator
§ Java and C++ APIs
16. 3 DIFFERENT RUNTIMES, SAME MODEL
pipeline predict-server-build --model-name=mnist
--model-tag=C
--model-type=tensorflow
--model-runtime=tensorrt
--model-chip=gpu
--model-path=./tensorflow/mnist/
Build Local
Model Server C
pipeline predict-server-build --model-name=mnist
--model-tag=A
--model-type=tensorflow
--model-runtime=tfserving
--model-chip=cpu
--model-path=./tensorflow/mnist/
Build Local
Model Server A
pipeline predict-server-build --model-name=mnist
--model-tag=B
--model-type=tensorflow
--model-runtime=tfserving
--model-chip=gpu
--model-path=./tensorflow/mnist/
Build Local
Model Server B
Same
Model
17. RUN A LOADTEST LOCALLY!
§ Perform Mini-Load Test on Local Model Server
§ Immediate, Local Prediction Performance Metrics
§ Compare to Previous Model + Runtime Variations
§ Gain Intuition Before Push to Prod
pipeline predict-server-start --model-name=mnist
--model-tag=A
--memory-limit=2G
pipeline predict-http-test --model-endpoint-url=http://localhost:8080
--test-request-path=test_request.json
--test-request-concurrency=1000
Start Local
LoadTest
Start Local
Model Servers
18. PUSH IMAGE TO DOCKER REGISTRY
§ Supports All Public + Private Docker Registries
§ DockerHub, Artifactory, Quay, AWS, Google, …
§ Or Self-Hosted, Private Docker Registry
pipeline predict-server-push --model-name=mnist
--model-tag=A
--image-registry-url=<your-registry>
--image-registry-repo=<your-repo>
Push Images to
Docker Registry
19. DEPLOY MODELS SAFELY TO PROD
§ Deploy from CLI or Jupyter Notebook
§ Tear-Down and Rollback Models Quickly
§ Shadow Canary: Deploy to 20% Live Traffic
§ Split Canary: Deploy to 97-2-1% Live Traffic
pipeline predict-kube-start --model-name=mnist
--model-tag=B
Start Cluster B
pipeline predict-kube-start --model-name=mnist
--model-tag=C
Start Cluster C
pipeline predict-kube-start --model-name=mnist
--model-tag=A
Start Cluster A
pipeline predict-kube-route --model-name=mnist
--model-tag-and-weight-dict='{"A":97, "B":2, "C":1}'
Route Live Traffic
20. COMPARE MODELS OFFLINE & ONLINE
§ Offline, Batch Metrics
§ Validation + Training Accuracy
§ CPU + GPU Utilization
§ Online, Live Prediction Values
§ Compare Relative Precision
§ Newly-Seen, Streaming Data
§ Online, Real-Time Metrics
§ Response Time, Throughput
§ Cost ($) Per Prediction
21. ENSEMBLE PREDICTION AUDIT TRAIL
§ Necessary for Explainability
§ Fine-Grained Request Tracing
§ Used for Model Ensembles
22. REAL-TIME PREDICTION STREAMS
§ Visually Compare Real-time Predictions
Features and Inputs
Predictions and Confidences
Model A | Model B | Model C
26. SHIFT TRAFFIC TO MIN(CLOUD CO$T)
§ Based on Cost ($) Per Prediction
§ Cost Changes Throughout Day
§ Lose AWS Spot Instances
§ Google Cloud Becomes Cheaper
§ Shift Across Clouds & On-Prem
27. PSEUDO-CONTINUOUS TRAINING
§ Identify and Fix Borderline (Unconfident) Predictions
§ Facilitate ”Human in the Loop”
§ Fix Predictions Along Class Boundaries
§ Retrain with Newly-Labeled Data
§ Game-ify the Labeling Process
§ Easy Path to Crowd-Sourced Labeling
28. CONTINUOUS MODEL TRAINING
§ The Holy Grail of Machine Learning!
§ PipelineAI Supports Continuous Model Training!
§ Kafka
§ Kinesis
§ Spark Streaming
§ Flink
§ Heron
§ …
31. HANDS-ON EXERCISES
§ Combo of Jupyter Notebooks and Command Line
§ Command Line through Jupyter Terminal
§ Some Exercises Based on Experimental Features
You May See Errors. Stay Calm. It’s OK!!
32. LET’S EXPLORE OUR ENVIRONMENT
§ Navigate to the following notebook:
01_Explore_Environment
§ https://github.com/PipelineAI/notebooks
34. BREAK!! Need Help?
Use the Chat!
§ Please Star this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline
35. AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
36. AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
37. SETTING UP TENSORFLOW WITH GPUS
§ Very Painful!
§ Especially inside Docker
§ Use nvidia-docker
§ Especially on Kubernetes!
§ Use the Latest Kubernetes (with Init Script Support)
§ http://pipeline.ai for GitHub + DockerHub Links
39. GPU HALF-PRECISION SUPPORT
§ FP32 is “Full Precision”, FP16 is “Half Precision”
§ Two (2) FP16's in Each FP32 GPU Core for 2x Throughput!
§ Lower Precision is OK for Approx. Deep Learning Use Cases
§ The Network Matters Most – Not Individual Neuron Accuracy
§ Supported by Pascal P100 (2016) and Volta V100 (2017)
Set the following on GPUs with Compute Capability (CC) 5.3+:
TF_FP16_MATMUL_USE_FP32_COMPUTE=0
TF_XLA_FLAGS=--xla_enable_fast_math=1
40. VOLTA V100 (2017) VS. PASCAL P100 (2016)
§ 84 Streaming Multiprocessors (SM’s)
§ 5,376 GPU Cores
§ 672 Tensor Cores (ie. Google TPU)
§ Mixed FP16/FP32 Precision
§ Matrix Dims Should be Multiples of 8
§ More Shared Memory
§ New L0 Instruction Cache
§ Faster L1 Data Cache
§ V100 vs. P100 Performance
§ 12x Training, 6x Inference
41. FP32 VS. FP16 ON AWS GPU INSTANCES
FP16 Half Precision
§ 87.2 T ops/second for p3 (Volta V100)
§ 4.1 T ops/second for g3 (Tesla M60)
§ 1.6 T ops/second for p2 (Tesla K80)
FP32 Full Precision
§ 15.4 T ops/second for p3 (Volta V100)
§ 4.0 T ops/second for g3 (Tesla M60)
§ 3.3 T ops/second for p2 (Tesla K80)
42. WHAT ABOUT GOOGLE CLOUD?
§ Currently Supports the Following:
§ Tesla K80
§ Pascal P100
§ Volta V100 Coming Soon?
§ TPUs (Only in Google Cloud)
§ Attach GPUs to CPU Instances
§ Similar to AWS Elastic GPU, except less confusing
43. V100 AND CUDA 9
§ Independent Thread Scheduling - Finally!!
§ Similar to CPU fine-grained thread synchronization semantics
§ Allows GPU to yield execution of any thread
§ Still Optimized for SIMT (Same Instruction Multi-Thread)
§ SIMT units automatically scheduled together
§ Explicit Synchronization
44. GPU CUDA PROGRAMMING
§ Barbaric, But Fun
§ Must Know Hardware Very Well
§ Hardware Changes are Painful
§ Use the Profilers & Debuggers
45. CUDA STREAMS
§ Asynchronous I/O Transfer
§ Overlap Compute and I/O
§ Keep GPUs Saturated!
§ Used Heavily by TensorFlow
47. LET’S SEE WHAT THIS THING CAN DO!
§ Navigate to the following notebook:
01a_Explore_GPU
01b_Explore_Numba
§ https://github.com/PipelineAI/notebooks
48. AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
49. TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays
§ ie. Scalar, Vector, Matrix
§ Operations: MatMul, Add, SummaryLog,…
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed Inputs into Placeholder
§ Fetches: Fetch Output from Operation
§ Variables: What We Learn Through Training
§ aka “Weights”, “Parameters”
§ Devices: Hardware Device (GPU, CPU, TPU, ...)
(Diagram: the User feeds Inputs, TensorFlow performs Operations and flows Tensors, TensorFlow trains Variables, and the User fetches Outputs.)
with tf.device("/cpu:0,/gpu:15"):
51. TENSORFLOW GRAPH EXECUTION
§ Lazy Execution by Default
§ Similar to Spark
§ Eager Execution Now Supported (TensorFlow 1.4+)
§ Similar to PyTorch
§ "Linearize” Execution to Minimize RAM Usage
§ Useful on Single GPU with Limited RAM
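Since the slide contrasts lazy graph execution with eager execution, here is a minimal, hedged sketch of eager mode in TF 1.x (not from the slides); the exact entry point varies by version, with tf.enable_eager_execution() in later 1.x releases and a tf.contrib.eager preview in 1.4:

import tensorflow as tf

# Eager execution evaluates ops immediately (similar to PyTorch) instead of
# building a lazy graph that runs inside a Session.
tf.enable_eager_execution()   # must be called at program startup

x = tf.constant([[2.0, 3.0]])
w = tf.constant([[1.0], [4.0]])
y = tf.matmul(x, w)   # runs immediately, no Session required
print(y.numpy())      # [[14.]]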
52. OPERATION PARALLELISM
§ Inter-Op (Between-Op) Parallelism
§ By default, TensorFlow runs multiple ops in parallel
§ Useful for low core and small memory/cache envs
§ Set to one (1)
§ Intra-Op (Within-Op) Parallelism
§ Different threads can use same set of data in RAM
§ Useful for compute-bound workloads (CNNs)
§ Set to # of cores (>=2)
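As a hedged illustration of the two knobs above (not from the slides), inter-op and intra-op parallelism are set through tf.ConfigProto; the thread counts below are only examples:

import tensorflow as tf

config = tf.ConfigProto(
    inter_op_parallelism_threads=1,   # between-op parallelism (1 per the tip above)
    intra_op_parallelism_threads=8)   # within-op parallelism (set to # of cores)

with tf.Session(config=config) as sess:
    # ... build and run the graph as usual ...
    pass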
53. TENSORFLOW MODEL
§ MetaGraph
§ Combines GraphDef and Metadata
§ GraphDef
§ Architecture of your model (nodes, edges)
§ Metadata
§ Asset: Accompanying assets to your model
§ SignatureDef: Maps external to internal tensors
§ Variables
§ Stored separately during training (checkpoint)
§ Allows training to continue from any checkpoint
§ Variables are “frozen” into Constants when preparing for inference
(Diagram: a MetaGraph wraps the GraphDef (x, W -> mul, b -> add) plus Metadata: Assets, SignatureDef, Tags, and Version. Variables such as "W": 0.328 and "b": -1.407 are stored separately.)
54. SAVED MODEL FORMAT
§ Different Format than Traditional Exporter
§ Contains Checkpoints, 1..* MetaGraph’s, and Assets
§ Export Manually with SavedModelBuilder
§ Estimator.export_savedmodel()
§ Hooks to Generate SignatureDef
§ Use saved_model_cli to Verify
§ Used by TensorFlow Serving
§ New Standard Export Format? (Catching on Slowly…)
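A hedged sketch of exporting a SavedModel manually with SavedModelBuilder in TF 1.x (not from the slides); the export path, tensor names, and signature name are illustrative:

import tensorflow as tf

export_dir = './export/mnist/1'   # version sub-directory, as expected by TensorFlow Serving
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.Session(graph=tf.Graph()) as sess:
    x = tf.placeholder(tf.float32, [None, 784], name='x')
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, w) + b, name='y')
    sess.run(tf.global_variables_initializer())

    # SignatureDef maps external input/output names to internal tensors.
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'x': x}, outputs={'y': y})

    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={'predict': signature})

builder.save()
# Verify from the shell with:  saved_model_cli show --dir ./export/mnist/1 --all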
55. BATCH (RE-)NORMALIZATION (2015, 2017)
§ Each Mini-Batch May Have Wildly Different Distributions
§ Normalize per Batch (and Layer)
§ Faster Training, Learns Quicker
§ Final Model is More Accurate
§ TensorFlow is already on 2nd Generation Batch Algorithm
§ First-Class Support for Fusing Batch Norm Layers
§ Final mean + variance Are Folded Into Graph Later
-- (Almost) Always Use Batch (Re-)Normalization! --
z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth]))   # depth = number of channels
beta = tf.Variable(tf.zeros([depth]))
bn = tf.nn.batch_normalization(a, a_mean, a_var,
                               beta, scale, 0.001)
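For the fused batch-norm support mentioned above, a hedged sketch using the higher-level layers API (not from the slide; the placeholder shapes are illustrative):

import tensorflow as tf

is_training = tf.placeholder(tf.bool, name='is_training')
net = tf.placeholder(tf.float32, [None, 28, 28, 64], name='activations')

net = tf.layers.batch_normalization(
    net,
    training=is_training,  # batch statistics at train time, moving averages at inference
    fused=True)            # request the fused batch-norm kernel for better GPU performance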
56. DROPOUT (2014)
§ Training Technique
§ Prevents Overfitting
§ Helps Avoid Local Minima
§ Inherent Ensembling Technique
§ Creates and Combines Different Neural Architectures
§ Expressed as Probability Percentage (ie. 50%)
§ Boost Other Weights During Validation & Prediction
(Diagram: Perform Dropout during the Training Phase vs. Boost for Dropout during the Validation & Prediction Phase; networks with 0% Dropout and 50% Dropout shown.)
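A hedged sketch of dropout in TF 1.x (not from the slide; shapes are illustrative). Note that tf.layers.dropout implements "inverted" dropout, scaling activations up by 1/keep_prob during training, so no explicit boost is required at validation or prediction time:

import tensorflow as tf

is_training = tf.placeholder(tf.bool, name='is_training')
net = tf.placeholder(tf.float32, [None, 1024], name='activations')

net = tf.layers.dropout(
    net,
    rate=0.5,              # 50% dropout, as on the slide
    training=is_training)  # active only during the training phase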
58. FEED TENSORFLOW TRAINING PIPELINE
§ Training is Limited by the Ingestion Pipeline
§ Number One Problem We See Today
§ Scaling GPUs Up / Out Doesn’t Help
§ GPUs are Heavily Under-Utilized
§ Use tf.dataset API for best perf
§ Efficient parallel async I/O (C++)
(Chart: Tesla K80 vs. Volta V100)
59. DON’T USE FEED_DICT!!
§ feed_dict Requires Python <-> C++ Serialization
§ Not Optimized for Production Ingestion Pipelines
§ Retrieves Next Batch After Current Batch is Done
§ Single-Threaded, Synchronous
§ CPUs/GPUs Not Fully Utilized!
§ Use Queue or Dataset APIs
§ Queues are old & complex
sess.run(train_step, feed_dict={…})
60. DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument Training Code to Generate “Timelines”
§ Analyze with Google Web
Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
61. QUEUES
§ More than traditional Queue
§ Uses CUDA Streams
§ Perform I/O, pre-processing, cropping, shuffling, …
§ Pull from HDFS, S3, Google Storage, Kafka, ...
§ Combine many small files into large TFRecord files
§ Use CPUs to free GPUs for compute
§ Helps saturate CPUs and GPUs
62. QUEUE CAPACITY PLANNING
§ batch_size
§ # examples / batch (ie. 64 jpg)
§ Limited by GPU RAM
§ num_processing_threads
§ CPU threads pull and pre-process batches of data
§ Limited by CPU Cores
§ queue_capacity
§ Limited by CPU RAM (ie. 5 * batch_size)
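A hedged sketch (not from the slide) mapping the capacity-planning knobs above onto the classic queue-based input pipeline; the file pattern, feature spec, and numbers are illustrative:

import tensorflow as tf

filename_queue = tf.train.string_input_producer(
    tf.train.match_filenames_once('./data/train-*.tfrecord'))

reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)

features = tf.parse_single_example(
    serialized_example,
    features={'image': tf.FixedLenFeature([784], tf.float32),
              'label': tf.FixedLenFeature([], tf.int64)})

batch_size = 64                       # limited by GPU RAM
image_batch, label_batch = tf.train.shuffle_batch(
    [features['image'], features['label']],
    batch_size=batch_size,
    num_threads=4,                    # CPU threads pulling and pre-processing batches
    capacity=5 * batch_size,          # queue_capacity, limited by CPU RAM
    min_after_dequeue=2 * batch_size)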
63. DATASET API
tf.Tensor => tf.data.Dataset
Dataset.from_tensors((features, labels))
Dataset.from_tensor_slices((features, labels))
TextLineDataset(filenames)

Functional Transformations
dataset.map(lambda x: tf.decode_jpeg(x))
dataset.repeat(NUM_EPOCHS)
dataset.batch(BATCH_SIZE)

Python Generator => tf.data.Dataset
def generator():
  while True:
    yield ...
Dataset.from_generator(generator, tf.int32)

Dataset => One-Shot Iterator
iter = dataset.make_one_shot_iterator()
next_element = iter.get_next()
while …:
  sess.run(next_element)

Dataset => Initializable Iterator
iter = dataset.make_initializable_iterator()
sess.run(iter.initializer, feed_dict=PARAMS)
next_element = iter.get_next()
while …:
  sess.run(next_element)

TIP: Use Dataset.prefetch() and the parallel version of Dataset.map()
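Putting the tip above into practice, a hedged end-to-end input pipeline sketch (not from the slide; file names and the parse function are illustrative):

import tensorflow as tf

def parse_fn(serialized_example):
    features = tf.parse_single_example(
        serialized_example,
        features={'image': tf.FixedLenFeature([784], tf.float32),
                  'label': tf.FixedLenFeature([], tf.int64)})
    return features['image'], features['label']

dataset = tf.data.TFRecordDataset(['./data/train.tfrecord'])
dataset = dataset.map(parse_fn, num_parallel_calls=4)  # parallel version of map()
dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.repeat()
dataset = dataset.batch(64)
dataset = dataset.prefetch(1)  # overlap pre-processing with training on the GPU

iterator = dataset.make_one_shot_iterator()
images, labels = iterator.get_next()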
64. FUTURE OF DATASET API
§ Replace Queue
§ More Functional Operators
§ Automatic GPU Data Staging
§ Under-utilized GPUs Assisting with Data Ingestion
§ Advanced, RL-based Device Placement Strategies
65. LET’S FEED DATA WITH A QUEUE
§ Navigate to the following notebook:
02_Datasets_EagerExecution
§ https://github.com/PipelineAI/notebooks
66. LET’S FEED DATA WITH A QUEUE
§ Navigate to the following notebook:
02a_EagerExecution_GPU
§ https://github.com/PipelineAI/notebooks
68. BREAK!! Need Help?
Use the Chat!
§ Please Star this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline
69. LET’S TRAIN A MODEL (CPU)
§ Navigate to the following notebook:
03_Train_Model_CPU
§ https://github.com/PipelineAI/notebooks
70. LET’S TRAIN A MODEL (GPU)
§ Navigate to the following notebook:
03a_Train_Model_GPU
§ https://github.com/PipelineAI/notebooks
71. TENSORFLOW DEBUGGER
§ Step through Operations
§ Inspect Inputs and Outputs
§ Wrap Session in Debug Session
from tensorflow.python import debug as tf_debug

sess = tf.Session(config=config)
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
72. LET’S DEBUG A MODEL
§ Navigate to the following notebook:
04_Debug_Model
§ https://github.com/PipelineAI/notebooks
73. AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
74. SINGLE NODE, MULTI-GPU TRAINING
§ cpu:0
§ By default, all CPUs
§ Requires extra config to target a CPU
§ gpu:0..n
§ Each GPU has a unique id
§ TF usually prefers a single GPU
§ xla_cpu:0, xla_gpu:0..n
§ “JIT Compiler Device”
§ Hints TensorFlow to attempt JIT Compile
with tf.device("/cpu:0"):
with tf.device("/gpu:0"):
with tf.device("/gpu:1"):
75. DISTRIBUTED, MULTI-NODE TRAINING
§ TensorFlow Automatically Inserts Send and Receive Ops into Graph
§ Parameter Server Synchronously Aggregates Updates to Variables
§ Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS
(Diagram: single-node training with one or more GPUs vs. multi-node training across Worker0, Worker1, and Worker2, each with gpu0..gpu3.)
76. DATA PARALLEL VS. MODEL PARALLEL
§ Data Parallel (“Between-Graph Replication”)
§ Send exact same model to each device
§ Each device operates on partition of data
§ ie. Spark sends same function to many workers
§ Each worker operates on their partition of data
§ Model Parallel (“In-Graph Replication”)
§ Send different partition of model to each device
§ Each device operates on all data
§ Difficult, but required for larger models with lower-memory GPUs
77. SYNCHRONOUS VS. ASYNCHRONOUS
§ Synchronous
§ Nodes compute gradients
§ Nodes update Parameter Server (PS)
§ Nodes sync on PS for latest gradients
§ Asynchronous
§ Some nodes delay in computing gradients
§ Nodes don’t update PS
§ Nodes get stale gradients from PS
§ May not converge due to stale reads!
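For the synchronous case, a hedged sketch (not from the slide) of wrapping an optimizer in SyncReplicasOptimizer so parameter-server updates are aggregated before being applied; the worker count and tiny loss are illustrative:

import tensorflow as tf

num_workers = 4
global_step = tf.train.get_or_create_global_step()
w = tf.Variable(0.0)
loss = tf.square(w - 1.0)   # placeholder loss for illustration

opt = tf.train.AdagradOptimizer(0.01)
opt = tf.train.SyncReplicasOptimizer(
    opt,
    replicas_to_aggregate=num_workers,   # wait for this many gradient updates per step
    total_num_replicas=num_workers)

train_op = opt.minimize(loss, global_step=global_step)

# The chief worker needs this hook to initialize the synchronization queues.
sync_hook = opt.make_session_run_hook(is_chief=True)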
78. CHIEF WORKER
§ Chief Defaults to Worker Task 0
§ Task 0 is guaranteed to exist
§ Performs Maintenance Tasks
§ Writes log summaries
§ Instructs PS to checkpoint vars
§ Performs PS health checks
§ (Re-)Initialize variables at (re-)start of training
79. NODE AND PROCESS FAILURES
§ Checkpoint to Persistent Storage (HDFS, S3)
§ Use MonitoredTrainingSession and Hooks
§ Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos)
§ Understand Failure Modes and Recovery States
§ Stateless, Not Bad: Training Continues
§ Stateful, Bad: Training Must Stop
§ Dios Mio! Long Night Ahead…
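A hedged sketch (not from the slide) of MonitoredTrainingSession with hooks and periodic checkpointing so training can recover from node or process failures; the checkpoint directory, step limit, and tiny model are illustrative:

import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
w = tf.Variable(0.0)
loss = tf.square(w - 1.0)   # placeholder loss for illustration
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
    loss, global_step=global_step)

hooks = [tf.train.StopAtStepHook(last_step=10000)]

with tf.train.MonitoredTrainingSession(
        is_chief=True,                   # the chief writes summaries and checkpoints
        checkpoint_dir='./checkpoints',  # point at HDFS/S3 for durable recovery
        save_checkpoint_secs=60,
        hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)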
80. ESTIMATOR API (1/2)
§ Supports Keras!
§ Unified API for Local + Distributed
§ Provide Clear Path to Production
§ Enable Rapid Model Experiments
§ Provide Flexible Parameter Tuning
§ Enable Downstream Optimizing & Serving Infrastructure
§ Nudge Users to Best Practices Through Opinions
§ Provide Hooks/Callbacks to Override Opinions
81. ESTIMATOR API (2/2)
§ “Train-to-Serve” Design
§ Create Custom Estimator or Re-Use Canned Estimator
§ Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict)
§ Hooks for All Phases of Model Training and Evaluation
§ Load Input: input_fn()
§ Train: model_fn() and train()
§ Evaluate: eval_fn() and evaluate()
§ Performance Metrics: Loss, Accuracy, …
§ Save and Export: export_savedmodel()
§ Predict: predict() (uses the slow sess.run())
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/
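A hedged sketch (not from the slide) of the custom-Estimator shape described above: one model_fn that returns an EstimatorSpec for each mode. The single-feature linear model, params, and paths are illustrative:

import tensorflow as tf

def model_fn(features, labels, mode, params):
    w = tf.get_variable('w', [], tf.float32)
    b = tf.get_variable('b', [], tf.float32)
    predictions = w * features['x'] + b

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={'y': predictions})

    loss = tf.losses.mean_squared_error(labels, predictions)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)

    optimizer = tf.train.GradientDescentOptimizer(params['learning_rate'])
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir='./models/linear',
    params={'learning_rate': 0.01})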
82. EXPERIMENT API
§ Easier-to-Use Distributed TensorFlow
§ Same API for Local and Distributed (*Theoretically)
§ Combines Estimator with input_fn()
§ Used for Training, Evaluation, & Hyper-Parameter Tuning
§ Distributed Training Defaults to Data-Parallel & Async
§ Cluster Configuration is Fixed at Start of Training Job
§ No Auto-Scaling Allowed, but That’s OK for Training
83. ESTIMATOR & EXPERIMENT CONFIGS
§ TF_CONFIG
§ Special environment variable for config
§ Defines ClusterSpec in JSON incl. master, workers, PS’s
§ Distributed mode: '{"environment":"cloud"}'
§ Local mode: '{"environment":"local", "task":{"type":"worker"}}'
§ RunConfig: Defines checkpoint interval, output directory,
§ HParams: Hyper-parameter tuning parameters and ranges
§ learn_runner creates RunConfig before calling run() & tune()
§ schedule is set based on {”task”:{”type”:…}}
TF_CONFIG=
'{
  "environment": "cloud",
  "cluster": {
    "master": ["worker0:2222"],
    "worker": ["worker1:2222"],
    "ps": ["ps0:2222"]
  },
  "task": {"type": "ps", "index": "0"}
}'
84. ESTIMATOR + KERAS
§ Distributed TensorFlow (Estimator) + Easy to Use (Keras)
§ tf.keras.estimator.model_to_estimator()
# Instantiate a Keras inception v3 model.
keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None)
# Compile model with the optimizer, loss, and metrics you'd like to train with.
keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9),
                           loss='categorical_crossentropy',
                           metrics=['accuracy'])
# Create an Estimator from the compiled Keras model.
est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3)
# Treat the derived Estimator as you would any other Estimator. For example,
# the following derived Estimator calls the train method:
est_inception_v3.train(input_fn=my_training_set, steps=2000)
85. “CANNED” ESTIMATORS
§ Commonly-Used Estimators
§ Pre-Tested and Pre-Tuned
§ DNNClassifer, TensorForestEstimator
§ Always Use Canned Estimators If Possible
§ Reduce Lines of Code, Complexity, and Bugs
§ Use FeatureColumn to Define & Create Features
(Chart: Custom vs. Canned Estimator usage @ Google, August 2017)
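A hedged sketch (not from the slide) of a canned DNNClassifier defined with FeatureColumns; the feature names, sizes, and model directory are illustrative:

import tensorflow as tf

age = tf.feature_column.numeric_column('age')
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    'occupation', hash_bucket_size=1000)
occupation_emb = tf.feature_column.embedding_column(occupation, dimension=8)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[age, occupation_emb],
    hidden_units=[128, 64],
    n_classes=2,
    model_dir='./models/census_dnn')

# estimator.train(input_fn=my_train_input_fn, steps=1000)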
86. ESTIMATOR + DATASET API
def input_fn():
    def generator():
        while True:
            yield ...

    my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)
    # A one-shot iterator automatically initializes itself on first use.
    iter = my_dataset.make_one_shot_iterator()
    # The return value of get_next() matches the dataset element type.
    images, labels = iter.get_next()
    return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)
estimator.train(input_fn=input_fn, …)
88. MULTIPLE HEADS (OBJECTIVES)
§ Single-Objective Estimator
§ Single classification prediction
§ Multi-Objective Estimator
§ One (1) classification prediction
§ One(1) final layer to feed into next model
§ Multiple Heads Used to Ensemble Models
§ Treats neural network as a feature engineering step
§ Supported by TensorFlow Serving
89. LAYERS API
§ Standalone Layer or Entire Sub-Graphs
§ Functions of Tensor Inputs & Outputs
§ Mix and Match with Operations
§ Assumes 1st Dimension is Batch Size
§ Handles One (1) to Many (*) Inputs
§ Metrics are Layers
§ Loss Metric (Per Mini-Batch)
§ Accuracy and MSE (Across Mini-Batches)
90. FEATURE_COLUMN API
§ Used by Canned Estimator
§ Declaratively Specify Training Inputs
§ Converts Sparse to Dense Tensors
§ Sparse Features: Query Keyword, ProductID
§ Dense Features: One-Hot, Multi-Hot
§ Wide/Linear: Use Feature-Crossing
§ Deep: Use Embeddings
91. FEATURE CROSSING
§ Create New Features by Combining Existing Features
§ Limitation: Combinations Must Exist in Training Dataset
base_columns = [
education, marital_status, relationship, workclass, occupation, age_buckets
]
crossed_columns = [
tf.feature_column.crossed_column(
['education', 'occupation'], hash_bucket_size=1000),
tf.feature_column.crossed_column(
['age_buckets', 'education', 'occupation'], hash_bucket_size=1000)
]
92. FEATURE_COLUMN EXAMPLES
§ Continuous + One-Hot + Embedding
deep_columns = [
age,
education_num,
capital_gain,
capital_loss,
hours_per_week,
tf.feature_column.indicator_column(workclass),
tf.feature_column.indicator_column(education),
tf.feature_column.indicator_column(marital_status),
tf.feature_column.indicator_column(relationship),
# To show an example of embedding
tf.feature_column.embedding_column(occupation, dimension=8),
]
93. SEPARATE TRAINING + EVALUATION
§ Separate Training and Evaluation Clusters
§ Evaluate Upon Checkpoint
§ Avoid Resource Contention
§ Training Continues in Parallel with Evaluation
(Diagram: separate Training Cluster and Evaluation Cluster, both backed by the Parameter Server Cluster.)
94. LET’S TRAIN DISTRIBUTED TENSORFLOW
§ Navigate to the following notebook:
05_Train_Model_Distributed_CPU
-or- 05a_Train_Model_Distributed_GPU
§ https://github.com/PipelineAI/notebooks
96. BREAK!! Need Help?
Use the Chat!
§ Please Star this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline
97. AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
98. XLA FRAMEWORK
§ XLA: “Accelerated Linear Algebra”
§ Reduce Reliance on Custom Operators
§ Intermediate Representation used by Hardware Vendors
§ Improve Portability
§ Increase Execution Speed
§ Decrease Memory Usage
§ Decrease Mobile Footprint
Helps TensorFlow Be Flexible AND Performant!!
99. XLA HIGH LEVEL OPTIMIZER (HLO)
§ HLO: “High Level Optimizer”
§ Compiler Intermediate Representation (IR)
§ Independent of source and target language
§ XLA Step 1 Emits Target-Independent HLO
§ XLA Step 2 Emits Target-Dependent LLVM
§ LLVM Emits Native Code Specific to Target
§ Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
100. JIT COMPILER
§ JIT: “Just-In-Time” Compiler
§ Built on XLA Framework
§ Reduce Memory Movement – Especially with GPUs
§ Reduce Overhead of Multiple Function Calls
§ Similar to Spark Operator Fusing in Spark 2.0
§ Unroll Loops, Fuse Operators, Fold Constants, …
§ Scopes: session, device, with jit_scope():
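A hedged sketch (not from the slide) of the two ways to request XLA JIT compilation in TF 1.x, session-wide or per-scope; exact flags and module paths vary slightly across versions:

import tensorflow as tf
from tensorflow.contrib.compiler import jit

# 1. Session-level JIT
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# 2. Scope-level JIT (contrib in TF 1.x)
with jit.experimental_jit_scope():
    x = tf.random_normal([1000, 1000])
    y = tf.matmul(x, x)   # candidate for operator fusion / compilation by XLA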
101. VISUALIZING JIT COMPILER IN ACTION
Before JIT After JIT
Google Web Tracing Framework:
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.SOFTWARE_TRACE)
run_metadata = tf.RunMetadata()

sess.run(fetches,   # the ops/tensors being profiled
         options=run_options,
         run_metadata=run_metadata)

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
103. LET’S TRAIN WITH XLA CPU
§ Navigate to the following notebook:
06_Train_Model_XLA_CPU
§ https://github.com/PipelineAI/notebooks
104. LET’S TRAIN WITH XLA GPU
§ Navigate to the following notebook:
06a_Train_Model_XLA_GPU
§ https://github.com/PipelineAI/notebooks
105. AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
107. AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
108. AOT COMPILER
§ Standalone, Ahead-Of-Time (AOT) Compiler
§ Built on XLA framework
§ tfcompile
§ Creates executable with minimal TensorFlow Runtime needed
§ Includes only dependencies needed by subgraph computation
§ Creates functions with feeds (inputs) and fetches (outputs)
§ Packaged as cc_library header and object files to link into your app
§ Commonly used for mobile device inference graph
§ Currently, only CPU x86-64 and ARM are supported - no GPU
109. GRAPH TRANSFORM TOOL (GTT)
§ Post-Training Optimization to Prepare for Inference
§ Remove Training-only Ops (checkpoint, drop out, logs)
§ Remove Unreachable Nodes between Given feed -> fetch
§ Fuse Adjacent Operators to Improve Memory Bandwidth
§ Fold Final Batch Norm mean and variance into Variables
§ Round Weights/Variables to improve compression (ie. 70%)
§ Quantize (FP32 -> INT8) to Speed Up Math Operations
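A hedged sketch (not from the slide) of driving the Graph Transform Tool from Python with the transforms listed above; the frozen-graph path and the input/output tensor names ('x', 'add') are illustrative:

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

with tf.gfile.GFile('./frozen_mnist.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

transforms = [
    'strip_unused_nodes',
    'remove_nodes(op=Identity, op=CheckNumerics)',
    'fold_constants(ignore_errors=true)',
    'fold_batch_norms',
    'quantize_weights',
]

optimized_graph_def = TransformGraph(
    graph_def,
    inputs=['x'],        # feed tensor names
    outputs=['add'],     # fetch tensor names
    transforms=transforms)

with tf.gfile.GFile('./optimized_mnist.pb', 'wb') as f:
    f.write(optimized_graph_def.SerializeToString())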
112. AFTER STRIPPING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ Results
§ Graph much simpler
§ File size much smaller
113. AFTER REMOVING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ Results
§ Pesky nodes removed
§ File size a bit smaller
114. AFTER FOLDING CONSTANTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ Results
§ Placeholders (feeds) -> Variables*
(*Why Variables and not Constants?)
115. AFTER FOLDING BATCH NORMS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ Results
§ Graph remains the same
§ File size approximately the same
116. AFTER QUANTIZING WEIGHTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ Results
§ Graph is same, file size is smaller, compute is faster
117. WEIGHT QUANTIZATION
§ FP16 and INT8 Are Smaller and Computationally Simpler
§ Weights/Variables are Constants
§ Easy to Linearly Quantize
118. LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:
07_Optimize_Model_Weights*
*Why just CPU version? Why not GPU also?
§ https://github.com/PipelineAI/notebooks
120. ACTIVATION QUANTIZATION
§ Activations Not Known Ahead of Time
§ Depends on input, not easy to quantize
§ Requires Additional Calibration Step
§ Use a “representative” dataset
§ Per Neural Network Layer…
§ Collect histogram of activation values
§ Generate many quantized distributions with different saturation thresholds
§ Choose threshold to minimize…
KL_divergence(ref_distribution, quant_distribution)
§ Not Much Time or Data is Required (Minutes on Commodity Hardware)
124. AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
125. MODEL SERVING TERMINOLOGY
§ Inference
§ Only Forward Propagation through Network
§ Predict, Classify, Regress, …
§ Bundle
§ GraphDef, Variables, Metadata, …
§ Assets
§ ie. Map of ClassificationID -> String
§ {9283: “penguin”, 9284: “bridge”}
§ Version
§ Every Model Has a Version Number (Integer)
§ Version Policy
§ ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
126. TENSORFLOW SERVING FEATURES
§ Supports Auto-Scaling
§ Custom Loaders beyond File-based
§ Tune for Low-latency or High-throughput
§ Serve Diff Models/Versions in Same Process
§ Customize Models Types beyond HashMap and TensorFlow
§ Customize Version Policies for A/B and Bandit Tests
§ Support Request Draining for Graceful Model Updates
§ Enable Request Batching for Diff Use Cases and HW
§ Supports Optimized Transport with GRPC and Protocol Buffers
127. PREDICTION SERVICE
§ Predict (Original, Generic)
§ Input: List of Tensor
§ Output: List of Tensor
§ Classify
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (class_label: String, score: float)
§ Regress
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (label: String, score: float)
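A hedged sketch (not from the slide) of calling the Predict endpoint over gRPC with the tensorflow-serving-api package (newer releases expose the stub used below); the host/port, model name, signature name, and tensor names are illustrative:

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'mnist'
request.model_spec.signature_name = 'predict'
request.inputs['x'].CopyFrom(
    tf.make_tensor_proto([[0.0] * 784], dtype=tf.float32))  # one blank 28x28 image

response = stub.Predict(request, timeout=10.0)
print(response.outputs['y'])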
129. MULTI-HEADED INFERENCE
§ Inputs Pass Through Model One Time
§ Model Returns Multiple Predictions:
1. Human-readable prediction (ie. “penguin”, “church”,…)
2. Final layer of scores (float vector)
§ Final Layer of floats Pass to the Next Model in Ensemble
§ Optimizes Bandwidth, CPU/GPU, Latency, Memory
§ Enables Complex Model Composing and Ensembling
130. BUILD YOUR OWN MODEL SERVER
§ Adapt GRPC(Google) <-> HTTP (REST of the World)
§ Perform Batch Inference vs. Request/Response
§ Handle Requests Asynchronously
§ Support Mobile, Embedded Inference
§ Customize Request Batching
§ Add Circuit Breakers, Fallbacks
§ Control Latency Requirements
§ Reduce Number of Moving Parts
#include "tensorflow_serving/model_servers/server_core.h"

using tensorflow::serving::ServerCore;

class MyTensorFlowModelServer {
 public:
  void Start() {
    ServerCore::Options options;
    // set options (model name, path, etc)
    std::unique_ptr<ServerCore> core;
    TF_CHECK_OK(ServerCore::Create(std::move(options), &core));
  }
};

Compile and Link with libtensorflow.so
131. RUNTIME OPTION: NVIDIA TENSOR-RT
§ Post-Training Model Optimizations
§ Specific to Nvidia GPU
§ Similar to TF Graph Transform Tool
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
132. AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
133. SAVED MODEL FORMAT
§ Navigate to the following notebook:
09_Prepare_Model_Deployment
§ https://github.com/PipelineAI/notebooks
134. LET’S DEPLOY OPTIMIZED MODEL
§ Navigate to the following notebook:
10_Deploy_Model_And_Test
§ https://github.com/PipelineAI/notebooks
135. AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
136. REQUEST BATCH TUNING
§ max_batch_size
§ Enables throughput/latency tradeoff
§ Bounded by RAM
§ batch_timeout_micros
§ Defines batch time window, latency upper-bound
§ Bounded by RAM
§ num_batch_threads
§ Defines parallelism
§ Bounded by CPU cores
§ max_enqueued_batches
§ Defines queue upper bound, throttling
§ Bounded by RAM
Reaching either threshold will trigger a batch
Separate, Non-Batched Requests
Combined, Batched Requests
137. ADVANCED BATCHING & SERVING TIPS
§ Batch Just the GPU/TPU Portions of the Computation Graph
§ Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops
§ Distribute Large Models Into Shards Across TensorFlow Model Servers
§ Batch RNNs Used for Sequential and Time-Series Data
§ Find Best Batching Strategy For Your Data Through Experimentation
§ BasicBatchScheduler: Homogeneous requests (ie Regress or Classify)
§ SharedBatchScheduler: Mixed requests, multi-step, ensemble predict
§ StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads
§ Serve Only One (1) Model Inside One (1) TensorFlow Serving Process
§ Much Easier to Debug, Tune, Scale, and Manage Models in Production.
138. LET’S DEPLOY OPTIMIZED MODEL
§ Navigate to the following notebook:
11_Optimize_Runtime_And_Test
§ https://github.com/PipelineAI/notebooks
139. AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
140. AGENDA
Part 3: Advanced Model Serving + Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
141. KUBERNETES INGRESS
§ Single Service
§ Can also use Service (LoadBalancer or NodePort)
§ Fan Out & Name-Based Virtual Hosting
§ Route Traffic Using Path or Host Header
§ Reduces # of load balancers needed
§ 404 Implemented as default backend
§ Federation / Hybrid-Cloud
§ Creates Ingress objects in every cluster
§ Monitors health and capacity of pods within each cluster
§ Routes clients to appropriate backend anywhere in federation
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-fanout
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
Fan Out (Path)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-virtualhost
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
Virtual Hosting
142. KUBERNETES INGRESS CONTROLLER
§ Ingress Controller Types
§ Google Cloud: kubernetes.io/ingress.class: gce
§ Nginx: kubernetes.io/ingress.class: nginx
§ Istio: kubernetes.io/ingress.class: istio
§ Must Start Ingress Controller Manually
§ Just deploying Ingress is not enough
§ Not started by kube-controller-manager
§ Start Istio Ingress Controller
kubectl apply -f $ISTIO_INSTALL_PATH/install/kubernetes/istio.yaml
153. ISTIO AUTO-SCALING
§ Traffic Routing and Auto-Scaling Occur Independently
§ Istio Continues to Obey Traffic Splits After Auto-Scaling
§ Auto-Scaling May Occur In Response to New Traffic Route
154. A/B & BANDIT MODEL TESTING
§ Perform Live Experiments in Production
§ Compare Existing Model A with Model B, Model C
§ Safe Split-Canary Deployment
§ Pro Tip: Keep Ingress Simple – Use Route Rules Instead!
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-20-5-75
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 20       # 20% still routes to model A
  - labels:
      version: B
    weight: 5        # 5% routes to new model B
  - labels:
      version: C
    weight: 75       # 75% routes to new model C

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-1-2-97
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 1        # 1% routes to model A
  - labels:
      version: B
    weight: 2        # 2% routes to new model B
  - labels:
      version: C
    weight: 97       # 97% routes to new model C

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-97-2-1
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 97       # 97% still routes to model A
  - labels:
      version: B
    weight: 2        # 2% routes to new model B
  - labels:
      version: C
    weight: 1        # 1% routes to new model C
155. AGENDA
Part 3: Advanced Model Serving + Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
158. SPECIAL THANKS TO CHRISTIAN POSTA
§ http://blog.christianposta.com/istio-workshop
159. AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
160. THANK YOU!!
§ Please Star this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline
Contact Me
chris@pipeline.ai
@cfregly