HYPER-PARAMETER TUNING ACROSS THE ENTIRE
AI PIPELINE: MODEL TRAINING TO PREDICTING
GPU TECH CONFERENCE -- SAN JOSE, MARCH 2018
CHRIS FREGLY
FOUNDER @ PIPELINEAI
KEY TAKE-AWAYS
With PipelineAI, You Can…
§ Hyper-Parameter Tuning From Training to Inference
§ Generate Hardware-Specific Pipeline Optimizations
§ Deploy & Compare Optimizations in Live Production
§ Perform Continuous Model Training & Data Labeling
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
INTRODUCTIONS: ME
§ Chris Fregly, Founder & Engineer @ PipelineAI
§ Formerly Netflix, Databricks, IBM Spark Tech
§ Founder @ Advanced Spark TensorFlow Meetup
§ Please Join Our 75,000+ Global Members!!
Contact Me
chris@pipeline.ai
@cfregly
Global Locations
* San Francisco
* Chicago
* Austin
* Washington DC
* Dusseldorf
* London
INTRODUCTIONS: YOU
You Want To …
§ Perform Hyper-Parameter Tuning Across *Entire* Pipeline
§ Measure Results of Tuning Both Offline *and* Online
§ Deploy Models Rapidly, Safely, *Directly* in Production
§ Trace and Explain *Live* Model Predictions
PIPELINEAI IS OPEN SOURCE
§ https://github.com/PipelineAI/pipeline/
§ Please Star this GitHub Repo!
§ “Each Star is Worth $1,500 in Seed Money”
- A Prominent Venture Capitalist in Silicon Valley
http://jrvis.com/red-dwarf/
PIPELINEAI ANNOUNCEMENTS
http://pipeline.ai
http://community.pipeline.ai
PIPELINEAI SUPPORTS ALL MAJOR MODELS
PIPELINEAI TERMINOLOGY
§ “Flask-App Fallacy”: Flask is Not Enough for Production-izing ML/AI Models
§ “Pipeline”: All Phases Including Train, Validate, Optimize, Deploy, and Predict
§ “Experiment”: Across All Environments from Research Lab to Live Production
§ “Turning Knobs”: Hyper-Parameter Tuning Across All Phases of the Pipeline
§ “Model Serving”: Models Serving Predictions in Live Production
§ “Runtime”: Execution Environment for Any Phase of Pipeline (TensorRT, Caffe)
§ “Train-to-Serve”: Training with Intent to Serve Predictions
§ “Train-Serving Skew”: Model Performs Poorly on Live Data
§ “Post-Training Optimization”: Prepare Model and Runtime for Fast Inference
http://NoFlaskApp.com
Any Runtime
Any Device CPU, GPU, TPU, IoT
Any Network and System Configuration
Any Cloud and On-Premise Environment
Any Model
Any Language
Any Framework
Any Hyper-Parameter
1,000,000’s of
Model + Runtime Pipeline
Combinations
We Find the Best Combinations
For Your Model and Workload!
WHOLE-PIPELINE HYPER-PARAMETER TUNING
WHOLE-PIPELINE HYPER-PARAMETER TUNING
WHOLE-PIPELINE HYPER-PARAMETERS
Training: Hyperparameters
pipelinedb.add("learning_rate", 0.025)
pipelinedb.add("batch_size", 8192)
pipelinedb.add("num_epochs", 100)
^^ THIS IS WHERE MOST DATA SCIENTISTS END BECAUSE ^^
^^ THEY HAVE NO WAY OF COLLECTING ANYTHING MORE ^^
^^ UNTIL NOW! ^^
pipelinedb.add("ec2_instance_type", "g3.4xlarge")
pipelinedb.add("utilized_memory_gigabyte", 20)
pipelinedb.add("network_speed_gigabit", 10)
pipelinedb.add("training_precision_bits", 16)
pipelinedb.add("accelerator_type", "nvidia_gpu_v100") # google_tpu
pipelinedb.add("cpu_to_accelerator_network_type", "pcie") # nvlink
pipelinedb.add("cpu_to_accelerator_network_bandwidth_gigabit", 100)
Training: Results
pipelinedb.add("training_accuracy_percent", 95)
pipelinedb.add("validation_accuracy_percent", 94)
pipelinedb.add("training_auc", 0.70)
pipelinedb.add("validation_auc", 0.69)
pipelinedb.add("time_to_train_seconds", 0.69)
Optimization: Hyperparameters
pipelinedb.add("batch_norm_fusing", True)
pipelinedb.add("weight_quantization_bits", 8) # 2-bit, 7-bit
Optimization: Results (Collected At End of Optimization)
pipelinedb.add("weight_quantization_reduction_percent", 50)
Inference: Hyperparameters
pipelinedb.add("runtime_type", "tfserving") # python, tensorrt
pipelinedb.add("runtime_chip", "gpu")
pipelinedb.add("model_type", "tensorflow") # caffe, scikit
pipelinedb.add("request_batch_window_ms", 10)
pipelinedb.add("request_batch_size", 1000)
Inference: Results (Every ~15 Mins Inside PipelineAI Runtime)
pipelinedb.add("latency_99_percentile_ms", 5)
pipelinedb.add("cost_per_prediction_usd", 0.000001)
pipelinedb.add("24_hr_auc", 0.70)
pipelinedb.add("48_hr_auc", 0.30)
Training Optimizing
Inferencing
WHY EMPHASIS ON MODEL INFERENCE?
Model Training
Batch & Boring
Offline in Research Lab
Pipeline Ends at Training
No Insight into Live Production
Small Number of Data Scientists
Optimizations Are Very Well-Known
100's Training Jobs per Day
Model Inference
Real-Time & Exciting!!
Online in Live Production
No Ability To Turn Inference Knobs (Yet)
Extend Model Validation Into Production
Huuuuuuge Number of Application Users
Inference Optimizations Not Yet Explored
1,000,000's Predictions per Sec
GROWTH IN ML/AI MODELS
Data Scientists: 44,000 (2017) -> 11,500,000 (2026)
$39 Billion in 2017 -> $2 Trillion by 2026
Models Trained: 200,000 (2017) -> 50,000,000 (2026)
Model Predictions: 4,000,000 (2017) -> 250,000,000,000 (2026)
MODEL DEPLOYMENT OPTIONS
§ AWS SageMaker
§ Released Nov 2017 @ Re-invent
§ Custom Docker Images for Training/Serving (ie. PipelineAI Images)
§ Distributed TensorFlow Training through Estimator API
§ Traffic Splitting for A/B Model Testing
§ Google Cloud ML Engine
§ Mostly Command-Line Based
§ Driving TensorFlow Open Source API (ie. Estimator API)
§ Azure ML
§ On-Premise Docker, Docker Swarm, Kubernetes, Mesos
PipelineAI Supports All
Hybrid-Cloud, On-Prem,
and Air-Gap Deployments!
WHOLE-PIPELINE OPTIMIZATION OPTIONS
§ Model Training Optimizations
§ Model Hyper-Parameters (ie. Learning Rate)
§ Reduced Precision (ie. FP16 Half Precision)
§ Model Optimizations to Prepare for Inference
§ Quantize Model Weights + Activations From 32-bit to 8-bit
§ Fuse Neural Network Layers Together
§ Model Inference Runtime Optimizations
§ Runtime Config: Request Batch Size, etc
§ Different Runtime: TensorFlow Serving CPU/GPU, Nvidia TensorRT
NVIDIA TENSOR-RT RUNTIME
§ Post-Training Model Optimizations
§ Specific to Nvidia GPUs
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
TENSORFLOW LITE OPTIMIZING CONVERTER
§ Post-Training Model Optimizations
§ Currently Supports iOS and Android
§ On-Device Prediction Runtime
§ Low-Latency, Fast Startup
§ Selective Operator Loading
§ 70KB Min - 300KB Max Runtime Footprint
§ Supports Accelerators (GPU, TPU)
§ Falls Back to CPU without Accelerator
§ Java and C++ APIs
bazel build tensorflow/contrib/lite/toco:toco && \
./bazel-bin/third_party/tensorflow/contrib/lite/toco/toco \
  --input_file=frozen_eval_graph.pb \
  --output_file=tflite_model.tflite \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape="1,224,224,3" \
  --input_array=input \
  --output_array=outputs \
  --std_value=127.5 --mean_value=127.5
PIPELINEAI QUICK START
§ http://quickstart.pipeline.ai
§ Any Model, Any Training Runtime, Any Prediction Runtime
§ Support for Docker, Docker Swarm, Kubernetes, Mesos
§ Package Model+Runtime into a Docker Image
§ Emphasizes Immutable Deployment and Infrastructure
§ Same Image Across All Environments
§ No Library or Dependency Surprises from Laptop to Production
§ Allows Tuning Offline and Online Model+Runtime Together
STEP 1: BUILD MODEL+TRAINING SERVER
§ Train Model with Specific Hyper-Parameters
§ Monitor and Compare Validation Accuracy
§ Tune Hyper-Parameters to Improve Accuracy
pipeline train-server-build --model-name=mnist 
--model-tag=A 
--model-type=tensorflow 
--model-path=./tensorflow/mnist/0.025/model 
Build Model
Training Server A
(Learning Rate 0.025)
pipeline train-server-build --model-name=mnist 
--model-tag=B 
--model-type=tensorflow 
--model-path=./tensorflow/mnist/0.050/model 
Build Model
Training Server B
(Learning Rate 0.050)
STEP 2: TRAIN, MEASURE, TUNE
§ Train Model with Specific Hyper-Parameters
§ Monitor and Compare Validation Accuracy
§ Tune Hyper-Parameters to Improve Accuracy
pipeline train-server-start --model-name=mnist 
--model-tag=A 
--input-host-path=./tensorflow/mnist/input 
--output-host-path=./tensorflow/mnist/output 
--train-args="--learning-rate=0.025 --batch-size=128"
Train
Model A
(Learning Rate 0.025)
pipeline train-server-start --model-name=mnist 
--model-tag=B 
--input-host-path=./tensorflow/mnist/input 
--output-host-path=./tensorflow/mnist/output 
--train-args="--learning-rate=0.050 --batch-size=128"
Train
Model B
(Learning Rate 0.050)
STEP 3: CREATE PREDICT() METHOD
Basic Insight:
def predict(request: bytes) -> bytes:
    return _model.predict(request)

Detailed Insight:
def predict(request: bytes) -> bytes:
    # Step 1: Transform Request (JSON => np.array)
    transformed_request = _transform_request(request)
    # Step 2: Model Predict
    predictions = _model.predict(transformed_request)
    # Step 3: Transform Response (np.array => JSON)
    transformed_response = _transform_response(predictions)
    return transformed_response
§ Multiple Levels of Performance Metrics and Logging
§ Enterprise Adapters for All Metrics and Logging Systems
pipeline predict-server-logs --model-name=mnist --model-tag=cpu
View
Logs
STEP 4: BUILD MODEL+PREDICTION SERVER
pipeline predict-server-build --model-name=mnist 
--model-tag=C 
--model-type=tensorflow 
--model-runtime=tensorrt 
--model-chip=gpu 
--model-path=./tensorflow/mnist/
Build Local
Model Server C
TensorRT GPU
pipeline predict-server-build --model-name=mnist 
--model-tag=A 
--model-type=tensorflow 
--model-runtime=tfserving 
--model-chip=cpu 
--model-path=./tensorflow/mnist/
Build Local
Model Server A
TF Serving CPU
pipeline predict-server-build --model-name=mnist 
--model-tag=B 
--model-type=tensorflow 
--model-runtime=tfserving 
--model-chip=gpu 
--model-path=./tensorflow/mnist/
Build Local
Model Server B
TF Serving GPU
Same Model,
3 Different
Prediction
Runtimes
STEP 5: PREDICT, MEASURE, TUNE (LOCAL)
§ Perform Mini-Load Test on Local Model Server
§ Immediate Feedback on Prediction Performance
§ Compare to Previous Model+Runtime Variations
§ Gain Intuition Before Pushing to Prod
pipeline predict-server-start --model-name=mnist 
--model-tag=A 
--memory-limit=2G
pipeline predict-http-test --model-endpoint-url=http://localhost:8080 
--test-request-path=test_request.json 
--test-request-concurrency=1000
Start Local
Predict Load Test
Start Local
Model Server
STEP 6: DEPLOY, MEASURE, TUNE (IN PROD)
§ Deploy from CLI or Jupyter Notebook
§ Tear-Down and Rollback Models Quickly
§ Shadow Canary: Deploy to 20% Live Traffic
§ Split Canary: Deploy to 97-2-1% Live Traffic
pipeline predict-kube-start --model-name=mnist --model-tag=B
Start Cluster B
pipeline predict-kube-start --model-name=mnist --model-tag=C
Start Cluster C
pipeline predict-kube-start --model-name=mnist --model-tag=A
Start Cluster A
pipeline predict-kube-route --model-name=mnist \
  --model-split-tag-and-weight-dict='{"A":97, "B":2, "C":1}' \
  --model-shadow-tag-list='[]'
Route Live Traffic
STEP 7: OPTIMIZE, MEASURE, RE-DEPLOY
§ Prepare Model for Predicting
§ Simplify Network, Reduce Size
§ Reduce Precision -> Fast Math
§ Some Tools
§ Graph Transform Tool (GTT)
§ tfcompile
After Training
After
Optimizing!
pipeline optimize --optimization-list=['quantize_weights','tfcompile'] \
  --model-name=mnist \
  --model-tag=A \
  --model-path=./tensorflow/mnist/model \
  --model-inputs=['x'] \
  --model-outputs=['add'] \
  --output-path=./tensorflow/mnist/optimized_model
Linear
Regression
Model Size: 70MB –> 70K (!)
STEP 8: EVALUATE MODEL+RUNTIME VARIANT
§ Offline, Batch Metrics
§ Validation + Training Accuracy
§ CPU + GPU Utilization
§ Online, Live Prediction Values
§ Compare Relative Precision
§ Newly-Seen, Streaming Data
§ Online, Real-Time Metrics
§ Response Time, Throughput
§ Cost ($) Per Prediction
STEP 9: DETERMINE PIPELINEAI EFFICIENCY
STEP 10: SHIFT TRAFFIC TO BEST VARIANT
§ A/B Tests
§ Inflexible and Boring
§ Multi-Armed Bandits
§ Adaptive and Exciting!
pipeline predict-kube-route --model-name=mnist \
  --model-split-tag-and-weight-dict='{"A":1, "B":2, "C":97}' \
  --model-shadow-tag-list='[]'
Dynamically Route
Traffic to Winning
Model+Runtime
PIPELINE PROFILING AND TUNING
§ Instrument Code to Generate “Timelines” for Any Metric
§ Analyze with Google Web
Tracing Framework (WTF)
§ Can Also Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline
trace =
timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
trace_file.write(
trace.generate_chrome_trace_format(show_memory=True))
MODEL AND ENSEMBLE TRACING/AUDITING
§ Necessary for Model Explain-ability
§ Fine-Grained Request Tracing
§ Used for Model Ensembles
VIEW REAL-TIME PREDICTION STREAMS
§ Visually Compare Real-time Predictions
Features and
Inputs
Predictions and
Confidences
Model B Model CModel A
CONTINUOUS DATA LABELING AND FIXING
§ Identify and Fix Borderline (Unconfident) Predictions
§ Fix Predictions Along Class Boundaries
§ Facilitate ”Human in the Loop”
§ Path to Crowd-Sourced Labeling
§ Retrain with Newly-Labeled Data
§ Game-ify the Labeling Process
CONTINUOUS MODEL TRAINING
§ The Holy Grail of Machine Learning
§ Kafka, Kinesis, Spark Streaming, Flink, Storm, Heron
PipelineAI Supports
Continuous Model Training
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Traffic Routing
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
SETTING UP TENSORFLOW WITH GPUS
§ Very Painful!
§ Especially inside Docker
§ Use nvidia-docker
§ Especially on Kubernetes!
§ Use the Latest Kubernetes (with Init Script Support)
§ http://pipeline.ai for GitHub + DockerHub Links
TENSORFLOW + CUDA + NVIDIA GPU
VOLTA V100 AND TENSOR CORES
§ 84 Streaming Multiprocessors (SM’s)
§ 5,376 GPU Cores
§ 640 Tensor Cores (ie. Google TPU)
§ Can Perform 640 FP16 4x4 Matrix Multiplies
§ 120 TFLOPS = 4x FP32 and 10x FP64
§ Allows Mixed FP16/FP32 Precision Operations
§ Matrix Dims Should be Multiples of 8
§ More Shared Memory
§ New L0 Instruction Cache
§ Faster L1 Data Cache
GPU HALF-PRECISION SUPPORT
§ FP32: “Full Precision”, FP16: “Half Precision”
§ Two(2) FP16’s in 1 FP32 GPU Core
§ 2x Throughput!
§ Lower Precision is OK
§ Deep learning is approximate
§ The Network Matters Most
§ Not individual neuron accuracy
MORE ON HALF-PRECISION
§ 1997: Related Work by SGI
§ Commercial Request from ILM in 2002
§ Implemented in Silicon by Nvidia in 2002
§ Supported by Pascal P100 and Volta V100
MORE ON REDUCED-PRECISION
§ Less Precision => Less Memory & Bandwidth
=> Faster Math & Less Energy
§ Fits into Smaller Places Close to ALU’s
§ 4-bit, 2-bit, 1-bit (?!) Quantization
§ More Layers Help Maintain Accuracy at Reduced Precision
§ Tip: Scale and Center Dynamic Range at Each Layer
§ Otherwise, FP16’s become 0 - model may not converge!
GPU: 4-WAY DOT PRODUCT OF 8-BIT INTS
§ GPU Hardware and CUDA Support
§ Compute Capability (CC) >= 6.1
FP16 VS. INT8
§ FP16 Has Larger Dynamic Range Than INT8
§ Larger Dynamic Range Allows Higher Precision
§ Truncated FP32 Dynamic Range Higher Than FP16
§ Not IEEE 754 Standard, But Worth Exploring
ENABLING FP16 IN TENSORFLOW
§ Harder Than You Think!
§ TPUs are 16-bit Native
GPU’s With CC 5.3+ (Only), Set the Following:
TF_FP16_MATMUL_USE_FP32_COMPUTE=0
TF_FP16_CONV_USE_FP32_COMPUTE=0
TF_XLA_FLAGS=--xla_enable_fast_math=1
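For illustration, a minimal mixed-precision sketch (not from the slides): keep FP32 "master" weights, do the matmul in FP16, and scale the loss so tiny FP16 gradients don't flush to zero. The variable names and the scale factor of 128 are assumptions.
import tensorflow as tf

loss_scale = 128.0                                        # assumed constant; keeps small gradients representable in FP16
x = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.int64, [None])
W = tf.get_variable("W", [784, 10], dtype=tf.float32)     # FP32 master weights

logits16 = tf.matmul(tf.cast(x, tf.float16), tf.cast(W, tf.float16))  # FP16 math on the GPU
logits = tf.cast(logits16, tf.float32)                     # back to FP32 for the loss
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

opt = tf.train.GradientDescentOptimizer(0.01)
grads_and_vars = opt.compute_gradients(loss * loss_scale, var_list=[W])
train_op = opt.apply_gradients([(g / loss_scale, v) for g, v in grads_and_vars])  # un-scale before the update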
Pascal P100 Volta V100
FP32 VS. FP16 ON AWS GPU INSTANCES
FP16 Half Precision
87.2 T ops/second for p3 Volta V100
4.1 T ops/second for g3 Tesla M60
1.6 T ops/second for p2 Tesla K80
FP32 Full Precision
15.4 T ops/second for p3 Volta V100
4.0 T ops/second for g3 Tesla M60
3.3 T ops/second for p2 Tesla K80
§ Tesla K80
§ Pascal P100
§ Volta V100 (Beta)
§ TPU (Beta, Google Cloud Only)
GOOGLE CLOUD GPU + TPU
GOOGLE CLOUD TPUS
§ Attach/Detach As Needed
§ Scale In/Out As Needed
§ 180 TFlops per Device
§ TPU Pod = 64 TPUs
= 11.5 PetaFlops
§ $6.50 per TPU Hour
§ Supports 16-bit TensorFlow
V100 AND CUDA 9
§ Independent Thread Scheduling - Finally!!
§ Similar to CPU fine-grained thread synchronization semantics
§ Allows GPU to yield execution of any thread
§ Still Optimized for SIMT (Same Instruction Multi-Thread)
§ SIMT units automatically scheduled together
§ Explicit Synchronization
P100 V100
New CUDA
Thread Cooperative Groups
https://devblogs.nvidia.com/cooperative-groups/
GPU CUDA PROGRAMMING
§ Barbaric, But Fun
§ Must Know Hardware Very Well
§ Hardware Changes are Painful
§ Use the Profilers & Debuggers
CUDA STREAMS
§ Asynchronous I/O Transfer
§ Overlap Compute and I/O
§ Keep GPUs Saturated!
§ Used Heavily by TensorFlow
CUDA SHARED AND UNIFIED MEMORY
NUMBA AND PYCUDA
§ Numba is Drop-In Replacement for Numpy
§ PyCuda is Python Binding for CUDA
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays
§ ie. Scalar, Vector, Matrix
§ Operations: MatMul, Add, SummaryLog,…
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed Inputs into Placeholder
§ Fetches: Fetch Output from Operation
§ Variables: What We Learn Through Training
§ aka “Weights”, “Parameters”
§ Devices: Hardware Device (GPU, CPU, TPU, ...)
[Diagram: the user feeds inputs, TensorFlow performs operations and flows tensors, TensorFlow trains variables, and the user fetches outputs]
with tf.device("/cpu:0,/gpu:15"):
TENSORFLOW SESSION
Session
graph: GraphDef
Variables:
“W” : 0.328
“b” : -1.407
Variables are Randomly Initialized, then Periodically Checkpointed
GraphDef is Created During Training, then Frozen for Inference
TENSORFLOW GRAPH EXECUTION
§ Lazy Execution by Default
§ Similar to Spark
§ Eager Execution
§ Similar to PyTorch
§ "Linearize” Execution Minimizes RAM
§ Useful on Single GPU with Limited RAM
§ May Need to Re-Compute (CPU/GPU) vs Store (RAM)
OPERATION PARALLELISM
§ Inter-Op (Between-Op) Parallelism
§ By default, TensorFlow runs multiple ops in parallel
§ Useful for low core and small memory/cache envs
§ Set to one (1)
§ Intra-Op (Within-Op) Parallelism
§ Different threads can use same set of data in RAM
§ Useful for compute-bound workloads (CNNs)
§ Set to # of cores (>=2)
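A minimal sketch of where these two knobs live (the thread counts are illustrative, not recommendations):
import tensorflow as tf

config = tf.ConfigProto(
    inter_op_parallelism_threads=1,   # how many independent ops may run at once
    intra_op_parallelism_threads=8)   # threads used inside a single op (e.g. a large MatMul)
sess = tf.Session(config=config)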
TENSORFLOW MODEL
§ MetaGraph
§ Combines GraphDef and Metadata
§ GraphDef
§ Architecture of your model (nodes, edges)
§ Metadata
§ Asset: Accompanying assets to your model
§ SignatureDef: Maps external to internal tensors
§ Variables
§ Stored separately during training (checkpoint)
§ Allows training to continue from any checkpoint
§ Variables are “frozen” into Constants when preparing for inference
GraphDef
x
W
mul add
b
MetaGraph
Metadata
Assets
SignatureDef
Tags
Version
Variables:
“W” : 0.328
“b” : -1.407
STOCHASTIC GRADIENT DESCENT (SGD)
§ Or “Simply Go Down” :)
§ Small Batch Sizes Are Ideal
§ But not too small!
§ Parallel, Distributed Training Across Devices
§ Each device calculates gradients on small batch
§ Gradients averaged across all devices
§ Training is Fast, Batches are Small
EXTEND EXISTING DATA PIPELINES
§ Data Processing
§ HDFS/Hadoop
§ Spark
§ Containers
§ Docker
§ Schedulers
§ Kubernetes
§ Mesos
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-hadoop</artifactId>
</dependency>
https://github.com/tensorflow/ecosystem
KUBERNETES AND SPARK 2.3
§ Kubernetes-Native
§ Schedule Spark Workers
# Submit Spark Job to Kubernetes Cluster
bin/spark-submit 
--master k8s://https://xx.yy.zz.ww 
--deploy-mode cluster 
--name spark-pi 
--class org.apache.spark.examples.SparkPi 
--conf spark.executor.instances=5 
--conf spark.kubernetes.container.image=<spark-image> 
--conf spark.kubernetes.driver.pod.name=spark-pi-driver 
local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
# View Kubernetes Resources
kubectl get pods -l 'spark-role in (driver, executor)' -w
# View Driver Logs in Real-Time
kubectl logs -f spark-pi-driver
http://blog.kubernetes.io/2018/03/apache-spark-23-with-native-kubernetes.html
http://community.pipeline.ai
TENSORFLOW + SPARK OPTIONS
§ TensorFlow on Spark (Yahoo!)
§ TensorFrames <-Dead Project->
§ Separate Clusters for Spark and TensorFlow
§ Spark: Boring Batch ETL
§ TensorFlow: Exciting AI Model Training and Serving
§ Hand-Off Point is S3, HDFS, Google Cloud Storage
TENSORFLOW + KAFKA
§ TensorFlow Dataset API Now Supports Kafka!!
from tensorflow.contrib.kafka.python.ops import kafka_dataset_ops
repeat_dataset = kafka_dataset_ops.KafkaDataset(topics,
group="test",
eof=True)
.repeat(num_epochs)
batch_dataset = repeat_dataset.batch(batch_size)
…
TENSORFLOW I/O
§ TFRecord File Format
§ TensorFlow Python and C++ Dataset API
§ Python Module and Packaging
§ Comfort with Python’s Lack of Strong Typing
§ C++ Concurrency Constructs
§ Protocol Buffers
§ Old Queue API
§ GPU/CUDA Memory Tricks And a Lot of Coffee!
FEED TENSORFLOW TRAINING PIPELINE
§ Training is Limited by the Ingestion Pipeline
§ Number One Problem We See Today
§ Scaling GPUs Up / Out Doesn’t Help
§ GPUs are Heavily Under-Utilized
§ Use tf.dataset API for best perf
§ Efficient parallel async I/O (C++)
Tesla K80 Volta V100
DON’T USE FEED_DICT!!
§ feed_dict Requires Python <-> C++ Serialization
§ Not Optimized for Production Ingestion Pipelines
§ Retrieves Next Batch After Current Batch is Done
§ Single-Threaded, Synchronous
§ CPUs/GPUs Not Fully Utilized!
§ Use Queue or Dataset APIs
§ Queues are old & complex
sess.run(train_step, feed_dict={…})
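As a hedged sketch of the alternative, the same training step driven by the Dataset API instead of feed_dict (parse_fn, build_loss, and the file list are assumed helpers):
filenames = ["train-00000.tfrecord", "train-00001.tfrecord"]
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(parse_fn).shuffle(10000).batch(128)      # parse_fn returns (image, label)
images, labels = dataset.make_one_shot_iterator().get_next()   # tensors, not Python objects

loss = build_loss(images, labels)
train_step = tf.train.AdamOptimizer().minimize(loss)
sess.run(train_step)   # no feed_dict: the input pipeline runs in C++ threads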
DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument Code to Generate “Timelines”
§ Analyze with Google Web
Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline
trace =
timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
trace_file.write(
trace.generate_chrome_trace_format(show_memory=True))
QUEUES
§ More than Traditional Queue
§ Uses CUDA Streams
§ Perform I/O, Pre-processing, Cropping, Shuffling, …
§ Pull from HDFS, S3, Google Storage, Kafka, ...
§ Combine Many Small Files into Large TFRecord Files
§ Use CPUs to Free GPUs for Compute
§ Helps Saturate CPUs and GPUs
QUEUE CAPACITY PLANNING
§ batch_size
§ # examples / batch (ie. 64 jpg)
§ Limited by GPU RAM
§ num_processing_threads
§ CPU threads pull and pre-process batches of data
§ Limited by CPU Cores
§ queue_capacity
§ Limited by CPU RAM (ie. 5 * batch_size)
TF.DTYPE
§ tf.float32, tf.int32, tf.string, etc
§ Default is usually tf.float32
§ Most TF operations support numpy natively
# Tuple of (tf.float32 scalar, tf.int32 array of 100 elements)
(tf.random_uniform([1]), tf.random_uniform([1, 100], maxval=100, dtype=tf.int32))  # maxval is required for integer dtypes
TF.TRAIN.FEATURE
§ Three(3) Feature Types
§ Bytes
§ Float
§ Int64
§ Actually, They Are Lists of 0..* Values of 3 Types Above
§ BytesList
§ FloatList
§ Int64List
TF.TRAIN.FEATURES
§ Map of {String -> Feature}
§ Better Name is “FeatureMap”
§ Organize Feature into Categories
§ Access Feature Using
Features['feature_name']
TF.TRAIN.FEATURELIST
§ List of 0..* Feature
§ Access Feature Using
FeatureList[0]
TF.TRAIN.FEATURELISTS
§ Map of {String -> FeatureList}
§ Better Name is “FeatureListMap”
§ Organize FeatureList into Categories
§ Access FeatureList Using
FeatureLists['feature_list_name']
TF.TRAIN.EXAMPLE
§ Key-Value Dictionary
§ String -> tf.train.Feature
§ Not a Self-Describing Format (?!)
§ Must Establish Schema Upfront by Writers and Readers
§ Must Obey the Following Conventions
§ Feature K must be of Type T in all Examples
§ Feature K can be omitted, default can be configured
§ If Feature K exists as empty, no default is applied
TF.TFRECORD
§ Contains many tf.train.Example’s
=> tf.train.Example contains many tf.train.Feature’s
=> tf.train.Feature contains BytesList, FloatList, Int64List
§ Record-Oriented Format of Binary Strings (ProtoBuffer)
§ Must Convert tf.train.Example to Serialized String
§ Use tf.train.Example.SerializeToString()
§ Used for Large Scale ML/AI Training
§ Not Meant for Random or Non-Sequential Access
§ Compression: GZIP, ZLIB
uint64 length
uint32 masked_crc32_of_length
byte data[length]
uint32 masked_crc32_of_data
EMBRACE BINARY FORMATS!
§ Unreadable and Scary, But Much More Efficient
§ Better Use of Memory and Disk Cache
§ Faster Copying and Moving
§ Smaller on the Wire
CONVERTING MNIST DATA TO TFRECORD
def convert_to_tfrecord(data, name):
  images = data.images
  labels = data.labels
  num_examples = data.num_examples
  rows = images.shape[1]
  cols = images.shape[2]
  depth = images.shape[3]
  filename = os.path.join(FLAGS.directory, name + '.tfrecords')
  with tf.python_io.TFRecordWriter(filename) as writer:
    for index in range(num_examples):
      image_raw = images[index].tostring()
      example = tf.train.Example(
        features=tf.train.Features(
          feature={'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[rows])),
                   'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[cols])),
                   'depth': tf.train.Feature(int64_list=tf.train.Int64List(value=[depth])),
                   'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[int(labels[index])])),
                   'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_raw]))}))
      writer.write(example.SerializeToString())
tf.python_io.TFRecordWriter
READING TF.TFRECORD’S
§ tf.data.TFRecordDataset <- Preferred (Dataset API)
§ tf.TFRecordReader() <- Not Preferred (Queue API)
§ tf.python_io.tf_record_iterator <- Preferred
§ Used as Python Generator
for serialized_example in tf.python_io.tf_record_iterator(filename):
  example = tf.train.Example()
  example.ParseFromString(serialized_example)
  image_raw = example.features.feature['image_raw'].bytes_list.value[0]
  height = example.features.feature['height'].int64_list.value[0]
…
DE-SERIALIZING TF.TFRECORD’S
feature_map = {'height': tf.FixedLenFeature([], tf.int64),
               'width': tf.FixedLenFeature([], tf.int64),
               'depth': tf.FixedLenFeature([], tf.int64),
               'label': tf.FixedLenFeature([], tf.int64),
               'image_raw': tf.FixedLenFeature([], tf.string)}
deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)
# Cast height from int64 to int32
height = tf.cast(deserialized_features['height'], tf.int32)
…
# Convert raw image from string to float32
image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
MORE TF.TRAIN.FEATURE CONSTRUCTS
§ tf.VarLenFeature
§ tf.FixedLenFeature, tf.FixedLenSequenceFeature
§ tf.SparseFeature
feature_map = {'height': tf.FixedLenFeature((), tf.int64, …),
               …
               'image_raw': tf.VarLenFeature(tf.string)}
deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)
# Cast height from int64 to int32
height = tf.cast(deserialized_features['height'], tf.int32)
…
# Convert raw image from string to float32
image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
TF.DATA.DATASET
tf.Tensor => tf.data.Dataset
  Dataset.from_tensors((features, labels))
  Dataset.from_tensor_slices((features, labels))
  TextLineDataset(filenames)

Functional Transformations
  dataset.map(lambda x: tf.decode_jpeg(x))
  dataset.repeat(NUM_EPOCHS)
  dataset.batch(BATCH_SIZE)

Python Generator => tf.data.Dataset
  def generator():
    while True:
      yield ...
  dataset.from_generator(generator, tf.int32)

Dataset => One-Shot Iterator
  iter = dataset.make_one_shot_iterator()
  next_element = iter.get_next()
  while …:
    sess.run(next_element)

Dataset => Initializable Iterator
  iter = dataset.make_initializable_iterator()
  sess.run(iter.initializer, feed_dict=PARAMS)
  next_element = iter.get_next()
  while …:
    sess.run(next_element)
TIP: Use Dataset.prefetch() and parallel version of Dataset.map()
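A small sketch of that tip (parse_fn and the thread/buffer counts are illustrative):
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(parse_fn, num_parallel_calls=8)   # parse records on several CPU threads
dataset = dataset.batch(64)
dataset = dataset.prefetch(buffer_size=1)               # prepare batch N+1 while the GPU trains on batch N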
MORE TF.DATA.DATASET CONSTRUCTS
§ FixedLengthRecordDataset
§ Binary Files
§ TextLineDataset
§ CSV, JSON, XML, etc
§ TFRecordDataset
§ TFRecords
§ Iterator
“The TF Dataset Dude”
Tutorial: https://t.co/havjwJ46EY
DATASET API TRANSFORMATIONS
Standard Custom (Contrib)
CUSTOM TF.PY_FUNC() TRANSFORMATION
§ Custom Python Function
§ Similar to Spark Python UDF (Eek!)
§ You Will Suffer a Big Performance Penalty
§ Try to Use TensorFlow-Native Operations
§ Remember, you can build your own in C++!
TF.DATA.ITERATOR TYPES
§ One Shot: Iterates Once Through the Dataset
§ Currently, best Iterator to use with Estimator API
§ Initializable: Runs iterator.initializer() Once
§ Re-Initializable: Runs iterator.initializer() Many
§ Ie. Random shuffling between iterations (epochs) of training
§ Feedable: Switch Between Different Dataset
§ Uses Feed and Placeholder to explicitly feed the iterator
§ Doesn’t require initialization when switching
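A hedged sketch of the Feedable case described above (training_dataset and validation_dataset are assumed to already exist):
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
    handle, training_dataset.output_types, training_dataset.output_shapes)
next_element = iterator.get_next()

training_handle = sess.run(training_dataset.make_one_shot_iterator().string_handle())
validation_handle = sess.run(validation_dataset.make_one_shot_iterator().string_handle())

sess.run(next_element, feed_dict={handle: training_handle})     # pull a training batch
sess.run(next_element, feed_dict={handle: validation_handle})   # switch to validation, no re-init needed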
TF.DATA.ITERATOR SIMPLE EXAMPLE
dataset = tf.data.Dataset.range(5)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
# Typically `result` will be the output of a model, or an optimizer's
# training operation.
result = tf.add(next_element, next_element)
sess.run(iterator.initializer)
while True:
  try:
    sess.run(result)  # => 0, 2, 4, 6, 8
  except tf.errors.OutOfRangeError:
    print('End of dataset…')
    break
TF.DATA.ITERATOR TEXT EXAMPLE
filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.TextLineDataset(filenames)
filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.flat_map(
lambda filename: (
tf.data.TextLineDataset(filename)
.skip(1)
.filter(lambda line: tf.not_equal(tf.substr(line, 0, 1), "#"))))
§ Skip 1st Header Line and Comment Lines Starting with `#`
TF.DATA.ITERATOR NUMPY EXAMPLE
# Load the training data into two NumPy arrays, for example using `np.load()`.
with np.load("/var/data/training_data.npy") as data:
  features = data["features"]
  labels = data["labels"]

# Assume that each row of `features` corresponds to the same row as `labels`.
assert features.shape[0] == labels.shape[0]

features_placeholder = tf.placeholder(features.dtype, features.shape)
labels_placeholder = tf.placeholder(labels.dtype, labels.shape)
dataset = tf.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder))
# …Your Dataset Transformations…
iterator = dataset.make_initializable_iterator()
sess.run(iterator.initializer, feed_dict={features_placeholder: features,
                                          labels_placeholder: labels})
TF.DATA.ITERATOR TFRECORD EXAMPLE
filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(...) # Parse the record into tensors.
dataset = dataset.repeat() # Repeat the input indefinitely.
dataset = dataset.batch(32) # Batches of size 32
iterator = dataset.make_initializable_iterator()
# You can feed the initializer with the appropriate filenames for the current
# phase of execution, e.g. training vs. validation.
# Initialize `iterator` with training data.
training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
sess.run(iterator.initializer, feed_dict={filenames: training_filenames})
# Initialize `iterator` with validation data.
validation_filenames = ["/var/data/validation1.tfrecord", ...]
sess.run(iterator.initializer, feed_dict={filenames: validation_filenames})
FUTURE OF DATASET API
§ Replaces Queue API
§ More Functional Operators
§ Automatic GPU Data Staging and Pre-Fetching
§ Under-utilized GPUs Assisting with Data Ingestion
§ More Profiling and Recommendations for Ingestion
TF.ESTIMATOR.ESTIMATOR (1/2)
§ Supports Keras!
§ Unified API for Local + Distributed
§ Provide Clear Path to Production
§ Enable Rapid Model Experiments
§ Provide Flexible Parameter Tuning
§ Enable Downstream Optimizing & Serving Infrastructure
§ Nudge Users to Best Practices Through Opinions
§ Provide Hooks/Callbacks to Override Opinions
TF.ESTIMATOR.ESTIMATOR (2/2)
§ “Train-to-Serve” Design
§ Create Custom Estimator or Re-Use Canned Estimator
§ Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict)
§ Hooks for All Phases of Model Training and Evaluation
§ Load Input: input_fn()
§ Train: model_fn() and train()
§ Evaluate: eval_fn() and evaluate()
§ Performance Metrics: Loss, Accuracy, …
§ Save and Export: export_savedmodel()
§ Predict: predict() Uses the slow sess.run()
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/
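For orientation, a minimal custom Estimator sketch of the train-to-serve flow above (train_input_fn, eval_input_fn, and serving_input_receiver_fn are assumed helpers, and the sizes are illustrative):
def model_fn(features, labels, mode, params):
  logits = tf.layers.dense(features["x"], units=10)
  predictions = {"classes": tf.argmax(logits, axis=1)}
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  if mode == tf.estimator.ModeKeys.TRAIN:
    train_op = tf.train.AdamOptimizer(params["learning_rate"]).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
  eval_ops = {"accuracy": tf.metrics.accuracy(labels, predictions["classes"])}
  return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=eval_ops)

estimator = tf.estimator.Estimator(model_fn=model_fn, params={"learning_rate": 0.025})
estimator.train(input_fn=train_input_fn, steps=1000)
estimator.evaluate(input_fn=eval_input_fn)
estimator.export_savedmodel("./export", serving_input_receiver_fn)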
TF.CONTRIB.LEARN.EXPERIMENT
§ Easier-to-Use Distributed TensorFlow
§ Same API for Local and Distributed
§ Combines Estimator with input_fn()
§ Used for Training, Evaluation, & Hyper-Parameter Tuning
§ Distributed Training Defaults to Data-Parallel & Async
§ Cluster Configuration is Fixed at Start of Training Job
§ No Auto-Scaling Allowed, but That’s OK for Training
Note: The Experiment API Will Likely Be Deprecated Soon
ESTIMATOR + EXPERIMENT CONFIGS
§ TF_CONFIG
§ Special environment variable for config
§ Defines ClusterSpec in JSON incl. master, workers, PS’s
§ Distributed mode: ‘{"environment":"cloud"}’
§ Local: ‘{"environment":"local", "task":{"type":"worker"}}’
§ RunConfig: Defines checkpoint interval, output directory,
§ HParams: Hyper-parameter tuning parameters and ranges
§ learn_runner creates RunConfig before calling run() & tune()
§ schedule is set based on {"task":{"type":…}}
TF_CONFIG='{
  "environment": "cloud",
  "cluster": {
    "master": ["worker0:2222"],
    "worker": ["worker1:2222"],
    "ps": ["ps0:2222"]
  },
  "task": {"type": "ps", "index": "0"}
}'
ESTIMATOR + KERAS
§ Distributed TensorFlow (Estimator) + Easy to Use (Keras)
§ tf.keras.estimator.model_to_estimator()
# Instantiate a Keras inception v3 model.
keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None)
# Compile model with the optimizer, loss, and metrics you'd like to train with.
keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9),
                           loss='categorical_crossentropy',
                           metrics=['accuracy'])
# Create an Estimator from the compiled Keras model.
est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3)
# Treat the derived Estimator as you would any other Estimator. For example,
# the following derived Estimator calls the train method:
est_inception_v3.train(input_fn=my_training_set, steps=2000)
“CANNED” ESTIMATORS
§ Commonly-Used Estimators
§ Pre-Tested and Pre-Tuned
§ DNNClassifier, TensorForestEstimator
§ Always Use Canned Estimators If Possible
§ Reduce Lines of Code, Complexity, and Bugs
§ Use FeatureColumn to Define & Create Features
Custom vs. Canned
@ Google, August 2017
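A hedged sketch of a canned estimator wired to feature columns (column names and train_input_fn are assumptions):
age = tf.feature_column.numeric_column("age")
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    "occupation", hash_bucket_size=1000)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[age, tf.feature_column.embedding_column(occupation, dimension=8)],
    hidden_units=[128, 64],
    n_classes=2)
estimator.train(input_fn=train_input_fn, steps=1000)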
ESTIMATOR + DATASET API
def input_fn():
  def generator():
    while True:
      yield ...
  my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)
  # A one-shot iterator automatically initializes itself on first use.
  iter = my_dataset.make_one_shot_iterator()
  # The return value of get_next() matches the dataset element type.
  images, labels = iter.get_next()
  return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)
estimator.train(input_fn=input_fn, …)
OPTIMIZER + ESTIMATOR API + TPU’S
run_config = tpu_config.RunConfig()
tpu_config = tf.contrib.tpu.TPUConfig(FLAGS.iterations,
FLAGS.num_shards)
estimator = tpu_estimator.TpuEstimator(model_fn=model_fn,
config=run_config)
estimator.train(input_fn=input_fn, num_epochs=10, …)
optimizer = tpu_optimizer.CrossShardOptimizer(
tf.train.GradientDescentOptimizer(learning_rate=…))
train_op = optimizer.minimize(loss)
estimator_spec = tf.estimator.EstimatorSpec(train_op=train_op, loss=…)
https://www.tensorflow.org/programmers_guide/using_tpu
DATASET API TIMELINES (TENSORBOARD)
§ Use Dataset.prefetch()!!
§ Helps prevent bottlenecks in I/O pipeline
TPU COMPATIBILITY (TENSORBOARD>=1.6)
TPU PROFILING
pip install cloud-tpu-profiler==1.5.1
capture_tpu_profile 
--tpu_name=$TPU_NAME 
--logdir=$MODEL_DIR
https://cloud.google.com/tpu/docs/cloud-tpu-tools
tensorboard 
--logdir=$MODEL_DIR
TPU TIMELINE (TENSORBOARD)
INPUT PIPELINE ANALYSIS
§ Determine if Pipeline is Input-Bound
TF.CONTRIB.LEARN.HEAD (OBJECTIVES)
§ Single-Objective Estimator
§ Single classification prediction
§ Multi-Objective Estimator
§ One (1) classification prediction
§ One(1) final layer to feed into next model
§ Multiple Heads Used to Ensemble Models
§ Treats neural network as a feature engineering step
§ Supported by TensorFlow Serving
TF.LAYERS
§ Standalone Layer or Entire Sub-Graphs
§ Functions of Tensor Inputs & Outputs
§ Mix and Match with Operations
§ Assumes 1st Dimension is Batch Size
§ Handles One (1) to Many (*) Inputs
§ Metrics are Layers
§ Loss Metric (Per Mini-Batch)
§ Accuracy and MSE (Across Mini-Batches)
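A small sketch of mixing layers and metric ops as plain Tensor functions (images and labels are assumed to be an NHWC batch and its labels):
net = tf.layers.conv2d(images, filters=32, kernel_size=3, activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.flatten(net)
logits = tf.layers.dense(net, units=10)                  # layers are Tensor -> Tensor functions
accuracy, accuracy_update = tf.metrics.accuracy(         # metrics compose the same way
    labels, tf.argmax(logits, axis=1))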
TF.FEATURE_COLUMN
§ Used by Canned Estimator
§ Declaratively Specify Training Inputs
§ Converts Sparse to Dense Tensors
§ Sparse Features: Query Keyword, ProductID
§ Dense Features: One-Hot, Multi-Hot
§ Wide/Linear: Use Feature-Crossing
§ Deep: Use Embeddings
TF.FEATURE_COLUMN EXAMPLE
§ Continuous + One-Hot + Embedding
deep_columns = [
age,
education_num,
capital_gain,
capital_loss,
hours_per_week,
tf.feature_column.indicator_column(workclass),
tf.feature_column.indicator_column(education),
tf.feature_column.indicator_column(marital_status),
tf.feature_column.indicator_column(relationship),
# To show an example of embedding
tf.feature_column.embedding_column(occupation, dimension=8),
]
FEATURE CROSSING
§ Create New Features by Combining Existing Features
§ Limitation: Combinations Must Exist in Training Dataset
base_columns = [
education, marital_status, relationship, workclass, occupation, age_buckets
]
crossed_columns = [
tf.feature_column.crossed_column(
['education', 'occupation'], hash_bucket_size=1000),
tf.feature_column.crossed_column(
['age_buckets', 'education', 'occupation'], hash_bucket_size=1000)
]
SEPARATE TRAINING + EVALUATION
§ Separate Training and Evaluation Clusters
§ Evaluate Upon Checkpoint
§ Avoid Resource Contention
§ Training Continues in Parallel with Evaluation
Training
Cluster
Evaluation
Cluster
Parameter Server
Cluster
BATCH (RE-)NORMALIZATION (2015, 2017)
§ Each Mini-Batch May Have Wildly Different Distributions
§ Normalize per Batch (and Layer)
§ Faster Training, Learns Quicker
§ Final Model is More Accurate
§ TensorFlow is already on 2nd Generation Batch Algorithm
§ First-Class Support for Fusing Batch Norm Layers
§ Final mean + variance Are Folded Into Graph Later
-- (Almost) Always Use Batch (Re-)Normalization! --
z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth/channels]))
beta = tf.Variable(tf.zeros([depth/channels]))
bn = tf.nn.batch_normalization(a, a_mean, a_var,
                               beta, scale, 0.001)
DROPOUT (2014)
§ Training Technique
§ Prevents Overfitting
§ Helps Avoid Local Minima
§ Inherent Ensembling Technique
§ Creates and Combines Different Neural Architectures
§ Expressed as Probability Percentage (ie. 50%)
§ Boost Other Weights During Validation & Prediction
Perform Dropout
(Training Phase)
Boost for Dropout
(Validation & Prediction Phase)
0%
Dropout
50%
Dropout
BATCH NORM, DROPOUT + ESTIMATOR API
§ Must Specify Evaluation or Training Mode
§ These Will Behave Differently Depending on Mode
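A hedged model_fn fragment showing the mode switch (layer sizes are illustrative; only the TRAIN branch is sketched):
def model_fn(features, labels, mode, params):
  is_training = (mode == tf.estimator.ModeKeys.TRAIN)
  net = tf.layers.dense(features["x"], 256)
  net = tf.layers.batch_normalization(net, training=is_training)  # batch stats only while training
  net = tf.nn.relu(net)
  net = tf.layers.dropout(net, rate=0.5, training=is_training)    # becomes a no-op at eval/predict time
  logits = tf.layers.dense(net, 10)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

  # batch norm keeps its moving mean/variance fresh via UPDATE_OPS
  update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
  with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer().minimize(loss, global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)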
SAVED MODEL FORMAT
§ Different Format than Traditional Exporter
§ Contains Checkpoints, 1..* MetaGraph’s, and Assets
§ Export Manually with SavedModelBuilder
§ Estimator.export_savedmodel()
§ Hooks to Generate SignatureDef
§ Use saved_model_cli to Verify
§ Used by TensorFlow Serving
§ New Standard Export Format? (Catching on Slowly…)
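A minimal manual-export sketch with SavedModelBuilder (sess, x_observed, and y_pred are assumed to exist, as in the SignatureDef example later):
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants, signature_def_utils, tag_constants

builder = saved_model_builder.SavedModelBuilder("./export/1")        # "1" = model version directory
signature = signature_def_utils.predict_signature_def(
    inputs={"inputs": x_observed}, outputs={"outputs": y_pred})
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
builder.save()
# inspect with:  saved_model_cli show --dir ./export/1 --all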
TENSORFLOW DEBUGGER
§ Step through Operations
§ Inspect Inputs and Outputs
§ Wrap Session in Debug Session
sess = tf.Session(config=config)
sess =
tf_debug.LocalCLIDebugWrapperSession(sess)
https://www.tensorflow.org/programmers_guide/debugger
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
SINGLE NODE, MULTI-GPU TRAINING
§ cpu:0
§ By default, all CPUs
§ Requires extra config to target a CPU
§ gpu:0..n
§ Each GPU has a unique id
§ TF usually prefers a single GPU
§ xla_cpu:0, xla_gpu:0..n
§ “JIT Compiler Device”
§ Hints TensorFlow to attempt JIT Compile
with tf.device(“/cpu:0”):
with tf.device(“/gpu:0”):
with tf.device(“/gpu:1”):
GPU 0 GPU 1
DISTRIBUTED, MULTI-NODE TRAINING
§ TensorFlow Automatically Inserts Send and Receive Ops into Graph
§ Parameter Server Synchronously Aggregates Updates to Variables
§ Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS
[Diagram: a single node with one or more GPUs vs. multiple worker nodes (Worker0..Worker2), each with gpu0..gpu3]
DATA PARALLEL VS. MODEL PARALLEL
§ Data Parallel (“Between-Graph Replication”)
§ Send exact same model to each device
§ Each device operates on partition of data
§ ie. Spark sends same function to many workers
§ Each worker operates on their partition of data
§ Model Parallel (“In-Graph Replication”)
§ Send different partition of model to each device
§ Each device operates on all data
§ Difficult, but required for larger models with lower-memory GPUs
SYNCHRONOUS VS. ASYNCHRONOUS
§ Synchronous
§ Nodes compute gradients
§ Nodes update Parameter Server (PS)
§ Nodes sync on PS for latest gradients
§ Asynchronous
§ Some nodes delay in computing gradients
§ Nodes don’t update PS
§ Nodes get stale gradients from PS
§ May not converge due to stale reads!
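A hedged sketch of opting into synchronous updates (num_workers, is_chief, and loss are assumed to come from the cluster setup):
opt = tf.train.GradientDescentOptimizer(0.025)
opt = tf.train.SyncReplicasOptimizer(opt,
                                     replicas_to_aggregate=num_workers,   # wait for this many gradients
                                     total_num_replicas=num_workers)
train_op = opt.minimize(loss, global_step=tf.train.get_global_step())
sync_hook = opt.make_session_run_hook(is_chief)   # chief initializes the shared gradient queues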
CHIEF WORKER
§ Chief Defaults to Worker Task 0
§ Task 0 is guaranteed to exist
§ Performs Maintenance Tasks
§ Writes log summaries
§ Instructs PS to checkpoint vars
§ Performs PS health checks
§ (Re-)Initialize variables at (re-)start of training
NODE AND PROCESS FAILURES
§ Checkpoint to Persistent Storage (HDFS, S3)
§ Use MonitoredTrainingSession and Hooks
§ Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos)
§ Understand Failure Modes and Recovery States
Stateless, Not Bad: Training Continues. Stateful, Bad: Training Must Stop. Dios Mio! Long Night Ahead…
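A minimal MonitoredTrainingSession sketch (server, task_index, and train_op are assumed; the HDFS checkpoint path is hypothetical):
with tf.train.MonitoredTrainingSession(
    master=server.target,                       # from tf.train.Server(cluster_spec, job_name, task_index)
    is_chief=(task_index == 0),                 # chief writes checkpoints and summaries
    checkpoint_dir="hdfs://namenode:8020/models/mnist/ckpt",
    hooks=[tf.train.StopAtStepHook(last_step=100000)]) as mon_sess:
  while not mon_sess.should_stop():
    mon_sess.run(train_op)                      # restarts recover automatically from the checkpoint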
ADVANCED DEVICE PLACEMENT STRATEGIES
§ Reinforcement Learning Adapts to Real-Time Conditions
§ Manual Device Placement is Static
§ TensorFlow Grappler Project
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
XLA FRAMEWORK
§ XLA: “Accelerated Linear Algebra”
§ Reduce Reliance on Custom Operators
§ Intermediate Representation used by Hardware Vendors
§ Improve Portability
§ Increase Execution Speed
§ Decrease Memory Usage
§ Decrease Mobile Footprint
Helps TensorFlow Be Flexible AND Performant!!
XLA HIGH LEVEL OPTIMIZER (HLO)
§ HLO: “High Level Optimizer”
§ Compiler Intermediate Representation (IR)
§ Independent of source and target language
§ XLA Step 1 Emits Target-Independent HLO
§ XLA Step 2 Emits Target-Dependent LLVM
§ LLVM Emits Native Code Specific to Target
§ Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
XLA IS DESIGNED FOR RE-USE
§ Pluggable Backends
§ HLO “Toolkit”
§ Call BLAS or cuDNN
§ Use LLVM or BYO Low-Level-Optimizer
MINIMAL XLA BACKEND
§ HLO / LLVM Pipeline
§ StreamExecutor Plugin
XLA CPU BACKEND
XLA GPU / NVIDIA PTX BACKEND
XLA GPU / OPENCL BACKEND
CPU HLO PIPELINE
GPU HLO PIPELINE
XLA PERFORMANCE OPTIMIZATIONS
§ JIT Training
§ MNIST: 30% Speed Up
§ Inception: 20% Speed Up
§ Basic LSTM: 80% Speed Up
§ Translation Model BNMT: 20% Speed Up
§ AOT Inference (Next Section)
§ LSTM Model Size: 1 MB => 10 KB
JIT COMPILER
§ JIT: “Just-In-Time” Compiler
§ Built on XLA Framework
§ Reduce Memory Movement – Especially with GPUs
§ Reduce Overhead of Multiple Function Calls
§ Similar to Spark Operator Fusing in Spark 2.0
§ Unroll Loops, Fuse Operators, Fold Constants, …
§ Scopes: session, device, with jit_scope():
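Two hedged ways to turn JIT on (x, W, and b are assumed tensors):
# Session-scoped JIT (sketch):
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# Scope-limited JIT (sketch):
from tensorflow.contrib.compiler import jit
with jit.experimental_jit_scope():
  y = tf.matmul(x, W) + b        # ops created here are hinted for XLA fusion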
TO JIT OR NOT TO JIT
VISUALIZING JIT COMPILER IN ACTION
Before JIT After JIT
Google Web Tracing Framework:
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline
trace =
timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
trace_file.write(
trace.generate_chrome_trace_format(show_memory=True))
run_options = tf.RunOptions(trace_level=tf.RunOptions.SOFTWARE_TRACE)
run_metadata = tf.RunMetadata()
sess.run(options=run_options,
run_metadata=run_metadata)
VISUALIZING FUSING OPERATORS
pip install graphviz
dot -Tpng 
/tmp/hlo_graph_1.w5LcGs.dot 
-o hlo_graph_1.png
GraphViz:
http://www.graphviz.org
hlo_*.dot files generated by XLA
XLA COMPILATION SUMMARY
§ Generates Code and Libraries for Your Computation
§ Packages Libraries Needed for Your
§ Eliminates Dispatch Overhead of Operations
§ Fuses Operations to Avoid Memory Round Trip
§ Analyzes Buffers to Reuse Memory
§ Updates Memory In-Place
§ Unrolls Loops with Your Data Dimensions (ie. Batch Size)
§ Vectorizes Operations Specific to Your Data Dimensions
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Traffic Routing
WE ARE NOW…
…OPTIMIZING Models
AFTER Model Training
TO IMPROVE Model Serving
PERFORMANCE!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
AOT COMPILER
§ Standalone, Ahead-Of-Time (AOT) Compiler
§ Built on XLA framework
§ tfcompile
§ Creates executable with minimal TensorFlow Runtime needed
§ Includes only dependencies needed by subgraph computation
§ Creates functions with feeds (inputs) and fetches (outputs)
§ Packaged as cc_libary header and object files to link into your app
§ Commonly used for mobile device inference graph
§ Currently, only CPU x86-64 and ARM are supported - no GPU
GRAPH TRANSFORM TOOL (GTT)
§ Post-Training Optimization to Prepare for Inference
§ Remove Training-only Ops (checkpoint, drop out, logs)
§ Remove Unreachable Nodes between Given feed -> fetch
§ Fuse Adjacent Operators to Improve Memory Bandwidth
§ Fold Final Batch Norm mean and variance into Variables
§ Round Weights/Variables to improve compression (ie. 70%)
§ Quantize (FP32 -> INT8) to Speed Up Math Operations
AFTER TRAINING, BEFORE OPTIMIZATION
[Diagram: the unoptimized post-training graph, in which the user feeds inputs, TensorFlow performs operations and flows tensors over trained variables, and the user fetches outputs]
POST-TRAINING GRAPH TRANSFORMS
transform_graph \
  --in_graph=unoptimized_cpu_graph.pb \       <- Original Graph
  --out_graph=optimized_cpu_graph.pb \        <- Transformed Graph
  --inputs='x_observed:0' \                   <- Feed (Input)
  --outputs='Add:0' \                         <- Fetch (Output)
  --transforms='                              <- List of Transforms
    strip_unused_nodes
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms
    quantize_weights
    quantize_nodes'
AFTER STRIPPING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ Results
§ Graph much simpler
§ File size much smaller
AFTER REMOVING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ Results
§ Pesky nodes removed
§ File size a bit smaller
AFTER FOLDING CONSTANTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ Results
§ Placeholders (feeds) -> Variables*
(*Why Variables and not Constants?)
AFTER FOLDING BATCH NORMS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ Results
§ Graph remains the same
§ File size approximately the same
AFTER QUANTIZING WEIGHTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ Results
§ Graph is same, file size is smaller, compute is faster
WEIGHT (VARIABLE) QUANTIZATION
§ FP16 or INT8: Smaller & Computationally Faster than FP32
§ Easy to “Linearly Quantize” (Re-Encode) FP32 -> INT8
Easy Breezy!
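A tiny numpy sketch of what "linearly quantize" means here (array sizes are illustrative):
import numpy as np

def linear_quantize_int8(w_fp32):
    scale = np.abs(w_fp32).max() / 127.0                        # map the largest |weight| to 127
    w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)
    return w_int8, scale                                        # dequantize as w_int8 * scale

w = np.random.randn(1024, 1024).astype(np.float32)
w_q, scale = linear_quantize_int8(w)
print(np.abs(w - w_q.astype(np.float32) * scale).max())         # worst-case quantization error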
BENEFITS OF 32-BIT TO 8-BIT QUANTIZE
§ First Class Hardware and CUDA Support
§ One 32-Bit GPU Core: 4-Way Dot Product of 8-Bit Ints
§ GPU Compute Capability (CC) >= 6.1 Only
ACTIVATION QUANTIZATION
§ Activations Not Known Ahead of Time
§ Depends on input, not easy to quantize
§ Requires Additional Calibration Step
§ Use representative, diverse validation dataset
§ ~1000 samples, ~10 minutes, cheap hardware
§ Run 32-Bit Inference with Calibration Data
§ Collect histogram of activation values at each layer
§ Generate many quantized distributions at diff saturation thresholds
§ Choose Saturation Threshold That Minimizes Accuracy Loss
CHOOSING SATURATION THRESHOLD
§ Trade-off Between Range & Precision
§ INT8 Should Encode Same Information As Original FP32
§ Minimize Loss of Information Across Encoding/Distributions
§ Use KL_Divergence(32bit_dist, 8bit_dist)
§ Compares 2 distributions
§ Similar to Cross-Entropy
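A hedged numpy sketch of the threshold search (activations and candidate_thresholds are assumed to come from the calibration run):
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))

ref_hist, edges = np.histogram(activations, bins=2048)          # FP32 activation distribution

def saturated_hist(threshold):
    return np.histogram(np.clip(activations, -threshold, threshold), bins=edges)[0]

best_T = min(candidate_thresholds,                              # e.g. percentiles of |activations|
             key=lambda T: kl_divergence(ref_hist.astype(float),
                                         saturated_hist(T).astype(float)))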
SATURATE TO MINIMIZE ACCURACY LOSS
§ Helps Preserve Accuracy After Activation Quantization
§ Goal: Find Threshold (T) That Minimizes Accuracy Loss
No Saturation Saturation
AUTO-CALIBRATE: PIPELINEAI + TENSOR-RT
Pre-Requisites
§ 32-Bit Trained Model (TensorFlow, Caffe)
§ Small Calibration Dataset (Validation)
PipelineAI + TensorRT Optimizations
§ Run 32-Bit Inference on Calibration Dataset
§ Collect Required Statistics
§ Use KL_Divergence to Determine Saturation Thresholds
§ Perform 32-Bit Float -> 8-Bit Int Quantization
§ Generate Calibration Table and INT8 Execution Engine
32-BIT TO 8-BIT QUANTIZATION RESULTS
Accuracy of INT8 Models Comparable to FP32
AFTER ACTIVATION QUANTIZATION
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ quantize_nodes (activations)
§ Results
§ Larger graph, needs calibration!
Requires Additional
freeze_requantization_ranges
TF.CONTRIB.QUANTIZE()
§ “Fake Quantization Ops”
FREEZING MODEL FOR DEPLOYMENT
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ quantize_nodes
§ freeze_graph
§ Results
§ Variables -> Constants
Finally!
We’re Ready to Deploy!!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
MODEL SERVING TERMINOLOGY
§ Inference
§ Only Forward Propagation through Network
§ Predict, Classify, Regress, …
§ Bundle
§ GraphDef, Variables, Metadata, …
§ Assets
§ ie. Map of ClassificationID -> String
§ {9283: “penguin”, 9284: “bridge”}
§ Version
§ Every Model Has a Version Number (Integer)
§ Version Policy
§ ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
TENSORFLOW SERVING FEATURES
§ Supports Auto-Scaling
§ Custom Loaders beyond File-based
§ Tune for Low-latency or High-throughput
§ Serve Diff Models/Versions in Same Process
§ Customize Models Types beyond HashMap and TensorFlow
§ Customize Version Policies for A/B and Bandit Tests
§ Support Request Draining for Graceful Model Updates
§ Enable Request Batching for Diff Use Cases and HW
§ Supports Optimized Transport with GRPC and Protocol Buffers
GRPC :: PROTOBUFFERS AS HTTP :: JSON
PREDICTION SERVICE
§ Predict (Original, Generic)
§ Input: List of Tensor
§ Output: List of Tensor
§ Classify
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (class_label: String, score: float)
§ Regress
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (label: String, score: float)
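A hedged client-side sketch of calling Predict over gRPC (module names come from the tensorflow-serving-api package and may differ slightly by version; the host, model name, and image_batch are assumptions):
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")                        # TF Serving gRPC port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "mnist"
request.model_spec.signature_name = "serving_default"
request.inputs["inputs"].CopyFrom(
    tf.contrib.util.make_tensor_proto(image_batch, dtype=tf.float32))    # image_batch: np.array

response = stub.Predict(request, timeout=10.0)
print(response.outputs["outputs"])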
PREDICTION INPUTS + OUTPUTS
§ SignatureDef
§ Defines inputs and outputs
§ Maps external (logical) to internal (physical) tensor names
§ Allows internal (physical) tensor names to change
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
graph = tf.get_default_graph()
x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')
inputs_map = {'inputs': x_observed}
outputs_map = {'outputs': y_pred}
predict_signature =
signature_def_utils.predict_signature_def(inputs=inputs_map,
outputs=outputs_map)
MULTI-HEADED INFERENCE
§ Inputs Pass Through Model One Time
§ Model Returns Multiple Predictions:
1. Human-readable prediction (ie. “penguin”, “church”,…)
2. Final layer of scores (float vector)
§ Final Layer of floats Pass to the Next Model in Ensemble
§ Optimizes Bandwidth, CPU/GPU, Latency, Memory
§ Enables Complex Model Composing and Ensembling
BUILD YOUR OWN MODEL SERVER
§ Adapt GRPC(Google) <-> HTTP (REST of the World)
§ Perform Batch Inference vs. Request/Response
§ Handle Requests Asynchronously
§ Support Mobile, Embedded Inference
§ Customize Request Batching
§ Add Circuit Breakers, Fallbacks
§ Control Latency Requirements
§ Reduce Number of Moving Parts
#include "tensorflow_serving/model_servers/server_core.h"

class MyTensorFlowModelServer {
  ServerCore::Options options;
  // set options (model name, path, etc)
  std::unique_ptr<ServerCore> core;
  TF_CHECK_OK(
    ServerCore::Create(std::move(options), &core)
  );
}
Compile and Link with
libtensorflow.so
RUNTIME OPTION: NVIDIA TENSOR-RT
§ Post-Training Model Optimizations
§ Specific to Nvidia GPU
§ Similar to TF Graph Transform Tool
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
REQUEST BATCH TUNING
§ max_batch_size
§ Enables throughput/latency tradeoff
§ Bounded by RAM
§ batch_timeout_micros
§ Defines batch time window, latency upper-bound
§ Bounded by RAM
§ num_batch_threads
§ Defines parallelism
§ Bounded by CPU cores
§ max_enqueued_batches
§ Defines queue upper bound, throttling
§ Bounded by RAM
Reaching either threshold
will trigger a batch
Separate, Non-Batched Requests
Combined, Batched Requests
ADVANCED BATCHING & SERVING TIPS
§ Batch Just the GPU/TPU Portions of the Computation Graph
§ Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops
§ Distribute Large Models Into Shards Across TensorFlow Model Servers
§ Batch RNNs Used for Sequential and Time-Series Data
§ Find Best Batching Strategy For Your Data Through Experimentation
§ BasicBatchScheduler: Homogeneous requests (ie Regress or Classify)
§ SharedBatchScheduler: Mixed requests, multi-step, ensemble predict
§ StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads
§ Serve Only One (1) Model Inside One (1) TensorFlow Serving Process
§ Much Easier to Debug, Tune, Scale, and Manage Models in Production.
PIPELINE.AI FUNCTIONS (SERVERLESS)
§ Supports Kubernetes
§ Supports Docker Swarm
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Traffic Routing
AGENDA
Part 3: Advanced Model Serving + Traffic
Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
KUBERNETES PRIORITY SCHEDULING
Workloads can …
§ access the entire cluster up
to the autoscaler max size
§ trigger autoscaling until
higher-priority workload
§ “fill the cracks” of resource usage of higher-priority work
(i.e., wait to run until resources are freed)
KUBERNETES INGRESS
§ Single Service
§ Can also use Service (LoadBalancer or NodePort)
§ Fan Out & Name-Based Virtual Hosting
§ Route Traffic Using Path or Host Header
§ Reduces # of load balancers needed
§ 404 Implemented as default backend
§ Federation / Hybrid-Cloud
§ Creates Ingress objects in every cluster
§ Monitors health and capacity of pods within each cluster
§ Routes clients to appropriate backend anywhere in federation
Fan Out (Path)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-fanout
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Virtual Hosting
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-virtualhost
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
KUBERNETES INGRESS CONTROLLER
§ Ingress Controller Types
§ Google Cloud: kubernetes.io/ingress.class: gce
§ Nginx: kubernetes.io/ingress.class: nginx
§ Istio: kubernetes.io/ingress.class: istio
§ Must Start Ingress Controller Manually
§ Just deploying Ingress is not enough
§ Not started by kube-controller-manager
§ Start Istio Ingress Controller
kubectl apply -f \
  $ISTIO_INSTALL_PATH/install/kubernetes/istio.yaml
ISTIO EGRESS
§ Whitelist Domains To Access From Within the Service Mesh
§ Apply RouteRules
§ Apply DestinationPolicies
§ Supports TLS, HTTP, gRPC
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: pipeline-api-egress
spec:
  destination:
    service: api.pipeline.ai
  ports:
  - port: 80
    protocol: http
  - port: 443
    protocol: https
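Assuming the EgressRule above has been applied, a quick check from any pod inside the mesh (the file name and pod/container placeholders are illustrative):
kubectl apply -f pipeline-api-egress.yaml
kubectl exec -it <any-mesh-pod> -c <app-container> -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://api.pipeline.ai/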
AGENDA
Part 3: Advanced Model Serving + Traffic
Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
ISTIO ARCHITECTURE: INGRESS
ISTIO ARCHITECTURE: ENVOY
§ Lyft Project
§ High-perf Proxy (C++)
§ Lots of Metrics
§ Zone-Aware
§ Service Discovery
§ Load Balancing
§ Fault Injection, Circuits
§ %-based Traffic Split, Shadow
§ Sidecar Pattern
§ Rate Limiting, Retries, Outlier Detection, Timeout with Budget, …
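A sketch of the sidecar pattern in practice, assuming istioctl is installed and using an illustrative Deployment manifest name for the model server:
kubectl apply -f <(istioctl kube-inject -f predict-mnist-deployment.yaml)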
ISTIO ARCHITECTURE: MIXER
§ Enforce Access Control
§ Evaluate Request-Attrs
§ Collect Metrics
§ Platform-Independent
§ Extensible Plugin Model
ISTIO ARCHITECTURE: PILOT
§ Envoy service discovery
§ Intelligent routing
§ A/B Tests
§ Canary deployments
§ RouteRule->Envoy conf
§ Propagates to sidecars
§ Supports Kube, Consul, ...
ISTIO ARCHITECTURE: SECURITY
§ Mutual TLS Auth
§ Credential Management
§ Uses Service-Identity
§ Canary Deployments
§ Fine-grained ACLs
§ Attribute & Role-based
§ Auditing & Monitoring
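As a sketch, mutual TLS can be turned on at install time by applying the auth variant of the manifest (this path assumes the same $ISTIO_INSTALL_PATH layout used earlier):
kubectl apply -f \
  $ISTIO_INSTALL_PATH/install/kubernetes/istio-auth.yaml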
AGENDA
Part 3: Advanced Model Serving + Traffic
Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
ISTIO ROUTE RULES
§ Kubernetes Custom Resource Definition (CRD)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: routerules.config.istio.io
spec:
  group: config.istio.io
  names:
    kind: RouteRule
    listKind: RouteRuleList
    plural: routerules
    singular: routerule
  scope: Namespaced
  version: v1alpha2
ADVANCED TRAFFIC ROUTING RULES
§ Content-based Routing
§ Uses headers, username, payload, …
§ Cross-Environment Routing
§ Shadow traffic prod=>staging
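A hedged sketch of content-based routing with the same v1alpha2 RouteRule API, matching on a request header (the rule name, header name, and value are illustrative):
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-beta-users
spec:
  destination:
    name: predict-mnist
  precedence: 3                # higher precedence is evaluated first
  match:
    request:
      headers:
        x-beta-user:           # illustrative header
          exact: "true"
  route:
  - labels:
      version: B               # matching requests always hit model B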
ISTIO DESTINATION POLICIES
§ Load Balancing
§ ROUND_ROBIN (default)
§ LEAST_CONN (between 2 randomly-selected hosts)
§ RANDOM
§ Circuit Breaker
§ Max connections
§ Max requests per conn
§ Consecutive errors
§ Penalty timer (15 mins)
§ Scan windows (5 mins)
circuitBreaker:
  simpleCb:
    maxConnections: 100
    httpMaxRequests: 1000
    httpMaxRequestsPerConnection: 10
    httpConsecutiveErrors: 7
    sleepWindow: 15m
    httpDetectionInterval: 5m
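For context, a minimal sketch of the full v1alpha2 DestinationPolicy that would wrap a circuit-breaker block like the one above and select a non-default load balancer (the policy name is illustrative):
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
  name: predict-mnist-policy
spec:
  destination:
    name: predict-mnist
  loadBalancing:
    name: LEAST_CONN
  circuitBreaker:
    simpleCb:
      maxConnections: 100
      httpConsecutiveErrors: 7
      sleepWindow: 15m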
ISTIO AUTO-SCALING
§ Traffic Routing and Auto-Scaling Occur Independently
§ Istio Continues to Obey Traffic Splits After Auto-Scaling
§ Auto-Scaling May Occur In Response to New Traffic Route
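A minimal sketch of the auto-scaling half, assuming a standard Kubernetes HorizontalPodAutoscaler per model Deployment (names and thresholds are illustrative); the Istio traffic split keeps applying as replicas change:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: predict-mnist-b
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: predict-mnist-b      # illustrative Deployment name for model B
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70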
A/B & BANDIT MODEL TESTING
§ Perform Live Experiments in Production
§ Compare Existing Model A with Model B, Model C
§ Safe Split-Canary Deployment
§ Pro Tip: Keep Ingress Simple – Use Route Rules Instead!
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-20-5-75
spec:
  destination:
    name: predict-mnist
  precedence: 2              # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 20               # 20% still routes to model A
  - labels:
      version: B             # 5% routes to new model B
    weight: 5
  - labels:
      version: C             # 75% routes to new model C
    weight: 75

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-1-2-97
spec:
  destination:
    name: predict-mnist
  precedence: 2              # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 1                # 1% routes to model A
  - labels:
      version: B             # 2% routes to new model B
    weight: 2
  - labels:
      version: C             # 97% routes to new model C
    weight: 97

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-97-2-1
spec:
  destination:
    name: predict-mnist
  precedence: 2              # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 97               # 97% still routes to model A
  - labels:
      version: B             # 2% routes to new model B
    weight: 2
  - labels:
      version: C             # 1% routes to new model C
    weight: 1
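The three rules above are snapshots of a progressive rollout (97-2-1 → 20-5-75 → 1-2-97). In practice you would typically keep a single RouteRule and only update its weights; a sketch, assuming the rule lives in an illustrative predict-mnist-routerule.yaml:
# edit the weights in predict-mnist-routerule.yaml, then:
kubectl replace -f predict-mnist-routerule.yaml
# confirm the active split
kubectl get routerule predict-mnist-routerule -o yaml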
AGENDA
Part 3: Advanced Model Serving + Traffic
Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
ISTIO METRICS AND MONITORING
§ Verify Traffic Splits
§ Fine-Grained Request Tracing
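One way to verify a split with the stock Istio add-ons is to port-forward the bundled Grafana and compare per-version request rates (this assumes the default istio-system install):
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=grafana \
      -o jsonpath='{.items[0].metadata.name}') 3000:3000
# then open http://localhost:3000 and compare request rates by "version" label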
ISTIO & CHAOS + LATENCY MONKEY
§ Fault Injection
§ Delay
§ Abort
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    abort:
      httpStatus: 420
      percent: 100

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    delay:
      fixedDelay: 7.000s
      percent: 100
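To exercise either fault rule, apply it and hit the service (the file name and endpoint host are illustrative):
kubectl apply -f predict-mnist-abort.yaml
curl -s -o /dev/null -w "%{http_code}\n" http://<ingress-host>/predict-mnist
# expect HTTP 420 with the abort rule, or ~7s extra latency with the delay rule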
SPECIAL THANKS TO CHRISTIAN POSTA
§ http://blog.christianposta.com/istio-workshop
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Traffic Routing
PIPELINE.AI SUPPORTS ALL MAJOR MODELS
PIPELINE.AI ANNOUNCEMENTS
http://pipeline.ai http://community.pipeline.ai
THANK YOU!!
§ Please Star this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline
Contact Me
chris@pipeline.ai
@cfregly

  • 1. HYPER-PARAMETER TUNING ACROSS THE ENTIRE AI PIPELINE: MODEL TRAINING TO PREDICTING GPU TECH CONFERENCE -- SAN JOSE, MARCH 2018 CHRIS FREGLY FOUNDER @ PIPELINEAI
  • 2. KEY TAKE-AWAYS With PipelineAI, You Can… § Hyper-Parameter Tuning From Training to Inference § Generate Hardware-Specific Pipeline Optimizations § Deploy & Compare Optimizations in Live Production § Perform Continuous Model Training & Data Labeling
  • 3. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Routing
  • 4. INTRODUCTIONS: ME § Chris Fregly, Founder & Engineer @ PipelineAI § Formerly Netflix, Databricks, IBM Spark Tech § Founder @ Advanced Spark TensorFlow Meetup § Please Join Our 75,000+ Global Members!! Contact Me chris@pipeline.ai @cfregly Global Locations * San Francisco * Chicago * Austin * Washington DC * Dusseldorf * London
  • 5. INTRODUCTIONS: YOU You Want To … § Perform Hyper-Parameter Tuning Across *Entire* Pipeline § Measure Results of Tuning Both Offline *and* Online § Deploy Models Rapidly, Safely, *Directly* in Production § Trace and Explain *Live* Model Predictions
  • 6. PIPELINEAI IS OPEN SOURCE § https://github.com/PipelineAI/pipeline/ § Please Star this GitHub Repo! § “Each Star is Worth $1,500 in Seed Money” - A Prominent Venture Capitalist in Silicon Valley http://jrvis.com/red-dwarf/
  • 8. PIPELINEAI SUPPORTS ALL MAJOR MODELS
  • 9. PIPELINEAI TERMINOLOGY § “Flask-App Falacy”: Flask is Not Enough for Production-izing ML/AI Models § “Pipeline”: All Phases Including Train, Validate, Optimize, Deploy, and Predict § “Experiment”: Across All Environments from Research Lab to Live Production § “Turning Knobs”: Hyper-Parameter Tuning Across All Phases of the Pipeline § “Model Serving”: Models Serving Predictions in Live Production § “Runtime”: Execution Environment for Any Phase of Pipeline (TensorRT, Caffe) § “Train-to-Serve”: Training with Intent to Serve Predictions § “Train-Serving Skew”: Model Performs Poorly on Live Data § “Post-Training Optimization”: Prepare Model and Runtime for Fast Inference http://NoFlaskApp.com
  • 10. Any Runtime Any Device CPU, GPU, TPU, IoT Any Network and System Configuration Any Clouud and On-Premise Environment AnyModel AnyLanguage AnyFramework AnyHyper- Parameter 1,000,000’s of Model + Runtime Pipeline Combinations We Find the Best Combinations For Your Model and Workload! WHOLE-PIPELINE HYPER-PARAMETER TUNING
  • 12. WHOLE-PIPELINE HYPER-PARAMETERS Training: Hyperparameters pipelinedb.add("learning_rate", 0.025) pipelinedb.add(”batch_size", 8192) pipelinedb.add(”num_epochs", 100) ^^ THIS IS WHERE MOST DATA SCIENTISTS END BECAUSE ^^ ^^ THEY HAVE NO WAY OF COLLECTING ANYTHING MORE ^^ ^^ UNTIL NOW! ^^ pipelinedb.add("ec2_instance_type", "g3.4xlarge”) pipelinedb.add("utilized_memory_gigabyte", 20) pipelinedb.add(“network_speed_gigabit”, 10) pipelinedb.add("training_precision_bits", 16) pipelinedb.add("accelerator_type", "nvidia_gpu_v100") # google_tpu pipelinedb.add(“cpu_to_accelerator_network_type", “pcie”) # nvlink pipelinedb.add(“cpu_to_accelerator_network_bandwidth_gigabit”, 100) Training: Results pipelinedb.add("training_accuracy_percent", 95) pipelinedb.add(“validation_accuracy_percent", 94) pipelinedb.add("training_auc", 0.70) pipelinedb.add(“validation_auc", 0.69) pipelinedb.add(”time_to_train_seconds", 0.69) Optimization: Hyperparameters pipelinedb.add(”batch_norm_fusing", True) pipelinedb.add("weight_quantization_bits", 8) # 2-bit, 7-bit Optimization: Results (Collected At End of Optimization) pipelinedb.add("weight_quantization_reduction_percent", 50) Inference: Hyperparameters pipelinedb.add("runtime_type", ”tfserving") # python,tensorrt Pipelinedb.add(“runtime_chip”, “gpu”) pipelinedb.add("model_type", "tensorflow") # caffe, scikit pipelinedb.add("request_batch_window_ms", 10) pipelinedb.add("request_batch_size", 1000) Inference: Results (Every ~15 Mins Inside PipelineAI Runtime) pipelinedb.add("latency_99_percentile_ms", 5) pipelinedb.add("cost_per_prediction_usd", 0.000001) pipelinedb.add("24_hr_auc", 0.70) pipelinedb.add("48_hr_auc", 0.30) Training Optimizing Inferencing
  • 13. WHY EMPHASIS ON MODEL INFERENCE? Model Training Batch & Boring Offline in Research Lab Pipeline Ends at Training No Insight into Live Production Small Number of Data Scientists Optimizations Are Very Well-Known Real-Time & Exciting!! Online in Live Production No Ability To Turn Inference Knobs (Yet) Extend Model Validation Into Production Huuuuuuge Number of Application Users Inference Optimizations Not Yet Explored <<< Model Inference 100’s Training Jobs per Day 1,000,000’s Predictions per Sec
  • 14. GROWTH IN ML/AI MODELS 2017 2026 Data Scientists 44,000 11,500,000 $39 Billion in 2017 $2 Trillion by 2026 2017 2026 Models Trained 50,000,000 200,000 2017 2026 Model Predictions 250,000,000,000 4,000,000 2016 2026 2016 2026 2016 2026
  • 15. MODEL DEPLOYMENT OPTIONS § AWS SageMaker § Released Nov 2017 @ Re-invent § Custom Docker Images for Training/Serving (ie. PipelineAI Images) § Distributed TensorFlow Training through Estimator API § Traffic Splitting for A/B Model Testing § Google Cloud ML Engine § Mostly Command-Line Based § Driving TensorFlow Open Source API (ie. Estimator API) § Azure ML § On-Premise Docker, Docker Swarm, Kubernetes, Mesos PipelineAI Supports All Hybrid-Cloud, On-Prem, and Air-Gap Deployments!
  • 16. WHOLE-PIPELINE OPTIMIZATION OPTIONS § Model Training Optimizations § Model Hyper-Parameters (ie. Learning Rate) § Reduced Precision (ie. FP16 Half Precision) § Model Optimizations to Prepare for Inference § Quantize Model Weights + Activations From 32-bit to 8-bit § Fuse Neural Network Layers Together § Model Inference Runtime Optimizations § Runtime Config: Request Batch Size, etc § Different Runtime: TensorFlow Serving CPU/GPU, Nvidia TensorRT
  • 17. NVIDIA TENSOR-RT RUNTIME § Post-Training Model Optimizations § Specific to Nvidia GPUs § GPU-Optimized Prediction Runtime § Alternative to TensorFlow Serving § PipelineAI Supports TensorRT!
  • 18. TENSORFLOW LITE OPTIMIZING CONVERTER § Post-Training Model Optimizations § Currently Supports iOS and Android § On-Device Prediction Runtime § Low-Latency, Fast Startup § Selective Operator Loading § 70KB Min - 300KB Max Runtime Footprint § Supports Accelerators (GPU, TPU) § Falls Back to CPU without Accelerator § Java and C++ APIs bazel build tensorflow/contrib/lite/toco:toco && ./bazel-bin/third_party/tensorflow/contrib/lite/toco/toco --input_file=frozen_eval_graph.pb --output_file=tflite_model.tflite --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --inference_type=QUANTIZED_UINT8 --input_shape="1,224, 224,3" --input_array=input --output_array=outputs --std_value=127.5 --mean_value=127.5
  • 19. PIPELINEAI QUICK START § http://quickstart.pipeline.ai § Any Model, Any Training Runtime, Any Prediction Runtime § Support for Docker, Docker Swarm, Kubernetes, Mesos § Package Model+Runtime into a Docker Image § Emphasizes Immutable Deployment and Infrastructure § Same Image Across All Environments § No Library or Dependency Surprises from Laptop to Production § Allows Tuning Offline and Online Model+Runtime Together
  • 20. STEP 1: BUILD MODEL+TRAINING SERVER § Train Model with Specific Hyper-Parameters § Monitor and Compare Validation Accuracy § Tune Hyper-Parameters to Improve Accuracy pipeline train-server-build --model-name=mnist --model-tag=A --model-type=tensorflow --model-path=./tensorflow/mnist/0.025/model Build Model Training Server A (Learning Rate 0.025) pipeline train-server-build --model-name=mnist --model-tag=B --model-type=tensorflow --model-path=./tensorflow/mnist/0.050/model Build Model Training Server B (Learning Rate 0.050)
  • 21. STEP 2: TRAIN, MEASURE, TUNE § Train Model with Specific Hyper-Parameters § Monitor abnd Compare Validation Accuracy § Tune Hyper-Parameters to Improve Accuracy pipeline train-server-start --model-name=mnist --model-tag=A --input-host-path=./tensorflow/mnist/input --output-host-path=./tensorflow/mnist/output --train-args= "--learning-rate=0.025 --batch-size=128" Train Model A (Learning Rate 0.025) pipeline train-server-start --model-name=mnist --model-tag=B --input-host-path=./tensorflow/mnist/input --output-host-path=./tensorflow/mnist/output --train-args= "--learning-rate=0.025 --batch-size=128" Train Model B (Learning Rate 0.050)
  • 22. STEP 3: CREATE PREDICT() METHOD def predict(request: bytes) -> bytes: return _model.predict(request)Basic Insight: def predict(request: bytes) -> bytes: # Step 1: Transform Request (JSON => np.array) transformed_request = _transform_request(request) # Step 2: Model Predict predictions = _model.predict(transformed_request) # Step 3: Transform Response (np.array => JSON) transformed_response = _transform_response(predictions) return transformed_response Detailed Insight: § Multiple Levels of Performance Metrics and Logging § Enterprise Adapters for All Metrics and Logging Systems pipeline predict-server-logs --model-name=mnist --model-tag=cpu View Logs
  • 23. STEP 4: BUILD MODEL+PREDICTION SERVER pipeline predict-server-build --model-name=mnist --model-tag=C --model-type=tensorflow --model-runtime=tensorrt --model-chip=gpu --model-path=./tensorflow/mnist/ Build Local Model Server C TensorRT GPU pipeline predict-server-build --model-name=mnist --model-tag=A --model-type=tensorflow --model-runtime=tfserving --model-chip=cpu --model-path=./tensorflow/mnist/ Build Local Model Server A TF Serving CPU pipeline predict-server-build --model-name=mnist --model-tag=B --model-type=tensorflow --model-runtime=tfserving --model-chip=gpu --model-path=./tensorflow/mnist/ Build Local Model Server B TF Serving GPU Same Model, 3 Different Prediction Runtimes
  • 24. STEP 5: PREDICT, MEASURE, TUNE (LOCAL) § Perform Mini-Load Test on Local Model Server § Immediate Feedback on Prediction Performance § Compare to Previous Model+Runtime Variations § Gain Intuition Before Pushing to Prod pipeline predict-server-start --model-name=mnist --model-tag=A --memory-limit=2G pipeline predict-http-test --model-endpoint-url=http://localhost:8080 --test-request-path=test_request.json --test-request-concurrency=1000 Start Local Predict Load Test Start Local Model Server
  • 25. STEP 6: DEPLOY, MEASURE, TUNE (IN PROD) § Deploy from CLI or Jupyter Notebook § Tear-Down and Rollback Models Quickly § Shadow Canary: Deploy to 20% Live Traffic § Split Canary: Deploy to 97-2-1% Live Traffic pipeline predict-kube-start --model-name=mnist --model-tag=BStart Cluster B pipeline predict-kube-start --model-name=mnist --model-tag=CStart Cluster C pipeline predict-kube-start --model-name=mnist --model-tag=AStart Cluster A pipeline predict-kube-route --model-name=mnist --model-split-tag-and-weight-dict='{"A":97, "B":2, "C”:1}' --model-shadow-tag-list='[]' Route Live Traffic
  • 26. STEP 7: OPTIMIZE, MEASURE, RE-DEPLOY § Prepare Model for Predicting § Simplify Network, Reduce Size § Reduce Precision -> Fast Math § Some Tools § Graph Transform Tool (GTT) § tfcompile After Training After Optimizing! pipeline optimize --optimization-list=[‘quantize_weights’,‘tfcompile’] --model-name=mnist --model-tag=A --model-path=./tensorflow/mnist/model --model-inputs=[‘x’] --model-outputs=[‘add’] --output-path=./tensorflow/mnist/optimized_model Linear Regression Model Size: 70MB –> 70K (!)
  • 27. STEP 8: EVALUATE MODEL+RUNTIME VARIANT § Offline, Batch Metrics § Validation + Training Accuracy § CPU + GPU Utilization § Online, Live Prediction Values § Compare Relative Precision § Newly-Seen, Streaming Data § Online, Real-Time Metrics § Response Time, Throughput § Cost ($) Per Prediction
  • 28. STEP 9: DETERMINE PIPELINEAI EFFICIENCY
  • 29. STEP 10: SHIFT TRAFFIC TO BEST VARIANT § A/B Tests § Inflexible and Boring § Multi-Armed Bandits § Adaptive and Exciting! pipeline predict-kube-route --model-name=mnist --model-split-tag-and-weight-dict='{"A":1, "B":2, "C”:97}’ --model-shadow-tag-list='[]' Dynamically Route Traffic to Winning Model+Runtime
  • 30. PIPELINE PROFILING AND TUNING § Instrument Code to Generate “Timelines” for Any Metric § Analyze with Google Web Tracing Framework (WTF) § Can Also Monitor CPU with top, GPU with nvidia-smi http://google.github.io/tracing-framework/ from tensorflow.python.client import timeline trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('timeline.json', 'w') as trace_file: trace_file.write( trace.generate_chrome_trace_format(show_memory=True))
  • 31. MODEL AND ENSEMBLE TRACING/AUDITING § Necessary for Model Explain-ability § Fine-Grained Request Tracing § Used for Model Ensembles
  • 32. VIEW REAL-TIME PREDICTION STREAMS § Visually Compare Real-time Predictions Features and Inputs Predictions and Confidences Model B Model CModel A
  • 33. CONTINUOUS DATA LABELING AND FIXING § Identify and Fix Borderline (Unconfident) Predictions § Fix Predictions Along Class Boundaries § Facilitate "Human in the Loop" § Path to Crowd-Sourced Labeling § Retrain with Newly-Labeled Data § Game-ify the Labeling Process
  • 34. CONTINUOUS MODEL TRAINING § The Holy Grail of Machine Learning § Kafka, Kinesis, Spark Streaming, Flink, Storm, Heron PipelineAI Supports Continuous Model Training
  • 35. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  • 36. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Feed, Train, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 37. SETTING UP TENSORFLOW WITH GPUS § Very Painful! § Especially inside Docker § Use nvidia-docker § Especially on Kubernetes! § Use the Latest Kubernetes (with Init Script Support) § http://pipeline.ai for GitHub + DockerHub Links
  • 38. TENSORFLOW + CUDA + NVIDIA GPU
  • 39. VOLTA V100 AND TENSOR CORES § 84 Streaming Multiprocessors (SM’s) § 5,376 GPU Cores § 640 Tensor Cores (ie. Google TPU) § Can Perform 640 FP16 4x4 Matrix Multiplies § 120 TFLOPS = 4x FP32 and 10x FP64 § Allows Mixed FP16/FP32 Precision Operations § Matrix Dims Should be Multiples of 8 § More Shared Memory § New L0 Instruction Cache § Faster L1 Data Cache
  • 40. GPU HALF-PRECISION SUPPORT § FP32: “Full Precision”, FP16: “Half Precision” § Two(2) FP16’s in 1 FP32 GPU Core § 2x Throughput! § Lower Precision is OK § Deep learning is approximate § The Network Matters Most § Not individual neuron accuracy
  • 41. MORE ON HALF-PRECISION § 1997: Related Work by SGI § Commercial Request from ILM in 2002 § Implemented in Silicon by Nvidia in 2002 § Supported by Pascal P100 and Volta V100
  • 42. MORE ON REDUCED-PRECISION § Less Precision => Less Memory & Bandwidth => Faster Math & Less Energy § Fits into Smaller Places Close to ALU’s § 4-bit, 2-bit, 1-bit (?!) Quantization § More Layers Help Maintain Accuracy at Reduced Precision § Tip: Scale and Center Dynamic Range at Each Layer § Otherwise, FP16’s become 0 - model may not converge!
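One common mitigation in practice is loss scaling; below is a hedged TF 1.x-style sketch that assumes `loss` (scalar tensor) and `optimizer` (tf.train.Optimizer) already exist. It is illustrative, not the per-layer scale-and-center scheme described above:

loss_scale = 128.0   # static scale, large enough to lift tiny gradients into FP16 range
scaled_grads_and_vars = optimizer.compute_gradients(loss * loss_scale)
unscaled_grads_and_vars = [(g / loss_scale, v)
                           for g, v in scaled_grads_and_vars if g is not None]  # undo the scale before applying
train_op = optimizer.apply_gradients(unscaled_grads_and_vars)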
  • 43. GPU: 4-WAY DOT PRODUCT OF 8-BIT INTS § GPU Hardware and CUDA Support § Compute Capability (CC) >= 6.1
  • 44. FP16 VS. INT8 § FP16 Has Larger Dynamic Range Than INT8 § Larger Dynamic Range Allows Higher Precision § Truncated FP32 Dynamic Range Higher Than FP16 § Not IEEE 754 Standard, But Worth Exploring
  • 45. ENABLING FP16 IN TENSORFLOW § Harder Than You Think! § TPUs are 16-bit Native § For GPUs With CC 5.3+ (Only), Set the Following: TF_FP16_MATMUL_USE_FP32_COMPUTE=0 TF_FP16_CONV_USE_FP32_COMPUTE=0 TF_XLA_FLAGS=--xla_enable_fast_math=1 Pascal P100 Volta V100
  • 46. FP32 VS. FP16 ON AWS GPU INSTANCES
FP16 Half Precision: 87.2 T ops/second (p3, Volta V100) | 4.1 T ops/second (g3, Tesla M60) | 1.6 T ops/second (p2, Tesla K80)
FP32 Full Precision: 15.4 T ops/second (p3, Volta V100) | 4.0 T ops/second (g3, Tesla M60) | 3.3 T ops/second (p2, Tesla K80)
  • 47. § Tesla K80 § Pascal P100 § Volta V100 (Beta) § TPU (Beta, Google Cloud Only) GOOGLE CLOUD GPU + TPU
  • 48. GOOGLE CLOUD TPUS § Attach/Detach As Needed § Scale In/Out As Needed § 180 TFlops per Device § TPU Pod = 64 TPUs = 11.5 PetaFlops § $6.50 per TPU Hour § Supports 16-bit TensorFlow
  • 49. V100 AND CUDA 9 § Independent Thread Scheduling - Finally!! § Similar to CPU fine-grained thread synchronization semantics § Allows GPU to yield execution of any thread § Still Optimized for SIMT (Same Instruction Multi-Thread) § SIMT units automatically scheduled together § Explicit Synchronization P100 V100 New CUDA Thread Cooperative Groups https://devblogs.nvidia.com/cooperative-groups/
  • 50. GPU CUDA PROGRAMMING § Barbaric, But Fun § Must Know Hardware Very Well § Hardware Changes are Painful § Use the Profilers & Debuggers
  • 51. CUDA STREAMS § Asynchronous I/O Transfer § Overlap Compute and I/O § Keep GPUs Saturated! § Used Heavily by TensorFlow (Figure: bad vs. good overlap of compute and I/O)
  • 52. CUDA SHARED AND UNIFIED MEMORY
  • 53. NUMBA AND PYCUDA § Numba JIT-Compiles NumPy-Style Python Code (Including Custom CUDA Kernels) § PyCUDA is a Python Binding for CUDA
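A minimal Numba CUDA sketch (assumes the numba package and a CUDA-capable GPU; the kernel and array sizes are illustrative):

import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)                  # global thread index
    if i < x.size:
        out[i] = x[i] + y[i]

x = np.arange(1024 * 1024, dtype=np.float32)
y = 2.0 * x
out = np.zeros_like(x)
threads_per_block = 256
blocks = (x.size + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)   # Numba copies the arrays to and from the GPU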
  • 54. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Feed, Train, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 55. TRAINING TERMINOLOGY § Tensors: N-Dimensional Arrays § ie. Scalar, Vector, Matrix § Operations: MatMul, Add, SummaryLog,… § Graph: Graph of Operations (DAG) § Session: Contains Graph(s) § Feeds: Feed Inputs into Placeholder § Fetches: Fetch Output from Operation § Variables: What We Learn Through Training § aka “Weights”, “Parameters” § Devices: Hardware Device (GPU, CPU, TPU, ...) -TensorFlow- Trains Variables -User- Fetches Outputs -User- Feeds Inputs -TensorFlow- Performs Operations -TensorFlow- Flows Tensors with tf.device(“/cpu:0,/gpu:15”):
  • 56. TENSORFLOW SESSION Session graph: GraphDef Variables: “W” : 0.328 “b” : -1.407 Variables are Randomly Initialized, then Periodically Checkpointed GraphDef is Created During Training, then Frozen for Inference
  • 57. TENSORFLOW GRAPH EXECUTION § Lazy Execution by Default § Similar to Spark § Eager Execution § Similar to PyTorch § "Linearize” Execution Minimizes RAM § Useful on Single GPU with Limited RAM § May Need to Re-Compute (CPU/GPU) vs Store (RAM)
  • 58. OPERATION PARALLELISM § Inter-Op (Between-Op) Parallelism § By default, TensorFlow runs multiple ops in parallel § Useful for low core and small memory/cache envs § Set to one (1) § Intra-Op (Within-Op) Parallelism § Different threads can use same set of data in RAM § Useful for compute-bound workloads (CNNs) § Set to # of cores (>=2)
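For example, these knobs are set on the Session config (TF 1.x); the thread counts below are illustrative:

import tensorflow as tf

config = tf.ConfigProto(
    inter_op_parallelism_threads=1,   # run independent ops one at a time
    intra_op_parallelism_threads=8)   # threads available inside a single op, e.g. # of physical cores
sess = tf.Session(config=config)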
  • 59. TENSORFLOW MODEL § MetaGraph § Combines GraphDef and Metadata § GraphDef § Architecture of your model (nodes, edges) § Metadata § Asset: Accompanying assets to your model § SignatureDef: Maps external to internal tensors § Variables § Stored separately during training (checkpoint) § Allows training to continue from any checkpoint § Variables are “frozen” into Constants when preparing for inference GraphDef x W mul add b MetaGraph Metadata Assets SignatureDef Tags Version Variables: “W” : 0.328 “b” : -1.407
  • 60. STOCHASTIC GRADIENT DESCENT (SGD) § Or "Simply Go Down" § Small Batch Sizes Are Ideal § But not too small! § Parallel, Distributed Training Across Devices § Each device calculates gradients on small batch § Gradients averaged across all devices § Training is Fast, Batches are Small
  • 61. EXTEND EXISTING DATA PIPELINES § Data Processing § HDFS/Hadoop § Spark § Containers § Docker § Schedulers § Kubernetes § Mesos <dependency> <groupId>org.tensorflow</groupId> <artifactId>tensorflow-hadoop</artifactId> </dependency> https://github.com/tensorflow/ecosystem
  • 62. KUBERNETES AND SPARK 2.3 § Kubernetes-Native § Schedule Spark Workers # Submit Spark Job to Kubernetes Cluster bin/spark-submit --master k8s://https://xx.yy.zz.ww --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=<spark-image> --conf spark.kubernetes.driver.pod.name=spark-pi-driver local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar # View Kubernetes Resources kubectl get pods -l 'spark-role in (driver, executor)' -w # View Driver Logs in Real-Time kubectl logs -f spark-pi-driver http://blog.kubernetes.io/2018/03/ apache-spark-23-with-native-kubernetes.html http://community.pipeline.ai
  • 63. TENSORFLOW + SPARK OPTIONS § TensorFlow on Spark (Yahoo!) § TensorFrames <-Dead Project-> § Separate Clusters for Spark and TensorFlow § Spark: Boring Batch ETL § TensorFlow: Exciting AI Model Training and Serving § Hand-Off Point is S3, HDFS, Google Cloud Storage
  • 64. TENSORFLOW + KAFKA § TensorFlow Dataset API Now Supports Kafka!! from tensorflow.contrib.kafka.python.ops import kafka_dataset_ops repeat_dataset = kafka_dataset_ops.KafkaDataset(topics, group="test", eof=True) .repeat(num_epochs) batch_dataset = repeat_dataset.batch(batch_size) …
  • 65. TENSORFLOW I/O § TFRecord File Format § TensorFlow Python and C++ Dataset API § Python Module and Packaging § Comfort with Python’s Lack of Strong Typing § C++ Concurrency Constructs § Protocol Buffers § Old Queue API § GPU/CUDA Memory Tricks And a Lot of Coffee!
  • 66. FEED TENSORFLOW TRAINING PIPELINE § Training is Limited by the Ingestion Pipeline § Number One Problem We See Today § Scaling GPUs Up / Out Doesn’t Help § GPUs are Heavily Under-Utilized § Use tf.dataset API for best perf § Efficient parallel async I/O (C++) Tesla K80 Volta V100
  • 67. DON'T USE FEED_DICT!! § feed_dict Requires Python <-> C++ Serialization § Not Optimized for Production Ingestion Pipelines § Retrieves Next Batch After Current Batch is Done § Single-Threaded, Synchronous § CPUs/GPUs Not Fully Utilized! § Use Queue or Dataset APIs § Queues are old & complex sess.run(train_step, feed_dict={…})
  • 68. DETECT UNDERUTILIZED CPUS, GPUS § Instrument Code to Generate “Timelines” § Analyze with Google Web Tracing Framework (WTF) § Monitor CPU with top, GPU with nvidia-smi http://google.github.io/tracing-framework/ from tensorflow.python.client import timeline trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('timeline.json', 'w') as trace_file: trace_file.write( trace.generate_chrome_trace_format(show_memory=True))
  • 69. QUEUES § More than Traditional Queue § Uses CUDA Streams § Perform I/O, Pre-processing, Cropping, Shuffling, … § Pull from HDFS, S3, Google Storage, Kafka, ... § Combine Many Small Files into Large TFRecord Files § Use CPUs to Free GPUs for Compute § Helps Saturate CPUs and GPUs
  • 70. QUEUE CAPACITY PLANNING § batch_size § # examples / batch (ie. 64 jpg) § Limited by GPU RAM § num_processing_threads § CPU threads pull and pre-process batches of data § Limited by CPU Cores § queue_capacity § Limited by CPU RAM (ie. 5 * batch_size)
  • 71. TF.DTYPE § tf.float32, tf.int32, tf.string, etc § Default is usually tf.float32 § Most TF operations support numpy natively # Tuple of (tf.float32 scalar, tf.int32 array of 100 elements) (tf.random_uniform([1]), tf.random_uniform([1, 100], maxval=100, dtype=tf.int32))
  • 72. TF.TRAIN.FEATURE § Three(3) Feature Types § Bytes § Float § Int64 § Actually, They Are Lists of 0..* Values of 3 Types Above § BytesList § FloatList § Int64List
  • 73. TF.TRAIN.FEATURES § Map of {String -> Feature} § Better Name is “FeatureMap” § Organize Feature into Categories § Access Feature Using Features[’feature_name’]
  • 74. TF.TRAIN.FEATURELIST § List of 0..* Feature § Access Feature Using FeatureList[0]
  • 75. TF.TRAIN.FEATURELISTS § Map of {String -> FeatureList} § Better Name is “FeatureListMap” § Organize FeatureList into Categories § Access FeatureList Using FeatureLists[’feature_list_name’]
  • 76. TF.TRAIN.EXAMPLE § Key-Value Dictionary § String -> tf.train.Feature § Not a Self-Describing Format (?!) § Must Establish Schema Upfront by Writers and Readers § Must Obey the Following Conventions § Feature K must be of Type T in all Examples § Feature K can be omitted, default can be configured § If Feature K exists as empty, no default is applied
  • 77. TF.TFRECORD § Contains many tf.train.Example’s => tf.train.Example contains many tf.train.Feature’s => tf.train.Feature contains BytesList, FloatList, Int64List § Record-Oriented Format of Binary Strings (ProtoBuffer) § Must Convert tf.train.Example to Serialized String § Use tf.train.Example.SerializeToString() § Used for Large Scale ML/AI Training § Not Meant for Random or Non-Sequential Access § Compression: GZIP, ZLIB uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data
  • 78. EMBRACE BINARY FORMATS! § Unreadable and Scary, But Much More Efficient § Better Use of Memory and Disk Cache § Faster Copying and Moving § Smaller on the Wire
  • 79. CONVERTING MNIST DATA TO TFRECORD def convert_to_tfrecord(data, name): images = data.images labels = data.labels num_examples = data.num_examples rows = images.shape[1] cols = images.shape[2] depth = images.shape[3] filename = os.path.join(FLAGS.directory, name + '.tfrecords') with tf.python_io.TFRecordWriter(filename) as writer: for index in range(num_examples): image_raw = images[index].tostring() example = tf.train.Example( features = tf.train.Features( feature = {'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[rows])), 'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[cols])), 'depth': tf.train.Feature(int64_list=tf.train.Int64List(value=[depth])), 'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[int(labels[index])])), 'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_raw])) })) writer.write(example.SerializeToString()) tf.python_io.TFRecordWriter
  • 80. READING TF.TFRECORD'S § tf.data.TFRecordDataset <-- Preferred (Dataset API) § tf.TFRecordReader() <-- Not Preferred (Queue API) § tf.python_io.tf_record_iterator <-- Preferred § Used as Python Generator for serialized_example in tf.python_io.tf_record_iterator(filename): example = tf.train.Example() example.ParseFromString(serialized_example) image_raw = example.features.feature['image_raw'].bytes_list.value[0] height = example.features.feature['height'].int64_list.value[0] …
  • 81. DE-SERIALIZING TF.TFRECORD'S feature_map = {'height': tf.FixedLenFeature([], tf.int64), 'width': tf.FixedLenFeature([], tf.int64), 'depth': tf.FixedLenFeature([], tf.int64), 'label': tf.FixedLenFeature([], tf.int64), 'image_raw': tf.FixedLenFeature([], tf.string)} deserialized_features = tf.parse_single_example(serialized_example, features=feature_map) # Cast height from int64 to int32 height = tf.cast(deserialized_features['height'], tf.int32) … # Convert raw image bytes to float32 image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
  • 82. MORE TF.TRAIN.FEATURE CONSTRUCTS § tf.VarLenFeature § tf.FixedLenFeature, tf.FixedLenSequenceFeature § tf.SparseFeature feature_map = {'height': tf.FixedLenFeature((), tf.int64, …), … 'image_raw': tf.VarLenFeature(tf.string)} deserialized_features = tf.parse_single_example(serialized_example, features=feature_map) # Cast height from int64 to int32 height = tf.cast(deserialized_features['height'], tf.int32) … # Convert raw image from string to float32 image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
  • 83. TF.DATA.DATASET tf.Tensor => tf.data.Dataset Functional Transformations Python Generator => tf.data.Dataset Dataset.from_tensors((features, labels)) Dataset.from_tensor_slices((features, labels)) TextLineDataset(filenames) dataset.map(lambda x: tf.decode_jpeg(x)) dataset.repeat(NUM_EPOCHS) dataset.batch(BATCH_SIZE) def generator(): while True: yield ... dataset.from_generator(generator, tf.int32) Dataset => One-Shot Iterator Dataset => Initializable Iter iter = dataset.make_one_shot_iterator() next_element = iter.get_next() while …: sess.run(next_element) iter = dataset.make_initializable_iterator() sess.run(iter.initializer, feed_dict=PARAMS) next_element = iter.get_next() while …: sess.run(next_element) TIP: Use Dataset.prefetch() and parallel version of Dataset.map()
  • 84. MORE TF.DATA.DATASET CONSTRUCTS § FixedLengthRecordDataset § Binary Files § TextLineDataset § CSV, JSON, XML, etc § TFRecordDataset § TFRecords § Iterator “The TF Dataset Dude” Tutorial: https://t.co/havjwJ46EY
  • 86. CUSTOM TF.PY_FUNC() TRANSFORMATION § Custom Python Function § Similar to Spark Python UDF (Eek!) § You Will Suffer a Big Performance Penalty § Try to Use TensorFlow-Native Operations § Remember, you can build your own in C++!
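A hedged sketch of wrapping a plain Python/NumPy function into a Dataset pipeline with tf.py_func (TF 1.x); the augmentation function and toy data are illustrative:

import numpy as np
import tensorflow as tf

def flip_horizontally(image):         # arbitrary NumPy code, runs in the Python interpreter (slow)
    return np.flip(image, axis=1)

images = np.random.rand(100, 28, 28).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices(images)
dataset = dataset.map(lambda img: tf.py_func(flip_horizontally, [img], tf.float32))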
  • 87. TF.DATA.ITERATOR TYPES § One Shot: Iterates Once Through the Dataset § Currently, best Iterator to use with Estimator API § Initializable: Runs iterator.initializer() Once § Re-Initializable: Runs iterator.initializer() Many § Ie. Random shuffling between iterations (epochs) of training § Feedable: Switch Between Different Dataset § Uses Feed and Placeholder to explicitly feed the iterator § Doesn’t require initialization when switching
  • 88. TF.DATA.ITERATOR SIMPLE EXAMPLE dataset = tf.data.Dataset.range(5) iterator = dataset.make_initializable_iterator() next_element = iterator.get_next() # Typically `result` will be the output of a model, or an optimizer's # training operation. result = tf.add(next_element, next_element) sess.run(iterator.initializer) while True: try: sess.run(result) # => 0, 2, 4, 6, 8 except tf.errors.OutOfRangeError: print('End of dataset…') break
  • 89. TF.DATA.ITERATOR TEXT EXAMPLE filenames = ["/var/data/file1.txt", "/var/data/file2.txt"] dataset = tf.data.TextLineDataset(filenames) filenames = ["/var/data/file1.txt", "/var/data/file2.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.flat_map( lambda filename: ( tf.data.TextLineDataset(filename) .skip(1) .filter(lambda line: tf.not_equal(tf.substr(line, 0, 1), "#")))) § Skip 1st Header Line and Comment Lines Starting with `#`
  • 90. TF.DATA.ITERATOR NUMPY EXAMPLE # Load the training data into two NumPy arrays, for example using `np.load()`. with np.load("/var/data/training_data.npy") as data: features = data["features"] labels = data["labels"] # Assume that each row of `features` corresponds to the same row as `labels`. assert features.shape[0] == labels.shape[0] features_placeholder = tf.placeholder(features.dtype, features.shape) labels_placeholder = tf.placeholder(labels.dtype, labels.shape) dataset = tf.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder)) # …Your Dataset Transformations… iterator = dataset.make_initializable_iterator() sess.run(iterator.initializer, feed_dict={features_placeholder: features, labels_placeholder: labels})
  • 91. TF.DATA.ITERATOR TFRECORD EXAMPLE filenames = tf.placeholder(tf.string, shape=[None]) dataset = tf.data.TFRecordDataset(filenames) dataset = dataset.map(...) # Parse the record into tensors. dataset = dataset.repeat() # Repeat the input indefinitely. dataset = dataset.batch(32) # Batches of size 32 iterator = dataset.make_initializable_iterator() # You can feed the initializer with the appropriate filenames for the current # phase of execution, e.g. training vs. validation. # Initialize `iterator` with training data. training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"] sess.run(iterator.initializer, feed_dict={filenames: training_filenames}) # Initialize `iterator` with validation data. validation_filenames = ["/var/data/validation1.tfrecord", ...] sess.run(iterator.initializer, feed_dict={filenames: validation_filenames})
  • 92. FUTURE OF DATASET API § Replaces Queue API § More Functional Operators § Automatic GPU Data Staging and Pre-Fetching § Under-utilized GPUs Assisting with Data Ingestion § More Profiling and Recommendations for Ingestion
  • 93. TF.ESTIMATOR.ESTIMATOR (1/2) § Supports Keras! § Unified API for Local + Distributed § Provide Clear Path to Production § Enable Rapid Model Experiments § Provide Flexible Parameter Tuning § Enable Downstream Optimizing & Serving Infra( ) § Nudge Users to Best Practices Through Opinions § Provide Hooks/Callbacks to Override Opinions
  • 94. TF.ESTIMATOR.ESTIMATOR (2/2) § “Train-to-Serve” Design § Create Custom Estimator or Re-Use Canned Estimator § Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict) § Hooks for All Phases of Model Training and Evaluation § Load Input: input_fn() § Train: model_fn() and train() § Evaluate: eval_fn() and evaluate() § Performance Metrics: Loss, Accuracy, … § Save and Export: export_savedmodel() § Predict: predict() Uses the slow sess.run() https://github.com/GoogleCloudPlatform/cloudml-samples /blob/master/census/customestimator/
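A minimal custom Estimator sketch (TF 1.x); the one-layer model, the feature name 'x', and the model_dir are illustrative:

import tensorflow as tf

def model_fn(features, labels, mode):
    logits = tf.layers.dense(features['x'], units=10)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=tf.argmax(logits, axis=1))
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='/tmp/my_model')
# estimator.train(input_fn=train_input_fn, steps=1000)
# estimator.evaluate(input_fn=eval_input_fn)
# estimator.export_savedmodel(...)   # see the SavedModel slide later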
  • 95. TF.CONTRIB.LEARN.EXPERIMENT § Easier-to-Use Distributed TensorFlow § Same API for Local and Distributed § Combines Estimator with input_fn() § Used for Training, Evaluation, & Hyper-Parameter Tuning § Distributed Training Defaults to Data-Parallel & Async § Cluster Configuration is Fixed at Start of Training Job § No Auto-Scaling Allowed, but That’s OK for Training Note: The Experiment API Will Likely Be Deprecated Soon
  • 96. ESTIMATOR + EXPERIMENT CONFIGS § TF_CONFIG § Special environment variable for config § Defines ClusterSpec in JSON incl. master, workers, PS's § Distributed mode: '{"environment":"cloud"}' § Local mode: '{"environment":"local", "task":{"type":"worker"}}' § RunConfig: Defines checkpoint interval, output directory, etc. § HParams: Hyper-parameter tuning parameters and ranges § learn_runner creates RunConfig before calling run() & tune() § schedule is set based on {"task":{"type":…}} TF_CONFIG= '{ "environment": "cloud", "cluster": { "master":["worker0:2222"], "worker":["worker1:2222"], "ps": ["ps0:2222"] }, "task": {"type": "ps", "index": "0"} }'
  • 97. ESTIMATOR + KERAS § Distributed TensorFlow (Estimator) + Easy to Use (Keras) § tf.keras.estimator.model_to_estimator() # Instantiate a Keras inception v3 model. keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None) # Compile model with the optimizer, loss, and metrics you'd like to train with. keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy']) # Create an Estimator from the compiled Keras model. est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3) # Treat the derived Estimator as you would any other Estimator. For example, # the following derived Estimator calls the train method: est_inception_v3.train(input_fn=my_training_set, steps=2000)
  • 98. “CANNED” ESTIMATORS § Commonly-Used Estimators § Pre-Tested and Pre-Tuned § DNNClassifer, TensorForestEstimator § Always Use Canned Estimators If Possible § Reduce Lines of Code, Complexity, and Bugs § Use FeatureColumn to Define & Create Features Custom vs. Canned @ Google, August 2017
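A minimal canned-estimator sketch (TF 1.x); the feature name, layer sizes, and model_dir are illustrative:

import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column('x', shape=[784])]
classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                        hidden_units=[256, 64],
                                        n_classes=10,
                                        model_dir='/tmp/mnist_dnn')
# classifier.train(input_fn=train_input_fn, steps=1000)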
  • 99. ESTIMATOR + DATASET API def input_fn(): def generator(): while True: yield ... my_dataset = tf.data.Dataset.from_generator(generator, tf.int32) # A one-shot iterator automatically initializes itself on first use. iter = my_dataset.make_one_shot_iterator() # The return value of get_next() matches the dataset element type. images, labels = iter.get_next() return images, labels # The input_fn can be used as a regular Estimator input function. estimator = tf.estimator.Estimator(…) estimator.train(input_fn=input_fn, …)
  • 100. OPTIMIZER + ESTIMATOR API + TPU'S run_config = tpu_config.RunConfig() tpu_config = tf.contrib.tpu.TPUConfig(FLAGS.iterations, FLAGS.num_shards) estimator = tpu_estimator.TPUEstimator(model_fn=model_fn, config=run_config) estimator.train(input_fn=input_fn, num_epochs=10, …) optimizer = tpu_optimizer.CrossShardOptimizer( tf.train.GradientDescentOptimizer(learning_rate=…)) train_op = optimizer.minimize(loss) estimator_spec = tf.estimator.EstimatorSpec(train_op=train_op, loss=…) https://www.tensorflow.org/programmers_guide/using_tpu
  • 101. DATASET API TIMELINES (TENSORBOARD) § Use Dataset.prefetch()!! § Helps prevent bottlenecks in I/O pipeline
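A hedged sketch of an input pipeline that overlaps preprocessing with training (TF 1.x); `filenames` and `parse_fn` are assumed to exist:

import tensorflow as tf

dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(parse_fn, num_parallel_calls=4)   # parse records on multiple CPU threads
dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.batch(128)
dataset = dataset.prefetch(1)                           # stage the next batch while the GPU trains
iterator = dataset.make_one_shot_iterator()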
  • 103. TPU PROFILING pip install cloud-tpu-profiler==1.5.1 capture_tpu_profile --tpu_name=$TPU_NAME --logdir=$MODEL_DIR https://cloud.google.com/tpu/docs/cloud-tpu-tools tensorboard --logdir=$MODEL_DIR
  • 105. INPUT PIPELINE ANALYSIS § Determine if Pipeline is Input-Bound
  • 106. TF.CONTRIB.LEARN.HEAD (OBJECTIVES) § Single-Objective Estimator § Single classification prediction § Multi-Objective Estimator § One (1) classification prediction § One(1) final layer to feed into next model § Multiple Heads Used to Ensemble Models § Treats neural network as a feature engineering step § Supported by TensorFlow Serving
  • 107. TF.LAYERS § Standalone Layer or Entire Sub-Graphs § Functions of Tensor Inputs & Outputs § Mix and Match with Operations § Assumes 1st Dimension is Batch Size § Handles One (1) to Many (*) Inputs § Metrics are Layers § Loss Metric (Per Mini-Batch) § Accuracy and MSE (Across Mini-Batches)
  • 108. TF.FEATURE_COLUMN § Used by Canned Estimator § Declaratively Specify Training Inputs § Converts Sparse to Dense Tensors § Sparse Features: Query Keyword, ProductID § Dense Features: One-Hot, Multi-Hot § Wide/Linear: Use Feature-Crossing § Deep: Use Embeddings
  • 109. TF.FEATURE_COLUMN EXAMPLE § Continuous + One-Hot + Embedding deep_columns = [ age, education_num, capital_gain, capital_loss, hours_per_week, tf.feature_column.indicator_column(workclass), tf.feature_column.indicator_column(education), tf.feature_column.indicator_column(marital_status), tf.feature_column.indicator_column(relationship), # To show an example of embedding tf.feature_column.embedding_column(occupation, dimension=8), ]
  • 110. FEATURE CROSSING § Create New Features by Combining Existing Features § Limitation: Combinations Must Exist in Training Dataset base_columns = [ education, marital_status, relationship, workclass, occupation, age_buckets ] crossed_columns = [ tf.feature_column.crossed_column( ['education', 'occupation'], hash_bucket_size=1000), tf.feature_column.crossed_column( ['age_buckets', 'education', 'occupation'], hash_bucket_size=1000) ]
  • 111. SEPARATE TRAINING + EVALUATION § Separate Training and Evaluation Clusters § Evaluate Upon Checkpoint § Avoid Resource Contention § Training Continues in Parallel with Evaluation Training Cluster Evaluation Cluster Parameter Server Cluster
  • 112. BATCH (RE-)NORMALIZATION (2015, 2017) § Each Mini-Batch May Have Wildly Different Distributions § Normalize per Batch (and Layer) § Faster Training, Learns Quicker § Final Model is More Accurate § TensorFlow is already on 2nd Generation Batch Algorithm § First-Class Support for Fusing Batch Norm Layers § Final mean + variance Are Folded Into Graph Later -- (Almost) Always Use Batch (Re-)Normalization! -- z = tf.matmul(a_prev, W) a = tf.nn.relu(z) a_mean, a_var = tf.nn.moments(a, [0]) scale = tf.Variable(tf.ones([depth/channels])) beta = tf.Variable(tf.zeros([depth/channels])) bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
  • 113. DROPOUT (2014) § Training Technique § Prevents Overfitting § Helps Avoid Local Minima § Inherent Ensembling Technique § Creates and Combines Different Neural Architectures § Expressed as Probability Percentage (ie. 50%) § Boost Other Weights During Validation & Prediction Perform Dropout (Training Phase) Boost for Dropout (Validation & Prediction Phase) 0% Dropout 50% Dropout
  • 114. BATCH NORM, DROPOUT + ESTIMATOR API § Must Specify Evaluation or Training Mode § These Will Behave Differently Depending on Mode
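A hedged sketch of the mode-dependent wiring inside a model_fn (TF 1.x); the layer sizes and feature name are illustrative:

import tensorflow as tf

def model_fn(features, labels, mode):
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)
    net = tf.layers.dense(features['x'], 256, activation=tf.nn.relu)
    net = tf.layers.batch_normalization(net, training=is_training)   # batch statistics only while training
    net = tf.layers.dropout(net, rate=0.5, training=is_training)     # disabled at eval/predict time
    logits = tf.layers.dense(net, 10)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=tf.argmax(logits, axis=1))
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    # Batch norm's moving mean/variance update via UPDATE_OPS; couple them to the train step
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer().minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)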
  • 115. SAVED MODEL FORMAT § Different Format than Traditional Exporter § Contains Checkpoints, 1..* MetaGraph’s, and Assets § Export Manually with SavedModelBuilder § Estimator.export_savedmodel() § Hooks to Generate SignatureDef § Use saved_model_cli to Verify § Used by TensorFlow Serving § New Standard Export Format? (Catching on Slowly…)
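A hedged export sketch (TF 1.x); the input name, shape, and export directory are illustrative, and `estimator` is assumed from the earlier Estimator sketch:

import tensorflow as tf

def serving_input_receiver_fn():
    inputs = {'x': tf.placeholder(tf.float32, shape=[None, 784], name='x')}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = estimator.export_savedmodel('/tmp/exported_models', serving_input_receiver_fn)
# Verify the exported SignatureDef:
#   saved_model_cli show --dir /tmp/exported_models/<timestamp> --all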
  • 116. TENSORFLOW DEBUGGER § Step through Operations § Inspect Inputs and Outputs § Wrap Session in Debug Session sess = tf.Session(config=config) sess = tf_debug.LocalCLIDebugWrapperSession(sess) https://www.tensorflow.org/programmers_guide/debugger
  • 117. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 118. SINGLE NODE, MULTI-GPU TRAINING § cpu:0 § By default, all CPUs § Requires extra config to target a CPU § gpu:0..n § Each GPU has a unique id § TF usually prefers a single GPU § xla_cpu:0, xla_gpu:0..n § “JIT Compiler Device” § Hints TensorFlow to attempt JIT Compile with tf.device(“/cpu:0”): with tf.device(“/gpu:0”): with tf.device(“/gpu:1”): GPU 0 GPU 1
  • 119. DISTRIBUTED, MULTI-NODE TRAINING § TensorFlow Automatically Inserts Send and Receive Ops into Graph § Parameter Server Synchronously Aggregates Updates to Variables § Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS Worker0 Worker0 Worker1 Worker0 Worker1 Worker2 gpu0 gpu1 gpu2 gpu3 gpu0 gpu1 gpu2 gpu3 gpu0 gpu1 gpu2 gpu3 gpu0 gpu1 gpu0 gpu0 Single Node Multiple Nodes
  • 120. DATA PARALLEL VS. MODEL PARALLEL § Data Parallel (“Between-Graph Replication”) § Send exact same model to each device § Each device operates on partition of data § ie. Spark sends same function to many workers § Each worker operates on their partition of data § Model Parallel (“In-Graph Replication”) § Send different partition of model to each device § Each device operates on all data § Difficult, but required for larger models with lower-memory GPUs
  • 121. SYNCHRONOUS VS. ASYNCHRONOUS § Synchronous § Nodes compute gradients § Nodes update Parameter Server (PS) § Nodes sync on PS for latest gradients § Asynchronous § Some nodes delay in computing gradients § Nodes don’t update PS § Nodes get stale gradients from PS § May not converge due to stale reads!
  • 122. CHIEF WORKER § Chief Defaults to Worker Task 0 § Task 0 is guaranteed to exist § Performs Maintenance Tasks § Writes log summaries § Instructs PS to checkpoint vars § Performs PS health checks § (Re-)Initialize variables at (re-)start of training
  • 123. NODE AND PROCESS FAILURES § Checkpoint to Persistent Storage (HDFS, S3) § Use MonitoredTrainingSession and Hooks § Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos) § Understand Failure Modes and Recovery States Stateless, Not Bad: Training Continues Stateful, Bad: Training Must Stop Dios Mio! Long Night Ahead…
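A hedged sketch of fault-tolerant distributed training with MonitoredTrainingSession (TF 1.x); the cluster addresses, task index, toy model, step limit, and checkpoint path are illustrative assumptions:

import tensorflow as tf

cluster = tf.train.ClusterSpec({'ps': ['ps0:2222'],
                                'worker': ['worker0:2222', 'worker1:2222']})
task_index = 0
server = tf.train.Server(cluster, job_name='worker', task_index=task_index)

# Variables land on the parameter server; ops land on this worker
with tf.device(tf.train.replica_device_setter(
        cluster=cluster, worker_device='/job:worker/task:%d' % task_index)):
    global_step = tf.train.get_or_create_global_step()
    w = tf.Variable(0.0)
    loss = tf.square(w - 1.0)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

# Handles chief duties, checkpointing, and recovery after worker restarts
with tf.train.MonitoredTrainingSession(master=server.target,
                                       is_chief=(task_index == 0),
                                       checkpoint_dir='hdfs://namenode/ckpts/mnist',
                                       hooks=[tf.train.StopAtStepHook(last_step=10000)]) as sess:
    while not sess.should_stop():
        sess.run(train_op)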
  • 124. ADVANCED DEVICE PLACEMENT STRATEGIES § Re-Inforcement Learning Adapts to Real-Time Conditions § Manual Device Placement is Static § TensorFlow Grappler Project
  • 125. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 126. XLA FRAMEWORK § XLA: “Accelerated Linear Algebra” § Reduce Reliance on Custom Operators § Intermediate Representation used by Hardware Vendors § Improve Portability § Increase Execution Speed § Decrease Memory Usage § Decrease Mobile Footprint Helps TensorFlow Be Flexible AND Performant!!
  • 127. XLA HIGH LEVEL OPTIMIZER (HLO) § HLO: “High Level Optimizer” § Compiler Intermediate Representation (IR) § Independent of source and target language § XLA Step 1 Emits Target-Independent HLO § XLA Step 2 Emits Target-Dependent LLVM § LLVM Emits Native Code Specific to Target § Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
  • 128. XLA IS DESIGNED FOR RE-USE § Pluggable Backends § HLO “Toolkit” § Call BLAS or cuDNN § Use LLVM or BYO Low-Level-Optimizer
  • 129. MINIMAL XLA BACKEND § HLO / LLVM Pipeline § StreamExecutor Plugin
  • 131. XLA GPU / NVIDIA PTX BACKEND
  • 132. XLA GPU / OPENCL BACKEND
  • 135. XLA PERFORMANCE OPTIMIZATIONS § JIT Training § MNIST: 30% Speed Up § Inception: 20% Speed Up § Basic LSTM: 80% Speed Up § Translation Model BNMT: 20% Speed Up § AOT Inference (Next Section) § LSTM Model Size: 1 MB => 10 KB
  • 136. JIT COMPILER § JIT: “Just-In-Time” Compiler § Built on XLA Framework § Reduce Memory Movement – Especially with GPUs § Reduce Overhead of Multiple Function Calls § Similar to Spark Operator Fusing in Spark 2.0 § Unroll Loops, Fuse Operators, Fold Constants, … § Scopes: session, device, with jit_scope():
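A hedged sketch of enabling the XLA JIT at the session level (TF 1.x); the op-level jit_scope import has moved between releases, so treat that part as an assumption:

import tensorflow as tf

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1  # JIT-compile eligible subgraphs
sess = tf.Session(config=config)

# Op-level scoping (contrib location in TF 1.x):
# from tensorflow.contrib.compiler import jit
# with jit.experimental_jit_scope():
#     y = tf.matmul(x, w) + b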
  • 137. TO JIT OR NOT TO JIT
  • 138. VISUALIZING JIT COMPILER IN ACTION Before JIT After JIT Google Web Tracing Framework: http://google.github.io/tracing-framework/ from tensorflow.python.client import timeline trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('timeline.json', 'w') as trace_file: trace_file.write( trace.generate_chrome_trace_format(show_memory=True)) run_options = tf.RunOptions(trace_level=tf.RunOptions.SOFTWARE_TRACE) run_metadata = tf.RunMetadata() sess.run(options=run_options, run_metadata=run_metadata)
  • 139. VISUALIZING FUSING OPERATORS pip install graphviz dot -Tpng /tmp/hlo_graph_1.w5LcGs.dot -o hlo_graph_1.png GraphViz: http://www.graphviz.org hlo_*.dot files generated by XLA
  • 140. XLA COMPILATION SUMMARY § Generates Code and Libraries for Your Computation § Packages Only the Libraries Needed by Your Subgraph § Eliminates Dispatch Overhead of Operations § Fuses Operations to Avoid Memory Round Trip § Analyzes Buffers to Reuse Memory § Updates Memory In-Place § Unrolls Loops with Your Data Dimensions (ie. Batch Size) § Vectorizes Operations Specific to Your Data Dimensions
  • 141. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  • 142. WE ARE NOW… …OPTIMIZING Models AFTER Model Training TO IMPROVE Model Serving PERFORMANCE!
  • 143. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 144. AOT COMPILER § Standalone, Ahead-Of-Time (AOT) Compiler § Built on XLA framework § tfcompile § Creates executable with minimal TensorFlow Runtime needed § Includes only dependencies needed by subgraph computation § Creates functions with feeds (inputs) and fetches (outputs) § Packaged as cc_libary header and object files to link into your app § Commonly used for mobile device inference graph § Currently, only CPU x86-64 and ARM are supported - no GPU
  • 145. GRAPH TRANSFORM TOOL (GTT) § Post-Training Optimization to Prepare for Inference § Remove Training-only Ops (checkpoint, drop out, logs) § Remove Unreachable Nodes between Given feed -> fetch § Fuse Adjacent Operators to Improve Memory Bandwidth § Fold Final Batch Norm mean and variance into Variables § Round Weights/Variables to improve compression (ie. 70%) § Quantize (FP32 -> INT8) to Speed Up Math Operations
  • 146. AFTER TRAINING, BEFORE OPTIMIZATION -TensorFlow- Trains Variables -User- Fetches Outputs -User- Feeds Inputs -TensorFlow- Performs Operations -TensorFlow- Flows Tensors ?!
  • 147. POST-TRAINING GRAPH TRANSFORMS transform_graph --in_graph=unoptimized_cpu_graph.pb <-- Original Graph --out_graph=optimized_cpu_graph.pb <-- Transformed Graph --inputs='x_observed:0' <-- Feed (Input) --outputs='Add:0' <-- Fetch (Output) --transforms=' <-- List of Transforms strip_unused_nodes remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true) fold_batch_norms fold_old_batch_norms quantize_weights quantize_nodes'
  • 148. AFTER STRIPPING UNUSED NODES § Optimizations § strip_unused_nodes § Results § Graph much simpler § File size much smaller
  • 149. AFTER REMOVING UNUSED NODES § Optimizations § strip_unused_nodes § remove_nodes § Results § Pesky nodes removed § File size a bit smaller
  • 150. AFTER FOLDING CONSTANTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § Results § Placeholders (feeds) -> Variables* (*Why Variables and not Constants?)
  • 151. AFTER FOLDING BATCH NORMS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § Results § Graph remains the same § File size approximately the same
  • 152. AFTER QUANTIZING WEIGHTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § Results § Graph is same, file size is smaller, compute is faster
  • 153. WEIGHT (VARIABLE) QUANTIZATION § FP16 or INT8: Smaller & Computationally Faster than FP32 § Easy to “Linearly Quantize” (Re-Encode) FP32 -> INT8 Easy Breezy!
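A hedged NumPy sketch of the symmetric linear quantization idea (illustrative only, not the exact GTT/TensorRT implementation):

import numpy as np

def quantize_int8(w_fp32):
    scale = np.abs(w_fp32).max() / 127.0                       # map the largest magnitude to 127
    w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)
    return w_int8, scale

def dequantize(w_int8, scale):
    return w_int8.astype(np.float32) * scale                   # approximate reconstruction

w = np.random.randn(1000).astype(np.float32)
w_q, s = quantize_int8(w)
print('max abs error:', np.abs(w - dequantize(w_q, s)).max())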
  • 154. BENEFITS OF 32-BIT TO 8-BIT QUANTIZE § First Class Hardware and CUDA Support § One 32-Bit GPU Core: 4-Way Dot Product of 8-Bit Ints § GPU Compute Capability (CC) >= 6.1 Only
  • 155. ACTIVATION QUANTIZATION § Activations Not Known Ahead of Time § Depends on input, not easy to quantize § Requires Additional Calibration Step § Use representative, diverse validation dataset § ~1000 samples, ~10 minutes, cheap hardware § Run 32-Bit Inference with Calibration Data § Collect histogram of activation values at each layer § Generate many quantized distributions at diff saturation thresholds § Choose Saturation Threshold That Minimizes Accuracy Loss
  • 156. CHOOSING SATURATION THRESHOLD § Trade-off Between Range & Precision § INT8 Should Encode Same Information As Original FP32 § Minimize Loss of Information Across Encoding/Distributions § Use KL_Divergence(32bit_dist, 8bit_dist) § Compares 2 distributions § Similar to Cross-Entropy
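A hedged sketch of the KL-divergence comparison between the reference FP32 activation histogram and a quantized candidate (scipy is assumed; this shows the idea, not the exact calibration algorithm):

import numpy as np
from scipy.stats import entropy

def kl_divergence(reference_hist, candidate_hist):
    p = reference_hist.astype(np.float64) / reference_hist.sum()
    q = candidate_hist.astype(np.float64) / candidate_hist.sum()
    q = np.where(q == 0, 1e-12, q)     # avoid log-of-zero
    return entropy(p, q)               # KL(P || Q): lower means less information lost

# Sweep candidate saturation thresholds and keep the one with the smallest KL divergence.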
  • 157. SATURATE TO MINIMIZE ACCURACY LOSS § Helps Preserve Accuracy After Activation Quantization § Goal: Find Threshold (T) That Minimizes Accuracy Loss No Saturation Saturation
  • 158. AUTO-CALIBRATE: PIPELINEAI + TENSOR-RT Pre-Requisites § 32-Bit Trained Model (TensorFlow, Caffe) § Small Calibration Dataset (Validation) PipelineAI + TensorRT Optimizations § Run 32-Bit Inference on Calibration Dataset § Collect Required Statistics § Use KL_Divergence to Determine Saturation Thresholds § Perform 32-Bit Float -> 8-Bit Int Quantization § Generate Calibration Table and INT8 Execution Engine
  • 159. 32-BIT TO 8-BIT QUANTIZATION RESULTS Accuracy of INT8 Models Comparable to FP32
  • 160. AFTER ACTIVATION QUANTIZATION § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes (activations) § Results § Larger graph, needs calibration! Requires Additional freeze_requantization_ranges
  • 162. FREEZING MODEL FOR DEPLOYMENT § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes § freeze_graph § Results § Variables -> Constants Finally! We’re Ready to Deploy!!
  • 163. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 164. MODEL SERVING TERMINOLOGY § Inference § Only Forward Propagation through Network § Predict, Classify, Regress, … § Bundle § GraphDef, Variables, Metadata, … § Assets § ie. Map of ClassificationID -> String § {9283: “penguin”, 9284: “bridge”} § Version § Every Model Has a Version Number (Integer) § Version Policy § ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
  • 165. TENSORFLOW SERVING FEATURES § Supports Auto-Scaling § Custom Loaders beyond File-based § Tune for Low-latency or High-throughput § Serve Diff Models/Versions in Same Process § Customize Models Types beyond HashMap and TensorFlow § Customize Version Policies for A/B and Bandit Tests § Support Request Draining for Graceful Model Updates § Enable Request Batching for Diff Use Cases and HW § Supports Optimized Transport with GRPC and Protocol Buffers
  • 166. GRPC :: PROTOBUFFERS AS HTTP :: JSON
  • 167. PREDICTION SERVICE § Predict (Original, Generic) § Input: List of Tensor § Output: List of Tensor § Classify § Input: List of tf.Example (key, value) pairs § Output: List of (class_label: String, score: float) § Regress § Input: List of tf.Example (key, value) pairs § Output: List of (label: String, score: float)
  • 168. PREDICTION INPUTS + OUTPUTS § SignatureDef § Defines inputs and outputs § Maps external (logical) to internal (physical) tensor names § Allows internal (physical) tensor names to change from tensorflow.python.saved_model import utils from tensorflow.python.saved_model import signature_constants from tensorflow.python.saved_model import signature_def_utils graph = tf.get_default_graph() x_observed = graph.get_tensor_by_name('x_observed:0') y_pred = graph.get_tensor_by_name('add:0') inputs_map = {'inputs': x_observed} outputs_map = {'outputs': y_pred} predict_signature = signature_def_utils.predict_signature_def(inputs=inputs_map, outputs=outputs_map)
  • 169. MULTI-HEADED INFERENCE § Inputs Pass Through Model One Time § Model Returns Multiple Predictions: 1. Human-readable prediction (ie. “penguin”, “church”,…) 2. Final layer of scores (float vector) § Final Layer of floats Pass to the Next Model in Ensemble § Optimizes Bandwidth, CPU/GPU, Latency, Memory § Enables Complex Model Composing and Ensembling
  • 170. BUILD YOUR OWN MODEL SERVER § Adapt GRPC(Google) <-> HTTP (REST of the World) § Perform Batch Inference vs. Request/Response § Handle Requests Asynchronously § Support Mobile, Embedded Inference § Customize Request Batching § Add Circuit Breakers, Fallbacks § Control Latency Requirements § Reduce Number of Moving Parts #include “tensorflow_serving/model_servers/server_core.h” class MyTensorFlowModelServer { ServerCore::Options options; // set options (model name, path, etc) std::unique_ptr<ServerCore> core; TF_CHECK_OK( ServerCore::Create(std::move(options), &core) ); } Compile and Link with libtensorflow.so
  • 171. RUNTIME OPTION: NVIDIA TENSOR-RT § Post-Training Model Optimizations § Specific to Nvidia GPU § Similar to TF Graph Transform Tool § GPU-Optimized Prediction Runtime § Alternative to TensorFlow Serving § PipelineAI Supports TensorRT!
  • 172. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 173. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 174. REQUEST BATCH TUNING § max_batch_size § Enables throughput/latency tradeoff § Bounded by RAM § batch_timeout_micros § Defines batch time window, latency upper-bound § Bounded by RAM § num_batch_threads § Defines parallelism § Bounded by CPU cores § max_enqueued_batches § Defines queue upper bound, throttling § Bounded by RAM Reaching either threshold will trigger a batch Separate, Non-Batched Requests Combined, Batched Requests
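These knobs correspond to TensorFlow Serving's batching configuration; a hedged example follows (values are illustrative, and flag names should be verified against your TensorFlow Serving version):

# batching_parameters.txt (text-format protobuf)
max_batch_size { value: 128 }
batch_timeout_micros { value: 10000 }
num_batch_threads { value: 8 }
max_enqueued_batches { value: 1000 }

# Typically passed at startup, e.g.:
# tensorflow_model_server --enable_batching=true --batching_parameters_file=batching_parameters.txt ...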
  • 175. ADVANCED BATCHING & SERVING TIPS § Batch Just the GPU/TPU Portions of the Computation Graph § Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops § Distribute Large Models Into Shards Across TensorFlow Model Servers § Batch RNNs Used for Sequential and Time-Series Data § Find Best Batching Strategy For Your Data Through Experimentation § BasicBatchScheduler: Homogeneous requests (ie Regress or Classify) § SharedBatchScheduler: Mixed requests, multi-step, ensemble predict § StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads § Serve Only One (1) Model Inside One (1) TensorFlow Serving Process § Much Easier to Debug, Tune, Scale, and Manage Models in Production.
  • 176. PIPELINE.AI FUNCTIONS (SERVERLESS) § Supports Kubernetes § Supports Docker Swarm
  • 177. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  • 178. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 179. KUBERNETES PRIORITY SCHEDULING Workloads can … § access the entire cluster up to the autoscaler max size § trigger autoscaling until higher-priority workload § "fill the cracks" of resource usage of higher-priority work (i.e., wait to run until resources are freed)
  • 180. KUBERNETES INGRESS § Single Service § Can also use Service (LoadBalancer or NodePort) § Fan Out & Name-Based Virtual Hosting § Route Traffic Using Path or Host Header § Reduces # of load balancers needed § 404 Implemented as default backend § Federation / Hybrid-Cloud § Creates Ingress objects in every cluster § Monitors health and capacity of pods within each cluster § Routes clients to appropriate backend anywhere in federation apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gateway-fanout annotations: kubernetes.io/ingress.class: istio spec: rules: - host: foo.bar.com http: paths: - path: /foo backend: serviceName: s1 servicePort: 80 - path: /bar backend: serviceName: s2 servicePort: 80 Fan Out (Path) apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gateway-virtualhost annotations: kubernetes.io/ingress.class: istio spec: rules: - host: foo.bar.com http: paths: backend: serviceName: s1 servicePort: 80 - host: bar.foo.com http: paths: backend: serviceName: s2 servicePort: 80 Virtual Hosting
  • 181. KUBERNETES INGRESS CONTROLLER § Ingress Controller Types § Google Cloud: kubernetes.io/ingress.class: gce § Nginx: kubernetes.io/ingress.class: nginx § Istio: kubernetes.io/ingress.class: istio § Must Start Ingress Controller Manually § Just deploying Ingress is not enough § Not started by kube-controller-manager § Start Istio Ingress Controller kubectl apply -f $ISTIO_INSTALL_PATH/install/kubernetes/istio.yaml
  • 182. ISTIO EGRESS § Whitelist Domains To Access From Within Service Mesh § Apply RoutingRules § Apply DestinationPolicys § Supports TLS, HTTP, GRPC kind: EgressRule metadata: name: pipeline-api-egress spec: destination: service: api.pipeline.ai ports: - port: 80 protocol: http - port: 443 protocol: https
  • 183. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 185. ISTIO ARCHITECTURE: ENVOY § Lyft Project § High-perf Proxy (C++) § Lots of Metrics § Zone-Aware § Service Discovery § Load Balancing § Fault Injection, Circuits § %-based Traffic Split, Shadow § Sidecar Pattern § Rate Limiting, Retries, Outlier Detection, Timeout with Budget, …
  • 186. ISTIO ARCHITECTURE: MIXER § Enforce Access Control § Evaluate Request-Attrs § Collect Metrics § Platform-Independent § Extensible Plugin Model
  • 187. ISTIO ARCHITECTURE: PILOT § Envoy service discovery § Intelligent routing § A/B Tests § Canary deployments § RouteRule->Envoy conf § Propagates to sidecars § Supports Kube, Consul, ...
  • 188. ISTIO ARCHITECTURE: SECURITY § Mutual TLS Auth § Credential Management § Uses Service-Identity § Canary Deployments § Fine-grained ACLs § Attribute & Role-based § Auditing & Monitoring
  • 189. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 190. ISTIO ROUTE RULES § Kubernetes Custom Resource Definition (CRD) kind: CustomResourceDefinition metadata: name: routerules.config.istio.io spec: group: config.istio.io names: kind: RouteRule listKind: RouteRuleList plural: routerules singular: routerule scope: Namespaced version: v1alpha2
  • 191. ADVANCED TRAFFIC ROUTING RULES § Content-based Routing § Uses headers, username, payload, … § Cross-Environment Routing § Shadow traffic prod=>staging
  • 192. ISTIO DESTINATION POLICIES § Load Balancing § ROUND_ROBIN (default) § LEAST_CONN (between 2 randomly-selected hosts) § RANDOM § Circuit Breaker § Max connections § Max requests per conn § Consecutive errors § Penalty timer (15 mins) § Scan windows (5 mins) circuitBreaker: simpleCb: maxConnections: 100 httpMaxRequests: 1000 httpMaxRequestsPerConnection: 10 httpConsecutiveErrors: 7 sleepWindow: 15m httpDetectionInterval: 5m
  • 193. ISTIO AUTO-SCALING § Traffic Routing and Auto-Scaling Occur Independently § Istio Continues to Obey Traffic Splits After Auto-Scaling § Auto-Scaling May Occur In Response to New Traffic Route
  • 194. A/B & BANDIT MODEL TESTING § Perform Live Experiments in Production § Compare Existing Model A with Model B, Model C § Safe Split-Canary Deployment § Pro Tip: Keep Ingress Simple – Use Route Rules Instead! apiVersion: config.istio.io/v1alpha2 kind: RouteRule metadata: name: predict-mnist-20-5-75 spec: destination: name: predict-mnist precedence: 2 # Greater than global deny-all route: - labels: version: A weight: 20 # 20% still routes to model A - labels: version: B # 5% routes to new model B weight: 5 - labels: version: C # 75% routes to new model C weight: 75 apiVersion: config.istio.io/v1alpha2 kind: RouteRule metadata: name: predict-mnist-1-2-97 spec: destination: name: predict-mnist precedence: 2 # Greater than global deny-all route: - labels: version: A weight: 1 # 1% routes to model A - labels: version: B # 2% routes to new model B weight: 2 - labels: version: C # 97% routes to new model C weight: 97 apiVersion: config.istio.io/v1alpha2 kind: RouteRule metadata: name: predict-mnist-97-2-1 spec: destination: name: predict-mnist precedence: 2 # Greater than global deny-all route: - labels: version: A weight: 97 # 97% still routes to model A - labels: version: B # 2% routes to new model B weight: 2 - labels: version: C # 1% routes to new model C weight: 1
  • 195. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 196. ISTIO METRICS AND MONITORING § Verify Traffic Splits § Fine-Grained Request Tracing
  • 197. ISTIO & CHAOS + LATENCY MONKEY § Fault Injection § Delay § Abort kind: RouteRule metadata: name: predict-mnist spec: destination: name: predict-mnist httpFault: abort: httpStatus: 420 percent: 100 kind: RouteRule metadata: name: predict-mnist spec: destination: name: predict-mnist httpFault: delay: fixedDelay: 7.000s percent: 100
  • 198. SPECIAL THANKS TO CHRISTIAN POSTA § http://blog.christianposta.com/istio-workshop
  • 199. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  • 200. PIPELINE.AI SUPPORTS ALL MAJOR MODELS
  • 202. THANK YOU!! § Please Star this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline Contact Me chris@pipeline.ai @cfregly