Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San Jose March 2018
Chris Fregly, Founder @ PipelineAI, will walk you through a real-world, complete, end-to-end pipeline-optimization example. We highlight hyper-parameters, and model pipeline phases, that had not been exposed until now.

While most hyperparameter optimizers stop at the training phase (i.e., learning rate, tree depth, EC2 instance type, etc.), we extend model validation and tuning into a new post-training optimization phase, including 8-bit reduced-precision weight quantization and neural-network layer fusing, among many other framework- and hardware-specific optimizations.

Next, we introduce hyperparameters at the prediction phase including request-batch sizing and chipset (CPU v. GPU v. TPU).

Lastly, we determine a PipelineAI Efficiency Score of our overall Pipeline including Cost, Accuracy, and Time. We show techniques to maximize this PipelineAI Efficiency Score using our massive PipelineDB along with the Pipeline-wide hyper-parameter tuning techniques mentioned in this talk.

Bio

Chris Fregly is Founder and Applied AI Engineer at PipelineAI, a Real-Time Machine Learning and Artificial Intelligence Startup based in San Francisco.

He is also an Apache Spark contributor, a Netflix Open Source committer, founder of the Global Advanced Spark and TensorFlow Meetup, and author of the O'Reilly training and video series "High Performance TensorFlow in Production with Kubernetes and GPUs."

Previously, Chris was a Distributed Systems Engineer at Netflix, a Data Solutions Engineer at Databricks, and a Founding Member and Principal Engineer at the IBM Spark Technology Center in San Francisco.


Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San Jose March 2018

  1. 1. HYPER-PARAMETER TUNING ACROSS THE ENTIRE AI PIPELINE: MODEL TRAINING TO PREDICTING GPU TECH CONFERENCE -- SAN JOSE, MARCH 2018 CHRIS FREGLY FOUNDER @ PIPELINEAI
  2. 2. KEY TAKE-AWAYS With PipelineAI, You Can… § Hyper-Parameter Tuning From Training to Inference § Generate Hardware-Specific Pipeline Optimizations § Deploy & Compare Optimizations in Live Production § Perform Continuous Model Training & Data Labeling
  3. 3. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Routing
  4. 4. INTRODUCTIONS: ME § Chris Fregly, Founder & Engineer @ PipelineAI § Formerly Netflix, Databricks, IBM Spark Tech § Founder @ Advanced Spark TensorFlow Meetup § Please Join Our 75,000+ Global Members!! Contact Me chris@pipeline.ai @cfregly Global Locations * San Francisco * Chicago * Austin * Washington DC * Dusseldorf * London
  5. 5. INTRODUCTIONS: YOU You Want To … § Perform Hyper-Parameter Tuning Across *Entire* Pipeline § Measure Results of Tuning Both Offline *and* Online § Deploy Models Rapidly, Safely, *Directly* in Production § Trace and Explain *Live* Model Predictions
  6. 6. PIPELINEAI IS OPEN SOURCE § https://github.com/PipelineAI/pipeline/ § Please Star this GitHub Repo! § “Each Star is Worth $1,500 in Seed Money” - A Prominent Venture Capitalist in Silicon Valley http://jrvis.com/red-dwarf/
  7. 7. PIPELINEAI ANNOUNCEMENTS http://pipeline.ai http://community.pipeline.ai
  8. 8. PIPELINEAI SUPPORTS ALL MAJOR MODELS
  9. 9. PIPELINEAI TERMINOLOGY § “Flask-App Fallacy”: Flask is Not Enough for Production-izing ML/AI Models § “Pipeline”: All Phases Including Train, Validate, Optimize, Deploy, and Predict § “Experiment”: Across All Environments from Research Lab to Live Production § “Turning Knobs”: Hyper-Parameter Tuning Across All Phases of the Pipeline § “Model Serving”: Models Serving Predictions in Live Production § “Runtime”: Execution Environment for Any Phase of Pipeline (TensorRT, Caffe) § “Train-to-Serve”: Training with Intent to Serve Predictions § “Train-Serving Skew”: Model Performs Poorly on Live Data § “Post-Training Optimization”: Prepare Model and Runtime for Fast Inference http://NoFlaskApp.com
  10. 10. WHOLE-PIPELINE HYPER-PARAMETER TUNING Any Runtime § Any Device: CPU, GPU, TPU, IoT § Any Network and System Configuration § Any Cloud and On-Premise Environment § Any Model § Any Language § Any Framework § Any Hyper-Parameter § 1,000,000's of Model + Runtime Pipeline Combinations. We Find the Best Combinations For Your Model and Workload!
  11. 11. WHOLE-PIPELINE HYPER-PARAMETER TUNING
  12. 12. WHOLE-PIPELINE HYPER-PARAMETERS
  Training: Hyperparameters
  pipelinedb.add("learning_rate", 0.025)
  pipelinedb.add("batch_size", 8192)
  pipelinedb.add("num_epochs", 100)
  ^^ THIS IS WHERE MOST DATA SCIENTISTS END BECAUSE ^^
  ^^ THEY HAVE NO WAY OF COLLECTING ANYTHING MORE ^^
  ^^ UNTIL NOW! ^^
  pipelinedb.add("ec2_instance_type", "g3.4xlarge")
  pipelinedb.add("utilized_memory_gigabyte", 20)
  pipelinedb.add("network_speed_gigabit", 10)
  pipelinedb.add("training_precision_bits", 16)
  pipelinedb.add("accelerator_type", "nvidia_gpu_v100")  # google_tpu
  pipelinedb.add("cpu_to_accelerator_network_type", "pcie")  # nvlink
  pipelinedb.add("cpu_to_accelerator_network_bandwidth_gigabit", 100)
  Training: Results
  pipelinedb.add("training_accuracy_percent", 95)
  pipelinedb.add("validation_accuracy_percent", 94)
  pipelinedb.add("training_auc", 0.70)
  pipelinedb.add("validation_auc", 0.69)
  pipelinedb.add("time_to_train_seconds", 0.69)
  Optimization: Hyperparameters
  pipelinedb.add("batch_norm_fusing", True)
  pipelinedb.add("weight_quantization_bits", 8)  # 2-bit, 7-bit
  Optimization: Results (Collected At End of Optimization)
  pipelinedb.add("weight_quantization_reduction_percent", 50)
  Inference: Hyperparameters
  pipelinedb.add("runtime_type", "tfserving")  # python, tensorrt
  pipelinedb.add("runtime_chip", "gpu")
  pipelinedb.add("model_type", "tensorflow")  # caffe, scikit
  pipelinedb.add("request_batch_window_ms", 10)
  pipelinedb.add("request_batch_size", 1000)
  Inference: Results (Every ~15 Mins Inside PipelineAI Runtime)
  pipelinedb.add("latency_99_percentile_ms", 5)
  pipelinedb.add("cost_per_prediction_usd", 0.000001)
  pipelinedb.add("24_hr_auc", 0.70)
  pipelinedb.add("48_hr_auc", 0.30)
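  Purely illustrative sketch (this is not the real PipelineAI client): a minimal collector with the same add() shape as the calls above, flushing one JSON record per pipeline run so runs can be compared later.
  import json

  class PipelineDB:  # hypothetical stand-in for the pipelinedb client
      def __init__(self, run_id):
          self.run_id = run_id
          self.params = {}

      def add(self, key, value):
          # Record one hyper-parameter or result for this run
          self.params[key] = value

      def flush(self, path):
          # Append this run as a single JSON line for later comparison
          with open(path, "a") as f:
              f.write(json.dumps({"run_id": self.run_id, **self.params}) + "\n")

  pipelinedb = PipelineDB(run_id="mnist-A")
  pipelinedb.add("learning_rate", 0.025)
  pipelinedb.add("runtime_type", "tfserving")
  pipelinedb.flush("runs.jsonl")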
  13. 13. WHY EMPHASIS ON MODEL INFERENCE?
  Model Training: Batch & Boring; Offline in Research Lab; Pipeline Ends at Training; No Insight into Live Production; Small Number of Data Scientists; Optimizations Are Very Well-Known; 100's of Training Jobs per Day
  Model Inference: Real-Time & Exciting!!; Online in Live Production; No Ability To Turn Inference Knobs (Yet); Extend Model Validation Into Production; Huuuuuuge Number of Application Users; Inference Optimizations Not Yet Explored; 1,000,000's of Predictions per Sec
  14. 14. GROWTH IN ML/AI MODELS
  Data Scientists: 44,000 (2017) -> 11,500,000 (2026)
  ML/AI Market: $39 Billion (2017) -> $2 Trillion (2026)
  Models Trained: 200,000 (2017) -> 50,000,000 (2026)
  Model Predictions: 4,000,000 (2016) -> 250,000,000,000 (2026)
  15. 15. MODEL DEPLOYMENT OPTIONS § AWS SageMaker § Released Nov 2017 @ re:Invent § Custom Docker Images for Training/Serving (ie. PipelineAI Images) § Distributed TensorFlow Training through Estimator API § Traffic Splitting for A/B Model Testing § Google Cloud ML Engine § Mostly Command-Line Based § Driving TensorFlow Open Source API (ie. Estimator API) § Azure ML § On-Premise Docker, Docker Swarm, Kubernetes, Mesos PipelineAI Supports All Hybrid-Cloud, On-Prem, and Air-Gap Deployments!
  16. 16. WHOLE-PIPELINE OPTIMIZATION OPTIONS § Model Training Optimizations § Model Hyper-Parameters (ie. Learning Rate) § Reduced Precision (ie. FP16 Half Precision) § Model Optimizations to Prepare for Inference § Quantize Model Weights + Activations From 32-bit to 8-bit § Fuse Neural Network Layers Together § Model Inference Runtime Optimizations § Runtime Config: Request Batch Size, etc § Different Runtime: TensorFlow Serving CPU/GPU, Nvidia TensorRT
  17. 17. NVIDIA TENSOR-RT RUNTIME § Post-Training Model Optimizations § Specific to Nvidia GPUs § GPU-Optimized Prediction Runtime § Alternative to TensorFlow Serving § PipelineAI Supports TensorRT!
  18. 18. TENSORFLOW LITE OPTIMIZING CONVERTER § Post-Training Model Optimizations § Currently Supports iOS and Android § On-Device Prediction Runtime § Low-Latency, Fast Startup § Selective Operator Loading § 70KB Min - 300KB Max Runtime Footprint § Supports Accelerators (GPU, TPU) § Falls Back to CPU without Accelerator § Java and C++ APIs
  bazel build tensorflow/contrib/lite/toco:toco && \
    ./bazel-bin/third_party/tensorflow/contrib/lite/toco/toco \
    --input_file=frozen_eval_graph.pb \
    --output_file=tflite_model.tflite \
    --input_format=TENSORFLOW_GRAPHDEF \
    --output_format=TFLITE \
    --inference_type=QUANTIZED_UINT8 \
    --input_shape="1,224,224,3" \
    --input_array=input \
    --output_array=outputs \
    --std_value=127.5 \
    --mean_value=127.5
  19. 19. PIPELINEAI QUICK START § http://quickstart.pipeline.ai § Any Model, Any Training Runtime, Any Prediction Runtime § Support for Docker, Docker Swarm, Kubernetes, Mesos § Package Model+Runtime into a Docker Image § Emphasizes Immutable Deployment and Infrastructure § Same Image Across All Environments § No Library or Dependency Surprises from Laptop to Production § Allows Tuning Offline and Online Model+Runtime Together
  20. 20. STEP 1: BUILD MODEL+TRAINING SERVER § Train Model with Specific Hyper-Parameters § Monitor and Compare Validation Accuracy § Tune Hyper-Parameters to Improve Accuracy
  Build Model Training Server A (Learning Rate 0.025):
  pipeline train-server-build --model-name=mnist --model-tag=A --model-type=tensorflow --model-path=./tensorflow/mnist/0.025/model
  Build Model Training Server B (Learning Rate 0.050):
  pipeline train-server-build --model-name=mnist --model-tag=B --model-type=tensorflow --model-path=./tensorflow/mnist/0.050/model
  21. 21. STEP 2: TRAIN, MEASURE, TUNE § Train Model with Specific Hyper-Parameters § Monitor and Compare Validation Accuracy § Tune Hyper-Parameters to Improve Accuracy
  Train Model A (Learning Rate 0.025):
  pipeline train-server-start --model-name=mnist --model-tag=A --input-host-path=./tensorflow/mnist/input --output-host-path=./tensorflow/mnist/output --train-args="--learning-rate=0.025 --batch-size=128"
  Train Model B (Learning Rate 0.050):
  pipeline train-server-start --model-name=mnist --model-tag=B --input-host-path=./tensorflow/mnist/input --output-host-path=./tensorflow/mnist/output --train-args="--learning-rate=0.050 --batch-size=128"
  22. 22. STEP 3: CREATE PREDICT() METHOD
  Basic Insight:
  def predict(request: bytes) -> bytes:
      return _model.predict(request)
  Detailed Insight:
  def predict(request: bytes) -> bytes:
      # Step 1: Transform Request (JSON => np.array)
      transformed_request = _transform_request(request)
      # Step 2: Model Predict
      predictions = _model.predict(transformed_request)
      # Step 3: Transform Response (np.array => JSON)
      transformed_response = _transform_response(predictions)
      return transformed_response
  § Multiple Levels of Performance Metrics and Logging § Enterprise Adapters for All Metrics and Logging Systems
  View Logs:
  pipeline predict-server-logs --model-name=mnist --model-tag=cpu
  23. 23. STEP 4: BUILD MODEL+PREDICTION SERVER (Same Model, 3 Different Prediction Runtimes)
  Build Local Model Server A, TF Serving CPU:
  pipeline predict-server-build --model-name=mnist --model-tag=A --model-type=tensorflow --model-runtime=tfserving --model-chip=cpu --model-path=./tensorflow/mnist/
  Build Local Model Server B, TF Serving GPU:
  pipeline predict-server-build --model-name=mnist --model-tag=B --model-type=tensorflow --model-runtime=tfserving --model-chip=gpu --model-path=./tensorflow/mnist/
  Build Local Model Server C, TensorRT GPU:
  pipeline predict-server-build --model-name=mnist --model-tag=C --model-type=tensorflow --model-runtime=tensorrt --model-chip=gpu --model-path=./tensorflow/mnist/
  24. 24. STEP 5: PREDICT, MEASURE, TUNE (LOCAL) § Perform Mini-Load Test on Local Model Server § Immediate Feedback on Prediction Performance § Compare to Previous Model+Runtime Variations § Gain Intuition Before Pushing to Prod
  Start Local Model Server:
  pipeline predict-server-start --model-name=mnist --model-tag=A --memory-limit=2G
  Start Local Predict Load Test:
  pipeline predict-http-test --model-endpoint-url=http://localhost:8080 --test-request-path=test_request.json --test-request-concurrency=1000
  25. 25. STEP 6: DEPLOY, MEASURE, TUNE (IN PROD) § Deploy from CLI or Jupyter Notebook § Tear-Down and Rollback Models Quickly § Shadow Canary: Deploy to 20% Live Traffic § Split Canary: Deploy to 97-2-1% Live Traffic
  Start Cluster A: pipeline predict-kube-start --model-name=mnist --model-tag=A
  Start Cluster B: pipeline predict-kube-start --model-name=mnist --model-tag=B
  Start Cluster C: pipeline predict-kube-start --model-name=mnist --model-tag=C
  Route Live Traffic:
  pipeline predict-kube-route --model-name=mnist --model-split-tag-and-weight-dict='{"A":97, "B":2, "C":1}' --model-shadow-tag-list='[]'
  26. 26. STEP 7: OPTIMIZE, MEASURE, RE-DEPLOY § Prepare Model for Predicting § Simplify Network, Reduce Size § Reduce Precision -> Fast Math § Some Tools § Graph Transform Tool (GTT) § tfcompile
  pipeline optimize --optimization-list=['quantize_weights','tfcompile'] --model-name=mnist --model-tag=A --model-path=./tensorflow/mnist/model --model-inputs=['x'] --model-outputs=['add'] --output-path=./tensorflow/mnist/optimized_model
  Linear Regression Model Size, After Training vs. After Optimizing: 70MB -> 70K (!)
  27. 27. STEP 8: EVALUATE MODEL+RUNTIME VARIANT § Offline, Batch Metrics § Validation + Training Accuracy § CPU + GPU Utilization § Online, Live Prediction Values § Compare Relative Precision § Newly-Seen, Streaming Data § Online, Real-Time Metrics § Response Time, Throughput § Cost ($) Per Prediction
  28. 28. STEP 9: DETERMINE PIPELINEAI EFFICIENCY
  29. 29. STEP 10: SHIFT TRAFFIC TO BEST VARIANT § A/B Tests § Inflexible and Boring § Multi-Armed Bandits § Adaptive and Exciting!
  Dynamically Route Traffic to Winning Model+Runtime:
  pipeline predict-kube-route --model-name=mnist --model-split-tag-and-weight-dict='{"A":1, "B":2, "C":97}' --model-shadow-tag-list='[]'
  30. 30. PIPELINE PROFILING AND TUNING § Instrument Code to Generate “Timelines” for Any Metric § Analyze with Google Web Tracing Framework (WTF) § Can Also Monitor CPU with top, GPU with nvidia-smi http://google.github.io/tracing-framework/
  from tensorflow.python.client import timeline
  trace = timeline.Timeline(step_stats=run_metadata.step_stats)
  with open('timeline.json', 'w') as trace_file:
      trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
  31. 31. MODEL AND ENSEMBLE TRACING/AUDITING § Necessary for Model Explain-ability § Fine-Grained Request Tracing § Used for Model Ensembles
  32. 32. VIEW REAL-TIME PREDICTION STREAMS § Visually Compare Real-time Predictions (Features and Inputs; Predictions and Confidences) Across Model A, Model B, and Model C
  33. 33. CONTINUOUS DATA LABELING AND FIXING § Identify and Fix Borderline (Unconfident) Predictions § Fix Predictions Along Class Boundaries § Facilitate “Human in the Loop” § Path to Crowd-Sourced Labeling § Retrain with Newly-Labeled Data § Game-ify the Labeling Process
  34. 34. CONTINUOUS MODEL TRAINING § The Holy Grail of Machine Learning § Kafka, Kinesis, Spark Streaming, Flink, Storm, Heron PipelineAI Supports Continuous Model Training
  35. 35. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  36. 36. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Feed, Train, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  37. 37. SETTING UP TENSORFLOW WITH GPUS § Very Painful! § Especially inside Docker § Use nvidia-docker § Especially on Kubernetes! § Use the Latest Kubernetes (with Init Script Support) § http://pipeline.ai for GitHub + DockerHub Links
  38. 38. TENSORFLOW + CUDA + NVIDIA GPU
  39. 39. VOLTA V100 AND TENSOR CORES § 84 Streaming Multiprocessors (SM’s) § 5,376 GPU Cores § 640 Tensor Cores (ie. Google TPU) § Can Perform 640 FP16 4x4 Matrix Multiplies § 120 TFLOPS = 4x FP32 and 10x FP64 § Allows Mixed FP16/FP32 Precision Operations § Matrix Dims Should be Multiples of 8 § More Shared Memory § New L0 Instruction Cache § Faster L1 Data Cache
  40. 40. GPU HALF-PRECISION SUPPORT § FP32: “Full Precision”, FP16: “Half Precision” § Two(2) FP16’s in 1 FP32 GPU Core § 2x Throughput! § Lower Precision is OK § Deep learning is approximate § The Network Matters Most § Not individual neuron accuracy
  41. 41. MORE ON HALF-PRECISION § 1997: Related Work by SGI § Commercial Request from ILM in 2002 § Implemented in Silicon by Nvidia in 2002 § Supported by Pascal P100 and Volta V100
  42. 42. MORE ON REDUCED-PRECISION § Less Precision => Less Memory & Bandwidth => Faster Math & Less Energy § Fits into Smaller Places Close to ALU’s § 4-bit, 2-bit, 1-bit (?!) Quantization § More Layers Help Maintain Accuracy at Reduced Precision § Tip: Scale and Center Dynamic Range at Each Layer § Otherwise, FP16’s become 0 - model may not converge!
  43. 43. GPU: 4-WAY DOT PRODUCT OF 8-BIT INTS § GPU Hardware and CUDA Support § Compute Capability (CC) >= 6.1
  44. 44. FP16 VS. INT8 § FP16 Has Larger Dynamic Range Than INT8 § Larger Dynamic Range Allows Higher Precision § Truncated FP32 Dynamic Range Higher Than FP16 § Not IEEE 754 Standard, But Worth Exploring
  45. 45. ENABLING FP16 IN TENSORFLOW § Harder Than You Think! § TPUs are 16-bit Native § For GPUs With CC 5.3+ (Only), Set the Following (Pascal P100, Volta V100):
  TF_FP16_MATMUL_USE_FP32_COMPUTE=0
  TF_FP16_CONV_USE_FP32_COMPUTE=0
  TF_XLA_FLAGS=--xla_enable_fast_math=1
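  To make this concrete, here is a minimal mixed-precision sketch (an illustration under assumptions, not PipelineAI's implementation), assuming TF 1.x and a CC >= 5.3 GPU: FP32 master weights are cast to FP16 for the forward pass, and the loss is scaled so small FP16 gradients do not flush to zero (the "scale the dynamic range" tip from slide 42). The loss-scale value is a hypothetical choice.
  import tensorflow as tf

  x = tf.placeholder(tf.float16, [None, 784])
  y = tf.placeholder(tf.float16, [None, 10])
  w = tf.get_variable("w", [784, 10], dtype=tf.float32)  # FP32 master weights
  b = tf.get_variable("b", [10], dtype=tf.float32)
  # Forward pass runs in FP16 on the GPU
  logits = tf.matmul(x, tf.cast(w, tf.float16)) + tf.cast(b, tf.float16)
  loss = tf.reduce_mean(tf.square(logits - y))
  loss_scale = 128.0  # illustrative; tune per model
  # Scale the FP32 loss up before differentiating, then un-scale the gradients
  grads = tf.gradients(tf.cast(loss, tf.float32) * loss_scale, [w, b])
  opt = tf.train.GradientDescentOptimizer(0.01)
  train_op = opt.apply_gradients(
      [(g / loss_scale, v) for g, v in zip(grads, [w, b])])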
  46. 46. FP32 VS. FP16 ON AWS GPU INSTANCES
  FP16 Half Precision: 87.2 T ops/second for p3 Volta V100; 4.1 T ops/second for g3 Tesla M60; 1.6 T ops/second for p2 Tesla K80
  FP32 Full Precision: 15.4 T ops/second for p3 Volta V100; 4.0 T ops/second for g3 Tesla M60; 3.3 T ops/second for p2 Tesla K80
  47. 47. GOOGLE CLOUD GPU + TPU § Tesla K80 § Pascal P100 § Volta V100 (Beta) § TPU (Beta, Google Cloud Only)
  48. 48. GOOGLE CLOUD TPUS § Attach/Detach As Needed § Scale In/Out As Needed § 180 TFlops per Device § TPU Pod = 64 TPUs = 11.5 PetaFlops § $6.50 per TPU Hour § Supports 16-bit TensorFlow
  49. 49. V100 AND CUDA 9 § Independent Thread Scheduling - Finally!! § Similar to CPU fine-grained thread synchronization semantics § Allows GPU to yield execution of any thread § Still Optimized for SIMT (Same Instruction Multi-Thread) § SIMT units automatically scheduled together § Explicit Synchronization P100 V100 New CUDA Thread Cooperative Groups https://devblogs.nvidia.com/cooperative-groups/
  50. 50. GPU CUDA PROGRAMMING § Barbaric, But Fun Barbaric § Must Know Hardware Very Well § Hardware Changes are Painful § Use the Profilers & Debuggers
  51. 51. CUDA STREAMS § Asynchronous I/O Transfer § Overlap Compute and I/O § Keep GPUs Saturated! § Used Heavily by TensorFlow
  52. 52. CUDA SHARED AND UNIFIED MEMORY
  53. 53. NUMBA AND PYCUDA § Numba is Drop-In Replacement for Numpy § PyCuda is Python Binding for CUDA
  54. 54. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Feed, Train, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  55. 55. TRAINING TERMINOLOGY § Tensors: N-Dimensional Arrays § ie. Scalar, Vector, Matrix § Operations: MatMul, Add, SummaryLog,… § Graph: Graph of Operations (DAG) § Session: Contains Graph(s) § Feeds: Feed Inputs into Placeholder § Fetches: Fetch Output from Operation § Variables: What We Learn Through Training § aka “Weights”, “Parameters” § Devices: Hardware Device (GPU, CPU, TPU, ...)
  (Diagram: User Feeds Inputs; TensorFlow Performs Operations, Flows Tensors, and Trains Variables; User Fetches Outputs)
  with tf.device("/cpu:0,/gpu:15"):
  56. 56. TENSORFLOW SESSION Session graph: GraphDef Variables: “W” : 0.328 “b” : -1.407 Variables are Randomly Initialized, then Periodically Checkpointed GraphDef is Created During Training, then Frozen for Inference
  57. 57. TENSORFLOW GRAPH EXECUTION § Lazy Execution by Default § Similar to Spark § Eager Execution § Similar to PyTorch § "Linearize” Execution Minimizes RAM § Useful on Single GPU with Limited RAM § May Need to Re-Compute (CPU/GPU) vs Store (RAM)
  58. 58. OPERATION PARALLELISM § Inter-Op (Between-Op) Parallelism § By default, TensorFlow runs multiple ops in parallel § Useful for low core and small memory/cache envs § Set to one (1) § Intra-Op (Within-Op) Parallelism § Different threads can use same set of data in RAM § Useful for compute-bound workloads (CNNs) § Set to # of cores (>=2)
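  These two knobs live on the Session config; a minimal sketch (TF 1.x), with illustrative values following the guidance above:
  import tensorflow as tf

  config = tf.ConfigProto(
      inter_op_parallelism_threads=1,   # run one op at a time between ops
      intra_op_parallelism_threads=8)   # let a single op (e.g. MatMul) use 8 cores
  sess = tf.Session(config=config)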
  59. 59. TENSORFLOW MODEL § MetaGraph § Combines GraphDef and Metadata § GraphDef § Architecture of your model (nodes, edges) § Metadata § Asset: Accompanying assets to your model § SignatureDef: Maps external to internal tensors § Variables § Stored separately during training (checkpoint) § Allows training to continue from any checkpoint § Variables are “frozen” into Constants when preparing for inference
  (Diagram: a MetaGraph containing a GraphDef (x, W, mul, add, b), Metadata (Assets, SignatureDef, Tags, Version), and Variables: “W”: 0.328, “b”: -1.407)
  60. 60. STOCHASTIC GRADIENT DESCENT (SGD) § Or “Simply Go Down” :) § Small Batch Sizes Are Ideal § But not too small! § Parallel, Distributed Training Across Devices § Each device calculates gradients on small batch § Gradients averaged across all devices § Training is Fast, Batches are Small
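  A sketch of the "gradients averaged across all devices" step (TF 1.x in-graph replication with a toy loss; the two GPUs are hypothetical, and allow_soft_placement falls back to CPU if they are absent):
  import tensorflow as tf

  opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
  w = tf.get_variable("w", initializer=tf.ones([10]))  # shared parameters
  tower_grads = []
  for i in range(2):  # one tower per device, each on its own mini-batch
      with tf.device("/gpu:%d" % i):
          loss = tf.reduce_sum(tf.square(w - float(i)))  # stand-in per-tower loss
          tower_grads.append(opt.compute_gradients(loss, var_list=[w]))
  # Average per-tower gradients variable-by-variable, then apply once
  avg_grads = []
  for grad_vars in zip(*tower_grads):
      g = tf.reduce_mean(tf.stack([g for g, _ in grad_vars]), axis=0)
      avg_grads.append((g, grad_vars[0][1]))
  train_op = opt.apply_gradients(avg_grads)
  with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
      sess.run(tf.global_variables_initializer())
      sess.run(train_op)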
  61. 61. EXTEND EXISTING DATA PIPELINES § Data Processing § HDFS/Hadoop § Spark § Containers § Docker § Schedulers § Kubernetes § Mesos
  <dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>tensorflow-hadoop</artifactId>
  </dependency>
  https://github.com/tensorflow/ecosystem
  62. 62. KUBERNETES AND SPARK 2.3 § Kubernetes-Native § Schedule Spark Workers
  # Submit Spark Job to Kubernetes Cluster
  bin/spark-submit \
    --master k8s://https://xx.yy.zz.ww \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=<spark-image> \
    --conf spark.kubernetes.driver.pod.name=spark-pi-driver \
    local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
  # View Kubernetes Resources
  kubectl get pods -l 'spark-role in (driver, executor)' -w
  # View Driver Logs in Real-Time
  kubectl logs -f spark-pi-driver
  http://blog.kubernetes.io/2018/03/apache-spark-23-with-native-kubernetes.html
  http://community.pipeline.ai
  63. 63. TENSORFLOW + SPARK OPTIONS § TensorFlow on Spark (Yahoo!) § TensorFrames <-Dead Project-> § Separate Clusters for Spark and TensorFlow § Spark: Boring Batch ETL § TensorFlow: Exciting AI Model Training and Serving § Hand-Off Point is S3, HDFS, Google Cloud Storage
  64. 64. TENSORFLOW + KAFKA § TensorFlow Dataset API Now Supports Kafka!!
  from tensorflow.contrib.kafka.python.ops import kafka_dataset_ops
  repeat_dataset = kafka_dataset_ops.KafkaDataset(topics, group="test", eof=True).repeat(num_epochs)
  batch_dataset = repeat_dataset.batch(batch_size)
  …
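  A sketch of consuming such a Dataset (TF 1.x), shown with a stand-in in-memory Dataset so the snippet runs without a Kafka broker; swapping in the batch_dataset above is the intended use:
  import tensorflow as tf

  dataset = tf.data.Dataset.from_tensor_slices([b"msg-1", b"msg-2"]).batch(2)
  iterator = dataset.make_one_shot_iterator()  # swap in batch_dataset for Kafka
  next_batch = iterator.get_next()
  with tf.Session() as sess:
      while True:
          try:
              print(sess.run(next_batch))  # one batch per iteration
          except tf.errors.OutOfRangeError:
              break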
  65. 65. TENSORFLOW I/O § TFRecord File Format § TensorFlow Python and C++ Dataset API § Python Module and Packaging § Comfort with Python’s Lack of Strong Typing § C++ Concurrency Constructs § Protocol Buffers § Old Queue API § GPU/CUDA Memory Tricks And a Lot of Coffee!
  66. 66. FEED TENSORFLOW TRAINING PIPELINE § Training is Limited by the Ingestion Pipeline § Number One Problem We See Today § Scaling GPUs Up / Out Doesn’t Help § GPUs are Heavily Under-Utilized § Use the tf.data API for best perf § Efficient parallel async I/O (C++)
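  A minimal sketch of such an ingestion pipeline (tf.data, TF 1.4+); the file paths and the parse function are hypothetical placeholders:
  import tensorflow as tf

  def parse_fn(serialized):
      # Parse one serialized tf.train.Example into tensors
      features = tf.parse_single_example(
          serialized,
          {"image_raw": tf.FixedLenFeature([], tf.string),
           "label": tf.FixedLenFeature([], tf.int64)})
      image = tf.decode_raw(features["image_raw"], tf.uint8)
      return image, features["label"]

  filenames = ["/data/train-0.tfrecord", "/data/train-1.tfrecord"]
  dataset = (tf.data.TFRecordDataset(filenames)
             .map(parse_fn, num_parallel_calls=8)  # parse on 8 CPU threads
             .shuffle(buffer_size=10000)
             .repeat()
             .batch(64)
             .prefetch(2))  # overlap ingestion with accelerator compute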
  67. 67. DON’T USE FEED_DICT!! § feed_dict Requires Python <-> C++ Serialization § Not Optimized for Production Ingestion Pipelines § Retrieves Next Batch After Current Batch is Done § Single-Threaded, Synchronous § CPUs/GPUs Not Fully Utilized! § Use Queue or Dataset APIs § Queues are old & complex
  sess.run(train_step, feed_dict={…})
  68. 68. DETECT UNDERUTILIZED CPUS, GPUS § Instrument Code to Generate “Timelines” § Analyze with Google Web Tracing Framework (WTF) § Monitor CPU with top, GPU with nvidia-smi http://google.github.io/tracing-framework/
  from tensorflow.python.client import timeline
  trace = timeline.Timeline(step_stats=run_metadata.step_stats)
  with open('timeline.json', 'w') as trace_file:
      trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
  69. 69. QUEUES § More than Traditional Queue § Uses CUDA Streams § Perform I/O, Pre-processing, Cropping, Shuffling, … § Pull from HDFS, S3, Google Storage, Kafka, ... § Combine Many Small Files into Large TFRecord Files § Use CPUs to Free GPUs for Compute § Helps Saturate CPUs and GPUs
  70. 70. QUEUE CAPACITY PLANNING § batch_size § # examples / batch (ie. 64 jpg) § Limited by GPU RAM § num_processing_threads § CPU threads pull and pre-process batches of data § Limited by CPU Cores § queue_capacity § Limited by CPU RAM (ie. 5 * batch_size)
  71. 71. TF.DTYPE § tf.float32, tf.int32, tf.string, etc § Default is usually tf.float32 § Most TF operations support numpy natively # Tuple of (tf.float32 scalar, tf.int32 array of 100 elements) (tf.random_uniform([1]), tf.random_uniform([1, 100], maxval=100, dtype=tf.int32))
  72. 72. TF.TRAIN.FEATURE § Three(3) Feature Types § Bytes § Float § Int64 § Actually, They Are Lists of 0..* Values of 3 Types Above § BytesList § FloatList § Int64List
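  For reference, a tiny sketch constructing one Feature of each list type (the values are arbitrary):
  import tensorflow as tf

  f_bytes = tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"mnist"]))
  f_float = tf.train.Feature(float_list=tf.train.FloatList(value=[0.025]))
  f_int64 = tf.train.Feature(int64_list=tf.train.Int64List(value=[64, 128]))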
  73. 73. TF.TRAIN.FEATURES § Map of {String -> Feature} § Better Name is “FeatureMap” § Organize Feature into Categories § Access Feature Using Features['feature_name']
  74. 74. TF.TRAIN.FEATURELIST § List of 0..* Feature § Access Feature Using FeatureList[0]
  75. 75. TF.TRAIN.FEATURELISTS § Map of {String -> FeatureList} § Better Name is “FeatureListMap” § Organize FeatureList into Categories § Access FeatureList Using FeatureLists['feature_list_name']
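  Features and FeatureLists come together in tf.train.SequenceExample; a small sketch (the "tokens" sequence and "length" context are hypothetical):
  import tensorflow as tf

  context = tf.train.Features(feature={
      "length": tf.train.Feature(int64_list=tf.train.Int64List(value=[3]))})
  tokens = tf.train.FeatureList(feature=[
      tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"a"])),
      tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"b"])),
      tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"c"]))])
  seq_example = tf.train.SequenceExample(
      context=context,
      feature_lists=tf.train.FeatureLists(feature_list={"tokens": tokens}))
  print(seq_example.SerializeToString())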
  76. 76. TF.TRAIN.EXAMPLE § Key-Value Dictionary § String -> tf.train.Feature § Not a Self-Describing Format (?!) § Must Establish Schema Upfront by Writers and Readers § Must Obey the Following Conventions § Feature K must be of Type T in all Examples § Feature K can be omitted, default can be configured § If Feature K exists as empty, no default is applied
  77. 77. TF.TFRECORD § Contains many tf.train.Example’s => tf.train.Example contains many tf.train.Feature’s => tf.train.Feature contains BytesList, FloatList, Int64List § Record-Oriented Format of Binary Strings (ProtoBuffer) § Must Convert tf.train.Example to Serialized String § Use tf.train.Example.SerializeToString() § Used for Large Scale ML/AI Training § Not Meant for Random or Non-Sequential Access § Compression: GZIP, ZLIB
  Record layout:
  uint64 length
  uint32 masked_crc32_of_length
  byte   data[length]
  uint32 masked_crc32_of_data
  78. 78. EMBRACE BINARY FORMATS! § Unreadable and Scary, But Much More Efficient § Better Use of Memory and Disk Cache § Faster Copying and Moving § Smaller on the Wire
  79. 79. CONVERTING MNIST DATA TO TFRECORD (tf.python_io.TFRecordWriter)
  def convert_to_tfrecord(data, name):
      images = data.images
      labels = data.labels
      num_examples = data.num_examples
      rows = images.shape[1]
      cols = images.shape[2]
      depth = images.shape[3]
      filename = os.path.join(FLAGS.directory, name + '.tfrecords')
      with tf.python_io.TFRecordWriter(filename) as writer:
          for index in range(num_examples):
              image_raw = images[index].tostring()
              example = tf.train.Example(
                  features=tf.train.Features(
                      feature={'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[rows])),
                               'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[cols])),
                               'depth': tf.train.Feature(int64_list=tf.train.Int64List(value=[depth])),
                               'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[int(labels[index])])),
                               'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_raw]))}))
              writer.write(example.SerializeToString())
  80. 80. READING TF.TFRECORD’S § tf.data.TFRecordDataset <= Preferred (Dataset API) § tf.TFRecordReader() <= Not Preferred (Queue API) § tf.python_io.tf_record_iterator <= Preferred § Used as Python Generator
  for serialized_example in tf.python_io.tf_record_iterator(filename):
      example = tf.train.Example()
      example.ParseFromString(serialized_example)
      image_raw = example.features.feature['image_raw'].bytes_list.value
      height = example.features.feature['height'].int64_list.value[0]
      …
  81. 81. DE-SERIALIZING TF.TFRECORD’S
  feature_map = {'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[rows])),
                 'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[cols])),
                 'depth': tf.train.Feature(int64_list=tf.train.Int64List(value=[depth])),
                 'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[index])),
                 'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_raw]))}
  deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)
  # Cast height from String to int32
  height = tf.cast(deserialized_features['height'], tf.int32)
  …
  # Convert raw image from string to float32
  image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
  82. 82. MORE TF.TRAIN.FEATURE CONSTRUCTS § tf.VarLenFeature § tf.FixedLenFeature, tf.FixedLenSequenceFeature § tf.SparseFeature
  feature_map = {'height': tf.FixedLenFeature((), tf.int32, …),
                 …
                 'image_raw': tf.VarLenFeature(tf.string, …)}
  deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)
  # Cast height from String to int32
  height = tf.cast(deserialized_features['height'], tf.int32)
  …
  # Convert raw image from string to float32
  image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
  83. 83. TF.DATA.DATASET
  tf.Tensor => tf.data.Dataset (Functional Transformations):
  Dataset.from_tensors((features, labels))
  Dataset.from_tensor_slices((features, labels))
  TextLineDataset(filenames)
  dataset.map(lambda x: tf.decode_jpeg(x))
  dataset.repeat(NUM_EPOCHS)
  dataset.batch(BATCH_SIZE)
  Python Generator => tf.data.Dataset:
  def generator():
      while True:
          yield ...
  dataset.from_generator(generator, tf.int32)
  Dataset => One-Shot Iterator:
  iter = dataset.make_one_shot_iterator()
  next_element = iter.get_next()
  while …: sess.run(next_element)
  Dataset => Initializable Iterator:
  iter = dataset.make_initializable_iterator()
  sess.run(iter.initializer, feed_dict=PARAMS)
  next_element = iter.get_next()
  while …: sess.run(next_element)
  TIP: Use Dataset.prefetch() and the parallel version of Dataset.map()
  84. 84. MORE TF.DATA.DATASET CONSTRUCTS § FixedLengthRecordDataset § Binary Files § TextLineDataset § CSV, JSON, XML, etc § TFRecordDataset § TFRecords § Iterator “The TF Dataset Dude” Tutorial: https://t.co/havjwJ46EY
  85. 85. DATASET API TRANSFORMATIONS Standard Custom (Contrib)
  86. 86. CUSTOM TF.PY_FUNC() TRANSFORMATION § Custom Python Function § Similar to Spark Python UDF (Eek!) § You Will Suffer a Big Performance Penalty § Try to Use TensorFlow-Native Operations § Remember, you can build your own in C++!
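  A small sketch of the penalty case (slow_upper is a hypothetical Python UDF), with the preferred native alternative noted in comments:
  import tensorflow as tf

  def slow_upper(s):
      # Runs in the Python interpreter for every element: slow
      return s.decode("utf-8").upper().encode("utf-8")

  dataset = tf.data.Dataset.from_tensor_slices([b"cat", b"dog"])
  dataset = dataset.map(
      lambda s: tf.py_func(slow_upper, [s], tf.string))  # Python UDF penalty
  # Prefer TensorFlow-native ops where one exists (string ops, tf.decode_raw,
  # tf.image.*), which stay inside the C++ runtime.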
  87. 87. TF.DATA.ITERATOR TYPES § One Shot: Iterates Once Through the Dataset § Currently, best Iterator to use with Estimator API § Initializable: Runs iterator.initializer() Once § Re-Initializable: Runs iterator.initializer() Many § Ie. Random shuffling between iterations (epochs) of training § Feedable: Switch Between Different Dataset § Uses Feed and Placeholder to explicitly feed the iterator § Doesn’t require initialization when switching
  88. 88. TF.DATA.ITERATOR SIMPLE EXAMPLE
  dataset = tf.data.Dataset.range(5)
  iterator = dataset.make_initializable_iterator()
  next_element = iterator.get_next()
  # Typically `result` will be the output of a model, or an optimizer's
  # training operation.
  result = tf.add(next_element, next_element)
  sess.run(iterator.initializer)
  while True:
      try:
          sess.run(result)  # => 0, 2, 4, 6, 8
      except tf.errors.OutOfRangeError:
          print('End of dataset…')
          break
  89. 89. TF.DATA.ITERATOR TEXT EXAMPLE
  filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
  dataset = tf.data.TextLineDataset(filenames)
  § Skip 1st Header Line and Comment Lines Starting with `#`:
  filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
  dataset = tf.data.Dataset.from_tensor_slices(filenames)
  dataset = dataset.flat_map(
      lambda filename: (
          tf.data.TextLineDataset(filename)
          .skip(1)
          .filter(lambda line: tf.not_equal(tf.substr(line, 0, 1), "#"))))
  90. 90. TF.DATA.ITERATOR NUMPY EXAMPLE
  # Load the training data into two NumPy arrays, for example using `np.load()`.
  with np.load("/var/data/training_data.npy") as data:
      features = data["features"]
      labels = data["labels"]
  # Assume that each row of `features` corresponds to the same row as `labels`.
  assert features.shape[0] == labels.shape[0]
  features_placeholder = tf.placeholder(features.dtype, features.shape)
  labels_placeholder = tf.placeholder(labels.dtype, labels.shape)
  dataset = tf.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder))
  # …Your Dataset Transformations…
  iterator = dataset.make_initializable_iterator()
  sess.run(iterator.initializer, feed_dict={features_placeholder: features,
                                            labels_placeholder: labels})
  91. 91. TF.DATA.ITERATOR TFRECORD EXAMPLE
  filenames = tf.placeholder(tf.string, shape=[None])
  dataset = tf.data.TFRecordDataset(filenames)
  dataset = dataset.map(...)  # Parse the record into tensors.
  dataset = dataset.repeat()  # Repeat the input indefinitely.
  dataset = dataset.batch(32)  # Batches of size 32
  iterator = dataset.make_initializable_iterator()
  # You can feed the initializer with the appropriate filenames for the current
  # phase of execution, e.g. training vs. validation.
  # Initialize `iterator` with training data.
  training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
  sess.run(iterator.initializer, feed_dict={filenames: training_filenames})
  # Initialize `iterator` with validation data.
  validation_filenames = ["/var/data/validation1.tfrecord", ...]
  sess.run(iterator.initializer, feed_dict={filenames: validation_filenames})
  92. 92. FUTURE OF DATASET API § Replaces Queue API § More Functional Operators § Automatic GPU Data Staging and Pre-Fetching § Under-utilized GPUs Assisting with Data Ingestion § More Profiling and Recommendations for Ingestion
  93. 93. TF.ESTIMATOR.ESTIMATOR (1/2) § Supports Keras! § Unified API for Local + Distributed § Provide Clear Path to Production § Enable Rapid Model Experiments § Provide Flexible Parameter Tuning § Enable Downstream Optimizing & Serving Infra(structure) § Nudge Users to Best Practices Through Opinions § Provide Hooks/Callbacks to Override Opinions
  94. 94. TF.ESTIMATOR.ESTIMATOR (2/2) § “Train-to-Serve” Design § Create Custom Estimator or Re-Use Canned Estimator § Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict) § Hooks for All Phases of Model Training and Evaluation § Load Input: input_fn() § Train: model_fn() and train() § Evaluate: eval_fn() and evaluate() § Performance Metrics: Loss, Accuracy, … § Save and Export: export_savedmodel() § Predict: predict() (uses the slow sess.run()) https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/
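A minimal custom Estimator sketch, assuming a single dense layer as a stand-in for a real model_fn; the model directory and learning rate are illustrative:

import tensorflow as tf

def model_fn(features, labels, mode, params):
    logits = tf.layers.dense(features['x'], units=10)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode, predictions=tf.argmax(logits, axis=1))
    loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)
    train_op = tf.train.GradientDescentOptimizer(params['learning_rate']).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir='/tmp/my_model',           # checkpoints + export artifacts land here
    params={'learning_rate': 0.01})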
  95. 95. TF.CONTRIB.LEARN.EXPERIMENT § Easier-to-Use Distributed TensorFlow § Same API for Local and Distributed § Combines Estimator with input_fn() § Used for Training, Evaluation, & Hyper-Parameter Tuning § Distributed Training Defaults to Data-Parallel & Async § Cluster Configuration is Fixed at Start of Training Job § No Auto-Scaling Allowed, but That’s OK for Training Note: The Experiment API Will Likely Be Deprecated Soon
  96. 96. ESTIMATOR + EXPERIMENT CONFIGS § TF_CONFIG § Special environment variable for config § Defines ClusterSpec in JSON incl. master, workers, PS's § Distributed mode: '{"environment":"cloud"}' § Local: '{"environment":"local", "task":{"type":"worker"}}' § RunConfig: Defines checkpoint interval, output directory, … § HParams: Hyper-parameter tuning parameters and ranges § learn_runner creates RunConfig before calling run() & tune() § schedule is set based on {"task":{"type":…}}
TF_CONFIG='{
  "environment": "cloud",
  "cluster": {
    "master": ["worker0:2222"],
    "worker": ["worker1:2222"],
    "ps": ["ps0:2222"]
  },
  "task": {"type": "ps", "index": "0"}
}'
  97. 97. ESTIMATOR + KERAS § Distributed TensorFlow (Estimator) + Easy to Use (Keras) § tf.keras.estimator.model_to_estimator()
# Instantiate a Keras inception v3 model.
keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None)
# Compile model with the optimizer, loss, and metrics you'd like to train with.
keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9),
                           loss='categorical_crossentropy',
                           metrics=['accuracy'])
# Create an Estimator from the compiled Keras model.
est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3)
# Treat the derived Estimator as you would any other Estimator. For example,
# the following derived Estimator calls the train method:
est_inception_v3.train(input_fn=my_training_set, steps=2000)
  98. 98. “CANNED” ESTIMATORS § Commonly-Used Estimators § Pre-Tested and Pre-Tuned § DNNClassifier, TensorForestEstimator § Always Use Canned Estimators If Possible § Reduce Lines of Code, Complexity, and Bugs § Use FeatureColumn to Define & Create Features Custom vs. Canned @ Google, August 2017
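One plausible canned-Estimator setup, assuming census-style features like those in the FeatureColumn examples later; the directory and layer sizes are illustrative:

import tensorflow as tf

age = tf.feature_column.numeric_column('age')
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    'occupation', hash_bucket_size=1000)

# DNNClassifier expects dense columns, so the sparse occupation
# column is wrapped in an embedding.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[age,
                     tf.feature_column.embedding_column(occupation, dimension=8)],
    hidden_units=[128, 64],
    n_classes=2,
    model_dir='/tmp/census_model')   # hypothetical directory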
  99. 99. ESTIMATOR + DATASET API
def input_fn():
  def generator():
    while True:
      yield ...
  my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)
  # A one-shot iterator automatically initializes itself on first use.
  iter = my_dataset.make_one_shot_iterator()
  # The return value of get_next() matches the dataset element type.
  images, labels = iter.get_next()
  return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)
estimator.train(input_fn=input_fn, …)
  100. 100. OPTIMIZER + ESTIMATOR API + TPU’S
run_config = tf.contrib.tpu.RunConfig(
    tpu_config=tf.contrib.tpu.TPUConfig(FLAGS.iterations, FLAGS.num_shards))
estimator = tf.contrib.tpu.TPUEstimator(model_fn=model_fn, config=run_config)
estimator.train(input_fn=input_fn, max_steps=…)

optimizer = tf.contrib.tpu.CrossShardOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=…))
train_op = optimizer.minimize(loss)
estimator_spec = tf.estimator.EstimatorSpec(mode=mode, train_op=train_op, loss=…)
https://www.tensorflow.org/programmers_guide/using_tpu
  101. 101. DATASET API TIMELINES (TENSORBOARD) § Use Dataset.prefetch()!! § Helps prevent bottlenecks in I/O pipeline
  102. 102. TPU COMPATIBILITY (TENSORBOARD>=1.6)
  103. 103. TPU PROFILING
pip install cloud-tpu-profiler==1.5.1
capture_tpu_profile --tpu_name=$TPU_NAME --logdir=$MODEL_DIR
tensorboard --logdir=$MODEL_DIR
https://cloud.google.com/tpu/docs/cloud-tpu-tools
  104. 104. TPU TIMELINE (TENSORBOARD)
  105. 105. INPUT PIPELINE ANALYSIS § Determine if Pipeline is Input-Bound
  106. 106. TF.CONTRIB.LEARN.HEAD (OBJECTIVES) § Single-Objective Estimator § Single classification prediction § Multi-Objective Estimator § One (1) classification prediction § One (1) final layer to feed into next model § Multiple Heads Used to Ensemble Models § Treats neural network as a feature engineering step § Supported by TensorFlow Serving
  107. 107. TF.LAYERS § Standalone Layer or Entire Sub-Graphs § Functions of Tensor Inputs & Outputs § Mix and Match with Operations § Assumes 1st Dimension is Batch Size § Handles One (1) to Many (*) Inputs § Metrics are Layers § Loss Metric (Per Mini-Batch) § Accuracy and MSE (Across Mini-Batches)
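A short sketch of composing tf.layers into a sub-graph; the MNIST-style 28x28x1 input shape is just an assumption:

import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 1])  # 1st dim is the batch size

conv = tf.layers.conv2d(images, filters=32, kernel_size=3, activation=tf.nn.relu)
pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2)
flat = tf.layers.flatten(pool)
logits = tf.layers.dense(flat, units=10)   # mixes freely with raw TF operations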
  108. 108. TF.FEATURE_COLUMN § Used by Canned Estimator § Declaratively Specify Training Inputs § Converts Sparse to Dense Tensors § Sparse Features: Query Keyword, ProductID § Dense Features: One-Hot, Multi-Hot § Wide/Linear: Use Feature-Crossing § Deep: Use Embeddings
  109. 109. TF.FEATURE_COLUMN EXAMPLE § Continuous + One-Hot + Embedding
deep_columns = [
    age, education_num, capital_gain, capital_loss, hours_per_week,
    tf.feature_column.indicator_column(workclass),
    tf.feature_column.indicator_column(education),
    tf.feature_column.indicator_column(marital_status),
    tf.feature_column.indicator_column(relationship),
    # To show an example of embedding
    tf.feature_column.embedding_column(occupation, dimension=8),
]
  110. 110. FEATURE CROSSING § Create New Features by Combining Existing Features § Limitation: Combinations Must Exist in Training Dataset
base_columns = [
    education, marital_status, relationship, workclass, occupation, age_buckets
]
crossed_columns = [
    tf.feature_column.crossed_column(
        ['education', 'occupation'], hash_bucket_size=1000),
    tf.feature_column.crossed_column(
        ['age_buckets', 'education', 'occupation'], hash_bucket_size=1000)
]
  111. 111. SEPARATE TRAINING + EVALUATION § Separate Training and Evaluation Clusters § Evaluate Upon Checkpoint § Avoid Resource Contention § Training Continues in Parallel with Evaluation Training Cluster Evaluation Cluster Parameter Server Cluster
  112. 112. BATCH (RE-)NORMALIZATION (2015, 2017) § Each Mini-Batch May Have Wildly Different Distributions § Normalize per Batch (and Layer) § Faster Training, Learns Quicker § Final Model is More Accurate § TensorFlow is already on 2nd Generation Batch Algorithm § First-Class Support for Fusing Batch Norm Layers § Final mean + variance Are Folded Into Graph Later -- (Almost) Always Use Batch (Re-)Normalization! --
z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth/channels]))
beta = tf.Variable(tf.zeros([depth/channels]))
bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
  113. 113. DROPOUT (2014) § Training Technique § Prevents Overfitting § Helps Avoid Local Minima § Inherent Ensembling Technique § Creates and Combines Different Neural Architectures § Expressed as Probability Percentage (ie. 50%) § Boost Other Weights During Validation & Prediction Perform Dropout (Training Phase) Boost for Dropout (Validation & Prediction Phase) 0% Dropout 50% Dropout
  114. 114. BATCH NORM, DROPOUT + ESTIMATOR API § Must Specify Evaluation or Training Mode § These Will Behave Differently Depending on Mode
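A sketch of wiring the mode through, assuming a toy dense network; note batch norm also needs its update ops tied to the train op (the PREDICT branch is elided here):

import tensorflow as tf

def model_fn(features, labels, mode):
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)
    net = tf.layers.dense(features['x'], 256, activation=tf.nn.relu)
    # Both layers behave differently in training vs. eval/predict.
    net = tf.layers.batch_normalization(net, training=is_training)
    net = tf.layers.dropout(net, rate=0.5, training=is_training)  # 50% dropout
    logits = tf.layers.dense(net, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)
    # Moving mean/variance updates must run alongside the train op.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)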
  115. 115. SAVED MODEL FORMAT § Different Format than Traditional Exporter § Contains Checkpoints, 1..* MetaGraphs, and Assets § Export Manually with SavedModelBuilder (see the sketch below) § Estimator.export_savedmodel() § Hooks to Generate SignatureDef § Use saved_model_cli to Verify § Used by TensorFlow Serving § New Standard Export Format? (Catching on Slowly…)
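A manual-export sketch using SavedModelBuilder, assuming a session `sess` that already holds trained tensors x_observed and y_pred (the same names as the SignatureDef example later); the version directory is illustrative:

import tensorflow as tf

builder = tf.saved_model.builder.SavedModelBuilder('/tmp/mnist/1')  # version dir
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'inputs': x_observed}, outputs={'outputs': y_pred})
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict': signature})
builder.save()

# Verify the export from the shell:
#   saved_model_cli show --dir /tmp/mnist/1 --all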
  116. 116. TENSORFLOW DEBUGGER § Step through Operations § Inspect Inputs and Outputs § Wrap Session in Debug Session
sess = tf.Session(config=config)
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
https://www.tensorflow.org/programmers_guide/debugger
  117. 117. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  118. 118. SINGLE NODE, MULTI-GPU TRAINING § cpu:0 § By default, all CPUs § Requires extra config to target a specific CPU § gpu:0..n § Each GPU has a unique id § TF usually prefers a single GPU § xla_cpu:0, xla_gpu:0..n § “JIT Compiler Device” § Hints TensorFlow to attempt JIT Compile
with tf.device("/cpu:0"): …
with tf.device("/gpu:0"): …
with tf.device("/gpu:1"): …
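A data-parallel sketch of the device scoping above, assuming two GPUs and a throw-away dense model; variables are shared across towers via variable_scope reuse:

import tensorflow as tf

with tf.device('/cpu:0'):
    x = tf.random_normal([128, 1024])    # input pipeline stays on the CPU

losses = []
for gpu_id in range(2):                  # one model replica ("tower") per GPU
    with tf.device('/gpu:%d' % gpu_id):
        with tf.variable_scope('model', reuse=(gpu_id > 0)):
            logits = tf.layers.dense(x, 10)
            losses.append(tf.reduce_mean(logits))  # stand-in for a real loss

total_loss = tf.add_n(losses) / 2.0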
  119. 119. DISTRIBUTED, MULTI-NODE TRAINING § TensorFlow Automatically Inserts Send and Receive Ops into Graph § Parameter Server Synchronously Aggregates Updates to Variables § Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS (Diagram: a single node with multiple GPUs vs. multiple worker nodes, each with one or more GPUs)
  120. 120. DATA PARALLEL VS. MODEL PARALLEL § Data Parallel (“Between-Graph Replication”) § Send exact same model to each device § Each device operates on partition of data § ie. Spark sends same function to many workers § Each worker operates on their partition of data § Model Parallel (“In-Graph Replication”) § Send different partition of model to each device § Each device operates on all data § Difficult, but required for larger models with lower-memory GPUs
  121. 121. SYNCHRONOUS VS. ASYNCHRONOUS § Synchronous § Nodes compute gradients § Nodes update Parameter Server (PS) § Nodes sync on PS for latest gradients § Asynchronous § Some nodes delay in computing gradients § Nodes don’t update PS § Nodes get stale gradients from PS § May not converge due to stale reads!
  122. 122. CHIEF WORKER § Chief Defaults to Worker Task 0 § Task 0 is guaranteed to exist § Performs Maintenance Tasks § Writes log summaries § Instructs PS to checkpoint vars § Performs PS health checks § (Re-)Initialize variables at (re-)start of training
  123. 123. NODE AND PROCESS FAILURES § Checkpoint to Persistent Storage (HDFS, S3) § Use MonitoredTrainingSession and Hooks § Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos) § Understand Failure Modes and Recovery States Stateless, Not Bad: Training Continues Stateful, Bad: Training Must Stop Dios Mio! Long Night Ahead…
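A recovery-friendly training-loop sketch; `server`, `task_index`, and `train_op` are assumed to come from your tf.train.Server and graph setup, and the checkpoint path is hypothetical:

import tensorflow as tf

hooks = [tf.train.StopAtStepHook(last_step=100000)]

with tf.train.MonitoredTrainingSession(
        master=server.target,
        is_chief=(task_index == 0),                    # chief owns checkpoints
        checkpoint_dir='hdfs://namenode/ckpts/mnist',  # durable storage (or s3://)
        hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)   # resumes from the last checkpoint after a failure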
  124. 124. ADVANCED DEVICE PLACEMENT STRATEGIES § Reinforcement Learning Adapts to Real-Time Conditions § Manual Device Placement is Static § TensorFlow Grappler Project
  125. 125. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  126. 126. XLA FRAMEWORK § XLA: “Accelerated Linear Algebra” § Reduce Reliance on Custom Operators § Intermediate Representation used by Hardware Vendors § Improve Portability § Increase Execution Speed § Decrease Memory Usage § Decrease Mobile Footprint Helps TensorFlow Be Flexible AND Performant!!
  127. 127. XLA HIGH LEVEL OPTIMIZER (HLO) § HLO: “High Level Optimizer” § Compiler Intermediate Representation (IR) § Independent of source and target language § XLA Step 1 Emits Target-Independent HLO § XLA Step 2 Emits Target-Dependent LLVM § LLVM Emits Native Code Specific to Target § Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
  128. 128. XLA IS DESIGNED FOR RE-USE § Pluggable Backends § HLO “Toolkit” § Call BLAS or cuDNN § Use LLVM or BYO Low-Level-Optimizer
  129. 129. MINIMAL XLA BACKEND § HLO / LLVM Pipeline § StreamExecutor Plugin
  130. 130. XLA CPU BACKEND
  131. 131. XLA GPU / NVIDIA PTX BACKEND
  132. 132. XLA GPU / OPENCL BACKEND
  133. 133. CPU HLO PIPELINE
  134. 134. GPU HLO PIPELINE
  135. 135. XLA PERFORMANCE OPTIMIZATIONS § JIT Training § MNIST: 30% Speed Up § Inception: 20% Speed Up § Basic LSTM: 80% Speed Up § Translation Model BNMT: 20% Speed Up § AOT Inference (Next Section) § LSTM Model Size: 1 MB => 10 KB
  136. 136. JIT COMPILER § JIT: “Just-In-Time” Compiler § Built on XLA Framework § Reduce Memory Movement – Especially with GPUs § Reduce Overhead of Multiple Function Calls § Similar to Spark Operator Fusing in Spark 2.0 § Unroll Loops, Fuse Operators, Fold Constants, … § Scopes: session, device, with jit_scope():
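Both JIT scopes in one sketch; `x`, `w`, and `b` are assumed to be defined elsewhere:

import tensorflow as tf

# Session-level: ask XLA to JIT-compile eligible ops across the graph.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# Scope-level: hint compilation for a specific subgraph only.
jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
with jit_scope():
    y = tf.nn.relu(tf.matmul(x, w) + b)   # candidate for operator fusion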
  137. 137. TO JIT OR NOT TO JIT
  138. 138. VISUALIZING JIT COMPILER IN ACTION (Before JIT / After JIT)
run_options = tf.RunOptions(trace_level=tf.RunOptions.SOFTWARE_TRACE)
run_metadata = tf.RunMetadata()
sess.run(..., options=run_options, run_metadata=run_metadata)

from tensorflow.python.client import timeline
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
Google Web Tracing Framework: http://google.github.io/tracing-framework/
  139. 139. VISUALIZING FUSING OPERATORS
pip install graphviz
dot -Tpng /tmp/hlo_graph_1.w5LcGs.dot -o hlo_graph_1.png
hlo_*.dot files are generated by XLA
GraphViz: http://www.graphviz.org
  140. 140. XLA COMPILATION SUMMARY § Generates Code and Libraries for Your Computation § Packages Only the Libraries Needed by Your Computation § Eliminates Dispatch Overhead of Operations § Fuses Operations to Avoid Memory Round Trip § Analyzes Buffers to Reuse Memory § Updates Memory In-Place § Unrolls Loops with Your Data Dimensions (ie. Batch Size) § Vectorizes Operations Specific to Your Data Dimensions
  141. 141. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  142. 142. WE ARE NOW… …OPTIMIZING Models AFTER Model Training TO IMPROVE Model Serving PERFORMANCE!
  143. 143. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  144. 144. AOT COMPILER § Standalone, Ahead-Of-Time (AOT) Compiler § Built on XLA framework § tfcompile § Creates executable with minimal TensorFlow Runtime needed § Includes only dependencies needed by subgraph computation § Creates functions with feeds (inputs) and fetches (outputs) § Packaged as cc_library header and object files to link into your app § Commonly used for mobile device inference graph § Currently, only CPU x86-64 and ARM are supported - no GPU
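A sketch of the tfcompile flow along the lines of the TensorFlow AOT tutorial; the node names match the transform_graph example later, and everything else is a placeholder:

# graph.config.pbtxt: declare the subgraph's feeds and fetches
feed {
  id { node_name: "x_observed" }
  shape { dim { size: 1 } }
}
fetch {
  id { node_name: "Add" }
}

# BUILD: the tf_library macro wraps tfcompile
load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")
tf_library(
    name = "my_graph_aot",
    graph = "my_graph.pb",
    config = "graph.config.pbtxt",
    cpp_class = "mynamespace::MyComputation",  # generated cc_library class
)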
  145. 145. GRAPH TRANSFORM TOOL (GTT) § Post-Training Optimization to Prepare for Inference § Remove Training-only Ops (checkpoint, drop out, logs) § Remove Unreachable Nodes between Given feed -> fetch § Fuse Adjacent Operators to Improve Memory Bandwidth § Fold Final Batch Norm mean and variance into Variables § Round Weights/Variables to improve compression (ie. 70%) § Quantize (FP32 -> INT8) to Speed Up Math Operations
  146. 146. AFTER TRAINING, BEFORE OPTIMIZATION -TensorFlow- Trains Variables -User- Fetches Outputs -User- Feeds Inputs -TensorFlow- Performs Operations -TensorFlow- Flows Tensors ?!
  147. 147. POST-TRAINING GRAPH TRANSFORMS
transform_graph
  --in_graph=unoptimized_cpu_graph.pb      <-- Original Graph
  --out_graph=optimized_cpu_graph.pb       <-- Transformed Graph
  --inputs='x_observed:0'                  <-- Feed (Input)
  --outputs='Add:0'                        <-- Fetch (Output)
  --transforms='                           <-- List of Transforms
    strip_unused_nodes
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms
    quantize_weights
    quantize_nodes'
  148. 148. AFTER STRIPPING UNUSED NODES § Optimizations § strip_unused_nodes § Results § Graph much simpler § File size much smaller
  149. 149. AFTER REMOVING UNUSED NODES § Optimizations § strip_unused_nodes § remove_nodes § Results § Pesky nodes removed § File size a bit smaller
  150. 150. AFTER FOLDING CONSTANTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § Results § Placeholders (feeds) -> Variables* (*Why Variables and not Constants?)
  151. 151. AFTER FOLDING BATCH NORMS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § Results § Graph remains the same § File size approximately the same
  152. 152. AFTER QUANTIZING WEIGHTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § Results § Graph is same, file size is smaller, compute is faster
  153. 153. WEIGHT (VARIABLE) QUANTIZATION § FP16 or INT8: Smaller & Computationally Faster than FP32 § Easy to “Linearly Quantize” (Re-Encode) FP32 -> INT8 Easy Breezy!
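A back-of-the-envelope sketch of symmetric linear quantization with a max-abs scale (the simplest re-encoding; real toolchains pick scales per layer or per channel):

import numpy as np

def quantize_int8(w_fp32):
    scale = np.abs(w_fp32).max() / 127.0    # map [-max, max] -> [-127, 127]
    w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)
    return w_int8, scale

def dequantize(w_int8, scale):
    return w_int8.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)
w_q, s = quantize_int8(w)
print(np.abs(w - dequantize(w_q, s)).max())  # small re-encoding error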
  154. 154. BENEFITS OF 32-BIT TO 8-BIT QUANTIZE § First Class Hardware and CUDA Support § One 32-Bit GPU Core: 4-Way Dot Product of 8-Bit Ints § GPU Compute Capability (CC) >= 6.1 Only
  155. 155. ACTIVATION QUANTIZATION § Activations Not Known Ahead of Time § Depends on input, not easy to quantize § Requires Additional Calibration Step § Use representative, diverse validation dataset § ~1000 samples, ~10 minutes, cheap hardware § Run 32-Bit Inference with Calibration Data § Collect histogram of activation values at each layer § Generate many quantized distributions at diff saturation thresholds § Choose Saturation Threshold That Minimizes Accuracy Loss
  156. 156. CHOOSING SATURATION THRESHOLD § Trade-off Between Range & Precision § INT8 Should Encode Same Information As Original FP32 § Minimize Loss of Information Across Encoding/Distributions § Use KL_Divergence(32bit_dist, 8bit_dist) § Compares 2 distributions § Similar to Cross-Entropy
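A sketch of the distribution comparison, assuming the FP32 and candidate INT8 activation histograms have already been collected (the threshold sweep itself is elided):

import numpy as np

def kl_divergence(p_hist, q_hist, eps=1e-10):
    p = p_hist / p_hist.sum()   # normalize histograms to probability distributions
    q = q_hist / q_hist.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))

# Sweep candidate saturation thresholds and keep the one whose quantized
# distribution minimizes kl_divergence(fp32_hist, int8_hist).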
  157. 157. SATURATE TO MINIMIZE ACCURACY LOSS § Helps Preserve Accuracy After Activation Quantization § Goal: Find Threshold (T) That Minimizes Accuracy Loss No Saturation Saturation
  158. 158. AUTO-CALIBRATE: PIPELINEAI + TENSOR-RT Pre-Requisites § 32-Bit Trained Model (TensorFlow, Caffe) § Small Calibration Dataset (Validation) PipelineAI + TensorRT Optimizations § Run 32-Bit Inference on Calibration Dataset § Collect Required Statistics § Use KL_Divergence to Determine Saturation Thresholds § Perform 32-Bit Float -> 8-Bit Int Quantization § Generate Calibration Table and INT8 Execution Engine
  159. 159. 32-BIT TO 8-BIT QUANTIZATION RESULTS Accuracy of INT8 Models Comparable to FP32
  160. 160. AFTER ACTIVATION QUANTIZATION § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes (activations) § Results § Larger graph, needs calibration! § Requires the additional freeze_requantization_ranges transform
  161. 161. TF.CONTRIB.QUANTIZE() § “Fake Quantization Ops”
  162. 162. FREEZING MODEL FOR DEPLOYMENT § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes § freeze_graph § Results § Variables -> Constants Finally! We’re Ready to Deploy!!
  163. 163. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  164. 164. MODEL SERVING TERMINOLOGY § Inference § Only Forward Propagation through Network § Predict, Classify, Regress, … § Bundle § GraphDef, Variables, Metadata, … § Assets § ie. Map of ClassificationID -> String § {9283: “penguin”, 9284: “bridge”} § Version § Every Model Has a Version Number (Integer) § Version Policy § ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
  165. 165. TENSORFLOW SERVING FEATURES § Supports Auto-Scaling § Custom Loaders beyond File-based § Tune for Low-latency or High-throughput § Serve Diff Models/Versions in Same Process § Customize Models Types beyond HashMap and TensorFlow § Customize Version Policies for A/B and Bandit Tests § Support Request Draining for Graceful Model Updates § Enable Request Batching for Diff Use Cases and HW § Supports Optimized Transport with GRPC and Protocol Buffers
  166. 166. GRPC : PROTOBUFFERS :: HTTP : JSON
  167. 167. PREDICTION SERVICE § Predict (Original, Generic) § Input: List of Tensor § Output: List of Tensor § Classify § Input: List of tf.Example (key, value) pairs § Output: List of (class_label: String, score: float) § Regress § Input: List of tf.Example (key, value) pairs § Output: List of (label: String, score: float)
  168. 168. PREDICTION INPUTS + OUTPUTS § SignatureDef § Defines inputs and outputs § Maps external (logical) to internal (physical) tensor names § Allows internal (physical) tensor names to change
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
graph = tf.get_default_graph()
x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')
inputs_map = {'inputs': x_observed}
outputs_map = {'outputs': y_pred}
predict_signature = signature_def_utils.predict_signature_def(inputs=inputs_map,
                                                              outputs=outputs_map)
  169. 169. MULTI-HEADED INFERENCE § Inputs Pass Through Model One Time § Model Returns Multiple Predictions: 1. Human-readable prediction (ie. “penguin”, “church”,…) 2. Final layer of scores (float vector) § Final Layer of floats Pass to the Next Model in Ensemble § Optimizes Bandwidth, CPU/GPU, Latency, Memory § Enables Complex Model Composing and Ensembling
  170. 170. BUILD YOUR OWN MODEL SERVER § Adapt GRPC (Google) <-> HTTP (REST of the World) § Perform Batch Inference vs. Request/Response § Handle Requests Asynchronously § Support Mobile, Embedded Inference § Customize Request Batching § Add Circuit Breakers, Fallbacks § Control Latency Requirements § Reduce Number of Moving Parts
#include "tensorflow_serving/model_servers/server_core.h"

int main() {
  ServerCore::Options options;
  // set options (model name, path, etc.)
  std::unique_ptr<ServerCore> core;
  TF_CHECK_OK(ServerCore::Create(std::move(options), &core));
}

Compile and Link with libtensorflow.so
  171. 171. RUNTIME OPTION: NVIDIA TENSOR-RT § Post-Training Model Optimizations § Specific to Nvidia GPU § Similar to TF Graph Transform Tool § GPU-Optimized Prediction Runtime § Alternative to TensorFlow Serving § PipelineAI Supports TensorRT!
  172. 172. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  173. 173. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  174. 174. REQUEST BATCH TUNING § max_batch_size § Enables throughput/latency tradeoff § Bounded by RAM § batch_timeout_micros § Defines batch time window, latency upper-bound § Bounded by RAM § num_batch_threads § Defines parallelism § Bounded by CPU cores § max_enqueued_batches § Defines queue upper bound, throttling § Bounded by RAM Reaching either threshold will trigger a batch (Diagram: separate, non-batched requests vs. combined, batched requests)
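One way these knobs are commonly expressed, assuming TensorFlow Serving's --batching_parameters_file text-proto format; the values here are illustrative, not recommendations:

# batching_parameters.txt
max_batch_size { value: 128 }
batch_timeout_micros { value: 5000 }
num_batch_threads { value: 8 }
max_enqueued_batches { value: 1000 }

tensorflow_model_server --port=8500 \
  --model_name=mnist \
  --model_base_path=/models/mnist \
  --enable_batching=true \
  --batching_parameters_file=batching_parameters.txt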
  175. 175. ADVANCED BATCHING & SERVING TIPS § Batch Just the GPU/TPU Portions of the Computation Graph § Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops § Distribute Large Models Into Shards Across TensorFlow Model Servers § Batch RNNs Used for Sequential and Time-Series Data § Find Best Batching Strategy For Your Data Through Experimentation § BasicBatchScheduler: Homogeneous requests (ie Regress or Classify) § SharedBatchScheduler: Mixed requests, multi-step, ensemble predict § StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads § Serve Only One (1) Model Inside One (1) TensorFlow Serving Process § Much Easier to Debug, Tune, Scale, and Manage Models in Production.
  176. 176. PIPELINE.AI FUNCTIONS (SERVERLESS) § Supports Kubernetes § Supports Docker Swarm
  177. 177. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  178. 178. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  179. 179. KUBERNETES PRIORITY SCHEDULING Workloads can … § access the entire cluster up to the autoscaler max size § trigger autoscaling until a higher-priority workload arrives § “fill the cracks” of resource usage of higher-priority work (i.e., wait to run until resources are freed)
  180. 180. KUBERNETES INGRESS § Single Service § Can also use Service (LoadBalancer or NodePort) § Fan Out & Name-Based Virtual Hosting § Route Traffic Using Path or Host Header § Reduces # of load balancers needed § 404 Implemented as default backend § Federation / Hybrid-Cloud § Creates Ingress objects in every cluster § Monitors health and capacity of pods within each cluster § Routes clients to appropriate backend anywhere in federation

Fan Out (Path):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-fanout
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Virtual Hosting:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-virtualhost
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
  181. 181. KUBERNETES INGRESS CONTROLLER § Ingress Controller Types § Google Cloud: kubernetes.io/ingress.class: gce § Nginx: kubernetes.io/ingress.class: nginx § Istio: kubernetes.io/ingress.class: istio § Must Start Ingress Controller Manually § Just deploying Ingress is not enough § Not started by kube-controller-manager § Start Istio Ingress Controller kubectl apply -f $ISTIO_INSTALL_PATH/install/kubernetes/istio.yaml
  182. 182. ISTIO EGRESS § Whitelist Domains To Access From Within the Service Mesh § Apply RouteRules § Apply DestinationPolicies § Supports TLS, HTTP, GRPC
kind: EgressRule
metadata:
  name: pipeline-api-egress
spec:
  destination:
    service: api.pipeline.ai
  ports:
  - port: 80
    protocol: http
  - port: 443
    protocol: https
  183. 183. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  184. 184. ISTIO ARCHITECTURE: INGRESS
  185. 185. ISTIO ARCHITECTURE: ENVOY § Lyft Project § High-perf Proxy (C++) § Lots of Metrics § Zone-Aware § Service Discovery § Load Balancing § Fault Injection, Circuits § %-based Traffic Split, Shadow § Sidecar Pattern § Rate Limiting, Retries, Outlier Detection, Timeout with Budget, …
  186. 186. ISTIO ARCHITECTURE: MIXER § Enforce Access Control § Evaluate Request-Attrs § Collect Metrics § Platform-Independent § Extensible Plugin Model
  187. 187. ISTIO ARCHITECTURE: PILOT § Envoy service discovery § Intelligent routing § A/B Tests § Canary deployments § RouteRule->Envoy conf § Propagates to sidecars § Supports Kube, Consul, ...
  188. 188. ISTIO ARCHITECTURE: SECURITY § Mutual TLS Auth § Credential Management § Uses Service-Identity § Canary Deployments § Fine-grained ACLs § Attribute & Role-based § Auditing & Monitoring
  189. 189. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  190. 190. ISTIO ROUTE RULES § Kubernetes Custom Resource Definition (CRD)
kind: CustomResourceDefinition
metadata:
  name: routerules.config.istio.io
spec:
  group: config.istio.io
  names:
    kind: RouteRule
    listKind: RouteRuleList
    plural: routerules
    singular: routerule
  scope: Namespaced
  version: v1alpha2
  191. 191. ADVANCED TRAFFIC ROUTING RULES § Content-based Routing § Uses headers, username, payload, … § Cross-Environment Routing § Shadow traffic prod=>staging
  192. 192. ISTIO DESTINATION POLICIES § Load Balancing § ROUND_ROBIN (default) § LEAST_CONN (between 2 randomly-selected hosts) § RANDOM § Circuit Breaker § Max connections § Max requests per conn § Consecutive errors § Penalty timer (15 mins) § Scan windows (5 mins)
circuitBreaker:
  simpleCb:
    maxConnections: 100
    httpMaxRequests: 1000
    httpMaxRequestsPerConnection: 10
    httpConsecutiveErrors: 7
    sleepWindow: 15m
    httpDetectionInterval: 5m
  193. 193. ISTIO AUTO-SCALING § Traffic Routing and Auto-Scaling Occur Independently § Istio Continues to Obey Traffic Splits After Auto-Scaling § Auto-Scaling May Occur In Response to New Traffic Route
  194. 194. A/B & BANDIT MODEL TESTING § Perform Live Experiments in Production § Compare Existing Model A with Model B, Model C § Safe Split-Canary Deployment § Pro Tip: Keep Ingress Simple - Use Route Rules Instead!
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-20-5-75
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 20       # 20% still routes to model A
  - labels:
      version: B
    weight: 5        # 5% routes to new model B
  - labels:
      version: C
    weight: 75       # 75% routes to new model C
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-1-2-97
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 1        # 1% routes to model A
  - labels:
      version: B
    weight: 2        # 2% routes to new model B
  - labels:
      version: C
    weight: 97       # 97% routes to new model C
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-97-2-1
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 97       # 97% still routes to model A
  - labels:
      version: B
    weight: 2        # 2% routes to new model B
  - labels:
      version: C
    weight: 1        # 1% routes to new model C
  195. 195. AGENDA Part 3: Advanced Model Serving + Traffic Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  196. 196. ISTIO METRICS AND MONITORING § Verify Traffic Splits § Fine-Grained Request Tracing
  197. 197. ISTIO & CHAOS + LATENCY MONKEY § Fault Injection § Delay § Abort
kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    delay:
      fixedDelay: 7.000s
      percent: 100
---
kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    abort:
      httpStatus: 420
      percent: 100
  198. 198. SPECIAL THANKS TO CHRISTIAN POSTA § http://blog.christianposta.com/istio-workshop
  199. 199. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Traffic Routing
  200. 200. PIPELINE.AI SUPPORTS ALL MAJOR MODELS
  201. 201. PIPELINE.AI ANNOUNCEMENTS http://pipeline.aihttp://community.pipeline.ai
  202. 202. THANK YOU!! § Please Star this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline Contact Me chris@pipeline.ai @cfregly
