
Faster deep learning solutions from training to inference - Michele Tameni - Codemotion Amsterdam 2017

The Intel Deep Learning SDK enables the use of optimized open-source deep-learning frameworks, including Caffe and TensorFlow, through a step-by-step wizard or interactive IPython notebooks. It includes quick and easy installation of all dependent libraries, plus advanced tools for data pre-processing, model training, optimization, and deployment, providing an end-to-end solution. In addition, it supports scale-out across multiple computers for training, as well as compression methods for deploying models on various platforms under memory and speed constraints.
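As a rough illustration of the data pre-processing step the SDK's wizard automates, here is a minimal plain-Python sketch; it is not the SDK's own tooling, and the folder layout and 224x224 target size are assumptions.

```python
# Minimal data-prep sketch: resize images and build label arrays before training.
# This is NOT the Intel Deep Learning SDK API, just plain Python/NumPy/Pillow
# illustrating the kind of pre-processing the SDK wizard automates.
import os
import numpy as np
from PIL import Image

def load_dataset(root_dir, size=(224, 224)):
    """Assumes a root_dir/<class_name>/*.jpg layout (an assumption, not from the talk)."""
    images, labels = [], []
    class_names = sorted(os.listdir(root_dir))
    for label, class_name in enumerate(class_names):
        class_dir = os.path.join(root_dir, class_name)
        for fname in os.listdir(class_dir):
            img = Image.open(os.path.join(class_dir, fname)).convert("RGB").resize(size)
            images.append(np.asarray(img, dtype=np.float32) / 255.0)  # scale to [0, 1]
            labels.append(label)
    return np.stack(images), np.array(labels), class_names
```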



  1. Faster deep learning solutions from training to inference using Intel® Deep Learning SDK. Michele Tameni, @netvandal. Amsterdam, 16-17 May 2017.
  2. Ciao! Michele Tameni: Intel Software Innovator; software engineer, writer, photographer. But not…
  3. !DataScientist
  4. Intel® Deep Learning SDK: easily develop and deploy deep learning solutions, using Intel® Architecture and popular frameworks.
  5. Deep learning today is not really accessible… …and can be overwhelming.
  6. Deep learning: deep neural networks are solving real-life cognitive tasks such as visual understanding, NLP, and speech recognition.
  7. Deep learning is everywhere at Intel: Manufacturing, Processor Design, Sales & Marketing, Health Analytics, AI Products, Perceptual Computing.
  8. Deep learning: the model is inspired by a multi-layer network of neurons, the network topology. (See the topology sketch after the transcript.)
  9. Deep learning steps. Step 1, Training (in the data center, over hours/days/weeks): lots of labeled input data is used to create a "deep neural net" math model; output: a trained model. Step 2, Inference (at the endpoint or in the data center, instantaneous): new input from cameras and sensors runs through the trained neural network model; output: a classification, e.g. 97% person, 2% traffic light. (See the training-to-inference sketch after the transcript.)
  10. Intel® Deep Learning SDK workflow. Training: data preparation, build a model, model training (compression, visualizations, algorithmic features, multi-node). Inference: Model Optimizer, Inference Engine.
  11. Intel vision: democratize deep learning. Allow every data scientist and developer to easily deploy open-source deep learning frameworks optimized for Intel® Architecture, delivering end-to-end capabilities, a rich user experience, and tools to boost productivity: plug & train, maximize performance, productivity tools, accelerate deployment.
  12. Plug & Train
  13. Plug & Train, an easy-to-use installer: install on Linux (CentOS/Ubuntu) or Mac; install from Linux, Mac, or Windows; use the tool remotely via the Chrome browser from any platform.
  14. Maximize performance
  15. Performance boost with distributed training: multi-node training on Kubernetes, with browser, Jupyter notebook, and DLSDK services, and a DLSDK container plus data (file system) on each node (Node 1, Node 2, Node 3, …). (See the multi-worker sketch after the transcript.)
  16. Productivity tools
  17. Productivity tools: step-by-step wizard, interactive notebook, model visualization, model compression.
  18. Accelerate deployment
  19. Intel's Deep Learning Deployment Toolkit: enable full utilization of Intel® Architecture for inference while abstracting the hardware from developers. It imports trained models from popular DL frameworks regardless of the training hardware; enhances the model for improved execution, storage, and transmission; optimizes inference execution for the target hardware (computational graph analysis, scheduling, model compression, quantization); enables seamless integration with application logic; and delivers an embedded-friendly inference solution. Ease of use + embedded friendly + extra performance boost. Step 1: convert & optimize the trained model; step 2: run.
  20. Model Optimizer: imports models from various frameworks (Caffe, TensorFlow, more planned); performs quantization by analyzing the activations of the network (requires an image set and the actual framework); provides accuracy feedback (for selected targets, by emulation of kernels); converts to a unified model (IR, later Intel® Nervana™ Graph); and generates OpenVX code to integrate in OpenVX applications, including fusion to known OpenVX kernels, with the generated code calling the OpenVX API to construct the related OpenVX graph. Flow: trained model → Model Optimizer (analyze, quantize, optimize topology, convert) → IR model, OpenVX C code, and an accuracy/statistics report. (See the quantization sketch after the transcript.)
  21. Inference Engine features. Inference Engine Runtime: a simple and unified API for inference, Load() then Infer(), across all IA; optimized inference on large IA hardware targets (CPU/GEN/FPGA); inference feedback (deployed optimized topology; per layer: performance, memory, activation overflow); and an API for saving and loading a loaded network runtime, for loading speed-up and functional safety (AOT, proprietary per target). Inference Engine Validator: accuracy feedback for selected problem domains (image classification, semantic segmentation), performance feedback for several batch sizes (and repetitions), and a report of the inference feedback above. (See the Load()/Infer() sketch after the transcript.)
  22. Use cases
  23. Solve a #1stWorldProblem
  24. #1stWorldProblem: Where can I #WildSwim near home?
  25. Demo: http://software.intel.com/deep-learning-sdk/
  26. Download, use, and provide feedback, or search for "Intel Deep Learning SDK": http://software.intel.com/deep-learning-sdk/ Thank you!
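Slide 8 describes the model as a multi-layer network of neurons with a given network topology. Here is a minimal NumPy sketch of that idea; the layer sizes and random weights are arbitrary assumptions, not from the talk.

```python
# Forward pass through a tiny multi-layer network (topology: 784 -> 128 -> 64 -> 10).
# Purely illustrative: layer sizes and random weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]                  # the "network topology"
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate one input vector through every layer of neurons."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ w + b)            # ReLU hidden layers
    logits = x @ weights[-1] + biases[-1]
    exp = np.exp(logits - logits.max())           # softmax output layer
    return exp / exp.sum()

print(forward(rng.standard_normal(784)))          # ten class probabilities
```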
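Slide 9 splits deep learning into training (data center, hours to weeks) and inference (endpoint, instantaneous). The hedged Keras sketch below shows that fit-then-predict lifecycle; the toy dataset and two-layer architecture are assumptions, not anything from the presentation.

```python
# Step 1, training: fit a model on labelled data and save the trained artefact.
# Step 2, inference: reload the trained model elsewhere and classify new input.
# The toy data and architecture are assumptions.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(256, 32).astype("float32")
y_train = np.random.randint(0, 2, size=256)              # e.g. "person" vs "not person"

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)         # hours/days/weeks on real data
model.save("trained_model.h5")                           # the "trained model" handed to step 2

# Later, on the inference side:
deployed = tf.keras.models.load_model("trained_model.h5")
new_input = np.random.rand(1, 32).astype("float32")      # e.g. one camera/sensor frame
print(deployed.predict(new_input))                       # class probabilities, e.g. 0.97 / 0.03
```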
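Slide 15 shows training scaled out across several Kubernetes nodes, one DLSDK container per node. As a rough stand-in (the SDK's own multi-node mechanism is not detailed in the slides), this sketch uses TensorFlow's multi-worker strategy; the TF_CONFIG addresses and the model are placeholders.

```python
# Multi-worker training sketch, NOT the SDK's Kubernetes setup: each node/container
# would run this script with its own task index; peer addresses are placeholders.
import json
import os
import tensorflow as tf

os.environ.setdefault("TF_CONFIG", json.dumps({
    "cluster": {"worker": ["node1:12345", "node2:12345", "node3:12345"]},
    "task": {"type": "worker", "index": 0},       # each container sets its own index
}))

strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():                            # variables are mirrored across workers
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(...) would then split each global batch across the nodes shown on the slide.
```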
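Slide 20 says the Model Optimizer "performs quantization by analyzing the activations of the network". Below is a generic NumPy sketch of that idea (a symmetric int8 scale derived from calibration activations); it is not Intel's actual algorithm, and the random data stands in for a real calibration image set.

```python
# Generic post-training quantization sketch: pick an int8 scale from activations
# observed on a calibration set, then quantize/dequantize tensors with it.
# This illustrates the idea only; it is not the Model Optimizer's implementation.
import numpy as np

def int8_scale(calibration_values):
    """Symmetric scale chosen from the largest activation magnitude seen in calibration."""
    return np.abs(calibration_values).max() / 127.0

def quantize(x, scale):
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# "Analyzing the activations": run a calibration image set and record the value range.
calibration_activations = np.random.randn(10000).astype(np.float32)
scale = int8_scale(calibration_activations)

# At inference time the same tensor is stored and moved as int8 (4x smaller than float32).
activations = np.random.randn(64).astype(np.float32)
q = quantize(activations, scale)
print("mean abs error:", float(np.abs(activations - dequantize(q, scale)).mean()))
```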
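Slide 21 reduces the Inference Engine API to a Load() followed by Infer() across all IA targets. The wrapper below is hypothetical Python that only mirrors that two-call shape; the class, function, and tensor names are invented for illustration and are not the Deployment Toolkit's real bindings.

```python
# Hypothetical wrapper mirroring the slide's Load() -> Infer() pattern.
# All names here are invented for illustration; this is NOT the real toolkit API.
from dataclasses import dataclass
from typing import Dict

import numpy as np

@dataclass
class LoadedNetwork:
    model_path: str
    device: str

    def infer(self, inputs: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
        # A real engine would execute the optimized IR on CPU/GEN/FPGA and could
        # report per-layer performance, memory and activation overflow (slide 21).
        batch = next(iter(inputs.values())).shape[0]
        return {"prob": np.full((batch, 1000), 1.0 / 1000, dtype=np.float32)}

def load(ir_model_path: str, device: str = "CPU") -> LoadedNetwork:
    """Step 1: Load() an IR model produced by the Model Optimizer onto a target device."""
    return LoadedNetwork(model_path=ir_model_path, device=device)

# Step 2: Infer() with the same call shape regardless of the target hardware.
net = load("model.xml", device="CPU")
result = net.infer({"data": np.zeros((1, 3, 224, 224), dtype=np.float32)})
print(result["prob"].shape)                       # (1, 1000) dummy class probabilities
```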
