2. AI
2016.3 AlphaGo vs. Lee Sedol
2016.9 Google (AI)
2015 Google Photos
“Google's AlphaGo AI Continues to Wallop Expert Human Go Player”, Popular Mechanics, 2016/3/10
http://www.popularmechanics.com/technology/a19863/googles-alphago-ai-wins-second-game-go/
8. “Introduction to multi gpu deep learning with DIGITS 2”, Mike Wang
http://www.slideshare.net/papisdotio/introduction-to-multi-gpu-deep-learning-with-digits-2-mike-wang/6
9. (same reference as slide 8)
10. (same reference as slide 8)
24. An Introduction to HBM - High Bandwidth Memory -
Stacked Memory and The Interposer
http://www.guru3d.com/articles-pages/an-introduction-to-hbm-high-bandwidth-memory,2.html
• HBM stacks multiple DRAM dies vertically
• The stack is connected to the GPU through a silicon interposer
• A 2.5D packaging approach
25. GDDR5 vs. HBM2
              GDDR5                      HBM2
Bus width     32-bit                     1024-bit
Data rate     Up to 1750 MHz (7 Gbps)    2 Gbps
Bandwidth     Up to 28 GB/s per chip     256 GB/s (2 Tb/s) per stack
Voltage       1.5 V                      1.3 V
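The bandwidth figures in the table follow directly from bus width and per-pin data rate. A minimal sketch of that arithmetic, using the slide's numbers as inputs (the function name is illustrative):

```python
# Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 -> GB/s.
# Input figures are the slide's nominal values, not measurements.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return theoretical peak bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

gddr5 = peak_bandwidth_gb_s(32, 7.0)    # per GDDR5 chip
hbm2 = peak_bandwidth_gb_s(1024, 2.0)   # per HBM2 stack

print(f"GDDR5: {gddr5:.0f} GB/s per chip")   # 28 GB/s
print(f"HBM2:  {hbm2:.0f} GB/s per stack")   # 256 GB/s
```

The roughly 9x gap comes almost entirely from the 32x wider bus, which the interposer makes practical despite the lower per-pin rate.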
44. “Accelerating Neural Networks with Binary Arithmetic”
(blog post)
These 32 bit floating point multiplications, however, are very expensive.
In BNNs, floating point multiplications are supplanted with
bitwise XNORs and left and right bit shifts.
This is extremely attractive from a hardware perspective:
binary operations can be implemented computationally efficiently at a low
power cost.
Nervana website (blog post)
https://www.nervanasys.com/accelerating-neural-networks-binary-arithmetic/
In a BNN, the expensive 32-bit floating-point multiplications are replaced by bitwise XNORs and bit shifts.
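The XNOR substitution can be sketched as follows. This is an illustrative example, not code from the blog post: it assumes the common bit-packing convention (+1 maps to bit 1, -1 to bit 0), under which the dot product of two {-1, +1} vectors is 2 * popcount(XNOR(a, b)) - n.

```python
# Sketch of the XNOR trick behind BNNs (assumed encoding: +1 -> 1, -1 -> 0).
# dot(a, b) over n packed bits = 2 * popcount(~(a ^ b) & mask) - n

def binary_dot(a: int, b: int, n: int) -> int:
    """Dot product of two {-1,+1} vectors packed into n-bit integers."""
    mask = (1 << n) - 1
    xnor = ~(a ^ b) & mask          # bit is 1 where the signs agree
    matches = bin(xnor).count("1")  # popcount: number of agreeing positions
    return 2 * matches - n          # agreements minus disagreements

# a = (+1, -1, +1, +1) -> 0b1011, b = (+1, +1, -1, +1) -> 0b1101
print(binary_dot(0b1011, 0b1101, 4))  # -> 0
```

One XNOR plus a popcount thus replaces n floating-point multiply-adds, which is why the operation maps so cheaply onto hardware.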