Resnet.pdf

  1. Learning with Purpose DEEP RESIDUAL NETWORKS Kaiming He et al., “Deep Residual Learning for Image Recognition”; Kaiming He et al., “Identity Mappings in Deep Residual Networks”; Andreas Veit et al., “Residual Networks Behave Like Ensembles of Relatively Shallow Networks”
  2. Learning with Purpose ResNet @ILSVRC & COCO 2015 Competitions 1st places in all five main tracks • ImageNet Classification: “Ultra-deep” 152-layer nets • ImageNet Detection: 16% better than 2nd • ImageNet Localization: 27% better than 2nd • COCO Detection: 11% better than 2nd • COCO Segmentation: 12% better than 2nd
  3. Learning with Purpose Evolution of Deep Networks ImageNet Classification Challenge Error rates by year ImageNet competition results show that the winning solutions have become deeper and deeper: from 8 layers in 2012 to 200+ layers in 2016.
  4. Learning with Purpose What Does Depth Mean? Depth increases representation ability. Forward (data flow)
  5. Learning with Purpose
  6. Learning with Purpose What Does Depth Mean? Is learning better networks as easy as stacking more layers? Backward (gradient flow)
  7. Learning with Purpose Gradient Vanishing • The multiplying property of gradients causes the phenomenon (see the chain-rule sketch below) • This can be addressed by: – Normalized initialization – Batch Normalization – An appropriate activation function: Sigmoid(x) → ReLU(x)
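In symbols (standard chain-rule notation, not taken from the slides), the gradient that reaches an early layer of a plain stack is a product of per-layer Jacobians, so it can shrink exponentially with depth:

```latex
% Plain stack x_{i+1} = g_i(x_i): the gradient at an early layer is a
% product of per-layer Jacobians.
\[
\frac{\partial E}{\partial x_0}
  = \frac{\partial E}{\partial x_L}
    \prod_{i=0}^{L-1} \frac{\partial x_{i+1}}{\partial x_i}
\]
% With sigmoid activations each factor contains \sigma'(\cdot) \le 1/4, so the
% product decays exponentially with depth L; ReLU, normalized initialization,
% and BN keep the factors closer to 1.
```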
  8. Learning with Purpose Simply Stacking Layers? • Plain networks on CIFAR-10: simply stacking 3×3 conv layers… • The 56-layer net has higher training error and test error than the 20-layer net
  9. Learning with Purpose Performance Saturation/Degradation • Overly deep plain nets have higher training error • A general phenomenon, observed in many datasets.
  10. Learning with Purpose A shallower model (18 layers) vs. a deeper counterpart (34 layers) • Richer solution space • A deeper model should not have higher training error • A solution by construction: – Original layers: copied from a trained shallower model – Extra layers: set as identity – At least the same training error • Optimization difficulties: solvers cannot find the solution when going deeper…
  11. Learning with Purpose Network Design • Keep it simple • Based on the VGG philosophy – All 3×3 conv (almost) – Spatial size /2 => number of filters ×2 – Simple design; just deep! (see the sketch below)
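As a rough illustration of that design rule, this sketch stacks plain 3×3 conv layers in PyTorch, halving the spatial size while doubling the filter count at each stage; the widths (16/32/64) and per-stage depth are illustrative placeholders, not a configuration taken from the slides:

```python
# Sketch of the VGG-style rule: (almost) all 3x3 convolutions; whenever the
# spatial size is halved (stride 2), the number of filters is doubled.
import torch.nn as nn

def make_plain_stages(in_channels=16, widths=(16, 32, 64), blocks_per_stage=3):
    layers, channels = [], in_channels
    for stage, width in enumerate(widths):
        for block in range(blocks_per_stage):
            # The first conv of every stage after the first downsamples by
            # stride 2 while the channel count doubles (16 -> 32 -> 64).
            stride = 2 if (stage > 0 and block == 0) else 1
            layers += [
                nn.Conv2d(channels, width, kernel_size=3,
                          stride=stride, padding=1, bias=False),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            ]
            channels = width
    return nn.Sequential(*layers)
```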
  12. Learning with Purpose ResNet Can Be Deeper
  13. Learning with Purpose Residual Learning Block • Define H(x) = F(x) + x; the stacked weight layers approximate the residual F(x) = H(x) - x instead of H(x) itself • If the optimal function is closer to an identity mapping, it is easier for the solver to find the perturbations with reference to an identity mapping than to learn the function as a new one • The identity shortcut introduces neither extra parameters nor computational complexity • Element-wise addition is performed on all feature maps
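A minimal PyTorch sketch of such a block, assuming equal input and output widths for brevity (layer sizes are illustrative; the element-wise addition and the ReLU after the addition follow the description above):

```python
# Original (post-activation) residual block: the stacked layers compute F(x),
# the identity shortcut adds x element-wise, and ReLU follows the addition.
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        fx = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))  # F(x)
        return self.relu(fx + x)  # H(x) = F(x) + x, then ReLU after the addition
```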
  14. Learning with Purpose The Insight of Identity Mapping • We turn the ReLU activation function after the addition into an identity mapping • If f is also an identity mapping: x_{l+1} ≡ y_l
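In the notation of the identity-mappings paper (the general form is reconstructed from that paper rather than from the extracted slide text):

```latex
% General residual unit: shortcut h, residual branch F, after-addition function f.
\[
y_l = h(x_l) + \mathcal{F}(x_l, \mathcal{W}_l), \qquad x_{l+1} = f(y_l)
\]
% With an identity shortcut h(x_l) = x_l and an identity f, the unit reduces to
\[
x_{l+1} = x_l + \mathcal{F}(x_l, \mathcal{W}_l)
\]
```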
  15. Learning with Purpose Smooth Forward Propagation • Any x_l is directly forward-propagated to any x_L, plus the residuals • The feature of any deeper layer is an additive outcome of all preceding residual functions • In contrast to the multiplicative form of a plain network (ignoring BN and ReLU); see the formulas below
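Unrolling the recursion from any shallower unit l to any deeper unit L gives the additive form, contrasted with the multiplicative form of a plain stack (BN and ReLU ignored, as on the slide):

```latex
% Residual network: unrolling x_{i+1} = x_i + F(x_i) gives an additive form
\[
x_L = x_l + \sum_{i=l}^{L-1} \mathcal{F}(x_i, \mathcal{W}_i)
\]
% Plain network (ignoring BN and ReLU): a multiplicative form
\[
x_L = \left( \prod_{i=l}^{L-1} W_i \right) x_l
\]
```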
  16. Learning with Purpose Smooth Backward Propagation • The gradient flow is also in the form of addition • The gradient of any layer is unlikely to vanish • In contrast to the multiplicative form of a plain network; see the formula below
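Differentiating the unrolled forward form gives the additive gradient (same notation as above):

```latex
\[
\frac{\partial \mathcal{E}}{\partial x_l}
  = \frac{\partial \mathcal{E}}{\partial x_L}
    \left( 1 + \frac{\partial}{\partial x_l}
      \sum_{i=l}^{L-1} \mathcal{F}(x_i, \mathcal{W}_i) \right)
\]
% The additive term 1 carries the gradient from x_L to x_l directly, and the
% parenthesized factor is unlikely to be exactly -1 for all samples in a
% mini-batch, so the gradient of a layer is unlikely to vanish.
```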
  17. Learning with Purpose What if the Shortcut Mapping h(x) ≠ Identity?
  18. Learning with Purpose If Scaling the Shortcut • Suppose the shortcut is scaled: h(x_l) = λ_l x_l • For an extremely deep network (L is large), if λ_i > 1 for all i, this factor can be exponentially large; if λ_i < 1 for all i, this factor can be exponentially small and vanish
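With a scaled shortcut, the unrolled forward form picks up a product of scaling factors (reconstructed from the identity-mappings paper; the conditions λ_i > 1 and λ_i < 1 were dropped from the extracted slide text):

```latex
% Scaled shortcut h(x_l) = \lambda_l x_l:
\[
x_L = \left( \prod_{i=l}^{L-1} \lambda_i \right) x_l
      + \sum_{i=l}^{L-1} \hat{\mathcal{F}}(x_i, \mathcal{W}_i)
\]
% The shortcut contribution to the gradient is scaled by \prod_i \lambda_i:
% it can explode when every \lambda_i > 1 and vanish when every \lambda_i < 1,
% forcing the signal through the harder, weighted path.
```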
  19. Learning with Purpose If Gating the Shortcut • Gating should increase the representation ability (the parameter count increases) • Yet it is the optimization, rather than the representation ability, that dominates the results
  20. Learning with Purpose Results of Using Different Types of Shortcut • The identity shortcut is the best
  21. Learning with Purpose Training curves on CIFAR-10 for various shortcuts: solid lines denote test error (y-axis on the right), and dashed lines denote training loss (y-axis on the left)
  22. Learning with Purpose On the Usage of Activation Functions • Proposed: move BN and ReLU before the weight layers (pre-activation), sketched below
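A minimal PyTorch sketch of the pre-activation unit, again assuming equal input and output widths for brevity (the BN → ReLU → conv ordering is the point; exact layer sizes are illustrative):

```python
# Pre-activation residual unit: BN and ReLU come before each convolution,
# so nothing follows the addition and the shortcut path stays clean.
import torch.nn as nn

class PreActBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))    # BN -> ReLU -> conv
        out = self.conv2(self.relu(self.bn2(out)))  # BN -> ReLU -> conv
        return x + out  # nothing after the addition: x_{l+1} = x_l + F(x_l)
```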
  23. Learning with Purpose Results of Experiments on Activation
  24. Learning with Purpose ReLU vs. ReLU+BN • BN could block propagation • Keep the shortest path as smooth as possible
  25. Learning with Purpose ReLU vs. Identity • ReLU could block propagation when the network is deep • Pre-activation eases the difficulty of optimization
  26. Learning with Purpose ImageNet Results
  27. Learning with Purpose Conclusion from He • Keep the shortest path as smooth (clean) as possible by making h(x) and f(x) identity mappings • Forward and backward signals flow directly along this path • The features of any layer are an additive outcome • A 1000-layer ResNet can be easily trained and achieves better accuracy
  28. Learning with Purpose Further Expansion of the Residual Network • Following the previous analysis, we replace x_l with y_l and F with f_l, so y_l = y_{l-1} + f_l(y_{l-1}) • We further expand this expression by unrolling the recursion in terms of the basic input y_0 • A novel interpretation of residual networks
  29. Learning with Purpose Example of Unrolling • Take L = 3 and l = 0 as an example of unrolling (worked out below) • The data flows along exponentially many paths from input to output • We infer that a residual network with n modules has 2^n paths
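The unrolled expression for L = 3 and l = 0, reconstructed in the notation of Veit et al.:

```latex
\begin{align*}
y_3 &= y_2 + f_3(y_2) \\
    &= \bigl[\, y_1 + f_2(y_1) \,\bigr] + f_3\bigl( y_1 + f_2(y_1) \bigr) \\
    &= \bigl[\, y_0 + f_1(y_0) + f_2\bigl(y_0 + f_1(y_0)\bigr) \,\bigr]
       + f_3\Bigl( y_0 + f_1(y_0) + f_2\bigl(y_0 + f_1(y_0)\bigr) \Bigr)
\end{align*}
% Every module f_i is either entered or skipped, so the expansion contains
% 2^3 = 8 distinct paths from y_0 to y_3.
```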
  30. Learning with Purpose Different from a Traditional Neural Network • In a traditional NN, each layer depends only on the previous layer • In ResNet, data flows along many paths from input to output; each path is a unique configuration of which residual modules to enter and which to skip
  31. Learning with Purpose Deleting an Individual Module in ResNet • Deleting a layer from a residual network at test time (a) is equivalent to zeroing half of the paths • In ordinary feed-forward networks (b) such as VGG or AlexNet, deleting an individual layer alters the only viable path from input to output
  32. Learning with Purpose Deleting an Individual Module in ResNet
  33. Learning with Purpose Deleting Many Modules in ResNet • One key characteristic of ensembles is that their performance varies smoothly with the number of members • When k residual modules are removed, the effective number of paths is reduced from 2^n to 2^(n-k) (see the check below) • Error increases smoothly when several modules are randomly deleted from a residual network
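A tiny sanity check of the path count (the module count and the deleted indices below are made up for illustration):

```python
# Each residual module contributes a binary choice (take F_i or skip it),
# so n modules give 2**n input-output paths; deleting k leaves 2**(n - k).
from itertools import product

def count_paths(n_modules, deleted=()):
    kept = [i for i in range(n_modules) if i not in deleted]
    return sum(1 for _ in product((0, 1), repeat=len(kept)))

n = 10
print(count_paths(n))                  # 1024 = 2**10
print(count_paths(n, deleted=(2, 5)))  # 256  = 2**8
```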
  34. Learning with Purpose Reordering Modules in ResNet • Error also increases smoothly when a residual network is reordered by shuffling its building blocks • The degree of reordering is measured by the Kendall tau correlation coefficient
  35. Learning with Purpose Conclusion • First, the unraveled view reveals that residual networks can be viewed as a collection of many paths instead of a single ultra-deep network • Second, lesion studies show that, although these paths are trained jointly, they do not strongly depend on each other
  36. Learning with Purpose Thank you