The document covers SkipRNN and RepeatRNN, two recurrent neural network models that dynamically adjust how much computation they perform. SkipRNN learns to skip state updates by introducing a binary gate that decides, at each time step, whether to update the hidden state or copy it forward unchanged; it was shown to achieve better accuracy with less computation on sequential tasks. RepeatRNN comes from a reproduction study of Adaptive Computation Time (ACT), which allows the network to repeat computations on difficult examples; RepeatRNN, which instead repeats each update a fixed number of times, was found to train and infer faster than the baselines on an addition task. Code for both models is available online in PyTorch and TensorFlow.
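To make the skip mechanism concrete, below is a minimal PyTorch sketch of a single Skip-RNN step, assuming a GRU as the underlying update function. The class name SkipGRUCell, the update_prob head, and the forward signature are illustrative choices for this example, not the authors' released code. The binary gate is obtained by rounding an accumulated update probability with a straight-through estimator, so the discrete decision stays trainable.

```python
import torch
import torch.nn as nn

class SkipGRUCell(nn.Module):
    """Illustrative Skip-RNN step: a binary gate u_t chooses between
    updating the hidden state and copying the previous one forward."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Scalar head emitting the increment to the update probability.
        self.update_prob = nn.Linear(hidden_size, 1)

    def forward(self, x, h, cum_p):
        # x: (batch, input_size), h: (batch, hidden), cum_p: (batch, 1)
        # Straight-through estimator: forward pass uses round(cum_p),
        # backward pass treats the rounding as the identity.
        u = cum_p + (cum_p.round() - cum_p).detach()

        h_candidate = self.cell(x, h)
        h = u * h_candidate + (1.0 - u) * h  # update (u=1) or copy (u=0)

        delta_p = torch.sigmoid(self.update_prob(h))
        # After an update, restart the accumulator at delta_p; after a
        # skip, keep accumulating until it saturates and forces an update.
        cum_p = u * delta_p + (1.0 - u) * torch.clamp(cum_p + delta_p, max=1.0)
        return h, cum_p

# Hypothetical usage: the fewer steps with u=1, the less computation.
cell = SkipGRUCell(8, 16)
x_seq = torch.randn(5, 4, 8)   # (time, batch, input)
h = torch.zeros(4, 16)
cum_p = torch.ones(4, 1)       # start at 1 so the first step updates
for x in x_seq:
    h, cum_p = cell(x, h, cum_p)
```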
Dynamic RNNs Adapt Computation
1. Skipping and Repeating Samples in RNNs (Case study III)
Xavier Giro-i-Nieto (xavier.giro@upc.edu)
Associate Professor, Intelligent Data Science and Artificial Intelligence Center
Universitat Politecnica de Catalunya (UPC)
Deep Learning Workshop, Dublin City University, 21-22 May 2018
#InsightDL2018
2. Our team @ UPC IDEAI Barcelona
Victor Campos, Amaia Salvador, Amanda Duarte, Dani Fojo, Görkem Çamli, Akis Linardos, Santiago Pascual, Xavi Giró, Miriam Bellver, Marta R. Costa-jussà, Ferran Marqués, Jordi Torres (BSC), Marc Assens, Sandra Roca, Miquel Llobet, Antonio Bonafonte
38. Reproducing and Analyzing Adaptive Computation Time in PyTorch and TensorFlow
Dani Fojo, Víctor Campos, Xavier Giró-i-Nieto
Fojo, Daniel, Víctor Campos, and Xavier Giró-i-Nieto. "Comparing Fixed and Adaptive Computation Time for
Recurrent Neural Networks." ICLR Workshop 2018.
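For context on the idea this work compares against ACT, here is a hedged PyTorch sketch of RepeatRNN: instead of ACT's learned, per-sample halting probability, every input is simply processed a fixed number of times. The class name RepeatRNNCell, the choice of GRUCell, and the repeats value are assumptions for illustration, not the published implementation.

```python
import torch
import torch.nn as nn

class RepeatRNNCell(nn.Module):
    """Sketch of RepeatRNN: apply the recurrent update to the same
    input a fixed number of times per step (a hyperparameter),
    instead of ACT's learned, per-sample halting mechanism."""

    def __init__(self, input_size, hidden_size, repeats=3):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.repeats = repeats  # fixed; ACT would learn when to stop

    def forward(self, x, h):
        for _ in range(self.repeats):
            h = self.cell(x, h)  # re-process the same input each repeat
        return h

# Example step: repeats=3 means three cell applications per input.
cell = RepeatRNNCell(8, 16, repeats=3)
h = cell(torch.randn(4, 8), torch.zeros(4, 16))
```

The appeal of this baseline is its simplicity: a single integer hyperparameter replaces ACT's halting machinery while still spending extra computation on every input.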
51. @DocXavi
Xavier Giro-i-Nieto
Download slides from: http://bit.ly/InsightDL2018
We are looking for both industrial & academic partners: xavier.giro@upc.edu