Eye-Deep
Detecting Diabetes
with Convolutional Neural Networks
team o_O
Mathis Antony
sveitser@gmail.com
Stephan BrĂŒggemann
https://github.com/sveitser/kaggle_diabetic
https://www.kaggle.com/c/diabetic-retinopathy-detection
Intro
● Supervised Learning
○ Training set (features + labels) and test set (only features)
○ Training
■ learn relationship between features and labels (on training set)
○ Testing
■ predict labels from test data and measure performance
● Deep learning
○ Deep → many layers
○ Concepts are not “new”; recent success comes from:
■ more data (internet)
■ more computational power (GPUs)
■ advancements in the field
■ great open source software
Neurons
Complete neuron cell diagram
Mariana Ruiz Villarreal
https://en.wikipedia.org/wiki/Neuron
Artificial Neurons
1. sum inputs
2. apply activation function
● Rectified Linear Unit (ReLU): max(x, 0)
● Leaky ReLU: max(x/100, x)
(figure: activation function plot; x = sum of inputs, y = output)
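A minimal numpy sketch of the two activation functions above (numpy is an assumption, not part of the slides):

import numpy as np

def relu(x):
    # Rectified Linear Unit: pass positive inputs through, clamp negatives to 0
    return np.maximum(x, 0)

def leaky_relu(x):
    # Leaky ReLU: small slope (1/100) for negative inputs instead of a hard 0
    return np.maximum(x / 100, x)

x = np.array([-3.0, -0.5, 0.0, 2.0])
print(relu(x))        # [0.  0.  0.  2.]
print(leaky_relu(x))  # [-0.03  -0.005  0.  2.]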
Forward Pass on Toy Neural Network
output
input
tail age
weights 1 -1 -2 31 2
-1 2 -2weights
tail? = yes | age = 3 | grumpiness = 10
loss/error:
● (prediction - truth)2
● → (11 - 10)2
= 1
1 3
1·1 - 2·3 = -5 -1·1 + 3·3 = 8 1·1 + 2·2 = 5
1·5 + 2·8 - 2·5 = 11
● Creature grumpiness
● Features: “has tail?”, age
● Target: grumpiness
● Loss: mean squared error
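A minimal numpy sketch of this forward pass (numpy is an assumption; the weights are the ones read off the diagram above, and no activation function is applied in this toy example):

import numpy as np

# features: has tail? = 1, age = 3; target grumpiness = 10
x = np.array([1.0, 3.0])
truth = 10.0

# hidden layer: 3 units, one row of (tail weight, age weight) per unit
W_hidden = np.array([[ 1.0, -2.0],
                     [-1.0,  3.0],
                     [-1.0,  2.0]])
# output layer: one weight per hidden unit
w_out = np.array([-1.0, 2.0, -2.0])

hidden = W_hidden @ x              # [-5.  8.  5.]
prediction = w_out @ hidden        # 11.0
loss = (prediction - truth) ** 2   # 1.0
print(hidden, prediction, loss)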
Gradient Descent
1. Compute derivative of the loss function with respect to the weights
2. Update the weights: w ← w - η · ∂L/∂w
(η: learning rate)
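A minimal sketch of the update rule on a toy squared-error loss (pure numpy; the model, learning rate and step count are illustrative):

import numpy as np

# toy linear model: prediction = w @ x, loss = (prediction - truth)^2
x, truth = np.array([1.0, 3.0]), 10.0
w = np.array([0.5, 0.5])
eta = 0.01  # learning rate

for step in range(100):
    prediction = w @ x
    # dL/dw = 2 * (prediction - truth) * x   (chain rule)
    grad = 2 * (prediction - truth) * x
    w -= eta * grad                # gradient descent update
print(w, w @ x)                    # prediction approaches 10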
Training
1. Initialize weights randomly
2. Until happy, repeat
a. Forward pass through network (make prediction)
b. Calculate error
c. Backward propagation of errors (backprop)
d. Update weights
● Done in mini-batches (see the batching sketch below)
● Only one batch needs to be in memory at a time, if necessary
● Libraries provide almost everything
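A sketch of mini-batch iteration (numpy; batch size and shuffling are illustrative, the actual forward/backward passes and updates come from the library):

import numpy as np

def iterate_minibatches(X, y, batch_size=128, shuffle=True):
    # yield one batch at a time; batches can also be loaded/augmented on the fly
    idx = np.arange(len(X))
    if shuffle:
        np.random.shuffle(idx)
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

# usage: for X_batch, y_batch in iterate_minibatches(X, y): forward, backprop, update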
Image Convolutions
● slide a small filter over the image; each output pixel is the weighted sum of the input pixels under the filter
● example 3x3 filters:
-1 -1 -1
-1 8 -1
-1 -1 -1
-1 0 -1
0 -1 0
-1 0 -1
deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
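A sketch of applying the first 3x3 filter above with scipy (scipy and skimage are assumed only to provide the convolution routine and a sample grayscale image):

import numpy as np
from scipy.signal import convolve2d
from skimage import data

image = data.camera().astype('f4')      # any grayscale image
edge_filter = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype='f4')

# convolve the image with the filter; 'same' keeps the original image size
edges = convolve2d(image, edge_filter, mode='same', boundary='symm')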
Max Pooling
● pool size 3, stride 2
input (5x5):
1 1 1 1 7
0 2 1 1 2
1 2 4 0 1
2 4 5 5 6
2 4 1 4 2
output after max pooling (2x2):
4 7
5 6
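A minimal numpy sketch reproducing the example above (pool size 3, stride 2):

import numpy as np

x = np.array([[1, 1, 1, 1, 7],
              [0, 2, 1, 1, 2],
              [1, 2, 4, 0, 1],
              [2, 4, 5, 5, 6],
              [2, 4, 1, 4, 2]])

def max_pool(x, pool_size=3, stride=2):
    # slide a pool_size x pool_size window over x with the given stride
    rows = range(0, x.shape[0] - pool_size + 1, stride)
    cols = range(0, x.shape[1] - pool_size + 1, stride)
    return np.array([[x[r:r + pool_size, c:c + pool_size].max() for c in cols]
                     for r in rows])

print(max_pool(x))  # [[4 7]
                    #  [5 6]]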
from sklearn.datasets import load_digits
d = load_digits()
X = d.images
# reshape to n_samples, n_channels, n_x, n_y
# and convert to 32-bit (to train on GPU)
X = X.reshape((-1, 1, 8, 8)).astype('f4')
# standardize
X = (X - X.mean()) / X.std()
# convert target to 32-bit int
y = d.target.astype('i4')
from lasagne import layers
from lasagne.nonlinearities import softmax
my_layers = [
    # 8x8 grayscale digit images, one channel
    (layers.InputLayer, {'shape': (None, 1, 8, 8)}),
    # convolutional layer: 64 filters of size 3x3
    (layers.Conv2DLayer, {'num_filters': 64,
                          'filter_size': (3, 3)}),
    # max pooling: pool size 3, stride 2
    (layers.MaxPool2DLayer, {'pool_size': (3, 3),
                             'stride': (2, 2)}),
    # fully connected layer with 20 units
    (layers.DenseLayer, {'num_units': 20}),
    # output layer: 10 classes, softmax probabilities
    (layers.DenseLayer, {'num_units': 10,
                         'nonlinearity': softmax}),
]
from nolearn.lasagne import NeuralNet
net = NeuralNet(my_layers, verbose=1,
max_epochs=20,
update_learning_rate=0.02)
# train network
net.fit(X, y)
# make predictions
y_pred = net.predict(X)
$python nn_example.py
Using gpu device 0: GeForce GTX 980 Ti (CNMeM is disabled)
## Layer information
# name size
--- ---------- ------
0 input0 1x8x8
1 conv2d1 64x6x6
2 maxpool2d2 64x3x3
3 dense3 20
4 dense4 10
epoch train loss valid loss train/val valid acc dur
------- ------------ ------------ ----------- ----------- -----
1 2.17409 1.94451 1.11807 0.55025 0.09s
2 1.67648 1.35972 1.23297 0.64005 0.09s
3 1.03381 0.75170 1.37530 0.89149 0.10s
4 0.56765 0.41487 1.36825 0.90712 0.10s
5 0.33763 0.27013 1.24991 0.94387 0.09s
...
18 0.02589 0.07183 0.36048 0.98438 0.10s
19 0.02325 0.07053 0.32962 0.98438 0.09s
20 0.02129 0.06951 0.30625 0.98698 0.10s
Kaggle https://www.kaggle.com
Problem
● input data
○ high resolution color retinal images
○ training set: 35126 images
○ test set: 53576 images
● target
○ stage of diabetic retinopathy
■ 0 No DR
■ 1 Mild
■ 2 Moderate
■ 3 Severe
■ 4 Proliferative DR
● Highly unbalanced dataset
Metric
● Quadratic (Weighted) Cohen’s kappa (Îș)
○ Agreement between rating of two parties
■ 0 agreement by chance
■ 0 - 0.2 poor
■ ...
■ 0.8 - 1.0 very good
■ 1 total agreement
● “Weighted” → Ordinal classification problem
● “Less penalty for classifying a 0 as a 1 than as a 2”
● Our “solution”:
○ Regression with mean squared error
○ thresholding at [0.5, 1.5, 2.5, 3.5]
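A sketch of the thresholding step and the metric (numpy for thresholding; assumes a scikit-learn version that provides cohen_kappa_score with quadratic weights, the leaderboard score itself was computed by Kaggle):

import numpy as np
from sklearn.metrics import cohen_kappa_score

# continuous regression outputs -> ordinal classes 0..4 via the thresholds above
thresholds = [0.5, 1.5, 2.5, 3.5]
y_continuous = np.array([0.1, 0.7, 1.9, 3.2, 4.4])
y_pred = np.digitize(y_continuous, thresholds)        # [0 1 2 3 4]

y_true = np.array([0, 1, 2, 4, 4])
kappa = cohen_kappa_score(y_true, y_pred, weights='quadratic')
print(kappa)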
Dataset
(figure: example retina images for stage 0 through stage 4)
What are we looking for?
Saiprasad Ravishankar, Arpit Jain, Anurag Mittal
IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2009
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5206763
Preprocessing
● Simple heuristics to isolate and crop foreground
● Resize to 512 pixel squares
● Standardize each channel (RGB) to have zero mean and unit variance
● That’s it!
● But, training large networks requires a lot of data.
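A sketch of the per-channel standardization step (numpy; the array layout (n_images, channels, height, width) is an assumption):

import numpy as np

def standardize_channels(images):
    # images: (n_images, n_channels, height, width), e.g. RGB crops resized to 512x512
    # zero mean, unit variance per channel, computed over the whole set
    mean = images.mean(axis=(0, 2, 3), keepdims=True)
    std = images.std(axis=(0, 2, 3), keepdims=True)
    return (images - mean) / std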
Augmentation
● Problem: “Small” Dataset
● Artificially increase size of dataset
○ translation
○ rotation
■ can become the bottleneck for large images
○ flipping
○ shearing
○ stretching
○ color augmentation*
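A sketch of random rotation and flipping with skimage (parameters are illustrative, not the competition settings; shearing, stretching and color augmentation are omitted here):

import numpy as np
from skimage.transform import rotate

def augment(image):
    # image: (height, width, channels) float array
    # random rotation (can become the bottleneck for large images)
    image = rotate(image, angle=np.random.uniform(0, 360), mode='constant')
    # random horizontal / vertical flips
    if np.random.rand() < 0.5:
        image = np.fliplr(image)
    if np.random.rand() < 0.5:
        image = np.flipud(image)
    return image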
Layer Types
● ☑Convolutional Layer (find features)
● ☑Max Pooling Layer (find features + reduce size)
● ☑Fully Connected Layer (prediction from features)
● Dropout Layer (model averaging, against overfitting)
○ Zero half the neurons
○ Network becomes different for each mini-batch
○ e.g. activations 5 2 8 1 9 2 2 5 → 5 0 0 1 9 0 2 0
● Maxout Layer
○ Take maximum value over 2 neurons
○ e.g. (3, 5) → 5, (2, 1) → 2
(illustrated in the numpy sketch below)
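A numpy illustration of dropout and maxout using the numbers above (illustration only; the real layers, including the dropout rescaling, come from the library):

import numpy as np

activations = np.array([5., 2., 8., 1., 9., 2., 2., 5.])

# dropout: randomly zero (roughly) half the activations; a new mask every mini-batch
mask = np.random.rand(activations.size) > 0.5
print(activations * mask)        # e.g. [5. 0. 0. 1. 9. 0. 2. 0.]

# maxout over pairs of neurons: (3, 5) -> 5, (2, 1) -> 2
pairs = np.array([3., 5., 2., 1.]).reshape(-1, 2)
print(pairs.max(axis=1))         # [5. 2.]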
Network Architecture
● Input image is broken down into many tiny “images” (feature maps) a few pixels wide.
● Extract features on the way through the network.
● Layers with stride 2 halve the width and height of the feature maps.
● Handy “units” (see the layer sketch below):
○ 2 - 4 convolutional layers with small filters (2 x 2 to 5 x 5)
○ followed by a max pooling layer with stride 2 and pool size 3
● Add ReLUs (or similar).
● 1 or 2 fully connected layers with dropout at the end.
● Weight decay for convolutional layers.
● “If it doesn’t overfit, you should probably make it bigger.”
● In competition:
○ Tiny features → larger input images: 64 → 128 → 256 → 512 (→ 768)
○ More and more convolution and pooling layers
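A sketch of one such “handy unit” in the layer-spec style used earlier (lasagne/nolearn assumed; the filter count and nonlinearity are illustrative, not the competition configuration):

from lasagne import layers
from lasagne.nonlinearities import leaky_rectify

# one "unit": a few small-filter conv layers followed by max pooling (pool size 3, stride 2)
conv_pool_unit = [
    (layers.Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3),
                          'nonlinearity': leaky_rectify}),
    (layers.Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3),
                          'nonlinearity': leaky_rectify}),
    (layers.MaxPool2DLayer, {'pool_size': (3, 3), 'stride': (2, 2)}),
]

Stacking several such units and finishing with dropout and fully connected layers gives a network along the lines of the diagram on the next slide.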
Network Architecture
(diagram)
● building blocks: Convolution (4x4), Pooling (3x3, stride 2), Dropout, Maxout, Fully Connected
● feature map / unit counts as labelled in the diagram: 32, 64, 128, 1024, 256, 512, 1024, 512, 512
Training
● Deep networks (many layers) are sometimes hard to train.
● Initialization strategy is very important.
● Learning rate:
a. Find the largest value for which the loss still converges
b. When the loss stops decreasing, reduce the learning rate by a factor of 5 or 10 (sketch below)
● Use the “Adam” optimizer or “Nesterov Momentum”.
● In competition:
a. Dynamic resampling to deal with class imbalance.
b. Train smaller network and use learned weights to initialize bigger
networks.
c. 200 - 250 epochs
d. ~ 2 days to train one network
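A sketch of the “reduce the learning rate when the loss stops improving” heuristic (plain Python; patience and factor are illustrative, the epoch loop and loss values come from the training code):

def adjust_learning_rate(learning_rate, valid_losses, patience=5, factor=10.0):
    # valid_losses: validation loss per epoch so far
    # if the best loss of the last `patience` epochs is no better than before, divide the rate
    if (len(valid_losses) > patience
            and min(valid_losses[-patience:]) >= min(valid_losses[:-patience])):
        return learning_rate / factor
    return learning_rate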
What does it “see”?
(figure panels: input (stage 1) | 5x5 pixel occluded | prediction overlay)
Visualizing and Understanding Convolutional Networks
Matthew D Zeiler, Rob Fergus
http://arxiv.org/abs/1311.2901
What does it “see”?
input (stage 1)
Visualizing and Understanding Convolutional Networks
Matthew D Zeiler, Rob Fergus
http://arxiv.org/abs/1311.2901
Feature Extraction
● Output of any layer can be used as features
● Could use pretrained networks for feature extraction (unless kaggle)
output of last pooling layer → features
● Original score: Îș 0.79 (~ rank 13 on final kaggle leaderboard)
● Features of last pooling layer:
○ Blend Network
■ features → FC 32 → maxout → FC 32 → maxout → output
○ Îș ~ 0.80 (~ rank 12)
○ fully connected layers in our convolutional network not well trained
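A sketch of extracting last-pooling-layer features from the small nolearn example earlier (assumes the trained net from that example; 'maxpool2d2' is the auto-generated layer name shown in its output, and lasagne's get_output is used to evaluate that layer on a numpy batch):

import lasagne

# net is the fitted nolearn NeuralNet; nolearn exposes its layers in net.layers_
pool_layer = net.layers_['maxpool2d2']

# symbolic output of the pooling layer, evaluated on the numpy array X
# deterministic=True disables dropout at feature-extraction time
features = lasagne.layers.get_output(pool_layer, X, deterministic=True).eval()
features = features.reshape(len(X), -1)   # flatten to (n_samples, n_features)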
Test Time Averaging (TTA)
● From winners of kaggle plankton competition early 2015:
https://github.com/benanne/kaggle-ndsb
● Average output of last pooling layer over multiple augmentations for
each eye
● Use mean and standard deviation of each feature
● Same blend network
○ features → FC 32 → maxout → FC 32 → maxout → output
● with TTA mean Îș ~ 0.81 (~ rank 11)
● with TTA mean + standard deviation Îș ~ 0.815 (~ rank 10)
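A sketch of the TTA feature summary (numpy; extract_features and augment are hypothetical stand-ins for the feature extraction and augmentation steps above):

import numpy as np

def tta_features(image, extract_features, augment, n_augmentations=20):
    # extract features for several augmented copies of the same eye ...
    feats = np.array([extract_features(augment(image))
                      for _ in range(n_augmentations)])
    # ... and summarize them by per-feature mean and standard deviation
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])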
“Per Patient” Blend
● both eyes for each patient in the dataset
● some images look very “different”
● correlation between labels for left and right eye is very high: ρ ~ 0.85
(figure: right eye label distribution for patients with left eye label 3)
● use TTA features from left and right eye and blend
○ [features of this eye, features of patient’s other eye, left eye indicator]
■ left: [left eye features, right eye features, 1] → left eye label
■ right: [right eye features, left eye features, 0] → right eye label
○ mean, standard deviation, indicator: 8193 features
● Train blend network: Îș → ~ 0.84 (~ rank 2 - 3)
(see the feature assembly sketch below)
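A sketch of assembling the per-patient blend features described above (numpy; left and right are one patient's TTA feature vectors):

import numpy as np

def patient_features(left, right):
    # each eye sees its own features, its partner eye's features and a left-eye indicator
    left_row = np.concatenate([left, right, [1.0]])    # -> left eye label
    right_row = np.concatenate([right, left, [0.0]])   # -> right eye label
    return left_row, right_row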
Final Result
● Ensembling
○ averaged results from 2 similar network architectures and 3 sets of
weights each: Îș → ~ 0.845
HK Electric wins too
Thank you
Q&A
● edu:
○ https://mitpress.mit.edu/books/introduction-machine-learning
○ https://www.coursera.org/course/neuralnets
● code:
○ https://github.com/Lasagne/Lasagne
○ https://github.com/dnouri/nolearn
○ https://github.com/BVLC/caffe
○ https://github.com/dmlc/mxnet
○ https://github.com/tensorflow/tensorflow
○ http://keras.io/
○ https://github.com/scikit-learn/scikit-learn
