An Introduction to Supervised Machine Learning and Pattern Classification: The Big Picture

Practical Data Science. Slides of a 1-hour introductory talk about predictive modeling using machine learning, with a focus on supervised learning.
  1. Practical Data Science An Introduction to Supervised Machine Learning and Pattern Classification: The Big Picture Michigan State University NextGen Bioinformatics Seminars - 2015 Sebastian Raschka Feb. 11, 2015
  2. A Little Bit About Myself ... PhD candidate in Dr. L. Kuhn’s Lab, developing software & methods for - Protein-ligand docking - Large-scale drug/inhibitor discovery and some other machine learning side-projects …
  3. What is Machine Learning? http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram “Field of study that gives computers the ability to learn without being explicitly programmed.” (Arthur Samuel, 1959) By Phillip Taylor [CC BY 2.0]
  4. http://commons.wikimedia.org/wiki/ File:American_book_company_1916._letter_envelope-2.JPG#filelinks [public domain] https://flic.kr/p/5BLW6G [CC BY 2.0] Text Recognition Spam Filtering Biology Examples of Machine Learning
  5. Examples of Machine Learning http://googleresearch.blogspot.com/2014/11/a-picture-is-worth-thousand-coherent.html By Steve Jurvetson [CC BY 2.0] Self-driving cars Photo search and many, many more ... Recommendation systems http://commons.wikimedia.org/wiki/File:Netflix_logo.svg [public domain]
  6. How many of you have used machine learning before?
  7. Our Agenda • Concepts and the big picture • Workflow • Practical tips & good habits
  8. Learning. Supervised: • Labeled data • Direct feedback • Predict outcome/future. Reinforcement: • Decision process • Reward system • Learn series of actions. Unsupervised: • No labels • No feedback • “Find hidden structure”.
  9. Unsupervised learning vs. supervised learning. Clustering: [DBSCAN on a toy dataset]. Classification: [SVM on 2 classes of the Wine dataset]. Regression: [Soccer Fantasy Score prediction]. Today’s topic: supervised learning.
  10. Nomenclature: Instances (samples, observations); Features (attributes, dimensions); Classes (targets). The IRIS dataset (https://archive.ics.uci.edu/ml/datasets/Iris):

          sepal_length  sepal_width  petal_length  petal_width  class
      1   5.1           3.5          1.4           0.2          setosa
      2   4.9           3.0          1.4           0.2          setosa
      …   …             …            …             …            …
      50  6.4           3.2          4.5           1.5          versicolor
      …   …             …            …             …            …
      150 5.9           3.0          5.1           1.8          virginica
  11. Classification: 1) Learn from training data; 2) Map unseen (new) data. [Scatter plot in the (x1, x2) plane with class1 and class2 regions and an unlabeled “?” point.]
  12. [Supervised learning workflow chart, Sebastian Raschka 2014, licensed under a Creative Commons Attribution 4.0 International License: Raw Data Collection → Pre-Processing (Missing Data, Feature Extraction, Feature Scaling, Sampling) → Split into Training Dataset and Test Dataset → Pre-Processing (Feature Selection, Dimensionality Reduction) → Learning Algorithm Training → Post-Processing (Cross Validation, Performance Metrics, Model Selection, Hyperparameter Optimization, Refinement) → Final Model Evaluation → Final Classification/Regression Model → Prediction on New Data (with the same pre-processing).]
  13. [Supervised learning workflow chart repeated from slide 12.]
  14. A Few Common Classifiers: Decision Tree; Perceptron; Naive Bayes; Ensemble Methods (Random Forest, Bagging, AdaBoost); Support Vector Machine; K-Nearest Neighbor; Logistic Regression; Artificial Neural Network / Deep Learning.
  15. Generative Algorithms • Model a more general problem: how the data was generated, i.e., the distribution of the class; the joint probability distribution p(x,y). • E.g., Naive Bayes, Bayesian Belief Network classifier, Restricted Boltzmann Machine, … Discriminative Algorithms • Map x → y directly. • E.g., distinguish between people speaking different languages without learning the languages. • E.g., Logistic Regression, SVM, Neural Networks, …
  16. Examples of Discriminative Classifiers: Perceptron. [Diagram: inputs xi1, xi2 plus a constant 1, with weights w1, w2, w0, feed a weighted sum Σ that produces the output ŷi.] Net input: wᵀx = w0 + w1x1 + w2x2, with output ŷi = 1 if wᵀxi ≥ θ, and -1 otherwise (y ∈ {-1, 1}; threshold θ, here 0). Update rule: wj(t+1) = wj(t) + η(yi − ŷi)xij, repeated until t+1 = max iterations or error = 0, where wj = weight, xi = training sample, yi = desired output, ŷi = actual output, t = iteration step, η = learning rate. F. Rosenblatt. The perceptron, a perceiving and recognizing automaton (Project Para). Cornell Aeronautical Laboratory, 1957.
  17. Discriminative Classifiers: Perceptron - Binary classifier (one vs. all, OVA) - Convergence problems (set n iterations) - Modification: stochastic gradient descent - “Modern” perceptron: Support Vector Machine (maximize margin) - Multilayer perceptron (MLP). [Same perceptron diagram as slide 16.] F. Rosenblatt. The perceptron, a perceiving and recognizing automaton (Project Para). Cornell Aeronautical Laboratory, 1957.
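A minimal Python sketch of the perceptron rule from slides 16–17 (the function name, learning rate, and stopping defaults are illustrative assumptions, not from the slides):

    import numpy as np

    def train_perceptron(X, y, eta=0.1, n_iter=10):
        """Learn weights w (w[0] = bias) for labels y in {-1, 1}."""
        w = np.zeros(1 + X.shape[1])
        for _ in range(n_iter):                # fixed n iterations: convergence is not guaranteed
            errors = 0
            for xi, yi in zip(X, y):
                y_hat = 1 if (w[0] + np.dot(xi, w[1:])) >= 0.0 else -1   # threshold θ = 0
                update = eta * (yi - y_hat)    # w_j(t+1) = w_j(t) + η (y_i − ŷ_i) x_ij
                w[1:] += update * xi
                w[0] += update                 # bias weight sees a constant input of 1
                errors += int(update != 0.0)
            if errors == 0:                    # stop early once the training error is 0
                break
        return w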
  18. Generative Classifiers: Naive Bayes. Bayes’ theorem: P(ωj | xi) = P(xi | ωj) P(ωj) / P(xi), i.e., posterior probability = (likelihood × prior probability) / evidence. Iris example: P(“Setosa” | xi), xi = [4.5 cm, 7.4 cm].
  19. Generative Classifiers: Naive Bayes. Bayes’ theorem: P(ωj | xi) = P(xi | ωj) P(ωj) / P(xi). Decision rule: predicted class label ω̂ = argmax over j = 1, …, m of P(ωj | xi); e.g., j ∈ {Setosa, Versicolor, Virginica}.
  20. Generative Classifiers: Naive Bayes. Prior probability (class frequency): P(ωj) = Nωj / Nc. Evidence: P(xi) (cancels out in the decision rule). Class-conditional probability (here a Gaussian kernel): P(xik | ωj) = 1/√(2π σωj²) · exp(−(xik − μωj)² / (2 σωj²)), with the naive factorization P(xi | ωj) = ∏k P(xik | ωj).
  21. Generative Classifiers: Naive Bayes - Naive conditional independence assumption typically violated - Works well for small datasets - Multinomial model still quite popular for text classification (e.g., spam filter)
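As a hedged illustration of slides 18–21, a Gaussian Naive Bayes fit with scikit-learn (the library is listed on slide 53; this exact snippet is an assumption, not from the talk):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

    gnb = GaussianNB().fit(X_train, y_train)   # estimates per-class μ and σ for each feature
    print(gnb.predict_proba(X_test[:3]))       # posterior probabilities P(ω_j | x_i)
    print(gnb.score(X_test, y_test))           # accuracy on the held-out data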
  22. Non-Parametric Classifiers: K-Nearest Neighbor - Simple! - Lazy learner - Very susceptible to the curse of dimensionality. [Illustration: neighborhoods for, e.g., k = 1 and k = 3.]
  23. Iris Example. [Decision-region plots for Setosa, Virginica, and Versicolor, with the settings k = 3, Mahalanobis distance, uniform weights; C = 3; depth = 2.]
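A small sketch mirroring slide 23's k-NN settings (k = 3, Mahalanobis distance, uniform weights); wiring these into scikit-learn as shown is an assumption:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    knn = KNeighborsClassifier(
        n_neighbors=3,
        weights='uniform',
        metric='mahalanobis',
        metric_params={'V': np.cov(X.T)},   # Mahalanobis distance needs a covariance matrix
    )
    knn.fit(X, y)                           # lazy learner: fit essentially stores the data
    print(knn.predict(X[:3]))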
  24. Decision Tree. petal length <= 2.45? Yes → Setosa; No → petal length <= 4.75? Yes → Versicolor; No → Virginica (shown at depth = 2 and depth = 4). Entropy = −∑i pi logk pi, e.g., for two equally likely classes: 2 · (−0.5 log2(0.5)) = 1. Information Gain = entropy(parent) − [avg entropy(children)].
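A short sketch of the entropy and information-gain formulas on slide 24 (the function and the example split counts are illustrative):

    import numpy as np

    def entropy(p):
        """Entropy of a class-probability vector, base 2."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                              # treat 0 · log(0) as 0
        return float(-np.sum(p * np.log2(p)))

    print(entropy([0.5, 0.5]))                    # 1.0, i.e., 2 · (−0.5 · log2(0.5))

    # Information gain = entropy(parent) − [weighted avg entropy(children)],
    # e.g., the Iris split "petal length <= 2.45" isolating the 50 Setosa samples:
    parent = entropy([50/150, 50/150, 50/150])
    left, n_left = entropy([1.0]), 50             # pure Setosa leaf
    right, n_right = entropy([0.5, 0.5]), 100     # Versicolor/Virginica mix
    print(parent - (n_left*left + n_right*right) / (n_left + n_right))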
  25. "No Free Lunch" :( Roughly speaking: “No one model works best for all possible situations.” Our model is a simplification of reality Simplification is based on assumptions (model bias) Assumptions fail in certain situations D. H. Wolpert. The supervised learning no-free-lunch theorems. In Soft Computing and Industry, pages 25–42. Springer, 2002.
  26. Which Algorithm? • What is the size and dimensionality of my training set? • Is the data linearly separable? • How much do I care about computational efficiency? - Model building vs. real-time prediction time - Eager vs. lazy learning / on-line vs. batch learning - Prediction performance vs. speed • Do I care about interpretability, or should it “just work well”? • ...
  27. [Supervised learning workflow chart repeated from slide 12.]
  28. Missing Values: - Remove features (columns) - Remove samples (rows) - Imputation (mean, nearest neighbor, …) Sampling: - Random split into training and validation sets - Typically 60/40, 70/30, 80/20 - Don’t use the validation set until the very end! (overfitting) Feature Scaling: e.g., standardization z = (xik − μk) / σk (use the same parameters for the test/new data!): - Faster convergence (gradient descent) - Distances on the same scale (k-NN with Euclidean distance) - Mean centering for free - Normally distributed data - Numerical stability by avoiding small weights
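A minimal sketch of standardization with scikit-learn, fitting μ_k and σ_k on the training data only, as the slide warns (the toy numbers are made up):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X_train = np.array([[10.1, 1.0], [13.5, 2.0], [15.3, 3.0]])
    X_new = np.array([[12.0, 2.5]])

    scaler = StandardScaler().fit(X_train)   # learns μ_k and σ_k per feature
    X_train_std = scaler.transform(X_train)
    X_new_std = scaler.transform(X_new)      # same parameters reused for test/new data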
  29. Categorical Variables

         color  size  prize  class label
      0  green  M     10.1   class1
      1  red    L     13.5   class2
      2  blue   XL    15.3   class1

      color is nominal, size is ordinal. Ordinal mapping: M → 1, L → 2, XL → 3. Nominal (one-hot) encoding: green → (1,0,0), red → (0,1,0), blue → (0,0,1), giving:

         class label  color=blue  color=green  color=red  prize  size
      0  0            0           1            0          10.1   1
      1  1            0           0            1          13.5   2
      2  0            1           0            0          15.3   3
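A small pandas sketch of slide 29's encodings, an ordinal map for size and one-hot columns for the nominal color (pandas is listed on slide 53; this exact snippet is illustrative):

    import pandas as pd

    df = pd.DataFrame({
        'color': ['green', 'red', 'blue'],
        'size': ['M', 'L', 'XL'],
        'prize': [10.1, 13.5, 15.3],
        'class label': ['class1', 'class2', 'class1'],
    })

    df['size'] = df['size'].map({'M': 1, 'L': 2, 'XL': 3})            # ordinal: M < L < XL
    df['class label'] = df['class label'].map({'class1': 0, 'class2': 1})
    df = pd.get_dummies(df, columns=['color'])                        # one column per color
    print(df)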
  30. [Supervised learning workflow chart repeated from slide 12.]
  31. Generalization Error and Overfitting How well does the model perform on unseen data?
  32. Generalization Error and Overfitting
  33. Error Metrics: Confusion Matrix. [Confusion matrix for a linear SVM on sepal/petal lengths, with TP, FN, FP, TN cells; here “setosa” = “positive”.]
  34. Error Metrics. [Same confusion matrix for a linear SVM on sepal/petal lengths; here “setosa” = “positive”.] Accuracy = (TP + TN) / (FP + FN + TP + TN) = 1 − Error. False Positive Rate = FP / N. True Positive Rate (Recall) = TP / P. Precision = TP / (TP + FP). “Micro” and “macro” averaging for multi-class.
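A hedged sketch computing slide 34's metrics on Iris with “setosa” as the positive class (the linear SVM and split details are assumptions, loosely following slide 33's caption):

    from sklearn.datasets import load_iris
    from sklearn.metrics import confusion_matrix, precision_score, recall_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    y = (y == 0).astype(int)                     # setosa = positive, rest = negative
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

    y_pred = SVC(kernel='linear').fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    print('accuracy  =', (tp + tn) / (tp + tn + fp + fn))
    print('precision =', precision_score(y_te, y_pred))
    print('recall    =', recall_score(y_te, y_pred))     # true positive rate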
  35. Receiver Operating Characteristic (ROC) Curves
  36. Model Selection: k-fold cross-validation (k = 4). [Diagram: the complete dataset is split into a training dataset and a test dataset; the training dataset is divided into folds 1–4; in the 1st–4th iterations a different fold is held out as the test set while the remaining folds are used for training, the error is calculated in each iteration, and the average error is computed across all k iterations.]
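The same procedure in a few lines of scikit-learn, averaging one score per fold (the classifier choice is illustrative):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=4)
    print(scores, scores.mean())    # score per fold, then the average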
  37. k-fold CV and ROC
  38. Feature Selection - Domain knowledge - Variance threshold - Exhaustive search - Decision trees - … IMPORTANT! (Noise, overfitting, curse of dimensionality, efficiency.) Simplest example, Greedy Backward Selection: start with X = [x1, x2, x3, x4], drop one feature at a time, e.g., X = [x1, x3, x4], then X = [x1, x3], and stop once d = k features remain; a sketch follows below.
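A sketch of greedy backward selection under the stated scheme: repeatedly drop the feature whose removal hurts cross-validated accuracy the least, until k features remain (the classifier and scoring are assumptions):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    features, k = list(range(X.shape[1])), 2
    while len(features) > k:
        # score each candidate subset with one feature removed
        scores = {f: cross_val_score(KNeighborsClassifier(),
                                     X[:, [g for g in features if g != f]], y, cv=4).mean()
                  for f in features}
        features.remove(max(scores, key=scores.get))   # cheapest feature to drop
    print('kept features:', features)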
  39. Dimensionality Reduction • Transformation onto a new feature subspace • e.g., Principal Component Analysis (PCA) • Find directions of maximum variance • Retain most of the information
  40. PCA in 3 Steps. 0. Standardize data: z = (xik − μk) / σk. 1. Compute the covariance matrix: σjk = 1/(n − 1) ∑i=1..n (xij − μj)(xik − μk), e.g., for 4 features:

      Σ = [ σ1²  σ12  σ13  σ14
            σ21  σ2²  σ23  σ24
            σ31  σ32  σ3²  σ34
            σ41  σ42  σ43  σ4² ]
  41. PCA in 3 Steps. 2. Eigendecomposition of the covariance matrix and sorting eigenvalues (from high to low): Σ v = λ v. Eigenvectors [[ 0.52237162 -0.37231836 -0.72101681 0.26199559] [-0.26335492 -0.92555649 0.24203288 -0.12413481] [ 0.58125401 -0.02109478 0.14089226 -0.80115427] [ 0.56561105 -0.06541577 0.6338014 0.52354627]] Eigenvalues [ 2.93035378 0.92740362 0.14834223 0.02074601]
  42. 3. Select top k eigenvectors and transform data PCA in 3 Steps Eigenvectors [[ 0.52237162 -0.37231836 -0.72101681 0.26199559] [-0.26335492 -0.92555649 0.24203288 -0.12413481] [ 0.58125401 -0.02109478 0.14089226 -0.80115427] [ 0.56561105 -0.06541577 0.6338014 0.52354627]] Eigenvalues [ 2.93035378 0.92740362 0.14834223 0.02074601] [First 2 PCs of Iris]
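The three PCA steps of slides 40–42 in numpy (a sketch; the sign of the eigenvectors from eigh may differ from the slides' output):

    import numpy as np
    from sklearn.datasets import load_iris

    X, _ = load_iris(return_X_y=True)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)     # 0. standardize
    cov = np.cov(Z.T)                            # 1. covariance matrix (4 x 4)
    eig_vals, eig_vecs = np.linalg.eigh(cov)     # 2. eigendecomposition (symmetric Σ)
    order = np.argsort(eig_vals)[::-1]           #    sort eigenvalues high → low
    W = eig_vecs[:, order[:2]]                   # 3. top k = 2 eigenvectors
    X_pca = Z @ W                                #    project data onto the subspace
    print(eig_vals[order])                       # ≈ [2.93, 0.93, 0.15, 0.02], as on slide 41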
  43. Hyperparameter Optimization: GridSearch in scikit-learn
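A minimal GridSearchCV sketch for slide 43 (the grid values are illustrative, loosely echoing the C and gamma settings shown on slide 44):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    param_grid = {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.01, 0.1]}
    gs = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5).fit(X, y)
    print(gs.best_params_, gs.best_score_)       # best hyperparameters found by CV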
  44. Non-Linear Problems - XOR gate. [Decision-region plots of different classifiers on XOR data, with the settings k = 11 and uniform weights; C = 1; C = 1000 with gamma = 0.1; and depth = 4.]
  45. Kernel Trick: map onto a high-dimensional space (non-linear combinations) via a kernel function.
  46. Kernel Trick. Trick: no explicit dot product! Radial Basis Function (RBF) kernel: k(x, x′) = exp(−γ ‖x − x′‖²).
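The RBF kernel as a few lines of numpy, evaluating similarity in the implicit high-dimensional space without computing its features explicitly (the gamma value is arbitrary):

    import numpy as np

    def rbf_kernel(x1, x2, gamma=0.1):
        # k(x, x') = exp(-gamma * ||x - x'||^2)
        return np.exp(-gamma * np.sum((x1 - x2) ** 2))

    print(rbf_kernel(np.array([1.0, 2.0]), np.array([2.0, 0.0])))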
  47. Kernel PCA. [Plots comparing the projection onto PC1 with linear PCA vs. PC1 with kernel PCA.]
  48. [Supervised learning workflow chart repeated from slide 12.]
  49. Questions? mail@sebastianraschka.com https://github.com/rasbt @rasbt Thanks!
  50. Additional Slides
  51. Inspiring Literature P. N. Klein. Coding the Matrix: Linear Algebra Through Computer Science Applications. Newtonian Press, 2013. R. Schutt and C. O’Neil. Doing Data Science: Straight Talk from the Frontline. O’Reilly Media, Inc., 2013. S. Gutierrez. Data Scientists at Work. Apress, 2014. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2nd Edition. New York, 2001.
  52. Useful Online Resources https://www.coursera.org/course/ml http://stats.stackexchange.com http://www.kaggle.com
  53. My Favorite Tools http://stanford.edu/~mwaskom/software/seaborn/ Seaborn http://www.numpy.org http://pandas.pydata.org http://scikit-learn.org/stable/ http://ipython.org/notebook.html
  54. Which one to pick? [Two candidate decision boundaries separating class1 from class2.]
  55. The problem of overfitting: generalization error! [Decision-boundary plot for class1 vs. class2.]
