Algorithms: The Basic Methods
1-rule Algorithm (1R) A way to find very simple classification rules. Generates a one-level decision tree that tests just one attribute. Steps: Consider each attribute in turn; there will be one branch in the decision tree for each value of this attribute; allot the majority class to each branch. Repeat the same for all attributes and choose the one with minimum error
1R Pseudo Code Pseudo code for 1R
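The pseudo code itself is an image in the original slides. As a rough stand-in, here is a minimal 1R sketch in Python; the data representation (each instance as a dict of nominal attribute values plus a 'class' key) is an assumption, and this is not WEKA's OneR implementation.

```python
from collections import Counter, defaultdict

def one_r(instances, attributes, class_key="class"):
    best_attr, best_rules, best_errors = None, None, None
    for attr in attributes:
        # For each value of this attribute, count how often each class occurs.
        counts = defaultdict(Counter)
        for inst in instances:
            counts[inst[attr]][inst[class_key]] += 1
        # One rule per value: predict the majority class; everything else is an error.
        rules, errors = {}, 0
        for value, class_counts in counts.items():
            majority, majority_count = class_counts.most_common(1)[0]
            rules[value] = majority
            errors += sum(class_counts.values()) - majority_count
        if best_errors is None or errors < best_errors:
            best_attr, best_rules, best_errors = attr, rules, errors
    return best_attr, best_rules, best_errors
```

On the weather data this reproduces the numbers on the next slides: splitting on Outlook gives the rules sunny → no, overcast → yes, rainy → yes, with 4 errors out of 14.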
1R in action Consider the problem of weather’s effect on play. Data is:
1R in action Let us consider the Outlook attribute first. Total Error = 4/14
1R in action Consolidated table for all the attributes; ‘*’ represents an arbitrary choice among equivalent options:
1R in action From this table we can see that a decision tree on Outlook and Humidity gives minimum error. We can choose either of these two attributes and the corresponding rules as our classification rule. Missing is treated as just another attribute value, with one branch in the decision tree dedicated to missing values like any other attribute value
Numeric attributes and 1R To deal with numeric attributes, we discretize them. The steps are: Sort instances on the basis of the attribute’s value; place breakpoints where the class changes. These breakpoints give us discrete numerical ranges, and the majority class of each range is taken as its class
Numeric attributes and 1R We have the following data for the weather example,
Numeric attributes and 1R Applying the steps we get: The problem with this approach is that we can get a large number of divisions, i.e. overfitting. Therefore we enforce a minimum number of instances per division; for example, taking min = 3 in the above example, we get:
Numeric attributes and 1R When two adjacent divisions have the same majority class, we can join these two divisions. So after this we will get: Which gives the following classification rules:
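As a rough illustration of the breakpoint idea only, here is a simplified sketch in Python. Assumptions: one numeric attribute with nominal classes; it places a breakpoint wherever the class changes between distinct values and takes the majority class per range, but it deliberately omits the minimum-bucket and merging refinements described above.

```python
from bisect import bisect_right
from collections import Counter

def class_change_breakpoints(values, classes):
    pairs = sorted(zip(values, classes))
    breaks = []
    for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
        if c1 != c2 and v1 != v2:
            breaks.append((v1 + v2) / 2.0)   # breakpoint halfway between the two values
    return breaks

def majority_class_per_range(values, classes, breaks):
    counts = {}
    for v, c in zip(values, classes):
        counts.setdefault(bisect_right(breaks, v), Counter())[c] += 1
    # Map range index -> majority class of that range
    return {i: cnt.most_common(1)[0][0] for i, cnt in sorted(counts.items())}
```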
Statistical Modeling Another classification technique Assumptions (for a given class): All attributes contribute equally to decision making All attributes are independent of each other
Statistical Modeling: An example Given Data:
Statistical Modeling: An example Data Description: The upper half shows how many times a value of an attribute occurs for a class The lower half shows the same data as fractions For example, class is yes 9 times; for class = yes, outlook = sunny 2 times; so under outlook = sunny and class = yes we have 2/9
Statistical Modeling Problem at hand: Solution: Taking into consideration that all attributes contribute equally and are independent: Likelihood of yes = 2/9 x 3/9 x 3/9 x 3/9 x 9/14 = 0.0053 Likelihood of no = 3/5 x 1/5 x 4/5 x 3/5 x 5/14 = 0.0206
Statistical Modeling: An example Solution continued.. As can be observed, the likelihood of no is higher Using normalization, we can calculate the probabilities as: Probability of yes = (.0053)/(.0053 + .0206) = 20.5% Probability of no = (.0206)/(.0053 + .0206) = 79.5%
Statistical Modeling: An example Derivation using Bayes’ rule: According to Bayes’ rule, for a hypothesis H and evidence E that bears on that hypothesis, P[H|E] = (P[E|H] x P[H]) / P[E] For our example the hypothesis H is that play will be, say, yes, and E is the particular combination of attribute values at hand: Outlook = sunny (E1) Temperature = cool (E2) Humidity = high (E3) Windy = true (E4)
Statistical Modeling: An example Derivation using Bayes’ rule: Now since E1, E2, E3 and E4 are independent we have P[H|E] = (P[E1|H] x P[E2|H] x P[E3|H] x P[E4|H] x P[H]) / P[E] Replacing values from the table we get P[yes|E] = (2/9 x 3/9 x 3/9 x 3/9 x 9/14) / P[E] P[E] will be taken care of during normalization of P[yes|E] and P[no|E] This method is called Naïve Bayes
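As a quick sanity check of the arithmetic above, here is a minimal sketch in Python. It just multiplies the fractions from the slides' weather table; it is not WEKA's NaiveBayes classifier.

```python
# Conditional fractions for the instance (sunny, cool, high humidity, windy = true),
# taken from the table described above.
cond_yes = [2/9, 3/9, 3/9, 3/9]
cond_no = [3/5, 1/5, 4/5, 3/5]

def likelihood(conditionals, prior):
    result = prior
    for p in conditionals:
        result *= p
    return result

l_yes = likelihood(cond_yes, 9/14)   # ~0.0053
l_no = likelihood(cond_no, 5/14)     # ~0.0206
p_yes = l_yes / (l_yes + l_no)       # ~0.205, i.e. 20.5%
p_no = l_no / (l_yes + l_no)         # ~0.795, i.e. 79.5%
```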
Problem and Solution for Naïve Bayes Problem: In case we have an attribute value (Ea) for which P[Ea|H] = 0, then irrespective of the other attributes P[H|E] = 0 Solution: We can add a constant to the numerator and denominator, a technique called the Laplace Estimator; for example, with weights P1 + P2 + P3 = 1:
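As a concrete illustration (assuming the weather counts 2, 4 and 3 for outlook = sunny, overcast and rainy given class = yes, as in the table shown earlier), adding a constant mu split across the three values in proportions p1, p2, p3 with p1 + p2 + p3 = 1 gives: P[sunny|yes] = (2 + mu*p1) / (9 + mu), P[overcast|yes] = (4 + mu*p2) / (9 + mu), P[rainy|yes] = (3 + mu*p3) / (9 + mu). Choosing mu = 3 and p1 = p2 = p3 = 1/3 reduces this to the simple add-one Laplace estimator.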
Statistical Modeling: Dealing with missing attributes In case a value is missing, say for attribute Ea in the given data set, we just don’t count it while calculating P[Ea|H] In case an attribute is missing in the instance to be classified, then its factor is simply left out of the expression for P[H|E]; for example, if outlook is missing we will have: Likelihood of yes = 3/9 x 3/9 x 3/9 x 9/14 = 0.0238 Likelihood of no = 1/5 x 4/5 x 3/5 x 5/14 = 0.0343
Statistical Modeling: Dealing with numerical attributes Numeric values are handled by assuming that they follow a normal (Gaussian) probability distribution For a normal distribution we have f(x) = (1/(sqrt(2*pi)*sigma)) * exp(-((x - u)^2)/(2*sigma^2)), where u = mean, sigma = standard deviation, x = the value in the instance under consideration, and f(x) = its contribution to the likelihood figures
Statistical Modeling: Dealing with numerical attributes An example: we have the data:
Statistical Modeling: Dealing with numerical attributes So here we have calculated the mean and standard deviation of the numerical attributes, temperature and humidity, for each class For temperature = 66, the contribution of temperature = 66 to P[yes|E] is 0.0340 We do this similarly for the other numerical attributes
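A quick check of the 0.0340 figure. The mean and standard deviation come from the table on the previous slide, which is an image here, so the two numbers used below (73 and 6.2 for temperature given play = yes) are an assumption taken from the standard weather data.

```python
import math

def gaussian_density(x, mean, std):
    # Normal density from the formula above
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

print(gaussian_density(66, 73, 6.2))   # ~0.0340, the contribution of temperature = 66 to P[yes|E]
```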
Divide-and-Conquer: Constructing Decision Trees Steps to construct a decision tree recursively: Select an attribute to be placed at the root node and make one branch for each possible value Repeat the process recursively at each branch, using only those instances that reach the branch If at any time all instances at a node have the same classification, stop developing that part of the tree Problem: How to decide which attribute to split on
Divide-and-Conquer: Constructing Decision Trees Steps to find the attribute to split on: We consider all the possible attributes as options and branch them according to their different possible values Now for each possible split we calculate the Information and then find the Information gain for each attribute option Select for division the attribute that gives the maximum Information gain Do this until each branch terminates at a node which gives Information = 0
Divide-and-Conquer: Constructing Decision Trees Calculation of Information and Gain: For data (P1, P2, P3, ..., Pn) such that P1 + P2 + P3 + ... + Pn = 1: Information(P1, P2, ..., Pn) = -P1 log P1 - P2 log P2 - P3 log P3 - ... - Pn log Pn (logarithms in base 2, giving bits) Gain = Information before division – Information after division
Divide-and-Conquer: Constructing Decision Trees Example: Here we have considered each attribute individually Each is divided into branches according to its different possible values Below each branch the number of instances of each class is marked
Divide-and-Conquer: Constructing Decision Trees Calculations: Using the formula for Information, initially we have: Number of instances with class = Yes is 9 Number of instances with class = No is 5 So we have P1 = 9/14 and P2 = 5/14 Info[9/14, 5/14] = -9/14 log(9/14) - 5/14 log(5/14) = 0.940 bits Now, for example, let's consider the Outlook attribute; we observe the following:
Divide-and-Conquer: Constructing Decision Trees Example Contd. Gain by using Outlook for division        = info([9,5]) – info([2,3],[4,0],[3,2]) 				                          = 0.940 – 0.693 = 0.247 bits Gain (outlook) = 0.247 bits 	Gain (temperature) = 0.029 bits 	Gain (humidity) = 0.152 bits 	Gain (windy) = 0.048 bits So since Outlook gives maximum gain, we will use it for division And we repeat the steps for Outlook = Sunny and Rainy and stop for 	Overcast since we have Information = 0 for it
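The numbers above can be reproduced with a few lines of Python (base-2 logarithms). This is a sketch of the calculation only, not WEKA's J48 implementation.

```python
from math import log2

def info(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

before = info([9, 5])                               # 0.940 bits
splits = [[2, 3], [4, 0], [3, 2]]                   # Outlook = sunny / overcast / rainy
after = sum(sum(s) / 14 * info(s) for s in splits)  # 0.693 bits (weighted average)
gain_outlook = before - after                       # 0.247 bits
```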
Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: The problem If we follow the previously described method, it will always favor an attribute with the largest number of branches In extreme cases it will favor an attribute which has a different value for each instance: an identification code
Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: The problem The information for such an attribute is 0: info([0,1]) + info([0,1]) + info([0,1]) + ... + info([0,1]) = 0 It will hence have the maximum gain and will be chosen for branching But such an attribute is neither good for predicting the class of an unknown instance nor does it tell us anything about the structure of the division So we use the gain ratio to compensate for this
Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: Gain ratio Gain ratio = gain / split info To calculate the split info, for each attribute value we just count the number of instances it covers, irrespective of the class Then we calculate the split info; so for an identification code with 14 different values we have: info([1,1,1,...,1]) = 14 x (-1/14 x log 1/14) = 3.807 For Outlook we will have the split info: info([5,4,5]) = -5/14 x log(5/14) - 4/14 x log(4/14) - 5/14 x log(5/14) = 1.577
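Continuing the small sketch from before, the split info and gain ratio values work out as follows (the info() helper is repeated so the snippet stands alone):

```python
from math import log2

def info(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

split_info_outlook = info([5, 4, 5])                # 1.577 bits
gain_ratio_outlook = 0.247 / split_info_outlook     # ~0.157

split_info_id_code = info([1] * 14)                 # log2(14) ~ 3.807 bits
gain_ratio_id_code = 0.940 / split_info_id_code     # ~0.247, as the next slide notes
```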
Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: Gain ratio So we have: And for the ‘highly branched attribute’, gain ratio = 0.247
Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: Gain ratio Though the ‘highly branched attribute’ still has the maximum gain ratio, its advantage is greatly reduced Problem with using gain ratio: In some situations the gain ratio modification overcompensates and can lead to preferring an attribute just because its intrinsic information is much lower than that of the other attributes. A standard fix is to choose the attribute that maximizes the gain ratio, provided that the information gain for that attribute is at least as great as the average information gain for all the attributes examined
Covering Algorithms: Constructing rules Approach: Consider each class in turn Seek a way of covering all instances in it, excluding instances not belonging to this class Identify a rule to do so 	This is called a covering approach because at each stage we identify a rule that covers some of the instances
Covering Algorithms: Constructing rules Visualization: Rules for class = a: If x > 1.2 then class = a
 If x > 1.2 and y > 2.6 then class = a
 If x > 1.2 and y > 2.6 then class = a   If x > 1.4 and y < 2.4 then class = a
Covering Algorithms: Constructing rules Rules Vs Trees: A covering algorithm covers only a single class at a time, whereas division takes all the classes into account, as decision trees create a combined concept description The problem of replicated subtrees is avoided with rules Tree for the previous problem:
Covering Algorithms: Constructing rules PRISM Algorithm: A simple covering algorithm Instance space after addition of rules:
Covering Algorithms: Constructing rules PRISM Algorithm: Criteria to select an attribute for division Include as many instances of the desired class and exclude as many instances of other classes as possible If a new rule covers t instances of which p are positive examples of the class and t-p are instances of other classes, i.e. errors, then try to maximize p/t
Covering Algorithms: Constructing rules PRISM Algorithm: Example data
Covering Algorithms: Constructing rules PRISM Algorithm: In action We start with the class = hard and have the following rule: If ? then recommendation = hard Here ? represents an unknown condition For the unknown condition we have nine choices:
Covering Algorithms: Constructing rules PRISM Algorithm: In action Here the maximum p/t ratio is for astigmatism = yes (choosing randomly between equivalent options in case their coverage is also the same) So we get the rule: If astigmatism = yes then recommendation = hard We won’t stop at this rule, as it gives only 4 correct results out of the 12 instances it covers Restricting attention to the instances covered by this rule, we refine it further, starting from: If astigmatism = yes and ? then recommendation = hard
Covering Algorithms: Constructing rules PRISM Algorithm: In action Now we have the data as:
Covering Algorithms: Constructing rules PRISM Algorithm: In action And the choices for this data are: We choose tear production rate = normal, which has the highest p/t
Covering Algorithms: Constructing rules PRISM Algorithm: In action So we have the rule: If astigmatism = yes and tear production rate = normal then recommendation = hard This rule is still not perfect, so we again restrict attention to the instances it covers; now we have the data:
Covering Algorithms: Constructing rules PRISM Algorithm: In action Now, again using p/t, we finally have the rule (ties in p/t are broken in favor of maximum coverage): If astigmatism = yes and tear production rate = normal and spectacle prescription = myope then recommendation = hard And so on ...
Covering Algorithms: Constructing rules PRISM Algorithm: Pseudo Code
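The pseudo code itself is an image in the original deck. As a stand-in, here is a hedged Python sketch of the PRISM idea described above, for a single target class; the data representation (instances as dicts of nominal values plus a 'class' key) is an assumption, and this is my reading of the algorithm rather than WEKA's PRISM code.

```python
def prism(instances, attributes, target_class, class_key="class"):
    E = list(instances)
    rules = []
    while any(i[class_key] == target_class for i in E):
        rule, covered = {}, list(E)
        # Refine the rule until it is perfect or no attributes are left to add.
        while any(i[class_key] != target_class for i in covered) and len(rule) < len(attributes):
            best = None  # (accuracy p/t, positives p, attribute, value)
            for attr in attributes:
                if attr in rule:
                    continue
                for value in {i[attr] for i in covered}:
                    subset = [i for i in covered if i[attr] == value]
                    t = len(subset)
                    p = sum(1 for i in subset if i[class_key] == target_class)
                    cand = (p / t, p, attr, value)
                    if best is None or cand[:2] > best[:2]:  # maximize p/t, tie-break on p
                        best = cand
            _, _, attr, value = best
            rule[attr] = value
            covered = [i for i in covered if i[attr] == value]
        rules.append(rule)
        # Remove the instances covered by the finished rule and look for the next rule.
        E = [i for i in E if not all(i[a] == v for a, v in rule.items())]
    return rules
```

On the contact-lens data this builds rules of the kind derived step by step above (the exact result depends on how ties are broken).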
Covering Algorithms: Constructing rules Rules Vs decision lists The rules produced, for example by the PRISM algorithm, do not necessarily have to be interpreted in order like decision lists There is no order in which classes should be considered while generating the rules When such rules are used for classification, one instance may receive multiple classifications or no classification at all In these cases go for the rule with maximum coverage or the most frequent class in the training examples, respectively These difficulties are not there with decision lists, as they are interpreted in order and have a default rule at the end
Mining Association Rules Definition: An association rule can predict any number of attributes and also any combination of attributes Parameters for selecting an Association Rule: Coverage: The number of instances the rule predicts correctly Accuracy: The ratio of the coverage to the total number of instances the rule is applicable to We want association rules with high coverage and a minimum specified accuracy
Mining Association Rules Terminology: Item set: A combination of items Item: An attribute–value pair An example: For the weather data we have a table in which each column contains item sets with a different number of items With each entry the coverage is also given The table is not complete; it just gives us a good idea
Mining Association Rules
Mining Association Rules Generating Association rules: We need to specify a minimum coverage and accuracy beforehand for the rules to be generated Steps: Generate the item sets Each item set can be permuted to generate a number of rules For each rule check if the coverage and accuracy are appropriate This is how we generate association rules
Mining Association Rules Generating Association rules: For example if we take the item set: humidity = normal, windy = false, play = yes This gives seven potential rules (with accuracy):
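To see where the seven rules come from, here is a tiny sketch; the representation of an item set as a dict of attribute → value pairs is an assumption. Every non-empty subset of the items can serve as the consequent, with the remaining items as the antecedent.

```python
from itertools import combinations

def candidate_rules(item_set):
    items = list(item_set.items())
    for r in range(1, len(items) + 1):
        for consequent in combinations(items, r):
            antecedent = [item for item in items if item not in consequent]
            yield dict(antecedent), dict(consequent)

rules = list(candidate_rules({"humidity": "normal", "windy": "false", "play": "yes"}))
print(len(rules))   # 2**3 - 1 = 7 potential rules, matching the slide
```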
Linear models We will look at methods to deal with the prediction of numerical quantities We will see how to use numerical methods for classification
Linear models Numerical Prediction: Linear regression Linear regression is a technique to predict numerical quantities Here we express the class (a numerical quantity) as a linear combination of the attributes with predetermined weights For example, if we have attributes a1, a2, a3, ..., ak: x = w0 + w1*a1 + w2*a2 + ... + wk*ak Here x represents the predicted class and w0, w1, ..., wk are the predetermined weights
Linear models Numerical Prediction: Linear regression The weights are calculated from the training set To choose optimum weights we select the weights that minimize the sum of squared differences between the actual and predicted class values over the training instances:
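A minimal sketch of the least-squares fit in Python, assuming numpy is available; WEKA's LinearRegression additionally performs attribute selection, which is omitted here.

```python
import numpy as np

def fit_linear(X, y):
    # Prepend a column of ones so that w0 acts as the intercept (a0 = 1).
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes the sum of squared errors
    return w                                    # w[0] = w0, w[1:] = w1..wk

def predict(w, X):
    return w[0] + X @ w[1:]
```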
Linear models Linear classification: Multi-response linear regression For each class we use linear regression to get a linear expression For training, the output is set to 1 when the instance belongs to the class and 0 otherwise Now for an unclassified instance we evaluate the expression for each class and get an output The class whose expression gives the maximum output is selected as the predicted class This method has the drawback that the values produced are not proper probabilities
Linear models Linear classification: Logistic regression To get the output as proper probabilities in the range 0 to 1 we use logistic regression Here the output y is defined as: y = 1/(1 + e^(-x)) x = w0 + w1*a1 + w2*a2 + ... + wk*ak So the output y will lie in the range (0, 1)
Linear models Linear classification: Logistic regression To select appropriate weights for the expression of x, we maximize the log-likelihood of the training data: To generalize logistic regression we can do the calculation for each class like we did in multi-response linear regression Again the problem with this approach is that the probabilities of the different classes do not sum up to 1
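The maximized quantity (shown as an image in the original slides) is the log-likelihood of the training data; a small sketch under the assumption that the targets are coded 0/1 and that probs holds the model's predicted P[class = 1] for each instance:

```python
import math

def log_likelihood(targets, probs):
    # targets: 0/1 class labels; probs: predicted P[class = 1] for each instance
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p)
               for x, p in zip(targets, probs))
```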
Linear models Linear classification using the perceptron If instances belonging to different classes can be separated in the instance space by a hyperplane, they are called linearly separable If instances are linearly separable then we can use the perceptron learning rule for classification Steps: Let's assume that we have only 2 classes The equation of the hyperplane is (with a0 = 1): w0*a0 + w1*a1 + w2*a2 + ... + wk*ak = 0
Linear models Linear classification using the perceptron Steps (contd.): If the sum (mentioned in the previous step) is greater than 0 then we predict the first class, else the second one The algorithm to get the weights, and hence the equation of the dividing hyperplane (the perceptron), is:
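The learning rule itself is an image in the original slides; here is a minimal sketch under the assumption that the two classes are coded +1 and -1 and a bias input a0 = 1 is prepended to every instance.

```python
import numpy as np

def train_perceptron(X, labels, epochs=100):
    A = np.hstack([np.ones((X.shape[0], 1)), X])   # a0 = 1 for every instance
    w = np.zeros(A.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for a, label in zip(A, labels):
            if label * (w @ a) <= 0:    # instance on the wrong side of the hyperplane
                w += label * a          # add (or subtract) the instance vector
                mistakes += 1
        if mistakes == 0:               # all training instances correctly classified
            break
    return w
```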
Instance-based learning General steps: No preprocessing of the training set; just store the training instances as they are To classify a new instance, calculate its distance to every stored training instance The unclassified instance is allotted the class of the instance which has the minimum distance from it
Instance-based learning The distance function The distance function we use depends on our application Some of the popular distance functions are the Euclidean distance, the Manhattan distance metric, etc. The most popular distance metric is the Euclidean distance between two instances, given by sqrt((a1(1) - a1(2))^2 + (a2(1) - a2(2))^2 + ... + (ak(1) - ak(2))^2), where k is the number of attributes
Instance-based learning Normalization of data: We normalize each attribute so that it lies in the range [0,1], using the formula a_i = (v_i - min v_i) / (max v_i - min v_i) Missing attributes: For nominal attributes, if either of the two values is missing or if the values are different, the distance is taken as 1 For numeric attributes, if both values are missing the difference is 1; if only one is missing, the difference is either the normalized value of the given attribute or one minus that value, whichever is bigger
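A minimal nearest-neighbour sketch for purely numeric data, combining min-max normalization with the Euclidean distance; the missing-value handling described above is omitted, and this is an illustration rather than WEKA's IBk.

```python
import math

def min_max_scaler(instances):
    k = len(instances[0])
    lo = [min(inst[i] for inst in instances) for i in range(k)]
    hi = [max(inst[i] for inst in instances) for i in range(k)]
    def scale(inst):
        return [(inst[i] - lo[i]) / (hi[i] - lo[i]) if hi[i] > lo[i] else 0.0
                for i in range(k)]
    return scale

def classify_1nn(train, labels, query):
    # train and query are assumed to be scaled with the same scaler
    distances = [math.dist(query, inst) for inst in train]   # Euclidean distance
    return labels[distances.index(min(distances))]
```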
Instance-based learning Finding nearest neighbors efficiently: Finding the nearest neighbor by calculating the distance to every stored training instance is linear in the number of instances We make this faster by using kd-trees KD-Trees: They are binary trees that divide the input space with a hyperplane and then split each partition again, recursively They store the points in a k-dimensional space, k being the number of attributes
Instance-based learning Finding nearest neighbors efficiently:
Instance-based learning Finding nearest neighbors efficiently: Here we see a kd-tree and the instances and splits with k = 2 As you can see, not all child nodes are developed to the same depth We have marked the axis along which each division has been done (v or h in this case) Steps to find the nearest neighbor: Construct the kd-tree (explained later) Now start from the root node and, comparing the appropriate attribute (based on the axis along which the division has been done), move to the left or the right sub-tree
Instance-based learning Steps to find the nearest neighbor (contd.): Repeat this step recursively till you reach a node which is either a leaf node or has no appropriate child node (left or right) Now you have found the region to which this new instance belongs You also have a probable nearest neighbor in the form of the region's leaf node (or its immediate neighbor) Calculate the distance of the instance to the probable nearest neighbor; any closer instance must lie in a circle with radius equal to this distance
Instance-based learning Finding nearest neighbors efficiently: Steps to find the nearest neighbor (contd.): Now we redo our recursive trace, looking for an instance which is closer to our unclassified instance than the probable nearest neighbor we have We start with the immediate neighbor; if it lies in the circle then we have to consider it and all its child nodes (if any) If the condition of the previous step is not true, we check the siblings of the parent of our probable nearest neighbor We repeat these steps till we reach the root In case we find instance(s) which are nearer, we update the nearest neighbor
Instance-based learning Steps to find the nearest neighbor (contd.):
Instance-based learning Construction of a KD tree: We need to figure out two things to construct a kd-tree: Along which dimension to make the cut Which instance to use to make the cut Deciding the dimension to make the cut: We calculate the variance along each axis The division is done perpendicular to the axis with the greatest variance Deciding the instance to be used for division: Just take the median along that axis as the point of division We repeat these steps recursively till all the points are exhausted
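A compact sketch of the construction step under these choices (points as equal-length tuples; the split dimension is the one with the greatest variance and the split point is the median along it). The search procedure described above is not included, and this is not WEKA's KDTree class.

```python
import statistics

def build_kdtree(points):
    if not points:
        return None
    k = len(points[0])
    # Pick the dimension with the largest variance (greatest spread of the points).
    dim = max(range(k), key=lambda d: statistics.pvariance(p[d] for p in points))
    points = sorted(points, key=lambda p: p[dim])
    median = len(points) // 2
    return {
        "point": points[median],     # the instance used for the split
        "dim": dim,                  # the axis perpendicular to the cut
        "left": build_kdtree(points[:median]),
        "right": build_kdtree(points[median + 1:]),
    }
```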
Clustering Clustering techniques apply when, rather than predicting the class, we just want the instances to be divided into natural groups Iterative instance-based learning: k-means Here k represents the number of clusters The instance space is divided into k clusters k-means forms the clusters so that the sum of squared distances of the instances from their cluster centers is minimized
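A minimal k-means sketch, assuming numeric instances in a numpy array and randomly chosen initial centers; this follows the textbook procedure and is not WEKA's SimpleKMeans.

```python
import numpy as np

def k_means(X, k, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial centers
    for _ in range(iterations):
        # Assign each instance to its nearest cluster center (squared Euclidean distance).
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        # Move each center to the mean of the instances assigned to it.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):   # converged: centers no longer move
            break
        centers = new_centers
    return centers, labels
```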
Weitere ähnliche Inhalte

Was ist angesagt?

Mining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and CorrelationsMining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and CorrelationsJustin Cletus
 
Introduction to Machine Learning Classifiers
Introduction to Machine Learning ClassifiersIntroduction to Machine Learning Classifiers
Introduction to Machine Learning ClassifiersFunctional Imperative
 
Linear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | EdurekaLinear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | EdurekaEdureka!
 
Bias and variance trade off
Bias and variance trade offBias and variance trade off
Bias and variance trade offVARUN KUMAR
 
Decision Trees
Decision TreesDecision Trees
Decision TreesStudent
 
Ensemble learning
Ensemble learningEnsemble learning
Ensemble learningHaris Jamil
 
Overfitting & Underfitting
Overfitting & UnderfittingOverfitting & Underfitting
Overfitting & UnderfittingSOUMIT KAR
 
CART – Classification & Regression Trees
CART – Classification & Regression TreesCART – Classification & Regression Trees
CART – Classification & Regression TreesHemant Chetwani
 
decision tree regression
decision tree regressiondecision tree regression
decision tree regressionAkhilesh Joshi
 
NAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIERNAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIERKnoldus Inc.
 
Decision tree induction \ Decision Tree Algorithm with Example| Data science
Decision tree induction \ Decision Tree Algorithm with Example| Data scienceDecision tree induction \ Decision Tree Algorithm with Example| Data science
Decision tree induction \ Decision Tree Algorithm with Example| Data scienceMaryamRehman6
 

Was ist angesagt? (20)

Mining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and CorrelationsMining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and Correlations
 
Introduction to Machine Learning Classifiers
Introduction to Machine Learning ClassifiersIntroduction to Machine Learning Classifiers
Introduction to Machine Learning Classifiers
 
Probability Theory for Data Scientists
Probability Theory for Data ScientistsProbability Theory for Data Scientists
Probability Theory for Data Scientists
 
Module 4 part_1
Module 4 part_1Module 4 part_1
Module 4 part_1
 
Linear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | EdurekaLinear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | Edureka
 
Bias and variance trade off
Bias and variance trade offBias and variance trade off
Bias and variance trade off
 
Data Mining: Outlier analysis
Data Mining: Outlier analysisData Mining: Outlier analysis
Data Mining: Outlier analysis
 
3. mining frequent patterns
3. mining frequent patterns3. mining frequent patterns
3. mining frequent patterns
 
Decision Trees
Decision TreesDecision Trees
Decision Trees
 
PAC Learning
PAC LearningPAC Learning
PAC Learning
 
Ensemble learning
Ensemble learningEnsemble learning
Ensemble learning
 
Overfitting & Underfitting
Overfitting & UnderfittingOverfitting & Underfitting
Overfitting & Underfitting
 
Lecture10 - Naïve Bayes
Lecture10 - Naïve BayesLecture10 - Naïve Bayes
Lecture10 - Naïve Bayes
 
CART – Classification & Regression Trees
CART – Classification & Regression TreesCART – Classification & Regression Trees
CART – Classification & Regression Trees
 
Naive Bayes Presentation
Naive Bayes PresentationNaive Bayes Presentation
Naive Bayes Presentation
 
decision tree regression
decision tree regressiondecision tree regression
decision tree regression
 
Presentation on K-Means Clustering
Presentation on K-Means ClusteringPresentation on K-Means Clustering
Presentation on K-Means Clustering
 
First order logic
First order logicFirst order logic
First order logic
 
NAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIERNAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIER
 
Decision tree induction \ Decision Tree Algorithm with Example| Data science
Decision tree induction \ Decision Tree Algorithm with Example| Data scienceDecision tree induction \ Decision Tree Algorithm with Example| Data science
Decision tree induction \ Decision Tree Algorithm with Example| Data science
 

Andere mochten auch

Classification and Clustering Analysis using Weka
Classification and Clustering Analysis using Weka Classification and Clustering Analysis using Weka
Classification and Clustering Analysis using Weka Ishan Awadhesh
 
Weka presentation
Weka presentationWeka presentation
Weka presentationSaeed Iqbal
 
WEKA: Data Mining Input Concepts Instances And Attributes
WEKA: Data Mining Input Concepts Instances And AttributesWEKA: Data Mining Input Concepts Instances And Attributes
WEKA: Data Mining Input Concepts Instances And AttributesDataminingTools Inc
 
Data Mining with WEKA WEKA
Data Mining with WEKA WEKAData Mining with WEKA WEKA
Data Mining with WEKA WEKAbutest
 
Mineria de datos en weka
Mineria de datos en wekaMineria de datos en weka
Mineria de datos en wekaUri Mon
 
Data mining techniques using weka
Data mining techniques using wekaData mining techniques using weka
Data mining techniques using wekarathorenitin87
 
WEKA - A Data Mining Tool - by Shareek Ahamed
WEKA - A Data Mining Tool - by Shareek AhamedWEKA - A Data Mining Tool - by Shareek Ahamed
WEKA - A Data Mining Tool - by Shareek AhamedShareek Ahamed
 
Weka project - Classification & Association Rule Generation
Weka project - Classification & Association Rule GenerationWeka project - Classification & Association Rule Generation
Weka project - Classification & Association Rule Generationrsathishwaran
 
Programming For Non-Programmers @SXSW 2013
Programming For Non-Programmers @SXSW 2013Programming For Non-Programmers @SXSW 2013
Programming For Non-Programmers @SXSW 2013Chris Castiglione
 
WEKA Tutorial
WEKA TutorialWEKA Tutorial
WEKA Tutorialbutest
 
15 A/B Testing Stats That Will Blow your Mind
15 A/B Testing Stats That Will Blow your Mind15 A/B Testing Stats That Will Blow your Mind
15 A/B Testing Stats That Will Blow your MindWishpond
 
Épica Latina Latín II
Épica Latina Latín IIÉpica Latina Latín II
Épica Latina Latín IIlara
 
Procedures And Functions in Matlab
Procedures And Functions in MatlabProcedures And Functions in Matlab
Procedures And Functions in MatlabDataminingTools Inc
 

Andere mochten auch (20)

Classification and Clustering Analysis using Weka
Classification and Clustering Analysis using Weka Classification and Clustering Analysis using Weka
Classification and Clustering Analysis using Weka
 
Weka presentation
Weka presentationWeka presentation
Weka presentation
 
WEKA: Data Mining Input Concepts Instances And Attributes
WEKA: Data Mining Input Concepts Instances And AttributesWEKA: Data Mining Input Concepts Instances And Attributes
WEKA: Data Mining Input Concepts Instances And Attributes
 
Introducción a weka
Introducción a wekaIntroducción a weka
Introducción a weka
 
Weka
WekaWeka
Weka
 
Data Mining with WEKA WEKA
Data Mining with WEKA WEKAData Mining with WEKA WEKA
Data Mining with WEKA WEKA
 
Mineria de datos en weka
Mineria de datos en wekaMineria de datos en weka
Mineria de datos en weka
 
Data mining techniques using weka
Data mining techniques using wekaData mining techniques using weka
Data mining techniques using weka
 
WEKA - A Data Mining Tool - by Shareek Ahamed
WEKA - A Data Mining Tool - by Shareek AhamedWEKA - A Data Mining Tool - by Shareek Ahamed
WEKA - A Data Mining Tool - by Shareek Ahamed
 
Data Mining using Weka
Data Mining using WekaData Mining using Weka
Data Mining using Weka
 
Weka bike rental
Weka bike rentalWeka bike rental
Weka bike rental
 
Weka project - Classification & Association Rule Generation
Weka project - Classification & Association Rule GenerationWeka project - Classification & Association Rule Generation
Weka project - Classification & Association Rule Generation
 
Programming For Non-Programmers @SXSW 2013
Programming For Non-Programmers @SXSW 2013Programming For Non-Programmers @SXSW 2013
Programming For Non-Programmers @SXSW 2013
 
Lecture7 - IBk
Lecture7 - IBkLecture7 - IBk
Lecture7 - IBk
 
WEKA Tutorial
WEKA TutorialWEKA Tutorial
WEKA Tutorial
 
15 A/B Testing Stats That Will Blow your Mind
15 A/B Testing Stats That Will Blow your Mind15 A/B Testing Stats That Will Blow your Mind
15 A/B Testing Stats That Will Blow your Mind
 
Épica Latina Latín II
Épica Latina Latín IIÉpica Latina Latín II
Épica Latina Latín II
 
Data Applied: Association
Data Applied: AssociationData Applied: Association
Data Applied: Association
 
Anime
AnimeAnime
Anime
 
Procedures And Functions in Matlab
Procedures And Functions in MatlabProcedures And Functions in Matlab
Procedures And Functions in Matlab
 

Ähnlich wie WEKA: Algorithms The Basic Methods

Descision making descision making decision tree.pptx
Descision making descision making decision tree.pptxDescision making descision making decision tree.pptx
Descision making descision making decision tree.pptxcharmeshponnagani
 
Week 4 Lecture 12 Significance Earlier we discussed co.docx
Week 4 Lecture 12 Significance Earlier we discussed co.docxWeek 4 Lecture 12 Significance Earlier we discussed co.docx
Week 4 Lecture 12 Significance Earlier we discussed co.docxcockekeshia
 
WEKA:Practical Machine Learning Tools And Techniques
WEKA:Practical Machine Learning Tools And TechniquesWEKA:Practical Machine Learning Tools And Techniques
WEKA:Practical Machine Learning Tools And Techniquesweka Content
 
WEKA: Practical Machine Learning Tools And Techniques
WEKA: Practical Machine Learning Tools And TechniquesWEKA: Practical Machine Learning Tools And Techniques
WEKA: Practical Machine Learning Tools And TechniquesDataminingTools Inc
 
INTRODUCTION TO BOOSTING.ppt
INTRODUCTION TO BOOSTING.pptINTRODUCTION TO BOOSTING.ppt
INTRODUCTION TO BOOSTING.pptBharatDaiyaBharat
 
An algorithm for building
An algorithm for buildingAn algorithm for building
An algorithm for buildingajmal_fuuast
 
www1.cs.columbia.edu
www1.cs.columbia.eduwww1.cs.columbia.edu
www1.cs.columbia.edubutest
 
Download presentation source
Download presentation sourceDownload presentation source
Download presentation sourcebutest
 
Types of Machine Learning. & Decision Tree.
Types of Machine Learning. & Decision Tree.Types of Machine Learning. & Decision Tree.
Types of Machine Learning. & Decision Tree.Megha Sharma
 
Ensemble methods in machine learning
Ensemble methods in machine learningEnsemble methods in machine learning
Ensemble methods in machine learningSANTHOSH RAJA M G
 
Machine Learning Unit-5 Decesion Trees & Random Forest.pdf
Machine Learning Unit-5 Decesion Trees & Random Forest.pdfMachine Learning Unit-5 Decesion Trees & Random Forest.pdf
Machine Learning Unit-5 Decesion Trees & Random Forest.pdfAdityaSoraut
 
Machine learning session6(decision trees random forrest)
Machine learning   session6(decision trees random forrest)Machine learning   session6(decision trees random forrest)
Machine learning session6(decision trees random forrest)Abhimanyu Dwivedi
 
Data Science Interview Questions | Data Science Interview Questions And Answe...
Data Science Interview Questions | Data Science Interview Questions And Answe...Data Science Interview Questions | Data Science Interview Questions And Answe...
Data Science Interview Questions | Data Science Interview Questions And Answe...Simplilearn
 

Ähnlich wie WEKA: Algorithms The Basic Methods (20)

Data Applied: Decision
Data Applied: DecisionData Applied: Decision
Data Applied: Decision
 
Data Applied:Decision Trees
Data Applied:Decision TreesData Applied:Decision Trees
Data Applied:Decision Trees
 
Decision tree learning
Decision tree learningDecision tree learning
Decision tree learning
 
Descision making descision making decision tree.pptx
Descision making descision making decision tree.pptxDescision making descision making decision tree.pptx
Descision making descision making decision tree.pptx
 
Week 4 Lecture 12 Significance Earlier we discussed co.docx
Week 4 Lecture 12 Significance Earlier we discussed co.docxWeek 4 Lecture 12 Significance Earlier we discussed co.docx
Week 4 Lecture 12 Significance Earlier we discussed co.docx
 
WEKA:Practical Machine Learning Tools And Techniques
WEKA:Practical Machine Learning Tools And TechniquesWEKA:Practical Machine Learning Tools And Techniques
WEKA:Practical Machine Learning Tools And Techniques
 
WEKA: Practical Machine Learning Tools And Techniques
WEKA: Practical Machine Learning Tools And TechniquesWEKA: Practical Machine Learning Tools And Techniques
WEKA: Practical Machine Learning Tools And Techniques
 
Decision theory & decisiontrees
Decision theory & decisiontreesDecision theory & decisiontrees
Decision theory & decisiontrees
 
INTRODUCTION TO BOOSTING.ppt
INTRODUCTION TO BOOSTING.pptINTRODUCTION TO BOOSTING.ppt
INTRODUCTION TO BOOSTING.ppt
 
Lecture4.pptx
Lecture4.pptxLecture4.pptx
Lecture4.pptx
 
An algorithm for building
An algorithm for buildingAn algorithm for building
An algorithm for building
 
www1.cs.columbia.edu
www1.cs.columbia.eduwww1.cs.columbia.edu
www1.cs.columbia.edu
 
Download presentation source
Download presentation sourceDownload presentation source
Download presentation source
 
Decision Tree.pptx
Decision Tree.pptxDecision Tree.pptx
Decision Tree.pptx
 
Types of Machine Learning. & Decision Tree.
Types of Machine Learning. & Decision Tree.Types of Machine Learning. & Decision Tree.
Types of Machine Learning. & Decision Tree.
 
Unit 2-ML.pptx
Unit 2-ML.pptxUnit 2-ML.pptx
Unit 2-ML.pptx
 
Ensemble methods in machine learning
Ensemble methods in machine learningEnsemble methods in machine learning
Ensemble methods in machine learning
 
Machine Learning Unit-5 Decesion Trees & Random Forest.pdf
Machine Learning Unit-5 Decesion Trees & Random Forest.pdfMachine Learning Unit-5 Decesion Trees & Random Forest.pdf
Machine Learning Unit-5 Decesion Trees & Random Forest.pdf
 
Machine learning session6(decision trees random forrest)
Machine learning   session6(decision trees random forrest)Machine learning   session6(decision trees random forrest)
Machine learning session6(decision trees random forrest)
 
Data Science Interview Questions | Data Science Interview Questions And Answe...
Data Science Interview Questions | Data Science Interview Questions And Answe...Data Science Interview Questions | Data Science Interview Questions And Answe...
Data Science Interview Questions | Data Science Interview Questions And Answe...
 

Mehr von DataminingTools Inc

AI: Introduction to artificial intelligence
AI: Introduction to artificial intelligenceAI: Introduction to artificial intelligence
AI: Introduction to artificial intelligenceDataminingTools Inc
 
Data Mining: Text and web mining
Data Mining: Text and web miningData Mining: Text and web mining
Data Mining: Text and web miningDataminingTools Inc
 
Data Mining: Mining stream time series and sequence data
Data Mining: Mining stream time series and sequence dataData Mining: Mining stream time series and sequence data
Data Mining: Mining stream time series and sequence dataDataminingTools Inc
 
Data Mining: Mining ,associations, and correlations
Data Mining: Mining ,associations, and correlationsData Mining: Mining ,associations, and correlations
Data Mining: Mining ,associations, and correlationsDataminingTools Inc
 
Data Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysisData Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysisDataminingTools Inc
 
Data warehouse and olap technology
Data warehouse and olap technologyData warehouse and olap technology
Data warehouse and olap technologyDataminingTools Inc
 
Data Mining: clustering and analysis
Data Mining: clustering and analysisData Mining: clustering and analysis
Data Mining: clustering and analysisDataminingTools Inc
 

Mehr von DataminingTools Inc (20)

Terminology Machine Learning
Terminology Machine LearningTerminology Machine Learning
Terminology Machine Learning
 
Techniques Machine Learning
Techniques Machine LearningTechniques Machine Learning
Techniques Machine Learning
 
Machine learning Introduction
Machine learning IntroductionMachine learning Introduction
Machine learning Introduction
 
Areas of machine leanring
Areas of machine leanringAreas of machine leanring
Areas of machine leanring
 
AI: Planning and AI
AI: Planning and AIAI: Planning and AI
AI: Planning and AI
 
AI: Logic in AI 2
AI: Logic in AI 2AI: Logic in AI 2
AI: Logic in AI 2
 
AI: Logic in AI
AI: Logic in AIAI: Logic in AI
AI: Logic in AI
 
AI: Learning in AI 2
AI: Learning in AI 2AI: Learning in AI 2
AI: Learning in AI 2
 
AI: Learning in AI
AI: Learning in AI AI: Learning in AI
AI: Learning in AI
 
AI: Introduction to artificial intelligence
AI: Introduction to artificial intelligenceAI: Introduction to artificial intelligence
AI: Introduction to artificial intelligence
 
AI: Belief Networks
AI: Belief NetworksAI: Belief Networks
AI: Belief Networks
 
AI: AI & Searching
AI: AI & SearchingAI: AI & Searching
AI: AI & Searching
 
AI: AI & Problem Solving
AI: AI & Problem SolvingAI: AI & Problem Solving
AI: AI & Problem Solving
 
Data Mining: Text and web mining
Data Mining: Text and web miningData Mining: Text and web mining
Data Mining: Text and web mining
 
Data Mining: Mining stream time series and sequence data
Data Mining: Mining stream time series and sequence dataData Mining: Mining stream time series and sequence data
Data Mining: Mining stream time series and sequence data
 
Data Mining: Mining ,associations, and correlations
Data Mining: Mining ,associations, and correlationsData Mining: Mining ,associations, and correlations
Data Mining: Mining ,associations, and correlations
 
Data Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysisData Mining: Graph mining and social network analysis
Data Mining: Graph mining and social network analysis
 
Data warehouse and olap technology
Data warehouse and olap technologyData warehouse and olap technology
Data warehouse and olap technology
 
Data Mining: Data processing
Data Mining: Data processingData Mining: Data processing
Data Mining: Data processing
 
Data Mining: clustering and analysis
Data Mining: clustering and analysisData Mining: clustering and analysis
Data Mining: clustering and analysis
 

Kürzlich hochgeladen

Welding Electrode Making Machine By Deccan Dynamics
Welding Electrode Making Machine By Deccan DynamicsWelding Electrode Making Machine By Deccan Dynamics
Welding Electrode Making Machine By Deccan DynamicsIndiaMART InterMESH Limited
 
Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...
Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...
Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...ssuserf63bd7
 
trending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdf
trending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdftrending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdf
trending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdfMintel Group
 
Memorándum de Entendimiento (MoU) entre Codelco y SQM
Memorándum de Entendimiento (MoU) entre Codelco y SQMMemorándum de Entendimiento (MoU) entre Codelco y SQM
Memorándum de Entendimiento (MoU) entre Codelco y SQMVoces Mineras
 
Salesforce Education Cloud - A Complete Guide.pdf
Salesforce Education Cloud - A Complete Guide.pdfSalesforce Education Cloud - A Complete Guide.pdf
Salesforce Education Cloud - A Complete Guide.pdfHarryJohnson78
 
Kenya Coconut Production Presentation by Dr. Lalith Perera
Kenya Coconut Production Presentation by Dr. Lalith PereraKenya Coconut Production Presentation by Dr. Lalith Perera
Kenya Coconut Production Presentation by Dr. Lalith Pereraictsugar
 
1911 Gold Corporate Presentation Apr 2024.pdf
1911 Gold Corporate Presentation Apr 2024.pdf1911 Gold Corporate Presentation Apr 2024.pdf
1911 Gold Corporate Presentation Apr 2024.pdfShaun Heinrichs
 
PSCC - Capability Statement Presentation
PSCC - Capability Statement PresentationPSCC - Capability Statement Presentation
PSCC - Capability Statement PresentationAnamaria Contreras
 
Driving Business Impact for PMs with Jon Harmer
Driving Business Impact for PMs with Jon HarmerDriving Business Impact for PMs with Jon Harmer
Driving Business Impact for PMs with Jon HarmerAggregage
 
How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...
How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...
How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...Hector Del Castillo, CPM, CPMM
 
business environment micro environment macro environment.pptx
business environment micro environment macro environment.pptxbusiness environment micro environment macro environment.pptx
business environment micro environment macro environment.pptxShruti Mittal
 
Church Building Grants To Assist With New Construction, Additions, And Restor...
Church Building Grants To Assist With New Construction, Additions, And Restor...Church Building Grants To Assist With New Construction, Additions, And Restor...
Church Building Grants To Assist With New Construction, Additions, And Restor...Americas Got Grants
 
APRIL2024_UKRAINE_xml_0000000000000 .pdf
APRIL2024_UKRAINE_xml_0000000000000 .pdfAPRIL2024_UKRAINE_xml_0000000000000 .pdf
APRIL2024_UKRAINE_xml_0000000000000 .pdfRbc Rbcua
 
MAHA Global and IPR: Do Actions Speak Louder Than Words?
MAHA Global and IPR: Do Actions Speak Louder Than Words?MAHA Global and IPR: Do Actions Speak Louder Than Words?
MAHA Global and IPR: Do Actions Speak Louder Than Words?Olivia Kresic
 
Traction part 2 - EOS Model JAX Bridges.
Traction part 2 - EOS Model JAX Bridges.Traction part 2 - EOS Model JAX Bridges.
Traction part 2 - EOS Model JAX Bridges.Anamaria Contreras
 
Technical Leaders - Working with the Management Team
Technical Leaders - Working with the Management TeamTechnical Leaders - Working with the Management Team
Technical Leaders - Working with the Management TeamArik Fletcher
 
NAB Show Exhibitor List 2024 - Exhibitors Data
NAB Show Exhibitor List 2024 - Exhibitors DataNAB Show Exhibitor List 2024 - Exhibitors Data
NAB Show Exhibitor List 2024 - Exhibitors DataExhibitors Data
 
EUDR Info Meeting Ethiopian coffee exporters
EUDR Info Meeting Ethiopian coffee exportersEUDR Info Meeting Ethiopian coffee exporters
EUDR Info Meeting Ethiopian coffee exportersPeter Horsten
 
Fordham -How effective decision-making is within the IT department - Analysis...
Fordham -How effective decision-making is within the IT department - Analysis...Fordham -How effective decision-making is within the IT department - Analysis...
Fordham -How effective decision-making is within the IT department - Analysis...Peter Ward
 

Kürzlich hochgeladen (20)

Welding Electrode Making Machine By Deccan Dynamics
Welding Electrode Making Machine By Deccan DynamicsWelding Electrode Making Machine By Deccan Dynamics
Welding Electrode Making Machine By Deccan Dynamics
 
Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...
Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...
Horngren’s Financial & Managerial Accounting, 7th edition by Miller-Nobles so...
 
trending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdf
trending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdftrending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdf
trending-flavors-and-ingredients-in-salty-snacks-us-2024_Redacted-V2.pdf
 
Memorándum de Entendimiento (MoU) entre Codelco y SQM
Memorándum de Entendimiento (MoU) entre Codelco y SQMMemorándum de Entendimiento (MoU) entre Codelco y SQM
Memorándum de Entendimiento (MoU) entre Codelco y SQM
 
Salesforce Education Cloud - A Complete Guide.pdf
Salesforce Education Cloud - A Complete Guide.pdfSalesforce Education Cloud - A Complete Guide.pdf
Salesforce Education Cloud - A Complete Guide.pdf
 
Kenya Coconut Production Presentation by Dr. Lalith Perera
Kenya Coconut Production Presentation by Dr. Lalith PereraKenya Coconut Production Presentation by Dr. Lalith Perera
Kenya Coconut Production Presentation by Dr. Lalith Perera
 
1911 Gold Corporate Presentation Apr 2024.pdf
1911 Gold Corporate Presentation Apr 2024.pdf1911 Gold Corporate Presentation Apr 2024.pdf
1911 Gold Corporate Presentation Apr 2024.pdf
 
PSCC - Capability Statement Presentation
PSCC - Capability Statement PresentationPSCC - Capability Statement Presentation
PSCC - Capability Statement Presentation
 
Driving Business Impact for PMs with Jon Harmer
Driving Business Impact for PMs with Jon HarmerDriving Business Impact for PMs with Jon Harmer
Driving Business Impact for PMs with Jon Harmer
 
How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...
How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...
How Generative AI Is Transforming Your Business | Byond Growth Insights | Apr...
 
business environment micro environment macro environment.pptx
business environment micro environment macro environment.pptxbusiness environment micro environment macro environment.pptx
business environment micro environment macro environment.pptx
 
Church Building Grants To Assist With New Construction, Additions, And Restor...
Church Building Grants To Assist With New Construction, Additions, And Restor...Church Building Grants To Assist With New Construction, Additions, And Restor...
Church Building Grants To Assist With New Construction, Additions, And Restor...
 
APRIL2024_UKRAINE_xml_0000000000000 .pdf
APRIL2024_UKRAINE_xml_0000000000000 .pdfAPRIL2024_UKRAINE_xml_0000000000000 .pdf
APRIL2024_UKRAINE_xml_0000000000000 .pdf
 
MAHA Global and IPR: Do Actions Speak Louder Than Words?
MAHA Global and IPR: Do Actions Speak Louder Than Words?MAHA Global and IPR: Do Actions Speak Louder Than Words?
MAHA Global and IPR: Do Actions Speak Louder Than Words?
 
WAM Corporate Presentation April 12 2024.pdf
WAM Corporate Presentation April 12 2024.pdfWAM Corporate Presentation April 12 2024.pdf
WAM Corporate Presentation April 12 2024.pdf
 
Traction part 2 - EOS Model JAX Bridges.
Traction part 2 - EOS Model JAX Bridges.Traction part 2 - EOS Model JAX Bridges.
Traction part 2 - EOS Model JAX Bridges.
 
Technical Leaders - Working with the Management Team
Technical Leaders - Working with the Management TeamTechnical Leaders - Working with the Management Team
Technical Leaders - Working with the Management Team
 
NAB Show Exhibitor List 2024 - Exhibitors Data
NAB Show Exhibitor List 2024 - Exhibitors DataNAB Show Exhibitor List 2024 - Exhibitors Data
NAB Show Exhibitor List 2024 - Exhibitors Data
 
EUDR Info Meeting Ethiopian coffee exporters
EUDR Info Meeting Ethiopian coffee exportersEUDR Info Meeting Ethiopian coffee exporters
EUDR Info Meeting Ethiopian coffee exporters
 
Fordham -How effective decision-making is within the IT department - Analysis...
Fordham -How effective decision-making is within the IT department - Analysis...Fordham -How effective decision-making is within the IT department - Analysis...
Fordham -How effective decision-making is within the IT department - Analysis...
 

WEKA: Algorithms The Basic Methods

  • 2. 1-rule Algorithm (1R) Way to find very easy classification rule Generates a one level decision tree which tests just one attribute Steps: Consider each attribute in turn There will be one branch in the decision tree for each value of this attribute Allot the majority class to each branch Repeat the same for all attributes and choose the one with minimum error
  • 3. 1R Pseudo Code Pseudo code for 1R
  • 4. 1R in action Consider the problem of weather’s effect on play. Data is:
  • 5. 1R in action Let us consider the Outlook parameter first Total Error = 4/14
  • 6. 1R in action Consolidated table for all the attributes, ‘*’ represent arbitrary choice from equivalent options:
  • 7. 1R in action From this table we can see that a decision tree on Outlook and Humidity gives minimum error We can choose and of these two attributes and the corresponding rules as our choice of classification rule Missing is treated as just another attribute, one branch in the decision tree dedicated to missing values like any other attribute value
  • 8. Numeric attributes and 1R To deal with numeric attributes, we Discretize them The steps are : Sort instances on the basis of attribute’s value Place breakpoints where class changes These breakpoints gives us discrete numerical range Majority class of each range is considered as its range
  • 9. Numeric attributes and 1R We have the following data for the weather example,
  • 10. Numeric attributes and 1R Applying the steps we get: The problem with this approach is that we can get a large number of division or Overfitting Therefore we enforce a minimum number of instances , for example taking min = 3 in above example, we get:
  • 11. Numeric attributes and 1R When two adjacent division have the same majority class, then we can join these two divisions So after this we will get: Which gives the following classification rules:
  • 12. Statistical Modeling Another classification technique Assumptions (for a given class): All attributes contributes equally to decision making All attributes are independent of each other
  • 13. Statistical Modeling: An example Given Data:
  • 14. Statistical Modeling: An example Data Description: The upper half shows how many time a value of an attribute occurs for a class The lower half shows the same data in terms of fraction For example, class is yes 9 times For class = yes, outlook = sunny 2 times So under outlook = sunny and class = yes we have 2/9
  • 15. Statistical Modeling Problem at hand: Solution: Taking into the consideration that all attributes equally and are independent Likelihood of yes = 2/9x3/9x3/9x3/9x9/14 = 0.0053 Likelihood of no = 3/5x1/5x4/5x3/5x5/14 = 0.0206
  • 16. Statistical Modeling: An example Solution continued.. As can be observed, likelihood of yes is high Using normalization, we can calculate probability as: Probability of yes = (.0053)/(.0053 + .0206) = 20.5% Probability of no = (.0206)/(.0053 + .0206) = 79.5%
  • 17. Statistical Modeling: An example Derivation using Bayes’ rule: Acc to Bayes’ rule, for a hypothesis H and evidence E that bears on that hypothesis, then P[H|E] = (P[E|H] x P[H]) / P[E] For our example hypothesis H is that play will be, say, yes and E is the particular combination of attribute values at hand Outlook = sunny(E1) Temperature = cool (E2) Humidity = high(E3) Windy = True (E4)
  • 18. Statistical Modeling: An example Derivation using Bayes’ rule: Now since E1, E2, E3 and E4 are independent therefore we have P[H|E] = (P[E1|H] x P[E2|H] x P[E3|H] x P[E4|H] x P[H] ) / P[E] Replacing values from the table we get, P[yes|E] = (2/9 x 3/9 x 3/9 x 3/9 x 9/14) / P[E] P[E] will be taken care of during normalization of P[yes|E] and P[No|E] This method is called as Naïve Bayes
  • 19. Problem and Solution for Naïve Bayes Problem: In case we have an attribute value (Ea)for which P[Ea|H] = 0, then irrespective of other attributes P[H|E] = 0 Solution: We can add a constant to numerator and denominator, a technique called Laplace Estimator for example, P1 + P2 + P3 = 1:
  • 20. Statistical Modeling: Dealing with missing attributes Incase an value is missing, say for attribute Ea in the given data set, we just don’t count it while calculating the P[Ea|H] Incase an attribute is missing in the instance to be classified, then its factor is not there in the expression for P[H|E], for example if outlook is missing then we will have: Likelihood of Yes = 3/9 x 3/9 x 3/9 x 9/14 = 0.0238 Likelihood of No = 1/5 x 4/5 x 3/5 x 5/14 = 0.0343
  • 21. Statistical Modeling: Dealing with numerical attributes Numeric values are handled by assuming that they have : Normal probability distribution Gaussian probability distribution For a normal distribution we have: u = mean sigma = Standard deviation x = instance under consideration f(x) = contribution of to likelihood figures
  • 22. Statistical Modeling: Dealing with numerical attributes An example, we have the data:
  • 23. Statistical Modeling: Dealing with numerical attributes So here we have calculated the mean and standard deviation for numerical attributes like temperature and humidity For temperature = 66 So the contribution of temperature = 66 in P[yes|E] is 0.0340 We do this similarly for other numerical attributes
  • 24. Divide-and-Conquer: Constructing Decision Trees Steps to construct a decision tree recursively: Select an attribute to placed at root node and make one branch for each possible value Repeat the process recursively at each branch, using only those instances that reach the branch If at any time all instances at a node have the classification, stop developing that part of the tree Problem: How to decide which attribute to split on
  • 25. Divide-and-Conquer: Constructing Decision Trees Steps to find the attribute to split on: We consider all the possible attributes as option and branch them according to different possible values Now for each possible attribute value we calculate Information and then find the Information gain for each attribute option Select that attribute for division which gives a Maximum Information Gain Do this until each branch terminates at an attribute which gives Information = 0
  • 26. Divide-and-Conquer: Constructing Decision Trees Calculation of Information and Gain: For data: (P1, P2, P3……Pn) such that P1 + P2 + P3 +……. +Pn = 1 Information(P1, P2 …..Pn) = -P1logP1 -P2logP2 – P3logP3 ……… -PnlogPn Gain= Information before division – Information after division
  • 27. Divide-and-Conquer: Constructing Decision Trees Example: Here we have consider each attribute individually Each is divided into branches according to different possible values Below each branch the number of class is marked
  • 28. Divide-and-Conquer: Constructing Decision Trees Calculations: Using the formulae for Information, initially we have Number of instances with class = Yes is 9 Number of instances with class = No is 5 So we have P1 = 9/14 and P2 = 5/14 Info[9/14, 5/14] = -9/14log(9/14) -5/14log(5/14) = 0.940 bits Now for example lets consider Outlook attribute, we observe the following:
  • 29. Divide-and-Conquer: Constructing Decision Trees Example Contd. Gain by using Outlook for division = info([9,5]) – info([2,3],[4,0],[3,2]) = 0.940 – 0.693 = 0.247 bits Gain (outlook) = 0.247 bits Gain (temperature) = 0.029 bits Gain (humidity) = 0.152 bits Gain (windy) = 0.048 bits So since Outlook gives maximum gain, we will use it for division And we repeat the steps for Outlook = Sunny and Rainy and stop for Overcast since we have Information = 0 for it
  • 30. Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: The problem If we follow the previously subscribed method, it will always favor an attribute with the largest number of branches In extreme cases it will favor an attribute which has different value for each instance: Identification code
  • 31. Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: The problem The information for such an attribute is 0: info([0,1]) + info([0,1]) + info([0,1]) + ... + info([0,1]) = 0. It will therefore have the maximum gain and will be chosen for branching. But such an attribute is of no use for predicting the class of an unknown instance, nor does it tell us anything about the structure of the division. So we use the gain ratio to compensate for this.
  • 32. Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: Gain ratio Gain ratio = gain / split info. To calculate the split info we just consider the number of instances covered by each attribute value, irrespective of class. For the identification code with 14 different values: info([1,1,1,...,1]) = 14 x (-1/14 x log 1/14) = 3.807. For Outlook the split info is: info([5,4,5]) = -5/14 x log 5/14 - 4/14 x log 4/14 - 5/14 x log 5/14 = 1.577.
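These split-info figures, and the resulting gain ratios, can be checked with the same info helper; split info looks only at the branch sizes, never at the classes inside a branch.

```python
print(round(info([1] * 14), 3))    # 3.807 bits: identification code with 14 distinct values
print(round(info([5, 4, 5]), 3))   # 1.577 bits: Outlook (sunny / overcast / rainy)

# Gain ratio = gain / split info
print(round(0.247 / 1.577, 3))     # ~0.157 for Outlook
print(round(0.940 / 3.807, 3))     # ~0.247 for the identification code
```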
  • 33. Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: Gain ratio So we have the gain ratios shown; for the highly branching identification-code attribute, gain ratio = 0.940/3.807 = 0.247.
  • 34. Divide-and-Conquer: Constructing Decision Trees Highly branching attributes: Gain ratio Though the highly branching attribute still has the maximum gain ratio, its advantage is greatly reduced. Problem with using gain ratio: in some situations the gain ratio modification overcompensates and can lead to preferring an attribute just because its intrinsic information is much lower than that of the other attributes. A standard fix is to choose the attribute that maximizes the gain ratio, provided that the information gain of that attribute is at least as great as the average information gain of all the attributes examined.
  • 35. Covering Algorithms: Constructing rules Approach: Consider each class in turn. Seek a way of covering all instances in it while excluding instances not belonging to that class, and identify a rule that does so. This is called a covering approach because at each stage we identify a rule that covers some of the instances.
  • 37. If x > 1.2 and y > 2.6 then class = a
  • 38. If x > 1.2 and y > 2.6 then class = a If x > 1.4 and y < 2.4 then class = a
  • 39. Covering Algorithms: Constructing rules Rules vs. Trees: A covering algorithm covers only a single class at a time, whereas the divide-and-conquer approach takes all classes into account, since a decision tree creates a combined concept description. The problem of replicated subtrees is avoided with rules. The tree for the previous problem:
  • 40. Covering Algorithms: Constructing rules PRISM Algorithm: a simple covering algorithm. Instance space after the addition of each rule:
  • 41. Covering Algorithms: Constructing rules PRISM Algorithm: Criteria to select an attribute for division Include as many instances of the desired class, and exclude as many instances of other classes, as possible. If a new rule covers t instances, of which p are positive examples of the class and t - p are instances of other classes (i.e., errors), then try to maximize p/t.
  • 42. Covering Algorithms: Constructing rules PRISM Algorithm: Example data
  • 43. Covering Algorithms: Constructing rules PRISM Algorithm: In action We start with the class = hard and the following rule: If ? then recommendation = hard. Here ? represents an unknown condition, for which we have nine choices:
  • 44. Covering Algorithms: Constructing rules PRISM Algorithm: In action Here the maximum p/t ratio is for astigmatism = yes (choosing randomly between equivalent options when their coverage is also the same). So we get the rule: If astigmatism = yes then recommendation = hard. We won't stop at this rule, as it gives only 4 correct results out of the 12 instances it covers. We restrict our example set to the instances covered by this rule and continue with the rule: If astigmatism = yes and ? then recommendation = hard
  • 45. Covering Algorithms: Constructing rules PRISM Algorithm: In action Now we have the following data:
  • 46. Covering Algorithms: Constructing rules PRISM Algorithm: In action The choices for this data are shown below; we choose tear production rate = normal, which has the highest p/t.
  • 47. Covering Algorithms: Constructing rules PRISM Algorithm: In action So we have the rule: If astigmatism = yes and tear production rate = normal then recommendation = hard. Again we restrict to the instances covered by this rule, and now we have the data:
  • 48. Covering Algorithms: Constructing rules PRISM Algorithm: In action Using p/t again (breaking ties by maximum coverage) we finally have the rule: If astigmatism = yes and tear production rate = normal and spectacle prescription = myope then recommendation = hard. And so on...
  • 49. Covering Algorithms: Constructing rules PRISM Algorithm: Pseudo Code
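The pseudo-code slide itself did not survive in this transcript, so what follows is only a minimal sketch of the PRISM idea as described above (grow a rule for one class by greedily adding the attribute-value test with the best p/t, restrict to the covered instances while growing, and remove the covered instances once the rule is finished); names and data layout are illustrative.

```python
def prism(instances, attributes, target, cls):
    """Sketch of PRISM for one class: returns rules as lists of (attribute, value) tests."""
    rules = []
    remaining = list(instances)
    while any(x[target] == cls for x in remaining):
        covered, conditions = remaining, []
        # Grow one rule: keep adding the test with the best p/t until the rule is perfect.
        while any(x[target] != cls for x in covered):
            best = None
            for attr in attributes:
                if any(attr == a for a, _ in conditions):
                    continue                              # don't test the same attribute twice
                for val in {x[attr] for x in covered}:
                    subset = [x for x in covered if x[attr] == val]
                    t = len(subset)
                    p = sum(1 for x in subset if x[target] == cls)
                    # Maximize p/t, breaking ties by the larger coverage p.
                    if best is None or (p / t, p) > best[0]:
                        best = ((p / t, p), attr, val, subset)
            if best is None:                              # no attributes left: accept the rule as is
                break
            _, attr, val, covered = best
            conditions.append((attr, val))
        rules.append(conditions)
        # Remove the instances covered by the finished rule and look for the next rule.
        remaining = [x for x in remaining
                     if not all(x[a] == v for a, v in conditions)]
    return rules
```

On the contact-lens data above, this greedy growth mirrors the p/t choices walked through in the preceding slides.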
  • 50. Covering Algorithms: Constructing rules Rules vs. decision lists The rules produced, for example by the PRISM algorithm, are not necessarily meant to be interpreted in order, unlike decision lists; there is no particular order in which the classes are considered while generating the rules. When such rules are used for classification, an instance may receive multiple classifications or no classification at all. In those cases, choose the rule with the greatest coverage or the most training examples, respectively. These difficulties do not arise with decision lists, which are interpreted in order and have a default rule at the end.
  • 51. Mining Association Rules Definition: An association rule can predict any number of attributes and any combination of attributes. Parameters for selecting an association rule: Coverage: the number of instances the rule predicts correctly. Accuracy: the ratio of the coverage to the total number of instances the rule applies to. We want association rules with high coverage and at least a minimum specified accuracy.
  • 52. Mining Association Rules Terminology: Item: an attribute-value pair. Item set: a combination of items. An example: for the weather data we have a table in which each column contains item sets with a given number of items, and each entry is shown with its coverage. The table is not complete; it just gives a good idea.
  • 54. Mining Association Rules Generating Association rules: We need to specify a minimum coverage and accuracy beforehand for the rules to be generated. Steps: Generate the item sets. Each item set can be permuted to generate a number of candidate rules. For each rule, check whether its coverage and accuracy meet the thresholds. This is how association rules are generated.
  • 55. Mining Association Rules Generating Association rules: For example, if we take the item set humidity = normal, windy = false, play = yes, it gives seven potential rules (with their accuracies):
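As an illustration (names are placeholders, not from the slides), the rule-generation step for a single item set can be sketched as follows: every subset of the items, including the empty set, can serve as the antecedent, which for a three-item set gives the seven candidate rules mentioned above; each candidate is kept only if its accuracy reaches the minimum.

```python
from itertools import combinations

def rules_from_itemset(itemset, instances, min_accuracy=1.0):
    """Generate candidate rules (antecedent -> consequent) from one item set and keep
    those whose accuracy (coverage / instances the antecedent applies to) is high enough."""
    items = list(itemset.items())
    coverage = sum(1 for x in instances if all(x[a] == v for a, v in items))
    kept = []
    for r in range(len(items)):                      # r = 0 gives the empty antecedent
        for antecedent in combinations(items, r):
            applicable = sum(1 for x in instances
                             if all(x[a] == v for a, v in antecedent))
            accuracy = coverage / applicable if applicable else 0.0
            if accuracy >= min_accuracy:
                consequent = tuple(i for i in items if i not in antecedent)
                kept.append((antecedent, consequent, coverage, accuracy))
    return kept
```

For the item set above, only "humidity = normal and windy = false => play = yes" should survive a 100% accuracy threshold, with coverage 4.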
  • 56. Linear models We will look at methods for predicting numerical quantities, and at how to use these numerical methods for classification.
  • 57. Linear models Numerical Prediction: Linear regression Linear regression is a technique for predicting numerical quantities. Here we express the class (a numerical quantity) as a linear combination of the attributes with predetermined weights. For attributes a1, a2, ..., ak: x = w0 + w1*a1 + w2*a2 + ... + wk*ak, where x is the predicted class value and w0, w1, ..., wk are the predetermined weights.
  • 58. Linear models Numerical Prediction: Linear regression The weights are calculated from the training set. To choose optimum weights we select the weights that minimize the sum of squared errors over the n training instances: sum over i of (x(i) - (w0 + w1*a1(i) + ... + wk*ak(i)))^2, where x(i) is the actual class value of the i-th instance.
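A minimal way to obtain such weights (illustrative data, not the weather set) is an ordinary least-squares fit, for example with NumPy:

```python
import numpy as np

# Rows of A are training instances (a1..ak) with a leading 1 for the bias weight w0;
# y holds the numeric class values. The data here is purely illustrative.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 5.0],
              [1.0, 4.0, 2.0],
              [1.0, 3.0, 4.0]])
y = np.array([10.0, 12.0, 9.0, 13.0])

# np.linalg.lstsq returns the weights minimizing the sum of squared errors.
w, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print(w)             # learned weights w0, w1, w2
print(A @ w)         # predicted class values for the training instances
```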
  • 59. Linear models Linear classification: Multi-response linear regression For each class we perform a linear regression to get a linear expression: the target output is 1 when the instance belongs to the class and 0 otherwise. For an unclassified instance we evaluate the expression of each class and choose the class whose expression gives the maximum output. The drawback of this method is that the values produced are not proper probabilities.
  • 60. Linear models Linear classification: Logistic regression To get outputs that are proper probabilities in the range 0 to 1 we use logistic regression. Here the output y is defined as: y = 1/(1 + e^(-x)), with x = w0 + w1*a1 + w2*a2 + ... + wk*ak. The output y lies strictly between 0 and 1.
  • 61. Linear models Linear classification: Logistic regression To select appropriate weights for the expression of x, we maximize the log-likelihood: the sum over the training instances i of x(i) log P(1|a(i)) + (1 - x(i)) log(1 - P(1|a(i))), where x(i) is the 0/1 class of instance i and P(1|a(i)) is the predicted probability. To generalize logistic regression to more than two classes we can perform the calculation as in multi-response linear regression. Again, the problem with this approach is that the probabilities of the different classes do not sum to 1.
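A small sketch of this fit (illustrative data and names, and plain gradient ascent rather than any particular optimizer) looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(A, y, lr=0.1, steps=1000):
    """Maximize the log-likelihood by simple gradient ascent.
    A: instances with a leading column of 1s; y: 0/1 class labels."""
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = sigmoid(A @ w)          # current P(class = 1 | instance)
        w += lr * A.T @ (y - p)     # gradient of the log-likelihood
    return w

# Illustrative data (not the weather set): two attributes plus the bias column.
A = np.array([[1.0, 0.5, 1.2], [1.0, 1.5, 0.3], [1.0, 3.0, 2.2], [1.0, 2.5, 3.0]])
y = np.array([0, 0, 1, 1])
w = fit_logistic(A, y)
print(sigmoid(A @ w))               # predicted probabilities for the training rows
```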
  • 62. Linear models Linear classification using the perceptron If the instances belonging to different classes can be separated in instance space by hyperplanes, they are called linearly separable. If the instances are linearly separable we can use the perceptron learning rule for classification. Steps: assume there are only 2 classes; the equation of the separating hyperplane is (with a0 = 1): w0*a0 + w1*a1 + w2*a2 + ... + wk*ak = 0
  • 63. Linear models Linear classification using the perceptron Steps (contd.): If the sum mentioned in the previous step is greater than 0 we predict the first class, otherwise the second. The algorithm to learn the weights, and hence the equation of the separating hyperplane (the perceptron), is the perceptron learning rule, sketched below:
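The algorithm chart itself is not in this transcript; the following is a sketch of the standard perceptron learning rule under the setup above (labels +1 for the first class and -1 for the second, with a0 = 1 folded in as a bias column).

```python
import numpy as np

def perceptron(instances, labels, max_epochs=100):
    """Standard perceptron rule for two linearly separable classes.
    instances: rows of attribute values; labels: +1 for the first class, -1 for the second."""
    A = np.hstack([np.ones((len(instances), 1)), np.asarray(instances, float)])  # a0 = 1
    w = np.zeros(A.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for a, label in zip(A, labels):
            if label * (w @ a) <= 0:      # wrong side of the hyperplane (or on it)
                w += label * a            # add for the first class, subtract for the second
                mistakes += 1
        if mistakes == 0:                 # converged: every instance classified correctly
            break
    return w
```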
  • 64. Instance-based learning General steps: No preprocessing of the training set; the training instances are simply stored as they are. To classify a new instance, calculate its distance to every stored training instance. The unclassified instance is assigned the class of the instance that has the minimum distance from it.
  • 65. Instance-based learning The distance function The distance function to use depends on the application. Popular choices include the Euclidean distance and the Manhattan distance metric. The most popular is the Euclidean distance between two instances, sqrt((a1(1) - a1(2))^2 + (a2(1) - a2(2))^2 + ... + (ak(1) - ak(2))^2), where k is the number of attributes and a(1), a(2) denote the two instances.
  • 66. Instance-based learning Normalization of data: We normalize each attribute so that it lies in the range [0,1], using the formula ai' = (ai - min) / (max - min), where min and max are taken over the training data. Missing attributes: For nominal attributes, if either of the two values is missing, or if the values are different, the difference is taken as 1; if they are the same it is 0. For numeric attributes, if both values are missing the difference is 1; if only one is missing, the difference is either the normalized value of the given attribute or one minus that value, whichever is bigger.
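A sketch of a per-attribute difference following these rules, combined into a Euclidean distance (attribute ranges are assumed to be known from the training data; names are illustrative):

```python
import math

def attribute_difference(a, b, numeric, lo=None, hi=None):
    """Difference between two attribute values; None stands for a missing value."""
    if numeric:
        norm = lambda v: (v - lo) / (hi - lo)        # normalize into [0, 1]
        if a is None and b is None:
            return 1.0
        if a is None or b is None:
            v = norm(b if a is None else a)
            return max(v, 1.0 - v)                   # whichever is bigger
        return abs(norm(a) - norm(b))
    # Nominal: 0 if equal, 1 if different or if either value is missing.
    if a is None or b is None or a != b:
        return 1.0
    return 0.0

def euclidean(x, y, spec):
    """spec: one (numeric?, lo, hi) entry per attribute (lo/hi unused for nominal ones)."""
    return math.sqrt(sum(attribute_difference(a, b, num, lo, hi) ** 2
                         for a, b, (num, lo, hi) in zip(x, y, spec)))
```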
  • 67. Instance-based learning Finding nearest neighbors efficiently: Finding the nearest neighbor by calculating the distance to every stored instance is linear in the number of instances. We can make this faster by using kd-trees. kd-trees are binary trees that divide the input space with a hyperplane and then split each partition again, recursively; they store the points in a k-dimensional space, k being the number of attributes.
  • 68. Instance-based learning Finding nearest neighbors efficiently:
  • 69. Instance-based learning Finding nearest neighbors efficiently: Here we see a kd-tree and the corresponding instances and splits for k = 2. As you can see, not all child nodes are developed to the same depth. The axis along which each division has been made (vertical or horizontal in this case) is marked. Steps to find the nearest neighbor: Construct the kd-tree (explained later). Then, starting from the root node and comparing the appropriate attribute (the one along which the division at that node was made), move to the left or the right subtree.
  • 70. Instance-based learning Steps to find the nearest neighbor (contd.): Repeat this step recursively until you reach a node that is either a leaf node or has no appropriate child (left or right). You have now found the region to which the new instance belongs, and you have a probable nearest neighbor in the form of that region's leaf node (or its immediate neighbor). Calculate the distance of the instance to this probable nearest neighbor; any closer instance must lie within a circle whose radius equals this distance.
  • 71. Instance-based learning Finding nearest neighbors efficiently: Steps to find the nearest neighbor (contd.): Now we retrace our recursive descent, looking for an instance that is closer to our unclassified instance than the probable nearest neighbor we have. We start with the immediate neighbor: if it lies within the circle we must consider it and all of its child nodes (if any). If not, we check the siblings of the parent of our probable nearest neighbor. We repeat these steps until we reach the root. Whenever we find an instance that is nearer, we update the nearest neighbor.
  • 72. Instance-based learning Steps to find the nearest neighbor (contd.):
  • 73. Instance-based learning Construction of the kd-tree: We need to decide two things to construct a kd-tree: along which dimension to make the cut, and which instance to use to make the cut. Deciding the dimension: calculate the variance of the points along each axis and make the division perpendicular to the axis with the greatest variance. Deciding the instance: take the median point along that axis as the point of division. Repeat these steps recursively until all the points are exhausted.
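A minimal construction sketch along these lines (greatest-variance axis, median point); in practice a library implementation such as scipy.spatial.KDTree, whose query method returns the distance and index of the nearest stored point, would usually be used for the search.

```python
import numpy as np

def build_kdtree(points):
    """Recursively build a kd-tree node as (point, axis, left, right)."""
    points = np.asarray(points, float)
    if len(points) == 0:
        return None
    axis = int(np.argmax(points.var(axis=0)))     # cut along the axis of greatest variance
    points = points[points[:, axis].argsort()]    # sort along that axis
    median = len(points) // 2                     # the median point becomes this node
    return (points[median], axis,
            build_kdtree(points[:median]),
            build_kdtree(points[median + 1:]))
```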
  • 74. Clustering Clustering techniques apply when, rather than predicting the class, we just want the instances to be divided into natural groups. Iterative instance-based learning: k-means. Here k represents the number of clusters; the instance space is divided into k clusters. K-means forms the clusters so that the sum of the squared distances of the instances from their cluster centers is minimized.
  • 75. Clustering Steps: (1) Decide the number of clusters k manually. (2) From the instance set to be clustered, randomly select k points; these are the initial centers of the k clusters. (3) Take each instance in turn, calculate its distance to every cluster center, and assign it to the cluster whose center is closest. (4) Once all instances have been assigned, take the centroid of the points in each cluster; these centroids become the new cluster centers. (5) Re-cluster all the instances and recompute the centroids to get yet another set of cluster centers. (6) Repeat step 5 until the cluster centers no longer change. Stop there; we have our k clusters. A minimal sketch follows.
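A compact sketch of these steps (illustrative; empty clusters and other practical details are not handled):

```python
import numpy as np

def kmeans(points, k, iterations=100, seed=0):
    """Minimal k-means: random initial centers, then alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, float)
    centers = points[rng.choice(len(points), size=k, replace=False)]  # random initial centers
    for _ in range(iterations):
        # Assignment step: each instance goes to its nearest cluster center.
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: each center becomes the centroid of the points assigned to it.
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):     # stop when the centers no longer move
            break
        centers = new_centers
    return centers, labels
```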