1
Decision Tree Induction
2
Attribute Selection Measures
 Heuristic for selecting splitting criterion
 Also termed splitting rules
 Ranks the attributes
 If the selected attribute is continuous-valued or restricted to a binary split,
then a split point or splitting subset must also be chosen
 Common measures – Information Gain, Gain Ratio, Gini Index
 Notations
 D – Data Partition
 Class label attribute has m distinct values – Ci, i = 1..m
 Ci,D – Set of tuples of class i in D
3
Attribute Selection Measure:
Information Gain
 Proposed by Claude Shannon
 Select the attribute with the highest information gain
 Minimizes the information needed to classify tuples in the resulting
partition
 Minimizes the expected number of tests needed to classify a tuple
 Expected information needed to classify a tuple in D
Info(D) = − ∑i=1..m pi log2(pi)
where pi = |Ci,D| / |D| is the probability that an arbitrary sample belongs to class Ci
 Info(D) - Average information required to classify a tuple in D
 Info(D) is also known as entropy of D
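For concreteness, here is a minimal Python sketch of Info(D). It is not part of the original slides; the function name info and the example counts are illustrative assumptions.

    import math

    def info(class_counts):
        """Expected information (entropy) Info(D), given the count of tuples in each class."""
        total = sum(class_counts)
        return -sum((c / total) * math.log2(c / total)
                    for c in class_counts if c > 0)

    # A partition with 9 tuples of one class and 5 of the other:
    print(round(info([9, 5]), 3))   # 0.940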
4
Information Gain
 Attribute A with v distinct values {a1, a2, …av}
 If A is discrete-valued, partition D into v subsets {D1, D2, …, Dv}
 Each partition is expected to be pure
 Additional information required to classify the samples is:
InfoA(D) = ∑j=1..v (|Dj| / |D|) × Info(Dj)
 |Dj| / |D| - Weight of partition
 InfoA(D) – Expected information required to classify a tuple from D based
on partitioning by A
 The smaller the expected information, the greater the purity of the partitions
 The Information Gain is given by:
Gain(A) = Info(D) – InfoA(D)
 Expected reduction in Information requirement by choosing A
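Building on the info sketch above (again an illustrative sketch, not from the slides), InfoA(D) and Gain(A) can be computed from the per-class counts of each partition Dj:

    def info_a(partitions):
        """InfoA(D): weighted expected information after splitting on A.
        `partitions` holds one per-class count list for each subset Dj."""
        total = sum(sum(p) for p in partitions)
        return sum((sum(p) / total) * info(p) for p in partitions)

    def gain(class_counts, partitions):
        """Gain(A) = Info(D) - InfoA(D)."""
        return info(class_counts) - info_a(partitions)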
5
Example
Age income student credit_rating buys_computer
youth high no fair no
youth high no excellent no
middle-aged high no fair yes
senior medium no fair yes
senior low yes fair yes
senior low yes excellent no
middle-aged low yes excellent yes
youth medium no fair no
youth low yes fair yes
senior medium yes fair yes
youth medium yes excellent yes
middle-aged medium no excellent yes
middle-aged high yes fair yes
senior medium no excellent no
Based on the class attribute ‘buys_computer’, the number of classes is m = 2:
C1 – ‘Yes’, C2 – ‘No’
6
Example
 Expected information needed to classify a sample: Info(D) = −(9/14) log2(9/14) − (5/14) log2(5/14) = 0.940 bits
 Expected Information requirement for age
Age D1i D2i I(D1i,D2i)
youth 2 3 0.971
middle_aged 4 0 0
senior 3 2 0.971
Gain(age) = Info(D) – Infoage(D) = 0.940 – 0.694 = 0.246 bits
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048
Age has highest gain
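Applying the info, info_a, and gain sketches added above to this table reproduces the figures on the slide (a hedged check, not part of the original deck):

    # Per-value class counts [yes, no] for age, read off the table: youth, middle-aged, senior.
    age_partitions = [[2, 3], [4, 0], [3, 2]]
    D = [9, 5]                                   # 9 'yes' and 5 'no' tuples overall

    print(round(info(D), 3))                     # 0.940
    print(round(info_a(age_partitions), 3))      # 0.694
    print(round(gain(D, age_partitions), 3))     # 0.246, the highest gain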
7
Example
8
Information-Gain for Continuous-Value Attributes
 Must determine the best split point for A
 Sort the values of A in increasing order
 Typically, the midpoint between each pair of adjacent values is
considered as a possible split point
 (ai+ai+1)/2 is the midpoint between the values of ai and ai+1
 The point with the minimum expected information requirement for A
is selected as the split-point for A
 Calculate InfoA(D) for each possible split point and choose the one with the minimum value
 Split:
 D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the
set of tuples in D satisfying A > split-point
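A minimal sketch of the split-point search, assuming the info_a helper from the earlier sketch; best_split_point is an illustrative name, not an API from any particular library:

    from collections import Counter

    def best_split_point(values, labels):
        """Return the midpoint with the minimum expected information InfoA(D)."""
        pairs = sorted(zip(values, labels))
        best_info, best_point = float("inf"), None
        for i in range(len(pairs) - 1):
            if pairs[i][0] == pairs[i + 1][0]:
                continue                         # equal adjacent values: no midpoint
            midpoint = (pairs[i][0] + pairs[i + 1][0]) / 2
            left = Counter(lbl for v, lbl in pairs if v <= midpoint)
            right = Counter(lbl for v, lbl in pairs if v > midpoint)
            expected = info_a([list(left.values()), list(right.values())])
            if expected < best_info:
                best_info, best_point = expected, midpoint
        return best_point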
9
Gain Ratio for Attribute Selection
 Information gain measure is biased towards attributes with a large
number of values
 Many distinct values produce many partitions, each of which tends to be pure
 C4.5 (a successor of ID3) uses gain ratio to overcome the problem
(normalization to information gain)
 A split information value is used to normalize the gain:
SplitInfoA(D) = − ∑j=1..v (|Dj| / |D|) × log2(|Dj| / |D|)
 Potential information generated by splitting the training data set D into v partitions – considers the number of tuples in each partition wrt the total
 GainRatio(A) = Gain(A) / SplitInfoA(D)
 The attribute with the maximum gain ratio is selected as the splitting
attribute
10
Gain Ratio - Example
 Gain ratio for the income attribute (4 ‘high’, 6 ‘medium’, 4 ‘low’ tuples)
 SplitInfoincome(D) = −(4/14) log2(4/14) − (6/14) log2(6/14) − (4/14) log2(4/14) = 1.557
 Gain(income) = 0.029
 GainRatio(income) = 0.029 / 1.557 ≈ 0.019
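A minimal sketch of the gain-ratio computation, reusing the gain and info helpers from the earlier sketches; split_info and gain_ratio are illustrative names:

    import math

    def split_info(partition_sizes):
        """SplitInfoA(D) over the partition sizes |Dj|."""
        total = sum(partition_sizes)
        return -sum((s / total) * math.log2(s / total)
                    for s in partition_sizes if s > 0)

    def gain_ratio(class_counts, partitions):
        si = split_info([sum(p) for p in partitions])
        return gain(class_counts, partitions) / si if si > 0 else 0.0

    income_partitions = [[2, 2], [4, 2], [3, 1]]              # high, medium, low: [yes, no]
    print(round(gain_ratio([9, 5], income_partitions), 3))    # ≈ 0.019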
11
Gini Index
 Measures the impurity of a data partition
 pj is probability of a tuple belonging to class Cj
 Considers a binary split for each value
 To determine best binary split on A, all possible subsets that can be
formed are considered
 2^v − 2 possible ways to form two partitions
 For binary splits
 Reduction in impurity
 Attribute that maximizes reduction in impurity or one with
minimum Gini index is chosen
gini(D) = 1 − ∑j=1..n pj^2
giniA(D) = (|D1| / |D|) × gini(D1) + (|D2| / |D|) × gini(D2)
Δgini(A) = gini(D) − giniA(D)
12
Gini index
 Ex. D has 9 tuples in buys_computer = “yes” and 5 in “no”
gini(D) = 1 − (9/14)^2 − (5/14)^2 = 0.459
 Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 tuples in D2: {high}
giniincome∈{low,medium}(D) = (10/14) × gini(D1) + (4/14) × gini(D2) = 0.443
 The remaining binary splits give gini{low,high}(D) = 0.458 and gini{medium,high}(D) = 0.450, so the split on {low, medium} (and {high}) is the best since it has the lowest Gini index
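The three candidate binary splits on income can be checked with a short sketch (illustrative helper names, not from the slides):

    from itertools import combinations

    def gini(class_counts):
        """Gini impurity from per-class counts."""
        total = sum(class_counts)
        return 1.0 - sum((c / total) ** 2 for c in class_counts)

    def gini_split(counts_d1, counts_d2):
        """Weighted Gini index of a binary split D1 / D2."""
        n1, n2 = sum(counts_d1), sum(counts_d2)
        return (n1 / (n1 + n2)) * gini(counts_d1) + (n2 / (n1 + n2)) * gini(counts_d2)

    income = {"low": [3, 1], "medium": [4, 2], "high": [2, 2]}   # value -> [yes, no]
    for subset in combinations(income, 2):                       # each 2-value subset vs. its complement
        d1 = [sum(income[v][c] for v in subset) for c in (0, 1)]
        rest = [v for v in income if v not in subset]
        d2 = [sum(income[v][c] for v in rest) for c in (0, 1)]
        print(set(subset), "vs", set(rest), round(gini_split(d1, d2), 3))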
13
Attribute Selection Measures
 The three measures, in general, return good results but
 Information gain:
 biased towards multivalued attributes
 Gain ratio:
 tends to prefer unbalanced splits in which one partition is much
smaller than the others
 Gini index:
 biased towards multivalued attributes
 has difficulty when # of classes is large
 tends to favor tests that result in equal-sized partitions and
purity in both partitions
14
Other Attribute Selection Measures
 CHAID: a popular decision tree algorithm; its measure is based on the χ2 test for independence
 C-SEP: performs better than info. gain and gini index in certain cases
 G-statistic: has a close approximation to the χ2 distribution
 MDL (Minimal Description Length) principle (i.e., the simplest solution
is preferred):
 The best tree is the one that requires the fewest number of bits to both (1) encode the tree and (2) encode the exceptions to the tree
 Multivariate splits (partition based on multiple variable combinations)
 CART: finds multivariate splits based on a linear comb. of attrs.
 Best attribute selection measure
 Most give good results; none is significantly superior to the others
15
Overfitting and Tree Pruning
 Overfitting: An induced tree may overfit the training data
 Too many branches, some may reflect anomalies due to noise or outliers
 Poor accuracy for unseen samples
 Two approaches to avoid overfitting
 Prepruning: Halt tree construction early—do not split a node if this would
result in the goodness measure falling below a threshold
 Difficult to choose an appropriate threshold
 Postpruning: Remove branches from a “fully grown” tree—get a sequence
of progressively pruned trees
 Use a set of data different from the training data to decide which is the
“best pruned tree”
16
Tree Pruning
 Cost Complexity pruning
 Post pruning approach used in CART
 Cost complexity – a function of the number of leaves and the error rate of the tree (see the note after this slide)
 For each internal node, the cost complexity is calculated for both the original subtree and its pruned version
 If pruning results in a smaller cost complexity – subtree is pruned
 Uses a separate prune set
 Pessimistic Pruning
 Uses training set and adjusts error rates by adding a penalty
 Minimum Description Length (MDL) principle
 Issues: Repetition and Replication
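As a hedged illustration of the cost-complexity measure referred to above (its exact form is not on the original slide), CART-style cost complexity is commonly written as

cost_complexity(T) = error(T) + α × |leaves(T)|

where error(T) is the error rate of subtree T, |leaves(T)| is its number of leaf nodes, and α ≥ 0 weighs tree size against error; a subtree is replaced by a leaf when doing so does not increase the cost complexity.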
17
Enhancements to Basic Decision
Tree Induction
 Allow for continuous-valued attributes
 Dynamically define new discrete-valued attributes that partition the
continuous attribute value into a discrete set of intervals
 Handle missing attribute values
 Assign the most common value of the attribute
 Assign probability to each of the possible values
 Attribute construction
 Create new attributes based on existing ones that are sparsely
represented
 This reduces fragmentation, repetition, and replication
18
Scalability and Decision Tree Induction
 Scalability: Classifying data sets with millions of examples and
hundreds of attributes with reasonable speed
 Large scale databases – do not fit into memory
 Repeated swapping of data between memory and disk – inefficient
 Scalable Variants
 SLIQ, SPRINT
19
Scalable Decision Tree Induction
Methods
 SLIQ
 Supervised Learning in Quest
 Presort the data
 Builds an index for each attribute; only the class list and the current attribute list reside in memory
 Each attribute has an associated attribute list indexed by a Record
Identifier (RID) – linked to class list
 Class list points to node
 Limited by the size of the class list
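A schematic sketch of the SLIQ-style layout (illustrative field names and toy records, not the actual implementation):

    # One presorted attribute list per attribute, plus a single in-memory class list keyed by RID.
    training = [
        {"rid": 0, "age": 25, "income": "high",   "class": "no"},
        {"rid": 1, "age": 38, "income": "low",    "class": "yes"},
        {"rid": 2, "age": 45, "income": "medium", "class": "yes"},
    ]

    # Attribute list for 'age': (value, RID) pairs, presorted by value.
    age_list = sorted((t["age"], t["rid"]) for t in training)

    # Class list: RID -> (class label, current leaf node of the growing tree).
    class_list = {t["rid"]: {"class": t["class"], "node": "root"} for t in training}

    # Evaluating splits on 'age' needs only this attribute list and the class list in memory.
    for value, rid in age_list:
        print(value, class_list[rid]["class"], class_list[rid]["node"])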
20
SLIQ Example
21
Scalable Decision Tree Induction Methods
 SPRINT
 Scalable PaRallelizable INduction of decision Trees
 Constructs an attribute list data structure
 Class, Attribute and RID for each attribute
 Can be parallelized
22
Scalability Framework for RainForest
 Separates the scalability aspects from the criteria that determine
the quality of the tree
 Builds an AVC-list: AVC (Attribute, Value, Class_label)
 AVC-set (of an attribute X )
 Projection of training dataset onto the attribute X and class label
where counts of individual class label are aggregated
23
Rainforest: Training Set and AVC Sets
Training Examples
age income student credit_rating buys_computer
<=30 high no fair no
<=30 high no excellent no
31…40 high no fair yes
>40 medium no fair yes
>40 low yes fair yes
>40 low yes excellent no
31…40 low yes excellent yes
<=30 medium no fair no
<=30 low yes fair yes
>40 medium yes fair yes
<=30 medium yes excellent yes
31…40 medium no excellent yes
31…40 high yes fair yes
>40 medium no excellent no

AVC-set on age (Buy_Computer: yes / no)
<=30: 3 / 2
31..40: 4 / 0
>40: 3 / 2

AVC-set on income (Buy_Computer: yes / no)
high: 2 / 2
medium: 4 / 2
low: 3 / 1

AVC-set on student (Buy_Computer: yes / no)
yes: 6 / 1
no: 3 / 4

AVC-set on credit_rating (Buy_Computer: yes / no)
fair: 6 / 2
excellent: 3 / 3
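A minimal sketch of how an AVC-set can be aggregated from the training tuples (the avc_set helper is an illustrative name, not part of RainForest itself):

    from collections import defaultdict

    def avc_set(rows, attribute, class_label="buys_computer"):
        """For each value of `attribute`, aggregate the count of each class label."""
        counts = defaultdict(lambda: defaultdict(int))
        for row in rows:
            counts[row[attribute]][row[class_label]] += 1
        return {value: dict(c) for value, c in counts.items()}

    rows = [
        {"age": "<=30", "income": "high", "student": "no",
         "credit_rating": "fair", "buys_computer": "no"},
        # ... the remaining 13 training tuples from the table above
    ]
    print(avc_set(rows, "age"))    # {'<=30': {'no': 1}} for the single tuple listed here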
24
BOAT (Bootstrapped Optimistic
Algorithm for Tree Construction)
 Use a statistical technique called bootstrapping to create several smaller samples (subsets), each of which fits in memory
 Each subset is used to create a tree, resulting in several trees
 These trees are examined and used to construct a new tree T’
 It turns out that T’ is very close to the tree that would be generated
using the whole data set together
 Adv: requires only two scans of DB, an incremental algorithm
 Very much faster
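Only the bootstrapping step is sketched below (a hedged illustration; BOAT's tree combination and refinement against the full data set are not shown):

    import random

    def bootstrap_subsets(dataset, n_subsets, subset_size):
        """Draw n_subsets samples with replacement, each small enough to fit in memory;
        a tree is then grown on each subset."""
        return [random.choices(dataset, k=subset_size) for _ in range(n_subsets)]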
Speaker notes
1. I: the expected information needed to classify a given sample. E (entropy): the expected information based on the partitioning into subsets by A.