The document proposes using perceptron learners at the leaves of Hoeffding decision trees to improve performance on data streams. It introduces a new evaluation metric called RAM-Hours that considers both time and memory usage. The authors empirically evaluate different classifier models, including Hoeffding trees with perceptron and naive Bayes learners at leaves, on several datasets. Results show that hybrid models like Hoeffding naive Bayes perceptron trees often provide the best balance of accuracy, time and memory usage.
Fast Perceptron Decision Tree Learning from Evolving Data Streams
1. Fast Perceptron Decision Tree Learning from Evolving Data Streams
Albert Bifet, Geoff Holmes, Bernhard Pfahringer, and Eibe Frank
University of Waikato
Hamilton, New Zealand
Hyderabad, 23 June 2010
14th Pacific-Asia Conference on Knowledge Discovery and Data Mining
(PAKDD’10)
2. Motivation
RAM-Hours: time and memory in one measure
Hoeffding Decision Trees with Perceptron Learners at leaves
Improve performance of classification methods for data streams
4. Mining Massive Data
2007 Digital Universe: 281 exabytes (billion gigabytes); the amount of information created exceeded available storage for the first time
Web 2.0:
106 million registered users
600 million search queries per day
3 billion requests a day via its API
5. Green Computing
Green Computing: the study and practice of using computing resources efficiently
Algorithmic Efficiency: a main approach of Green Computing
Data Streams: fast methods that do not store the entire dataset in memory
6. Data stream classification cycle
1 Process an example at a time, and inspect it only once (at most)
2 Use a limited amount of memory
3 Work in a limited amount of time
4 Be ready to predict at any point
7. Mining Massive Data
Koichi Kawana: "Simplicity means the achievement of maximum effect with minimum means."
[Figure: the data stream mining trade-off among time, accuracy, and memory]
8. Evaluation Example

Classifier     Accuracy   Time   Memory
Classifier A   70%        100    20
Classifier B   80%        20     40

Which classifier is performing better?
10. Evaluation Example

Classifier     Accuracy   Time   Memory   RAM-Hours
Classifier A   70%        100    20       2,000
Classifier B   80%        20     40       800

Which classifier is performing better?
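The RAM-Hours column above is the product of the time and memory columns: cost grows with how much memory is held and for how long (the paper defines one RAM-Hour as 1 GB of RAM deployed for 1 hour). A minimal sketch, with illustrative function and parameter names:

```python
# Sketch: RAM-Hours as memory multiplied by running time.
# The unit pairing (e.g. GB x hours) is illustrative.
def ram_hours(time_units, memory_units):
    """Cost of a run: memory held, multiplied by how long it was held."""
    return time_units * memory_units

classifier_a = ram_hours(time_units=100, memory_units=20)  # 2,000
classifier_b = ram_hours(time_units=20, memory_units=40)   # 800
```

Under this measure Classifier B is both more accurate and cheaper, resolving the ambiguity of the first table.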
12. Hoeffding Trees
Hoeffding Tree: VFDT
Pedro Domingos and Geoff Hulten. Mining high-speed data streams. 2000.
With high probability, constructs a model identical to the one a traditional (greedy) batch method would learn
With theoretical guarantees on the error rate
[Example decision tree over the attributes Time (Day/Night) and Contains "Money" (Yes/No), with YES/NO class leaves]
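The Hoeffding tree decides when it has seen enough examples to split by comparing attribute gains against the Hoeffding bound. A minimal sketch of that test, assuming illustrative variable names and gain values:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound: with probability 1 - delta, the true mean of a
    random variable with range value_range lies within epsilon of the
    mean observed over n independent examples."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# Split when the gap between the best and second-best attribute's
# observed gain exceeds epsilon (values below are illustrative):
eps = hoeffding_bound(value_range=1.0, delta=1e-7, n=1000)
best_gain, second_gain = 0.30, 0.15
should_split = (best_gain - second_gain) > eps
```

Because epsilon shrinks as n grows, the tree defers a split only until the leader among the candidate attributes is statistically safe, which is what gives the guarantee on the slide.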
13. Hoeffding Naive Bayes Tree
Hoeffding Tree: Majority Class learner at leaves
Hoeffding Naive Bayes Tree
G. Holmes, R. Kirkby, and B. Pfahringer. Stress-testing Hoeffding trees, 2005.
monitors accuracy of a Majority Class learner
monitors accuracy of a Naive Bayes learner
predicts using the most accurate method
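The leaf strategy above can be sketched as a race between candidate learners: each example first tests every learner, then updates the counts, and prediction uses the current leader. The class and method names here are mine, not MOA's API, and each learner's own training step is omitted:

```python
class LeafRacer:
    """Track each candidate leaf learner's prequential accuracy and
    predict with whichever learner is currently most accurate."""

    def __init__(self, learners):
        self.learners = learners                      # name -> predict function
        self.correct = {name: 0 for name in learners}
        self.seen = 0

    def learn_one(self, x, y):
        # Test-then-train: score each learner on the example first.
        for name, predict in self.learners.items():
            if predict(x) == y:
                self.correct[name] += 1
        self.seen += 1
        # (Each learner would also update itself on (x, y) here.)

    def predict_one(self, x):
        best = max(self.correct, key=self.correct.get)
        return self.learners[best](x)
```

With Majority Class and Naive Bayes as the two candidates this is the Hoeffding Naive Bayes Tree's leaf rule; adding a perceptron gives the three-way variant introduced later in the talk.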
15. Perceptron
[Figure: perceptron with inputs Attribute 1-5, weights w1-w5, and output h_w(x_i)]
We use the sigmoid function h_w(x) = σ(wᵀx), where
σ(x) = 1/(1 + e^(−x))
σ′(x) = σ(x)(1 − σ(x))
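The derivative identity σ′(x) = σ(x)(1 − σ(x)) is what makes the gradient on the next slide cheap to compute. A quick numerical check (a sketch of mine, not from the slides):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Central finite difference vs. the closed form sigma(x) * (1 - sigma(x))
x, h = 0.7, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
analytic = sigmoid(x) * (1 - sigmoid(x))
```

The two values agree to well within floating-point noise.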
16. Perceptron
Minimize the mean-square error: J(w) = 1/2 Σ_i (y_i − h_w(x_i))²
Stochastic gradient descent: w = w − η ∇J(x_i)
Gradient of the error function:
∇J = −Σ_i (y_i − h_w(x_i)) ∇h_w(x_i)
∇h_w(x_i) = h_w(x_i)(1 − h_w(x_i)) x_i
Weight update rule:
w = w + η Σ_i (y_i − h_w(x_i)) h_w(x_i)(1 − h_w(x_i)) x_i
17. Perceptron

PERCEPTRON LEARNING(Stream, η)
1  for each class
2      do PERCEPTRON LEARNING(Stream, class, η)

PERCEPTRON LEARNING(Stream, class, η)
1  ▷ Let w₀ and w be randomly initialized
2  for each example (x, y) in Stream
3      do if class = y
4          then δ = (1 − h_w(x)) · h_w(x) · (1 − h_w(x))
5          else δ = (0 − h_w(x)) · h_w(x) · (1 − h_w(x))
6      w = w + η · δ · x

PERCEPTRON PREDICTION(x)
1  return arg max_class h_(w_class)(x)
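The one-vs-all scheme in the pseudocode can be made runnable as follows. This is a sketch under my own naming (the slides' pseudocode is the reference; MOA's actual implementation may differ):

```python
import math
import random

class SigmoidPerceptron:
    """One-vs-all sigmoid perceptrons: one weight vector per class,
    trained online with the delta rule from the pseudocode."""

    def __init__(self, n_classes, n_features, eta=0.1, seed=0):
        rng = random.Random(seed)
        self.eta = eta
        # Small random initialization, one weight vector per class.
        self.w = [[rng.uniform(-0.05, 0.05) for _ in range(n_features)]
                  for _ in range(n_classes)]

    def _h(self, c, x):
        # h_w(x) = sigmoid(w . x) for class c's weight vector.
        s = sum(wi * xi for wi, xi in zip(self.w[c], x))
        return 1.0 / (1.0 + math.exp(-s))

    def learn_one(self, x, y):
        for c in range(len(self.w)):
            h = self._h(c, x)
            target = 1.0 if c == y else 0.0
            delta = (target - h) * h * (1.0 - h)   # (y - h) * h * (1 - h)
            for i in range(len(x)):
                self.w[c][i] += self.eta * delta * x[i]

    def predict_one(self, x):
        # PERCEPTRON PREDICTION: argmax over per-class outputs.
        return max(range(len(self.w)), key=lambda c: self._h(c, x))
```

A single pass over each example keeps the update O(features x classes), which is what makes this leaf learner suitable for streams.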
18. Hybrid Hoeffding Trees
Hoeffding Naive Bayes Tree: two learners at leaves: Naive Bayes and Majority Class
Hoeffding Perceptron Tree: two learners at leaves: Perceptron and Majority Class
Hoeffding Naive Bayes Perceptron Tree: three learners at leaves: Naive Bayes, Perceptron, and Majority Class
20. What is MOA?
{M}assive {O}nline {A}nalysis is a framework for online learning from data streams.
It is closely related to WEKA.
It includes a collection of offline and online methods as well as tools for evaluation:
boosting and bagging
Hoeffding Trees, with and without Naïve Bayes classifiers at the leaves
21. What is MOA?
Easy to extend
Easy to design and run experiments
Philipp Kranen, Hardy Kremer, Timm Jansen, Thomas Seidl, Albert Bifet, Geoff Holmes, Bernhard Pfahringer (RWTH Aachen University, University of Waikato). Benchmarking Stream Clustering Algorithms within the MOA Framework. KDD 2010 Demo.
22. MOA: the bird
The Moa (another native NZ bird) is not only flightless, like the
Weka, but also extinct.
25. Concept Drift Framework
[Figure: sigmoid function f(t) rising from 0 toward 1, centered at t₀, with transition window W and slope angle α]
Definition
Given two data streams a, b, we define c = a ⊕_(t₀)^W b as the data stream built by joining the two data streams a and b, where
Pr[c(t) = b(t)] = 1/(1 + e^(−4(t−t₀)/W))
Pr[c(t) = a(t)] = 1 − Pr[c(t) = b(t)]
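The joined stream can be sketched directly from the definition: at each time step, emit from stream b with the sigmoid probability above, otherwise from stream a, so that b gradually takes over around t₀. Function names here are mine:

```python
import math
import random

def prob_from_b(t, t0, W):
    """Pr[c(t) = b(t)]: sigmoid transition centered at t0 with window W."""
    return 1.0 / (1.0 + math.exp(-4.0 * (t - t0) / W))

def join_streams(a, b, t0, W, seed=0):
    """Sketch of c = a (+)_{t0}^W b: at step t emit b's element with
    probability prob_from_b(t, t0, W), else a's element."""
    rng = random.Random(seed)
    for t, (xa, xb) in enumerate(zip(a, b)):
        yield xb if rng.random() < prob_from_b(t, t0, W) else xa
```

At t = t₀ the two streams are equally likely, and W controls how abrupt the drift is: a small W approximates sudden concept change, a large W a gradual one.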