Daniel Roggen
2011
Wearable Computing
Part IV
Ensemble classifiers
Insight into ongoing research
© Daniel Roggen www.danielroggen.net droggen@gmail.com
[Figure: the activity recognition chain. Sensors (S0–S4) are sampled and preprocessed (P0–P4), the signals are segmented, features are extracted (F0–F3) and classified (C0–C2), and the classifier decisions are fused; null-class rejection yields the recognized context/activity, moving from subsymbolic to symbolic processing (reasoning). At design time, a training phase uses sensor data and annotations to learn low-level activity models (primitives) and high-level activity models; at runtime, the recognition phase outputs detected activities (Ai, pi, ti) over time t to an activity-aware application.]
Many classifiers: Ensemble classifiers
• What are they?
• How to generate ensembles?
• What are they useful for in wearable computing?
What are ensemble classifiers?
Several classifiers are trained on the labelled data {(X1,y1), (X2,y2), …, (Xn,yn)} and their outputs are merged by decision fusion.
Why?
• Intuitively: increasing the confidence in the decision taken
– Seek additional opinion before making a decision
– Read multiple product reviews
– Request references before hiring someone
Background
• 1786 Condorcet’s Jury Theorem
– Probability that a group of individuals arrives at a correct decision
– Each individual votes correctly with probability p, incorrectly with probability 1-p
– With p > 0.5, the more voters, the higher the probability that the majority decision is correct
– « Theoretical basis for democracy »
http://en.wikipedia.org/wiki/Condorcet_jury_theorem
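The theorem is easy to check numerically. A minimal sketch (the function name is mine): the majority of n independent voters, each correct with probability p, is correct with probability given by a binomial tail.

```python
import math

def majority_correct_prob(p, n):
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct decision (n odd)."""
    k_min = n // 2 + 1  # smallest winning majority
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

print(majority_correct_prob(0.6, 1))    # 0.6: a single voter
print(majority_correct_prob(0.6, 11))   # larger group, higher accuracy
print(majority_correct_prob(0.6, 101))  # approaches 1 as n grows
```

Note the flip side: with p < 0.5 the same formula shows the majority becomes almost surely wrong as n grows.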
Also known as…
• Combination of multiple classifiers
• Classifier fusion
• Classifier ensembles
• Mixture of experts
• Consensus aggregation
• Composite classifier systems
• Dynamic classifier selection
• …
Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
Why are classifier ensembles interesting?
• Ruta: Another approach [to progress in decision support systems] suggests
that as the limits of the existing individual method are approached and it is
hard to develop a better one, the solution of the problem might be just to
combine existing well performing methods, hoping that better results will be
achieved.
• Dietterich: The main discovery is that ensembles are often much more
accurate than the individual classifiers that make them up.
• Polikar: If we had access to a classifier with perfect generalization
performance, there would be no need to resort to ensemble techniques. The
realities of noise, outliers and overlapping data distributions, however, make
such a classifier an impossible proposition. At best, we can hope for
classifiers that correctly classify the field data most of the time. The strategy
in ensemble systems is therefore to create many classifiers, and combine
their outputs such that the combination improves upon the performance of a
single classifier.
Ruta et al., An overview of classifier fusion methods, Computing and Information Systems, 2000
Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000
Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
Motivation
Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000
• Statistical: insufficient data – many classifiers give the same accuracy on the training data; an ensemble of ‘accurate’ classifiers reduces the risk of choosing the wrong classifier
• Computational: enough training data, but it is computationally difficult to find the best classifier (local optima); an ensemble constructed from different starting points better approximates the true f
• Representational: the true f cannot be represented by any of the classifiers in H; a combination of multiple classifiers expands the set of representable functions
Dietterich: “These three fundamental issues are the three most important ways in which
existing learning algorithms fail. Hence, ensemble methods have the promise of reducing (and
perhaps even eliminating) these three key shortcomings of standard learning algorithms.”
Motivation
• Statistical reasons:
– Good performance on training set does not guarantee generalization
– Combining classifiers reduces the risk of selecting a poorly performing one
• Large volume of data
– Training classifiers with large amounts of data can be impractical
– Partition data in smaller subsets and train/combine specific classifiers
• Too little data
– Resampling techniques and training of different classifiers on (random) subsets
• Data fusion
– Multiple/multimodal sensors
– For each modality a specific classifier is trained, and then combined
• Divide and conquer
– Too complex decision boundary for a single classifier
– Approximate the complex decision boundary by multiple classifiers
Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
Divide and conquer
Classifier selection / Classifier fusion
• Classifier selection: Use an expert in a local area of the feature
space
• Classifier fusion: merge individual (weaker) learners to obtain a
single (stronger) learner
The diversity problem
• Classifiers must (in a fused sense) agree on the right decision
• When classifiers disagree, they must disagree differently
5 classifiers, majority voting:
h0: 0, h1: 1, h2: 0, h3: 2, h4: 3 → majority decision: 0
If class 0 is the correct label, the ensemble is right even though three of the five classifiers erred, because their errors are spread over different classes.
• Classifiers are diverse if they make different errors on data points
• A strategy for ensemble generation must find diverse classifiers
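The voting example above can be sketched in a few lines (a minimal majority-vote helper; the function name is mine):

```python
from collections import Counter

def majority_vote(decisions):
    """Most frequent class label wins; the ensemble is correct as long as
    the wrong classifiers spread their errors over different classes."""
    return Counter(decisions).most_common(1)[0][0]

# The example above: h0..h4 output 0, 1, 0, 2, 3
print(majority_vote([0, 1, 0, 2, 3]))  # -> 0
```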
Measuring diversity
• A good diversity measure should relate to the ensemble accuracy
• No strict definition of ‘diversity’ – an active area of research
• For two classifiers: established statistical literature
• For three or more classifiers: no consensus
Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
Kuncheva, Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy,
Machine Learning, 2003
Measuring diversity: pair-wise measures
• Average of all pair-wise diversity measures
• Q-Statistics
• Correlation
• Disagreement, double fault
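The pair-wise Q-statistic can be computed from the two classifiers' per-sample correctness, as Q = (N11·N00 − N01·N10) / (N11·N00 + N01·N10), where Nab counts samples on which the first classifier is correct (a=1) or not (a=0) and likewise for the second. A sketch (function name mine):

```python
def q_statistic(correct_i, correct_j):
    """Yule's Q for a classifier pair, from per-sample correctness flags:
    Q = 1 when errors coincide, 0 under independence, Q < 0 when the two
    classifiers tend to fail on different samples (desirable diversity)."""
    pairs = list(zip(correct_i, correct_j))
    n11 = sum(a and b for a, b in pairs)          # both correct
    n00 = sum(not a and not b for a, b in pairs)  # both wrong
    n10 = sum(a and not b for a, b in pairs)      # only i correct
    n01 = sum(not a and b for a, b in pairs)      # only j correct
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

# Two classifiers that never fail on the same sample: maximally diverse
print(q_statistic([True, True, False, True],
                  [False, True, True, True]))  # -> -1.0
```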
Measuring diversity: summary
• No diversity measure consistently correlates with higher accuracy
• “Although a rough tendency was confirmed … no prominent links appeared
between the diversity of the ensemble and its accuracy. Diversity alone is a
poor predictor of the ensemble accuracy” [1]
• “Although there are proven connections between diversity and accuracy in
some special cases, our results raise some doubts about the usefulness of
diversity measures in building classifier ensembles in real-life pattern
recognition problems.” [2]
[1] Kuncheva, That Elusive Diversity in Classifier Ensembles, IbPRIA, 2003
[2] Kuncheva, Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble
Accuracy, Machine Learning, 2003
Measuring diversity: summary
• In the absence of additional information Q may be recommended
– Simple implementation
– Limits: [-1;1]
– Independence value: 0
Kuncheva, Is Independence Good for Combining Classifiers?, Proc. Int. Conf. Pattern Recognition, 2000
How to obtain diversity
Strategies for ensemble generation
1. Enumerating the hypotheses
2. Manipulating the training examples
3. Manipulating the input features
4. Manipulating the output targets
5. Injecting randomness
Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000
Brown, Yao, Diversity creation methods: a survey and categorisation, Information Fusion, 2005
Strategy for ensemble generation (1)
Manipulating the training examples
• Learning algorithm run multiple times on different training subsets
• Suited for unstable classifiers
– decision trees, neural networks, …
– (Stable: linear regression, nearest neighbor, linear threshold)
• Methods:
– Bagging: randomly draw samples from training set
– Cross-validation: leave out disjoint subsets from training
– Boosting: draw samples with more likelihood for difficult samples
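Bagging, the first of these methods, can be sketched in a few lines. All names below are mine, and the nearest-mean base learner is only a toy stand-in for an unstable classifier:

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) samples with replacement; each replicate contains
    ~63% of the unique training points, creating diversity for unstable
    base learners."""
    return [rng.choice(data) for _ in data]

def bagging_train(data, train_fn, n_classifiers=10, seed=0):
    """Bagging sketch: train one classifier per bootstrap replicate.
    `train_fn` is a hypothetical user-supplied learning algorithm."""
    rng = random.Random(seed)
    return [train_fn(bootstrap_sample(data, rng)) for _ in range(n_classifiers)]

def train_mean_classifier(data):
    """Toy base learner: label by the nearest class mean on 1-D inputs."""
    by_class = {}
    for x, y in data:
        by_class.setdefault(y, []).append(x)
    centers = {y: sum(v) / len(v) for y, v in by_class.items()}
    return lambda x: min(centers, key=lambda y: abs(x - centers[y]))

data = [(0.1, 'a'), (0.2, 'a'), (0.9, 'b'), (1.1, 'b')]
ensemble = bagging_train(data, train_mean_classifier, n_classifiers=5)
votes = [h(1.0) for h in ensemble]
print(max(set(votes), key=votes.count))  # majority vote of the bag
```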
Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
Strategy for ensemble generation (1)
Strategy for ensemble generation (2)
Manipulating the input features
• Change the set of input features available to the learning algorithm
• E.g. select/group features according to identical sensors
• Works best when the input features are partly redundant
• Input decimated ensembles [1]
[1] Tumer, Oza, Input decimated ensembles, Pattern Anal Applic, 2003
Ho, The Random Subspace Method for Constructing Decision Forests, IEEE PAMI, 1998
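A minimal sketch of the random-subspace idea (names mine): draw a random feature subset per ensemble member, e.g. grouping the features of one sensor.

```python
import random

def random_subspaces(n_features, n_classifiers, subspace_size, seed=0):
    """Random Subspace Method sketch: each ensemble member is assigned a
    random subset of feature indices, so the members err differently when
    the features are partly redundant."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subspace_size))
            for _ in range(n_classifiers)]

for s in random_subspaces(n_features=9, n_classifiers=4, subspace_size=3):
    print(s)  # each classifier is trained on 3 of the 9 features
```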
Strategy for ensemble generation (3)
Manipulating the output targets
• Classification: {(X1,y1),(X2,y2)…(Xn,yn)}
• Change the classification problem by changing y
• Error correcting codes
– Change from one K-class classifier to log2(K) two-class classifiers
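A sketch of the minimal binary output coding (names mine): with K classes, ceil(log2 K) two-class classifiers each predict one bit of the class codeword, and decoding picks the nearest codeword in Hamming distance — longer, redundant codes are what make the scheme error-correcting.

```python
import math

def class_codewords(n_classes):
    """Minimal binary output coding: each of K classes gets a distinct
    ceil(log2 K)-bit codeword; one 2-class classifier is trained per bit."""
    n_bits = max(1, math.ceil(math.log2(n_classes)))
    return {k: [(k >> b) & 1 for b in range(n_bits)] for k in range(n_classes)}

def decode(bits, codewords):
    """Map bit predictions to the nearest codeword in Hamming distance."""
    return min(codewords,
               key=lambda k: sum(p != c for p, c in zip(bits, codewords[k])))

codes = class_codewords(4)    # 4 classes -> 2 binary classifiers
print(codes[3])               # -> [1, 1]
print(decode([1, 0], codes))  # -> 1 (codeword of class 1 is [1, 0])
```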
Strategy for ensemble generation (4)
Injecting randomness
• Randomness in the learning algorithm
• E.g.
– initial weights of a neural network
– initial parameters of HMM
– C4.5: random selection among N best decision tree splits
How to combine the classifiers?
Ruta et al., An overview of classifier fusion methods, Computing and Information Systems, 2000
• (weighted) Majority voting
– Class label output
– Select the class most voted for
• Mean rule
– Continuous output
– Support for class wj is average of classifier output
• Product rule
– Continuous output
– Product of classifier output
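The mean and product rules can be sketched directly (function names mine); each classifier outputs a vector of continuous class supports, e.g. estimated posteriors:

```python
def mean_rule(outputs):
    """Mean rule: support for class j = average of the classifiers'
    continuous outputs for j; the class with highest support wins."""
    n_classes = len(outputs[0])
    support = [sum(o[j] for o in outputs) / len(outputs)
               for j in range(n_classes)]
    return max(range(n_classes), key=support.__getitem__)

def product_rule(outputs):
    """Product rule: multiply the per-class outputs; suited to classifiers
    emitting independent posterior probability estimates."""
    support = [1.0] * len(outputs[0])
    for o in outputs:
        support = [s * p for s, p in zip(support, o)]
    return max(range(len(support)), key=support.__getitem__)

# Three classifiers, three classes
outs = [[0.6, 0.3, 0.1],
        [0.4, 0.5, 0.1],
        [0.7, 0.2, 0.1]]
print(mean_rule(outs), product_rule(outs))  # -> 0 0
```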
Which method is better?
• No free lunch - problem dependent
• Ensemble generation
– Boosting vs Bagging: Boosting usually achieves better generalization but is more
sensitive to noise and outliers
• Ensemble combination
– General case: mean rule - consistent performance on a broad range of problems
– Reliable estimate of classifier accuracy: weighted average, weighted majority
– Classifier output posterior probabilities: product rule
Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
Which method is better?
• Ensemble combination
– No information on the distribution of classifier errors: median rule
• always leads to Pe → 0 even with heavy-tailed distributions
– Less heavy-tailed error distributions: mean rule
– For technical reasons (e.g. communication in WSN) majority vote may
be the only one that can be implemented
• Performance of the majority vote strategy coincides with the performance of
the median strategy
Cabrera, On the impact of fusion strategies on classification errors for large ensembles of classifiers, Pattern
recognition, 2006
In wearable computing
Classifier fusion
• Multimodal sensors & NULL class rejection
• Sound
• Acceleration
• Null class when the sound and acceleration classifications disagree
Ward, Gesture Spotting Using Wrist Worn Microphone and 3-Axis Accelerometer, Proc. Joint Conf on Smart
objects and ambient intelligence, 2005
Zappi, Roggen et al. Activity recognition from on-body sensors: accuracy-power trade-off by dynamic sensor selection. EWSN,
2008.
In wearable computing
In wearable computing
Classifier fusion
Robustness to faults [1]
• Graceful degradation
• Implicit fault-tolerance
Sensor scalability [2]
• Application-defined performance
• Clustering
[1] Zappi, Stiefmeier, Farella, Roggen, Benini, Tröster, Activity Recognition from On-Body Sensors by
Classifier Fusion: Sensor Scalability and Robustness, ISSNIP 2007
[2] Zappi, Lombriser, Stiefmeier, Farella, Roggen, Benini, Tröster, Activity recognition from on-body
sensors: accuracy-power trade-off by dynamic sensor selection, EWSN 2008
In wearable computing
Classifier fusion
Power-performance management [1]
[1] Zappi, Roggen et al., Network-level power-performance trade-off in wearable activity recognition: a
dynamic sensor selection approach, submitted to ACM Trans. Embedded Computing Systems
In wearable computing
Classifier selection
Select an 'expert' classifier for location class 1; select another 'expert' classifier for location class 2
Stiefmeier, Combining Motion Sensors and Ultrasonic Hands Tracking for Continuous Activity Recognition in a Maintenance Scenario
Further applications
• Classification despite missing features
– "A bootstrap-based method can provide an alternative approach to the missing
data problem by generating an ensemble of classifiers, each trained with a
random subset of the features." [1]
– "Strikingly the reduced-models approach, seldom mentioned or used,
consistently outperforms the other two [imputation] methods, sometimes by a
large margin." [2]
• E.g.:
– Long term multimodal activity recognition
– Physiological signal assessment
– Opportunistic activity recognition
[1] Polikar, Bootstrap-inspired techniques in computational intelligence, IEEE Signal Processing Magazine, 2007
[2] Provost, Handling Missing Values when Applying Classification Models, Machine Learning Research, 2007
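The bootstrap-based idea in [1] can be sketched as follows (all names mine): each ensemble member is trained on a feature subset, and at run time only members whose features are all available get to vote.

```python
def predict_with_missing(ensemble, x):
    """Missing-feature sketch: `ensemble` is a list of (feature_indices,
    classifier) pairs, each trained on a feature subset; at run time only
    members whose features are all present in `x` (a dict) vote."""
    votes = [clf([x[i] for i in feats])
             for feats, clf in ensemble
             if all(i in x for i in feats)]
    if not votes:
        raise ValueError("no classifier usable with the available features")
    return max(set(votes), key=votes.count)

# Toy members: one threshold classifier per feature
ensemble = [((0,), lambda v: 'walk' if v[0] > 0.5 else 'stand'),
            ((1,), lambda v: 'walk' if v[0] > 0.2 else 'stand')]
# Feature 1 is missing (e.g. a failed sensor): only the first member votes
print(predict_with_missing(ensemble, {0: 0.8}))  # -> 'walk'
```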
Further applications
• Enhanced robustness in activity recognition
– Typically small datasets: are we using the optimal decision boundary for field
deployment?
– Ensembles of classifiers trained with resampling
– Ensembles have different field generalization performance
• Confidence estimation/QoC
– Continuous valued output of ensemble classifiers can estimate posterior
probability [1]
• WSN
– "classifiers using data from different sensors are usually uncorrelated to a far
greater degree than classifiers which use data from the same sensor" [2]
– Distributed activity recognition (Tiny Task Network): only classification result is
required, lower bandwidth
[1] Muhlbaier, Polikar, Ensemble confidence estimates posterior probability, Int. Workshop on Multiple Classifier
Systems, 2005
[2] Fumera, Roli, A theoretical and experimental analysis of linear combiners for multiple classifier systems, IEEE
Trans. Pattern Anal. Mach. Intell., 2005
Reasons not to use ensembles
• A classifier with perfect (or good enough) generalization performance is already available
• Decreased comprehensibility
• Limited storage and computational resources
• Classifier errors are correlated, or uncorrelated but occur at a rate higher than chance
Summary
• Large body of research showing benefits of ensembles
• Some ensemble classifiers are already in use in wearable computing
• Potentials: missing features, confidence/QoC, improved robustness,
WSN
• Active field of research
Further reading
Reviews, books
• Ruta et al., An overview of classifier fusion methods, Computing and Information Systems, 2000
• Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000
• Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
• Polikar, Bootstrap-inspired techniques in computational intelligence, IEEE Signal Processing Magazine, 2007
• Kuncheva, Combining Pattern Classifiers, Methods and Algorithms, Wiley, 2005
Diversity
• Kuncheva, Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy,
Machine Learning, 2003
• Brown, Yao, Diversity creation methods: a survey and categorisation, Information Fusion, 2005
Decimation
• Tumer, Oza, Input decimated ensembles, Pattern Anal Applic, 2003
• Ho, The Random Subspace Method for Constructing Decision Forests, IEEE PAMI, 1998
Confidence
• Muhlbaier, Polikar, Ensemble confidence estimates posterior probability, Int. Workshop on Multiple Classifier
Systems, 2005
• Tourassi, Reliability Assessment of Ensemble Classifiers – Application in Mammography
Missing features
• Provost, Handling Missing Values when Applying Classification Models, Machine Learning Research, 2007
Conferences
• Proc. Workshop Multiple Classifier Systems (Springer)
Various
• Cabrera, On the impact of fusion strategies on classification errors for large ensembles of classifiers, Pattern
recognition, 2006
• Fumera, Roli, A theoretical and experimental analysis of linear combiners for multiple classifier systems, IEEE
Trans. Pattern Anal. Mach. Intell., 2005
Multiplication of sensors in real-world use
http://www.opportunity-project.eu
Activity recognition with sensors that just happen to be available
Opportunistic activity recognition
Designing a pattern recognition system without knowing the input space!
The OPPORTUNITY activity recognition chain
WP4 Ad-hoc cooperative sensing
OPPORTUNITY Architecture, Recognition goal, Self-* principles
• Specify what should be recognized but not how
– E.g.: « Detect grasping manipulative activities with wearable sensors »
• Self-organization in a coordinated sensing mission
– E.g.: « Recognition of manipulative activities » calls for sensors capable of providing
movement information and placed on the body, forming a network
• Sensor self-description (statically known characteristics)
WP1 Sensor and features
Filter variations
• Conditioning: re-define features to make them less sensitive to variations
– E.g. use magnitude of acceleration signal, rather than X,Y,Z vector
• Abstraction: different modalities map to the same feature space
– E.g. hand coordinates from inertial sensors or localization system
• Self-characterization: run-time characteristics
– E.g. location, orientation
WP2: Opportunistic classifiers
Robust classification & allow for adaptation
• Dynamic « Ensemble classifier » architecture
• Dynamic selection of most informative information channel
• Allow for multimodal data, changing sensor numbers
• Allow for adaptation
[Figure: dynamic ensemble architecture. Each sensor (sensor0 … sensorn) feeds its own classifier (classifier0 … classifiern); the class outputs (c0 … cn) are fused into a single class decision, e.g. the user's gesture.]
WP3 Dynamic adaptation and autonomous evolution
Run-time monitoring and adaptation of the system
• Adaptation to slow changes, long-term, concept drift
– Sensor degradation, change in user action-motor strategies
• Use new sensors
– Sensing infrastructure changes with upgrades
• Opportunistic user feedback
– Explicit: e.g. feedback through keyboard
– Implicit: e.g. from EEG signals
Dynamic adaptation:
power-performance management
• Dynamic ensemble classifiers
• Passively: ensemble classifiers allow for changes in the environment
• Actively: benefit of dynamic adaptation
Zappi et al. Network-level power-performance trade-off in wearable activity recognition: a dynamic sensor selection approach, To
appear ACM TECS
Adaptation: Classifier self-calibration to sensor displacement
Förster, Roggen, Tröster, Unsupervised classifier self-calibration through repeated context occurrences: is
there robustness against sensor displacement to gain?, Proc. Int. Symposium Wearable Computers, 2009
Principle: upon activity detection, classifiers are re-trained to better model the last classified activity
Calibration dynamics: class centers follow cluster displacement in feature space
Self-calibration to displaced sensors increases accuracy:
• by 33.3% in the HCI dataset
• by 13.4% in the fitness dataset
Adaptation: minimally user-supervised adaptation
[Figure: acceleration data is classified into a recognized gesture; the user flags mistakes with an error button.]
Förster et al., Incremental kNN classifier exploiting correct - error teacher for activity recognition, Submitted to ICMLA 2010
Adaptation: minimally user-supervised adaptation
• Adaptation leads to:
• Higher accuracy in the adaptive case vs. the control
• Higher input rate
• More "personalized" gestures
Förster et al., Online user adaptation in gesture and activity recognition - what’s the benefit?, Tech. Rep.
Förster et al., Incremental kNN classifier exploiting correct - error teacher for activity recognition, Submitted to ICMLA 2010
Förster et al., On the use of brain decoded signals for online user adaptive gesture recognition systems, Pervasive 2010
Adaptation: with brain-signal feedback
• ~9% accuracy increase with perfect brain signal recognition
• ~3% accuracy increase with effective brain signal recognition accuracy
• Adaptation guided by the user’s own perception of the system
• User in the loop
• New sensors may be discovered
– Infrastructure upgrades
– Entering a new environment
• Problem: How to use the sensor without self-*?
– Typical in open-ended environments
– Hard to predict what future sensors will be deployed
• Unsupervised approaches to use new sensors!
Using new sensors without supervision…
Using new sensors without supervision…
… using behavioral assumptions
• Can a reed switch recognize different gestures and
modes of locomotion?
• Extract maximum information content from simple
sensors
– Use behavioral assumptions
Using new sensors without supervision…
… using behavioral assumptions
Application to Opportunity Dataset
• Functionality of wearable sensor is learned incrementally
• Autonomous training of wearable systems
• Only needed: sporadic interactions with the environment
• Applicable in WSN/AmI as demonstrated by hardware implementation
Calatroni et al. Context Cells: Towards Lifelong Learning in Activity Recognition Systems, EuroSSC 2009
Transfer of recognition capabilities
• System designed for domain 1 should work in domain 2
• Changes of sensors between setup 1 and 2
Roggen et al., Wearable Computing: Designing and Sharing Activity-Recognition Systems Across Platforms, IEEE Robotics&Automation Magazine, 2011
Summary
• Improving wearability & user-acceptance
• Addressing real-world deployment issues
• Enabling large-scale Ambient Intelligence environments
www.opportunity-project.eu
EC grant n° 225938
Wearable technologies: what's brewing in the lab?Wearable technologies: what's brewing in the lab?
Wearable technologies: what's brewing in the lab?
 
Image Recognition in Digital Merchandising Industry
Image Recognition in Digital Merchandising IndustryImage Recognition in Digital Merchandising Industry
Image Recognition in Digital Merchandising Industry
 
Wearable Computing - Part III: The Activity Recognition Chain (ARC)
Wearable Computing - Part III: The Activity Recognition Chain (ARC)Wearable Computing - Part III: The Activity Recognition Chain (ARC)
Wearable Computing - Part III: The Activity Recognition Chain (ARC)
 
RNN & LSTM
RNN & LSTMRNN & LSTM
RNN & LSTM
 
Recurrent Neural Networks. Part 1: Theory
Recurrent Neural Networks. Part 1: TheoryRecurrent Neural Networks. Part 1: Theory
Recurrent Neural Networks. Part 1: Theory
 
Recurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRURecurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRU
 
Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications
 

Ähnlich wie Wearable Computing - Part IV: Ensemble classifiers & Insight into ongoing research

H2O World - Intro to Data Science with Erin Ledell
H2O World - Intro to Data Science with Erin LedellH2O World - Intro to Data Science with Erin Ledell
H2O World - Intro to Data Science with Erin LedellSri Ambati
 
لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...
لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...
لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...Egyptian Engineers Association
 
Identifying and classifying unknown Network Disruption
Identifying and classifying unknown Network DisruptionIdentifying and classifying unknown Network Disruption
Identifying and classifying unknown Network Disruptionjagan477830
 
NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique
NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique
NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique Sujeet Suryawanshi
 
Data science lecture4_doaa_mohey
Data science lecture4_doaa_moheyData science lecture4_doaa_mohey
Data science lecture4_doaa_moheyDoaa Mohey Eldin
 
Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5
Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5
Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5ssuser33da69
 
Review of Algorithms for Crime Analysis & Prediction
Review of Algorithms for Crime Analysis & PredictionReview of Algorithms for Crime Analysis & Prediction
Review of Algorithms for Crime Analysis & PredictionIRJET Journal
 
Machine Learning in the Financial Industry
Machine Learning in the Financial IndustryMachine Learning in the Financial Industry
Machine Learning in the Financial IndustrySubrat Panda, PhD
 
Data Science and Analysis.pptx
Data Science and Analysis.pptxData Science and Analysis.pptx
Data Science and Analysis.pptxPrashantYadav931011
 
Large Scale Data Mining using Genetics-Based Machine Learning
Large Scale Data Mining using Genetics-Based Machine LearningLarge Scale Data Mining using Genetics-Based Machine Learning
Large Scale Data Mining using Genetics-Based Machine Learningjaumebp
 
De carlo rizk 2010 icelw
De carlo rizk 2010 icelwDe carlo rizk 2010 icelw
De carlo rizk 2010 icelwTing Yuan, Ed.D.
 
ICELW Conference Slides
ICELW Conference SlidesICELW Conference Slides
ICELW Conference Slidestoolboc
 
Lecture-2 Applied ML .pptx
Lecture-2 Applied ML .pptxLecture-2 Applied ML .pptx
Lecture-2 Applied ML .pptxZainULABIDIN496386
 
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)theijes
 
Machinr Learning and artificial_Lect1.pdf
Machinr Learning and artificial_Lect1.pdfMachinr Learning and artificial_Lect1.pdf
Machinr Learning and artificial_Lect1.pdfSaketBansal9
 
Kaggle Gold Medal Case Study
Kaggle Gold Medal Case StudyKaggle Gold Medal Case Study
Kaggle Gold Medal Case StudyAlon Bochman, CFA
 
Weka bike rental
Weka bike rentalWeka bike rental
Weka bike rentalPratik Doshi
 
Model evaluation in the land of deep learning
Model evaluation in the land of deep learningModel evaluation in the land of deep learning
Model evaluation in the land of deep learningPramit Choudhary
 
Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)Dmitry Grapov
 
Intro to machine learning
Intro to machine learningIntro to machine learning
Intro to machine learningAkshay Kanchan
 

Ähnlich wie Wearable Computing - Part IV: Ensemble classifiers & Insight into ongoing research (20)

H2O World - Intro to Data Science with Erin Ledell
H2O World - Intro to Data Science with Erin LedellH2O World - Intro to Data Science with Erin Ledell
H2O World - Intro to Data Science with Erin Ledell
 
لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...
لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...
لموعد الإثنين 03 يناير 2022 143 مبادرة #تواصل_تطوير المحاضرة ال 143 من المباد...
 
Identifying and classifying unknown Network Disruption
Identifying and classifying unknown Network DisruptionIdentifying and classifying unknown Network Disruption
Identifying and classifying unknown Network Disruption
 
NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique
NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique
NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique
 
Data science lecture4_doaa_mohey
Data science lecture4_doaa_moheyData science lecture4_doaa_mohey
Data science lecture4_doaa_mohey
 
Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5
Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5
Decision treeinductionmethodsandtheirapplicationtobigdatafinal 5
 
Review of Algorithms for Crime Analysis & Prediction
Review of Algorithms for Crime Analysis & PredictionReview of Algorithms for Crime Analysis & Prediction
Review of Algorithms for Crime Analysis & Prediction
 
Machine Learning in the Financial Industry
Machine Learning in the Financial IndustryMachine Learning in the Financial Industry
Machine Learning in the Financial Industry
 
Data Science and Analysis.pptx
Data Science and Analysis.pptxData Science and Analysis.pptx
Data Science and Analysis.pptx
 
Large Scale Data Mining using Genetics-Based Machine Learning
Large Scale Data Mining using Genetics-Based Machine LearningLarge Scale Data Mining using Genetics-Based Machine Learning
Large Scale Data Mining using Genetics-Based Machine Learning
 
De carlo rizk 2010 icelw
De carlo rizk 2010 icelwDe carlo rizk 2010 icelw
De carlo rizk 2010 icelw
 
ICELW Conference Slides
ICELW Conference SlidesICELW Conference Slides
ICELW Conference Slides
 
Lecture-2 Applied ML .pptx
Lecture-2 Applied ML .pptxLecture-2 Applied ML .pptx
Lecture-2 Applied ML .pptx
 
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)
 
Machinr Learning and artificial_Lect1.pdf
Machinr Learning and artificial_Lect1.pdfMachinr Learning and artificial_Lect1.pdf
Machinr Learning and artificial_Lect1.pdf
 
Kaggle Gold Medal Case Study
Kaggle Gold Medal Case StudyKaggle Gold Medal Case Study
Kaggle Gold Medal Case Study
 
Weka bike rental
Weka bike rentalWeka bike rental
Weka bike rental
 
Model evaluation in the land of deep learning
Model evaluation in the land of deep learningModel evaluation in the land of deep learning
Model evaluation in the land of deep learning
 
Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)
 
Intro to machine learning
Intro to machine learningIntro to machine learning
Intro to machine learning
 

Mehr von Daniel Roggen

Embedded Sensing and Computational Behaviour Science
Embedded Sensing and Computational Behaviour ScienceEmbedded Sensing and Computational Behaviour Science
Embedded Sensing and Computational Behaviour ScienceDaniel Roggen
 
W10: Laboratory
W10: LaboratoryW10: Laboratory
W10: LaboratoryDaniel Roggen
 
W10: Interrupts
W10: InterruptsW10: Interrupts
W10: InterruptsDaniel Roggen
 
W9: Laboratory 2
W9: Laboratory 2W9: Laboratory 2
W9: Laboratory 2Daniel Roggen
 
W9_3: UoS Educational Processor: Assembler Memory Access
W9_3: UoS Educational Processor: Assembler Memory AccessW9_3: UoS Educational Processor: Assembler Memory Access
W9_3: UoS Educational Processor: Assembler Memory AccessDaniel Roggen
 
W9_2: Jumps and loops
W9_2: Jumps and loopsW9_2: Jumps and loops
W9_2: Jumps and loopsDaniel Roggen
 
W8: Laboratory 1
W8: Laboratory 1W8: Laboratory 1
W8: Laboratory 1Daniel Roggen
 
W8_2: Inside the UoS Educational Processor
W8_2: Inside the UoS Educational ProcessorW8_2: Inside the UoS Educational Processor
W8_2: Inside the UoS Educational ProcessorDaniel Roggen
 
W8_1: Intro to UoS Educational Processor
W8_1: Intro to UoS Educational ProcessorW8_1: Intro to UoS Educational Processor
W8_1: Intro to UoS Educational ProcessorDaniel Roggen
 
Wearable Computing - Part II: Sensors
Wearable Computing - Part II: SensorsWearable Computing - Part II: Sensors
Wearable Computing - Part II: SensorsDaniel Roggen
 
Wearable Computing - Part I: What is Wearable Computing?
Wearable Computing - Part I: What is Wearable Computing?Wearable Computing - Part I: What is Wearable Computing?
Wearable Computing - Part I: What is Wearable Computing?Daniel Roggen
 

Mehr von Daniel Roggen (11)

Embedded Sensing and Computational Behaviour Science
Embedded Sensing and Computational Behaviour ScienceEmbedded Sensing and Computational Behaviour Science
Embedded Sensing and Computational Behaviour Science
 
W10: Laboratory
W10: LaboratoryW10: Laboratory
W10: Laboratory
 
W10: Interrupts
W10: InterruptsW10: Interrupts
W10: Interrupts
 
W9: Laboratory 2
W9: Laboratory 2W9: Laboratory 2
W9: Laboratory 2
 
W9_3: UoS Educational Processor: Assembler Memory Access
W9_3: UoS Educational Processor: Assembler Memory AccessW9_3: UoS Educational Processor: Assembler Memory Access
W9_3: UoS Educational Processor: Assembler Memory Access
 
W9_2: Jumps and loops
W9_2: Jumps and loopsW9_2: Jumps and loops
W9_2: Jumps and loops
 
W8: Laboratory 1
W8: Laboratory 1W8: Laboratory 1
W8: Laboratory 1
 
W8_2: Inside the UoS Educational Processor
W8_2: Inside the UoS Educational ProcessorW8_2: Inside the UoS Educational Processor
W8_2: Inside the UoS Educational Processor
 
W8_1: Intro to UoS Educational Processor
W8_1: Intro to UoS Educational ProcessorW8_1: Intro to UoS Educational Processor
W8_1: Intro to UoS Educational Processor
 
Wearable Computing - Part II: Sensors
Wearable Computing - Part II: SensorsWearable Computing - Part II: Sensors
Wearable Computing - Part II: Sensors
 
Wearable Computing - Part I: What is Wearable Computing?
Wearable Computing - Part I: What is Wearable Computing?Wearable Computing - Part I: What is Wearable Computing?
Wearable Computing - Part I: What is Wearable Computing?
 

KĂźrzlich hochgeladen

The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAndikSusilo4
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 

KĂźrzlich hochgeladen (20)

The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 

Wearable Computing - Part IV: Ensemble classifiers & Insight into ongoing research

  • 1. Daniel Roggen, 2011. Wearable Computing, Part IV: Ensemble classifiers. Insight into ongoing research.
  • 2. © Daniel Roggen www.danielroggen.net droggen@gmail.com
    [Diagram: the activity recognition chain. Sensor sampling, preprocessing, segmentation, feature extraction, classification, decision fusion and null-class rejection (subsymbolic processing), then reasoning over low-level and high-level activity models (symbolic processing). Low-level activity models (primitives) are trained at design time from sensor data and annotations; at runtime (recognition phase) the chain outputs activities (A1, p1, t1), (A2, p2, t2), … to an activity-aware application.]
  • 3. Many classifiers: ensemble classifiers
    – What are they?
    – How are ensembles generated?
    – What are they useful for in wearable computing?
  • 4. What are ensemble classifiers?
    [Diagram: several classifiers are trained on the labelled data set {(X1,y1),(X2,y2)…(Xn,yn)} and their individual decisions are combined by decision fusion.]
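The decision-fusion idea on this slide can be sketched in a few lines of Python. This is an illustrative sketch, not code from the slides; the `Stub` class stands in for real trained classifiers:

```python
from collections import Counter

class VotingEnsemble:
    """Fuse member decisions by plurality voting (illustrative sketch)."""
    def __init__(self, members):
        self.members = members  # trained classifiers exposing .predict(x)

    def predict(self, x):
        votes = [m.predict(x) for m in self.members]
        # most_common(1) returns the label with the highest vote count
        return Counter(votes).most_common(1)[0][0]

class Stub:
    """Stand-in for a trained classifier; always outputs a fixed label."""
    def __init__(self, label):
        self.label = label
    def predict(self, x):
        return self.label

# The 5-classifier example from slide 13: decisions 0, 1, 0, 2, 3
ensemble = VotingEnsemble([Stub(0), Stub(1), Stub(0), Stub(2), Stub(3)])
print(ensemble.predict(None))  # plurality label: 0 (2 of 5 votes)
```

Any real classifier with a `predict` method could be dropped in as a member; the fusion rule itself is independent of the member type.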
  • 5. Why?
    • Intuitively: to increase the confidence in the decision taken
      – Seek an additional opinion before making a decision
      – Read multiple product reviews
      – Request references before hiring someone
  • 6. Background
    • 1786: Condorcet’s jury theorem
      – Probability of a group of individuals arriving at a correct decision
      – Each individual votes correctly with probability p, incorrectly with probability 1−p
      – With p > 0.5, the more voters, the higher the probability that the majority decision is correct
      – «Theoretical basis for democracy»
    http://en.wikipedia.org/wiki/Condorcet_jury_theorem
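The jury theorem is easy to check numerically. A small sketch (a binomial sum, assuming independent voters and an odd n so ties cannot occur):

```python
from math import comb

def condorcet_majority_prob(p, n):
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct decision (n odd: no ties)."""
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p > 0.5, more voters means a more reliable majority decision:
for n in (1, 3, 11, 101):
    print(n, round(condorcet_majority_prob(0.6, n), 3))
```

Running this shows the majority probability climbing from 0.6 towards 1 as n grows; with p < 0.5 the same sum shows the opposite effect, the other half of Condorcet's theorem.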
  • 7. Also known as…
    • Combination of multiple classifiers
    • Classifier fusion
    • Classifier ensembles
    • Mixture of experts
    • Consensus aggregation
    • Composite classifier systems
    • Dynamic classifier selection
    • …
    Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems Magazine, 2006
  • 8. Why are classifier ensembles interesting?
    • Ruta: “Another approach [to progress in decision support systems] suggests that as the limits of the existing individual method are approached and it is hard to develop a better one, the solution of the problem might be just to combine existing well performing methods, hoping that better results will be achieved.”
    • Dietterich: “The main discovery is that ensembles are often much more accurate than the individual classifiers that make them up.”
    • Polikar: “If we had access to a classifier with perfect generalization performance, there would be no need to resort to ensemble techniques. The realities of noise, outliers and overlapping data distributions, however, make such a classifier an impossible proposition. At best, we can hope for classifiers that correctly classify the field data most of the time. The strategy in ensemble systems is therefore to create many classifiers, and combine their outputs such that the combination improves upon the performance of a single classifier.”
    Ruta et al., An overview of classifier fusion methods, Computing and Information Systems, 2000
    Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000
    Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems Magazine, 2006
  • 9. Motivation
    • Statistical: with insufficient data, many classifiers give the same accuracy on the training data; an ensemble of such ‘accurate’ classifiers reduces the risk of choosing the wrong one
    • Computational: even with enough training data it is computationally difficult to find the best classifier (local optima); an ensemble constructed from different starting points better approximates the true f
    • Representational: the true f cannot be represented by any of the classifiers in H; a combination of multiple classifiers expands the set of representable functions
    • Dietterich: “These three fundamental issues are the three most important ways in which existing learning algorithms fail. Hence, ensemble methods have the promise of reducing (and perhaps even eliminating) these three key shortcomings of standard learning algorithms.”
    Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000
  • 10. Motivation
    • Statistical reasons
      – Good performance on the training set does not guarantee generalization
      – Combining classifiers reduces the risk of selecting a poorly performing one
    • Large volume of data
      – Training classifiers with large amounts of data can be impractical
      – Partition the data into smaller subsets and train/combine specialized classifiers
    • Too little data
      – Resampling techniques; train different classifiers on (random) subsets
    • Data fusion
      – Multiple/multimodal sensors
      – For each modality a specific classifier is trained, then combined
    • Divide and conquer
      – Decision boundary too complex for a single classifier
      – Approximate the complex decision boundary by multiple classifiers
    Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems Magazine, 2006
  • 11. Divide and conquer [figure]
    Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems Magazine, 2006
  • 12. Classifier selection / classifier fusion
    • Classifier selection: use an expert in a local area of the feature space
    • Classifier fusion: merge individual (weaker) learners to obtain a single (stronger) learner
  • 13. The diversity problem
    • Classifiers must (in a fused sense) agree on the right decision
    • When classifiers disagree, they must disagree differently
    • Example: 5 classifiers, majority voting; decisions h0: 0, h1: 1, h2: 0, h3: 2, h4: 3
    • Classifiers are diverse if they make different errors on data points
    • A strategy for ensemble generation must find diverse classifiers
  • 14. Measuring diversity
    • A good diversity measure should relate to the ensemble accuracy
    • There is no strict definition of ‘diversity’; it is an active area of research
    • For two classifiers: measures exist in the statistical literature
    • For three or more classifiers: no consensus
    Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems Magazine, 2006
    Kuncheva, Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy, Machine Learning, 2003
  • 15. Measuring diversity: pair-wise measures
    • Average of all pair-wise diversity measures
    • Q-statistics
    • Correlation
    • Disagreement, double fault
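For a pair of classifiers, the Q-statistic is computed from the four counts of joint correct/incorrect decisions over a data set. A minimal sketch (not from the slides; boolean correctness vectors are assumed as input):

```python
def q_statistic(correct_i, correct_j):
    """Yule's Q for two classifiers. Inputs are boolean sequences:
    True where the classifier was correct on that sample.
    Q ranges over [-1, 1]; independent classifiers give Q = 0."""
    # N_ab: number of samples where i was correct (a) and j was correct (b)
    n11 = sum(a and b for a, b in zip(correct_i, correct_j))
    n00 = sum(not a and not b for a, b in zip(correct_i, correct_j))
    n10 = sum(a and not b for a, b in zip(correct_i, correct_j))
    n01 = sum(not a and b for a, b in zip(correct_i, correct_j))
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0

print(q_statistic([True, True, False, False], [True, True, False, False]))  # 1.0 (identical errors)
print(q_statistic([True, False, True, False], [False, True, False, True]))  # -1.0 (opposite errors)
```

Classifiers that fail on the same samples push Q towards 1; classifiers that fail on different samples, the diverse case the previous slides call for, push Q towards −1.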
  • 16. Measuring diversity: summary
    • No diversity measure consistently correlates with higher accuracy
    • “Although a rough tendency was confirmed… no prominent links appeared between the diversity of the ensemble and its accuracy. Diversity alone is a poor predictor of the ensemble accuracy.” [1]
    • “Although there are proven connections between diversity and accuracy in some special cases, our results raise some doubts about the usefulness of diversity measures in building classifier ensembles in real-life pattern recognition problems.” [2]
    [1] Kuncheva, That Elusive Diversity in Classifier Ensembles, IbPRIA, 2003
    [2] Kuncheva, Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy, Machine Learning, 2003
  • 17. Measuring diversity: summary
    • In the absence of additional information, Q may be recommended
      – Simple implementation
      – Bounded: [−1, 1]
      – Value under independence: 0
    Kuncheva, Is Independence Good for Combining Classifiers?, Proc. Int. Conf. Pattern Recognition, 2000
  • 18. How to obtain diversity Strategies for ensemble generation 1. Enumerating the hypotheses 2. Manipulating the training examples 3. Manipulating the input features 4. Manipulating the output targets 5. Injecting randomness Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000 Brown, Yao, Diversity creation methods: a survey and categorisation, Information Fusion, 2005
  • 19. Strategy for ensemble generation (1) Manipulating the training examples • Learning algorithm run multiple times on different training subsets • Suited for unstable classifiers – decision trees, neural networks, … – (Stable: linear regression, nearest neighbor, linear threshold) • Methods: – Bagging: randomly draw samples with replacement from the training set – Cross-validation: leave out disjoint subsets from training – Boosting: draw samples with higher likelihood for difficult samples
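Bagging, the first method listed, can be sketched as follows (a hypothetical Python sketch; `bagging_sets` and the toy data are made-up names). Each ensemble member gets its own bootstrap sample, drawn with replacement and the same size as the original training set, and is then trained on that sample:

```python
import random

def bagging_sets(training_set, n_classifiers, seed=42):
    """Draw one bootstrap sample per ensemble member: sampling with
    replacement, each sample the same size as the training set."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [[rng.choice(training_set) for _ in training_set]
            for _ in range(n_classifiers)]

data = list(range(10))          # stand-in for (X, y) training pairs
bags = bagging_sets(data, 5)
print(len(bags), len(bags[0]))  # 5 10
```

Because each bag omits roughly a third of the training points and duplicates others, unstable learners trained on different bags end up with different decision boundaries, which is the source of diversity here.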
  • 20. Strategy for ensemble generation (1) Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
  • 21. Strategy for ensemble generation (2) Manipulating the input features • Change the set of input features available to the learning algorithm • E.g. select/group features according to identical sensors • Input features need to be redundant • Input decimated ensembles [1] [1] Tumer, Oza, Input decimated ensembles, Pattern Anal Applic, 2003 Ho, The Random Subspace Method for Constructing Decision Forests, IEEE PAMI, 1998
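A minimal sketch of this strategy in the spirit of the random subspace method cited above (illustrative code, not the paper's implementation; Python and all names are assumptions): each ensemble member is assigned a random subset of the feature indices and would then be trained only on those columns.

```python
import random

def random_subspaces(n_features, n_classifiers, subspace_size, seed=0):
    """Pick one random feature subset per ensemble member; each member's
    classifier only ever sees its own subset of the input features."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [sorted(rng.sample(range(n_features), subspace_size))
            for _ in range(n_classifiers)]

# E.g. 20 features from several body-worn sensors, 4 members, 8 features each.
subspaces = random_subspaces(n_features=20, n_classifiers=4, subspace_size=8)
print(subspaces)
```

Grouping features per sensor, as the slide suggests, would replace the random draw with fixed, sensor-aligned index sets; the redundancy requirement is what lets a member still classify reasonably well from its partial view.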
  • 22. Strategy for ensemble generation (3) Manipulating the output targets • Classification: {(X1,y1),(X2,y2)…(Xn,yn)} • Change the classification problem by changing y • Error correcting codes – Change from one classifier with K classes to log2(K) two-class classifiers
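The log2(K) reduction can be illustrated with a plain binary encoding of the class labels (a simplified sketch, not a full error-correcting code; Python and the function name are assumptions). Each bit position defines one two-class problem handled by one binary classifier:

```python
import math

def binary_code(k_classes):
    """Encode each of K class labels with ceil(log2(K)) bits; bit b of the
    code for class c is the target of binary classifier number b."""
    n_bits = max(1, math.ceil(math.log2(k_classes)))
    return {c: [(c >> b) & 1 for b in range(n_bits)] for c in range(k_classes)}

codes = binary_code(4)  # 4 classes -> 2 binary classifiers
print(codes)            # {0: [0, 0], 1: [1, 0], 2: [0, 1], 3: [1, 1]}
```

A proper error-correcting output code would use more than ceil(log2(K)) classifiers, so that the redundant bits allow some individual classifier errors to be corrected at decoding time.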
  • 23. Strategy for ensemble generation (4) Injecting randomness • Randomness in the learning algorithm • E.g. – initial weights of a neural network – initial parameters of an HMM – C4.5: random selection among the N best decision tree splits
  • 24. How to combine the classifiers? Ruta et al., An overview of classifier fusion methods, Computing and Information Systems, 2000
  • 25. How to combine the classifiers? • (Weighted) majority voting – Class label output – Select the class most voted for • Mean rule – Continuous output – Support for class wj is the average of the classifier outputs • Product rule – Continuous output – Product of the classifier outputs
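The mean and product rules can be sketched directly from their definitions (illustrative Python, assuming each classifier outputs one continuous support value per class; the function names are made up):

```python
def mean_rule(outputs):
    """Average the per-class supports across classifiers and pick the
    class with the highest average support."""
    n = len(outputs)
    support = [sum(o[c] for o in outputs) / n
               for c in range(len(outputs[0]))]
    return max(range(len(support)), key=support.__getitem__)

def product_rule(outputs):
    """Multiply per-class supports across classifiers (suited when the
    outputs are posterior probabilities) and pick the highest product."""
    support = []
    for c in range(len(outputs[0])):
        p = 1.0
        for o in outputs:
            p *= o[c]
        support.append(p)
    return max(range(len(support)), key=support.__getitem__)

# Three classifiers, three classes; each row is one classifier's supports.
outs = [[0.6, 0.3, 0.1],
        [0.4, 0.5, 0.1],
        [0.7, 0.2, 0.1]]
print(mean_rule(outs), product_rule(outs))  # 0 0
```

Note how the product rule is harsh on vetoes: a single classifier assigning a class near-zero support all but eliminates that class, which is why it presupposes well-calibrated posterior estimates.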
  • 26. Which method is better? • No free lunch: problem dependent • Ensemble generation – Boosting vs. bagging: boosting usually achieves better generalization but is more sensitive to noise and outliers • Ensemble combination – General case: mean rule, with consistent performance on a broad range of problems – Reliable estimate of classifier accuracy available: weighted average, weighted majority – Classifiers output posterior probabilities: product rule Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006
  • 27. Which method is better? • Ensemble combination – No information on the classifier error distribution: median • Always leads to Pe → 0, even with heavy-tailed distributions – Error distribution less heavy-tailed: mean – For technical reasons (e.g. communication in WSN) majority vote may be the only one that can be implemented • The performance of the majority vote strategy coincides with the performance of the median strategy Cabrera, On the impact of fusion strategies on classification errors for large ensembles of classifiers, Pattern recognition, 2006
  • 28. In wearable computing Classifier fusion • Multimodal sensors & NULL class rejection • Sound • Acceleration • NULL class when the sound and acceleration classifications disagree Ward, Gesture Spotting Using Wrist Worn Microphone and 3-Axis Accelerometer, Proc. Joint Conf on Smart objects and ambient intelligence, 2005
  • 29. In wearable computing Zappi, Roggen et al. Activity recognition from on-body sensors: accuracy-power trade-off by dynamic sensor selection. EWSN, 2008.
  • 30. In wearable computing Classifier fusion • Sensor scalability [2] – Application-defined performance – Clustering • Robustness to faults [1] – Graceful degradation – Implicit fault-tolerance [1] Zappi, Stiefmeier, Farella, Roggen, Benini, Tröster, Activity Recognition from On-Body Sensors by Classifier Fusion: Sensor Scalability and Robustness, ISSNIP 07 [2] Zappi, Lombriser, Stiefmeier, Farella, Roggen, Benini, Tröster, Activity recognition from on-body sensors: accuracy-power trade-off by dynamic sensor selection, EWSN 08
  • 31. In wearable computing Classifier fusion Power-performance management [1] [1] Zappi, Roggen et al., Network-level power-performance trade-off in wearable activity recognition: a dynamic sensor selection approach, submitted to ACM Trans. Embedded Computing Systems
  • 32. In wearable computing Classifier selection • Select 'expert' classifier for location class 1 • Select 'expert' classifier for location class 2 Stiefmeier, Combining Motion Sensors and Ultrasonic Hands Tracking for Continuous Activity Recognition in a Maintenance Scenario
  • 33. Further applications • Classification despite missing features – "A bootstrap-based method can provide an alternative approach to the missing data problem by generating an ensemble of classifiers, each trained with a random subset of the features." [1] – "Strikingly the reduced-models approach, seldom mentioned or used, consistently outperforms the other two [imputation] methods, sometimes by a large margin." [2] • E.g.: – Long term multimodal activity recognition – Physiological signal assessment – Opportunistic activity recognition [1] Polikar, Bootstrap-inspired techniques in computational intelligence, IEEE Signal Processing Magazine, 2007 [2] Provost, Handling Missing Values when Applying Classification Models, Machine Learning Research, 2007
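The missing-feature idea quoted above can be caricatured in a few lines (a hypothetical sketch, not the method of [1]; Python and all names are invented): train each ensemble member on its own feature subset, and at run time let only the members whose features are all present cast a vote.

```python
from collections import Counter

def vote_with_missing(members, sample):
    """members: list of (feature_subset, predict_fn) pairs.
    sample: dict mapping feature name -> value; features may be missing
    (e.g. a sensor dropped out). Only members whose entire feature subset
    is available vote; returns None if no member can vote."""
    votes = [predict({f: sample[f] for f in subset})
             for subset, predict in members
             if all(f in sample for f in subset)]
    return Counter(votes).most_common(1)[0][0] if votes else None

# Toy members: each "classifier" just thresholds its sensor channels.
members = [
    (("acc",),        lambda x: int(x["acc"] > 0.5)),
    (("gyro",),       lambda x: int(x["gyro"] > 0.5)),
    (("acc", "gyro"), lambda x: int(x["acc"] + x["gyro"] > 1.0)),
]
print(vote_with_missing(members, {"acc": 0.9}))  # gyro missing: one member votes
```

This is the "reduced models" flavor praised in [2]: no imputation is attempted, the ensemble simply degrades gracefully to the members that can still operate.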
  • 34. Further applications • Enhanced robustness in activity recognition – Typically small datasets: are we using the optimal decision boundary for field deployment? – Ensembles of classifiers trained with resampling – Ensembles have different field generalization performance • Confidence estimation/QoC – Continuous valued output of ensemble classifiers can estimate posterior probability [1] • WSN – "classifiers using data from different sensors are usually uncorrelated to a far greater degree than classifiers which use data from the same sensor" [2] – Distributed activity recognition (Tiny Task Network): only the classification result is required, lower bandwidth [1] Muhlbaier, Polikar, Ensemble confidence estimates posterior probability, Int. Workshop on Multiple Classifier Systems, 2005 [2] Fumera, Roli, A theoretical and experimental analysis of linear combiners for multiple classifier systems, IEEE Trans. Pattern Anal. Mach. Intell., 2005
  • 35. Reasons not to use ensembles • A classifier with perfect or good generalization performance is already available • Decreased comprehensibility • Limited storage and computational resources • Correlated errors, or uncorrelated errors at a rate higher than chance
  • 36. Summary • Large body of research showing benefits of ensembles • Some ensemble classifiers already in use in wearable computing • Potentials: missing features, confidence/QoC, improved robustness, WSN • Active field of research
  • 37. Further reading Reviews, books • Ruta et al., An overview of classifier fusion methods, Computing and Information Systems, 2000 • Dietterich, Ensemble methods in machine learning, Proc. Multiple Classifier Systems, 2000 • Polikar, Ensemble based systems in decision making, IEEE Circuits and Systems magazine, 2006 • Polikar, Bootstrap-inspired techniques in computational intelligence, IEEE Signal Processing Magazine, 2007 • Kuncheva, Combining Pattern Classifiers, Methods and Algorithms, Wiley, 2005 Diversity • Kuncheva, Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy, Machine Learning, 2003 • Brown, Yao, Diversity creation methods: a survey and categorisation, Information Fusion, 2005 Decimation • Tumer, Oza, Input decimated ensembles, Pattern Anal Applic, 2003 • Ho, The Random Subspace Method for Constructing Decision Forests, IEEE PAMI, 1998 Confidence • Muhlbaier, Polikar, Ensemble confidence estimates posterior probability, Int. Workshop on Multiple Classifier Systems, 2005 • Tourassi, Reliability Assessment of Ensemble Classifiers: Application in Mammography Missing features • Provost, Handling Missing Values when Applying Classification Models, Machine Learning Research, 2007 Conferences • Proc. Workshop Multiple Classifier Systems (Springer) Various • Cabrera, On the impact of fusion strategies on classification errors for large ensembles of classifiers, Pattern recognition, 2006 • Fumera, Roli, A theoretical and experimental analysis of linear combiners for multiple classifier systems, IEEE Trans. Pattern Anal. Mach. Intell., 2005
  • 38. Multiplication of sensors in real-world use
  • 39. http://www.opportunity-project.eu
  • 40. Opportunistic activity recognition • Activity recognition with sensors that just happen to be available • Designing a pattern recognition system without knowing the input space!
  • 41. The OPPORTUNITY activity recognition chain
  • 42. WP4 Ad-hoc cooperative sensing OPPORTUNITY Architecture, Recognition goal, Self-* principles • Specify what should be recognized but not how – E.g.: « Detect grasping manipulative activities with wearable sensors » • Self-organization in a coordinated sensing mission – E.g.: « Recognition of manipulative activities » calls for sensors capable of providing movement information, placed on the body, to network together • Sensor self-description (statically known characteristics)
  • 43. WP1 Sensor and features Filter variations • Conditioning: re-define features to make them less sensitive to variations – E.g. use the magnitude of the acceleration signal rather than the X,Y,Z vector • Abstraction: different modalities map to the same feature space – E.g. hand coordinates from inertial sensors or a localization system • Self-characterization: run-time characteristics – E.g. location, orientation
  • 44. WP2: Opportunistic classifiers Robust classification & allow for adaptation • Dynamic « ensemble classifier » architecture • Dynamic selection of the most informative information channel • Allow for multimodal data, changing sensor numbers • Allow for adaptation (Figure: sensor0…sensorn feed classifier0…classifiern; their outputs c0…cn are fused into the class of the user's gesture)
  • 45. WP3 Dynamic adaptation and autonomous evolution Run-time monitoring and adaptation of the system • Adaptation to slow changes, long-term, concept drift – Sensor degradation, change in user action-motor strategies • Use new sensors – Sensing infrastructure changes with upgrades • Opportunistic user feedback – Explicit: e.g. feedback through keyboard – Implicit: e.g. from EEG signals
  • 46. Dynamic adaptation: power-performance management • Dynamic ensemble classifiers • Passively: ensemble classifiers allow for changes in the environment • Actively: benefit of dynamic adaptation Zappi et al., Network-level power-performance trade-off in wearable activity recognition: a dynamic sensor selection approach, to appear in ACM TECS
  • 47. Adaptation: Classifier self-calibration to sensor displacement • Principle: upon activity detection, classifiers are re-trained to better model the last classified activity • Calibration dynamics: class centers follow cluster displacement in feature space • Self-calibration to displaced sensors increases accuracy: by 33.3% in the HCI dataset, by 13.4% in the fitness dataset Förster, Roggen, Tröster, Unsupervised classifier self-calibration through repeated context occurences: is there robustness against sensor displacement to gain?, Proc. Int. Symposium Wearable Computers, 2009
  • 48. Adaptation: minimally user-supervised adaptation • Acceleration data • Recognized gesture • Error button Förster et al., Incremental kNN classifier exploiting correct - error teacher for activity recognition, submitted to ICMLA 2010
  • 49. Adaptation: minimally user-supervised adaptation • Adaptation leads to: • Higher accuracy in the adaptive case vs. control • Higher input rate • More "personalized" gestures Förster et al., Online user adaptation in gesture and activity recognition - what's the benefit? Tech Rep. Förster et al., Incremental kNN classifier exploiting correct - error teacher for activity recognition, submitted to ICMLA 2010
  • 50. Adaptation: with brain-signal feedback • ~9% accuracy increase with perfect brain signal recognition • ~3% accuracy increase with effective brain signal recognition accuracy • Adaptation guided by the user's own perception of the system • User in the loop Förster et al., On the use of brain decoded signals for online user adaptive gesture recognition systems, Pervasive 2010
  • 51. Using new sensors without supervision… • New sensors may be discovered – Infrastructure upgrades – Entering a new environment • Problem: How to use the sensor without self-*? – Typical in open-ended environments – Hard to predict what future sensors will be deployed • Unsupervised approaches to use new sensors!
  • 52. Using new sensors without supervision… … using behavioral assumptions • Can a reed switch recognize different gestures and modes of locomotion? • Extract maximum information content from simple sensors – Use behavioral assumptions
  • 53. Using new sensors without supervision… … using behavioral assumptions
  • 54. Application to Opportunity Dataset • Functionality of a wearable sensor is learned incrementally • Autonomous training of wearable systems • Only needed: sporadic interactions with the environment • Applicable in WSN/AmI as demonstrated by a hardware implementation Calatroni et al. Context Cells: Towards Lifelong Learning in Activity Recognition Systems, EuroSSC 2009
  • 55. Transfer of recognition capabilities • A system designed for domain 1 should work in domain 2 • Changes of sensors between setup 1 and 2 Roggen et al., Wearable Computing: Designing and Sharing Activity-Recognition Systems Across Platforms, IEEE Robotics & Automation Magazine, 2011
  • 56. Summary • Improving wearability & user-acceptance • Addressing real-world deployment issues • Enabling large-scale Ambient Intelligence environments www.opportunity-project.eu EC grant n° 225938

Editor's Notes

  1. The outcome of OPPORTUNITY is to demonstrate that robust activity recognition can be performed despite the usual variability in sensor placement and orientation typical of sensors placed on-body and/or integrated into clothing, mobile devices, or the environment. This natural variability is nowadays a challenge to state-of-the-art approaches.