Data Exploration and Preparation
Machine Learning and Data Mining (Unit 15)




Prof. Pier Luca Lanzi
References                                    2



  Jiawei Han and Micheline Kamber, "Data Mining: Concepts and
  Techniques", The Morgan Kaufmann Series in Data Management
  Systems, Second Edition
  Tom M. Mitchell, "Machine Learning", McGraw Hill, 1997
  Pang-Ning Tan, Michael Steinbach, Vipin Kumar,
  "Introduction to Data Mining", Addison Wesley
  Ian H. Witten, Eibe Frank, "Data Mining: Practical Machine
  Learning Tools and Techniques with Java Implementations",
  2nd Edition




Outline                                    3



  Data Exploration
     Types of data
     Descriptive statistics
     Visualization

  Data Preprocessing
     Aggregation
     Sampling
     Dimensionality Reduction
     Feature creation
     Discretization
     Attribute Transformation
     Compression
     Concept hierarchies



Types of data sets                       4



  Record
     Data Matrix
     Document Data
     Transaction Data
  Graph
     World Wide Web
     Molecular Structures
  Ordered
     Spatial Data
     Temporal Data
     Sequential Data
     Genetic Sequence Data




Important Characteristics of Structured Data              5

  Dimensionality
     Curse of Dimensionality

  Sparsity
     Only presence counts

  Resolution
     Patterns depend on the scale




Record Data                                               6



  Data that consists of a collection of records, each of which
  consists of a fixed set of attributes



                 Tid  Refund  Marital Status  Taxable Income  Cheat
                 1    Yes     Single          125K            No
                 2    No      Married         100K            No
                 3    No      Single          70K             No
                 4    Yes     Married         120K            No
                 5    No      Divorced        95K             Yes
                 6    No      Married         60K             No
                 7    Yes     Divorced        220K            No
                 8    No      Single          85K             Yes
                 9    No      Married         75K             No
                 10   No      Single          90K             Yes




Data Matrix                                                     7



  If data objects have the same fixed set of numeric attributes, then
  the data objects can be thought of as points in a multi-dimensional
  space, where each dimension represents a distinct attribute

  Such data set can be represented by an m by n matrix, where there
  are m rows, one for each object, and n columns, one for each
  attribute




    Projection of x Load  Projection of y Load  Distance  Load  Thickness

    10.23                 5.27                  15.22     2.7   1.2
    12.65                 6.25                  16.22     2.2   1.1


Document Data                                                                 8



  Each document becomes a `term' vector,
     each term is a component (attribute) of the vector,
     the value of each component is the number of times the
     corresponding term occurs in the document.




                    team  coach  play  ball  score  game  win  lost  timeout  season

       Document 1   3     0      5     0     2      6     0    2     0        2
       Document 2   0     7      0     2     1      0     0    3     0        0
       Document 3   0     1      0     0     1      2     2    0     3        0
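A term vector like the rows above can be built with a simple word count. A minimal sketch, assuming whitespace tokenization and a fixed vocabulary (both illustrative, not from the slides):

```python
from collections import Counter

def term_vector(document, vocabulary):
    """Count how often each vocabulary term occurs in a whitespace-tokenized document."""
    counts = Counter(document.lower().split())
    return [counts[term] for term in vocabulary]

vocab = ["team", "coach", "play", "ball", "score"]
doc = "the team and the coach agreed the team would play"
print(term_vector(doc, vocab))  # [2, 1, 1, 0, 0]
```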




Transaction Data                                 9



  A special type of record data, where
     each record (transaction) involves a set of items.
     For example, consider a grocery store. The set of
     products purchased by a customer during one shopping
     trip constitute a transaction, while the individual products
     that were purchased are the items.



          TID   Items

          1     Bread, Coke, Milk
          2     Beer, Bread
          3     Beer, Coke, Diaper, Milk
          4     Beer, Bread, Diaper, Milk
          5     Coke, Diaper, Milk


Graph Data: Generic Graph and HTML Links                    10




     (Figure: a generic graph with numeric edge labels)

     <a href="papers/papers.html#bbbb">
     Data Mining </a>
     <li>
     <a href="papers/papers.html#aaaa">
     Graph Partitioning </a>
     <li>
     <a href="papers/papers.html#aaaa">
     Parallel Solution of Sparse Linear
     System of Equations </a>
     <li>
     <a href="papers/papers.html#ffff">
     N-Body Computation and Dense Linear
     System Solvers </a>




Chemical Data                            11



  Benzene Molecule: C6H6




Ordered Data                               12



  Sequences of transactions


               (Figure: a timeline of transactions; each element of
               the sequence is a set of items/events)

Ordered Data                            13



  Genomic sequence data



       GGTTCCGCCTTCAGCCCCGCGCC
       CGCAGGGCCCGCCCCGCGCCGTC
       GAGAAGGGCCCGCCTGGCGGGCG
       GGGGGAGGCGGGGCCGCCCGAGC
       CCAACCGAGTCCGACCAGGTGCC
       CCCTCTGCTCGGCCTAGACCTGA
       GCTCATTAGGCGGCAGCGGACAG
       GCCAAGTAGAACACGCGAAGCGC
       TGGGCTGCCTGCTGCGACCAGGG




Ordered Data                                14



  Spatio-Temporal Data




  Average Monthly
  Temperature of
  land and ocean




What is data exploration?                            15



  A preliminary exploration of the data to better understand its
  characteristics.

  Key motivations of data exploration include
     Helping to select the right tool for preprocessing or analysis
     Making use of humans’ abilities to recognize patterns
     People can recognize patterns not captured by data analysis
     tools

  Related to the area of Exploratory Data Analysis (EDA)
     Created by statistician John Tukey
     Seminal book is Exploratory Data Analysis by Tukey
     A nice online introduction can be found in Chapter 1 of the NIST
     Engineering Statistics Handbook
     http://www.itl.nist.gov/div898/handbook/index.htm



Techniques Used In Data Exploration     16



  In EDA, as originally defined by Tukey
     The focus was on visualization
     Clustering and anomaly detection were viewed
     as exploratory techniques
     In data mining, clustering and anomaly
     detection are major areas of interest, and not
     thought of as just exploratory

  In our discussion of data exploration, we focus on
     Summary statistics
     Visualization
     Online Analytical Processing (OLAP)



Iris Sample Data Set                                            17



  Many of the exploratory data techniques are illustrated with the Iris
  Plant data set http://www.ics.uci.edu/~mlearn/MLRepository.html
  From the statistician R. A. Fisher
     Three flower types (classes): Setosa, Virginica, Versicolour
     Four (non-class) attributes, sepal width and length,
     petal width and length




                                             Virginica. Robert H. Mohlenbrock. USDA
                                             NRCS. 1995. Northeast wetland flora: Field
                                             office guide to plant species. Northeast National
                                             Technical Center, Chester, PA. Courtesy of
                                             USDA NRCS Wetland Science Institute.

Mining Data Descriptive Characteristics             18



  Motivation
     To better understand the data:
     central tendency, variation and spread

  Data dispersion characteristics
     median, max, min, quantiles, outliers, variance, etc.

  Numerical dimensions correspond to sorted intervals
    Data dispersion: analyzed with multiple granularities of precision
    Boxplot or quantile analysis on sorted intervals

  Dispersion analysis on computed measures
     Folding measures into numerical dimensions
     Boxplot or quantile analysis on the transformed cube




Summary Statistics                             19



  They are numbers that summarize properties of the data

  Summarized properties include frequency, location and
  spread

  Examples
     Location, mean
     Spread, standard deviation

  Most summary statistics can be calculated in a single pass
  through the data




Frequency and Mode                             20



  The frequency of an attribute value is the percentage of time
  the value occurs in the data set

  For example, given the attribute ‘gender’ and a
  representative population of people, the gender ‘female’
  occurs about 50% of the time.

  The mode of an attribute is the most frequent attribute
  value

  The notions of frequency and mode are typically used with
  categorical data
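A minimal sketch of frequency and mode for a categorical attribute (the sample values are invented for illustration):

```python
from collections import Counter

values = ["single", "married", "single", "divorced", "married", "single"]
counts = Counter(values)
mode, freq = counts.most_common(1)[0]  # most frequent value and its count
print(mode, freq / len(values))        # single 0.5
```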




Percentiles                                     21



  For continuous data, the notion of a percentile is more useful

  Given an ordinal or continuous attribute x and a number p
  between 0 and 100, the pth percentile is a value xp of x such
  that p% of the observed values of x are less than xp

  For instance, the 50th percentile is the value x50% such that
  50% of all values of x are less than x50%
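The 50th percentile is the familiar median; the standard library computes it directly (the sample data here is invented):

```python
import statistics

x = [2, 7, 1, 9, 4, 6, 3]
print(statistics.median(x))          # 4 — half the observed values lie below it
print(statistics.quantiles(x, n=4))  # quartiles: the 25th, 50th, and 75th percentiles
```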




Measures of Location: Mean and Median          22



  The mean is the most common measure of the location of a
  set of points

  However, the mean is very sensitive to outliers

  Thus, the median or a trimmed mean is also commonly used




Measures of Spread: Range and Variance           23



  Range is the difference between the max and min
  The variance or standard deviation is the most common
  measure of the spread of a set of points

     variance(x) = (1/(n-1)) Σᵢ (xᵢ − x̄)²

  However, this is also sensitive to outliers, so other
  measures are often used
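A short sketch contrasting these measures on invented data with one outlier, showing why the median is far less sensitive than the mean, range, and standard deviation:

```python
import statistics

x = [10, 12, 11, 13, 12, 95]  # 95 is an outlier
print(statistics.mean(x))     # 25.5 — dragged up by the outlier
print(statistics.median(x))   # 12.0 — barely affected
print(max(x) - min(x))        # 85  — the range is dominated by the outlier
print(statistics.stdev(x))    # sample standard deviation, also inflated
```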




Visualization                                    24



  Visualization is the conversion of data into a visual or tabular
  format so that the characteristics of the data and the
  relationships among data items or attributes can be analyzed
  or reported

  Visualization of data is one of the most powerful and
  appealing techniques for data exploration
     Humans have a well developed ability to analyze large
     amounts of information that is presented visually
     Can detect general patterns and trends
     Can detect outliers and unusual patterns




Example: Sea Surface Temperature                   25



  The following shows the Sea Surface Temperature for July 1982
  Tens of thousands of data points are summarized in a single figure




Representation                                  26



  Is the mapping of information to a visual format

  Data objects, their attributes, and the relationships among
  data objects are translated into graphical elements such as
  points, lines, shapes, and colors

  Example:
     Objects are often represented as points
     Their attribute values can be represented as the position
     of the points or the characteristics of the points, e.g.,
     color, size, and shape
     If position is used, then the relationships of points, i.e.,
     whether they form groups or a point is an outlier, is easily
     perceived.



Arrangement                                    27



  Is the placement of visual elements within a display
  Can make a large difference in how easy
  it is to understand the data
  Example:




Selection                                      28



  Is the elimination or the de-emphasis of
  certain objects and attributes

  Selection may involve choosing a subset of attributes
     Dimensionality reduction is often used to reduce the
     number of dimensions to two or three
     Alternatively, pairs of attributes can be considered

  Selection may also involve choosing a subset of objects
      A region of the screen can only show so many points
     Can sample, but want to preserve points in sparse areas




Visualization Techniques: Histograms          29



  Histogram
     Usually shows the distribution of values of a single
     variable
     Divide the values into bins and show a bar plot of the
     number of objects in each bin.
     The height of each bar indicates the number of objects
     Shape of histogram depends on the number of bins
  Example: Petal Width (10 and 20 bins, respectively)
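The binning step behind a histogram can be sketched in a few lines of pure Python (equal-width bins; the data is invented):

```python
def histogram(values, bins):
    """Equal-width binning: count how many values fall in each bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)  # clamp the max value into the last bin
        counts[i] += 1
    return counts

data = [0.1, 0.2, 0.2, 1.4, 1.5, 2.3, 2.4, 2.5, 2.5, 2.5]
print(histogram(data, 3))  # [3, 2, 5]
```

Changing `bins` changes the shape of the result, which is exactly the sensitivity the slide mentions.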




Two-Dimensional Histograms                      30



Show the joint distribution of the values of two attributes
Example: petal width and petal length
   What does this tell us?




Visualization Techniques: Box Plots               31



  Box Plots
    Invented by J. Tukey
    Another way of displaying the distribution of data
    Following figure shows the basic part of a box plot

                                outlier

                                90th percentile

                                75th percentile

                                50th percentile
                                25th percentile

                                10th percentile
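The whisker rule behind the box plot doubles as a simple outlier detector. A sketch using the standard library and the usual 1.5×IQR convention (data invented):

```python
import statistics

def iqr_outliers(values):
    """Flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], the usual box-plot rule."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

data = [12, 13, 14, 14, 15, 15, 16, 16, 17, 40]
print(iqr_outliers(data))  # [40]
```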




Example of Box Plots                          32




Box plots can be used to compare attributes




Visualization Techniques: Scatter Plots       33



  Attributes values determine the position
  Two-dimensional scatter plots most common, but can have
  three-dimensional scatter plots
  Often additional attributes can be displayed by using the
  size, shape, and color of the markers that represent the
  objects
  Arrays of scatter plots can compactly summarize
  the relationships of several pairs of attributes




Scatter Plot Array of Iris Attributes     34




Visualization Techniques: Contour Plots       35



  Useful when a continuous attribute is measured on a spatial
  grid
  They partition the plane into regions of similar values
  The contour lines that form the boundaries of these regions
  connect points with equal values
  The most common example is contour maps of elevation
  Can also display temperature, rainfall, air pressure, etc.




Contour Plot Example: SST Dec, 1998     36




                                             Celsius



Visualization Techniques: Matrix Plots           37




Can plot the data matrix

This can be useful when objects are sorted according to class

Typically, the attributes are normalized to prevent one
attribute from dominating the plot

Plots of similarity or distance matrices can also be useful for
visualizing the relationships between objects

Examples of matrix plots are presented on the next two slides




Visualization of the Iris Data Matrix     38




                                               (color scale: standard deviations)

Visualization of the Iris Correlation Matrix   39




Visualization Techniques: Parallel Coordinates   40

  Used to plot the attribute values of high-dimensional data
  Instead of using perpendicular axes, use a set of parallel
  axes
  The attribute values of each object are plotted as a point on
  each corresponding coordinate axis and the points are
  connected by a line
  Thus, each object is represented as a line
  Often, the lines representing a distinct class of objects group
  together, at least for some attributes
  Ordering of attributes is important in seeing such groupings




Parallel Coordinates Plots for Iris Data   41




Other Visualization Techniques                 42



  Star Plots
     Similar approach to parallel coordinates,
     but axes radiate from a central point
     The line connecting the values of an object is a polygon

  Chernoff Faces
    Approach created by Herman Chernoff
    This approach associates each attribute with a
    characteristic of a face
    The values of each attribute determine the appearance of
    the corresponding facial characteristic
    Each object becomes a separate face
    Relies on human’s ability to distinguish faces



Star Plots for Iris Data                      43




                                                   Setosa




                                                   Versicolour


                                                   Virginica




Chernoff Faces for Iris Data                 44




                                                  Setosa



                                                  Versicolour




                                                  Virginica




Why Data Preprocessing?                         45



  Data in the real world is dirty

  Incomplete: lacking attribute values, lacking certain
  attributes of interest, or containing only aggregate data
     e.g., occupation=“ ”
  Noisy: containing errors or outliers
     e.g., Salary=“-10”

  Inconsistent: containing discrepancies in codes or names
     e.g., Age=“42” Birthday=“03/07/1997”
     e.g., Was rating “1,2,3”, now rating “A, B, C”
     e.g., discrepancy between duplicate records




Why Is Data Dirty?                            46



  Incomplete data may come from
     “Not applicable” data value when collected
     Different considerations between the time when the data
     was collected and when it is analyzed.
     Human/hardware/software problems
  Noisy data (incorrect values) may come from
     Faulty data collection instruments
     Human or computer error at data entry
     Errors in data transmission
  Inconsistent data may come from
     Different data sources
     Functional dependency violation (e.g., modify some linked
     data)
  Duplicate records also need data cleaning


Why Is Data Preprocessing Important?            47



  No quality data, no quality mining results!

  Quality decisions must be based on quality data
    E.g., duplicate or missing data may cause incorrect or
    even misleading statistics.

  Data warehouse needs consistent integration of quality data

  Data extraction, cleaning, and transformation comprises the
  majority of the work of building a data warehouse




Data Quality                                  48



  What kinds of data quality problems?
  How can we detect problems with the data?
  What can we do about these problems?



  Examples of data quality problems:
     noise and outliers
     missing values
     duplicate data




Multi-Dimensional Measure of Data Quality        49

  A well-accepted multidimensional view:
     Accuracy
     Completeness
     Consistency
     Timeliness
     Believability
     Value added
     Interpretability
     Accessibility
  Broad categories:
     Intrinsic, contextual, representational, and accessibility




Noise                                                       50



  Noise refers to modification of original values
  Examples: distortion of a person’s voice when talking on a
  poor phone and “snow” on television screen




        Two Sine Waves                       Two Sine Waves + Noise


Outliers                                       51



  Outliers are data objects with characteristics that are
  considerably different than most of the other data objects in
  the data set




Missing Values                                 52



  Reasons for missing values
    Information is not collected
    (e.g., people decline to give their age and weight)
    Attributes may not be applicable to all cases
    (e.g., annual income is not applicable to children)

  Handling missing values
    Eliminate Data Objects
    Estimate Missing Values
    Ignore the Missing Value During Analysis
    Replace with all possible values (weighted by their
    probabilities)
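Mean imputation, one of the estimation strategies listed above, can be sketched as follows (None marks a missing value; data invented):

```python
def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

print(impute_mean([3.0, None, 5.0, 4.0, None]))  # [3.0, 4.0, 5.0, 4.0, 4.0]
```

Eliminating the objects or ignoring the value during analysis can be safer choices when a large fraction of the values is missing.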




Duplicate Data                                 53



  Data set may include data objects that are duplicates, or
  almost duplicates of one another

  A major issue when merging data from heterogeneous sources

  Examples: same person with multiple email addresses

  Data cleaning: process of dealing with duplicate data issues




Data Cleaning as a Process                                   54



  Data discrepancy detection
     Use metadata (e.g., domain, range, dependency, distribution)
     Check field overloading
     Check uniqueness rule, consecutive rule and null rule
     Use commercial tools
       • Data scrubbing: use simple domain knowledge (e.g., postal code,
         spell-check) to detect errors and make corrections
       • Data auditing: by analyzing data to discover rules and relationship
         to detect violators (e.g., correlation and clustering to find outliers)

  Data migration and integration
     Data migration tools: allow transformations to be specified
     ETL (Extraction/Transformation/Loading) tools: allow users to
     specify transformations through a graphical user interface

  Integration of the two processes
      Iterative and interactive (e.g., Potter's Wheel)




Data Preprocessing                         55



  Aggregation
  Sampling
  Dimensionality Reduction
  Feature subset selection
  Feature creation
  Discretization and Binarization
  Attribute Transformation




Aggregation                                        56



  Combining two or more attributes (or objects)
  into a single attribute (or object)

  Purpose
     Data reduction: reduce the number of attributes or objects
     Change of scale: cities aggregated into regions, states,
     countries, etc
     More “stable” data: aggregated data tends to have less
     variability
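A sketch of the change-of-scale idea: aggregating invented monthly precipitation records into yearly totals.

```python
from collections import defaultdict

# Toy monthly records: (year, month, precipitation); values are invented
monthly = [(1990, 1, 30.5), (1990, 2, 41.0), (1991, 1, 12.0), (1991, 2, 18.5)]

yearly = defaultdict(float)
for year, month, precip in monthly:
    yearly[year] += precip  # collapse months into a single yearly value

print(dict(yearly))  # {1990: 71.5, 1991: 30.5}
```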




Aggregation                                                     57




Variation of Precipitation in Australia




  Standard Deviation of Average                   Standard Deviation of Average
  Monthly Precipitation                           Yearly Precipitation


Sampling                                         58



  Sampling is the main technique employed for data selection

  It is often used for both the preliminary investigation of the
  data and the final data analysis.

  Statisticians sample because obtaining the entire set of data
  of interest is too expensive or time consuming

  Sampling is used in data mining because processing the
  entire set of data of interest is too expensive or time
  consuming




Key principles for Effective Sampling          59



  Using a sample will work almost as well as using the entire
  data sets, if the sample is representative

  A sample is representative if it has approximately the same
  property (of interest) as the original set of data




Types of Sampling                                    60



  Simple Random Sampling
     There is an equal probability of selecting any particular item

  Sampling without replacement
    As each item is selected, it is removed from the population

  Sampling with replacement
    Objects are not removed from the population as they are
    selected for the sample.
    In sampling with replacement, the same object can
    be picked up more than once

  Stratified sampling
     Split the data into several partitions
     Then draw random samples from each partition
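The three schemes can be sketched with the standard library (population, strata, and seed are all illustrative):

```python
import random

random.seed(0)  # seed only for reproducibility of the demo
population = list(range(100))

# Simple random sampling without replacement: no item can appear twice
without = random.sample(population, 10)

# Sampling with replacement: the same item may be picked more than once
with_repl = [random.choice(population) for _ in range(10)]

# Stratified sampling: draw the same number of items from each partition
strata = {"low": list(range(50)), "high": list(range(50, 100))}
stratified = [v for items in strata.values() for v in random.sample(items, 5)]

print(len(without), len(set(without)))  # 10 10 — no duplicates possible
```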



Sample Size                                 61




  (Figure: the same data set sampled at 8000, 2000, and 500 points)




Sample Size                                       62



  What sample size is necessary to get at least
  one object from each of 10 groups?
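This is the coupon-collector problem; a quick simulation estimates the answer (for 10 equally likely groups the expected number of draws is 10·(1 + 1/2 + … + 1/10) ≈ 29.3; group count and seed are illustrative):

```python
import random

def draws_until_all_groups(k, rng):
    """Draw uniformly among k groups until every group has been seen at least once."""
    seen, draws = set(), 0
    while len(seen) < k:
        seen.add(rng.randrange(k))
        draws += 1
    return draws

rng = random.Random(42)
trials = [draws_until_all_groups(10, rng) for _ in range(2000)]
print(sum(trials) / len(trials))  # close to 29.3 on average
```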




Curse of Dimensionality                                           63



  When dimensionality increases, data becomes increasingly
  sparse in the space that it occupies

  Definitions of density and distance between points, which
  are critical for clustering and outlier detection, become
  less meaningful




                                             (Figure: randomly generate 500 points and compute the
                                              difference between the max and min distance between
                                              any pair of points)



Dimensionality Reduction                       64



  Purpose:
     Avoid curse of dimensionality
     Reduce amount of time and memory required by data
     mining algorithms
     Allow data to be more easily visualized
     May help to eliminate irrelevant features or reduce noise

  Techniques
     Principal Component Analysis
     Singular Value Decomposition
     Others: supervised and non-linear techniques




Dimensionality Reduction: Principal Component Analysis (PCA)   65

  Given N data vectors from n-dimensions, find k ≤ n orthogonal
  vectors (principal components) that can be best used to represent
  data
  Steps
     Normalize input data: Each attribute falls within the same range
     Compute k orthonormal (unit) vectors, i.e., principal
     components
     Each input data (vector) is a linear combination of the k
     principal component vectors
     The principal components are sorted in order of decreasing
     “significance” or strength
      Since the components are sorted, the size of the data can be
      reduced by eliminating the weak components, i.e., those with
      low variance; using the strongest principal components, it is
      possible to reconstruct a good approximation of the original
      data
  Works for numeric data only
  Used when the number of dimensions is large


Principal Component Analysis             66




                  (Figure: original axes X1, X2 and rotated
                   principal component axes Y1, Y2)




Dimensionality Reduction: PCA                   67



  Goal is to find a projection that captures the largest amount
  of variation in data




              (Figure: data in the x1–x2 plane; the vector e points
               in the direction of largest variation)


Dimensionality Reduction: PCA                  68



  Find the eigenvectors of the covariance matrix
  The eigenvectors define the new space




              (Figure: the eigenvector e of the covariance matrix
               defines an axis of the new space)
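A worked sketch of this step on invented 2-D data: build the sample covariance matrix, get its eigenvalues in closed form (for a symmetric 2×2 matrix), and check how much variance the first component captures.

```python
import math

# Toy strongly correlated 2-D data (invented)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Sample covariance matrix [[sxx, sxy], [sxy, syy]]
sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
syy = sum((y - my) ** 2 for y in ys) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# Eigenvalues of a symmetric 2x2 matrix in closed form, largest first
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = math.sqrt(tr ** 2 / 4 - det)
l1, l2 = tr / 2 + disc, tr / 2 - disc

# Fraction of total variance captured by the first principal component
print(l1 / (l1 + l2))  # close to 1.0 for this data
```

Dropping the second component here loses almost no information, which is exactly the reduction the slide describes.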

Feature Subset Selection                        69



  Another way to reduce dimensionality of data

  Redundant features
    duplicate much or all of the information contained in one
    or more other attributes
    Example: purchase price of a product and the amount of
    sales tax paid

  Irrelevant features
      contain no information that is useful for the data mining
      task at hand
      Example: students' ID is often irrelevant to the task of
      predicting students' GPA




Feature Subset Selection                            70



Brute-force approach
   Try all possible feature subsets as input to data mining algorithm

Embedded approaches
  Feature selection occurs naturally as part of
  the data mining algorithm

Filter approaches
    Features are selected using a procedure that is
    independent from a specific data mining algorithm
    E.g., attributes are selected based on correlation measures

Wrapper approaches:
   Use a data mining algorithm as a black box
   to find best subset of attributes
   E.g., apply a genetic algorithm to search for the subset of
   features that works best with a decision tree algorithm


Feature Creation                                71



  Create new attributes that can capture the important
  information in a data set much more efficiently than the
  original attributes
  E.g., given the birthday, create the attribute age

  Three general methodologies:
     Feature Extraction: domain-specific
     Mapping Data to New Space
     Feature Construction: combining features
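The birthday-to-age example above can be sketched as feature construction (the function name is illustrative):

```python
from datetime import date

def age_on(birthday, today):
    """Derive an 'age' feature from a raw 'birthday' attribute."""
    years = today.year - birthday.year
    # subtract one year if the birthday hasn't happened yet this year
    if (today.month, today.day) < (birthday.month, birthday.day):
        years -= 1
    return years

print(age_on(date(1990, 6, 15), date(2024, 3, 1)))  # 33
```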




Mapping Data to a New Space                          72




Fourier transform
Wavelet transform




 Two Sine Waves          Two Sine Waves + Noise   Frequency
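A sketch of why the frequency domain helps: a naive discrete Fourier transform recovers the two component frequencies of a two-sine signal (a real implementation would use an FFT; the signal parameters are invented):

```python
import cmath
import math

# Sample one second of a signal that is the sum of a 5 Hz and a 12 Hz sine
N, rate = 128, 128
signal = [math.sin(2 * math.pi * 5 * t / rate) + 0.5 * math.sin(2 * math.pi * 12 * t / rate)
          for t in range(N)]

def dft(x):
    """Naive O(n^2) discrete Fourier transform, fine for a demo."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = [abs(c) for c in dft(signal)]
# The two largest magnitudes in the first half of the spectrum are the components
peaks = sorted(range(N // 2), key=lambda k: spectrum[k], reverse=True)[:2]
print(sorted(peaks))  # [5, 12]
```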




Discretization                                  73



  Three types of attributes:
     Nominal: values from an unordered set, e.g., color
     Ordinal: values from an ordered set,
     e.g., military or academic rank
     Continuous: real numbers,
     e.g., integer or real numbers

  Discretization:
     Divide the range of a continuous attribute into intervals
     Some classification algorithms only accept categorical
     attributes
     Reduce data size by discretization
     Prepare for further analysis



Discretization Approaches                      74



  Supervised
    Attributes are discretized using the class information
     Generates intervals that try to minimize the loss of
     information about the class

  Unsupervised
    Attributes are discretized solely based on their values




Discretization Using Class Labels                                  75



  Entropy based approach




  3 categories for both x and y                     5 categories for both x and y



Discretization Without Using Class Labels                             76




      Data                                     Equal interval width




     Equal frequency                                K-means
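The equal-width and equal-frequency schemes above can be sketched with NumPy (K-means binning would instead cluster the values and is omitted here); the toy data is an assumption for illustration.

```python
import numpy as np

def equal_width(x, k):
    """Unsupervised discretization: k bins of identical width."""
    edges = np.linspace(x.min(), x.max(), k + 1)
    return np.digitize(x, edges[1:-1])  # k-1 inner edges -> labels 0..k-1

def equal_frequency(x, k):
    """Unsupervised discretization: k bins holding roughly the
    same number of points (edges at quantiles)."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1))
    return np.digitize(x, edges[1:-1])

x = np.array([1, 2, 3, 4, 5, 6, 100.0])
print(equal_width(x, 3))      # the outlier takes a bin for itself
print(equal_frequency(x, 3))  # each bin keeps roughly a third of the points
```

The example shows the usual trade-off: equal width is distorted by the outlier, while equal frequency balances the bin populations.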

Discretization and Concept Hierarchy                77



  Discretization
     Reduce the number of values for a given continuous attribute
     by dividing the range of the attribute into intervals
     Interval labels can then be used to replace actual data values
     Supervised vs. unsupervised
     Split (top-down) vs. merge (bottom-up)
     Discretization can be performed recursively on an attribute

  Concept hierarchy formation
     Recursively reduce the data by collecting and replacing low level
     concepts (such as numeric values for age) by higher level
     concepts (such as young, middle-aged, or senior)




Typical Discretization Methods            78



  Entropy-based discretization:
  supervised, top-down split

  Interval merging by χ2 Analysis:
  supervised, bottom-up merge

  Segmentation by natural partitioning:
  top-down split, unsupervised




Entropy-Based Discretization                                      79



  Given a set of samples S, if S is partitioned into two intervals S1
  and S2 using boundary T, the class information entropy after
  partitioning is

       I(S, T) = (|S1| / |S|) Entropy(S1) + (|S2| / |S|) Entropy(S2)

  Entropy is calculated based on the class distribution of the samples
  in the set. Given m classes, the entropy of S1 is

       Entropy(S1) = − Σ_{i=1..m} p_i log2(p_i)

      where p_i is the probability of class i in S1
  The boundary that minimizes the entropy function over all possible
  boundaries is selected as a binary discretization
  The process is recursively applied to partitions obtained until some
  stopping criterion is met
  Such a boundary may reduce data size and improve classification
  accuracy
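A direct, non-recursive sketch of the boundary search (small in-memory lists are assumed; a full implementation would re-apply `best_split` to each resulting interval until a stopping criterion such as MDL is met):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a class distribution."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Return the boundary T that minimizes the weighted entropy
    I(S, T) of the two intervals it induces."""
    pairs = sorted(zip(values, labels))
    best_t, best_i = None, float("inf")
    for k in range(1, len(pairs)):
        t = (pairs[k - 1][0] + pairs[k][0]) / 2  # candidate boundary
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        i_st = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if i_st < best_i:
            best_t, best_i = t, i_st
    return best_t

values = [1, 2, 3, 10, 11, 12]
labels = ["a", "a", "a", "b", "b", "b"]
print(best_split(values, labels))  # 6.5 -- both intervals become pure
```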


Interval Merge by χ2 Analysis                   80



  Merging-based (bottom-up) vs. splitting-based methods
  Merge: Find the best neighboring intervals and merge them
  to form larger intervals recursively
  ChiMerge [Kerber AAAI 1992, See also Liu et al. DMKD 2002]
      Initially, each distinct value of a numerical attr. A is
      considered to be one interval
      χ2 tests are performed for every pair of adjacent intervals
      Adjacent intervals with the least χ2 values are merged
      together, since low χ2 values for a pair indicate similar
      class distributions
      This merge process proceeds recursively until a
      predefined stopping criterion is met (such as significance
      level, max-interval, max inconsistency, etc.)
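A minimal ChiMerge sketch: intervals carry their lower bound and a Counter of class frequencies, and the stopping criterion is simply a target number of intervals. This is not Kerber's full algorithm (no significance threshold or inconsistency check), just the core merge loop.

```python
from collections import Counter

def chi2(a, b, classes):
    """Pearson chi-square statistic for two adjacent intervals, where
    a and b are Counters of class frequencies in each interval."""
    total = sum(a.values()) + sum(b.values())
    stat = 0.0
    for interval in (a, b):
        n = sum(interval.values())
        for c in classes:
            expected = n * (a[c] + b[c]) / total
            if expected:
                stat += (interval[c] - expected) ** 2 / expected
    return stat

def chimerge(values, labels, max_intervals=2):
    """Start with one interval per distinct value, then repeatedly
    merge the adjacent pair with the lowest chi-square statistic."""
    classes = set(labels)
    ivs = []
    for v, l in sorted(zip(values, labels)):
        if ivs and ivs[-1][0] == v:          # same value: same interval
            ivs[-1][1] += Counter([l])
        else:
            ivs.append([v, Counter([l])])
    while len(ivs) > max_intervals:
        scores = [chi2(ivs[i][1], ivs[i + 1][1], classes)
                  for i in range(len(ivs) - 1)]
        i = scores.index(min(scores))        # most similar neighbors
        ivs[i][1] += ivs[i + 1][1]
        del ivs[i + 1]
    return [v for v, _ in ivs]               # lower bounds of the intervals

values = [1, 2, 3, 10, 11, 12]
labels = ["a", "a", "a", "b", "b", "b"]
print(chimerge(values, labels))  # two intervals starting at 1 and 10
```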




Segmentation by Natural Partitioning              81



  A simple 3-4-5 rule can be used to segment numeric data
  into relatively uniform, “natural” intervals.

  If an interval covers 3, 6, 7 or 9 distinct values at the most
  significant digit, partition the range into 3 equi-width
  intervals

  If it covers 2, 4, or 8 distinct values at the most significant
  digit, partition the range into 4 intervals

  If it covers 1, 5, or 10 distinct values at the most significant
  digit, partition the range into 5 intervals
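The rule itself is a small lookup, sketched here as a hypothetical helper that returns the suggested number of equi-width intervals:

```python
def three_4_5(count_msd):
    """Number of equi-width intervals suggested by the 3-4-5 rule for a
    range covering count_msd distinct values at the most significant digit."""
    if count_msd in (3, 6, 7, 9):
        return 3
    if count_msd in (2, 4, 8):
        return 4
    if count_msd in (1, 5, 10):
        return 5
    raise ValueError("rule undefined for this count")

print(three_4_5(3))  # 3 equi-width intervals
```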




Example of 3-4-5 Rule                            82



  Profit data: Min = -$351, Max = $4,700

  Step 1: take Low (5%-tile) = -$159 and High (95%-tile) = $1,838
  as the boundaries for partitioning
  Step 2: most significant digit msd = 1,000, so round
  Low to -$1,000 and High to $2,000
  Step 3: the range (-$1,000 - $2,000) covers 3 distinct values at
  the msd, so partition it into 3 equi-width intervals:
     (-$1,000 - 0), (0 - $1,000), ($1,000 - $2,000)
  Step 4: since Min and Max fall outside the rounded boundaries,
  adjust the extreme intervals to cover (-$400 - $5,000) and refine
  each interval recursively:
     (-$400 - 0) into 4 intervals of $100:
        (-$400 - -$300), (-$300 - -$200), (-$200 - -$100), (-$100 - 0)
     (0 - $1,000) into 5 intervals of $200:
        (0 - $200), ($200 - $400), ($400 - $600), ($600 - $800),
        ($800 - $1,000)
     ($1,000 - $2,000) into 5 intervals of $200:
        ($1,000 - $1,200), ($1,200 - $1,400), ($1,400 - $1,600),
        ($1,600 - $1,800), ($1,800 - $2,000)
     ($2,000 - $5,000) into 3 intervals of $1,000:
        ($2,000 - $3,000), ($3,000 - $4,000), ($4,000 - $5,000)
Attribute Transformation                      83



  A function that maps the entire set of values of a given
  attribute to a new set of replacement values such that each
  old value can be identified with one of the new values
      Simple functions: x^k, log(x), e^x, |x|
     Standardization and Normalization
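Standardization (z-score) and min-max normalization are the two most common such mappings; a minimal NumPy sketch:

```python
import numpy as np

def standardize(x):
    """Z-score standardization: zero mean, unit standard deviation."""
    return (x - x.mean()) / x.std()

def min_max(x, lo=0.0, hi=1.0):
    """Min-max normalization onto the interval [lo, hi]."""
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

x = np.array([2.0, 4.0, 6.0, 8.0])
print(standardize(x))  # mean 0, std 1
print(min_max(x))      # scaled onto [0, 1]
```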




On-Line Analytical Processing (OLAP)           84



  Relational databases put data into tables, while OLAP uses a
  multidimensional array representation.
  Such representations of data previously existed in statistics
  and other fields
  There are a number of data analysis and data exploration
  operations that are easier with such a data representation.




Creating a Multidimensional Array               85



  First, identify which attributes are to be the dimensions and
  which attribute is to be the target attribute whose values
  appear as entries in the multidimensional array.
      The attributes used as dimensions must have discrete
      values
      The target value is typically a count or continuous value,
      e.g., the cost of an item
      Can have no target variable at all except the count of
      objects that have the same set of attribute values
  Second, find the value of each entry in the multidimensional
  array by summing the values (of the target attribute) or
  count of all objects that have the attribute values
  corresponding to that entry.
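The two steps can be sketched with a plain count array; the discretized records below are made up for illustration and are not the actual Iris measurements.

```python
import numpy as np

# Toy records: (petal_width, petal_length, species) after discretization
records = [
    ("low", "low", "setosa"), ("low", "low", "setosa"),
    ("med", "med", "versicolor"), ("high", "high", "virginica"),
]
widths = ["low", "med", "high"]
lengths = ["low", "med", "high"]
species = ["setosa", "versicolor", "virginica"]

# Step 1: one cell per combination of the dimension values;
# Step 2: each entry holds the count of matching objects
cube = np.zeros((len(widths), len(lengths), len(species)), dtype=int)
for w, l, s in records:
    cube[widths.index(w), lengths.index(l), species.index(s)] += 1

print(cube[0, 0, 0])  # 2 setosa flowers with low width and low length
```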




Example: Iris data                            86



  We show how the attributes, petal length, petal width, and
  species type can be converted to a multidimensional array
  First, we discretized the petal width and length to have
  categorical values: low, medium, and high
  We get the following table - note the count attribute




Example: Iris data (continued)                 87



  Each unique tuple of petal width, petal length, and species
  type identifies one element of the array.
  This element is assigned the corresponding count value.
  The figure illustrates
  the result.
  All non-specified
  tuples are 0.




Example: Iris data (continued)                88



  Slices of the multidimensional array are shown by the
  following cross-tabulations
  What do these tables tell us?




OLAP Operations: Data Cube                      89



  The key operation of OLAP is the formation of a data cube
  A data cube is a multidimensional representation of data,
  together with all possible aggregates.
  By all possible aggregates, we mean the aggregates that
  result by selecting a proper subset of the dimensions and
  summing over all remaining dimensions.
  For example, if we choose the species type dimension of the
  Iris data and sum over all other dimensions, the result will be
  a one-dimensional array with three entries, each of which
  gives the number of flowers of each type.




Data Cube Example                               90



  Consider a data set that records the sales of products at a
  number of company stores at various dates.
  This data can be represented
  as a 3 dimensional array
  There are 3 two-dimensional
  aggregates (3 choose 2 ),
  3 one-dimensional aggregates,
  and 1 zero-dimensional
  aggregate (the overall total)




Data Cube Example (continued)                 91



  The following table shows one of the two-dimensional
  aggregates, along with two of the one-dimensional
  aggregates and the overall total
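With the cube stored as a NumPy array, every aggregate is a sum over a subset of the axes; the sales figures below are made up for illustration.

```python
import numpy as np

# Hypothetical sales[product, store, date] array of sales figures
sales = np.arange(24, dtype=float).reshape(2, 3, 4)

total = sales.sum()                   # the 0-dimensional aggregate
by_product = sales.sum(axis=(1, 2))   # a 1-dimensional aggregate
by_store_date = sales.sum(axis=0)     # a 2-dimensional aggregate

print(total)  # 276.0
```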




OLAP Operations: Slicing and Dicing            92



  Slicing is selecting a group of cells from the entire
  multidimensional array by specifying a specific value for one
  or more dimensions.
  Dicing involves selecting a subset of cells by specifying a
  range of attribute values.
      This is equivalent to defining a subarray from the
      complete array.
  In practice, both operations can also be accompanied by
  aggregation over some dimensions.
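On an array representation, slicing and dicing map directly onto array indexing; the array below is made up for illustration.

```python
import numpy as np

sales = np.arange(24.0).reshape(2, 3, 4)  # [product, store, date]

slice_ = sales[0, :, :]    # slicing: fix product = 0
dice = sales[:, 0:2, 1:3]  # dicing: ranges of stores and dates (a subarray)

print(slice_.shape, dice.shape)  # (3, 4) (2, 2, 2)
```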




OLAP Operations: Roll-up and Drill-down         93



  Attribute values often have a hierarchical structure.
      Each date is associated with a year, month, and week.
      A location is associated with a continent, country, state
      (province, etc.), and city.
      Products can be divided into various categories, such as
      clothing, electronics, and furniture.
  Note that these categories often nest and form a tree or
  lattice
      A year contains months, which contain days
      A country contains states, which contain cities




OLAP Operations: Roll-up and Drill-down           94



  This hierarchical structure gives rise to the roll-up and drill-
  down operations.
     For sales data, we can aggregate (roll up) the sales
     across all the dates in a month.
     Conversely, given a view of the data where the time
     dimension is broken into months, we could split the
     monthly sales totals (drill down) into daily sales totals.
     Likewise, we can drill down or roll up on the location or
     product ID attributes.
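Rolling up along the date hierarchy amounts to re-aggregating at a coarser key, e.g. from (year, month, day) to (year, month); the daily figures below are made up for illustration.

```python
from collections import defaultdict

# Daily sales keyed by (year, month, day)
daily = {(2008, 1, 1): 10.0, (2008, 1, 2): 5.0, (2008, 2, 1): 7.0}

monthly = defaultdict(float)
for (year, month, _day), amount in daily.items():
    monthly[(year, month)] += amount  # roll up: days -> months

print(monthly[(2008, 1)])  # 15.0
```

Drilling down is the inverse view: it requires keeping (or recovering) the finer-grained `daily` data.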




Data Compression                              95



  String compression
     There are extensive theories and well-tuned algorithms
     Typically lossless
     But only limited manipulation is possible without
     expansion
  Audio/video compression
     Typically lossy compression, with progressive refinement
     Sometimes small fragments of signal can be
     reconstructed without reconstructing the whole
  Time sequences are not audio
     They are typically short and vary slowly with time




96
Dimensionality Reduction:
Wavelet Transformation

                                                                 Haar2       Daubechie4
   Discrete wavelet transform (DWT): linear signal processing,
   multi-resolution analysis
   Compressed approximation: store only a small fraction of the
   strongest wavelet coefficients
   Similar to discrete Fourier transform (DFT), but better lossy
   compression, localized in space
   Method:
      The length L must be an integer power of 2 (pad with 0s
      when necessary)
      Each transform has 2 functions: smoothing and difference
      Both apply to pairs of data points, producing two sets of
      data of length L/2
      The two functions are applied recursively until the desired
      length is reached
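One level of the Haar transform (pairwise averages as the smoothing function, pairwise differences as the difference function) and its recursive application can be sketched as:

```python
def haar_step(x):
    """One level of the Haar DWT; the input length must be even."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]   # smoothing
    diff = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]  # difference
    return avg, diff

def haar(x):
    """Apply the smoothing/difference pair recursively
    (the input length is assumed to be a power of 2)."""
    coeffs = []
    while len(x) > 1:
        x, d = haar_step(x)
        coeffs = d + coeffs
    return x + coeffs  # overall average first, then detail coefficients

print(haar([2.0, 2.0, 0.0, 2.0]))  # [1.5, 0.5, 0.0, -1.0]
```

Keeping only the largest coefficients (here, dropping the 0.0 entries) gives the compressed approximation mentioned above.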




Concept Hierarchy Generation                       97
for Categorical Data

  Specification of a partial/total ordering of attributes explicitly
  at the schema level by users or experts
      street < city < state < country

  Specification of a hierarchy for a set of values by explicit
  data grouping
     {Urbana, Champaign, Chicago} < Illinois

  Specification of only a partial set of attributes
    E.g., only street < city, not others

  Automatic generation of hierarchies (or attribute levels) by
  the analysis of the number of distinct values
     E.g., for a set of attributes: {street, city, state, country}


Automatic Concept Hierarchy Generation                   98



  Some hierarchies can be automatically generated based on
  the analysis of the number of distinct values per attribute in
  the data set
     The attribute with the most distinct values is placed at the
     lowest level of the hierarchy
     Exceptions, e.g., weekday, month, quarter, year


        Country                  15 distinct values


        Province                 365 distinct values


          City                    3567 distinct values


         Street                   674,339 distinct values
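The heuristic amounts to sorting attributes by their number of distinct values, using the counts from the slide:

```python
distinct = {"country": 15, "province": 365, "city": 3567, "street": 674339}

# Fewer distinct values -> higher level in the hierarchy
hierarchy = sorted(distinct, key=distinct.get)
print(" < ".join(reversed(hierarchy)))  # street < city < province < country
```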

Summary                                      99



  Data exploration and preparation, or preprocessing, is a big
  issue for both data warehousing and data mining
  Descriptive data summarization is needed for quality data
  preprocessing
  Data preparation includes
      Data cleaning and data integration
      Data reduction and feature selection
      Discretization
  Many methods have been developed, but data preprocessing is
  still an active area of research





GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaPier Luca Lanzi
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Pier Luca Lanzi
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationPier Luca Lanzi
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningPier Luca Lanzi
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningPier Luca Lanzi
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesPier Luca Lanzi
 
DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationPier Luca Lanzi
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringPier Luca Lanzi
 
DMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringDMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringPier Luca Lanzi
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringPier Luca Lanzi
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringPier Luca Lanzi
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesPier Luca Lanzi
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsPier Luca Lanzi
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesPier Luca Lanzi
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesPier Luca Lanzi
 

Mehr von Pier Luca Lanzi (20)

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei Videogiochi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning Welcome
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di apertura
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparation
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph mining
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text mining
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rules
 
DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluation
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clustering
 
DMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clusteringDMTM Lecture 13 Representative based clustering
DMTM Lecture 13 Representative based clustering
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clustering
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 Clustering
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensembles
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethods
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rules
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision trees
 

Kürzlich hochgeladen

Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfSeasiaInfotech2
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 

Kürzlich hochgeladen (20)

Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdf
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 

Machine Learning and Data Mining: 15 Data Exploration and Preparation

  • 1. Data Exploration and Preparation Machine Learning and Data Mining (Unit 15) Prof. Pier Luca Lanzi
  • 2. References 2 Jiawei Han and Micheline Kamber, "Data Mining, : Concepts and Techniques", The Morgan Kaufmann Series in Data Management Systems (Second Edition) Tom M. Mitchell. “Machine Learning” McGraw Hill 1997 Pang-Ning Tan, Michael Steinbach, Vipin Kumar, “Introduction to Data Mining”, Addison Wesley Ian H. Witten, Eibe Frank. “Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations” 2nd Edition Prof. Pier Luca Lanzi
  • 3. Outline 3 Data Exploration Types of data Descriptive statistics Visualization Data Preprocessing Aggregation Sampling Dimensionality Reduction Feature creation Discretization Attribute Transformation Compression Concept hierarchies Prof. Pier Luca Lanzi
  • 4. Types of data sets 4 Record Data Matrix Document Data Transaction Data Graph World Wide Web Molecular Structures Ordered Spatial Data Temporal Data Sequential Data Genetic Sequence Data Prof. Pier Luca Lanzi
  • 5. Important Characteristics of 5 Structured Data Dimensionality Curse of Dimensionality Sparsity Only presence counts Resolution Patterns depend on the scale Prof. Pier Luca Lanzi
  • 6. Record Data 6 Data that consists of a collection of records, each of which consists of a fixed set of attributes Tid Refund Marital Status Taxable Income Cheat 1 Yes Single 125K No 2 No Married 100K No 3 No Single 70K No 4 Yes Married 120K No 5 No Divorced 95K Yes 6 No Married 60K No 7 Yes Divorced 220K No 8 No Single 85K Yes 9 No Married 75K No 10 No Single 90K Yes Prof. Pier Luca Lanzi
  • 7. Data Matrix 7 If data objects have the same fixed set of numeric attributes, then the data objects can be thought of as points in a multi-dimensional space, where each dimension represents a distinct attribute Such a data set can be represented by an m by n matrix, where there are m rows, one for each object, and n columns, one for each attribute Projection of x load Projection of y load Distance Load Thickness 10.23 5.27 15.22 2.7 1.2 12.65 6.25 16.22 2.2 1.1 Prof. Pier Luca Lanzi
  • 8. Document Data 8 Each document becomes a “term” vector: each term is a component (attribute) of the vector, and the value of each component is the number of times the corresponding term occurs in the document. timeout season coach game score team ball lost play win Document 1 3 0 5 0 2 6 0 2 0 2 Document 2 0 7 0 2 1 0 0 3 0 0 Document 3 0 1 0 0 1 2 2 0 3 0 Prof. Pier Luca Lanzi
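The term-vector construction on this slide can be sketched in a few lines of Python; the vocabulary and document below are made-up examples, not the ones in the figure:

```python
from collections import Counter

def term_vector(document, vocabulary):
    """Count how often each vocabulary term occurs in the document."""
    counts = Counter(document.lower().split())
    return [counts[term] for term in vocabulary]

vocab = ["team", "coach", "play", "ball", "score", "game", "win", "lost", "timeout", "season"]
doc = "team coach play ball score game win game team play"
print(term_vector(doc, vocab))  # one count per vocabulary term
```

Real text-mining pipelines add tokenization, stemming, and weighting (e.g., TF-IDF) on top of these raw counts.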
  • 9. Transaction Data 9 A special type of record data, where each record (transaction) involves a set of items. For example, consider a grocery store. The set of products purchased by a customer during one shopping trip constitute a transaction, while the individual products that were purchased are the items. TID Items 1 Bread, Coke, Milk 2 Beer, Bread 3 Beer, Coke, Diaper, Milk 4 Beer, Bread, Diaper, Milk 5 Coke, Diaper, Milk Prof. Pier Luca Lanzi
  • 10. Graph Data 10 Generic graph and HTML links, e.g.: <a href="papers/papers.html#bbbb">Data Mining</a> <a href="papers/papers.html#aaaa">Graph Partitioning</a> <a href="papers/papers.html#aaaa">Parallel Solution of Sparse Linear System of Equations</a> <a href="papers/papers.html#ffff">N-Body Computation and Dense Linear System Solvers</a> Prof. Pier Luca Lanzi
  • 11. Chemical Data 11 Benzene Molecule: C6H6 Prof. Pier Luca Lanzi
  • 12. Ordered Data 12 Sequences of transactions Items/Events An element of the sequence Prof. Pier Luca Lanzi
  • 13. Ordered Data 13 Genomic sequence data GGTTCCGCCTTCAGCCCCGCGCC CGCAGGGCCCGCCCCGCGCCGTC GAGAAGGGCCCGCCTGGCGGGCG GGGGGAGGCGGGGCCGCCCGAGC CCAACCGAGTCCGACCAGGTGCC CCCTCTGCTCGGCCTAGACCTGA GCTCATTAGGCGGCAGCGGACAG GCCAAGTAGAACACGCGAAGCGC TGGGCTGCCTGCTGCGACCAGGG Prof. Pier Luca Lanzi
  • 14. Ordered Data 14 Spatio-Temporal Data Average Monthly Temperature of land and ocean Prof. Pier Luca Lanzi
  • 15. What is data exploration? 15 A preliminary exploration of the data to better understand its characteristics. Key motivations of data exploration include Helping to select the right tool for preprocessing or analysis Making use of humans’ abilities to recognize patterns People can recognize patterns not captured by data analysis tools Related to the area of Exploratory Data Analysis (EDA) Created by statistician John Tukey Seminal book is Exploratory Data Analysis by Tukey A nice online introduction can be found in Chapter 1 of the NIST Engineering Statistics Handbook http://www.itl.nist.gov/div898/handbook/index.htm Prof. Pier Luca Lanzi
  • 16. Techniques Used In Data Exploration 16 In EDA, as originally defined by Tukey The focus was on visualization Clustering and anomaly detection were viewed as exploratory techniques In data mining, clustering and anomaly detection are major areas of interest, and not thought of as just exploratory In our discussion of data exploration, we focus on Summary statistics Visualization Online Analytical Processing (OLAP) Prof. Pier Luca Lanzi
  • 17. Iris Sample Data Set 17 Many of the exploratory data techniques are illustrated with the Iris Plant data set http://www.ics.uci.edu/~mlearn/MLRepository.html From the statistician R. A. Fisher Three flower types (classes): Setosa, Virginica, Versicolour Four (non-class) attributes, sepal width and length, petal width and length Virginica. Robert H. Mohlenbrock. USDA NRCS. 1995. Northeast wetland flora: Field office guide to plant species. Northeast National Technical Center, Chester, PA. Courtesy of USDA NRCS Wetland Science Institute. Prof. Pier Luca Lanzi
  • 18. Mining Data Descriptive Characteristics 18 Motivation To better understand the data: central tendency, variation and spread Data dispersion characteristics median, max, min, quantiles, outliers, variance, etc. Numerical dimensions correspond to sorted intervals Data dispersion: analyzed with multiple granularities of precision Boxplot or quantile analysis on sorted intervals Dispersion analysis on computed measures Folding measures into numerical dimensions Boxplot or quantile analysis on the transformed cube Prof. Pier Luca Lanzi
  • 19. Summary Statistics 19 They are numbers that summarize properties of the data Summarized properties include frequency, location and spread Examples Location, mean Spread, standard deviation Most summary statistics can be calculated in a single pass through the data Prof. Pier Luca Lanzi
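The single-pass claim above can be made concrete with Welford's online algorithm, which computes mean and variance without storing the data (a minimal sketch):

```python
def running_mean_variance(values):
    """Welford's single-pass algorithm: mean and sample variance."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)  # second factor uses the updated mean
    variance = m2 / (n - 1) if n > 1 else 0.0
    return mean, variance

print(running_mean_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```

Unlike the naive two-pass formula, this version is numerically stable and works on data streams too large to hold in memory.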
  • 20. Frequency and Mode 20 The frequency of an attribute value is the percentage of time the value occurs in the data set For example, given the attribute ‘gender’ and a representative population of people, the gender ‘female’ occurs about 50% of the time. The mode of an attribute is the most frequent attribute value The notions of frequency and mode are typically used with categorical data Prof. Pier Luca Lanzi
  • 21. Percentiles 21 For continuous data, the notion of a percentile is more useful Given an ordinal or continuous attribute x and a number p between 0 and 100, the pth percentile is a value xp of x such that p% of the observed values of x are less than xp For instance, the 50th percentile is the value x50% such that 50% of all observed values of x are less than x50% Prof. Pier Luca Lanzi
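A percentile of this kind can be computed by sorting and interpolating between ranks; this sketch uses linear interpolation, one of several common conventions:

```python
def percentile(values, p):
    """Return the p-th percentile using linear interpolation between ranks."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0   # fractional rank of the percentile
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

print(percentile([15, 20, 35, 40, 50], 50))  # 35.0
```

Note that different tools (NumPy, R, spreadsheets) offer several interpolation methods, so results on small samples may differ slightly.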
  • 22. Measures of Location: Mean and Median 22 The mean is the most common measure of the location of a set of points However, the mean is very sensitive to outliers Thus, the median or a trimmed mean is also commonly used Prof. Pier Luca Lanzi
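The effect of an outlier on the mean, and the robustness of the median and trimmed mean, can be illustrated with Python's statistics module (the data below is invented):

```python
import statistics

def trimmed_mean(values, fraction=0.1):
    """Mean after dropping the smallest and largest `fraction` of values."""
    xs = sorted(values)
    k = int(len(xs) * fraction)
    return statistics.mean(xs[k:len(xs) - k] if k > 0 else xs)

data = [3, 4, 5, 5, 6, 7, 100]            # 100 is an outlier
print(statistics.mean(data))              # pulled upward by the outlier
print(statistics.median(data))            # 5
print(trimmed_mean(data, fraction=0.15))  # drops 3 and 100 before averaging
```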
  • 23. Measures of Spread: Range and Variance 23 Range is the difference between the max and min The variance or standard deviation is the most common measure of the spread of a set of points However, this is also sensitive to outliers, so that other measures are often used. Prof. Pier Luca Lanzi
  • 24. Visualization 24 Visualization is the conversion of data into a visual or tabular format so that the characteristics of the data and the relationships among data items or attributes can be analyzed or reported Visualization of data is one of the most powerful and appealing techniques for data exploration Humans have a well-developed ability to analyze large amounts of information that is presented visually Can detect general patterns and trends Can detect outliers and unusual patterns Prof. Pier Luca Lanzi
  • 25. Example: Sea Surface Temperature 25 The following shows the Sea Surface Temperature for July 1982 Tens of thousands of data points are summarized in a single figure Prof. Pier Luca Lanzi
  • 26. Representation 26 Is the mapping of information to a visual format Data objects, their attributes, and the relationships among data objects are translated into graphical elements such as points, lines, shapes, and colors Example: Objects are often represented as points Their attribute values can be represented as the position of the points or the characteristics of the points, e.g., color, size, and shape If position is used, then the relationships of points, i.e., whether they form groups or a point is an outlier, is easily perceived. Prof. Pier Luca Lanzi
  • 27. Arrangement 27 Is the placement of visual elements within a display Can make a large difference in how easy it is to understand the data Example: Prof. Pier Luca Lanzi
  • 28. Selection 28 Is the elimination or the de-emphasis of certain objects and attributes Selection may involve choosing a subset of attributes Dimensionality reduction is often used to reduce the number of dimensions to two or three Alternatively, pairs of attributes can be considered Selection may also involve choosing a subset of objects A region of the screen can only show so many points Can sample, but want to preserve points in sparse areas Prof. Pier Luca Lanzi
  • 29. Visualization Techniques: Histograms 29 Histogram Usually shows the distribution of values of a single variable Divide the values into bins and show a bar plot of the number of objects in each bin. The height of each bar indicates the number of objects Shape of histogram depends on the number of bins Example: Petal Width (10 and 20 bins, respectively) Prof. Pier Luca Lanzi
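Equal-width binning, the core of the histogram described here, is a short exercise; bin boundaries and the handling of the maximum value are conventions that vary between tools:

```python
def histogram(values, n_bins):
    """Equal-width binning: return the count of values in each bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in values:
        i = min(int((x - lo) / width), n_bins - 1)  # clamp the maximum into the last bin
        counts[i] += 1
    return counts

print(histogram([1, 2, 2, 3, 5, 6, 8, 9], 4))  # bar heights for 4 bins
```

As the slide notes, the shape depends strongly on the number of bins, so it is worth plotting the same attribute at several bin counts.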
  • 30. Two-Dimensional Histograms 30 Show the joint distribution of the values of two attributes Example: petal width and petal length What does this tell us? Prof. Pier Luca Lanzi
  • 31. Visualization Techniques: Box Plots 31 Box Plots Invented by J. Tukey Another way of displaying the distribution of data Following figure shows the basic part of a box plot outlier 90th percentile 75th percentile 50th percentile 25th percentile 10th percentile Prof. Pier Luca Lanzi
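The quartiles that a box plot displays can be obtained from the standard library; whisker and outlier rules differ between implementations, so this sketch only computes the five-number summary:

```python
import statistics

def five_number_summary(values):
    """min, Q1, median, Q3, max: the numbers a box plot is built from."""
    q1, q2, q3 = statistics.quantiles(values, n=4)
    return min(values), q1, q2, q3, max(values)

print(five_number_summary([1, 2, 3, 4, 5, 6, 7, 8, 9]))
```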
  • 32. Example of Box Plots 32 Box plots can be used to compare attributes Prof. Pier Luca Lanzi
  • 33. Visualization Techniques: Scatter Plots 33 Attributes values determine the position Two-dimensional scatter plots most common, but can have three-dimensional scatter plots Often additional attributes can be displayed by using the size, shape, and color of the markers that represent the objects It is useful to have arrays of scatter plots can compactly summarize the relationships of several pairs of attributes Prof. Pier Luca Lanzi
  • 34. Scatter Plot Array of Iris Attributes 34 Prof. Pier Luca Lanzi
  • 35. Visualization Techniques: Contour Plots 35 Useful when a continuous attribute is measured on a spatial grid They partition the plane into regions of similar values The contour lines that form the boundaries of these regions connect points with equal values The most common example is contour maps of elevation Can also display temperature, rainfall, air pressure, etc. Prof. Pier Luca Lanzi
  • 36. Contour Plot Example: SST Dec, 1998 36 Celsius Prof. Pier Luca Lanzi
  • 37. Visualization Techniques: Matrix Plots 37 Can plot the data matrix This can be useful when objects are sorted according to class Typically, the attributes are normalized to prevent one attribute from dominating the plot Plots of similarity or distance matrices can also be useful for visualizing the relationships between objects Examples of matrix plots are presented on the next two slides Prof. Pier Luca Lanzi
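The normalization mentioned here, rescaling each attribute so none dominates the plot, can be sketched as a min-max transformation (z-score standardization is an equally common choice):

```python
def min_max_normalize(column):
    """Rescale a numeric column to the [0, 1] range."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

# Attributes on very different scales become directly comparable:
print(min_max_normalize([150, 160, 170, 180]))
print(min_max_normalize([0.2, 0.3, 0.4, 0.6]))
```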
  • 38. Visualization of the Iris Data Matrix 38 standard deviation Prof. Pier Luca Lanzi
  • 39. Visualization of the Iris Correlation Matrix 39 Prof. Pier Luca Lanzi
  • 40. Visualization Techniques: 40 Parallel Coordinates Used to plot the attribute values of high-dimensional data Instead of using perpendicular axes, use a set of parallel axes The attribute values of each object are plotted as a point on each corresponding coordinate axis and the points are connected by a line Thus, each object is represented as a line Often, the lines representing a distinct class of objects group together, at least for some attributes Ordering of attributes is important in seeing such groupings Prof. Pier Luca Lanzi
  • 41. Parallel Coordinates Plots for Iris Data 41 Prof. Pier Luca Lanzi
  • 42. Other Visualization Techniques 42 Star Plots Similar approach to parallel coordinates, but axes radiate from a central point The line connecting the values of an object is a polygon Chernoff Faces Approach created by Herman Chernoff This approach associates each attribute with a characteristic of a face The values of each attribute determine the appearance of the corresponding facial characteristic Each object becomes a separate face Relies on humans’ ability to distinguish faces Prof. Pier Luca Lanzi
  • 43. Star Plots for Iris Data 43 Setosa Versicolour Virginica Prof. Pier Luca Lanzi
  • 44. Chernoff Faces for Iris Data 44 Setosa Versicolour Virginica Prof. Pier Luca Lanzi
  • 45. Why Data Preprocessing? 45 Data in the real world is dirty Incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data e.g., occupation=“ ” Noisy: containing errors or outliers e.g., Salary=“-10” Inconsistent: containing discrepancies in codes or names e.g., Age=“42” Birthday=“03/07/1997” e.g., Was rating “1,2,3”, now rating “A, B, C” e.g., discrepancy between duplicate records Prof. Pier Luca Lanzi
  • 46. Why Is Data Dirty? 46 Incomplete data may come from “Not applicable” data value when collected Different considerations between the time when the data was collected and when it is analyzed. Human/hardware/software problems Noisy data (incorrect values) may come from Faulty data collection instruments Human or computer error at data entry Errors in data transmission Inconsistent data may come from Different data sources Functional dependency violation (e.g., modify some linked data) Duplicate records also need data cleaning Prof. Pier Luca Lanzi
  • 47. Why Is Data Preprocessing Important? 47 No quality data, no quality mining results! Quality decisions must be based on quality data E.g., duplicate or missing data may cause incorrect or even misleading statistics. Data warehouse needs consistent integration of quality data Data extraction, cleaning, and transformation comprises the majority of the work of building a data warehouse Prof. Pier Luca Lanzi
  • 48. Data Quality 48 What kinds of data quality problems? How can we detect problems with the data? What can we do about these problems? Examples of data quality problems: noise and outliers missing values duplicate data Prof. Pier Luca Lanzi
  • 49. Multi-Dimensional Measure of 49 Data Quality A well-accepted multidimensional view: Accuracy Completeness Consistency Timeliness Believability Value added Interpretability Accessibility Broad categories: Intrinsic, contextual, representational, and accessibility Prof. Pier Luca Lanzi
  • 50. Noise 50 Noise refers to modification of original values Examples: distortion of a person’s voice when talking on a poor phone and “snow” on television screen Two Sine Waves Two Sine Waves + Noise Prof. Pier Luca Lanzi
  • 51. Outliers 51 Outliers are data objects with characteristics that are considerably different than most of the other data objects in the data set Prof. Pier Luca Lanzi
  • 52. Missing Values 52 Reasons for missing values Information is not collected (e.g., people decline to give their age and weight) Attributes may not be applicable to all cases (e.g., annual income is not applicable to children) Handling missing values Eliminate Data Objects Estimate Missing Values Ignore the Missing Value During Analysis Replace with all possible values (weighted by their probabilities) Prof. Pier Luca Lanzi
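The handling strategies above can be sketched with pandas; the toy table, column names, and values below are invented for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical records with missing values (NaN)
df = pd.DataFrame({"age": [25.0, np.nan, 40.0, 35.0],
                   "income": [30_000.0, 45_000.0, np.nan, 52_000.0]})

# Eliminate data objects: drop every row with a missing value
dropped = df.dropna()

# Estimate missing values: replace each NaN with the attribute mean
imputed = df.fillna(df.mean())
```

Replacing with the attribute mean is only one possible estimate; a class-conditional mean or a model-based estimate is often preferable.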
• 53. Duplicate Data 53 Data set may include data objects that are duplicates, or almost duplicates of one another Major issue when merging data from heterogeneous sources Examples: same person with multiple email addresses Data cleaning: process of dealing with duplicate data issues Prof. Pier Luca Lanzi
• 54. Data Cleaning as a Process 54 Data discrepancy detection Use metadata (e.g., domain, range, dependency, distribution) Check field overloading Check uniqueness rule, consecutive rule and null rule Use commercial tools • Data scrubbing: use simple domain knowledge (e.g., postal code, spell-check) to detect errors and make corrections • Data auditing: by analyzing data to discover rules and relationships to detect violators (e.g., correlation and clustering to find outliers) Data migration and integration Data migration tools: allow transformations to be specified ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface Integration of the two processes Iterative and interactive (e.g., Potter’s Wheel) Prof. Pier Luca Lanzi
  • 55. Data Preprocessing 55 Aggregation Sampling Dimensionality Reduction Feature subset selection Feature creation Discretization and Binarization Attribute Transformation Prof. Pier Luca Lanzi
  • 56. Aggregation 56 Combining two or more attributes (or objects) into a single attribute (or object) Purpose Data reduction: reduce the number of attributes or objects Change of scale: cities aggregated into regions, states, countries, etc More “stable” data: aggregated data tends to have less variability Prof. Pier Luca Lanzi
  • 57. Aggregation 57 Variation of Precipitation in Australia Standard Deviation of Average Standard Deviation of Average Monthly Precipitation Yearly Precipitation Prof. Pier Luca Lanzi
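A minimal pandas sketch of aggregation as a change of scale; the cities and precipitation figures are invented:

```python
import pandas as pd

# Hypothetical monthly precipitation records
monthly = pd.DataFrame({
    "city": ["Milan", "Milan", "Rome", "Rome"],
    "month": [1, 2, 1, 2],
    "precipitation": [60, 40, 80, 70],
})

# Aggregate monthly values into one yearly total per city:
# fewer objects, and the aggregated data tends to have less variability
yearly = monthly.groupby("city")["precipitation"].sum()
```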
• 58. Sampling 58 Sampling is the main technique employed for data selection. It is often used for both the preliminary investigation of the data and the final data analysis. Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming Sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming Prof. Pier Luca Lanzi
  • 59. Key principles for Effective Sampling 59 Using a sample will work almost as well as using the entire data sets, if the sample is representative A sample is representative if it has approximately the same property (of interest) as the original set of data Prof. Pier Luca Lanzi
  • 60. Types of Sampling 60 Simple Random Sampling There is an equal probability of selecting any particular item Sampling without replacement As each item is selected, it is removed from the population Sampling with replacement Objects are not removed from the population as they are selected for the sample. In sampling with replacement, the same object can be picked up more than once Stratified sampling Split the data into several partitions Then draw random samples from each partition Prof. Pier Luca Lanzi
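The three schemes above can be sketched with the standard library; the population and sample sizes are arbitrary:

```python
import random

random.seed(42)  # for reproducibility
population = list(range(100))  # hypothetical population of 100 objects

# Simple random sampling without replacement:
# each item is removed from the population as it is selected
without_repl = random.sample(population, 10)

# Sampling with replacement: the same object can be picked more than once
with_repl = [random.choice(population) for _ in range(10)]

# Stratified sampling: split the data into partitions,
# then draw a random sample from each partition
strata = {"low": list(range(50)), "high": list(range(50, 100))}
stratified = [x for part in strata.values() for x in random.sample(part, 5)]
```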
  • 61. Sample Size 61 8000 points 2000 Points 500 Points Prof. Pier Luca Lanzi
  • 62. Sample Size 62 What sample size is necessary to get at least one object from each of 10 groups. Prof. Pier Luca Lanzi
• 63. Curse of Dimensionality 63 When dimensionality increases, data becomes increasingly sparse in the space that it occupies Definitions of density and distance between points, which are critical for clustering and outlier detection, become less meaningful Experiment: randomly generate 500 points and compute the difference between the maximum and minimum distances between any pair of points Prof. Pier Luca Lanzi
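The experiment on the slide can be reproduced in miniature (100 points instead of 500 to keep the pairwise computation cheap; the exact numbers depend on the random seed):

```python
import math
import random

def relative_contrast(n_points, dim, seed=0):
    """log10 of (max - min) / min over all pairwise distances
    of n_points random points in the unit hypercube."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(p, q)
             for i, p in enumerate(pts) for q in pts[i + 1:]]
    return math.log10((max(dists) - min(dists)) / min(dists))

# As dimensionality grows, distances concentrate and the contrast shrinks
low_dim = relative_contrast(100, 2)
high_dim = relative_contrast(100, 50)
```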
• 64. Dimensionality Reduction 64 Purpose: Avoid curse of dimensionality Reduce amount of time and memory required by data mining algorithms Allow data to be more easily visualized May help to eliminate irrelevant features or reduce noise Techniques Principal Component Analysis Singular Value Decomposition Others: supervised and non-linear techniques Prof. Pier Luca Lanzi
• 65. Dimensionality Reduction: 65 Principal Component Analysis (PCA) Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data Steps Normalize input data: each attribute falls within the same range Compute k orthonormal (unit) vectors, i.e., principal components Each input data vector is a linear combination of the k principal component vectors The principal components are sorted in order of decreasing “significance” or strength Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (using the strongest principal components, it is possible to reconstruct a good approximation of the original data) Works for numeric data only Used when the number of dimensions is large Prof. Pier Luca Lanzi
  • 66. Principal Component Analysis 66 X2 Y1 Y2 X1 Prof. Pier Luca Lanzi
  • 67. Dimensionality Reduction: PCA 67 Goal is to find a projection that captures the largest amount of variation in data x2 e x1 Prof. Pier Luca Lanzi
  • 68. Dimensionality Reduction: PCA 68 Find the eigenvectors of the covariance matrix The eigenvectors define the new space x2 e x1 Prof. Pier Luca Lanzi
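The PCA steps above (center, compute the covariance matrix, take its eigenvectors, project) can be sketched with NumPy; the data set below is synthetic:

```python
import numpy as np

def pca(X, k):
    """Project X onto the k orthonormal directions of largest variance."""
    Xc = X - X.mean(axis=0)              # center each attribute
    cov = np.cov(Xc, rowvar=False)       # covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # its eigenvectors define the new space
    order = np.argsort(vals)[::-1]       # sort by decreasing "significance"
    return Xc @ vecs[:, order[:k]]

# Synthetic 2-D data whose variance lies mostly along one direction
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([x, 2 * x + rng.normal(scale=0.1, size=200)])
Z = pca(X, 1)  # one strong component approximates the data well
```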
  • 69. Feature Subset Selection 69 Another way to reduce dimensionality of data Redundant features duplicate much or all of the information contained in one or more other attributes Example: purchase price of a product and the amount of sales tax paid Irrelevant features contain no information that is useful for the data mining task at hand Example: students' ID is often irrelevant to the task of predicting students' GPA Prof. Pier Luca Lanzi
• 70. Feature Subset Selection 70 Brute-force approach Try all possible feature subsets as input to data mining algorithm Embedded approaches Feature selection occurs naturally as part of the data mining algorithm Filter approaches Features are selected using a procedure that is independent from a specific data mining algorithm E.g., attributes are selected based on correlation measures Wrapper approaches: Use a data mining algorithm as a black box to find the best subset of attributes E.g., apply a genetic algorithm and a decision tree algorithm to find the best set of features for a decision tree Prof. Pier Luca Lanzi
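A minimal sketch of a filter approach, scoring features by absolute correlation with the target independently of any specific mining algorithm; the data and the single relevant feature are made up:

```python
import numpy as np

def filter_select(X, y, k):
    """Rank features by |correlation with y| and keep the top k."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return list(np.argsort(scores)[::-1][:k])

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = 3 * X[:, 2] + rng.normal(scale=0.1, size=100)  # only feature 2 matters
selected = filter_select(X, y, 1)
```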
  • 71. Feature Creation 71 Create new attributes that can capture the important information in a data set much more efficiently than the original attributes E.g., given the birthday, create the attribute age Three general methodologies: Feature Extraction: domain-specific Mapping Data to New Space Feature Construction: combining features Prof. Pier Luca Lanzi
  • 72. Mapping Data to a New Space 72 Fourier transform Wavelet transform Two Sine Waves Two Sine Waves + Noise Frequency Prof. Pier Luca Lanzi
• 73. Discretization 73 Three types of attributes: Nominal: values from an unordered set, e.g., color Ordinal: values from an ordered set, e.g., military or academic rank Continuous: numeric values, e.g., integers or real numbers Discretization: Divide the range of a continuous attribute into intervals Some classification algorithms only accept categorical attributes Reduce data size by discretization Prepare for further analysis Prof. Pier Luca Lanzi
• 74. Discretization Approaches 74 Supervised Attributes are discretized using the class information Generates intervals that try to minimize the loss of information about the class Unsupervised Attributes are discretized solely based on their values Prof. Pier Luca Lanzi
  • 75. Discretization Using Class Labels 75 Entropy based approach 3 categories for both x and y 5 categories for both x and y Prof. Pier Luca Lanzi
  • 76. Discretization Without Using Class Labels 76 Data Equal interval width Equal frequency K-means Prof. Pier Luca Lanzi
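Equal-width and equal-frequency binning, two of the unsupervised approaches shown above, in a small NumPy sketch (the values are invented; note how the outlier 100 distorts the equal-width bins):

```python
import numpy as np

values = np.array([1, 2, 3, 4, 5, 6, 7, 8, 100], dtype=float)

# Equal interval width: 3 bins of identical width over [min, max];
# the outlier forces almost everything into the first bin
edges = np.linspace(values.min(), values.max(), 4)
equal_width = np.digitize(values, edges[1:-1])

# Equal frequency: boundaries at the 1/3 and 2/3 quantiles,
# so every bin holds (roughly) the same number of objects
quantiles = np.quantile(values, [1 / 3, 2 / 3])
equal_freq = np.digitize(values, quantiles)
```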
  • 77. Discretization and Concept Hierarchy 77 Discretization Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals Interval labels can then be used to replace actual data values Supervised vs. unsupervised Split (top-down) vs. merge (bottom-up) Discretization can be performed recursively on an attribute Concept hierarchy formation Recursively reduce the data by collecting and replacing low level concepts (such as numeric values for age) by higher level concepts (such as young, middle-aged, or senior) Prof. Pier Luca Lanzi
  • 78. Typical Discretization Methods 78 Entropy-based discretization: supervised, top-down split Interval merging by χ2 Analysis: supervised, bottom-up merge Segmentation by natural partitioning: top-down split, unsupervised Prof. Pier Luca Lanzi
• 79. Entropy-Based Discretization 79 Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the resulting class entropy after partitioning is I(S, T) = |S1|/|S| Entropy(S1) + |S2|/|S| Entropy(S2) Entropy is calculated based on the class distribution of the samples in the set. Given m classes, the entropy of S1 is Entropy(S1) = −Σi=1..m pi log2(pi) where pi is the probability of class i in S1 The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization The process is recursively applied to the partitions obtained until some stopping criterion is met Such a boundary may reduce data size and improve classification accuracy Prof. Pier Luca Lanzi
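A sketch of the binary split search described above, with made-up values and labels chosen so that the classes separate perfectly at one boundary:

```python
import math

def entropy(labels):
    """Entropy(S) = -sum_i p_i * log2(p_i) over the class distribution of S."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)) if c)

def best_split(values, labels):
    """Boundary T minimizing |S1|/|S| Entropy(S1) + |S2|/|S| Entropy(S2)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_t, best_i = None, float("inf")
    for j in range(1, n):
        if pairs[j - 1][0] == pairs[j][0]:
            continue  # no boundary between identical values
        left = [l for _, l in pairs[:j]]
        right = [l for _, l in pairs[j:]]
        info = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if info < best_i:
            best_t, best_i = (pairs[j - 1][0] + pairs[j][0]) / 2, info
    return best_t, best_i

# Attribute perfectly separated between 3 and 14 -> boundary at 8.5, entropy 0
t, info = best_split([1, 2, 3, 14, 15, 16], ["a", "a", "a", "b", "b", "b"])
```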
• 80. Interval Merge by χ2 Analysis 80 Merging-based (bottom-up) vs. splitting-based methods Merge: Find the best neighboring intervals and merge them to form larger intervals recursively ChiMerge [Kerber AAAI 1992, See also Liu et al. DMKD 2002] Initially, each distinct value of a numerical attribute A is considered to be one interval χ2 tests are performed for every pair of adjacent intervals Adjacent intervals with the least χ2 values are merged together, since low χ2 values for a pair indicate similar class distributions This merge process proceeds recursively until a predefined stopping criterion is met (such as significance level, max-interval, max inconsistency, etc.) Prof. Pier Luca Lanzi
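The χ2 computation at the heart of ChiMerge, for one pair of adjacent intervals described by per-class counts (the counts are invented; a full ChiMerge would repeat this over all adjacent pairs and merge the pair with the lowest value):

```python
def chi2(interval_a, interval_b):
    """Chi-square statistic for two adjacent intervals (dicts of class counts).
    A low value indicates similar class distributions, i.e., merge candidates."""
    classes = set(interval_a) | set(interval_b)
    total = sum(interval_a.values()) + sum(interval_b.values())
    stat = 0.0
    for row in (interval_a, interval_b):
        row_total = sum(row.values())
        for c in classes:
            col_total = interval_a.get(c, 0) + interval_b.get(c, 0)
            expected = row_total * col_total / total
            if expected:
                stat += (row.get(c, 0) - expected) ** 2 / expected
    return stat

similar = chi2({"yes": 4, "no": 4}, {"yes": 4, "no": 4})    # same distribution
different = chi2({"yes": 8, "no": 0}, {"yes": 0, "no": 8})  # opposite
```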
• 81. Segmentation by Natural Partitioning 81 A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, “natural” intervals. If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals Prof. Pier Luca Lanzi
• 82. Example of 3-4-5 Rule 82 Profit data with Min = −$351, Low (5%-tile) = −$159, High (95%-tile) = $1,838, Max = $4,700 Step 1: compute Min, Low, High, Max Step 2: the most significant digit gives msd = 1,000, so Low is rounded to −$1,000 and High to $2,000 Step 3: the range (−$1,000 … $2,000) covers 3 distinct values at the msd, so it is split into 3 equi-width intervals: (−$1,000 … 0), (0 … $1,000), ($1,000 … $2,000) Step 4: adjusting for the actual Min and Max yields (−$400 … 0), (0 … $1,000), ($1,000 … $2,000), ($2,000 … $5,000) Step 5: each interval is partitioned recursively, e.g., (−$400 … 0) into four $100-wide intervals, (0 … $1,000) and ($1,000 … $2,000) into five $200-wide intervals each, and ($2,000 … $5,000) into three $1,000-wide intervals Prof. Pier Luca Lanzi
• 83. Attribute Transformation 83 A function that maps the entire set of values of a given attribute to a new set of replacement values such that each old value can be identified with one of the new values Simple functions: x^k, log(x), e^x, |x| Standardization and Normalization Prof. Pier Luca Lanzi
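The two named transformations in a short NumPy sketch (toy values):

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])  # toy attribute values

# Min-max normalization: map the values onto [0, 1]
normalized = (x - x.min()) / (x.max() - x.min())

# Standardization (z-score): zero mean and unit standard deviation
standardized = (x - x.mean()) / x.std()
```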
  • 84. On-Line Analytical Processing (OLAP) 84 Relational databases put data into tables, while OLAP uses a multidimensional array representation. Such representations of data previously existed in statistics and other fields There are a number of data analysis and data exploration operations that are easier with such a data representation. Prof. Pier Luca Lanzi
  • 85. Creating a Multidimensional Array 85 First, identify which attributes are to be the dimensions and which attribute is to be the target attribute whose values appear as entries in the multidimensional array. The attributes used as dimensions must have discrete values The target value is typically a count or continuous value, e.g., the cost of an item Can have no target variable at all except the count of objects that have the same set of attribute values Second, find the value of each entry in the multidimensional array by summing the values (of the target attribute) or count of all objects that have the attribute values corresponding to that entry. Prof. Pier Luca Lanzi
  • 86. Example: Iris data 86 We show how the attributes, petal length, petal width, and species type can be converted to a multidimensional array First, we discretized the petal width and length to have categorical values: low, medium, and high We get the following table - note the count attribute Prof. Pier Luca Lanzi
  • 87. Example: Iris data (continued) 87 Each unique tuple of petal width, petal length, and species type identifies one element of the array. This element is assigned the corresponding count value. The figure illustrates the result. All non-specified tuples are 0. Prof. Pier Luca Lanzi
  • 88. Example: Iris data (continued) 88 Slices of the multidimensional array are shown by the following cross-tabulations What do these tables tell us? Prof. Pier Luca Lanzi
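A cross-tabulation like these can be produced directly with pandas; the discretized records below are hypothetical, not the actual Iris counts:

```python
import pandas as pd

# Hypothetical discretized Iris-like records
df = pd.DataFrame({
    "petal_width":  ["low", "low", "high", "high", "high"],
    "petal_length": ["low", "low", "high", "high", "low"],
    "species": ["setosa", "setosa", "virginica", "virginica", "versicolour"],
})

# A 2-D slice of the multidimensional array: counts per cell
xtab = pd.crosstab(df["petal_width"], df["petal_length"])
```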
• 89. OLAP Operations: Data Cube 89 The key operation of OLAP is the formation of a data cube A data cube is a multidimensional representation of data, together with all possible aggregates. By all possible aggregates, we mean the aggregates that result by selecting a proper subset of the dimensions and summing over all remaining dimensions. For example, if we choose the species type dimension of the Iris data and sum over all other dimensions, the result will be a one-dimensional array with three entries, each of which gives the number of flowers of each type. Prof. Pier Luca Lanzi
• 90. Data Cube Example 90 Consider a data set that records the sales of products at a number of company stores at various dates. This data can be represented as a three-dimensional array There are 3 two-dimensional aggregates (3 choose 2), 3 one-dimensional aggregates, and 1 zero-dimensional aggregate (the overall total) Prof. Pier Luca Lanzi
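One of the two-dimensional aggregates, together with its one-dimensional aggregates and the overall total, can be sketched with a pivot table (the products, stores, and amounts below are invented):

```python
import pandas as pd

sales = pd.DataFrame({
    "product": ["A", "A", "B", "B"],
    "store":   ["S1", "S2", "S1", "S2"],
    "amount":  [10, 20, 30, 40],
})

# margins=True adds the 1-D aggregates (the "All" row and column)
# and the 0-D aggregate (the overall total)
cube = pd.pivot_table(sales, values="amount", index="product",
                      columns="store", aggfunc="sum", margins=True)
```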
• 91. Data Cube Example (continued) 91 The following table shows one of the two-dimensional aggregates, along with two of the one-dimensional aggregates, and the overall total Prof. Pier Luca Lanzi
  • 92. OLAP Operations: Slicing and Dicing 92 Slicing is selecting a group of cells from the entire multidimensional array by specifying a specific value for one or more dimensions. Dicing involves selecting a subset of cells by specifying a range of attribute values. This is equivalent to defining a subarray from the complete array. In practice, both operations can also be accompanied by aggregation over some dimensions. Prof. Pier Luca Lanzi
• 93. OLAP Operations: Roll-up and Drill-down 93 Attribute values often have a hierarchical structure. Each date is associated with a year, month, and week. A location is associated with a continent, country, state (province, etc.), and city. Products can be divided into various categories, such as clothing, electronics, and furniture. Note that these categories often nest and form a tree or lattice: a year contains months, which contain days; a country contains states, which contain cities Prof. Pier Luca Lanzi
  • 94. OLAP Operations: Roll-up and Drill-down 94 This hierarchical structure gives rise to the roll-up and drill- down operations. For sales data, we can aggregate (roll up) the sales across all the dates in a month. Conversely, given a view of the data where the time dimension is broken into months, we could split the monthly sales totals (drill down) into daily sales totals. Likewise, we can drill down or roll up on the location or product ID attributes. Prof. Pier Luca Lanzi
• 95. Data Compression 95 String compression There are extensive theories and well-tuned algorithms Typically lossless But only limited manipulation is possible without expansion Audio/video compression Typically lossy compression, with progressive refinement Sometimes small fragments of signal can be reconstructed without reconstructing the whole Time sequences are not audio: they are typically short and vary slowly with time Prof. Pier Luca Lanzi
• 96. 96 Dimensionality Reduction: Wavelet Transformation Haar2 Daubechie4 Discrete wavelet transform (DWT): linear signal processing, multi-resolution analysis Compressed approximation: store only a small fraction of the strongest of the wavelet coefficients Similar to discrete Fourier transform (DFT), but better lossy compression, localized in space Method: Length, L, must be an integer power of 2 (padding with 0’s, when necessary) Each transform has 2 functions: smoothing, difference Applies to pairs of data, resulting in two sets of data of length L/2 Applies the two functions recursively, until it reaches the desired length Prof. Pier Luca Lanzi
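A minimal Haar transform following the method above: a smoothing function (pairwise average) and a difference function (pairwise half-difference) applied recursively, each pass halving the length. This particular normalization is one common convention, not the only one:

```python
def haar(data):
    """1-D Haar wavelet transform; len(data) must be a power of 2."""
    data = list(data)
    detail = []
    while len(data) > 1:
        # smoothing and difference applied to pairs,
        # each producing a sequence of half the length
        avgs = [(a + b) / 2 for a, b in zip(data[::2], data[1::2])]
        diffs = [(a - b) / 2 for a, b in zip(data[::2], data[1::2])]
        detail = diffs + detail
        data = avgs
    return data + detail  # [overall average, coarse-to-fine detail coefficients]

coeffs = haar([4, 2, 6, 8])
```

Compression then amounts to keeping only the strongest coefficients and setting the rest to zero.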
  • 97. Concept Hierarchy Generation 97 for Categorical Data Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts street < city < state < country Specification of a hierarchy for a set of values by explicit data grouping {Urbana, Champaign, Chicago} < Illinois Specification of only a partial set of attributes E.g., only street < city, not others Automatic generation of hierarchies (or attribute levels) by the analysis of the number of distinct values E.g., for a set of attributes: {street, city, state, country} Prof. Pier Luca Lanzi
  • 98. Automatic Concept Hierarchy Generation 98 Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set The attribute with the most distinct values is placed at the lowest level of the hierarchy Exceptions, e.g., weekday, month, quarter, year Country 15 distinct values Province 365 distinct values City 3567 distinct values Street 674,339 distinct values Prof. Pier Luca Lanzi
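The distinct-value heuristic above in two lines (the counts are copied from the slide):

```python
# The attribute with the most distinct values goes to the lowest hierarchy level
distinct = {"country": 15, "province": 365, "city": 3567, "street": 674_339}
hierarchy = sorted(distinct, key=distinct.get)  # highest level first
```

As the slide notes, the heuristic has exceptions (e.g., weekday has 7 distinct values but sits below month).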
• 99. Summary 99 Data exploration and preparation, or preprocessing, is a big issue for both data warehousing and data mining Descriptive data summarization is needed for quality data preprocessing Data preparation includes Data cleaning and data integration Data reduction and feature selection Discretization A lot of methods have been developed, but data preprocessing is still an active area of research Prof. Pier Luca Lanzi