1
Data Preprocessing
2
Introduction
 Real World databases
 Huge size
 Noisy, Missing and Inconsistent
 Data Preprocessing
 Data Cleaning
 Data Integration
 Data Transformation
 Data Reduction
3
Need for Data Preprocessing
 Incomplete Data
 Lacking certain attributes of interest
 May not have been considered important during data entry
 Equipment Malfunctioning
 Deleted due to clashes with other data
 Only aggregate data may be present
4
Need for Data Preprocessing
 Noisy data
 Incorrect attribute values
 Errors or outliers e.g., Salary=“-10”
 Data collection instruments – faulty
 Data entry errors
 Transmission errors – limited buffer size
 Inconsistent
 Containing discrepancies in codes or names e.g., Age=“42”
Birthdate=“03/07/1997”
 Different data sources
 Functional dependency violation (e.g., modify some linked
data)
 Duplicate records also need data cleaning
5
Need for Data Preprocessing
 Quality data
 Quality decisions must be based on quality data
 e.g., duplicate or missing data may cause incorrect or even
misleading statistics.
 Data warehouse needs consistent integration of quality data
 Data extraction, cleaning, and transformation
comprises the majority of the work of building a data
warehouse
6
Major Tasks in Data Preprocessing
 Data cleaning
 Fill in missing values
 Smooth noisy data
 Identify or remove outliers
 Resolve inconsistencies
 Data integration
 Integration of multiple databases, data cubes, or files
 Attributes may have different names
 Eliminating redundancies
 Some attributes can be inferred from others
7
Major Tasks in Data Preprocessing
 Data transformation
 Normalization and aggregation
 Data reduction
 Obtains a reduced representation that is much smaller in volume yet produces the same or
similar analytical results
 Data Aggregation
 Dimension Reduction
 Data Compression
 Numerosity Reduction
 Generalization
 Data discretization
 Part of data reduction but with particular importance, especially for
numerical data
8
Forms of Data Preprocessing
9
Data Cleaning
 Importance
 “Data cleaning is the number one problem in data
warehousing”—DCI survey
 Data cleaning tasks
 Fill in missing values
 Identify outliers and smooth out noisy data
 Correct inconsistent data
 Resolve redundancy caused by data integration
10
Missing Data
 Data is not always available
 E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
 Missing data may need to be inferred.
11
Handling Missing Data
 Ignore the tuple
 Usually done when class label is missing (assuming the task is
classification)—not effective when the percentage of missing
values per attribute varies considerably.
 Fill in the missing value manually
 Tedious and often infeasible for large data sets
12
Handling Missing Data
 Fill it in automatically with
 a global constant : e.g., “unknown”, a new class
 the attribute mean
 the attribute mean for all samples belonging to the
same class: smarter
 the most probable value: inference-based such as
Bayesian formula or decision tree
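A minimal pandas sketch of the automatic fill strategies above (global constant, overall attribute mean, per-class mean); the data and column names are made up for illustration.

```python
import pandas as pd
import numpy as np

# Toy data with missing income values; column names are illustrative only.
df = pd.DataFrame({
    "income": [30000, np.nan, 52000, np.nan, 61000, 45000],
    "class":  ["low", "low", "high", "high", "high", "low"],
})

# 1) Global constant (a sentinel such as -1, or "unknown" for categorical attributes)
filled_const = df["income"].fillna(-1)

# 2) Attribute mean over all samples
filled_mean = df["income"].fillna(df["income"].mean())

# 3) Attribute mean per class -- usually smarter than the global mean
filled_class_mean = df["income"].fillna(
    df.groupby("class")["income"].transform("mean")
)

print(filled_class_mean.tolist())
```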
13
Noisy Data
 Noise: random error or variance in a measured
variable
 Binning
 first sort data and partition into bins
 then one can smooth by bin means, smooth by bin median, smooth by bin
boundaries, etc.
 Regression
 Smooth by fitting the data into regression functions
 Clustering
 Detect and remove outliers
 Combined computer and human inspection
 Detect suspicious values and check by human (e.g., deal with possible outliers)
14
Binning
 Equal-width (distance) partitioning
 Divides the range into N intervals of equal size: uniform grid
 if A and B are the lowest and highest values of the attribute, the
width of the intervals will be W = (B − A) / N
 The most straightforward, but outliers may dominate
 Skewed data is not handled well
 Equal-depth (frequency) partitioning
 Divides the range into N intervals, each containing approximately
same number of samples
 Good data scaling
 Managing categorical attributes can be tricky
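As a rough illustration of the two partitioning schemes (assuming N = 3 bins), pandas' cut and qcut give equal-width and equal-depth bins respectively:

```python
import pandas as pd

prices = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Equal-width: 3 intervals of width W = (max - min) / N = (34 - 4) / 3 = 10
equal_width = pd.cut(prices, bins=3)

# Equal-depth (equal-frequency): 3 bins with roughly the same number of samples each
equal_depth = pd.qcut(prices, q=3)

print(equal_width.value_counts().sort_index())
print(equal_depth.value_counts().sort_index())
```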
15
Binning
 Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26,
28, 29, 34
 Partition into equal-frequency (equi-depth) bins:
 Bin 1: 4, 8, 9, 15
 Bin 2: 21, 21, 24, 25
 Bin 3: 26, 28, 29, 34
 Smoothing by bin means:
 Bin 1: 9, 9, 9, 9
 Bin 2: 23, 23, 23, 23
 Bin 3: 29, 29, 29, 29
 Smoothing by bin boundaries:
 Bin 1: 4, 4, 4, 15
 Bin 2: 21, 21, 25, 25
 Bin 3: 26, 26, 26, 34
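A short NumPy sketch that reproduces the smoothing above on the same price data, using three equal-frequency bins of four values each:

```python
import numpy as np

prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])  # already sorted
bins = prices.reshape(3, 4)  # three equal-frequency bins of four values

# Smoothing by bin means: replace each value with its bin's (rounded) mean
by_means = np.repeat(np.rint(bins.mean(axis=1)).astype(int), 4).reshape(3, 4)

# Smoothing by bin boundaries: replace each value with the closer of bin min / bin max
lo, hi = bins.min(axis=1, keepdims=True), bins.max(axis=1, keepdims=True)
by_bounds = np.where(bins - lo <= hi - bins, lo, hi)

print(by_means)   # [[ 9  9  9  9] [23 23 23 23] [29 29 29 29]]
print(by_bounds)  # [[ 4  4  4 15] [21 21 25 25] [26 26 26 34]]
```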
16
Regression
[Figure: noisy data points such as (X1, Y1) are smoothed by projecting them onto the fitted regression line y = x + 1, giving the smoothed value Y1′.]
17
Cluster Analysis
18
Human Inspection
 Combined Computer and Human Inspection
 Example: outliers in handwritten character recognition
 The system assigns a surprise value to each recognized pattern
 Patterns with high surprise values are flagged as garbage / possible outliers
 A human then inspects the flagged cases and decides
19
Inconsistent Data
 Data discrepancy detection
 Use metadata (e.g., domain, range, dependency)
 Check Functional dependency relationships
 Use commercial tools
 Data scrubbing: use simple domain knowledge to detect errors and
make corrections
 Data auditing: analyze the data to discover rules and relationships and to detect
violators (e.g., use correlation and clustering to find outliers)
 Discrepancy detection and data transformation are applied iteratively
20
Data Integration
 Data integration:
 Combines data from multiple sources into a coherent store
 Schema integration: e.g., A.cust-id ≡ B.cust-#
 Integrate metadata from different sources
 Entity identification problem:
 Identify real world entities from multiple data sources
 Other Issues
 Detecting redundancy (tuple level and attribute level)
 Detecting and resolving data value conflicts
 For the same real world entity, attribute values from different sources are different
 Possible reasons: different representations, different scales, e.g., metric vs.
British units
21
Handling Redundancy
 Redundant data occur often during integration of multiple
databases
 Object identification: The same attribute or object may have
different names in different databases
 Derivable data: One attribute may be a “derived” attribute in
another table, e.g., annual revenue
 Redundant attributes may be detected by correlation analysis
 Careful integration helps reduce/avoid redundancies and
inconsistencies and improves mining speed and quality
22
Correlation Analysis
 Correlation coefficient (also called Pearson’s product moment
coefficient)
r_A,B = Σ_{i=1..n} (a_i − Ā)(b_i − B̄) / ((n − 1) σ_A σ_B)
where n is the number of tuples, a_i and b_i are the values of A and B in the i-th tuple, Ā and B̄ are the respective means
of A and B, and σ_A and σ_B are the respective standard deviations of A and B
 If r_A,B > 0, A and B are positively correlated (A’s values increase
as B’s do). The higher the value, the stronger the correlation.
 r_A,B = 0: A and B are independent (no linear correlation); r_A,B < 0: negatively correlated
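A brief NumPy check of r_A,B on two invented attributes; the sample standard deviations (n − 1 in the denominator) match the formula above:

```python
import numpy as np

A = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
B = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
n = len(A)

# Hand-rolled Pearson correlation coefficient, following the slide's formula
r = ((A - A.mean()) * (B - B.mean())).sum() / ((n - 1) * A.std(ddof=1) * B.std(ddof=1))
print(r)                        # 0.8
print(np.corrcoef(A, B)[0, 1])  # same value from NumPy's built-in
```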
23
Correlation Analysis
 Chi-square test
χ² = Σ_{i=1..c} Σ_{j=1..r} (o_ij − e_ij)² / e_ij
c – number of distinct values (possibilities) of attribute 1; r – number of distinct values of attribute 2
o_ij – observed frequency; e_ij – expected frequency, where e_ij = count(A = a_i) × count(B = b_j) / N
Compare the χ² value with the table of critical values and decide
Degrees of freedom: (r − 1) × (c − 1)
Chi-square test
(observed counts, with expected frequencies in parentheses)
              Male       Female      Total
Fiction       250 (90)    200 (360)    450
Non-fiction    50 (210)  1000 (840)   1050
Total         300        1200         1500
24
χ² = (250 − 90)²/90 + (200 − 360)²/360 + (50 − 210)²/210 + (1000 − 840)²/840 = 507.93
Hypothesis: gender and preferred reading are independent
Degrees of freedom = (2 − 1) × (2 − 1) = 1, so the critical value at the 0.001 significance level is 10.83
Since 507.93 > 10.83, the hypothesis is rejected
Conclusion: Gender and preferred_reading are strongly correlated
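A small SciPy sketch that reproduces this worked example; correction=False disables the Yates continuity correction so the χ² value matches the hand calculation:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts: rows = fiction / non-fiction, columns = male / female
observed = np.array([[250,  200],
                     [ 50, 1000]])

chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(expected)   # [[ 90. 360.] [210. 840.]]
print(chi2, dof)  # ~507.93 with 1 degree of freedom
print(p_value)    # far below 0.001, so independence is rejected
```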
Chi-square test
Critical values of χ² by degrees of freedom (df) and P value (probability); values in the 0.05 column and beyond are significant, values to the left are non-significant.

P value  0.95   0.90   0.80   0.70   0.50   0.30   0.20    0.10    0.05    0.01    0.001
df = 1   0.004  0.02   0.06   0.15   0.46   1.07   1.64    2.71    3.84    6.64    10.83
df = 2   0.10   0.21   0.45   0.71   1.39   2.41   3.22    4.60    5.99    9.21    13.82
df = 3   0.35   0.58   1.01   1.42   2.37   3.66   4.64    6.25    7.82    11.34   16.27
df = 4   0.71   1.06   1.65   2.20   3.36   4.88   5.99    7.78    9.49    13.28   18.47
df = 5   1.14   1.61   2.34   3.00   4.35   6.06   7.29    9.24    11.07   15.09   20.52
df = 6   1.63   2.20   3.07   3.83   5.35   7.23   8.56    10.64   12.59   16.81   22.46
df = 7   2.17   2.83   3.82   4.67   6.35   8.38   9.80    12.02   14.07   18.48   24.32
df = 8   2.73   3.49   4.59   5.53   7.34   9.52   11.03   13.36   15.51   20.09   26.12
df = 9   3.32   4.17   5.38   6.39   8.34   10.66  12.24   14.68   16.92   21.67   27.88
df = 10  3.94   4.86   6.18   7.27   9.34   11.78  13.44   15.99   18.31   23.21   29.59
25
26
Data Transformation
1. Smoothing: remove noise from data – Binning, Clustering,
Regression
2. Normalization: scaled to fall within a small, specified range
 min-max normalization
 z-score normalization
 normalization by decimal scaling
3. Attribute/feature construction
 New attributes constructed from the given ones
4. Aggregation: summarization, data cube construction
5. Generalization: concept hierarchy climbing
27
1. Smoothing
 Binning
 Equal-width (distance) partitioning
 Equal-depth (frequency) partitioning
 Clustering
28
1. Smoothing
Regression and Log-Linear Models
 Linear regression: Data are modeled to fit a straight line
 Often uses the least-squares method to fit the line
 Multiple regression: allows a response variable Y to be modeled
as a linear function of multidimensional feature vector
 Log-linear model: approximates multidimensional probability
distributions
29
1. Smoothing
Regression and Log-Linear Models
 Linear regression: Y = w X + b
 Two regression coefficients, w and b, specify the line and are estimated
from the data at hand
 using the least-squares criterion on the known values Y1, Y2, …, X1, X2, …
 Multiple regression: Y = b0 + b1 X1 + b2 X2
 Many nonlinear functions can be transformed into the above
 Log-linear models:
 Approximation of joint probability distributions
 Estimate the probability of each point in a multi-dimensional space from a
smaller subset of dimensions
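A minimal NumPy sketch of least-squares fitting of Y = wX + b and of using the fitted line for smoothing; the (x, y) points are invented:

```python
import numpy as np

# Invented (x, y) pairs lying roughly on y = x + 1 with a little noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 6.0])

w, b = np.polyfit(x, y, deg=1)   # least-squares estimates of slope and intercept
y_smoothed = w * x + b           # replace noisy y values with fitted values

print(round(w, 3), round(b, 3))
print(y_smoothed)
```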
30
2. Normalization
 Min-max normalization: maps v to the range [new_minA, new_maxA]
   v′ = ((v − minA) / (maxA − minA)) × (new_maxA − new_minA) + new_minA
 Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to
   ((73,600 − 12,000) / (98,000 − 12,000)) × (1.0 − 0) + 0 = 0.716
 Z-score normalization (μ: mean, σ: standard deviation)
   v′ = (v − μA) / σA
 Ex. Let μ = 54,000, σ = 16,000. Then 73,600 is mapped to (73,600 − 54,000) / 16,000 = 1.225
 Normalization by decimal scaling
   v′ = v / 10^j, where j is the smallest integer such that Max(|v′|) < 1
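The three normalizations above as a short NumPy sketch, checked against the $73,600 example:

```python
import numpy as np

v = np.array([12_000.0, 54_000.0, 73_600.0, 98_000.0])  # income values

# Min-max normalization to [0.0, 1.0]
v_minmax = (v - v.min()) / (v.max() - v.min()) * (1.0 - 0.0) + 0.0

# Z-score normalization (using the slide's mu = 54,000 and sigma = 16,000)
mu, sigma = 54_000.0, 16_000.0
v_zscore = (v - mu) / sigma

# Decimal scaling: divide by 10**j for the smallest j with max(|v'|) < 1 (works for this data)
j = int(np.ceil(np.log10(np.abs(v).max())))
v_decimal = v / 10 ** j

print(v_minmax[2])   # ~0.716
print(v_zscore[2])   # ~1.225
print(j, v_decimal)  # j = 5, all values scaled into (-1, 1)
```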
31
3. Attribute Construction
 New attributes are constructed from given attributes
 To improve accuracy and understanding of structure
 e.g., area constructed from height and width (product operator)
 Boolean attributes combined with an ‘and’ operator
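A tiny pandas sketch of attribute construction; the column names are illustrative assumptions:

```python
import pandas as pd

rooms = pd.DataFrame({"height": [3.0, 2.8], "width": [4.0, 5.5],
                      "has_window": [True, False], "has_door": [True, True]})

# New attributes derived from the given ones
rooms["area"] = rooms["height"] * rooms["width"]                    # product of two numeric attributes
rooms["window_and_door"] = rooms["has_window"] & rooms["has_door"]  # 'and' of two Boolean attributes

print(rooms)
```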
32
4. Aggregation
 Data Cubes
 Store Multi-dimensional aggregated information
 Each cell holds an aggregate data value
 Concept hierarchies – allow analysis at multiple
levels
 Provide fast access to pre-computed summarized
data
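As a rough approximation (not a full cube implementation), a pandas pivot_table can pre-compute one cuboid's aggregate cells, with margins=True supplying the "sum" cells; the sales figures are invented:

```python
import pandas as pd

sales = pd.DataFrame({
    "country": ["U.S.A", "U.S.A", "Canada", "Canada", "Mexico"],
    "product": ["TV", "PC", "TV", "VCR", "TV"],
    "quarter": ["1Qtr", "2Qtr", "1Qtr", "3Qtr", "4Qtr"],
    "amount":  [1000, 1500, 400, 300, 250],
})

# One 2-D "cuboid": product x country, with marginal sums along each dimension
cube_2d = pd.pivot_table(sales, values="amount", index="product",
                         columns="country", aggfunc="sum",
                         fill_value=0, margins=True, margins_name="sum")
print(cube_2d)
```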
33
A Sample 3D Data Cube
[Figure: a 3-D data cube with dimensions Date (1Qtr–4Qtr), Product (TV, VCR, PC), and Country (U.S.A., Canada, Mexico). Each cell holds an aggregate sales value (e.g., total annual sales of TVs in the U.S.A.), and “sum” cells aggregate along each dimension.]
34
4. Aggregation
Data Cube Aggregation
 The lowest level of abstraction
 Base Cuboid
 Highest level of abstraction
 Apex Cuboid
 Multiple levels
 Lattice of Cuboids
35
Cube: A Lattice of Cuboids
[Figure: the lattice of cuboids over the dimensions time, item, location, supplier]
 0-D (apex) cuboid: all
 1-D cuboids: time; item; location; supplier
 2-D cuboids: time,item; time,location; time,supplier; item,location; item,supplier; location,supplier
 3-D cuboids: time,item,location; time,item,supplier; time,location,supplier; item,location,supplier
 4-D (base) cuboid: time, item, location, supplier
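A short sketch that enumerates this lattice of cuboids as all subsets of the dimension set:

```python
from itertools import combinations

dimensions = ["time", "item", "location", "supplier"]

# 0-D (apex) up to 4-D (base): every subset of the dimensions is one cuboid
for k in range(len(dimensions) + 1):
    cuboids = [",".join(c) or "all" for c in combinations(dimensions, k)]
    print(f"{k}-D cuboids:", cuboids)
```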
36
5. Generalization - Concept hierarchy
 Concept hierarchy formation
 Recursively reduce the data by collecting and replacing low
level concepts (such as numeric values for age) by higher
level concepts (such as young, middle-aged, or senior)
 Detail is lost, but the generalized data is more meaningful
 Mining becomes easier
 Several concept hierarchies can be defined for the
same attribute
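A minimal sketch of climbing one level of a concept hierarchy for age with pd.cut; the young/middle-aged/senior cut-offs are assumptions for illustration:

```python
import pandas as pd

ages = pd.Series([19, 25, 42, 58, 67, 73])

# Replace low-level numeric ages with higher-level concepts; the boundaries are assumed.
age_concepts = pd.cut(ages, bins=[0, 30, 60, 120],
                      labels=["young", "middle-aged", "senior"])
print(age_concepts.tolist())  # ['young', 'young', 'middle-aged', 'middle-aged', 'senior', 'senior']
```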