3. Chapter 4: Data Cube Computation
and Data Generalization
Efficient Computation of Data Cubes
Exploration and Discovery in Multidimensional
Databases
Attribute-Oriented Induction ─ An Alternative
Data Generalization Method
4. Efficient Computation of Data Cubes
Preliminary cube computation tricks (Agarwal et al.’96)
Computing full/iceberg cubes: 3 methodologies
Top-Down: Multi-Way array aggregation (Zhao, Deshpande &
Naughton, SIGMOD’97)
Bottom-Up:
Bottom-up computation: BUC (Beyer & Ramakrishnan,
SIGMOD’99)
H-cubing technique (Han, Pei, Dong & Wang: SIGMOD’01)
Integrating Top-Down and Bottom-Up:
Star-cubing algorithm (Xin, Han, Li & Wah: VLDB’03)
High-dimensional OLAP: A Minimal Cubing Approach (Li et al., VLDB’04)
Computing alternative kinds of cubes:
Partial cube, closed cube, approximate cube, etc.
5. Preliminary Tricks (Agarwal et al. VLDB’96)
Sorting, hashing, and grouping operations are applied to the dimension attributes in order to reorder and cluster related tuples
Aggregates may be computed from previously computed aggregates, rather than from the base fact table:
Smallest-child: computing a cuboid from the smallest, previously computed cuboid (see the sketch below)
Cache-results: caching results of a cuboid from which other cuboids are computed, to reduce disk I/Os
Amortize-scans: computing as many cuboids as possible at the same time to amortize disk reads
Share-sorts: sharing sorting costs across multiple cuboids when a sort-based method is used
Share-partitions: sharing the partitioning cost across multiple cuboids when hash-based algorithms are used
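As a concrete illustration of smallest-child, here is a minimal Python sketch (the toy data and dictionary representation are ours, not from the slides): cuboid AB is rolled up from the previously computed, smaller cuboid ABC instead of rescanning the base fact table.

```python
from collections import defaultdict

# Hypothetical toy cuboid ABC, {(a, b, c): count}, computed in an earlier pass
abc = {("a1", "b1", "c1"): 4, ("a1", "b1", "c2"): 2, ("a2", "b1", "c1"): 5}

def rollup(cuboid, keep):
    """Aggregate a cuboid over the dimensions NOT in `keep` --
    smallest-child: reuse a prior aggregate, not the base fact table."""
    out = defaultdict(int)
    for key, cnt in cuboid.items():
        out[tuple(key[i] for i in keep)] += cnt
    return dict(out)

print(rollup(abc, keep=(0, 1)))   # AB from ABC: {('a1','b1'): 6, ('a2','b1'): 5}
```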
6. Multi-Way Array Aggregation
Array-based “bottom-up” algorithm
Using multi-dimensional chunks
No direct tuple comparisons
Simultaneous aggregation on multiple dimensions
Intermediate aggregate values are re-used for computing ancestor cuboids
Cannot do Apriori pruning: no iceberg optimization
[Figure: cuboid lattice — all; A, B, C; AB, AC, BC; ABC]
7. Multi-way Array Aggregation for Cube
Computation (MOLAP)
Partition arrays into chunks (a small subcube which fits in memory).
Compressed sparse array addressing: (chunk_id, offset)
Compute aggregates in “multiway” by visiting cube cells in an order that minimizes the number of times each cell is visited, reducing memory access and storage costs.
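A small sketch of the (chunk_id, offset) addressing above, assuming row-major chunking; the cardinalities, chunk counts, and cell coordinate below are illustrative only.

```python
def chunk_address(coord, card, chunks):
    """Map a cell coordinate to (chunk_id, offset) under row-major
    chunking: card[i] is dimension i's cardinality, chunks[i] is the
    number of chunks dimension i is split into."""
    chunk_id, offset = 0, 0
    for c, n, k in zip(coord, card, chunks):
        size = n // k                         # cells per chunk along this dim
        chunk_id = chunk_id * k + c // size   # which chunk the cell falls in
        offset = offset * size + c % size     # position inside that chunk
    return chunk_id, offset

# e.g. a 40 x 400 x 4000 array split 4 ways per dimension (assumed numbers)
print(chunk_address((11, 205, 3999), (40, 400, 4000), (4, 4, 4)))  # (27, 105999)
```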
[Figure: a 3-D array for dimensions A, B, C, each partitioned into 4 chunks (a0–a3, b0–b3, c0–c3); the 64 chunks are numbered 1–64 in row-major order]
What is the best traversal order to do multi-way aggregation?
10. Multi-Way Array Aggregation for Cube Computation (Cont.)
Method: the planes should be sorted and computed according to their size in ascending order
Idea: keep the smallest plane in main memory; fetch and compute only one chunk at a time for the largest plane
Limitation of the method: it works well only for a small number of dimensions
If there are a large number of dimensions, “top-down” computation and iceberg cube computation methods can be explored
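A sketch of that memory argument, under assumed cardinalities |A| = 40, |B| = 400, |C| = 4000 with 4 chunks per dimension: scanning with the smallest dimension varying fastest keeps the smallest plane (AB) whole in memory, one chunk-row of the AC plane, and only a single chunk of the largest plane (BC).

```python
from itertools import permutations

def memory_needed(card, k):
    """Aggregate memory for one traversal order of a 3-D array.
    card = cardinalities in scan order (fastest-varying first); k =
    chunks per dimension. Holds: the full plane of the two fastest
    dims, one chunk-row of the (fastest, slowest) plane, and a single
    chunk of the plane of the two slowest dims."""
    a, b, c = card
    return a * b + a * (c // k) + (b // k) * (c // k)

cards = {"A": 40, "B": 400, "C": 4000}   # assumed cardinalities
best = min(permutations(cards),
           key=lambda p: memory_needed([cards[d] for d in p], 4))
print(best, memory_needed([cards[d] for d in best], 4))  # ('A','B','C') 156000
```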
11. Bottom-Up Computation (BUC)
BUC (Beyer & Ramakrishnan, SIGMOD’99)
Bottom-up cube computation (note: top-down in our view!)
Divides dimensions into partitions and facilitates iceberg pruning:
If a partition does not satisfy min_sup, its descendants can be pruned
If minsup = 1 ⇒ compute the full CUBE!
No simultaneous aggregation
[Figure: cuboid lattice from all down to ABCD, annotated with BUC’s processing order — 1 all; 2 A, 10 B, 14 C, 16 D; 3 AB, 7 AC, 9 AD, 11 BC, 13 BD, 15 CD; 4 ABC, 6 ABD, 8 ACD, 12 BCD; 5 ABCD]
12. BUC: Partitioning
Usually, entire data set can’t
fit in main memory
Sort distinct values, partition into blocks that fit
Continue processing
Optimizations
Partitioning
External Sorting, Hashing, Counting Sort
Ordering dimensions to encourage pruning
Cardinality, Skew, Correlation
Collapsing duplicates
Can’t do holistic aggregates anymore!
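A compact sketch of BUC’s recursive partition-and-prune (count measure only, toy data of our own; the published algorithm’s counting sort and output details are omitted):

```python
from collections import defaultdict

def buc(tuples, dims, minsup, prefix=()):
    """Emit (cell, count) for every cell meeting minsup. tuples: list of
    dimension-value tuples; dims: dimension indices still available.
    Recursion into a partition stops once its count fails minsup."""
    yield prefix, len(tuples)
    for pos, d in enumerate(dims):
        parts = defaultdict(list)
        for t in tuples:
            parts[t[d]].append(t)             # partition on dimension d
        for val, part in parts.items():
            if len(part) >= minsup:           # iceberg pruning
                yield from buc(part, dims[pos + 1:], minsup,
                               prefix + ((d, val),))

rows = [("a1","b1","c1"), ("a1","b1","c2"), ("a1","b2","c1"), ("a2","b1","c1")]
for cell, cnt in buc(rows, dims=(0, 1, 2), minsup=2):
    print(cell, cnt)
```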
13. H-Cubing: Using H-Tree Structure
Bottom-up computation
Exploring an H-tree structure
If the current computation of an H-tree cannot pass min_sup, do not proceed further (pruning)
No simultaneous aggregation
[Figure: cuboid lattice from all down to ABCD]
14. H-tree: A Prefix Hyper-tree
[Figure: an H-tree. The root branches on cust_grp (edu, hhd, bus), then month (Jan, Mar, Jan, Feb), then city (Tor, Van, Tor, Mon); each leaf carries quant-info (e.g., Sum: 1765, Cnt: 2, plus bins). A header table lists every attribute value (Edu — Sum: 2285, Hhd, Bus, Jan, Feb, …, Tor, Van, Mon, …) with its quant-info and a side-link threading its occurrences in the tree.]

Base table:
Month  City  Cust_grp  Prod     Cost  Price
Jan    Tor   Edu       Printer   500   485
Jan    Tor   Hhd       TV        800  1200
Jan    Tor   Edu       Camera   1160  1280
Feb    Mon   Bus       Laptop   1500  2500
Mar    Van   Edu       HD        540   520
…      …     …         …         …     …
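A minimal sketch of building such a tree with quant-info and a header table of side-links, over three rows of the base table above; the attribute order (cust_grp, month, city) and the class layout are our assumptions.

```python
class Node:
    def __init__(self, val):
        self.val, self.children = val, {}
        self.sum, self.cnt = 0.0, 0          # quant-info kept at the node

def build_htree(rows, order, header):
    """rows: dicts of dimension values plus a 'price' measure. order:
    dimension names, root side first. header: value -> list of nodes,
    a simplified stand-in for the side-link chains."""
    root = Node("root")
    for r in rows:
        node = root
        for dim in order:
            v = r[dim]
            if v not in node.children:
                node.children[v] = Node(v)
                header.setdefault(v, []).append(node.children[v])  # side-link
            node = node.children[v]
            node.sum += r["price"]; node.cnt += 1                  # quant-info
    return root

rows = [{"cust_grp": "Edu", "month": "Jan", "city": "Tor", "price": 485},
        {"cust_grp": "Hhd", "month": "Jan", "city": "Tor", "price": 1200},
        {"cust_grp": "Edu", "month": "Jan", "city": "Tor", "price": 1280}]
header = {}
build_htree(rows, ("cust_grp", "month", "city"), header)
print(header["Edu"][0].sum, header["Edu"][0].cnt)   # 1765.0 2, as on the slide
```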
15. H-Cubing: Computing Cells Involving Dimension City
From (*, *, Tor) to (*, Jan, Tor)
[Figure: the H-tree from the previous slide. A new header table HTor, built by following the Tor side-link chain, lists the attribute values (Edu, Hhd, Bus, Jan, Feb, …) with their quant-info and side-links, so cells involving Tor can be computed without touching the rest of the tree.]
16. Computing Cells Involving Month But No City
1. Roll up quant-info root
2. Compute cells involving
Edu. Hhd. Bus.
month but no city
Attr. Val. Quant-Info Side-link
Edu. Sum:2285 … Jan. Mar. Jan. Feb.
Hhd. …
Bus. …
Q.I. Q.I. Q.I. Q.I.
… …
Jan. …
Feb. …
Mar. …
… …
Tor. …
Tor. Van. Tor. Mont.
Van. … Top-k OK mark: if Q.I. in a child passes
Mont. … top-k avg threshold, so does its parents.
… … No binning is needed!
17. Computing Cells Involving Only Cust_grp
Check the header table directly
[Figure: the H-tree with quant-info rolled up to the cust_grp level; the header-table entries (Edu — Sum: 2285, Hhd, Bus, Jan, Feb, Mar, …, Tor, Van, Mon) answer these cells directly]
18. Star-Cubing: An Integrating Method
Integrate the top-down and bottom-up methods
Explore shared dimensions
E.g., dimension A is the shared dimension of ACD and AD
ABD/AB means cuboid ABD has shared dimensions AB
Allows for shared computations
E.g., cuboid AB is computed simultaneously with ABD
Aggregate in a top-down manner, but with a bottom-up sub-layer underneath that allows Apriori pruning
Shared dimensions grow in bottom-up fashion
[Figure: the annotated cuboid tree — ABCD/all; ABC/ABC, ABD/AB, ACD/A, BCD; AC/AC, AD/A, BC/BC, BD/B, CD/D; C/C, D/D]
19. Iceberg Pruning in Shared Dimensions
Anti-monotonic property of shared dimensions
If the measure is anti-monotonic, and if the
aggregate value on a shared dimension does not
satisfy the iceberg condition, then all the cells
extended from this shared dimension cannot
satisfy the condition either
Intuition: if we can compute the shared dimensions
before the actual cuboid, we can use them to do
Apriori pruning
Problem: how to prune while still aggregating simultaneously on multiple dimensions?
20. Cell Trees
Use a tree structure similar
to H-tree to represent
cuboids
Collapses common prefixes
to save memory
Keep count at node
Traverse the tree to retrieve
a particular tuple
21. Star Attributes and Star Nodes
Intuition: if a single-dimensional aggregate on an attribute value p does not satisfy the iceberg condition, it is useless to distinguish such values during the iceberg computation
E.g., b2, b3, b4, c1, c2, c4, d1, d2, d3
Solution: replace such attributes by *. Such attributes are star attributes, and the corresponding nodes in the cell tree are star nodes

A   B   C   D   Count
a1  b1  c1  d1  1
a1  b1  c4  d3  1
a1  b2  c2  d2  1
a2  b3  c3  d4  1
a2  b4  c3  d4  1
22. Example: Star Reduction
Suppose minsup = 2. Perform one-dimensional aggregation; replace attribute values whose count < 2 with *, and collapse all *'s together:

A   B   C   D   Count
a1  b1  *   *   1
a1  b1  *   *   1
a1  *   *   *   1
a2  *   c3  d4  1
a2  *   c3  d4  1

Merging identical generalized rows gives:

A   B   C   D   Count
a1  b1  *   *   2
a1  *   *   *   1
a2  *   c3  d4  2

With regard to the iceberg computation, this new table is a lossless compression of the original table.
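The same reduction as a runnable sketch (our own representation; the real algorithm also records the replaced values in a star table for lookup):

```python
from collections import Counter, defaultdict

def star_reduce(rows, minsup):
    """Values whose one-dimensional count is below minsup become '*';
    identical generalized rows are then merged with summed counts."""
    freq = [Counter(r[d] for r in rows) for d in range(len(rows[0]))]
    merged = defaultdict(int)
    for r in rows:
        key = tuple(v if freq[d][v] >= minsup else "*" for d, v in enumerate(r))
        merged[key] += 1
    return dict(merged)

rows = [("a1","b1","c1","d1"), ("a1","b1","c4","d3"), ("a1","b2","c2","d2"),
        ("a2","b3","c3","d4"), ("a2","b4","c3","d4")]
print(star_reduce(rows, minsup=2))
# {('a1','b1','*','*'): 2, ('a1','*','*','*'): 1, ('a2','*','c3','d4'): 2}
```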
23. Star Tree
Given the new compressed table, it is possible to construct the corresponding cell tree — called a star tree
Keep a star table at the side for easy lookup of star attributes
The star tree is a lossless compression of the original cell tree
24. Star-Cubing Algorithm — DFS on Lattice Tree
[Figure: depth-first search over the lattice tree. The base star tree (root: 5 → a1: 3, a2: 2 → b*: 1, b1: 2, b*: 2 → c*: 1, c*: 2, c3: 2 → d*: 1, d*: 2, d4: 2) is traversed while derived trees such as BCD: 5 (b*: 3, b1: 2; under b*: c*: 1, c3: 2; under b1: c*: 2; leaves d*: 1, d4: 2, d*: 2) are grown alongside.]
25. Multi-Way Aggregation
[Figure: simultaneous aggregation while traversing the base ABCD tree — the BCD, ACD/A, ABD/AB, and ABC/ABC trees are grown in parallel]
27. Multi-Way Star-Tree Aggregation
Start depth-first search at the root of the base star tree
At each new node in the DFS, create the corresponding star trees that are descendants of the current tree according to the integrated traversal ordering
E.g., in the base tree, when DFS reaches a1, the ACD/A tree is created
When DFS reaches b*, the ABD/AB tree is created
The counts in the base tree are carried over to the new trees
28. Multi-Way Aggregation (2)
When DFS reaches a leaf node (e.g., d*), start backtracking
On every backtracking branch, the counts in the corresponding trees are output, the tree is destroyed, and the node in the base tree is destroyed
Example
When traversing from d* back to c*, the a1b*c*/a1b*c* tree is output and destroyed
When traversing from c* back to b*, the a1b*D/a1b* tree is output and destroyed
When at b*, jump to b1 and repeat a similar process
29. The Curse of Dimensionality
None of the previous cubing methods can handle high dimensionality!
A database of 600k tuples, where each dimension has cardinality 100 and a Zipf skew of 2.
30. Motivation of High-D OLAP
Challenge to current cubing methods:
The “curse of dimensionality” problem
Iceberg cube and compressed cubes: only delay the
inevitable explosion
Full materialization: still significant overhead in
accessing results on disk
High-D OLAP is needed in applications
Science and engineering analysis
Bio-data analysis: thousands of genes
Statistical surveys: hundreds of variables
31. Fast High-D OLAP with Minimal Cubing
Observation: OLAP occurs only on a small subset of dimensions at a time
Semi-Online Computational Model
1. Partition the set of dimensions into shell fragments
2. Compute data cubes for each shell fragment while retaining inverted indices or value-list indices
3. Given the pre-computed fragment cubes, dynamically compute cube cells of the high-dimensional data cube online
32. Properties of Proposed Method
Partitions the data vertically
Reduces high-dimensional cube into a set of lower
dimensional cubes
Online reconstruction of the original high-dimensional space
Lossless reduction
Offers tradeoffs between the amount of pre-processing
and the speed of online computation
33. Example Computation
Let the cube aggregation function be count

tid  A   B   C   D   E
1    a1  b1  c1  d1  e1
2    a1  b2  c1  d2  e1
3    a1  b2  c1  d1  e2
4    a2  b1  c1  d1  e2
5    a2  b1  c1  d1  e3

Divide the 5 dimensions into 2 shell fragments: (A, B, C) and (D, E)
34. 1-D Inverted Indices
Build a traditional inverted index or RID list

Attribute Value  TID List    List Size
a1               1 2 3       3
a2               4 5         2
b1               1 4 5       3
b2               2 3         2
c1               1 2 3 4 5   5
d1               1 3 4 5     4
d2               2           1
e1               1 2         2
e2               3 4         2
e3               5           1
35. Shell Fragment Cubes
Generalize the 1-D inverted indices to multi-dimensional ones in the data cube sense

Cell    Intersection        TID List  List Size
a1 b1   {1 2 3} ∩ {1 4 5}   1         1
a1 b2   {1 2 3} ∩ {2 3}     2 3       2
a2 b1   {4 5} ∩ {1 4 5}     4 5       2
a2 b2   {4 5} ∩ {2 3}       ∅         0
36. Shell Fragment Cubes (2)
Compute all cuboids for data cubes ABC and DE while
retaining the inverted indices
For example, shell fragment cube ABC contains 7
cuboids:
A, B, C
AB, AC, BC
ABC
This completes the offline computation stage
37. Shell Fragment Cubes (3)
Given a database of T tuples, D dimensions, and shell fragment size F, the fragment cubes’ space requirement is O(T · ⌈D/F⌉ · (2^F − 1)).
For F < 5, the growth is sub-linear; e.g., D = 60 and F = 3 give 20 fragments of 2^3 − 1 = 7 cuboids each, i.e., 140 inverted-index cuboids of at most T entries apiece.
38. Shell Fragment Cubes (4)
Shell fragments do not have to be disjoint
Fragment groupings can be arbitrary to allow for
maximum online performance
Known common combinations (e.g., <city, state>)
should be grouped together.
Shell fragment sizes can be adjusted for optimal
balance between offline and online computation
39. ID_Measure Table
If measures other than count are present, store them in an ID_measure table separate from the shell fragments

tid  count  sum
1    5      70
2    3      10
3    8      20
4    5      40
5    2      30
40. The Frag-Shells Algorithm
1. Partition the set of dimensions (A1, …, An) into a set of k fragments (P1, …, Pk)
2. Scan the base table once and do the following:
3.   insert <tid, measure> into the ID_measure table
4.   for each attribute value ai of each dimension Ai
5.     build inverted index entry <ai, tidlist>
6. For each fragment partition Pi
7.   build local fragment cube Si by intersecting tid-lists in bottom-up fashion
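A runnable sketch of this offline stage on the toy 5-dimensional table from the earlier example, with fragments (A, B, C) and (D, E); the helper names are ours, and the ID_measure step is omitted since the measure here is count.

```python
from collections import defaultdict
from itertools import combinations, product

rows = [("a1","b1","c1","d1","e1"), ("a1","b2","c1","d2","e1"),
        ("a1","b2","c1","d1","e2"), ("a2","b1","c1","d1","e2"),
        ("a2","b1","c1","d1","e3")]

# One scan of the base table: 1-D inverted indices, (dim, value) -> TID set
inv = defaultdict(set)
for tid, row in enumerate(rows, start=1):
    for dim, val in enumerate(row):
        inv[(dim, val)].add(tid)

def fragment_cube(dims):
    """All cuboids of one shell fragment, built by intersecting the 1-D
    TID lists bottom-up; cells with empty TID lists are dropped."""
    cube = {}
    for k in range(1, len(dims) + 1):
        for ds in combinations(dims, k):
            for vals in product(*({r[d] for r in rows} for d in ds)):
                tids = set.intersection(*(inv[(d, v)] for d, v in zip(ds, vals)))
                if tids:
                    cube[tuple(zip(ds, vals))] = tids
    return cube

shells = {"ABC": fragment_cube((0, 1, 2)), "DE": fragment_cube((3, 4))}
print(shells["ABC"][((0, "a1"), (1, "b2"))])   # {2, 3}, as in the earlier table
```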
41. Frag-Shells (2)
[Figure: the base table over dimensions A B C D E F … is split vertically; fragment ABC feeds the ABC cube and fragment DEF feeds the DEF cube, whose cuboids (D, E, F, DE, DF, EF, DEF) keep tuple-ID lists]

DE Cuboid:
Cell    Tuple-ID List
d1 e1   {1, 3, 8, 9}
d1 e2   {2, 4, 6, 7}
d2 e1   {5, 10}
…       …
42. Online Query Computation
A query has the general form ⟨a1, a2, …, an : M⟩
Each ai has 3 possible values:
1. Instantiated value
2. Aggregate * function
3. Inquire ? function
For example, ⟨3 ? ? * 1 : count⟩ returns a 2-D data cube
43. Online Query Computation (2)
Given the fragment cubes, process a query as follows:
1. Divide the query into fragments, matching the shell partition
2. Fetch the corresponding TID list for each fragment from the fragment cube
3. Intersect the TID lists from each fragment to construct the instantiated base table
4. Compute the data cube using the base table with any cubing algorithm
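Continuing the sketch above (it reuses rows, inv, and shells), a toy version of these four steps for the query ⟨a2, b1, *, *, ? : count⟩, i.e., A and B instantiated and E inquired:

```python
def online_query(instantiated, inquire):
    """instantiated: {dim: value}; inquire: dims to group by (count
    measure). Fetch the TID list of the instantiated cell from its
    fragment cube, then cube over the small instantiated base table."""
    key = tuple(sorted(instantiated.items()))
    tids = shells["ABC"][key]                 # steps 1-3: one fragment lookup
    counts = defaultdict(int)
    for tid in tids:                          # step 4: aggregate inquired dims
        counts[tuple(rows[tid - 1][d] for d in inquire)] += 1
    return dict(counts)

print(online_query({0: "a2", 1: "b1"}, inquire=(4,)))
# {('e2',): 1, ('e3',): 1} -- a 1-D cube over the inquired dimension E
```

A query touching both fragments would fetch one TID list per fragment and intersect them (step 3) before aggregating.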
44. Online Query Computation (3)
[Figure: for a high-dimensional table over A B C D E F G H I J K L M N …, the instantiated dimensions select TID lists whose intersection yields the instantiated base table, from which the online cube is computed]
45. Experiment: Size vs. Dimensionality (50 and 100 cardinality)
(50-C): 10^6 tuples, 0 skew, 50 cardinality, fragment size 3.
(100-C): 10^6 tuples, 2 skew, 100 cardinality, fragment size 2.
49. Experiment: I/O vs. # of Instantiated Dimensions
10^6 tuples, 10 dimensions, 10 cardinality, 0 skew, fragment size 1, 7 total relevant dimensions.
50. Experiments on Real World Data
UCI Forest CoverType data set
54 dimensions, 581K tuples
Shell fragments of size 2 took 33 seconds and 325 MB to compute
3-D subquery with 1 instantiated dimension: 85 ms to 1.4 sec
Longitudinal Study of Vocational Rehab. Data
24 dimensions, 8,818 tuples
Shell fragments of size 3 took 0.9 seconds and 60 MB to compute
5-D query with 0 instantiated dimensions: 227 ms to 2.6 sec
51. Comparisons to Related Work
[Harinarayan96] computes low-dimensional cuboids by
further aggregation of high-dimensional cuboids.
Opposite of our method’s direction.
Inverted indexing structures [Witten99] focus on single
dimensional data or multi-dimensional data with no
aggregation.
Tree-stripping [Berchtold00] uses similar vertical
partitioning of database but no aggregation.
52. Further Implementation Considerations
Incremental Update:
Append more TIDs to inverted list
Add <tid: measure> to ID_measure table
Incrementally adding new dimensions
Form new inverted list and add new fragments
Bitmap indexing
May further improve space usage and speed
Inverted index compression
Store as d-gaps
Explore more IR compression methods
53. Chapter 4: Data Cube Computation
and Data Generalization
Efficient Computation of Data Cubes
Exploration and Discovery in Multidimensional
Databases
Attribute-Oriented Induction ─ An Alternative
Data Generalization Method
54. Computing Cubes with Non-Antimonotonic Iceberg Conditions
Most cubing algorithms cannot compute cubes with non-antimonotonic iceberg conditions efficiently
Example:
CREATE CUBE Sales_Iceberg AS
SELECT month, city, cust_grp, AVG(price), COUNT(*)
FROM Sales_Infor
CUBE BY month, city, cust_grp
HAVING AVG(price) >= 800 AND COUNT(*) >= 50
We need to study how to push such constraints into the cubing process
55. Non-Anti-Monotonic Iceberg Condition
Anti-monotonic: if a process fails a condition, continued processing will still fail
The cubing query with avg is non-anti-monotonic!
(Mar, *, *, 600, 1800) fails the HAVING clause
(Mar, *, Bus, 1300, 360) passes the clause

Month  City  Cust_grp  Prod     Cost  Price
Jan    Tor   Edu       Printer   500   485
Jan    Tor   Hhd       TV        800  1200
Jan    Tor   Edu       Camera   1160  1280
Feb    Mon   Bus       Laptop   1500  2500
Mar    Van   Edu       HD        540   520
…      …     …         …         …     …

CREATE CUBE Sales_Iceberg AS
SELECT month, city, cust_grp, AVG(price), COUNT(*)
FROM Sales_Infor
CUBE BY month, city, cust_grp
HAVING AVG(price) >= 800 AND COUNT(*) >= 50
56. From Average to Top-k Average
Let (*, Van, *) cover 1,000 records
Avg(price) is the average price of those 1,000 sales
Avg50(price) is the average price of the top-50 sales (top-50 according to the sales price)
Top-k average is anti-monotonic
If the top 50 sales in Van. have avg(price) <= 800, then the top 50 deals in Van. during Feb. must also have avg(price) <= 800
57. Binning for Top-k Average
Computing top-k avg is costly with a large k
Binning idea (for testing Avg50(c) >= 800):
Large value collapsing: use a sum and a count to summarize records with measure >= 800
If that count reaches k, no need to check “small” records
Small value binning: a group of bins
One bin covers a range, e.g., 600~800, 400~600, etc.
Register a sum and a count for each bin
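A sketch of the resulting estimate, matching the worked example on the next slide: fully consumed bins contribute their exact sums, while a partially consumed bin contributes its upper bound per record, so the estimate never understates the true top-k average and is safe for pruning. The bin layout is assumed.

```python
def approx_topk_avg(bins, k):
    """Optimistic avg_k estimate from (lo, hi, sum, count) bin summaries
    ordered from the highest range down."""
    total, taken = 0.0, 0
    for lo, hi, s, cnt in bins:
        use = min(cnt, k - taken)
        total += s if use == cnt else use * hi   # partial bin: optimistic bound
        taken += use
        if taken == k:
            break
    return total / k

bins = [(800, float("inf"), 28000, 20),   # the "measure >= 800" summary
        (600, 800, 10600, 15),
        (400, 600, 15200, 30)]
print(approx_topk_avg(bins, 50))          # (28000 + 10600 + 15*600)/50 = 952.0
```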
58. Computing Approximate Top-k Average
Suppose for (*, Van, *), we have:

Range      Sum    Count
Over 800   28000  20
600~800    10600  15
400~600    15200  30
…          …      …

Approximate avg50() = (28000 + 10600 + 600 × 15) / 50 = 952 (the top 50 records span the three bins shown)
The cell may pass the HAVING clause
59. Weakened Conditions Facilitate Pushing
Accumulate quant-info for cells to compute average iceberg cubes efficiently
Three pieces: sum, count, top-k bins
Use top-k bins to estimate/prune descendants
Use sum and count to consolidate the current cell

From weakest to strongest condition:
Approximate avg50() — anti-monotonic, can be computed efficiently
Real avg50() — anti-monotonic, but computationally costly
avg() — not anti-monotonic
60. Computing Iceberg Cubes with Other Complex Measures
Computing other complex measures
Key point: find a function which is weaker but ensures certain anti-monotonicity
Examples
Avg() ≤ v: avg_k(c) ≤ v (bottom-k avg)
Avg() ≥ v only (no count): max(price) ≥ v
Sum(profit) (profit can be negative): p_sum(c) ≥ v if p_count(c) ≥ k; otherwise, sum_k(c) ≥ v
Others: conjunctions of multiple conditions
61. Compressed Cubes: Condensed or Closed Cubes
W. Wang, H. Lu, J. Feng, and J. X. Yu. Condensed Cube: An Effective Approach to Reducing Data Cube Size. ICDE’02
Iceberg cube cannot solve all the problems
Suppose 100 dimensions, only 1 base cell with count = 10. How many aggregate (non-base) cells are there if count >= 10? (All 2^100 − 1 of them.)
Condensed cube
Only need to store one cell (a1, a2, …, a100, 10), which represents all the corresponding aggregate cells
Advantages
Fully precomputed cube without compression
Efficient computation of the minimal condensed cube
Closed cube
D. Xin, J. Han, Z. Shao, and H. Liu. “C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking”. ICDE’06
62. Chapter 4: Data Cube Computation
and Data Generalization
Efficient Computation of Data Cubes
Exploration and Discovery in Multidimensional
Databases
Attribute-Oriented Induction ─ An Alternative
Data Generalization Method
63. Discovery-Driven Exploration of Data Cubes
Hypothesis-driven: exploration by user, huge search space
Discovery-driven (Sarawagi et al. ’98)
Effective navigation of large OLAP data cubes
pre-compute measures indicating exceptions, guide
user in the data analysis, at all levels of aggregation
Exception: significantly different from the value
anticipated, based on a statistical model
Visual cues such as background color are used to
reflect the degree of exception of each cell
64. Kinds of Exceptions and their Computation
Parameters
SelfExp: surprise of cell relative to other cells at same
level of aggregation
InExp: surprise beneath the cell
PathExp: surprise beneath cell for each drill-down
path
Computation of exception indicators (model fitting and computing SelfExp, InExp, and PathExp values) can be overlapped with cube construction
Exceptions themselves can be stored, indexed, and retrieved like precomputed aggregates
66. Complex Aggregation at Multiple
Granularities: Multi-Feature Cubes
Multi-feature cubes (Ross, et al. 1998): Compute complex queries
involving multiple dependent aggregates at multiple granularities
Ex. Grouping by all subsets of {item, region, month}, find the
maximum price in 1997 for each group, and the total sales among all
maximum price tuples
select item, region, month, max(price), sum(R.sales)
from purchases
where year = 1997
cube by item, region, month: R
such that R.price = max(price)
Continuing the last example: among the max-price tuples, find the min and max shelf life, and find the fraction of the total sales due to tuples that have min shelf life within the set of all max-price tuples
67. Cube-Gradient (Cubegrade)
Analysis of changes of sophisticated measures in multi-
dimensional spaces
Query: changes of average house price in Vancouver
in ‘00 comparing against ’99
Answer: Apts in West went down 20%, houses in
Metrotown went up 10%
Cubegrade problem by Imielinski et al.
Changes in dimensions → changes in measures
Drill-down, roll-up, and mutation
68. From Cubegrade to Multi-dimensional
Constrained Gradients in Data Cubes
Significantly more expressive than association rules
Capture trends in user-specified measures
Serious challenges
Many trivial cells in a cube → “significance constraint” to prune trivial cells
Enumerating pairs of cells → “probe constraint” to select a subset of cells to examine
Only interesting changes wanted → “gradient constraint” to capture significant changes
69. MD Constrained Gradient Mining
Significance constraint Csig: (cnt ≥ 100)
Probe constraint Cprb: (city = “Van”, cust_grp = “busi”, prod_grp = “*”)
Gradient constraint Cgrad(cg, cp): (avg_price(cg) / avg_price(cp) ≥ 1.3)
Probe cell: satisfies Cprb. Here (c4, c2) satisfies Cgrad!

Dimensions and measures:
cid  Yr  City  Cst_grp  Prd_grp  Cnt    Avg_price
c1   00  Van   Busi     PC         300  2100   (base cell)
c2   *   Van   Busi     PC        2800  1800   (aggregated cell; probe cell)
c3   *   Tor   Busi     PC        7900  2350   (sibling of c2)
c4   *   *     Busi     PC       58600  2250   (ancestor of c2)
70. Efficiently Computing Cube-Gradients
Compute probe cells using Csig and Cprb
The set of probe cells P is often very small
Use probe P and constraints to find gradients
Pushing selection deeply
Set-oriented processing for probe cells
Iceberg growing from low to high dimensionalities
Dynamic pruning probe cells during growth
Incorporating efficient iceberg cubing method
71. Chapter 4: Data Cube Computation
and Data Generalization
Efficient Computation of Data Cubes
Exploration and Discovery in Multidimensional
Databases
Attribute-Oriented Induction ─ An Alternative
Data Generalization Method
72. What is Concept Description?
Descriptive vs. predictive data mining
Descriptive mining: describes concepts or task-relevant
data sets in concise, summarative, informative,
discriminative forms
Predictive mining: Based on data and analysis,
constructs models for the database, and predicts the
trend and properties of unknown data
Concept description:
Characterization: provides a concise and succinct
summarization of the given collection of data
Comparison: provides descriptions comparing two or
more collections of data
73. Data Generalization and Summarization-based Characterization
Data generalization
A process which abstracts a large set of task-relevant data in a database from low conceptual levels to higher ones.
[Figure: a ladder of conceptual levels, 1 (lowest) through 5 (highest)]
Approaches:
Data cube approach (OLAP approach)
Attribute-oriented induction approach
74. Concept Description vs. OLAP
Similarity:
Data generalization
Presentation of data summarization at multiple levels of abstraction.
Interactive drilling, pivoting, slicing and dicing.
Differences:
Can handle complex data types of the attributes and their
aggregations
Automated desired level allocation.
Dimension relevance analysis and ranking when there are many
relevant dimensions.
Sophisticated typing on dimensions and measures.
Analytical characterization: data dispersion analysis
75. Attribute-Oriented Induction
Proposed in 1989 (KDD ‘89 workshop)
Not confined to categorical data nor particular measures
How is it done?
Collect the task-relevant data (initial relation) using a
relational database query
Perform generalization by attribute removal or attribute
generalization
Apply aggregation by merging identical, generalized
tuples and accumulating their respective counts
Interactive presentation with users
76. Basic Principles of Attribute-Oriented Induction
Data focusing: task-relevant data, including dimensions, and the result is the initial relation
Attribute-removal: remove attribute A if there is a large set of distinct values for A but (1) there is no generalization operator on A, or (2) A’s higher-level concepts are expressed in terms of other attributes
Attribute-generalization: if there is a large set of distinct values for A, and there exists a set of generalization operators on A, then select an operator and generalize A
Attribute-threshold control: typically 2-8, specified/default
Generalized relation threshold control: control the final relation/rule size
77. Attribute-Oriented Induction: Basic Algorithm
InitialRel: Query processing of task-relevant data, deriving
the initial relation.
PreGen: Based on the analysis of the number of distinct
values in each attribute, determine generalization plan for
each attribute: removal? or how high to generalize?
PrimeGen: Based on the PreGen plan, perform
generalization to the right level to derive a “prime
generalized relation”, accumulating the counts.
Presentation: User interaction: (1) adjust levels by drilling,
(2) pivoting, (3) mapping into rules, cross tabs, visualization
presentations.
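A minimal sketch of the middle two steps (generalize, then merge identical tuples and accumulate counts) on a toy relation; the concept hierarchies and the distinct-value threshold are assumptions, and attribute removal is omitted.

```python
from collections import Counter

# Assumed concept hierarchies: value -> higher-level concept
major_gen = {"CS": "Science", "Physics": "Science", "MBA": "Business"}
city_gen  = {"Vancouver": "Canada", "Montreal": "Canada", "Seattle": "Foreign"}

THRESHOLD = 2   # attribute threshold: max distinct values kept ungeneralized

def aoi(rows, hierarchies):
    """Generalize any attribute with too many distinct values by
    climbing its hierarchy, then merge identical generalized tuples."""
    cols = list(zip(*rows))
    plan = [(lambda v, h=hierarchies[i]: h.get(v, v))
            if len(set(col)) > THRESHOLD else (lambda v: v)
            for i, col in enumerate(cols)]
    return Counter(tuple(f(v) for f, v in zip(plan, row)) for row in rows)

rows = [("CS", "Vancouver"), ("CS", "Montreal"), ("Physics", "Seattle")]
print(aoi(rows, [major_gen, city_gen]))
# Counter({('CS', 'Canada'): 2, ('Physics', 'Foreign'): 1})
```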
78. Example
DMQL: Describe general characteristics of graduate
students in the Big-University database
use Big_University_DB
mine characteristics as “Science_Students”
in relevance to name, gender, major, birth_place,
birth_date, residence, phone#, gpa
from student
where status in “graduate”
Corresponding SQL statement:
Select name, gender, major, birth_place, birth_date,
residence, phone#, gpa
from student
where status in {“Msc”, “MBA”, “PhD” }
79. Class Characterization: An Example

Initial relation:
Name            Gender  Major    Birth-Place            Birth_date  Residence                 Phone #   GPA
Jim Woodman     M       CS       Vancouver, BC, Canada  8-12-76     3511 Main St., Richmond   687-4598  3.67
Scott Lachance  M       CS       Montreal, Que, Canada  28-7-75     345 1st Ave., Richmond    253-9106  3.70
Laura Lee       F       Physics  Seattle, WA, USA       25-8-70     125 Austin Ave., Burnaby  420-5232  3.83
…               …       …        …                      …           …                         …         …

Generalization plan: Name removed; Gender retained; Major generalized to {Sci, Eng, Bus}; Birth-Place to country; Birth_date to age range; Residence to city; Phone # removed; GPA to {Excl, VG, …}

Prime generalized relation:
Gender  Major    Birth_region  Age_range  Residence  GPA        Count
M       Science  Canada        20-25      Richmond   Very-good  16
F       Science  Foreign       25-30      Burnaby    Excellent  22
…       …        …             …          …          …          …

Crosstab (Gender × Birth_Region):
        Canada  Foreign  Total
M       16      14       30
F       10      22       32
Total   26      36       62
80. Presentation of Generalized Results
Generalized relation:
Relations where some or all attributes are generalized, with counts
or other aggregation values accumulated.
Cross tabulation:
Mapping results into cross tabulation form (similar to contingency
tables).
Visualization techniques:
Pie charts, bar charts, curves, cubes, and other visual forms.
Quantitative characteristic rules:
Mapping generalized results into characteristic rules with associated quantitative information, e.g.,
grad(x) ∧ male(x) ⇒ birth_region(x) = “Canada” [t: 53%] ∨ birth_region(x) = “foreign” [t: 47%]
81. Mining Class Comparisons
Comparison: Comparing two or more classes
Method:
Partition the set of relevant data into the target class and the
contrasting class(es)
Generalize both classes to the same high level concepts
Compare tuples with the same high level descriptions
Present for every tuple its description and two measures
support - distribution within single class
comparison - distribution between classes
Highlight the tuples with strong discriminant features
Relevance Analysis:
Find attributes (features) which best distinguish different classes
82. Quantitative Discriminant Rules
Cj = target class
qa = a generalized tuple; it covers some tuples of the target class, but can also cover some tuples of the contrasting classes
d-weight (range: [0, 1]):
d_weight = count(qa ∈ Cj) / Σ_{i=1}^{m} count(qa ∈ Ci)
Quantitative discriminant rule form:
∀ X, target_class(X) ⇐ condition(X) [d : d_weight]
83. Example: Quantitative Discriminant Rule

Status         Birth_country  Age_range  Gpa   Count
Graduate       Canada         25-30      Good   90
Undergraduate  Canada         25-30      Good  210

Count distribution between graduate and undergraduate students for a generalized tuple

Quantitative discriminant rule:
∀ X, graduate_student(X) ⇐ birth_country(X) = “Canada” ∧ age_range(X) = “25-30” ∧ gpa(X) = “good” [d: 30%]
where 90 / (90 + 210) = 30%
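The d-weight arithmetic as a tiny sketch:

```python
def d_weight(target_count, all_class_counts):
    """Count of the generalized tuple in the target class over its
    total count across all classes (range [0, 1])."""
    return target_count / sum(all_class_counts)

# graduate vs. undergraduate counts from the table above
print(d_weight(90, [90, 210]))   # 0.3 -> d: 30%
```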
84. Class Description
Quantitative characteristic rule
∀ X, target_class(X) ⇒ condition(X) [t : t_weight]
necessary
Quantitative discriminant rule
∀ X, target_class(X) ⇐ condition(X) [d : d_weight]
sufficient
Quantitative description rule
∀ X, target_class(X) ⇔ condition1(X) [t : w1, d : w′1] ∨ … ∨ conditionn(X) [t : wn, d : w′n]
necessary and sufficient
85. Example: Quantitative Description Rule

Location/item   TV                      Computer                 Both_items
                Count  t-wt    d-wt     Count  t-wt     d-wt     Count  t-wt   d-wt
Europe          80     25%     40%      240    75%      30%      320    100%   32%
N_Am            120    17.65%  60%      560    82.35%   70%      680    100%   68%
Both_regions    200    20%     100%     800    80%      100%     1000   100%   100%

Crosstab showing associated t-weight and d-weight values and the total number (in thousands) of TVs and computers sold at AllElectronics in 1998

Quantitative description rule for target class Europe:
∀ X, Europe(X) ⇔ (item(X) = “TV”) [t: 25%, d: 40%] ∨ (item(X) = “computer”) [t: 75%, d: 30%]
86. Summary
Efficient algorithms for computing data cubes
Multiway array aggregation
BUC
H-cubing
Star-cubing
High-D OLAP by minimal cubing
Further development of data cube technology
Discovery-driven cube
Multi-feature cubes
Cube-gradient analysis
Another generalization approach: attribute-oriented induction
87. References (I)
S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB’96
D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data warehouses. SIGMOD’97
R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. ICDE’97
K. Beyer and R. Ramakrishnan. Bottom-Up Computation of Sparse and Iceberg CUBEs. SIGMOD’99
Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang. Multi-Dimensional Regression Analysis of Time-Series Data Streams. VLDB’02
G. Dong, J. Han, J. Lam, J. Pei, and K. Wang. Mining Multi-Dimensional Constrained Gradients in Data Cubes. VLDB’01
J. Han, Y. Cai, and N. Cercone. Knowledge Discovery in Databases: An Attribute-Oriented Approach. VLDB’92
J. Han, J. Pei, G. Dong, and K. Wang. Efficient Computation of Iceberg Cubes with Complex Measures. SIGMOD’01
88. References (II)
L. V. S. Lakshmanan, J. Pei, and J. Han. Quotient Cube: How to Summarize the Semantics of a Data Cube. VLDB’02
X. Li, J. Han, and H. Gonzalez. High-Dimensional OLAP: A Minimal Cubing Approach. VLDB’04
K. Ross and D. Srivastava. Fast computation of sparse datacubes. VLDB’97
K. A. Ross, D. Srivastava, and D. Chatziantoniou. Complex aggregation at multiple granularities. EDBT’98
S. Sarawagi, R. Agrawal, and N. Megiddo. Discovery-driven exploration of OLAP data cubes. EDBT’98
G. Sathe and S. Sarawagi. Intelligent Rollups in Multidimensional OLAP Data. VLDB’01
D. Xin, J. Han, X. Li, and B. W. Wah. Star-Cubing: Computing Iceberg Cubes by Top-Down and Bottom-Up Integration. VLDB’03
D. Xin, J. Han, Z. Shao, and H. Liu. C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking. ICDE’06
W. Wang, H. Lu, J. Feng, and J. X. Yu. Condensed Cube: An Effective Approach to Reducing Data Cube Size. ICDE’02
Y. Zhao, P. M. Deshpande, and J. F. Naughton. An array-based algorithm for simultaneous multidimensional aggregates. SIGMOD’97