Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/1
Outline
• Introduction
• Background
• Distributed Database Design
➡ Fragmentation
➡ Data distribution
• Database Integration
• Semantic Data Control
• Distributed Query Processing
• Multidatabase Query Processing
• Distributed Transaction Management
• Data Replication
• Parallel Database Systems
• Distributed Object DBMS
• Peer-to-Peer Data Management
• Web Data Management
• Current Issues
Design Problem
• In the general setting:
Making decisions about the placement of data and programs across the sites of
a computer network as well as possibly designing the network itself.
• In Distributed DBMS, the placement of applications entails
➡ placement of the distributed DBMS software; and
➡ placement of the applications that run on the database
Dimensions of the Problem
The problem space has three orthogonal dimensions:
• Level of sharing: data only, or data + program
• Access pattern behavior: static or dynamic
• Level of knowledge about access pattern behavior: complete information or partial information
Distribution Design
• Top-down
➡ mostly in designing systems from scratch
➡ mostly in homogeneous systems
• Bottom-up
➡ when the databases already exist at a number of sites
Top-Down Design
Requirements analysis produces the system objectives. These drive two activities performed with user input: view design, which yields the external schemas (ES's) and access information, and conceptual design, which yields the global conceptual schema (GCS); view integration reconciles the two. Distribution design then maps the GCS and the access information to local conceptual schemas (LCS's), and physical design maps each LCS to a local internal schema (LIS).
Distribution Design Issues
Why fragment at all?
How to fragment?
How much to fragment?
How to test correctness?
How to allocate?
Information requirements?
Fragmentation
• Can't we just distribute whole relations?
• What is a reasonable unit of distribution?
➡ relation
✦ views are subsets of relations ⇒ locality of access
✦ but shipping whole relations causes extra communication
➡ fragments of relations (sub-relations)
✦ allows concurrent execution of a number of transactions that access different portions of a relation
✦ views that cannot be defined on a single fragment will require extra processing
✦ semantic data control (especially integrity enforcement) is more difficult
Fragmentation Alternatives –
Horizontal
PROJ1: projects with budgets less than $200,000
PROJ2: projects with budgets greater than or equal to $200,000

PROJ
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal
P2   Database Develop.  135000  New York
P3   CAD/CAM            250000  New York
P4   Maintenance        310000  Paris
P5   CAD/CAM            500000  Boston

PROJ1
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal
P2   Database Develop.  135000  New York

PROJ2
PNO  PNAME        BUDGET  LOC
P3   CAD/CAM      250000  New York
P4   Maintenance  310000  Paris
P5   CAD/CAM      500000  Boston
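As a hedged sketch (the `select` helper and the Python representation are mine, not from the slides), the split above is a pair of relational selections:

```python
# Horizontal fragmentation sketch: split PROJ on BUDGET < 200000,
# mirroring the PROJ1/PROJ2 example above. Data comes from the sample
# relation; the helper name is illustrative.
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

def select(relation, predicate):
    """Relational selection: keep the tuples that satisfy the predicate."""
    return [t for t in relation if predicate(t)]

PROJ1 = select(PROJ, lambda t: t["BUDGET"] < 200000)
PROJ2 = select(PROJ, lambda t: t["BUDGET"] >= 200000)
```

Note that the two selection formulas are complements of each other, which is what makes the fragmentation complete and disjoint.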
Fragmentation Alternatives –
Vertical
PROJ1: information about project budgets
PROJ2: information about project names and locations

PROJ
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal
P2   Database Develop.  135000  New York
P3   CAD/CAM            250000  New York
P4   Maintenance        310000  Paris
P5   CAD/CAM            500000  Boston

PROJ1
PNO  BUDGET
P1   150000
P2   135000
P3   250000
P4   310000
P5   500000

PROJ2
PNO  PNAME              LOC
P1   Instrumentation    Montreal
P2   Database Develop.  New York
P3   CAD/CAM            New York
P4   Maintenance        Paris
P5   CAD/CAM            Boston
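A similar sketch for the vertical case (helper names `project` and `join_on_key` are illustrative): the key PNO is replicated in both fragments so that PROJ can be rebuilt by a join:

```python
# Vertical fragmentation sketch: project PROJ onto two attribute sets,
# replicating the key PNO in both fragments so the relation can be
# reconstructed by joining the fragments on the key.
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

def project(relation, attrs):
    """Relational projection onto the given attribute list."""
    return [{a: t[a] for a in attrs} for t in relation]

PROJ1 = project(PROJ, ["PNO", "BUDGET"])        # budget information
PROJ2 = project(PROJ, ["PNO", "PNAME", "LOC"])  # names and locations

def join_on_key(R, S, key):
    """Join the two fragments on the shared key attribute."""
    s_index = {t[key]: t for t in S}
    return [{**r, **s_index[r[key]]} for r in R if r[key] in s_index]
```

Joining PROJ1 and PROJ2 on PNO reproduces PROJ tuple for tuple, which is the reconstruction rule for vertical fragmentation.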
Degree of Fragmentation
Fragmentation can range from whole relations (no fragmentation) down to individual tuples (horizontal) or individual attributes (vertical). The design task is to find a suitable level of partitioning within this range; there is a finite number of alternatives.
Correctness of Fragmentation
• Completeness
➡ Decomposition of relation R into fragments R1, R2, ..., Rn is complete if and
only if each data item in R can also be found in some Ri
• Reconstruction
➡ If relation R is decomposed into fragments R1, R2, ..., Rn, then there should
exist some relational operator ∇ such that
R = ∇1≤i≤nRi
• Disjointness
➡ If relation R is decomposed into fragments R1, R2, ..., Rn, and data item di is in
Rj, then di should not be in any other fragment Rk (k ≠ j ).
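These three rules can be checked mechanically for a horizontal fragmentation, where the reconstruction operator ∇ is union. A small sketch (all helper names are mine, and the data items are simplified to plain values):

```python
def is_complete(R, fragments):
    """Completeness: every data item of R appears in some fragment Ri."""
    pool = [t for f in fragments for t in f]
    return all(t in pool for t in R)

def reconstructs(R, fragments):
    """Reconstruction: for horizontal fragmentation the operator is union."""
    union = [t for f in fragments for t in f]
    return sorted(union) == sorted(R)

def is_disjoint(fragments):
    """Disjointness: no data item appears in more than one fragment."""
    seen = []
    for f in fragments:
        for t in f:
            if t in seen:
                return False
            seen.append(t)
    return True

R = [1, 2, 3, 4, 5]
good = [[1, 2], [3, 4, 5]]          # complete, reconstructible, disjoint
overlapping = [[1, 2, 3], [3, 4, 5]]  # violates disjointness
```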
Allocation Alternatives
• Non-replicated
➡ partitioned : each fragment resides at only one site
• Replicated
➡ fully replicated : each fragment at each site
➡ partially replicated : each fragment at some of the sites
• Rule of thumb:
If (read-only queries) / (update queries) ≫ 1, replication is advantageous; otherwise replication may cause problems.
Comparison of Replication Alternatives

                      Full replication       Partial replication   Partitioning
QUERY PROCESSING      Easy                   Same difficulty       Same difficulty
DIRECTORY MANAGEMENT  Easy or nonexistent    Same difficulty       Same difficulty
CONCURRENCY CONTROL   Moderate               Difficult             Easy
RELIABILITY           Very high              High                  Low
REALITY               Possible application   Realistic             Possible application
Information Requirements
• Four categories:
➡ Database information
➡ Application information
➡ Communication network information
➡ Computer system information
Fragmentation
• Horizontal Fragmentation (HF)
➡ Primary Horizontal Fragmentation (PHF)
➡ Derived Horizontal Fragmentation (DHF)
• Vertical Fragmentation (VF)
• Hybrid Fragmentation
PHF – Information Requirements
• Database Information
➡ relationships among relations (links)
➡ cardinality of each relation: card(R)
Example schema: SKILL(TITLE, SAL), EMP(ENO, ENAME, TITLE), PROJ(PNO, PNAME, BUDGET, LOC), ASG(ENO, PNO, RESP, DUR); link L1 connects owner SKILL to member EMP.
PHF - Information Requirements
• Application Information
➡ simple predicates: Given R[A1, A2, …, An], a simple predicate pj has the form
pj : Ai θ Value
where θ ∈ {=, <, ≤, >, ≥, ≠}, Value ∈ Di, and Di is the domain of Ai.
For relation R we define Pr = {p1, p2, …, pm}
Example:
PNAME = "Maintenance"
BUDGET ≤ 200000
➡ minterm predicates: Given R and Pr = {p1, p2, …, pm},
define M = {m1, m2, …, mz} as
M = { mi | mi = ⋀pj∈Pr pj* }, 1 ≤ j ≤ m, 1 ≤ i ≤ z
where pj* = pj or pj* = ¬(pj).
PHF – Information Requirements
Example
m1: PNAME = "Maintenance" ∧ BUDGET ≤ 200000
m2: ¬(PNAME = "Maintenance") ∧ BUDGET ≤ 200000
m3: PNAME = "Maintenance" ∧ ¬(BUDGET ≤ 200000)
m4: ¬(PNAME = "Maintenance") ∧ ¬(BUDGET ≤ 200000)
PHF – Information Requirements
• Application Information
➡ minterm selectivities: sel(mi)
✦ The number of tuples of the relation that would be accessed by a user query
which is specified according to a given minterm predicate mi.
➡ access frequencies: acc(qi)
✦ The frequency with which a user application qi accesses data.
✦ Access frequency for a minterm predicate can also be defined.
Primary Horizontal Fragmentation
Definition:
Rj = σFj(R), 1 ≤ j ≤ w
where Fj is a selection formula, which is (preferably) a minterm predicate.
Therefore,
a horizontal fragment Ri of relation R consists of all the tuples of R that satisfy a minterm predicate mi.
⇒ Given a set of minterm predicates M, there are as many horizontal fragments of relation R as there are minterm predicates.
The set of horizontal fragments is also referred to as the set of minterm fragments.
PHF – Algorithm
Given: A relation R, the set of simple predicates Pr
Output: The set of fragments of R = {R1, R2,…,Rw} which obey the
fragmentation rules.
Preliminaries :
➡ Pr should be complete
➡ Pr should be minimal
Completeness of Simple
Predicates
• A set of simple predicates Pr is said to be complete if and only if any two tuples belonging to the same minterm fragment defined on Pr have the same probability of being accessed by every application.
• Example :
➡ Assume PROJ[PNO,PNAME,BUDGET,LOC] has two applications defined
on it.
➡ Find the budgets of projects at each location. (1)
➡ Find projects with budgets less than $200000. (2)
Completeness of Simple
Predicates
According to (1),
Pr = {LOC=“Montreal”, LOC=“New York”, LOC=“Paris”}
which is not complete with respect to (2). Modify:
Pr = {LOC=“Montreal”, LOC=“New York”, LOC=“Paris”, BUDGET≤200000, BUDGET>200000}
which is complete.
Minimality of Simple Predicates
• If a predicate influences how fragmentation is performed (i.e., causes a fragment f to be further fragmented into, say, fi and fj), then there should be at least one application that accesses fi and fj differently; in terms of the minterm predicates mi and mj defining fi and fj:
acc(mi) / card(fi) ≠ acc(mj) / card(fj)
• In other words, the simple predicate should be relevant in determining a fragmentation.
• If all the predicates of a set Pr are relevant, then Pr is minimal.
Minimality of Simple Predicates
Example :
Pr ={LOC=“Montreal”,LOC=“New York”, LOC=“Paris”,
BUDGET≤200000,BUDGET>200000}
is minimal (in addition to being complete). However, if we add
PNAME = “Instrumentation”
then Pr is not minimal.
COM_MIN Algorithm
Given: a relation R and a set of simple predicates Pr
Output: a complete and minimal set of simple predicates Pr' for Pr
Rule 1: a relation or fragment is partitioned into at least two parts which
are accessed differently by at least one application.
COM_MIN Algorithm
1. Initialization:
➡ find a pi ∈ Pr such that pi partitions R according to Rule 1
➡ set Pr' = {pi}; Pr ← Pr – {pi}; F ← {fi}
2. Iteratively add predicates to Pr' until it is complete:
➡ find a pj ∈ Pr such that pj partitions some fragment fk defined according to a minterm predicate over Pr', according to Rule 1
➡ set Pr' ← Pr' ∪ {pj}; Pr ← Pr – {pj}; F ← F ∪ {fj}
➡ if there is a pk ∈ Pr' which is nonrelevant, then
Pr' ← Pr' – {pk}
F ← F – {fk}
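A simplified sketch of COM_MIN follows. It approximates Rule 1 by requiring only that a predicate split some current fragment into two nonempty parts; the full algorithm additionally requires that at least one application access the two parts differently (relevance). All names and data are illustrative:

```python
def com_min(R, Pr):
    """Simplified COM_MIN: greedily move predicates into Pr' as long as
    they split some current fragment into two nonempty parts (an
    approximation of Rule 1)."""
    Pr = list(Pr)
    Pr_prime, fragments = [], [list(R)]
    progress = True
    while progress:
        progress = False
        for p in list(Pr):
            # Does p partition some fragment (nonempty "yes" and "no" parts)?
            if any(0 < sum(p(t) for t in f) < len(f) for f in fragments):
                Pr_prime.append(p)
                Pr.remove(p)
                refined = []
                for f in fragments:
                    yes = [t for t in f if p(t)]
                    no = [t for t in f if not p(t)]
                    refined.extend(g for g in (yes, no) if g)
                fragments = refined
                progress = True
    return Pr_prime, fragments

# Toy run on the PROJ budgets; both predicates turn out to be relevant.
budgets = [150000, 135000, 250000, 310000, 500000]
preds = [lambda b: b <= 200000, lambda b: b > 300000]
Pr_prime, frags = com_min(budgets, preds)
```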
PHORIZONTAL Algorithm
Makes use of COM_MIN to perform fragmentation.
Input: a relation R and a set of simple predicates Pr
Output: a set of minterm predicates M according to which relation R is to
be fragmented
1. Pr' ← COM_MIN(R, Pr)
2. determine the set M of minterm predicates
3. determine the set I of implications among pi ∈ Pr'
4. eliminate the contradictory minterms from M
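The steps above can be sketched by enumerating all 2^|Pr'| sign combinations; evaluating each minterm against the actual tuples and discarding the empty fragments stands in for the implication-based elimination of contradictory minterms (a simplification of steps 3–4). Names and data are illustrative:

```python
from itertools import product

def phorizontal(R, Pr):
    """PHORIZONTAL sketch: enumerate all minterms over the simple
    predicates and keep only the satisfiable (nonempty) fragments."""
    fragments = {}
    for signs in product((True, False), repeat=len(Pr)):
        # A tuple is in this minterm fragment iff each predicate's truth
        # value matches the chosen sign.
        frag = [t for t in R
                if all(bool(p(t)) == s for p, s in zip(Pr, signs))]
        if frag:
            fragments[signs] = frag
    return fragments

PROJ = [("P1", 150000, "Montreal"), ("P2", 135000, "New York"),
        ("P3", 250000, "New York"), ("P4", 310000, "Paris"),
        ("P5", 500000, "Boston")]
Pr = [lambda t: t[1] <= 200000, lambda t: t[2] == "New York"]
frags = phorizontal(PROJ, Pr)
```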
PHF – Example
• Two candidate relations: PAY and PROJ.
• Fragmentation of relation PAY
➡ Application: check the salary info and determine raises.
➡ Employee records kept at two sites ⇒ application runs at two sites
➡ Simple predicates
p1: SAL ≤ 30000
p2: SAL > 30000
Pr = {p1, p2}, which is complete and minimal, so Pr' = Pr
➡ Minterm predicates
m1: (SAL ≤ 30000)
m2: ¬(SAL ≤ 30000) = (SAL > 30000)
PHF – Example
PAY1
TITLE       SAL
Mech. Eng.  27000
Programmer  24000

PAY2
TITLE        SAL
Elect. Eng.  40000
Syst. Anal.  34000
PHF – Example
• Fragmentation of relation PROJ
➡ Applications:
✦ Find the name and budget of projects given their no.
✓ Issued at three sites
✦ Access project information according to budget
✓ one site accesses BUDGET ≤ 200000, the other accesses BUDGET > 200000
➡ Simple predicates
➡ For application (1)
p1 : LOC = “Montreal”
p2 : LOC = “New York”
p3 : LOC = “Paris”
➡ For application (2)
p4 : BUDGET ≤ 200000
p5 : BUDGET > 200000
➡ Pr = Pr' = {p1,p2,p3,p4,p5}
PHF – Example
• Fragmentation of relation PROJ continued
➡ Minterm fragments left after elimination
m1: (LOC = “Montreal”) ∧ (BUDGET ≤ 200000)
m2: (LOC = “Montreal”) ∧ (BUDGET > 200000)
m3: (LOC = “New York”) ∧ (BUDGET ≤ 200000)
m4: (LOC = “New York”) ∧ (BUDGET > 200000)
m5: (LOC = “Paris”) ∧ (BUDGET ≤ 200000)
m6: (LOC = “Paris”) ∧ (BUDGET > 200000)
PHF – Example
PROJ1
PNO  PNAME            BUDGET  LOC
P1   Instrumentation  150000  Montreal

PROJ2
PNO  PNAME              BUDGET  LOC
P2   Database Develop.  135000  New York

PROJ4
PNO  PNAME    BUDGET  LOC
P3   CAD/CAM  250000  New York

PROJ6
PNO  PNAME        BUDGET  LOC
P4   Maintenance  310000  Paris
PHF – Correctness
• Completeness
➡ Since Pr' is complete and minimal, the selection predicates are complete
• Reconstruction
➡ If relation R is fragmented into FR = {R1, R2, …, Rr},
R = ∪Ri∈FR Ri
• Disjointness
➡ Minterm predicates that form the basis of fragmentation should be mutually exclusive.
Derived Horizontal Fragmentation
• Defined on a member relation of a link according to a selection operation specified on its owner.
➡ Each link is an equijoin.
➡ Equijoin can be implemented by means of semijoins.

Example schema: SKILL(TITLE, SAL), EMP(ENO, ENAME, TITLE), PROJ(PNO, PNAME, BUDGET, LOC), ASG(ENO, PNO, RESP, DUR); link L1 connects owner SKILL to member EMP, and links L2 and L3 connect owners EMP and PROJ to member ASG.
DHF – Definition
Given a link L where owner(L) = S and member(L) = R, the derived horizontal fragments of R are defined as
Ri = R ⋉F Si, 1 ≤ i ≤ w
where F is the join predicate of the link, w is the maximum number of fragments that will be defined on R, and
Si = σFi(S)
where Fi is the formula according to which the primary horizontal fragment Si is defined.
DHF – Example
Given link L1 where owner(L1) = SKILL and member(L1) = EMP:
EMP1 = EMP ⋉ SKILL1
EMP2 = EMP ⋉ SKILL2
where
SKILL1 = σSAL≤30000(SKILL)
SKILL2 = σSAL>30000(SKILL)

EMP1
ENO  ENAME      TITLE
E3   A. Lee     Mech. Eng.
E4   J. Miller  Programmer
E7   R. Davis   Mech. Eng.

EMP2
ENO  ENAME     TITLE
E1   J. Doe    Elect. Eng.
E2   M. Smith  Syst. Anal.
E5   B. Casey  Syst. Anal.
E6   L. Chu    Elect. Eng.
E8   J. Jones  Syst. Anal.
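The semijoin-based fragmentation above can be sketched as follows (data from the example; the `semijoin` helper is illustrative and joins on the common attribute TITLE):

```python
# Derived horizontal fragmentation sketch: fragment the member relation
# EMP by semijoining it with the primary fragments of its owner SKILL.
SKILL = [("Elect. Eng.", 40000), ("Syst. Anal.", 34000),
         ("Mech. Eng.", 27000), ("Programmer", 24000)]
EMP = [("E1", "J. Doe", "Elect. Eng."), ("E2", "M. Smith", "Syst. Anal."),
       ("E3", "A. Lee", "Mech. Eng."), ("E4", "J. Miller", "Programmer"),
       ("E5", "B. Casey", "Syst. Anal."), ("E6", "L. Chu", "Elect. Eng."),
       ("E7", "R. Davis", "Mech. Eng."), ("E8", "J. Jones", "Syst. Anal.")]

SKILL1 = [s for s in SKILL if s[1] <= 30000]   # sigma SAL<=30000 (SKILL)
SKILL2 = [s for s in SKILL if s[1] > 30000]    # sigma SAL>30000 (SKILL)

def semijoin(member, owner_frag):
    """member semijoin owner_frag on the join attribute TITLE: keep the
    member tuples that join with some tuple of the owner fragment."""
    titles = {s[0] for s in owner_frag}
    return [e for e in member if e[2] in titles]

EMP1 = semijoin(EMP, SKILL1)
EMP2 = semijoin(EMP, SKILL2)
```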
DHF – Correctness
• Completeness
➡ Referential integrity
➡ Let R be the member relation of a link whose owner is relation S which is
fragmented as FS = {S1, S2, ..., Sn}. Furthermore, let A be the join attribute
between R and S. Then, for each tuple t of R, there should be a tuple t' of S
such that
t[A] = t' [A]
• Reconstruction
➡ Same as primary horizontal fragmentation.
• Disjointness
➡ Simple join graphs between the owner and the member fragments.
Vertical Fragmentation
• Has been studied within the centralized context
➡ design methodology
➡ physical clustering
• More difficult than horizontal fragmentation, because more alternatives exist.
Two approaches:
➡ grouping
✦ attributes to fragments
➡ splitting
✦ relation to fragments
Vertical Fragmentation
• Overlapping fragments
➡ grouping
• Non-overlapping fragments
➡ splitting
We do not consider the replicated key attributes to be overlapping.
Advantage: easier to enforce functional dependencies (for integrity checking, etc.)
VF – Information Requirements
• Application Information
➡ Attribute affinities
✦ a measure that indicates how closely related the attributes are
✦ This is obtained from more primitive usage data
➡ Attribute usage values
✦ Given a set of queries Q = {q1, q2, …, qq} that will run on the relation R[A1, A2, …, An], define
use(qi, Aj) = 1 if attribute Aj is referenced by query qi, 0 otherwise
✦ The vector use(qi, •) collects these values for each query.
VF – Definition of use(qi,Aj)
Consider the following 4 queries for relation PROJ
q1:SELECT BUDGET q2: SELECT PNAME,BUDGET
FROM PROJ FROM PROJ
WHERE PNO=Value
q3:SELECT PNAME q4: SELECT SUM(BUDGET)
FROM PROJ FROM PROJ
WHERE LOC=Value WHERE LOC=Value
Let A1 = PNO, A2 = PNAME, A3 = BUDGET, A4 = LOC. The attribute usage matrix is:

     A1  A2  A3  A4
q1    1   0   1   0
q2    0   1   1   0
q3    0   1   0   1
q4    0   0   1   1
VF – Affinity Measure aff(Ai,Aj)
The attribute affinity measure between two attributes Ai and Aj of a relation
R[A1, A2, …, An] with respect to the set of applications Q = (q1, q2, …, qq) is
defined as follows :
aff(Ai, Aj) = Σ (over all queries qk that access both Ai and Aj) Σ (over all sites Sl) ref_l(qk) · acc_l(qk)
where ref_l(qk) is the number of accesses query qk makes to attributes Ai and Aj per execution at site Sl, and acc_l(qk) is the access frequency of qk at site Sl.
VF – Calculation of aff(Ai, Aj)
Assume each query in the previous example accesses the attributes once during each execution, and assume the access frequencies acc_l(qk):

     S1  S2  S3
q1   15  20  10
q2    5   0   0
q3   25  25  25
q4    3   0   0

Then
aff(A1, A3) = 15·1 + 20·1 + 10·1 = 45
(q1 is the only query that accesses both A1 and A3), and the attribute affinity matrix AA is

     A1  A2  A3  A4
A1   45   0  45   0
A2    0  80   5  75
A3   45   5  53   3
A4    0  75   3  78
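The whole AA matrix can be computed from the usage matrix and the frequencies, assuming ref_l(qk) = 1 as the text does. A sketch:

```python
# Compute the attribute affinity matrix AA from the attribute usage
# matrix and the per-site access frequencies of the running example.
use = [[1, 0, 1, 0],   # q1 references PNO, BUDGET
       [0, 1, 1, 0],   # q2 references PNAME, BUDGET
       [0, 1, 0, 1],   # q3 references PNAME, LOC
       [0, 0, 1, 1]]   # q4 references BUDGET, LOC
acc = [[15, 20, 10],   # frequencies of q1 at sites S1, S2, S3
       [5, 0, 0],
       [25, 25, 25],
       [3, 0, 0]]

n = len(use[0])
# aff(Ai, Aj) sums the total frequency of every query that uses both
# attributes (ref_l(qk) is taken to be 1, as in the example).
AA = [[sum(sum(acc[q]) for q in range(len(use))
           if use[q][i] and use[q][j])
       for j in range(n)] for i in range(n)]
```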
VF – Clustering Algorithm
• Take the attribute affinity matrix AA and reorganize the attribute order to form clusters where the attributes in each cluster demonstrate high affinity to one another.
• The Bond Energy Algorithm (BEA) has been used for clustering of entities. BEA finds an ordering of entities (in our case attributes) such that the global affinity measure
AM = Σi Σj aff(Ai, Aj) · (affinity of Ai and Aj with their neighbors)
is maximized.
Bond Energy Algorithm
Input: The AA matrix
Output: The clustered affinity matrix CA which is a perturbation of AA
Initialization: Place and fix one of the columns of AA in CA.
Iteration: Place the remaining n-i columns in the remaining i+1 positions in
the CA matrix. For each column, choose the placement that makes the most
contribution to the global affinity measure.
Row order: Order the rows according to the column ordering.
Bond Energy Algorithm
“Best” placement? Define the contribution of a placement:
cont(Ai, Ak, Aj) = 2·bond(Ai, Ak) + 2·bond(Ak, Aj) – 2·bond(Ai, Aj)
where
bond(Ax, Ay) = Σz=1..n aff(Az, Ax) · aff(Az, Ay)
BEA – Example
Consider the following AA matrix and the corresponding CA matrix where A1 and A2 have been placed (A0 denotes the empty border position, whose bond with any column is 0). Place A3:
Ordering (0-3-1) :
cont(A0,A3,A1) = 2bond(A0 , A3)+2bond(A3 , A1)–2bond(A0 , A1)
= 2* 0 + 2* 4410 – 2*0 = 8820
Ordering (1-3-2) :
cont(A1,A3,A2) = 2bond(A1 , A3)+2bond(A3 , A2)–2bond(A1,A2)
= 2* 4410 + 2* 890 – 2*225 = 10150
Ordering (2-3-4) :
cont (A2,A3,A4) = 1780
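A sketch of `bond` and `cont` over this AA matrix (indices 0–3 stand for A1–A4; `None` stands for the empty border column A0); the function names mirror the formulas above:

```python
# bond and cont for the BEA placement example over the AA matrix.
AA = [[45, 0, 45, 0],
      [0, 80, 5, 75],
      [45, 5, 53, 3],
      [0, 75, 3, 78]]

def bond(x, y):
    """bond(Ax, Ay) = sum over z of aff(Az, Ax) * aff(Az, Ay)."""
    if x is None or y is None:   # bond with the empty border column is 0
        return 0
    return sum(AA[z][x] * AA[z][y] for z in range(len(AA)))

def cont(i, k, j):
    """Contribution of placing column k between columns i and j."""
    return 2 * bond(i, k) + 2 * bond(k, j) - 2 * bond(i, j)
```

The values match the worked orderings above: placing A3 between A1 and A2 gives the largest contribution.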
BEA – Example
• Therefore, the CA matrix has the form

     A1  A3  A2
A1   45  45   0
A2    0   5  80
A3   45  53   5
A4    0   3  75

• When A4 is placed, the final form of the CA matrix (after row reorganization) is

     A1  A3  A2  A4
A1   45  45   0   0
A3   45  53   5   3
A2    0   5  80  75
A4    0   3  75  78
VF – Algorithm
How can you divide a set of clustered attributes {A1, A2, …, Am} into two (or more) sets {A1, A2, …, Ai} and {Ai+1, …, Am} such that there are no (or minimal) applications that access both (or more than one) of the sets?

The clustered affinity matrix CA is split along its diagonal: the top-left block defines the attribute set TA = {A1, …, Ai} and the bottom-right block defines BA = {Ai+1, …, Am}.
VF – Algorithm
Define
TQ = set of applications that access only TA
BQ = set of applications that access only BA
OQ = set of applications that access both TA and BA
and
CTQ = total number of accesses to attributes by applications that access only TA
CBQ = total number of accesses to attributes by applications that access only BA
COQ = total number of accesses to attributes by applications that access both TA and BA
Then find the point along the diagonal that maximizes
z = CTQ · CBQ – COQ²
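A sketch of the split-point scan on the running example (the `best_split` helper is mine; `order` is the attribute order produced by BEA above):

```python
# Scan the split points along the diagonal of the clustered matrix and
# maximize z = CTQ*CBQ - COQ^2, using the usage matrix and per-site
# frequencies of the running example.
use = [[1, 0, 1, 0], [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]]
acc = [[15, 20, 10], [5, 0, 0], [25, 25, 25], [3, 0, 0]]
order = [0, 2, 1, 3]    # A1, A3, A2, A4 after BEA clustering

def best_split(use, acc, order):
    best_z, best_i = None, None
    for i in range(1, len(order)):
        TA, BA = set(order[:i]), set(order[i:])
        ctq = cbq = coq = 0
        for q, row in enumerate(use):
            attrs = {a for a, u in enumerate(row) if u}
            freq = sum(acc[q])          # total accesses of query q
            if attrs <= TA:
                ctq += freq             # accesses only TA
            elif attrs <= BA:
                cbq += freq             # accesses only BA
            else:
                coq += freq             # accesses both
        z = ctq * cbq - coq ** 2
        if best_z is None or z > best_z:
            best_z, best_i = z, i
    return best_i, best_z

split, z = best_split(use, acc, order)
```

Here the best split separates {A1, A3} (the PNO/BUDGET cluster) from {A2, A4}.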
VF – Algorithm
Two problems:
• Cluster forming in the middle of the CA matrix
➡ shift a row up and a column left and apply the algorithm to find the “best” partitioning point
➡ do this for all possible shifts
➡ cost: O(m²)
• More than two clusters
➡ m-way partitioning
➡ try 1, 2, …, m–1 split points along the diagonal and find the best point for each
➡ cost: O(2^m)
VF – Correctness
A relation R, defined over attribute set A and key K, generates the vertical
partitioning FR = {R1, R2, …, Rr}.
• Completeness
➡ The following should hold for the attribute set A:
A = ∪ A(Ri)
• Reconstruction
➡ Reconstruction can be achieved by the join on the key K:
R = ⋈K Ri, Ri ∈ FR
• Disjointness
➡ TIDs are not considered to be overlapping since they are maintained by the system
➡ Duplicated keys are not considered to be overlapping
Hybrid Fragmentation
Example: relation R is first horizontally fragmented (HF) into R1 and R2; R1 is then vertically fragmented (VF) into R11 and R12, and R2 into R21, R22, and R23.
Fragment Allocation
• Problem Statement
Given
F = {F1, F2, …, Fn} fragments
S ={S1, S2, …, Sm} network sites
Q = {q1, q2,…, qq} applications
Find the "optimal" distribution of F to S.
• Optimality
➡ Minimal cost
✦ Communication + storage + processing (read & update)
✦ Cost in terms of time (usually)
➡ Performance
✦ response time and/or throughput
➡ Constraints
✦ Per site constraints (storage & processing)
Information Requirements
• Database information
➡ selectivity of fragments
➡ size of a fragment
• Application information
➡ access types and numbers
➡ access localities
• Communication network information
➡ bandwidth
➡ latency
➡ communication overhead
• Computer system information
➡ unit cost of storing data at a site
➡ unit cost of processing at a site
Allocation
File Allocation (FAP) vs Database Allocation (DAP):
➡ Fragments are not individual files
✦ relationships have to be maintained
➡ Access to databases is more complicated
✦ remote file access model not applicable
✦ relationship between allocation and query processing
➡ Cost of integrity enforcement should be considered
➡ Cost of concurrency control should be considered
Allocation – Information
Requirements
• Database Information
➡ selectivity of fragments
➡ size of a fragment
• Application Information
➡ number of read accesses of a query to a fragment
➡ number of update accesses of query to a fragment
➡ A matrix indicating which queries update which fragments
➡ A similar matrix for retrievals
➡ originating site of each query
• Site Information
➡ unit cost of storing data at a site
➡ unit cost of processing at a site
• Network Information
➡ communication cost/frame between two sites
➡ frame size
Allocation Model
• General form:
min(Total Cost)
subject to
✦ response time constraint
✦ storage constraint
✦ processing constraint
• Decision variable:
xij = 1 if fragment Fi is stored at site Sj; 0 otherwise
Allocation Model
• Total Cost
= Σ (all queries) query processing cost + Σ (all sites) Σ (all fragments) cost of storing a fragment at a site
• Storage Cost (of fragment Fj at site Sk)
= (unit storage cost at Sk) · (size of Fj) · xjk
• Query Processing Cost (for one query)
= processing component + transmission component
Allocation Model
• Query Processing Cost
Processing component = access cost + integrity enforcement cost + concurrency control cost
➡ Access cost = Σ (all sites) Σ (all fragments) (no. of update accesses + no. of read accesses) · xij · (local processing cost at a site)
➡ Integrity enforcement and concurrency control costs
✦ can be calculated similarly
Allocation Model
• Query Processing Cost
Transmission component = cost of processing updates + cost of processing retrievals
➡ Cost of updates = Σ (all sites) Σ (all fragments) update message cost + Σ (all sites) Σ (all fragments) acknowledgment cost
➡ Retrieval cost = Σ (all fragments) min (over all sites) (cost of retrieval command + cost of sending back the result)
Allocation Model
• Constraints
➡ Response time
execution time of query ≤ max. allowable response time for that query
➡ Storage constraint (for a site)
Σ (all fragments) storage requirement of a fragment at that site ≤ storage capacity of that site
➡ Processing constraint (for a site)
Σ (all queries) processing load of a query at that site ≤ processing capacity of that site
Allocation Model
• Solution Methods
➡ FAP is NP-complete
➡ DAP also NP-complete
• Heuristics based on
➡ single commodity warehouse location (for FAP)
➡ knapsack problem
➡ branch and bound techniques
➡ network flow
Allocation Model
• Attempts to reduce the solution space
➡ assume all candidate partitionings known; select the “best” partitioning
➡ ignore replication at first
➡ sliding window on fragments
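As a toy illustration of the non-replicated (partitioned) allocation problem, the following brute-force search assigns each fragment to exactly one site, minimizing a given cost matrix subject to per-site storage capacity. All numbers are invented for illustration, and exhaustive search is only usable here because the instance is tiny (DAP itself is NP-complete):

```python
from itertools import product

def allocate(frag_sizes, capacity, cost):
    """Exhaustive non-replicated allocation: cost[f][s] is the cost of
    placing fragment f at site s; capacity[s] is the storage limit."""
    best = None
    for assign in product(range(len(capacity)), repeat=len(frag_sizes)):
        # Check the storage constraint at every site.
        used = [0] * len(capacity)
        for f, s in enumerate(assign):
            used[s] += frag_sizes[f]
        if any(u > c for u, c in zip(used, capacity)):
            continue
        # Keep the cheapest feasible assignment.
        total = sum(cost[f][s] for f, s in enumerate(assign))
        if best is None or total < best[0]:
            best = (total, assign)
    return best

# Three fragments, two sites; fragment 0 is cheap at site 0, fragment 1
# cheap at site 1, fragment 2 indifferent.
total, assign = allocate([2, 2, 1], [3, 3], [[1, 5], [5, 1], [2, 2]])
```

Real solvers replace this enumeration with the heuristics listed above (knapsack, branch and bound, network flow).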
 
Solution(1)
Solution(1)Solution(1)
Solution(1)
 
IRJET- Hadoop based Frequent Closed Item-Sets for Association Rules form ...
IRJET-  	  Hadoop based Frequent Closed Item-Sets for Association Rules form ...IRJET-  	  Hadoop based Frequent Closed Item-Sets for Association Rules form ...
IRJET- Hadoop based Frequent Closed Item-Sets for Association Rules form ...
 
Chapter 3 principles of parallel algorithm design
Chapter 3   principles of parallel algorithm designChapter 3   principles of parallel algorithm design
Chapter 3 principles of parallel algorithm design
 
cis97003
cis97003cis97003
cis97003
 
1 introduction DDBS
1 introduction DDBS1 introduction DDBS
1 introduction DDBS
 
DataRobot R Package
DataRobot R PackageDataRobot R Package
DataRobot R Package
 

Mehr von Ali Usman

Cisco Packet Tracer Overview
Cisco Packet Tracer OverviewCisco Packet Tracer Overview
Cisco Packet Tracer OverviewAli Usman
 
Islamic Arts and Architecture
Islamic Arts and  ArchitectureIslamic Arts and  Architecture
Islamic Arts and ArchitectureAli Usman
 
Database ,18 Current Issues
Database ,18 Current IssuesDatabase ,18 Current Issues
Database ,18 Current IssuesAli Usman
 
Database , 17 Web
Database , 17 WebDatabase , 17 Web
Database , 17 WebAli Usman
 
Database ,16 P2P
Database ,16 P2P Database ,16 P2P
Database ,16 P2P Ali Usman
 
Database , 15 Object DBMS
Database , 15 Object DBMSDatabase , 15 Object DBMS
Database , 15 Object DBMSAli Usman
 
Database ,14 Parallel DBMS
Database ,14 Parallel DBMSDatabase ,14 Parallel DBMS
Database ,14 Parallel DBMSAli Usman
 
Database , 13 Replication
Database , 13 ReplicationDatabase , 13 Replication
Database , 13 ReplicationAli Usman
 
Database , 12 Reliability
Database , 12 ReliabilityDatabase , 12 Reliability
Database , 12 ReliabilityAli Usman
 
Database ,11 Concurrency Control
Database ,11 Concurrency ControlDatabase ,11 Concurrency Control
Database ,11 Concurrency ControlAli Usman
 
Database ,10 Transactions
Database ,10 TransactionsDatabase ,10 Transactions
Database ,10 TransactionsAli Usman
 
Database , 6 Query Introduction
Database , 6 Query Introduction Database , 6 Query Introduction
Database , 6 Query Introduction Ali Usman
 
Database , 5 Semantic
Database , 5 SemanticDatabase , 5 Semantic
Database , 5 SemanticAli Usman
 
Database , 4 Data Integration
Database , 4 Data IntegrationDatabase , 4 Data Integration
Database , 4 Data IntegrationAli Usman
 
Processor Specifications
Processor SpecificationsProcessor Specifications
Processor SpecificationsAli Usman
 
Fifty Year Of Microprocessor
Fifty Year Of MicroprocessorFifty Year Of Microprocessor
Fifty Year Of MicroprocessorAli Usman
 
Discrete Structures lecture 2
 Discrete Structures lecture 2 Discrete Structures lecture 2
Discrete Structures lecture 2Ali Usman
 
Discrete Structures. Lecture 1
 Discrete Structures. Lecture 1  Discrete Structures. Lecture 1
Discrete Structures. Lecture 1 Ali Usman
 
Muslim Contributions in Medicine-Geography-Astronomy
Muslim Contributions in Medicine-Geography-AstronomyMuslim Contributions in Medicine-Geography-Astronomy
Muslim Contributions in Medicine-Geography-AstronomyAli Usman
 
Muslim Contributions in Geography
Muslim Contributions in GeographyMuslim Contributions in Geography
Muslim Contributions in GeographyAli Usman
 

Mehr von Ali Usman (20)

Cisco Packet Tracer Overview
Cisco Packet Tracer OverviewCisco Packet Tracer Overview
Cisco Packet Tracer Overview
 
Islamic Arts and Architecture
Islamic Arts and  ArchitectureIslamic Arts and  Architecture
Islamic Arts and Architecture
 
Database ,18 Current Issues
Database ,18 Current IssuesDatabase ,18 Current Issues
Database ,18 Current Issues
 
Database , 17 Web
Database , 17 WebDatabase , 17 Web
Database , 17 Web
 
Database ,16 P2P
Database ,16 P2P Database ,16 P2P
Database ,16 P2P
 
Database , 15 Object DBMS
Database , 15 Object DBMSDatabase , 15 Object DBMS
Database , 15 Object DBMS
 
Database ,14 Parallel DBMS
Database ,14 Parallel DBMSDatabase ,14 Parallel DBMS
Database ,14 Parallel DBMS
 
Database , 13 Replication
Database , 13 ReplicationDatabase , 13 Replication
Database , 13 Replication
 
Database , 12 Reliability
Database , 12 ReliabilityDatabase , 12 Reliability
Database , 12 Reliability
 
Database ,11 Concurrency Control
Database ,11 Concurrency ControlDatabase ,11 Concurrency Control
Database ,11 Concurrency Control
 
Database ,10 Transactions
Database ,10 TransactionsDatabase ,10 Transactions
Database ,10 Transactions
 
Database , 6 Query Introduction
Database , 6 Query Introduction Database , 6 Query Introduction
Database , 6 Query Introduction
 
Database , 5 Semantic
Database , 5 SemanticDatabase , 5 Semantic
Database , 5 Semantic
 
Database , 4 Data Integration
Database , 4 Data IntegrationDatabase , 4 Data Integration
Database , 4 Data Integration
 
Processor Specifications
Processor SpecificationsProcessor Specifications
Processor Specifications
 
Fifty Year Of Microprocessor
Fifty Year Of MicroprocessorFifty Year Of Microprocessor
Fifty Year Of Microprocessor
 
Discrete Structures lecture 2
 Discrete Structures lecture 2 Discrete Structures lecture 2
Discrete Structures lecture 2
 
Discrete Structures. Lecture 1
 Discrete Structures. Lecture 1  Discrete Structures. Lecture 1
Discrete Structures. Lecture 1
 
Muslim Contributions in Medicine-Geography-Astronomy
Muslim Contributions in Medicine-Geography-AstronomyMuslim Contributions in Medicine-Geography-Astronomy
Muslim Contributions in Medicine-Geography-Astronomy
 
Muslim Contributions in Geography
Muslim Contributions in GeographyMuslim Contributions in Geography
Muslim Contributions in Geography
 

Kürzlich hochgeladen

Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 

Kürzlich hochgeladen (20)

Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 

Database, 3 Distribution Design

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/7
Fragmentation
• Can't we just distribute relations?
• What is a reasonable unit of distribution?
➡ relation
✦ views are subsets of relations ⇒ locality
✦ extra communication
➡ fragments of relations (sub-relations)
✦ concurrent execution of a number of transactions that access different portions of a relation
✦ views that cannot be defined on a single fragment will require extra processing
✦ semantic data control (especially integrity enforcement) more difficult
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/8
Fragmentation Alternatives – Horizontal
PROJ1: projects with budgets less than $200,000
PROJ2: projects with budgets greater than or equal to $200,000

PROJ
PNO   PNAME              BUDGET   LOC
P1    Instrumentation    150000   Montreal
P2    Database Develop.  135000   New York
P3    CAD/CAM            250000   New York
P4    Maintenance        310000   Paris
P5    CAD/CAM            500000   Boston

PROJ1
PNO   PNAME              BUDGET   LOC
P1    Instrumentation    150000   Montreal
P2    Database Develop.  135000   New York

PROJ2
PNO   PNAME              BUDGET   LOC
P3    CAD/CAM            250000   New York
P4    Maintenance        310000   Paris
P5    CAD/CAM            500000   Boston
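The horizontal split above can be sketched in a few lines. This is a minimal illustration, not code from the book: relation and attribute names follow the slide, while the representation of tuples as dicts and the `select` helper are assumptions.

```python
# Horizontal fragmentation of PROJ by the predicate BUDGET < 200000.
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

def select(relation, pred):
    """Relational selection: keep the tuples that satisfy pred."""
    return [t for t in relation if pred(t)]

PROJ1 = select(PROJ, lambda t: t["BUDGET"] < 200000)   # budgets under $200,000
PROJ2 = select(PROJ, lambda t: t["BUDGET"] >= 200000)  # budgets $200,000 and up
print([t["PNO"] for t in PROJ1])   # ['P1', 'P2']
```

Each tuple lands in exactly one fragment because the two selection predicates are mutually exclusive and exhaustive.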
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/9
Fragmentation Alternatives – Vertical
PROJ1: information about project budgets
PROJ2: information about project names and locations

PROJ
PNO   PNAME              BUDGET   LOC
P1    Instrumentation    150000   Montreal
P2    Database Develop.  135000   New York
P3    CAD/CAM            250000   New York
P4    Maintenance        310000   Paris
P5    CAD/CAM            500000   Boston

PROJ1
PNO   BUDGET
P1    150000
P2    135000
P3    250000
P4    310000
P5    500000

PROJ2
PNO   PNAME              LOC
P1    Instrumentation    Montreal
P2    Database Develop.  New York
P3    CAD/CAM            New York
P4    Maintenance        Paris
P5    CAD/CAM            Boston
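A sketch of the vertical alternative: each fragment projects a subset of PROJ's attributes, and both repeat the key PNO so the relation can be rebuilt by a join on PNO. The dict-based `project` helper and the join-by-dictionary trick are illustrative assumptions, not the book's code.

```python
# Vertical fragmentation of PROJ and its reconstruction by joining on PNO.
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
]

def project(relation, attrs):
    """Relational projection onto the listed attributes."""
    return [{a: t[a] for a in attrs} for t in relation]

PROJ1 = project(PROJ, ["PNO", "BUDGET"])        # budget information
PROJ2 = project(PROJ, ["PNO", "PNAME", "LOC"])  # names and locations

# Reconstruction: a natural join of the fragments on the shared key PNO.
names = {t["PNO"]: t for t in PROJ2}
rebuilt = [{**t, **names[t["PNO"]]} for t in PROJ1]
```

Because the key is replicated in both fragments, the join loses nothing: `rebuilt` equals the original PROJ.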
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/10
Degree of Fragmentation
relations ←— finite number of alternatives —→ tuples or attributes
Finding the suitable level of partitioning within this range
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/11
Correctness of Fragmentation
• Completeness
➡ Decomposition of relation R into fragments R1, R2, ..., Rn is complete if and only if each data item in R can also be found in some Ri
• Reconstruction
➡ If relation R is decomposed into fragments R1, R2, ..., Rn, then there should exist some relational operator ∇ such that R = ∇1≤i≤n Ri
• Disjointness
➡ If relation R is decomposed into fragments R1, R2, ..., Rn, and data item di is in Rj, then di should not be in any other fragment Rk (k ≠ j).
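The three rules can be checked mechanically. The sketch below assumes a horizontal fragmentation, where the reconstruction operator ∇ is union; the tuple-as-pair representation is an assumption for illustration only.

```python
# Completeness, reconstruction, and disjointness tests for a horizontal
# fragmentation of a toy relation R (tuples are (PNO, BUDGET) pairs).
from itertools import combinations

R  = [("P1", 150000), ("P2", 135000), ("P3", 250000), ("P4", 310000)]
R1 = [t for t in R if t[1] <  200000]
R2 = [t for t in R if t[1] >= 200000]
fragments = [R1, R2]

complete = all(any(t in f for f in fragments) for t in R)     # every item in some Ri
rebuilt  = sorted(t for f in fragments for t in f)            # for HF, ∇ is union
disjoint = all(not (set(f) & set(g))                          # no item in two fragments
               for f, g in combinations(fragments, 2))
print(complete, rebuilt == sorted(R), disjoint)   # True True True
```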
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/12
Allocation Alternatives
• Non-replicated
➡ partitioned: each fragment resides at only one site
• Replicated
➡ fully replicated: each fragment at each site
➡ partially replicated: each fragment at some of the sites
• Rule of thumb: if the ratio of update queries to read-only queries is ≪ 1, replication is advantageous; otherwise replication may cause problems
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/13
Comparison of Replication Alternatives

                      Full replication       Partial replication    Partitioning
QUERY PROCESSING      Easy                   Same difficulty        Same difficulty
DIRECTORY MANAGEMENT  Easy or nonexistent    Same difficulty        Same difficulty
CONCURRENCY CONTROL   Moderate               Difficult              Easy
RELIABILITY           Very high              High                   Low
REALITY               Possible application   Realistic              Possible application
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/14
Information Requirements
• Four categories:
➡ Database information
➡ Application information
➡ Communication network information
➡ Computer system information
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/15
Fragmentation
• Horizontal Fragmentation (HF)
➡ Primary Horizontal Fragmentation (PHF)
➡ Derived Horizontal Fragmentation (DHF)
• Vertical Fragmentation (VF)
• Hybrid Fragmentation (HF)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/16
PHF – Information Requirements
• Database Information
➡ relationship (the links among relations):

SKILL(TITLE, SAL)
  │ L1
EMP(ENO, ENAME, TITLE)      PROJ(PNO, PNAME, BUDGET, LOC)
          ASG(ENO, PNO, RESP, DUR)

➡ cardinality of each relation: card(R)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/17
PHF – Information Requirements
• Application Information
➡ simple predicates: Given R[A1, A2, …, An], a simple predicate pj is
pj : Ai θ Value
where θ ∈ {=, <, ≤, >, ≥, ≠}, Value ∈ Di, and Di is the domain of Ai.
For relation R we define Pr = {p1, p2, …, pm}
Example:
PNAME = "Maintenance"
BUDGET ≤ 200000
➡ minterm predicates: Given R and Pr = {p1, p2, …, pm}, define M = {m1, m2, …, mz} as
M = { mi | mi = ⋀pj∈Pr pj* }, 1 ≤ j ≤ m, 1 ≤ i ≤ z
where pj* = pj or pj* = ¬pj.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/18
PHF – Information Requirements
Example:
m1: PNAME = "Maintenance" ∧ BUDGET ≤ 200000
m2: NOT(PNAME = "Maintenance") ∧ BUDGET ≤ 200000
m3: PNAME = "Maintenance" ∧ NOT(BUDGET ≤ 200000)
m4: NOT(PNAME = "Maintenance") ∧ NOT(BUDGET ≤ 200000)
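The four minterms above are simply every conjunction of each simple predicate taken either positively or negated. A small sketch of that enumeration (the predicate labels and the dict tuple format are illustrative assumptions):

```python
# Enumerate the minterm predicates of Pr = {p1, p2} by taking each
# predicate or its negation and conjoining them (2^2 = 4 minterms).
from itertools import product

simple = [
    ("PNAME = 'Maintenance'", lambda t: t["PNAME"] == "Maintenance"),
    ("BUDGET <= 200000",      lambda t: t["BUDGET"] <= 200000),
]

minterms = []
for signs in product([True, False], repeat=len(simple)):
    label = " AND ".join(n if s else "NOT(" + n + ")"
                         for (n, _), s in zip(simple, signs))
    fns = [f if s else (lambda t, f=f: not f(t)) for (_, f), s in zip(simple, signs)]
    minterms.append((label, lambda t, fns=fns: all(f(t) for f in fns)))

# The minterms partition the relation: each tuple satisfies exactly one.
t = {"PNAME": "Maintenance", "BUDGET": 150000}
print(len(minterms), sum(m(t) for _, m in minterms))   # 4 1
```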
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/19
PHF – Information Requirements
• Application Information
➡ minterm selectivities: sel(mi)
✦ The number of tuples of the relation that would be accessed by a user query which is specified according to a given minterm predicate mi.
➡ access frequencies: acc(qi)
✦ The frequency with which a user application qi accesses data.
✦ Access frequency for a minterm predicate can also be defined.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/20
Primary Horizontal Fragmentation
Definition:
Rj = σFj(R), 1 ≤ j ≤ w
where Fj is a selection formula, which is (preferably) a minterm predicate.
Therefore,
A horizontal fragment Ri of relation R consists of all the tuples of R which satisfy a minterm predicate mi.
⇒ Given a set of minterm predicates M, there are as many horizontal fragments of relation R as there are minterm predicates.
Set of horizontal fragments also referred to as minterm fragments.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/21
PHF – Algorithm
Given: A relation R, the set of simple predicates Pr
Output: The set of fragments of R = {R1, R2, …, Rw} which obey the fragmentation rules.
Preliminaries:
➡ Pr should be complete
➡ Pr should be minimal
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/22
Completeness of Simple Predicates
• A set of simple predicates Pr is said to be complete if and only if any two tuples belonging to the same minterm fragment defined on Pr have the same probability of being accessed by any application.
• Example:
➡ Assume PROJ[PNO, PNAME, BUDGET, LOC] has two applications defined on it.
➡ Find the budgets of projects at each location. (1)
➡ Find projects with budgets less than $200000. (2)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/23
Completeness of Simple Predicates
According to (1),
Pr = {LOC="Montreal", LOC="New York", LOC="Paris"}
which is not complete with respect to (2). Modify
Pr = {LOC="Montreal", LOC="New York", LOC="Paris", BUDGET≤200000, BUDGET>200000}
which is complete.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/24
Minimality of Simple Predicates
• If a predicate influences how fragmentation is performed (i.e., causes a fragment f to be further fragmented into, say, fi and fj), then there should be at least one application that accesses fi and fj differently.
• In other words, the simple predicate should be relevant in determining a fragmentation: where mi and mj are minterms that are identical except that mi contains the predicate and mj its negation, the predicate is relevant if
acc(mi)/card(fi) ≠ acc(mj)/card(fj)
• If all the predicates of a set Pr are relevant, then Pr is minimal.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/25
Minimality of Simple Predicates
Example:
Pr = {LOC="Montreal", LOC="New York", LOC="Paris", BUDGET≤200000, BUDGET>200000}
is minimal (in addition to being complete). However, if we add
PNAME = "Instrumentation"
then Pr is not minimal.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/26
COM_MIN Algorithm
Given: a relation R and a set of simple predicates Pr
Output: a complete and minimal set of simple predicates Pr' for Pr
Rule 1: a relation or fragment is partitioned into at least two parts which are accessed differently by at least one application.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/27
COM_MIN Algorithm
1. Initialization:
➡ find a pi ∈ Pr such that pi partitions R according to Rule 1
➡ set Pr' ← {pi}; Pr ← Pr − {pi}; F ← {fi}
2. Iteratively add predicates to Pr' until it is complete:
➡ find a pj ∈ Pr such that pj partitions some fk defined according to a minterm predicate over Pr' according to Rule 1
➡ set Pr' ← Pr' ∪ {pj}; Pr ← Pr − {pj}; F ← F ∪ {fj}
➡ if ∃pk ∈ Pr' which is nonrelevant, then
Pr' ← Pr' − {pk}; F ← F − {fk}
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/28
PHORIZONTAL Algorithm
Makes use of COM_MIN to perform fragmentation.
Input: a relation R and a set of simple predicates Pr
Output: a set of minterm predicates M according to which relation R is to be fragmented
1. Pr' ← COM_MIN(R, Pr)
2. determine the set M of minterm predicates
3. determine the set I of implications among pi ∈ Pr'
4. eliminate the contradictory minterms from M
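The last step, eliminating contradictory minterms, can be sketched for the five PROJ predicates used in the later example. The implications are that a tuple's LOC equals exactly one of the three cities, and that BUDGET ≤ 200000 and BUDGET > 200000 are mutually exclusive and exhaustive; the encoding of minterms as sign vectors is an illustrative assumption.

```python
# Of the 2^5 candidate minterms over p1..p5, keep only the satisfiable ones.
from itertools import product

kept = []
for signs in product([True, False], repeat=5):
    loc_signs, (b_le, b_gt) = signs[:3], signs[3:]
    if sum(loc_signs) != 1:   # LOC matches exactly one of Montreal/New York/Paris
        continue
    if b_le == b_gt:          # exactly one of BUDGET<=200000 / BUDGET>200000 holds
        continue
    kept.append(signs)
print(len(kept))   # 6 satisfiable minterms remain
```

Three LOC choices times two BUDGET choices leave six minterms, which is exactly the m1…m6 list on the slide for PROJ.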
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/29
PHF – Example
• Two candidate relations: PAY and PROJ.
• Fragmentation of relation PAY
➡ Application: Check the salary info and determine raise.
➡ Employee records kept at two sites ⇒ application run at two sites
➡ Simple predicates
p1: SAL ≤ 30000
p2: SAL > 30000
Pr = {p1, p2}, which is complete and minimal; Pr' = Pr
➡ Minterm predicates
m1: (SAL ≤ 30000)
m2: NOT(SAL ≤ 30000) = (SAL > 30000)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/30
PHF – Example

PAY1
TITLE       SAL
Mech. Eng.  27000
Programmer  24000

PAY2
TITLE        SAL
Elect. Eng.  40000
Syst. Anal.  34000
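The PAY fragmentation above, written out as a sketch (dict tuples are an assumption): the two minterms on SAL partition PAY into PAY1 and PAY2.

```python
# Primary horizontal fragmentation of PAY by the minterms on SAL.
PAY = [
    {"TITLE": "Elect. Eng.", "SAL": 40000},
    {"TITLE": "Syst. Anal.", "SAL": 34000},
    {"TITLE": "Mech. Eng.",  "SAL": 27000},
    {"TITLE": "Programmer",  "SAL": 24000},
]
PAY1 = [t for t in PAY if t["SAL"] <= 30000]   # m1: SAL <= 30000
PAY2 = [t for t in PAY if t["SAL"] > 30000]    # m2: SAL > 30000
print([t["TITLE"] for t in PAY1])   # ['Mech. Eng.', 'Programmer']
```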
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/31
PHF – Example
• Fragmentation of relation PROJ
➡ Applications:
✦ Find the name and budget of projects given their no.
✓ Issued at three sites
✦ Access project information according to budget
✓ one site accesses ≤ 200000, the other accesses > 200000
➡ Simple predicates
➡ For application (1):
p1: LOC = "Montreal"
p2: LOC = "New York"
p3: LOC = "Paris"
➡ For application (2):
p4: BUDGET ≤ 200000
p5: BUDGET > 200000
➡ Pr = Pr' = {p1, p2, p3, p4, p5}
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/32
PHF – Example
• Fragmentation of relation PROJ continued
➡ Minterm fragments left after elimination
m1: (LOC = "Montreal") ∧ (BUDGET ≤ 200000)
m2: (LOC = "Montreal") ∧ (BUDGET > 200000)
m3: (LOC = "New York") ∧ (BUDGET ≤ 200000)
m4: (LOC = "New York") ∧ (BUDGET > 200000)
m5: (LOC = "Paris") ∧ (BUDGET ≤ 200000)
m6: (LOC = "Paris") ∧ (BUDGET > 200000)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/33
PHF – Example

PROJ1
PNO  PNAME            BUDGET  LOC
P1   Instrumentation  150000  Montreal

PROJ2
PNO  PNAME              BUDGET  LOC
P2   Database Develop.  135000  New York

PROJ4
PNO  PNAME    BUDGET  LOC
P3   CAD/CAM  250000  New York

PROJ6
PNO  PNAME        BUDGET  LOC
P4   Maintenance  310000  Paris
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/34
(figure-only slide: no extractable text)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/35
PHF – Correctness
• Completeness
➡ Since Pr' is complete and minimal, the selection predicates are complete
• Reconstruction
➡ If relation R is fragmented into FR = {R1, R2, …, Rr}, then
R = ⋃Ri∈FR Ri
• Disjointness
➡ Minterm predicates that form the basis of fragmentation should be mutually exclusive.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/36
Derived Horizontal Fragmentation
• Defined on a member relation of a link according to a selection operation specified on its owner.
➡ Each link is an equijoin.
➡ Equijoin can be implemented by means of semijoins.

SKILL(TITLE, SAL)
  │ L1
EMP(ENO, ENAME, TITLE)      PROJ(PNO, PNAME, BUDGET, LOC)
  │ L2                        │ L3
          ASG(ENO, PNO, RESP, DUR)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/37
DHF – Definition
Given a link L where owner(L) = S and member(L) = R, the derived horizontal fragments of R are defined as
Ri = R ⋉F Si, 1 ≤ i ≤ w
where w is the maximum number of fragments that will be defined on R, and
Si = σFi(S)
where Fi is the formula according to which the primary horizontal fragment Si is defined.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/38
DHF – Example
Given link L1 where owner(L1) = SKILL and member(L1) = EMP:
EMP1 = EMP ⋉ SKILL1
EMP2 = EMP ⋉ SKILL2
where
SKILL1 = σSAL≤30000(SKILL)
SKILL2 = σSAL>30000(SKILL)

EMP1
ENO  ENAME      TITLE
E3   A. Lee     Mech. Eng.
E4   J. Miller  Programmer
E7   R. Davis   Mech. Eng.

EMP2
ENO  ENAME     TITLE
E1   J. Doe    Elect. Eng.
E2   M. Smith  Syst. Anal.
E5   B. Casey  Syst. Anal.
E6   L. Chu    Elect. Eng.
E8   J. Jones  Syst. Anal.
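The derived fragmentation above can be sketched as a semijoin: EMP1 keeps the EMP tuples whose TITLE appears in the owner fragment SKILL1. The `semijoin` helper and the dict representation are assumptions; the data is a subset of the slide's tables.

```python
# Derived horizontal fragmentation: EMP1 = EMP ⋉ SKILL1 on TITLE.
SKILL = [
    {"TITLE": "Elect. Eng.", "SAL": 40000},
    {"TITLE": "Syst. Anal.", "SAL": 34000},
    {"TITLE": "Mech. Eng.",  "SAL": 27000},
    {"TITLE": "Programmer",  "SAL": 24000},
]
EMP = [
    {"ENO": "E1", "ENAME": "J. Doe",    "TITLE": "Elect. Eng."},
    {"ENO": "E2", "ENAME": "M. Smith",  "TITLE": "Syst. Anal."},
    {"ENO": "E3", "ENAME": "A. Lee",    "TITLE": "Mech. Eng."},
    {"ENO": "E4", "ENAME": "J. Miller", "TITLE": "Programmer"},
    {"ENO": "E7", "ENAME": "R. Davis",  "TITLE": "Mech. Eng."},
]

def semijoin(member, owner_fragment, attr):
    """Keep the member tuples that join with some tuple of the owner fragment."""
    keys = {t[attr] for t in owner_fragment}
    return [t for t in member if t[attr] in keys]

SKILL1 = [t for t in SKILL if t["SAL"] <= 30000]
EMP1 = semijoin(EMP, SKILL1, "TITLE")
print([t["ENO"] for t in EMP1])   # ['E3', 'E4', 'E7']
```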
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/39
DHF – Correctness
• Completeness
➡ Referential integrity
➡ Let R be the member relation of a link whose owner is relation S, which is fragmented as FS = {S1, S2, ..., Sn}. Furthermore, let A be the join attribute between R and S. Then, for each tuple t of R, there should be a tuple t' of S such that t[A] = t'[A].
• Reconstruction
➡ Same as primary horizontal fragmentation.
• Disjointness
➡ Simple join graphs between the owner and the member fragments.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/40
Vertical Fragmentation
• Has been studied within the centralized context
➡ design methodology
➡ physical clustering
• More difficult than horizontal, because more alternatives exist. Two approaches:
➡ grouping
✦ attributes to fragments
➡ splitting
✦ relation to fragments
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/41
Vertical Fragmentation
• Overlapping fragments
➡ grouping
• Non-overlapping fragments
➡ splitting
We do not consider the replicated key attributes to be overlapping.
Advantage: easier to enforce functional dependencies (for integrity checking etc.)
  • 42. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/42 VF – Information Requirements • Application Information ➡ Attribute affinities ✦ a measure that indicates how closely related the attributes are ✦ This is obtained from more primitive usage data ➡ Attribute usage values ✦ Given a set of queries Q = {q1, q2, …, qq} that will run on the relation R[A1, A2, …, An], use(qi, •) can be defined as follows:
use(qi, Aj) = 1 if attribute Aj is referenced by query qi, 0 otherwise
  • 43. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/43 VF – Definition of use(qi,Aj) Consider the following 4 queries for relation PROJ
q1: SELECT BUDGET FROM PROJ WHERE PNO=Value
q2: SELECT PNAME, BUDGET FROM PROJ
q3: SELECT PNAME FROM PROJ WHERE LOC=Value
q4: SELECT SUM(BUDGET) FROM PROJ WHERE LOC=Value
Let A1 = PNO, A2 = PNAME, A3 = BUDGET, A4 = LOC
      A1  A2  A3  A4
q1     1   0   1   0
q2     0   1   1   0
q3     0   1   0   1
q4     0   0   1   1
  • 44. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/44 VF – Affinity Measure aff(Ai,Aj) The attribute affinity measure between two attributes Ai and Aj of a relation R[A1, A2, …, An] with respect to the set of applications Q = (q1, q2, …, qq) is defined as follows:
aff(Ai, Aj) = Σ (over all queries that access both Ai and Aj) query access
where
query access = Σ (over all sites) (access frequency of the query at that site) × (accesses to the attributes per execution)
  • 45. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/45 VF – Calculation of aff(Ai, Aj) Assume each query in the previous example accesses the attributes once during each execution. Also assume the access frequencies
      S1  S2  S3
q1    15  20  10
q2     5   0   0
q3    25  25  25
q4     3   0   0
Then aff(A1, A3) = 15*1 + 20*1 + 10*1 = 45 and the attribute affinity matrix AA is
      A1  A2  A3  A4
A1    45   0  45   0
A2     0  80   5  75
A3    45   5  53   3
A4     0  75   3  78
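The calculation above can be sketched directly from the usage matrix and the per-site access frequencies, under the slide's assumption that each query accesses each referenced attribute once per execution:

```python
# Sketch: compute the attribute affinity matrix AA from the attribute usage
# matrix `use` and per-site access frequencies `acc`.
# Rows of `use` are queries q1..q4, columns are attributes A1..A4.
use = [
    [1, 0, 1, 0],  # q1
    [0, 1, 1, 0],  # q2
    [0, 1, 0, 1],  # q3
    [0, 0, 1, 1],  # q4
]
# acc[k][l] = access frequency of query qk at site Sl (sites S1, S2, S3)
acc = [
    [15, 20, 10],
    [5,  0,  0],
    [25, 25, 25],
    [3,  0,  0],
]

def affinity(i, j):
    """aff(Ai, Aj): total access frequency, over all sites, of the
    queries that reference both Ai and Aj (one access per execution)."""
    return sum(sum(acc[k]) for k in range(len(use))
               if use[k][i] and use[k][j])

n = len(use[0])
AA = [[affinity(i, j) for j in range(n)] for i in range(n)]
```

Running this reproduces the matrix on the slide, e.g. affinity(0, 2) = 45 for aff(A1, A3).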
  • 46. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/46 VF – Clustering Algorithm • Take the attribute affinity matrix AA and reorganize the attribute orders to form clusters where the attributes in each cluster demonstrate high affinity to one another. • Bond Energy Algorithm (BEA) has been used for clustering of entities. BEA finds an ordering of entities (in our case attributes) such that the global affinity measure
AM = Σi Σj aff(Ai, Aj) [aff(Ai, Aj−1) + aff(Ai, Aj+1)]
(the affinity of Ai and Aj with their neighbors) is maximized.
  • 47. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/47 Bond Energy Algorithm Input: The AA matrix Output: The clustered affinity matrix CA which is a perturbation of AA Initialization: Place and fix one of the columns of AA in CA. Iteration: Place the remaining n-i columns in the remaining i+1 positions in the CA matrix. For each column, choose the placement that makes the most contribution to the global affinity measure. Row order: Order the rows according to the column ordering.
  • 48. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/48 Bond Energy Algorithm “Best” placement? Define the contribution of placing column Ak between columns Ai and Aj:
cont(Ai, Ak, Aj) = 2 bond(Ai, Ak) + 2 bond(Ak, Aj) − 2 bond(Ai, Aj)
where
bond(Ax, Ay) = Σ (z=1 to n) aff(Az, Ax) aff(Az, Ay)
  • 49. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/49 BEA – Example Consider the following AA matrix and the corresponding CA matrix where A1 and A2 have been placed. Place A3: Ordering (0-3-1) : cont(A0,A3,A1) = 2bond(A0 , A3)+2bond(A3 , A1)–2bond(A0 , A1) = 2* 0 + 2* 4410 – 2*0 = 8820 Ordering (1-3-2) : cont(A1,A3,A2) = 2bond(A1 , A3)+2bond(A3 , A2)–2bond(A1,A2) = 2* 4410 + 2* 890 – 2*225 = 10150 Ordering (2-3-4) : cont (A2,A3,A4) = 1780
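The bond and cont computations of this example can be sketched as follows; an index of None stands for an empty boundary position (A0) or a not-yet-placed column, whose bond with anything is 0:

```python
# Sketch of BEA's bond and cont measures over the example AA matrix.
AA = [
    [45, 0, 45, 0],
    [0, 80, 5, 75],
    [45, 5, 53, 3],
    [0, 75, 3, 78],
]

def bond(x, y):
    """bond(Ax, Ay) = sum over z of aff(Az, Ax) * aff(Az, Ay).
    None marks an empty/unplaced column, contributing bond 0."""
    if x is None or y is None:
        return 0
    return sum(AA[z][x] * AA[z][y] for z in range(len(AA)))

def cont(i, k, j):
    """Contribution of placing column Ak between columns Ai and Aj."""
    return 2 * bond(i, k) + 2 * bond(k, j) - 2 * bond(i, j)

# Indices are 0-based: A1 -> 0, A2 -> 1, A3 -> 2, A4 -> 3.
print(cont(None, 2, 0))   # ordering (0-3-1): 8820
print(cont(0, 2, 1))      # ordering (1-3-2): 10150
print(cont(1, 2, None))   # ordering (2-3-4), A4 not yet placed: 1780
```

Note that the slide's value 1780 for ordering (2-3-4) follows because A4 has not been placed yet, so its bond terms are zero and cont reduces to 2 bond(A2, A3) = 2 × 890.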
  • 50. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/50 BEA – Example • Therefore, the CA matrix has the form
      A1  A3  A2
A1    45  45   0
A2     0   5  80
A3    45  53   5
A4     0   3  75
• When A4 is placed, the final form of the CA matrix (after row reorganization) is
      A1  A3  A2  A4
A1    45  45   0   0
A3    45  53   5   3
A2     0   5  80  75
A4     0   3  75  78
  • 51. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/51 VF – Algorithm How can you divide a set of clustered attributes {A1, A2, …, An} into two (or more) sets {A1, A2, …, Ai} and {Ai+1, …, An} such that there are no (or minimal) applications that access both (or more than one) of the sets? [Figure: the clustered affinity matrix CA split along the diagonal into a top-left block TA over attributes A1…Ai and a bottom-right block BA over attributes Ai+1…Am.]
  • 52. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/52 VF – Algorithm Define
TQ = set of applications that access only TA
BQ = set of applications that access only BA
OQ = set of applications that access both TA and BA
and
CTQ = total number of accesses to attributes by applications that access only TA
CBQ = total number of accesses to attributes by applications that access only BA
COQ = total number of accesses to attributes by applications that access both TA and BA
Then find the point along the diagonal that maximizes
z = CTQ × CBQ − COQ²
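Using the running PROJ example, the split-point search can be sketched as follows (the per-query attribute sets and total frequencies come from the earlier slides):

```python
# Sketch: scan every split point along the diagonal of the clustered
# ordering and pick the one maximizing z = CTQ * CBQ - COQ^2.
order = ["A1", "A3", "A2", "A4"]        # BEA column ordering of CA
query_attrs = {                          # attributes each query accesses
    "q1": {"A1", "A3"},
    "q2": {"A2", "A3"},
    "q3": {"A2", "A4"},
    "q4": {"A3", "A4"},
}
freq = {"q1": 45, "q2": 5, "q3": 75, "q4": 3}  # total accesses per query

def best_split(order, query_attrs, freq):
    best = None
    for i in range(1, len(order)):      # split after position i
        TA, BA = set(order[:i]), set(order[i:])
        CTQ = sum(f for q, f in freq.items() if query_attrs[q] <= TA)
        CBQ = sum(f for q, f in freq.items() if query_attrs[q] <= BA)
        COQ = sum(f for q, f in freq.items()
                  if query_attrs[q] & TA and query_attrs[q] & BA)
        z = CTQ * CBQ - COQ ** 2
        if best is None or z > best[0]:
            best = (z, sorted(TA), sorted(BA))
    return best

z, TA, BA = best_split(order, query_attrs, freq)
print(z, TA, BA)   # best split: TA = {A1, A3}, BA = {A2, A4}
```

The winning split puts PNO and BUDGET in one fragment and PNAME and LOC in the other, since q1 works only on the first set and q3 only on the second.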
  • 53. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/53 VF – Algorithm Two problems:
Cluster forming in the middle of the CA matrix
➡ Shift a row up and a column left and apply the algorithm to find the “best” partitioning point
➡ Do this for all possible shifts
➡ Cost O(m²)
More than two clusters
➡ m-way partitioning
➡ try 1, 2, …, m–1 split points along the diagonal and try to find the best point for each of these
➡ Cost O(2^m)
  • 54. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/54 VF – Correctness A relation R, defined over attribute set A and key K, generates the vertical partitioning FR = {R1, R2, …, Rr}. • Completeness ➡ The following should be true for A: A = ∪ ARi • Reconstruction ➡ Reconstruction can be achieved by R = ⋈K Ri, ∀Ri ∈ FR • Disjointness ➡ TID's are not considered to be overlapping since they are maintained by the system ➡ Duplicated keys are not considered to be overlapping
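The reconstruction condition (joining the fragments back on the key) can be sketched on the PROJ split obtained earlier. The PROJ data values here are illustrative assumptions, not taken from this slide:

```python
# Sketch: verify lossless reconstruction of a vertical partitioning by
# joining the fragments back on the key PNO. Tuple values are assumed.
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation", "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
]

def project(rel, attrs):
    """Relational projection onto attrs (the key is kept in each fragment)."""
    return [{a: t[a] for a in attrs} for t in rel]

def join_on_key(r1, r2, key):
    """Equijoin r1 ⋈ r2 on key (the key is unique within each fragment)."""
    index = {t[key]: t for t in r2}
    return [{**t, **index[t[key]]} for t in r1 if t[key] in index]

PROJ1 = project(PROJ, ["PNO", "BUDGET"])        # fragment for {PNO, BUDGET}
PROJ2 = project(PROJ, ["PNO", "PNAME", "LOC"])  # fragment for {PNAME, LOC}

reconstructed = join_on_key(PROJ1, PROJ2, "PNO")
assert reconstructed == PROJ   # completeness + reconstruction hold
```

Each fragment replicates the key PNO; as the slide notes, such duplicated keys are not counted as a disjointness violation.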
  • 55. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/55 Hybrid Fragmentation [Figure: relation R is horizontally fragmented (HF) into R1 and R2; R1 is then vertically fragmented (VF) into R11 and R12, and R2 into R21, R22, R23.]
  • 56. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/56 Fragment Allocation • Problem Statement Given F = {F1, F2, …, Fn} fragments S = {S1, S2, …, Sm} network sites Q = {q1, q2, …, qq} applications Find the "optimal" distribution of F to S. • Optimality ➡ Minimal cost ✦ Communication + storage + processing (read & update) ✦ Cost in terms of time (usually) ➡ Performance ✦ Response time and/or throughput ➡ Constraints ✦ Per site constraints (storage & processing)
  • 57. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/57 Information Requirements • Database information ➡ selectivity of fragments ➡ size of a fragment • Application information ➡ access types and numbers ➡ access localities • Communication network information ➡ bandwidth ➡ latency ➡ communication overhead • Computer system information ➡ unit cost of storing data at a site ➡ unit cost of processing at a site
  • 58. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/58 Allocation File Allocation (FAP) vs Database Allocation (DAP): ➡ Fragments are not individual files ✦ relationships have to be maintained ➡ Access to databases is more complicated ✦ remote file access model not applicable ✦ relationship between allocation and query processing ➡ Cost of integrity enforcement should be considered ➡ Cost of concurrency control should be considered
  • 59. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/59 Allocation – Information Requirements • Database Information ➡ selectivity of fragments ➡ size of a fragment • Application Information ➡ number of read accesses of a query to a fragment ➡ number of update accesses of a query to a fragment ➡ A matrix indicating which queries update which fragments ➡ A similar matrix for retrievals ➡ originating site of each query • Site Information ➡ unit cost of storing data at a site ➡ unit cost of processing at a site • Network Information ➡ communication cost/frame between two sites ➡ frame size
  • 60. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/60 Allocation Model General Form:
min(Total Cost)
subject to
response time constraint
storage constraint
processing constraint
Decision variable:
xij = 1 if fragment Fi is stored at site Sj, 0 otherwise
  • 61. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/61 Allocation Model • Total Cost = Σ (all queries) query processing cost + Σ (all sites) Σ (all fragments) cost of storing a fragment at a site • Storage Cost (of fragment Fj at site Sk) = (unit storage cost at Sk) × (size of Fj) × xjk • Query Processing Cost (for one query) = processing component + transmission component
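The storage component of the model can be sketched directly from the decision variable xjk. All unit costs, sizes, and query costs below are illustrative assumptions, not values from the slides:

```python
# Sketch of the allocation cost model's storage component.
# x[f][s] = 1 iff fragment f is stored at site s; all numbers are assumed.
size = {"F1": 100, "F2": 250}             # fragment sizes (e.g. MB)
unit_storage = {"S1": 2.0, "S2": 1.5}     # unit storage cost per site
x = {
    "F1": {"S1": 1, "S2": 0},
    "F2": {"S1": 0, "S2": 1},
}

def storage_cost():
    """Sum over all sites and fragments of (unit cost) * (size) * x_jk."""
    return sum(unit_storage[s] * size[f] * x[f][s]
               for f in size for s in unit_storage)

def total_cost(query_costs):
    """Total cost = sum of query processing costs + storage costs."""
    return sum(query_costs) + storage_cost()

print(storage_cost())         # 2.0*100 + 1.5*250 = 575.0
print(total_cost([40, 60]))   # 100 + 575.0 = 675.0
```

In the real model the query processing costs are themselves functions of x (the next slides break them into processing and transmission components), which is what makes the optimization hard.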
  • 62. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/62 Allocation Model • Query Processing Cost Processing component = access cost + integrity enforcement cost + concurrency control cost ➡ Access cost = Σ (all sites) Σ (all fragments) (no. of update accesses + no. of read accesses) × xij × local processing cost at a site ➡ Integrity enforcement and concurrency control costs ✦ Can be similarly calculated
  • 63. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/63 Allocation Model • Query Processing Cost Transmission component = cost of processing updates + cost of processing retrievals ➡ Cost of updates = Σ (all sites) Σ (all fragments) update message cost + Σ (all sites) Σ (all fragments) acknowledgment cost ➡ Retrieval cost = Σ (all fragments) min (over all sites) (cost of retrieval command + cost of sending back the result)
  • 64. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/64 Allocation Model • Constraints ➡ Response Time: execution time of query ≤ max. allowable response time for that query ➡ Storage Constraint (for a site): Σ (all fragments) storage requirement of a fragment at that site ≤ storage capacity of that site ➡ Processing Constraint (for a site): Σ (all queries) processing load of a query at that site ≤ processing capacity of that site
  • 65. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/65 Allocation Model • Solution Methods ➡ FAP is NP-complete ➡ DAP also NP-complete • Heuristics based on ➡ single commodity warehouse location (for FAP) ➡ knapsack problem ➡ branch and bound techniques ➡ network flow
  • 66. Distributed DBMS © M. T. Özsu & P. Valduriez Ch.3/66 Allocation Model • Attempts to reduce the solution space ➡ assume all candidate partitionings known; select the “best” partitioning ➡ ignore replication at first ➡ sliding window on fragments