The International Journal of Time-Critical Computing Systems, 19, 169–193 (2000)
© 2000 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Transaction Scheduling in Distributed Real-Time
Systems
KWOK-WA LAM
Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
VICTOR C. S. LEE csvlee@cityu.edu.hk
Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
SHEUNG-LUN HUNG
Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
Abstract. In this paper, we study the performance of the optimistic approach to concurrency control in
distributed real-time database systems (RTDBS). The traditional optimistic approach suffers from the problem of
unnecessary restarts. Transaction restarts can significantly increase the system workload and intensify resource
and data contention. In distributed environments, the complexity of the system and the high communication
overhead exacerbate the problem. The number of unnecessary restarts is therefore the determining factor in
the performance of the optimistic approach in distributed RTDBS. When the optimistic approach is extended to
distributed environments, a number of issues resulting from the increased complexity and communication overhead
have to be resolved. In this paper, a new real-time distributed optimistic concurrency control (DOCC) protocol
with dynamic adjustment of serialization order (DASO), called DOCC-DA, is proposed. This protocol avoids
unnecessary transaction restarts by dynamically adjusting the serialization order of conflicting transactions.
Therefore, resources can be saved and more transactions can meet their deadlines. In the DOCC-DA protocol, a new
distributed circular validation scheme is included to facilitate transaction validation in distributed environments.
The performance of the DOCC-DA protocol has been examined in detail by simulation. The results showed that
the performance of the DOCC-DA protocol is consistently better than that of other protocols.
Keywords: distributed real-time databases, optimistic concurrency control, dynamic adjustment of serialization
order, distributed circular validation
1. Introduction
A real-time database system (RTDBS) is one whose basic specification and design criteria
must include the requirement of meeting the timing constraints of real-time transactions
(Yu, 1994). The correctness of an RTDBS depends not only on the logical correctness of the
results but also on their timeliness. That is, a transaction must be completed within a specified
time, called its deadline. If deadlines are missed, the consequences can range from the loss of
the transaction's value to catastrophe, depending on the characteristics of the application
(Ramamritham, 1993). Common applications of RTDBS can be found in international
financial market systems, air traffic control systems, nuclear power plant management
systems and integrated manufacturing systems.
The goal of scheduling transactions in RTDBS is two-fold: to meet the timing constraints
and to ensure the data consistency. Priority-based scheduling in real-time systems can be
adopted to enforce timing constraints of transactions while concurrency control in database
systems can be used to maintain data consistency. Unfortunately, the two mechanisms often
operate in an incompatible manner. For instance, many concurrency control protocols are
based on transaction blocking such as the two-phase locking (2PL) protocol (Bernstein,
1987). However, transaction blocking may cause priority inversion in which a high priority
transaction is blocked by lower priority transactions (Huang, 1992; Sha, 1990) and priority
inversion has a negative impact on real-time scheduling.
Optimistic concurrency control (OCC) protocols are another popular approach to concur-
rency control for RTDBS. Some studies (Haritsa, 1990, 1992) showed that OCC protocols
outperform 2PL protocols over a wide range of system loading and resource availability
in firm RTDBS. The main reason lies in the nice properties of OCC protocols: they are
non-blocking and deadlock-free, and their restarts are fruitful. In OCC protocols, conflict resolution
is performed at the end of a transaction's execution. Therefore, restarting other conflicting
transactions always accompanies a transaction commitment; we call this a fruitful restart. On
the other hand, in 2PL protocols, a transaction that restarts other conflicting transactions
may itself eventually be aborted and discarded (Abbott, 1992; Lam, 1994). Such restarts of the
conflicting transactions are called fruitless restarts.
However, the restart-based conflict resolution policy adopted in OCC protocols incurs high
wastage of resources (Kung, 1981). Transaction restarts have two negative implications in
RTDBS. Firstly, since the number of restarts can be unbounded, if the restarted transactions
miss their deadlines in the end, the time and resources spent on them will be wasted.
Secondly, the time and resources needed to handle restarted transactions may seriously
affect the ability of other active transactions to meet their deadlines. Thus, for RTDBS, it is particularly
important to reduce the number of transaction restarts such that the amount of the resources
wasted can be reduced and more transactions can meet their deadlines. In fact, the number
of restarts incurred by OCC protocols is the major factor deciding the performance of
concurrency control in RTDBS (Datta, 1997).
Many real-time applications, such as advanced command and control systems, are inherently
distributed (Son, 1993; Ulusoy, 1994). A distributed database fits more naturally with
their decentralized structures. Distributed database systems allow transactions to access
shared data at remote sites. While transactions are scheduled under timing constraints in a
distributed RTDBS, it should also be ensured that both the global consistency and the local
consistency of a distributed database are preserved. To this end, it is required to exchange
messages containing scheduling information between the sites where the transaction is being
executed. The communication delays caused by message exchanges constitute substantial
overheads to the response time of a distributed transaction. Thus, it becomes more difficult
to satisfy the timing constraints of transactions in a distributed RTDBS than in a centralized
one.
To reduce the number of unnecessary restarts in OCC protocols, Lee et al. (Lee, 1993b)
suggested the OCC-TI protocol which utilized the notion of dynamic adjustment of serial-
ization order (DASO) and timestamp intervals (Konana, 1997). In the OCC-TI protocol,
the serialization order between transactions is dynamically induced according to the types
of data conflicts. The temporary serialization order of a transaction relative to others is
defined by its timestamp interval. The timestamp interval of an active transaction may be
adjusted at the validation of another transaction or when data conflict is detected during its
read phase. Whenever the timestamp interval of a transaction shuts out (i.e., becomes empty), a non-serializable
execution is detected and the transaction will be restarted. Their performance study (Lee,
1993b) showed that the OCC-TI outperformed other OCC protocols and claimed that the
most determinant factor on the performance of OCC protocols is the number of transaction
restarts.
However, the OCC-TI protocol becomes ineffective if it is simply extended to distributed
environments. The OCC-TI protocol requires updating the timestamp interval of a trans-
action for each data access or when a conflicting transaction commits. Since the same
current timestamp interval of a transaction must be kept by all its sub-transactions, it will
cause many message passings between sub-transactions. Consequently, data scheduling at
one site is tightly coupled with data scheduling at other sites and with transaction commits. This
leads to a high degree of system inefficiency in terms of resource utilization and high
overheads. Another weakness of the OCC-TI protocol is its inability to handle the
write-write conflicts that are common in many RTDBS.
As suggested by Lee (1993b), an effective approach to enhancing the performance of
OCC protocols in RTDBS is to reduce the number of transaction restarts, which can be
achieved by using the notion of DASO. However, utilizing the notion of DASO in distributed
environments poses different issues in designing the protocol. In this paper, we attempt to
address the impact of transaction restarts on the performance of a distributed RTDBS. We
present a new distributed optimistic concurrency control (DOCC) protocol called DOCC-
DA, which reduces the number of transaction restarts by using the notion of DASO. The
DOCC-DA protocol does not require many message passings in inducing the temporary
serialization order of a transaction. Hence, each site may have more autonomy in scheduling
data to transactions and the sites are not tightly coupled with each other. This is a desirable
feature for distributed applications such as industrial automation and banking where delays
associated with inter-process communications and remote database accesses are of real
concern (Son, 1990). Another contribution of our work is that our new protocol can resolve
write-write conflicts between transactions such that the number of restarts can be further
reduced. The remainder of this paper is organized as follows. Section 2 describes the
design issues of OCC protocol in distributed environments and our approach to solving
these issues. The DOCC-DA protocol is presented in section 3. Section 4 describes the
performance model for evaluation of the DOCC-DA protocol. In section 5, we discuss the
performance results and conclude the study in section 6.
2. Design Issues of OCC in Distributed RTDBS
In this section, we discuss the issues to be resolved when the notion of DASO is utilized
in designing an OCC protocol in distributed environments. First, we will describe the
distributed real-time database model and the problem of unnecessary restarts. Then, we
explain the use of DASO to alleviate the problem of unnecessary restarts. We also discuss
how to precisely adjust and record the temporary serialization order with few message
passings, and how to design an efficient validation scheme given the increased complexity
that the use of DASO brings in distributed environments.
2.1. Distributed Real-Time Database Model
In a distributed database system, information is stored across a number of sites intercon-
nected through a reliable communication network. Each site has a system-wide unique
identifier, and sites communicate through messages. All messages sent arrive at their des-
tinations in finite time and in the order of sending. Each site is equipped with a logical
clock, and the clocks of all sites are loosely synchronized. The system provides a function
that, when called, generates a system-wide unique timestamp from the local clock and the
site identifier.
Each distributed transaction is a collection of sub-transactions that execute at various sites
where the requested data objects reside. Each transaction is assigned a globally unique
priority based on its real-time constraint. This priority is carried by all its sub-transactions.
The operations of a transaction are executed in a sequential manner, one at a time. A
sub-transaction accesses its data objects and performs its processing independent of the
sub-transactions of other transactions.
A transaction is executed in three phases (Kung, 1981): a read phase, a validation phase
and a write phase if the validation is successful. In the read phase, the requested data objects
are read from the database and write operations are performed on a private workspace not
accessible to other transactions. At the end of transaction execution, the last two phases are
initiated. Validation phase ensures that the execution of a validating transaction preserves
serializability. Conflict resolution relies on transaction restart. In the write phase, all
updates made by a transaction will be transferred to the database and are then made visible
to other transactions.
Transactions are scheduled to access resources such as CPU and data objects based on their
priorities. The priorities of transactions may represent their urgency or criticality depending
on the assignment policy of priority to transactions. The priorities of transactions can be
static and dynamic (Ramamritham, 1993). As reported in (Abbott, 1992), dynamic priority
assignment achieves better system performance than static assignment in most cases. One of the
best dynamic priority assignment schemes is based on the deadlines of transactions. Hence,
we use the earliest deadline first algorithm to assign priorities to transactions.
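The earliest deadline first rule can be sketched as follows (a minimal illustration; the transaction names and deadline values are hypothetical):

```python
import heapq

def edf_schedule(transactions):
    """Serve transactions in earliest-deadline-first order.

    `transactions` is an iterable of (deadline, name) pairs; the
    transaction with the smallest deadline has the highest priority.
    """
    heap = list(transactions)
    heapq.heapify(heap)  # tuples compare on deadline first
    order = []
    while heap:
        _deadline, name = heapq.heappop(heap)
        order.append(name)
    return order

# T2 has the earliest deadline, so it is scheduled first.
print(edf_schedule([(30, "T1"), (10, "T2"), (20, "T3")]))  # ['T2', 'T3', 'T1']
```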
2.2. Problem of Unnecessary Restarts
There are two validation schemes commonly used in conventional OCC protocols, namely
backward validation and forward validation (Harder, 1984). In the backward validation
scheme, the serializability of transaction execution is checked against the committed trans-
actions whereas in the forward validation (FV) scheme, transactions are validated against
other concurrently executing transactions. Most real-time OCC protocols employ
the forward validation scheme (OCC-FV) (Haritsa, 1990a, 1990b, 1992; Huang, 1991;
Lam, 1995; Lee, 1993a), as it provides greater flexibility in choosing either the validating
transaction or the conflicting active transactions for restart, according to the adopted conflict
resolution policy. Conflict resolution policies are normally based on transactions' crit-
icality or priorities. However, there is no way to take these factors into consideration in
the backward validation scheme, as the only candidate possible for restart is the validating
transaction itself.
However, the OCC-FV protocol has a serious problem of unnecessary restarts (Lam,
1995; Lee, 1993b) which is caused by an ineffective validation scheme. The FV scheme
may erroneously regard some serializable executions as non-serializable ones. A transaction
that may be able to commit is selected to restart. To explain the problem fully, the mechanism
of the FV scheme is reviewed. In the FV scheme, the write set of a validating transaction is
checked against the read sets of the concurrently executing transactions. Let Tc (c = 1, 2, . . . , n)
be the concurrently executing transactions in their read phase, Tv be the validating transaction,
and RS(Ti) and WS(Ti) be the read set and the write set of transaction Ti respectively. The
FV algorithm is as follows.
Validate(Tv) {
    for each Tc (c = 1, 2, . . . , n) do
        if WS(Tv) ∩ RS(Tc) ≠ ∅ then
            restart(Tc);
        endif;
    enddo;
}
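This check can be sketched as a runnable function (the names are ours, not the paper's; `concurrent` maps each executing transaction to its read set):

```python
def forward_validate(ws_v, concurrent):
    """Forward validation (OCC-FV): every concurrently executing
    transaction whose read set intersects the validating transaction's
    write set `ws_v` is selected for restart."""
    return [tid for tid, rs_c in concurrent.items() if ws_v & rs_c]

# T1 validates with WS(T1) = {x, z}; T2 and T3 both read x, so both are
# restarted under OCC-FV, even though schedule S2 below exhibits a
# serialization order in which neither restart is necessary.
print(forward_validate({"x", "z"}, {"T2": {"x"}, "T3": {"x", "y"}}))  # ['T2', 'T3']
```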
The FV scheme preserves the serializability of transactions (Bernstein, 1987) under the
assumption that the serialization order of transactions is determined by their commitment
order (Harder, 1984). It implies that the serialization order of the validating transaction
always precedes that of all other concurrently executing transactions. That is, all conflicting transac-
tions of the validating transaction have to be restarted under the FV scheme. This simplifying
assumption leads to the problem of unnecessary restarts.
The following example illustrates the problem. Assume that there are three transactions.
T1 : r1[x]w1[x]w1[z]
T2 : r2[x]w2[y]w2[z]
T3 : r3[x]r3[y]w3[z]
where ri[x] and wi[x] represent a read and a write operation respectively on data object x
by transaction Ti. Let vi and ci be the validation and the commit operations of transaction Ti
respectively. Suppose the partial schedule S1, when T1 comes to validation, is as follows.
S1 : r1[x]w1[x]r2[x]r3[x]r3[y]w3[z]w1[z]v1c1
Under the OCC-FV protocol, T2 and T3 have to be restarted at the validation phase of T1.
This is because T1 has a write-read conflict on data object x with T2, and has write-read and write-
write conflicts on data objects x and z respectively with T3. However, the restarts of T2 and
T3 are unnecessary if T2 and T3 are allowed to continue and enter their validation phase as
shown in the schedule S2.
S2 : r1[x]w1[x]r2[x]r3[x]r3[y]w3[z]w1[z]v1c1w2[y]w2[z]v2c2v3c3
Although the commitment order of S2 is T1 → T2 → T3, the serializability of the transac-
tions can still be preserved such that S2 is equivalent to a serial schedule with the serialization
order of T3 → T2 → T1. That is, it is unnecessary to enforce that the serialization order
be identical to the commitment order. We refer to the restarts of T2 and T3 under the FV
scheme as unnecessary restarts, since there exists a serialization order in which T2 and T3
are not required to restart. Note that T2 and T3 will be restarted under the OCC-TI protocol
because it is unable to handle the write-write conflicts between transactions.
2.3. Dynamic Adjustment of Serialization Order with Thomas’ Write Rule
In this section, we will discuss how the serialization order of transactions can be dynamically
adjusted to resolve their data conflicts so that the unnecessary restart problem can be avoided.
In the validation of a transaction Tv, there are three possible types of conflicts between
Tv and a concurrently executing transaction Tc that has not entered the validation
phase: write-read conflicts (WS(Tv) ∩ RS(Tc) ≠ ∅), write-write conflicts
(WS(Tv) ∩ WS(Tc) ≠ ∅) and read-write conflicts (RS(Tv) ∩ WS(Tc) ≠ ∅). The
read-write conflicts may be resolved by adjusting the serialization order between Tv and Tc
as Tv → Tc. This is called forward adjustment. The write-read conflicts may be resolved
by adjusting the serialization order between Tv and Tc as Tc → Tv. This is called backward
adjustment such that the read of Tc precedes the write of Tv in the serialization order. This
is possible because the value of the data object read by Tc is not affected by the write of
Tv. Thus, Tc’s read has not yet been invalidated by the write of Tv. Finally, the write-write
conflicts may be resolved by either forward adjustment (Tv → Tc) or backward adjustment
(Tc → Tv). For forward adjustment, Tv’s write does not overwrite Tc’s write. For backward
adjustment, it is possible when Tc’s write can be ignored by applying Thomas’ write rule
(Bernstein, 1987).
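As a sketch of these rules (the function and set names are hypothetical), the admissible adjustments for a concurrent transaction Tc against a validating Tv follow from the three intersections:

```python
def adjustments(ws_v, rs_v, ws_c, rs_c):
    """Return, per conflict type, the directions in which Tc can be
    adjusted relative to Tv: 'backward' means Tc -> Tv, 'forward' means
    Tv -> Tc.  Write-write conflicts admit either direction, backward
    being possible via Thomas' write rule."""
    result = {}
    if ws_v & rs_c:
        result["write-read"] = {"backward"}
    if rs_v & ws_c:
        result["read-write"] = {"forward"}
    if ws_v & ws_c:
        result["write-write"] = {"forward", "backward"}
    return result

# T1 validating against T3 in schedule S2: WS(T1) = {x, z}, RS(T1) = {x},
# WS(T3) = {z}, RS(T3) = {x, y}.  Both conflicts admit a backward
# adjustment, so T3 can be serialized before T1 and need not restart.
print(adjustments({"x", "z"}, {"x"}, {"z"}, {"x", "y"}))
```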
If Tc has to be both backward and forward adjusted with respect to Tv, it has a serious
conflict with Tv: there is no consistent position for Tc in the serialization order relative to
Tv, so the schedule has become a non-serializable execution. One of them
has to be restarted depending on the adopted conflict resolution policy. It is obvious that
those concurrently executing transactions that need only either backward adjustment or
forward adjustment are allowed to continue as they do not have serious conflict with the
validating transaction yet. Hence, it is easily observed that if the serialization order can be
dynamically adjusted, the number of restarts can be substantially reduced.
Note that the OCC-FV protocol considers only forward adjustment to resolve data
conflicts and ignores backward adjustment. Thus, transactions that have only write-
read conflicts with the validating transaction are unnecessarily restarted by the OCC-FV
protocol. For the OCC-TI protocol, the write-write conflicts are resolved with only forward
adjustment, i.e., Tv → Tc; the possibility of backward adjustment by applying
Thomas' write rule is ignored. Reconsider the schedule S2, repeated below, as an example:
S2 : r1[x]w1[x]r2[x]r3[x]r3[y]w3[z]w1[z]v1c1w2[y]w2[z]v2c2v3c3
In spite of DASO in the OCC-TI protocol, there are two instances where transactions are
restarted unnecessarily in this example. At the validation phase of T1, the serialization orders
of T2 and T3 are both backward adjusted before T1 because T1 has a write-read conflict on data
object x with both T2 and T3. Under the OCC-TI protocol, T3 will also need forward adjustment
after T1 because of the write-write conflict on data object z. Therefore, the OCC-TI protocol will
consider T3 to have a serious conflict with T1, since T3 needs both forward and backward
adjustments. Hence, T3 will be restarted. Subsequently, T2 will also be restarted when it
pre-writes data object z in its read phase.
In fact, the restarts of T2 and T3 under the OCC-TI protocol are unnecessary. Although
T3 has a write-write conflict on data object z with T1, the serialization order of T3 can be
backward adjusted before T1 because T3's write on data object z can be ignored by applying
Thomas' write rule: processing the two consecutive writes of T3 and T1 in the serialization
order T3 → T1 produces the same result on data object z as processing only the last write,
that of T1. At the validation phase of T2, T3 is further backward adjusted before T2 because
T2 has a write-read conflict on data object y with T3. Likewise, the writes of T2 and T3 on
data object z can both be ignored by Thomas' write rule. Hence, T2 and T3 can both be
committed, and the final serialization order becomes T3 → T2 → T1. Thus, the inclusion
of Thomas' write rule in resolving write-write conflicts adds extra flexibility in adjusting
the serialization order of transactions.
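A minimal sketch of Thomas' write rule in isolation (the names are hypothetical): a write whose serialization timestamp is older than the object's recorded write timestamp is simply ignored, since a later write has already determined the final value.

```python
def twr_write(db, wts, obj, ts, value):
    """Apply a write of `value` to `obj` by a transaction with
    serialization timestamp `ts`; ignore it if a later write has
    already been applied (Thomas' write rule)."""
    if ts < wts.get(obj, float("-inf")):
        return False  # obsolete write: ignored, final state unchanged
    db[obj] = value
    wts[obj] = ts
    return True

db, wts = {}, {}
twr_write(db, wts, "z", 10, "from T1")  # T1, final timestamp 10
twr_write(db, wts, "z", 3, "from T3")   # T3, backward adjusted to 3: ignored
print(db["z"])  # from T1
```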
2.4. Design Issues of Validation Scheme
For OCC protocols in centralized database systems, the validation phase and the write phase
are implemented in a system-wide critical section (Harder, 1984; Kung, 1981). That is, there
is no parallelism in the validation phase and in the write phase. This decreases the amount
of concurrency. If this processing strategy is simply extended to distributed environments,
all sites must be checked even though the transaction may not have been executed at some of
these sites, which incurs high communication overheads. The validation phase and the write
phase form an integral critical section. After validation, local sub-transactions
have to wait within their local critical sections until they receive commit messages from their
parent transaction to enter the write phase. This may lead to a validation deadlock, in which
each member of a group of sub-transactions waits indefinitely to enter its local critical
section, which is occupied by some other sub-transaction. This validation deadlock is
mainly caused by the requirement that a sub-transaction wait inside a critical section. The validation deadlock
can be avoided if the write phase is decoupled from the validation phase by pulling
it out of the critical section. However, this decoupling introduces another problem: the
global serializability of transaction execution cannot be guaranteed even if a transaction
passes local validation at all participating sites.
Instead of using a system-wide critical section in distributed environments, we propose
a locking mechanism to implement the validation scheme. It is well known that the major
problem of a locking approach is deadlock, which is caused by the blocking nature of
locks. Deadlock is especially difficult to detect in distributed environments. The high
overhead of distributed deadlock detection and the subsequent deadlock resolution de-
creases the probability of transactions meeting their deadlines in distributed RTDBS. Hence,
it is desirable to have a deadlock-free validation scheme. Since the data objects accessed
Table 1. Lock compatibility table.

                      PR-lock held   PW-lock held   VR-lock held   VW-lock held
PR-lock requested          OK             OK           NOT OK         NOT OK
PW-lock requested          OK             OK           NOT OK         NOT OK
VR-lock requested          OK             OK           NOT OK         NOT OK
VW-lock requested          OK             OK           NOT OK         NOT OK
by a transaction are already known in the validation phase, the validating transaction can
request validation locks in a circular and unidirectional fashion. In this way, deadlock
can be avoided as there will be no two validating transactions mutually waiting for each
other.
Another issue in designing a validation scheme is efficiency. In the validation phase,
the DASO scheme is possible only when the current read/write sets of other
concurrently executing transactions at all sites are known. It is very difficult, and incurs
too much overhead, to maintain globally the read/write sets of all concurrently executing
transactions at all sites. One intuitive approach is for all sub-transactions to inform
the parent transaction of the read/write sets of their conflicting sub-transactions, so that the
parent transaction can centrally adjust the relative serialization order of the conflicting
transactions. However, this centralized approach is inefficient, as it introduces a potential
performance bottleneck at the parent transaction as well as a risk of single point of failure.
2.5. Distributed Circular Validation Scheme
We now describe another core component of the DOCC-DA protocol, a new distributed
validation scheme using locking approach and circular validation. Four types of locks are
utilized in the validation scheme. A PR-lock or PW-lock has to be obtained when a data
object is read, or pre-written into the transaction's private workspace, respectively, during the read
phase. Since the PR-lock and PW-lock mainly serve as indicators to inform the system that
the data objects are accessed by the executing transactions, they are compatible with each
other. When a transaction enters the validation phase after it has obtained all the requested
PR-locks and PW-locks, these locks will be upgraded to the VR-locks and VW-locks one by
one respectively. The function of the VR-lock and VW-lock is to prevent other transactions
from accessing the data object. Thus, the VR-lock and VW-lock are incompatible with
each other and with the PR-lock and PW-lock. If a requested validation lock (VR-lock
or VW-lock) is denied, the validating transaction is blocked until the data object is
unlocked by the holding transaction. To avoid the problem of deadlock due to the exclusive
nature of the two “blocking” locks (VR-lock and VW-lock), they are obtained in a circular
and unidirectional fashion. If the blocking direction is unidirectional, deadlock cannot be
formed. The compatibility of these locks is shown in Table 1.
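Table 1 reduces to a simple predicate, sketched below (the lock names are those of the table; the function name is ours):

```python
P_LOCKS = {"PR", "PW"}

def compatible(requested, held):
    """Lock compatibility per Table 1: the two phase locks (PR, PW)
    are compatible with each other; any pair involving a validation
    lock (VR, VW), whether held or requested, is incompatible."""
    return requested in P_LOCKS and held in P_LOCKS

print(compatible("PW", "PR"))  # True: both are phase locks
print(compatible("PR", "VW"))  # False: a validation lock is held
```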
In our new validation scheme, we assume that there is a monotonically increasing index
associated with each data object in the whole database. When a transaction enters the
validation phase, it triggers the sub-transaction that P-locked the data object with the lowest
index to start its local validation first. Each sub-transaction performs its local validation
one by one in a circular and unidirectional fashion. The sub-transaction will use its local
serialization order timestamp (SOT) to check the validity of the data objects read or pre-
written by it. If all are valid, the sub-transaction will upgrade the PR-locks and PW-locks
to the VR-locks and VW-locks one by one. If any transaction has data conflict with it, a
token containing the type of adjustment and the information of the conflicting transaction
is created for each conflicting transaction. The token contains the following information:
CTID : the transaction ID of the conflicting transaction;
FOR : a flag indicating that the conflicting transaction needs forward adjustment
because it has a read-write conflict with the validating transaction;
BACK : a flag indicating that the conflicting transaction needs backward adjustment
because it has a write-read or write-write conflict with the validating transaction.
Afterwards, the set of tokens, contained in a single validation packet, is passed to the vali-
dating sub-transaction at the next site downstream, together with a packet header containing
the validating transaction ID (VTID) and its SOT.
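The token and packet structures can be sketched as follows (the field and method names are hypothetical; only CTID, FOR, BACK, VTID and SOT come from the text):

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    ctid: str               # CTID: ID of the conflicting transaction
    forward: bool = False   # FOR flag
    backward: bool = False  # BACK flag

@dataclass
class ValidationPacket:
    vtid: str               # VTID: validating transaction ID (header)
    sot: float              # SOT of the validating transaction (header)
    tokens: dict = field(default_factory=dict)  # ctid -> Token

    def mark(self, ctid, forward=False, backward=False):
        # Update the existing token's flags, or create a new token.
        tok = self.tokens.setdefault(ctid, Token(ctid))
        tok.forward = tok.forward or forward
        tok.backward = tok.backward or backward

    def serious(self):
        # Both flags set: no consistent serialization order for Tc.
        return [t.ctid for t in self.tokens.values()
                if t.forward and t.backward]

pkt = ValidationPacket("T1", sot=10.0)
pkt.mark("T2", backward=True)   # conflict detected at one site
pkt.mark("T3", backward=True)   # conflict detected at the same site
pkt.mark("T3", forward=True)    # further conflict at a downstream site
print(pkt.serious())  # ['T3']
```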
Upon receipt of the validation packet, the validating sub-transaction checks the validity
of the data objects it has read or pre-written in the same way as its predecessor sub-transaction.
However, it uses either its local SOT or the SOT in the validation packet header, whichever
is earlier. If all the accessed local data objects are found valid, the sub-
transaction identifies the conflicting transactions. Note that if a token for a conflicting
transaction already exists, only the FOR or BACK flag is updated; otherwise, a new token
is created for the conflicting transaction. Then, the validation packet containing the updated
set of tokens is passed to the validating sub-transaction at the next site downstream. This
circular validation continues site by site until all the data objects are V-locked. Finally, the
validation packet is returned to the parent transaction.
Once the validation packet is returned, the parent transaction determines which con-
flicting transactions have a serious data conflict with it. A serious conflict occurs when both
the FOR and BACK flags are set. It also identifies those transactions that need back-
ward adjustment only. Then, the parent transaction sends this information to all its
sub-transactions together with the commit message. Upon receipt of the commit messages,
all the sub-transactions will enter the write phase. Note that our new validation scheme can
be easily integrated into the standard two-phase commit protocol. The circular validation
can be considered as the voting phase while the write phase can be considered as the commit
phase.
We demonstrate that a deadlock cannot occur in the circular validation scheme. A
deadlock is formed when there exists a circular wait among the validating transactions.
Thus, we need to demonstrate that there cannot be a circular wait. The validating trans-
actions are required to upgrade their locks in a circular and unidirectional fashion. Let
D = {D1, D2, . . . , Dn} be the set of data objects in the database, partitioned across dif-
ferent sites. The index of a data object Di, Index(Di), determines the unidirectional order
in which a validating transaction obtains its validation locks. In other words, a validating
transaction upgrades its locks in an increasing order of the indices of data objects.
We prove that there cannot be a circular wait by contradiction: assume that a circular wait
exists. Let the set of transactions involved in the circular wait be {T0 → T1 →
T2 → · · · → Tn → T0}, where Ti is waiting to upgrade a lock on data item Di, which is
V-locked by transaction Ti+1 (indices taken modulo n + 1). Since transaction Ti+1 is holding a V-lock on Di
while requesting a V-lock on data item Di+1, we must have Index(Di) < Index(Di+1)
for all i. But this condition means that Index(D0) < Index(D1) < Index(D2) < · · · <
Index(Dn) < Index(D0). By transitivity, Index(D0) < Index(D0), which is impossible. There-
fore, there can be no circular wait.
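A sketch of the ordering rule the proof relies on (the index map and names are hypothetical): every validating transaction upgrades its locks in strictly increasing index order, so no two of them can wait on each other in a cycle.

```python
def upgrade_sequence(p_locked, index):
    """Order in which a validating transaction upgrades its P-locks to
    V-locks: increasing Index(Di).  Because every transaction follows
    the same global order, a circular wait cannot form."""
    return sorted(p_locked, key=lambda obj: index[obj])

index = {"x": 3, "y": 7, "z": 1}
print(upgrade_sequence({"x", "y", "z"}, index))  # ['z', 'x', 'y']
```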
3. The DOCC-DA Protocol
In this section, we describe the detailed algorithm of the DOCC-DA protocol. In the DOCC-
DA protocol, serialization order is maintained by assigning timestamps to transactions.
The timestamp of each transaction, called the serialization order timestamp (SOT), will
be dynamically adjusted during its execution based on the types of data conflicts with the
validating transaction. Thus, the SOT of a transaction indicates its relative position in the
serialization order. Note that with DASO, a validating transaction may precede committed
transactions in the serialization order. Thus, the serialization order timestamp sequence,
with respect to a schedule S, for a set of transactions is a sequence of timestamps such that
for any pair of transactions Ti and Tj with serialization order timestamps SOT(Ti) and
SOT(Tj), if SOT(Ti) < SOT(Tj), then there exists a serial schedule equivalent to S in
which Ti completes before Tj.
Each sub-transaction of a transaction Ti will also carry its own SOT(Ti ) at its local site to
perform its local validation. The initial value of SOT(Ti ) at each site is set to be ∞ when
the sub-transaction is initiated. Whenever a committed transaction Tk backward adjusts
Ti at a local site, SOT(Ti ) at that site will be set to a value that is sufficiently smaller
than SOT(Tk). It would be desirable for each sub-transaction to carry the same SOT value to
indicate the position of Ti in the serialization order. However, in the DOCC-DA protocol, the
serialization order of a transaction can only be determined at the end of its execution.
Therefore, SOT(Ti ) at each site may have a different value. In other words, a local
SOT(Ti ) indicates only the relative serialization order of the sub-transaction at that site.
When Ti starts its validation, each sub-transaction uses either its local SOT(Ti ) or the SOT(Ti )
received from the validating sub-transaction at the upstream site, whichever is earlier.
This deferment design significantly reduces the amount of message passing needed in the
course of transaction execution to synchronize the SOT(Ti ) value at each site.
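The deferment rule amounts to taking the earlier of the two timestamps at validation time. A minimal sketch, with assumed names:

```python
# Minimal sketch of the SOT deferment rule: at validation, a sub-transaction
# adopts whichever serialization order timestamp is earlier -- its local SOT
# or the one received from the upstream validating sub-transaction.

INF = float("inf")  # initial SOT value when a sub-transaction is initiated

def effective_sot(local_sot, received_sot):
    """Return the SOT the sub-transaction validates with."""
    return min(local_sot, received_sot)

# A sub-transaction never backward adjusted locally adopts the upstream value:
print(effective_sot(INF, 105.0))   # → 105.0
# One backward adjusted below the upstream value keeps its own:
print(effective_sot(98.5, 105.0))  # → 98.5
```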
Upon successfully passing the circular validation scheme, the validating transaction Ti
is assigned a final serialization timestamp. If Ti has not been backward adjusted, its final
serialization timestamp will be set to the validation time. Otherwise, the final serialization
timestamp is set to the value of the SOT(Ti ) returned to the parent transaction.
In each site, a data object table and a transaction table are maintained. The data object
table keeps a read timestamp and a write timestamp for each data object in the local database.
They are defined as follows:
RTS(Dx ) : the latest timestamp among the committed transactions that have read
the data object Dx ;
WTS(Dx ) : the latest timestamp among the committed transactions that have written
the data object Dx .
The transaction table at each site maintains the following information for each local active
transaction or sub-transaction Ti :
RS(Ti ) : the read set of Ti ;
WS(Ti ) : the write set of Ti ;
SOT(Ti ) : the serialization order timestamp of Ti ;
TR(Ti , Dx ) : the value of WTS(Dx ) of the data object Dx when Ti reads Dx ;
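As an illustration, the two per-site tables might be represented as follows; the field names and container types are assumptions, not the paper's notation:

```python
# Sketch of the per-site bookkeeping: a data object table holding RTS/WTS per
# object, and a transaction table holding RS, WS, SOT and TR per local
# (sub-)transaction.

from dataclasses import dataclass, field

@dataclass
class DataObjectEntry:
    rts: float = 0.0   # RTS(Dx): latest committed reader's timestamp
    wts: float = 0.0   # WTS(Dx): latest committed writer's timestamp

@dataclass
class TransactionEntry:
    rs: set = field(default_factory=set)    # RS(Ti): read set
    ws: set = field(default_factory=set)    # WS(Ti): write set
    sot: float = float("inf")               # SOT(Ti), initially infinity
    tr: dict = field(default_factory=dict)  # TR(Ti, Dx): WTS(Dx) seen at read

site_data_table = {"Dx": DataObjectEntry(rts=40.0, wts=37.0)}
site_txn_table = {"T1": TransactionEntry()}
print(site_txn_table["T1"].sot)  # → inf
```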
3.1. Read Phase
When a sub-transaction of Ti wants to read or pre-write a data object Dx in its private
workspace, it will first obtain the PR-lock or PW-lock respectively. These locks will be
granted if there is no VR-lock or VW-lock. In the read phase, there is no need for Ti to detect
data conflicts. However, the write timestamp of each data object read will be recorded. That
is, if Ti wants to read Dx , the value of WTS(Dx ) will be recorded into TR(Ti , Dx ). If Ti
wants to write Dx , the new value of Dx will be pre-written into its private workspace.
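The read phase can be sketched as follows; the dictionary-based structures and helper names are assumptions for illustration only. Note that a read records the object's current write timestamp, while a write touches only the private workspace:

```python
# Read-phase sketch: no conflict detection is performed; a read remembers the
# version it saw, and a write is only pre-written into the private workspace.

def read(txn, data_table, dx):
    txn["rs"].add(dx)
    txn["tr"][dx] = data_table[dx]["wts"]   # record TR(Ti, Dx) = WTS(Dx)
    return data_table[dx]["value"]

def prewrite(txn, dx, new_value):
    txn["ws"].add(dx)
    txn["workspace"][dx] = new_value        # private copy; database untouched

data_table = {"Dx": {"value": 7, "wts": 12.0}}
t1 = {"rs": set(), "ws": set(), "tr": {}, "workspace": {}}
print(read(t1, data_table, "Dx"))   # → 7
prewrite(t1, "Dx", 8)
print(data_table["Dx"]["value"])    # → 7 (unchanged until the write phase)
```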
3.2. Validation Phase
When a sub-transaction of Ti receives a validation packet, it will update its SOT(Ti ) if the
received SOT(Ti ) is earlier. Afterwards, Ti will upgrade its local PR-locks and PW-locks
to the VR-locks and VW-locks respectively one by one. If there is a VR-lock or VW-lock
being held by another transaction, Ti will be blocked until the lock is released.
To upgrade the PR-lock to the VR-lock on Dx , Ti will check the value of TR(Ti , Dx )
to ensure that the version of Dx read by Ti is the one written by a committed transaction
whose serialization order precedes that of Ti , i.e. TR(Ti , Dx ) is earlier than SOT(Ti ). If
TR(Ti , Dx ) is later than SOT(Ti ), Ti will send an “Abort” message to the parent transaction,
which will globally abort the whole transaction, because a committed transaction has
invalidated the value of Dx that Ti has read. Otherwise, the VR-lock will be granted. If there is a PW-
lock held by another transaction, Ti will either create or update the token for the conflicting
transaction and set the FOR flag in the token.
To upgrade the PW-lock to the VW-lock on Dx , if RTS(Dx ) is later than SOT(Ti ), Ti will
send an “Abort” message to the parent transaction. This is because a committed transaction after
Ti in the serialization order has read Dx before Ti wants to write Dx . Since the conflicting
transaction has committed, the only way to resolve the data conflict is to abort Ti . If SOT(Ti )
is later than RTS(Dx ), the PW-lock is upgraded to the VW-lock. If there is a PR-lock, Ti
will update the token for the conflicting transaction accordingly. However, if there is a
PW-lock, Ti does not need to do anything because write-write conflicts will be handled in
the write phase. The Upgrade PR-lock and Upgrade PW-lock procedural descriptions are
as follows:
Upgrade PR-lock(Ti, Dx) {
    if VR-lock(Dx) or VW-lock(Dx) exists then
        block Ti until the lock is released;
    endif;
    if TR(Ti, Dx) > SOT(Ti) then
        send "Abort" message to the parent transaction;
    else
        upgrade PR-lock(Dx) to VR-lock(Dx);
        if ∃ PW-lock(Dx) held by Tk ∧ k ≠ i then
            if a token for Tk does not exist then
                create a token for Tk;
            endif;
            set FOR flag;
        endif;
    endif; }
Upgrade PW-lock(Ti, Dx) {
    if VR-lock(Dx) or VW-lock(Dx) exists then
        block Ti until the lock is released;
    endif;
    if RTS(Dx) > SOT(Ti) then
        send "Abort" message to the parent transaction;
    else
        upgrade PW-lock(Dx) to VW-lock(Dx);
        if ∃ PR-lock(Dx) held by Tk ∧ k ≠ i then
            if a token for Tk does not exist then
                create a token for Tk;
            endif;
            set BACK flag;
        endif;
    endif; }
Table 2 describes how a validating transaction sets the flags for the conflicting transactions.
When the sub-transaction successfully upgrades all its local PR-locks and PW-locks, it
will pass the validation packet containing the set of tokens to the validating sub-transaction
at the next site downstream and wait for the response from the parent transaction about the
final decision.
Table 2. Setting of forward and backward flags in a token.

    Lock upgrade      Lock already held by Tk
    by validator      PR-lock        PW-lock
    PR → VR           no flag set    set FOR
    PW → VW           set BACK       no flag set
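Table 2 can be restated as an executable rule; the token representation and function name below are assumptions for illustration:

```python
# Executable restatement of Table 2: which flag a validating transaction sets
# in the token of a transaction Tk that still holds a read-phase lock (PR or
# PW) on the data object being upgraded.

def set_flags(token, upgrade, held_by_tk):
    """upgrade: 'PR->VR' or 'PW->VW'; held_by_tk: 'PR' or 'PW'."""
    if upgrade == "PR->VR" and held_by_tk == "PW":
        token["FOR"] = True    # Tk must serialize after the validator
    elif upgrade == "PW->VW" and held_by_tk == "PR":
        token["BACK"] = True   # Tk can be backward adjusted
    return token               # same-kind lock pairs set no flag here

print(set_flags({}, "PR->VR", "PW"))  # → {'FOR': True}
print(set_flags({}, "PW->VW", "PR"))  # → {'BACK': True}
print(set_flags({}, "PR->VR", "PR"))  # → {} (no flag set)
```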
3.3. The Role of Parent Transaction
The parent transaction has two roles. One is to coordinate the synchronization of the
validation and write phases. The other is to perform a global conflict resolution based on
the adopted conflict resolution policy. That is, when the validation phase is completed, the
transaction has to determine whether to commit or to abort in face of having data conflicts
with high priority transactions. It appears undesirable, in the context of priority-driven
scheduling, to allow low priority validating transactions to commit unilaterally at the cost
of restarting higher priority transactions.
Studies in (Haritsa, 1990, 1992) addressed the problem of adding transaction priority
information in conflict resolution. They showed that the problem is nontrivial partly because
giving preferential treatment to high priority transactions may result in an increase in the
number of missed deadlines. They proposed a class of priority wait schemes, particularly
Wait-50 (Haritsa, 1992), to enhance the real-time performance of the OCC-FV protocol.
The basic idea of Wait-50 is to optimize the beneficial effects of priority wait while reducing
the effects of later restarts and an increased number of conflicts.
However, a recent study (Datta, 1997) showed that there appears to be little advantage to be
gained by incorporating priority cognizance in the conflict resolution of the OCC protocols
in RTDBS. They revealed that real-time performance might be improved provided that the
conflict set of a validating transaction is reasonably large. However, their simulation results
showed that the number of higher priority transactions that conflict with the validating
transaction is relatively small even under the condition of high data contention.
In addition, restart of a validating transaction may have two performance implications.
First, the validating transaction is restarted after spending most of the time and resources
for its execution. In particular, the delayed conflict resolution policy of the OCC protocols
significantly reduces the possibility that a validating transaction sacrificed for a higher
priority active transaction will meet its deadline. Second, there is no guarantee that the
active transaction that causes the restart of the validating transaction will meet its deadline.
If the active transaction does not meet its deadline for any reason, the sacrifice of the
validating transaction is wasted.
Most of the conflict resolution policies require a system snapshot so that the number of low
and high priority transactions can be determined. It would be impracticable in distributed
environments as it means that all sites have to be involved in reaching this decision. Hence,
in order to focus our attention on the performance of our DOCC-DA protocol, a simple
conflict resolution policy is adopted in our study. Data conflicts are resolved by “transaction
races”: the transaction which reaches the “goal line” (validation phase) first gets to survive
and other conflicting transactions have to be restarted or backward adjusted. Thus, the
parent transaction will identify the conflicting transactions that need to be restarted
and those that need to be backward adjusted. To enter the write phase, the parent
transaction will send a “Commit” message to its sub-transactions together with the
following two sets of conflicting transactions:
BTRAN : the set of transactions to be backward adjusted;
SERIOUS : the set of serious conflicting transactions to be restarted;
The algorithm of the conflict resolution is as follows.
Conflict resolution(Ti) {
    BTRAN := {}; SERIOUS := {};
    for each token do
        if both FOR and BACK flags are set then
            SERIOUS := SERIOUS ∪ {CTID};
        elsif only BACK flag is set then
            BTRAN := BTRAN ∪ {CTID};
        endif;
    enddo;
    if (SOT(Ti) == ∞) then //** Ti has never been backward adjusted **//
        SOT(Ti) := current time;
    endif;
    send "Commit" message with SERIOUS, BTRAN and SOT(Ti)
        to sub-transactions; }
3.4. Write Phase
For each sub-transaction, if a “Commit” message is received, Ti will abort the serious
conflicting transactions in the SERIOUS set. For the transactions in the BTRAN set, their
SOT values are updated to SOT(Ti ) − ε, where ε is a sufficiently small value. Then Ti will
update RTS(Dx ) of each data object held under its VR-locks and WTS(Dx ) of each data
object held under its VW-locks to the received SOT(Ti ) value. The prewritten data objects
are then made permanent in the database following Thomas’ write rule. Finally, all its
VR-locks and VW-locks will be released. On the other hand, if an “Abort” message is
received, all its locks are released and all prewritten data objects are discarded. The
algorithm for each sub-transaction to perform the write phase is as follows:
Write phase(Ti) {
    if "Commit" message is received then
        FWS(Ti) := {};   //** final write set **//
        for each Tk in SERIOUS do
            restart Tk;
        enddo;
        for each Tk in BTRAN do
            SOT(Tk) := SOT(Ti) − ε;
            //** ε is a sufficiently small value **//
        enddo;
        for each Dx in RS(Ti) do
            if RTS(Dx) < SOT(Ti) then
                RTS(Dx) := SOT(Ti);
            endif;
        enddo;
        for each Dx in WS(Ti) do
            if WTS(Dx) < SOT(Ti) then
                WTS(Dx) := SOT(Ti);
                FWS(Ti) := FWS(Ti) ∪ {Dx};
            endif;
        enddo;
        //** Thomas' write rule applied **//
        copy the local prewritten data objects in FWS(Ti) to
            the database;
    else
        //** "Abort" message is received **//
        discard the local prewritten data objects;
        restart Ti itself;
    endif;
    release all its locks; }
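The timestamp updates and the Thomas' write rule filtering above can be sketched as runnable code; the table and workspace structures are assumptions, not the paper's implementation:

```python
# Runnable sketch of the write-phase timestamp updates: RTS/WTS are advanced
# to SOT(Ti), and only prewrites not already obsoleted by a later committed
# write are installed (Thomas' write rule).

def write_phase(sot_i, rs, ws, data_table, workspace):
    fws = set()   # final write set: objects whose prewrite is installed
    for dx in rs:
        data_table[dx]["rts"] = max(data_table[dx]["rts"], sot_i)
    for dx in ws:
        if data_table[dx]["wts"] < sot_i:      # Thomas' write rule
            data_table[dx]["wts"] = sot_i
            data_table[dx]["value"] = workspace[dx]
            fws.add(dx)
    return fws

table = {"Dx": {"rts": 0.0, "wts": 5.0, "value": 1},
         "Dy": {"rts": 0.0, "wts": 20.0, "value": 2}}
fws = write_phase(10.0, rs=set(), ws={"Dx", "Dy"},
                  data_table=table, workspace={"Dx": 9, "Dy": 9})
print(sorted(fws))            # → ['Dx']  (Dy's write is obsolete and skipped)
print(table["Dy"]["value"])   # → 2
```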
4. Experimental Model
The distributed RTDBS model consists of a set of inter-connected sites, and the database is
partitioned among the sites, with the data objects in each site forming its local database. Each
site is connected to a local communication interface, which in turn is connected to the other
sites through the communication network, modeled as a constant-delay server. In each site, as
shown in Figure 1, there is a transaction generator, a ready queue, a block queue, a scheduler,
a CPU, a local database, and a communication interface. The transaction generator creates
firm real-time transactions with inter-arrival times following an exponential distribution. The
creation of a transaction in a site is independent of the creation of transactions in other sites.
Each transaction is modeled as a sequence of read/write operations on data objects that
are evenly distributed among the sites. The processing of an operation involves use
of the CPU and access to a data object. Upon arrival, transactions are queued in the ready
queue for the CPU. The queuing discipline is earliest deadline first (EDF). In each site,
there is a scheduler. If the deadline of the transaction at the head of the ready queue is
missed, the scheduler will abort the transaction. Otherwise, the scheduler assigns the CPU
to the transaction. When the transaction’s data request is denied, the scheduler will place
the transaction into the block queue until the requested data object is available.
Figure 1. Model of a site in a distributed RTDBS.
4.1. Model Parameters
Table 3 summarizes the baseline setting of the model to be used in the simulation exper-
iments. The purposes of the simulation experiments are to evaluate the characteristics
of the proposed protocol and to demonstrate the capabilities of the proposed protocol in
improving the performance of distributed RTDBS. The values of the baseline setting are
based on those used in other related research studies (Lam, 1997; Ulusoy, 1992). However,
the simulation experiments are not confined to the baseline setting. In addition to the mean
transaction arrival rate, the impact of write probability and the amount of slack time are
also studied by varying their values from the baseline setting.
To assign a deadline to each transaction, the following function applies.
Deadline = Tgen + Texe × (1 + SF)
The value of SF is uniformly selected from the range in Table 3. Tgen is the current time
when the transaction is generated. Texe is the estimated execution time that is a function of
the parameters, Tprocess, Noper, and Tcomm.
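A worked example of the deadline assignment above, using baseline values from Table 3. Since the text does not spell out the Texe function, a simple per-operation cost model (one Tprocess plus one Tcomm per operation) is assumed here purely for illustration:

```python
# Deadline = Tgen + Texe × (1 + SF), with an assumed estimate of Texe.

def deadline(t_gen, t_exe, sf):
    return t_gen + t_exe * (1 + sf)

t_process, t_comm = 0.034, 0.100   # seconds (34 ms, 100 ms; Table 3)
n_oper = 10                        # within the 3-20 range of Table 3
t_exe = n_oper * (t_process + t_comm)   # assumed cost model: 1.34 s
print(round(deadline(t_gen=0.0, t_exe=t_exe, sf=2.5), 3))  # → 4.69
```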
Table 3. The baseline setting.
Number of sites 4
Database size 200 data objects/site
Mean transaction arrival rate 0.4–2.8 transactions/sec
Transaction size (Noper) 3–20
(number of operations) (uniformly distributed)
Write probability 0.0–1.0
Communication delay (Tcomm) 100 ms
CPU time to process a data object (Tprocess) 34 ms
CPU time to update a data object (Tupdate) 6 ms
Slack factor (SF) 2.5–13.75
4.2. Performance Measures
In distributed RTDBS, the major performance measure is the “miss rate” which indicates
the probability of missing deadlines of transactions.
Miss Rate = Nmissed/(Ncommitted + Nmissed) × 100%
where
Nmissed = number of transactions that missed their deadlines;
Ncommitted = number of transactions that committed.
The other two measures, which help to study the performance of the DOCC-DA protocol,
are the “restart ratio” and the “backtrack ratio”. The restart ratio gives the number of
restarts per transaction completed. The backtrack ratio gives the number of serialization
order adjustments made per transaction completed. Since a restarted transaction may still
meet its deadline, the miss rate alone cannot directly reflect the effect of the DOCC-DA
protocol. The restart ratio therefore measures the actual change in the number of restarts
brought about by the protocol, while the backtrack ratio reflects the protocol’s effectiveness
by recording the frequency of adjustments made.
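The three measures can be computed directly from the simulation counters; the counter names below are assumptions for illustration:

```python
# The three performance measures of the study, as simple functions of
# assumed simulation counters.

def miss_rate(n_missed, n_committed):
    """Percentage of transactions that miss their deadlines."""
    return n_missed / (n_committed + n_missed) * 100.0

def restart_ratio(n_restarts, n_completed):
    """Restarts per transaction completed."""
    return n_restarts / n_completed

def backtrack_ratio(n_adjustments, n_completed):
    """Serialization order adjustments per transaction completed."""
    return n_adjustments / n_completed

print(miss_rate(n_missed=25, n_committed=75))              # → 25.0
print(restart_ratio(n_restarts=30, n_completed=100))       # → 0.3
print(backtrack_ratio(n_adjustments=12, n_completed=100))  # → 0.12
```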
5. Performance Results
We have performed a series of experiments comparing the performance of the DOCC-
DA protocol with that of the two-phase locking-high priority (2PL-HP) and the OCC-FV
protocol in distributed environments. Although the OCC-FV protocol is not originally
designed for RTDBS, a recent study (Datta, 1997) showed that the OCC-FV protocol with
EDF scheduling can be better than the priority cognizant real-time protocols. To the best of
our knowledge, there are no other distributed OCC protocols for the real-time domain in the
current literature. Therefore, we have adapted the OCC-FV protocol to distributed environments
(referred to as DOCC-FV) for comparison. The validation of the OCC-FV protocol in distributed environments
is implemented using the distributed circular validation scheme. On the other hand, there
are a number of locking-based concurrency control protocols in distributed real-time en-
vironments. These protocols are based on 2PL with variation on conflict resolution such
as high priority, conditional restart (Abbott, 1988), priority inheritance and priority ceiling
(Sha, 1988). 2PL-HP and OCC-FV are chosen because they represent concurrency control
protocols for RTDBS based on locking and optimistic concurrency control, respectively, in
the previous studies (Abbott, 1988; Haritsa, 1990a, 1990b, 1992; Huang, 1991; Lee, 1993a).
All these experiments show that the DOCC-DA protocol outperforms both protocols for a
wide range of different parameters.
5.1. Impact of System Workload
Figure 2 shows the miss rates of the three protocols as a function of mean transaction arrival
rate (MTAR) in transactions per second (tps). In this experiment, the write probability is
fixed at 0.6.

Figure 2. Transaction miss rates.

The MTAR controls the workload of the system. When the MTAR is low
(< 1.2 tps), there is no significant difference between the three protocols. However, as the
MTAR increases, the DOCC-DA protocol outperforms the DOCC-FV protocol, which in
turn outperforms the 2PL-HP protocol.
The reason for the better performance of both DOCC protocols over the 2PL-HP protocol
is due to the lower restart ratios as shown in Figure 3. The DOCC protocols incur much fewer
transaction restarts and hence make better use of resources than the 2PL-HP protocol, which
suffers from wasted transaction restarts. Transaction restarts in distributed environments
make it more difficult for a transaction to meet its deadline, as the time spent on communication
delays before restarts will be wasted. On the other hand, the delayed conflict resolution
policy of the DOCC protocols helps them to avoid such wasted restarts. Note that the
restart ratio decreases after a certain MTAR. The reason is that resource contention begins
to dominate data contention, making more transactions miss their deadlines without any
restart. In fact, some recent studies (Haritsa, 1990a, 1992; Lee, 1996) showed that OCC
protocols outperform 2PL protocols over a wide range of system workload and resource
availability in RTDBS. The results in our study showed that this is also true in distributed
environments.
We now focus on the comparison between the performance of the DOCC-FV and the
DOCC-DA protocols under different parameter settings.

Figure 3. Transaction restart ratios.

Figure 4 shows the miss rates of the DOCC-FV and the DOCC-DA protocols as a
function of MTAR at different write
probabilities. When the workload is light, the miss rates under both protocols are low
and close to each other. As the workload increases, the DOCC-DA protocol starts to
outperform the DOCC-FV protocol. The maximum performance gain is
attained when the MTAR is around 2 tps. When the system begins to saturate (> 2.4 tps),
the performance gain diminishes gradually. Nevertheless, the performance of the DOCC-
DA protocol is consistently better than that of the DOCC-FV protocol across different
MTAR.
In the figure, it can also be observed that the miss rate of both protocols increases with
the write probability. When the write probability is 0, all transaction operations are read
operations; at the other end, when the write probability is 1, all operations are write
operations. As the write
probability increases, there are more data conflicts and most of them are serious conflicts
that cannot be resolved by DASO. However, the DOCC-DA protocol still performs better
across different write probabilities. In particular, the performance gain is the largest when
the write probability is 0.2, that is, when the read operation is the majority.
One of the reasons for this performance difference is the difference in the number of
transaction restarts (indicated by the restart ratio) incurred by the two protocols, as shown
in Figure 5. The DOCC-FV protocol suffers performance degradation caused by restart
overheads. The curves show that the DOCC-DA protocol can avoid a number of unnecessary
restarts.

Figure 4. Transaction miss rates with different workload.

Again, note that the restart ratio increases as the workload increases in the beginning, where more
restarts are incurred due to increasing data conflicts. When the workload further increases,
the restart ratio decreases. In this case, resource contention dominates data contention,
making more transactions miss their deadlines without any restart. Therefore, the restart
ratio decreases because many transactions are aborted and discarded due to missed deadlines. In
the figure, it can also be observed that the restart ratio increases with the write probability
for both protocols because high write probability means more conflicts, in particular, the
serious conflicts.
The performance gain brought by the DOCC-DA protocol is mainly due to the backward
adjustment of the conflicting transactions with respect to a validating transaction in the
serialization order. Figure 6 shows the backtrack ratios at different MTAR and write
probabilities. A similar interpretation explains the difference in backtrack ratios at different
MTAR. However, there is one point to note: given a certain workload, as the write
probability increases, the backtrack ratio increases to a peak value and drops thereafter
until it becomes zero again when the write probability is 1. A similar phenomenon can
also be found in Figure 7.
Figure 5. Transaction restart ratios with different workload.
5.2. Different Proportion of Read/Write Operations
Figure 7 shows the miss rate as the write probability varies. It can be observed that there
is no performance difference between the protocols at both ends of the write probability.
The same behavior can be observed for different workloads. When all operations are read
operations, no data conflict occurs. Therefore, no adjustment of serialization order is made,
as shown in Figure 6.
On the other hand, when all operations are write operations, all data conflicts are serious
conflicts. Therefore, it is impossible to adjust the serialization order among transactions as
shown in Figure 6.
5.3. Impact of Slack
Figure 8 shows the miss rate as the slack varies. When the slack is tight, transactions are
more sensitive to resource contention and are more likely to miss their deadlines, so a
higher miss rate results. Furthermore, there may not be sufficient remaining slack for a
transaction to be restarted. In this case, reducing transaction restarts becomes very
important: it saves the resources that would otherwise be spent on restarted transactions,
and it spares transactions restarts that their remaining slack cannot accommodate.

Figure 6. Transaction adjustment ratios with different workload.

Therefore, the performance gain brought by the DOCC-DA protocol becomes more significant when
the slack is tight. On the other hand, if the slack is not tight (baseline), a transaction may still
be able to meet its deadline although it has been restarted. For instance, when the MTAR is
under 1.2 tps, the miss rate under both protocols is insignificant when the slack is not tight.
However, when the slack is tight, the performance gain brought by the DOCC-DA protocol
is more than one third.
6. Conclusions
In this paper, we studied the performance of using optimistic approach to concurrency con-
trol in distributed real-time environments. The traditional optimistic approach suffers from
the problem of unnecessary restarts. This problem affects the performance of RTDBS as
transaction restarts can significantly increase the system workload and intensify resource
and data contention. In distributed environments, the complexity of the system and the high
communication overhead exacerbate the problem. Transaction restarts in distributed RT-
DBS make a transaction more difficult to meet its deadline than in a centralized one because
of the communication overhead. Therefore, the number of unnecessary restarts is the de-
terminant factor that affects the performance of optimistic approach in distributed RTDBS.
Figure 7. Transaction miss rates with different write probabilities.
In this study, a new real-time DOCC protocol, called DOCC-DA, is proposed. The
protocol alleviates the problem of unnecessary restarts by dynamically adjusting the
serialization order of concurrently executing conflicting transactions with respect to a
validating transaction. Under the DOCC-DA protocol, only those transactions in serious
conflict with the validating transaction will be restarted. In addition, the design of the
DOCC-DA protocol is well suited to distributed environments in the sense that it reduces
the number of messages passed between different sites by using a new distributed circular
validation scheme.
A series of simulation experiments have been done to investigate the performance of the
DOCC-DA protocol as compared to the 2PL-HP protocol and the DOCC-FV protocol. It is
found that the DOCC-DA protocol outperforms the 2PL-HP protocol and the DOCC-FV
protocol over a wide range of workload parameters by reducing the number of transaction
restarts. A further improvement can be observed in the reduction of the miss rate, which
is also important to the performance of the system. It is also observed that the DOCC-DA
protocol gives a greater performance gain when transactions have a large proportion of read
operations. The performance gain is manifested by the use of DASO, which exploits the
semantics of read-write and write-write operations of transactions.
Figure 8. Transaction miss rates with different slacks.
References
Abbott, R., and Garcia-Molina, H. 1988. Scheduling real-time transactions: A performance evaluation. Proceed-
ings of the 14th VLDB Conference. Los Angeles, pp. 1–12.
Abbott, R., and Garcia-Molina, H. 1992. Scheduling real-time transactions: A performance evaluation. ACM
Transactions on Database Systems 17(3): 513–560.
Bernstein, P. A., Hadzilacos, V., and Goodman, N. 1987. Concurrency Control and Recovery in Database Systems.
Mass: Addison-Wesley.
Datta, A., Viguier, I. R., Son, S. H., and Kumar, V. 1997. A study of priority cognizance in conflict resolution for
firm real time database systems. Real-Time Database and Information Systems, Research Advances. Boston.
Kluwer Academic Publishers.
Haerder, T. 1984. Observations on optimistic concurrency control schemes. Information Systems 9(2).
Haritsa, J. R., Carey, M. J., and Livny, M. 1990a. Dynamic real-time optimistic concurrency control. Proceedings
of 11th Real-time Systems Symposium. Florida, pp. 94–103.
Haritsa, J. R., Carey, M. J., and Livny, M. 1990b. On being optimistic about real-time constraints. Proc. of the
ACM SIGACT-SIGART-SIGMOD Symp. on Principles of Database Systems.
Haritsa, J. R., Carey, M. J., and Livny, M. 1992. Data access scheduling in firm real-time database systems.
Real-time Systems 4(3): 203–242.
Huang, J., Stankovic, J. A., Towsley, D., and Ramamritham, K. 1991. Experimental evaluation of real-time
optimistic concurrency control schemes. Proc. of the 17th VLDB Conf. Spain, pp. 35–46.
Huang, J., Stankovic, J. A., Ramamritham, K., Towsley, D., and Purimetla, B. 1992. On using priority inheritance
in real-time databases. Real-time Systems 4(3): 243–268.
Konana, P., Lee, J., and Ram, S. 1997. Updating timestamp interval for dynamic adjustment of serialization
order in optimistic concurrency control-time interval (OCC-TI) protocol. Information Processing Letters 63(4):
189–193.
Kung, H. T., and Robinson, J. T. 1981. On optimistic methods for concurrency control. ACM Transactions on
Database Systems 6(2): 213–226.
Lam, K. Y. 1994. Concurrency control in distributed real-time database systems. Department of Computer
Science, City University of Hong Kong, Ph.D. Thesis.
Lam, K. Y., Hung, S. L., and Son, S. H. 1997. On using real-time static locking protocols for distributed real-time
databases. Real-Time Systems 13: 141–166.
Lam, K. W., Lam, K. Y., and Hung, S. L. 1995. Real-time optimistic concurrency control protocol with dynamic
adjustment of serialization order. Proceedings of IEEE Symposium on Real-time Technology and Applications.
pp. 174–181.
Lee, J., and Son, S. H. 1993a. An optimistic concurrency control protocol for real-time database systems. Proc.
of the 3rd Int. Symp. on Database Systems for Advanced Applications. Korea.
Lee, J., and Son, S. H. 1993b. Using dynamic adjustment of serialization order for real-time database systems.
Proceedings of 14th IEEE Real-time Systems Symposium. North Carolina, pp. 66–75.
Lee, J., and Son, S. H. 1996. Concurrency control algorithms for real-time database systems. Performance of
Concurrency Control Mechanisms in Centralized Database Systems. V. Kumar, ed., Prentice Hall.
Ramamritham, K. 1993. Real-time databases. International Journal of Distributed and Parallel Databases 1:
199–226.
Sha, L., Rajkumar, R., and Lehoczky, J. 1988. Concurrency control for distributed real-time databases. ACM
SIGMOD Record. pp. 82–98.
Sha, L., Rajkumar, R., and Lehoczky, J. P. 1990. Priority inheritance protocols: An approach to real-time
synchronization. IEEE Transactions on Computers 39(9): 1175–1185.
Son, S. H., and Chang, C. 1990. Performance evaluation of real-time locking protocols using a distributed software
prototyping environment. Proceedings of the 10th International Conference on Distributed Computing Systems.
Paris, pp. 124–131.
Son, S. H., and Koloumbis, S. 1993. A token-based synchronization scheme for distributed real-time databases.
Information Systems 18(6): 375–389.
Ulusoy, O., and Belford, G. G. 1992. A simulation model for distributed real-time database systems. Proceedings
of 25th Annual Simulation Symposium. pp. 232–240.
Ulusoy, O. 1994. Processing real-time transactions in a replicated database system. Technical Report BU-CEIS-
94-13, Bilkent University.
Yu, P. S., Wu, K. L., Lin, K. J., and Son, S. H. 1994. On real-time databases: Concurrency control and scheduling.
Proceedings of the IEEE 82(1): 140–157.