2. ORACLE
Larry Ellison and two friends and former co-workers, Bob Miner and Ed Oates,
started a consultancy called Software Development Laboratories (SDL) in 1977.
SDL developed the original version of the Oracle software. The name Oracle
comes from the code name of a CIA-funded project Ellison had worked on while
employed by Ampex.
3. TRIGGERS
Use the CREATE TRIGGER statement to create and enable a database trigger,
which is:
A stored PL/SQL block associated with a table, a schema, or the database, or
An anonymous PL/SQL block or a call to a procedure implemented in PL/SQL or
Java
Oracle Database automatically executes a trigger when specified conditions occur.
When you create a trigger, the database enables it automatically. You can
subsequently disable and enable a trigger with the DISABLE and ENABLE clauses
of the ALTER TRIGGER or ALTER TABLE statement.
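As a brief sketch (the employees and audit_log tables here are illustrative, not
from the text), a trigger can be created and then toggled with ALTER TRIGGER:

CREATE OR REPLACE TRIGGER log_emp_changes
AFTER INSERT OR UPDATE OR DELETE ON employees
BEGIN
  INSERT INTO audit_log (event_time, event_desc)
  VALUES (SYSDATE, 'employees table modified');
END;
/
ALTER TRIGGER log_emp_changes DISABLE;
ALTER TRIGGER log_emp_changes ENABLE;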
4. BEFORE
Specify BEFORE to cause the database to fire the trigger before executing
the triggering event. For row triggers, the trigger is fired before each
affected row is changed.
Restrictions on BEFORE Triggers
BEFORE triggers are subject to the following restrictions:
You cannot specify a BEFORE trigger on a view or an object view.
You can write to the :NEW value but not to the :OLD value.
AFTER
Specify AFTER to cause the database to fire the trigger after executing the
triggering event. For row triggers, the trigger is fired after each affected row is
changed.
Restrictions on AFTER Triggers
AFTER triggers are subject to the following restrictions:
You cannot specify an AFTER trigger on a view or an object view.
You cannot write to either the :OLD or the :NEW value.
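A minimal sketch of the :NEW rule (table and column names are illustrative): a
BEFORE row trigger may assign to :NEW, which an AFTER row trigger cannot:

CREATE OR REPLACE TRIGGER emp_salary_floor
BEFORE INSERT OR UPDATE ON employees
FOR EACH ROW
BEGIN
  IF :NEW.salary < 1000 THEN
    :NEW.salary := 1000;  -- permitted only in a BEFORE row trigger
  END IF;
END;
/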
5. DELETE
Specify DELETE if you want the database to fire the trigger whenever a DELETE
statement removes a row from the table or removes an element from a nested table.
INSERT
Specify INSERT if you want the database to fire the trigger whenever an INSERT
statement adds a row to a table or adds an element to a nested table.
UPDATE
Specify UPDATE if you want the database to fire the trigger whenever an UPDATE
statement changes a value in one of the columns specified after OF. If you omit OF,
then the database fires the trigger whenever an UPDATE statement changes a value
in any column of the table or nested table.
For an UPDATE trigger, you can specify object type, array, and REF columns after
OF to indicate that the trigger should be fired whenever an UPDATE statement
changes a value in one of the columns. However, you cannot change the values of
these columns in the body of the trigger itself.
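For example (the table, column, and audit table are illustrative), this trigger
fires only when the column listed after OF is updated:

CREATE OR REPLACE TRIGGER emp_salary_audit
AFTER UPDATE OF salary ON employees
FOR EACH ROW
BEGIN
  INSERT INTO salary_audit (employee_id, old_salary, new_salary, changed_on)
  VALUES (:OLD.employee_id, :OLD.salary, :NEW.salary, SYSDATE);
END;
/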
6. TABLE SPACES
A database is divided into one or more logical storage units called
tablespaces. Tablespaces are divided into logical units of storage called
segments, which are further divided into extents. An extent is a collection of
contiguous data blocks.
Default Temporary Tablespace
When the SYSTEM tablespace is locally managed, you must define at least
one default temporary tablespace when creating a database. A locally
managed SYSTEM tablespace cannot be used for default temporary storage.
If SYSTEM is dictionary managed and if you do not define a default
temporary tablespace when creating the database, then SYSTEM is still used
for default temporary storage. However, you will receive a warning in
ALERT.LOG saying that a default temporary tablespace is recommended
and will be necessary in future releases.
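For example (the tablespace name, file name, and size are illustrative), a
default temporary tablespace can be created and assigned like this:

CREATE TEMPORARY TABLESPACE temp_ts
TEMPFILE 'temp01.dbf' SIZE 100M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_ts;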
7. SYSTEM TABLE SPACES
Every Oracle database contains a tablespace named SYSTEM, which Oracle creates
automatically when the database is created. The SYSTEM tablespace is always
online when the database is open.
To take advantage of the benefits of locally managed tablespaces, you can create a
locally managed SYSTEM tablespace, or you can migrate an existing dictionary
managed SYSTEM tablespace to a locally managed format.
In a database with a locally managed SYSTEM tablespace, dictionary managed
tablespaces cannot be created. It is possible to plug in a dictionary managed
tablespace using the transportable feature, but it cannot be made writable.
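A hedged sketch of the migration path: DBMS_SPACE_ADMIN provides the documented
procedure, but the operation has prerequisites (for example, a default temporary
tablespace other than SYSTEM, and an instance started in restricted mode):

EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');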
8. TABLES
A table cluster is a group of tables that share common columns and store related
data in the same blocks. When tables are clustered, a single data block can contain
rows from multiple tables. For example, a block can store rows from both the
employees and departments tables rather than from only a single table.
The cluster key is the column or columns that the clustered tables have in common.
For example, the employees and departments tables share the department_id
column. You specify the cluster key when creating the table cluster and when
creating every table added to the table cluster.
The cluster key value is the value of the cluster key columns for a particular set of
rows. All data that contains the same cluster key value, such as department_id=20, is
physically stored together. Each cluster key value is stored only once in the cluster
and the cluster index, no matter how many rows of different tables contain the
value.
9. An indexed cluster is a table cluster that uses an index to locate data. The cluster
index is a B-tree index on the cluster key. A cluster index must be created before any
rows can be inserted into clustered tables.
Assume that you create the cluster employees_departments_cluster with the cluster
key department_id. Because the HASHKEYS clause is not specified, this cluster is
an indexed cluster. Afterward, you create an index named idx_emp_dept_cluster on
this cluster key.
Indexed Cluster
CREATE CLUSTER employees_departments_cluster (department_id
NUMBER(4)) SIZE 512;
CREATE INDEX idx_emp_dept_cluster ON CLUSTER
employees_departments_cluster;
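Continuing the example, tables are then added to the cluster by naming the
cluster key in each CREATE TABLE (column lists shortened for illustration):

CREATE TABLE departments (
  department_id   NUMBER(4) PRIMARY KEY,
  department_name VARCHAR2(30))
CLUSTER employees_departments_cluster (department_id);

CREATE TABLE employees (
  employee_id   NUMBER(6) PRIMARY KEY,
  last_name     VARCHAR2(25),
  department_id NUMBER(4))
CLUSTER employees_departments_cluster (department_id);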
10. BITMAP INDEX
In a bitmap index, the database stores a bitmap for each index key. In a conventional
B-tree index, one index entry points to a single row. In a bitmap index, each index
key stores pointers to multiple rows.
Bitmap indexes are primarily designed for data warehousing or environments in
which queries reference many columns in an ad hoc fashion. Situations that may call
for a bitmap index include:
The indexed columns have low cardinality, that is, the number of distinct values is
small compared to the number of table rows.
The indexed table is either read-only or not subject to significant modification by
DML statements.
For a data warehouse example, the sh.customers table has a cust_gender column
with only two possible values: M and F. Suppose that queries for the number of
customers of a particular gender are common. In this case, the
customers.cust_gender column would be a candidate for a bitmap index.
Each bit in the bitmap corresponds to a possible rowid. If the bit is set, then the row
with the corresponding rowid contains the key value. A mapping function converts
the bit position to an actual rowid, so the bitmap index provides the same
functionality as a B-tree index although it uses a different internal representation.
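Continuing the sh.customers example, the candidate column can be indexed like
this (the index name is illustrative):

CREATE BITMAP INDEX customers_gender_bix
ON sh.customers (cust_gender);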
11. FUNCTION BASED INDEX
You can create indexes on functions and expressions that involve one or more
columns in the table being indexed. A function-based index computes the value of a
function or expression involving one or more columns and stores it in the index. A
function-based index can be either a B-tree or a bitmap index.
The function used for building the index can be an arithmetic expression or an
expression that contains a SQL function, user-defined PL/SQL function, package
function, or C callout. For example, a function could add the values in two columns.
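For example (table and column assumed), an index on an expression lets
case-insensitive searches use the index:

CREATE INDEX emp_upper_name_idx
ON employees (UPPER(last_name));

SELECT * FROM employees
WHERE UPPER(last_name) = 'SMITH';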
12. DOMAIN INDEX
Domain index is a customized index specific to an application. Oracle Database
provides extensible indexing to do the following:
Accommodate indexes on customized, complex data types such as documents,
spatial data, images, and video clips
Make use of specialized indexing techniques
You can encapsulate application-specific index management routines as an index
type schema object and define a domain index on table columns or attributes of an
object type. Extensible indexing can efficiently process application-specific
operators.
The application software, called the cartridge, controls the structure and content of a
domain index. The database interacts with the application to build, maintain, and
search the domain index. The index structure itself can be stored in the database as
an index-organized table or externally as a file.
13. PARTITIONING
Partitioning enables you to decompose very large tables and indexes into smaller
and more manageable pieces called partitions. Each partition is an independent
object with its own name and optionally its own storage characteristics.
Benefits include:
Increased availability
The unavailability of a partition does not entail the unavailability of the object. The
query optimizer automatically removes unreferenced partitions from the query plan
so queries are not affected when the partitions are unavailable.
Easier administration of schema objects
A partitioned object has pieces that can be managed either collectively or
individually. DDL statements can manipulate partitions rather than entire tables or
indexes. Thus, you can break up resource-intensive tasks such as rebuilding an
index or table. For example, you can move one table partition at a time.
14. Reduced contention for shared resources in OLTP systems
In some OLTP systems, partitions can decrease contention for a shared resource.
For example, DML is distributed over many segments rather than one segment.
Enhanced query performance in data warehouses
15. Range Partitioning
In range partitioning, the database maps rows to partitions based on ranges of values
of the partitioning key. Range partitioning is the most common type of partitioning
and is often used with dates.
List Partitioning
In list partitioning, the database uses a list of discrete values as the partition key for
each partition. You can use list partitioning to control how individual rows map to
specific partitions. By using lists, you can group and organize related sets of data
when the key used to identify them is not conveniently ordered.
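As a sketch of both styles (the tables, partition names, and values are
illustrative):

CREATE TABLE sales_range (
  sale_id   NUMBER,
  sale_date DATE)
PARTITION BY RANGE (sale_date) (
  PARTITION sales_h1_2024 VALUES LESS THAN (TO_DATE('2024-07-01','YYYY-MM-DD')),
  PARTITION sales_h2_2024 VALUES LESS THAN (TO_DATE('2025-01-01','YYYY-MM-DD')));

CREATE TABLE customers_list (
  customer_id NUMBER,
  region      VARCHAR2(10))
PARTITION BY LIST (region) (
  PARTITION p_east VALUES ('NY', 'NJ'),
  PARTITION p_west VALUES ('CA', 'WA'));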
16. Hash Partitioning
In hash partitioning, the database maps rows to partitions based on a hashing
algorithm that the database applies to the user-specified partitioning key. The
destination of a row is determined by the internal hash function applied to the row
by the database. The hashing algorithm is designed to distribute rows evenly
across devices so that each partition contains about the same number of rows.
Hash partitioning is useful for dividing large tables to increase manageability.
Instead of one large table to manage, you have several smaller pieces. The loss of a
single hash partition does not affect the remaining partitions and can be recovered
independently. Hash partitioning is also useful in OLTP systems with high update
contention. For example, a segment is divided into several pieces, each of which is
updated, instead of a single segment that experiences contention.
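For example (table and column illustrative), the database spreads rows over the
requested number of partitions by hashing the key:

CREATE TABLE order_items_hash (
  order_id NUMBER,
  item_id  NUMBER)
PARTITION BY HASH (order_id)
PARTITIONS 4;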
17. MATERIALIZED VIEWS
Materialized views are query results that have been stored or "materialized" in
advance as schema objects. The FROM clause of the query can name tables, views,
and materialized views. Collectively these objects are called master tables (a
replication term) or detail tables (a data warehousing term).
Materialized views are used to summarize, compute, replicate, and distribute data.
They are suitable in various computing environments, such as the following:
In data warehouses, you can use materialized views to compute and store data
generated from aggregate functions such as sums and averages.
A summary is an aggregate view that reduces query time by precalculating joins and
aggregation operations and storing the results in a table. Materialized views are
equivalent to summaries. You can also use materialized views to compute joins
with or without aggregations.
18. In materialized view replication, the view contains a complete or partial copy of a
table from a single point in time. Materialized views replicate data at distributed
sites and synchronize updates performed at several sites. This form of replication
is suitable for environments such as field sales, where databases are not always
connected to the network.
In mobile computing environments, you can use materialized views to download a
data subset from central servers to mobile clients, with periodic refreshes from the
central servers and propagation of updates by clients to the central servers.
19. QUERY TRANSFORMATIONS
The optimizer employs many query transformation techniques. Some of the most
important are described below.
OR Expansion
In OR expansion, the optimizer transforms a query block containing top-level
disjunctions into the form of a UNION ALL query that contains two or more
branches. The optimizer achieves this goal by splitting the disjunction into its
components, and then associating each component with a branch of a UNION ALL
query.
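For illustration (table and predicates hypothetical), a query such as

SELECT * FROM employees
WHERE department_id = 10 OR job_id = 'CLERK';

can be expanded into roughly

SELECT * FROM employees WHERE department_id = 10
UNION ALL
SELECT * FROM employees WHERE job_id = 'CLERK'
AND LNNVL(department_id = 10);

where the LNNVL condition prevents rows that satisfy both disjuncts from being
returned twice.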
View Merging
In view merging, the optimizer merges the query block representing a view into the
query block that contains it.
Predicate Pushing
In predicate pushing, the optimizer "pushes" the relevant predicates from the
containing query block into the view query block.
20. Star Transformation
Star transformation is an optimizer transformation that avoids
full table scans of fact tables in a star schema.
In-Memory Aggregation
The key optimization of in-memory aggregation is to aggregate while scanning.
Cursor-Duration Temporary Tables
To materialize the intermediate results of a query, Oracle
Database may implicitly create a cursor-duration temporary
table in memory during query compilation.
Table Expansion
In table expansion, the optimizer generates a plan that uses
indexes on the read-mostly portion of a partitioned table, but not
on the active portion of the table.
Join Factorization
In the cost-based transformation known as join factorization, the
optimizer can factorize common computations from branches of
a UNION ALL query.
21. Concurrency Control
Oracle’s multiversion concurrency control differs from the concurrency
mechanisms used by most other database vendors. Read-only queries are
given a read-consistent snapshot, which is a view of the database as it
existed at a specific point in time, containing all updates that were
committed by that point in time, and not containing any updates that were
not committed at that point in time. Thus, read locks are not used and read-
only queries do not interfere with other database activity in terms of locking.
22. Oracle supports two ANSI/ISO isolation levels, “read committed” and
“serializable”. There is no support for dirty reads, since it is not needed. The
two isolation levels correspond to whether statement-level or transaction-level
read consistency is used. The level can be set for a session or an individual
transaction. Statement-level read consistency is the default.
Oracle uses row-level locking. Updates to different rows do not conflict. If two
writers attempt to modify the same row, one waits until the other either commits or
is rolled back, and then it can either return a write-conflict error or go ahead and
modify the row. Locks are held for the duration of a transaction.
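For example, either of the following selects serializable mode, for a single
transaction or for the rest of the session respectively:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;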
23. Basic Structures for Recovery
In order to understand how Oracle recovers from a failure, such as a disk crash, it is
important to understand the basic structures that are involved. In addition to the data
files that contain tables and indices, there are control files, redo logs, archived redo
logs, and rollback segments.
The control file contains various metadata that are needed to operate the database,
including information about backups.
Oracle records any transactional modification of a database buffer in the redo log,
which consists of two or more files. It logs the modification as part of the operation
that causes it and regardless of whether the transaction eventually commits. It logs
changes to indices and rollback segments as well as changes to table data. As the
redo logs fill up, they are archived by one or several background processes (if the
database is running in archivelog mode).
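A sketch of checking and enabling this mode (run as a privileged user; the
database must be cleanly shut down and mounted before the mode change):

SELECT log_mode FROM v$database;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;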
24. The rollback segment contains information about older versions of the data (that is,
undo information). In addition to its role in Oracle’s consistency model, the
information is used to restore the old version of data items when a transaction that
has modified the data items is rolled back.
To be able to recover from a storage failure, the data files and control files
should be backed up regularly. The frequency of backups determines the
worst-case recovery time.
Managed Standby Databases
To ensure high availability, Oracle provides a managed standby database feature.
(This feature is the same as remote backups, described in Section 17.10.) A standby
database is a copy of the regular database that is installed on a separate system. If a
catastrophic failure occurs on the primary system, the standby system is activated
and takes over, thereby minimizing the effect of the failure on availability. Oracle
keeps the standby database up to date by constantly applying archived redo logs that
are shipped from the primary database.
25. DATABASE ADMINISTRATION TOOLS
The intent of this book is to allow you to quickly and efficiently create an Oracle
database, and to provide guidance in basic database administration.
The following are some of the products, tools, and utilities you can use in
achieving your goals as a database administrator.
Oracle Universal Installer (OUI)
The Oracle Universal Installer installs your Oracle software and options. It can
automatically launch the Database Configuration Assistant to install a database.
Database Configuration Assistant (DBCA)
The Database Configuration Assistant creates a database from templates that are
supplied by Oracle, or you can create your own. It enables you to copy a
preconfigured seed database, thus saving the time and effort of generating and
customizing a database from scratch.
26. Database Upgrade Assistant
The Database Upgrade Assistant guides you through the upgrade of your existing
database to a new Oracle release.
Oracle Net Manager
Net Manager is an alternate tool for configuring and managing Oracle Database
networks.
Oracle Enterprise Manager
The primary tool for managing your database is Oracle Enterprise Manager,
a web-based interface. After you have installed the Oracle software, created
or upgraded a database, and configured the network, you can use Oracle
Enterprise Manager for managing your database. Oracle Enterprise Manager also
provides an interface for performance advisors and for Oracle utilities such as
SQL*Loader and Recovery Manager.
27. REPLICATION
Replication is the process of copying and maintaining database objects in multiple
databases that make up a distributed database system. Replication can improve the
performance and protect the availability of applications because alternate data
access options exist. For example, an application might normally access a local
database rather than a remote server to minimize network traffic and achieve
maximum performance. Furthermore, the application can continue to function after
a failure of the local server as long as other servers with replicated data
remain accessible.
DISTRIBUTED DATABASES
A distributed database is a set of databases stored on multiple computers that
typically appears to applications as a single database. Consequently, an application
can simultaneously access and modify the data in several databases in a network.
Each Oracle database in the system is controlled by its local Oracle server but
cooperates to maintain the consistency of the global distributed database.
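As a sketch (the link name, credentials, and service name sales_db are
illustrative), a database link lets one Oracle database query tables in another:

CREATE DATABASE LINK sales_link
CONNECT TO remote_user IDENTIFIED BY remote_pwd
USING 'sales_db';

SELECT * FROM orders@sales_link;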
28. EXTERNAL DATASOURCES
An external data source is a connection to an external database. External data
sources usually contain data that does not change very much or data that is too large
to bring into the Active Data Cache.
External data source configurations can be exported and imported using ICommand,
but you cannot import or edit the contents using ICommand, Enterprise Link, or
Architect. Passwords are entered in clear text. You cannot use DSNs (data source
names).
29. EXTERNAL TABLES
The external tables feature is a complement to existing SQL*Loader functionality. It
enables you to access data in external sources as if it were in a table in the database.
Prior to Oracle Database 10g, external tables were read-only. However, as of Oracle
Database 10g, external tables can also be written to. Note that SQL*Loader may be
the better choice in data loading situations that require additional indexing of the
staging table. See Behavior Differences Between SQL*Loader and External Tables
for more information about how load behavior differs between SQL*Loader and
external tables.
To use the external tables feature, you must have some knowledge of the file format
and record format of the datafiles on your platform if the ORACLE_LOADER
access driver is used and the datafiles are in text format. You must also know
enough about SQL to be able to create an external table and perform queries against
it.
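A minimal sketch using the ORACLE_LOADER access driver (the directory path, file
name, and columns are illustrative):

CREATE DIRECTORY ext_dir AS '/data/loads';

CREATE TABLE employees_ext (
  employee_id NUMBER,
  last_name   VARCHAR2(25))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',')
  LOCATION ('employees.csv'))
REJECT LIMIT UNLIMITED;

SELECT * FROM employees_ext;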