The document discusses the capabilities of RMAN, the Oracle database backup and recovery tool. It notes that RMAN offers flexibility, knowledge of database internals, data file checking, and quick recovery and cloning. While the syntax can be complex and practical knowledge is scarce, RMAN allows efficient backups in various forms, including incrementals, retention settings, compression, and automatic control file backups. RMAN scripts can implement backup schedules and clean up backups and archive logs. RMAN also enables restore, recovery, point-in-time recovery, and bare database recovery. Control files store limited backup information locally, while catalogs centralize information but require a catalog database.
30. RMAN
more than backup and restore
Rick van Ek
email : rickvek@xs4all.nl
Van Ek IT-Consultancy BV
31. Topics
• Why use RMAN
• Why not use RMAN
• Control file versus catalog
• Run blocks versus scripts
• List versus report
• Backup in all flavours / restore / recover
• Added value
• Efficiency and cost reduction
32. Why use RMAN
• Flexibility
• Internal knowledge of database
• Additional check of contents data files
• The method in case of ASM
• Transparency file system / ASM
• Quick recovery and clone process
• Transportable tablespace
The main point we are trying to make here is that RMAN keeps growing in capabilities, which makes it ever more interesting to use.
There are day-to-day jobs which can easily be done with this tool, adding more to the value of using it.
More new methods of storage are becoming available, and they can be accessed transparently with this tool. Migrating to new storage can also be done easily with this tool.
Transportable tablespaces without downtime of the database are only possible with this tool. A great process for loading data warehouses.
33. Why not use RMAN
• Syntax complexity
• Reputation
• No GUI
• External tools not flexible (including Grid Control)
• Little practical knowledge available
Well, if there is no RMAN knowledge available and there is a tool which gives GUI access, then use that. When database knowledge is available, do not use the (too simple) GUI tools.
The reputation of RMAN is comparable to that of a Skoda automobile: it used to be not so stably built, and users still think of it that way. In reality the quality has improved dramatically, and there will be more improvement in the next release.
There is little simple documentation available which clarifies how to use it.
External tools leave little room for specific backup schedules. They expose only a very limited set of commands, thus losing the advantage of this tool. If one uses them as a plain file copy, it is more complex and has no added value.
The problem with any external tool for backing up a database is that you are responsible for tracking the contents of the backups: where are the archive logs, and where are the latest data file backups?
34. Backup in all flavours
• Backup database plus archive log files;
• Incremental backups are possible
• KEEP to set a different retention on a backup set
(weekly, monthly, yearly backups)
• Compression of backup sets
• Throttle in case of bandwidth contention
• Spfile is automatically stored if control file
autobackup is active.
• Automatic check of used data blocks.
There is a lot to be said about the backup command; it ranges from a simple backup to a backup for a standby without archive logs.
A lot can be tuned to make it run optimally. Most examples in the various documentation are in a simple format. There are not enough examples of full backup schedules, with exceptions and built-in checks, to fulfil the complete business plan.
It is possible to have a daily backup plan and, besides that, weekly, monthly and quarterly backup plans. This can be done without redundant data and with automatic cleanup of the backups. The way it is done is simple and efficient.
Using RMAN means that every time it backs up the data files it checks the blocks for free. No other external backup tool has this capability; the only alternative is to implement db-verify in the pre-backup scripts. But then you are building in complexity.
One point which is hardly mentioned in any documentation is that an "automatic backup of control file" also implicitly backs up the spfile. A static init.ora file is not backed up by any method of this tool.
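The weekly/monthly/yearly retention mentioned above can be sketched with the KEEP option. A minimal example, assuming a 10g-style syntax (the exact KEEP clauses differ per version, and the tag and interval are made up):
RMAN> backup database tag 'monthly'
      keep until time 'sysdate+92' ;
A backup taken with KEEP is exempt from the normal retention policy until its own expiry date, so it does not create redundant data in the daily scheme.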
35. Example of backup script
run {
sql 'alter session set NLS_DATE_FORMAT="DD-MM-YYYY:HH24:MI:SS"';
resync catalog ;
backup as compressed backupset database plus archivelog;
crosscheck backup ;
crosscheck archivelog all;
crosscheck backup of controlfile ;
delete noprompt obsolete ;
}
The first two lines are only in place because RMAN gives an error during an implicit resync. It is a sporadic error which occurs frequently when an agent is also installed. No root cause investigation has been done, but there are two workarounds.
A compressed backup set is very efficient; as the tool knows the database internals it can do a lot. Testing with checksums on blocks showed that data files of tables are binary identical, but indexes are not. Are they rebuilt?
Also, no redo log files can be found in the backup set listing; are they recreated?
Delete obsolete removes backups and archive log files. The delete list option should not be used in a Data Guard environment.
36. Clean up backups and archives
• Delete archive logs backed up 1 times to disk
• Delete obsolete backups
• Administration is kept automatically
• Clean up archive log files in multiple locations
• Cleanup of archive logs on a standby is not possible
– Exception: logical standby
– It should become possible (Oracle 11g ??)
Administration is not kept for nothing, so we can do things with it: we can delete backups and/or archive log files which are no longer needed.
Fulfilling this is as simple as 'delete obsolete'. What is deleted is no longer available for restore; that too is noted in the administration. It does not mean you cannot get around it: put the backup files back on disk, catalog them, and they are available again.
If multiple archive log destinations are defined (not the alternate ones), they can be cleaned up with 'delete all input'. This does not include the location of the standby database. Cleaning up the archive log files while the standby is kept consistent needs a bit of scripting around RMAN. Possibly there is a way of doing it, but I have not found it yet.
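The cleanup described above condenses to two maintenance commands. A sketch — the one-week window is just an example:
RMAN> delete noprompt archivelog all
      backed up 1 times to device type disk
      completed before 'sysdate-7' ;
RMAN> delete noprompt obsolete ;
The first deletes only archive logs that already made it into a backup; the second applies the configured retention policy to everything else.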
37. Restore / Recovery
• Restore datafile;
• Restore database ;
• Recover database ;
• Recover data block;
• Set until scn/sequence/time ...... PITR
• Create instance if bare recovery is chosen.
A restore or recover is possible on different levels; the higher the level one chooses, the more likely it is that you need to shut down the database.
One thing I certainly do not like is shutting down a production database. So if one has a corrupt data block, then restore or recover just that data block; you do not need to stop the database for that. One does need a consistent backup set, so first a full backup of the database with the correct block has to be fed to RMAN. Then it goes through all backed-up archive log files to see if there were changes in them for that data block. When all is done the block is marked healthy.
This is the reason why one should always run in archive log mode, even on databases of which one states the data can easily be rebuilt. Most of the time they are glad they do not have to go offline.
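The block repair described above is a one-liner. A sketch, with made-up file and block numbers (in 11g the command is spelled recover datafile ... block ...):
RMAN> blockrecover datafile 6 block 23 ;
The file and block numbers usually come from v$database_block_corruption or from the ORA-01578 error message.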
38. Bare Restore of database
• Start instance without init file from RMAN
• Set location of backup
• Restore spfile and control file from autobackup
• Place spfile and control file in correct location
• Restart instance with spfile and control file
• Restore data files
• Recover database
There are some specifics to a bare restore. First, the processes are started by RMAN without a pfile or spfile; these need to be restored first, so tell RMAN where the backup files can be found.
Then restore the spfile and control files; they will be placed in $ORACLE_HOME/dbs or %ORACLE_HOME%\database. Put the files in the correct location.
Start the instance with this spfile and control file(s). From here on it is simply a restore and recover procedure.
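The whole bare-restore flow above condenses to a few commands. A sketch, assuming control file autobackups were enabled; the DBID is a placeholder you must replace with your own:
RMAN> startup nomount ;             # RMAN starts a dummy instance
RMAN> set dbid 1234567890 ;         # placeholder DBID
RMAN> restore spfile from autobackup ;
RMAN> startup force nomount ;       # restart with the restored spfile
RMAN> restore controlfile from autobackup ;
RMAN> alter database mount ;
RMAN> restore database ;
RMAN> recover database ;
RMAN> alter database open resetlogs ;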
39. Control file
• Stores information of backup over limited period
• Backup information is hard to restore
• You do not need anything else
• Powerful at local usage
• Less powerful at remote usage
The control file has at its end an area where the backup information is stored. The size depends on what information is stored and how long it should be kept; only information within the defined period is retained.
As all information is local, the scripts run much faster, especially when there is a bad connection to the catalog. Whenever any backup or restore information changes, RMAN wants to write this to the catalog; that process is called resync catalog. So, disconnected from the catalog, it is much faster at local operations.
As soon as there is work to be done concerning restore/recover of the database, more has to be done on the command line, because the location of the RMAN files has to be made known to the control file, which lost this information during its own restore.
Cloning a database (for standby) on a different server is also more complicated.
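How long the control file keeps this backup information is governed by a single init parameter. A sketch in SQL*Plus — 30 days is just an example, the default is 7:
SQL> alter system set control_file_record_keep_time = 30 scope=both ;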
40. Catalog
– Pros
• Central information over all backups
• Throughput information statistics
• Backup information over longer period available
• Information about incarnations
• Powerful at local and remote usage
• Central storage of scripts
– Cons
• Needs additional database instance
• Slow on implicit / explicit resync catalog
Great if there are multiple databases/servers: the backup, restore and other scripts can then be stored in the catalog. This gives a single point of execution and a single point of maintenance.
It is also an advantage when refreshing a cloned database on a different server, as all information about the backup files is immediately available.
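Setting up such a catalog is a one-time job. A sketch, where catdb, the rman user, its password and the tools tablespace are placeholder names:
SQL> create user rman identified by <password>
     default tablespace tools quota unlimited on tools ;
SQL> grant recovery_catalog_owner to rman ;
$ rman catalog rman/<password>@catdb
RMAN> create catalog ;
$ rman target / catalog rman/<password>@catdb
RMAN> register database ;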
41. LIST
• Copies of data files / control files
• Contents of backup set
• Incarnations
• Names of scripts stored in catalog
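A few LIST commands matching the bullets above; all work against the control file, except the last one, which needs a catalog:
RMAN> list copy of controlfile ;
RMAN> list backup summary ;
RMAN> list incarnation ;
RMAN> list script names ;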
42. Report
• Status of database
• Status of tablespaces
• Contents of stored scripts
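The matching REPORT commands, as a sketch; note that the contents of a stored script are shown with print script, which needs the catalog and an existing script name (daily_backup is made up here):
RMAN> report schema ;
RMAN> report need backup days 3 ;
RMAN> report obsolete ;
RMAN> print script daily_backup ;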
43. Cleanup / crosscheck
• Removes backup files not needed any more
• Removes archive logs not needed any more
• Crosscheck on backups
• Crosscheck on ....
If one wants to rely on the backup tool to do all the administration, then this tool also needs to make sure everything is consistent.
Files that are no longer needed should be deleted; keeping them only causes confusion, just as you can find your book more easily when the bookcase is tidy.
Not only obsolete backups/archives should be checked, but also the files that are still needed: are they complete, with no missing parts or archive log files? Otherwise the whole process would be worthless.
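In command form the consistency check above is a pair of crosschecks followed by deleting whatever they flagged:
RMAN> crosscheck backup ;
RMAN> crosscheck archivelog all ;
RMAN> delete noprompt expired backup ;
RMAN> delete noprompt expired archivelog all ;
Crosscheck marks records whose files are missing on disk or tape as EXPIRED; delete expired then removes those records from the administration.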
44. Scripts
• Text file containing multiple commands
• Text file containing run block
• Can be stored in catalog database
– Locally stored scripts valid only for that instance
– Global stored scripts valid for all connected
instances
– Single point of maintenance
The main thing is keeping the commands together; that is usually what one does with a script.
When a script is stored in the catalog as a local script, it is only available for that instance. When stored as a global script, it is available for all database instances known to the catalog. Great if one needs a single point of maintenance.
It can then easily be scheduled, for instance through Enterprise Manager. All output must then be checked for errors, though with shell scripting around it this becomes easier, as the errors can be filtered and mailed.
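A global stored script plus its execution might look like this. A sketch — daily_backup is a made-up name, and global scripts require a reasonably recent RMAN release:
RMAN> create global script daily_backup {
      backup as compressed backupset database plus archivelog ;
      delete noprompt obsolete ;
      }
RMAN> run { execute script daily_backup ; }
When no local script of the same name exists, execute script falls back to the global one.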
45. Run blocks
• Override default channels (clone database)
• Compiles the lines in the run block
• One or more job steps
• Returns to defaults at the end
• Can be generated by RMAN
– memory scripts
– Here the intelligence of RMAN shows.
Settings made in a run block remain available throughout the run. This can be an SQL statement which does an "alter session set ......".
Only within a run block can a default channel be overruled; this is needed if one wants to clone a database.
RMAN generates its own run block based on the commands given in the user's run block. These memory scripts contain the specific knowledge RMAN has of an Oracle database.
46. Added Value Part
• Clone database for test database
• Clone database for standby database
• Cleanup backup files
• Cleanup archive log files
• Transportable tablespaces (loading DWH)
• Move datafile into and out of ASM
47. Clone database with RMAN
• Make physical copy of database
• To the same or a different server
• Set until (scn/sequence/date_time)
• With or without renaming files / dbname
– db_file_name_convert
– log_file_name_convert
– Automatic rename of instance / dbname
It is even harder if a point-in-time recovery is needed to recover information that is no longer present in the current database.
Try, based on a point in time, to restore the database in a different location with a different database name, in order to retrieve the then-current sequence number — while the production site is hanging, waiting for this information.
Running this scenario proves the power of the tool: it can be done in little more time than a normal recovery. The tool itself tells you which backup sets are needed; just provide them and run it.
It can be done manually, but it takes more time, a certain dose of luck and lots of patience. Who can afford that in these hectic times? To be honest, I am a bit lazy at that and always looking for the easy way.
48. Pre-requisites for
clone database
• Instance needs to be started in nomount
• RMAN connects to:
– Target ( database to be cloned)
– Auxiliary ( the clone instance )
– Catalog (Optional )
• Connect as user sys to databases
• Password file needs to be available
• Init file needs to be there (modified copy of
target)
49. Sample scripts clone database
Clone database to different database name and rename datafiles
(logfile only through init parameters)
RMAN> run{
allocate auxiliary channel AUX1 device type disk ;
duplicate target database to SID
db_file_name_convert=('SID1','SID');
}
Clone database to different database name
RMAN> run{
allocate auxiliary channel AUX1 device type disk;
duplicate target database to SID ;
}
Clone database with point in time
RMAN> run{
set until sequence xxxxx ;
allocate auxiliary channel AUX1 device type disk;
duplicate target database to SID ;
}
50. Clone database for standby
• Compression for transporting database
• No archive logs needed
• Highly efficient (better than rsync on Unix)
• Starts database as standby when finished
• Control file for standby database
51. Pre-requisites for
clone standby database
• Same as for clone database
• Needs to connect to remote database
• No point in time
• No archive logs needed in backup
• Backup control file for standby
• Backup files need to be accessible
52. Sample script clone
standby database
On the primary server – connected only to the local database.
run {
backup database ;
backup current controlfile for standby ;
}
====> transfer the backup files to the standby server
====> if there is no catalog database then RMAN> catalog start with
'<location of your backup>' ;
On the standby server – connected to both the primary and the standby database.
RMAN> run{
allocate auxiliary channel AUX1 device type disk ;
duplicate target database for standby nofilenamecheck ;
}
53. Transportable tablespace
• The only way to transport a tablespace without
taking it read-only in the source database
• Takes the backup of the data files of the
tablespace needed
• Metadata is still imported with the imp tool
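RMAN can drive the whole transportable-tablespace-from-backup process with one command. A sketch, where users and the two directories are placeholders:
RMAN> transport tablespace users
      tablespace destination '/stage/tts'
      auxiliary destination '/stage/aux' ;
RMAN creates a temporary auxiliary instance, restores the tablespace from backup, and leaves the data files plus the export dump in the destination directory, without touching the source tablespace.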
54. Efficiency and cost reduction
• Compression option
• Only used data blocks
• Indexes ? Rebuild ?
• Redo log files, rebuild?
• Effective ratio is 10:1
– Less storage needed
55. Known issues
• Errors on implicit resync of catalog
– Correction of date in recover.bsq
– Set nls_..... $ORACLE_HOME/nls/data
• Throttle backup
– Too much bandwidth from data files
– Tape unit cannot stream
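Throttling is done per channel with the RATE option. A sketch with a made-up 10 MB/s limit:
RMAN> run {
      allocate channel t1 device type sbt rate 10M ;
      backup database ;
      }
RATE caps how fast the channel reads the data files, leaving bandwidth for normal work — but set it high enough that the tape unit can still stream.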
56. Bandwidth statistics
/*
desc : calculation of backup read bandwidth
*/
col "read bytes/sec" format a12
col "Start time" format a15
col "End time" format a15
col input_type format a10
col input_bytes format 9999999999
select db_name
, input_type
, to_char(start_time,'HH:MI DD-MM-YY') "Start time"
, to_char(end_time,'HH:MI DD-MM-YY') "End time"
, input_bytes
, input_bytes_per_sec_display "read bytes/sec"
from rc_rman_backup_job_details
where input_type = 'DB FULL'
and status = 'COMPLETED'
and db_name = 'sid'
order by db_name, start_time
/
57. Conclusion
• Here is your advantage
– Using your backup files
– No need for testing a restore
– Easy to make duplicate database for testing
– Datafile check during backup
– Compression means reduction of size 10:1
– Build your standby easily
58. Finally
• Make notes of your most-used commands
• Mine are in USING RMAN DAY 2 DAY
– They are on the USB stick
• Questions ????
• If you forgot your question, mail me.