
Chapter 12

Logical Backup and Recovery

Failure Scenarios

Statement Failure

Exercises

User Process Failure

Exercises

Instance Failure

Exercises

User Error

Exercises

Media Failure

Exercises

Logical Backups

Using EXPORT

Exercises

EXPORT Modes

USER Mode

TABLE Mode

FULL Mode

Exercises

Using EXPORT for Backups

Exercises

Using IMPORT

Using INCEXP

Using INCFIL

Using INCVID

Database Recovery with IMPORT

Exercises

Read Consistency and Database Export

Exercises

Using the EXPORT Direct Path

Exercises

Using the Right Character Set

Exercises

EXPORT, IMPORT, and Archived Redo Logs

Exercise

Chapter Summary

Two-Minute Drill


Chapter 12

Logical Backup and Recovery

In this chapter, you will understand and demonstrate knowledge in the following areas:

Failure scenarios
Logical backups

Logical export and import are useful for many functions, including the migration of database objects between different Oracle databases and different versions of Oracle. These activities are conducted with two utilities that come with the software distribution of the Oracle database server: EXPORT, for backup of database objects, and IMPORT, for recovery of database objects from export dump files. EXPORT and IMPORT work closely together; in fact, the only tool that can read an EXPORT file is the IMPORT utility. In addition, there is a discussion of database failure scenarios. These areas of Oracle backup and recovery comprise 10 percent of the material tested in OCP Exam 3.

Failure Scenarios

In this section, you will cover the following topics related to failure scenarios:

Statement failure
User process failure
Instance failure
User error
Media failure

Dealing with database problems is the cornerstone of the job description for DBAs. The effort required to resolve a database problem depends greatly on the type of problem encountered. Failure scenarios in the Oracle database can be divided into five general categories, which will be discussed in this section: statement failure, user process failure, instance failure, user error, and media failure. Each category has different implications for the DBA intervention needed to recover from the situation. This section discusses each area as it pertains to the effort required for resolution from the DBA's perspective.

Statement Failure

In the course of normal transaction processing, many things can happen that cause problems for users executing their SQL statements. A statement failure occurs when Oracle cannot process a statement issued by a user. There are several causes for statement failure. First of all, a user may be attempting to issue a statement referencing a table that does not exist or that they do not have permission to use. In this case, Oracle issues the user an error that details both the phrase of the statement that contained the error and a reference to a special code, called the Oracle error message code, to help solve the problem. The user can then look the error up in the Server Messages Guide that Oracle publishes for all its database server releases. The following code listing shows the output from SQL*Plus when the user issues a statement that generates an error:

select employee_id, name, dob, title
from us_emp
where employee_id = 49585;

from us_emp
     *
ERROR at line 2:
ORA-00942: table or view does not exist

Notice that in this situation there are three components to the interactive reply Oracle gives. First, the reply repeats the portion of the statement that generated the error. To highlight the specific area of the statement that failed, Oracle places an asterisk (*) under the offending clause or keyword. Second, there is an identification of the line number that contains the error. Third, there is the Oracle error code and message describing the error that occurred. In this case, the user's statement referenced a table that does not exist within the scope of what the user can see or manipulate.

Recovery from this error situation consists of Oracle immediately rolling back any changes made by the statement automatically. The involvement of a DBA may be minimal or extensive, depending on the desired outcome. On one hand, the user may have typed in the wrong table name, thus relieving the DBA of any involvement in the solution other than to direct the user to an appropriate data model for the table layout of the database. On the other hand, the fact that this user could not issue this statement may indicate a deeper problem related to privileges and/or roles granted to this user, or perhaps even the erroneous dropping of the table that does not exist. Although in many (if not most) cases, the DBA will not be required to analyze the problem and present a solution, there could be trouble lurking behind this seemingly innocent error message.


Exercises

  1. What is statement failure? How is it resolved?
  2. What are the three components of the error message Oracle provides when statement failure occurs?

User Process Failure

The failure of a user process requires more intervention from the Oracle database. A user may cancel or terminate the statement they are executing. A classic example of user process failure occurs when the user is attempting to change data in a table but cannot due to some locking situation on the table. In order to regain control of the SQL*Plus session, stop waiting, and move on to another task, the user may choose to cancel the statement by pressing CTRL-C in SQL*Plus. The issuance of a cancel command causes the user process to fail. Oracle may also cancel a user process if the process is trying to do something Oracle won't let it do. In most cases, the cancellation of a user process will not cause the user's session to fail, although there is a slight chance that this may occur.

If a user process terminates abnormally, then Oracle will have to get involved in the cleanup effort for that terminated process. Oracle has a special background process in the database architecture that handles this type of situation. The name of the process is PMON, the process monitor background process. When a user process fails, PMON is there to handle automatic process recovery for the database in several areas. Some of those areas are listed in the bullet points below:

Rollback on the failed process
Release of locks on tables and internal latches the process acquired
Removal of that process identifier from active processes in the V$ performance views

In the event that a user cancels his or her own process, no intervention is required from the DBA. This fact is due to Oracle’s ability to handle process failure recovery automatically in the areas listed in the previous bullet points. However, despite the utility it serves in the area of user process monitoring and cleanup, PMON does not do it all. If a process is killed by Oracle, then PMON will not execute any cleanup activities on it. Further, if it is an Oracle background process that has failed, the instance can no longer run. The intervention of a DBA in this situation will be to restart the instance.
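The cleanup PMON performs can be observed in the V$ performance views mentioned above. The following sketch assumes a hypothetical application user named HRAPP; the query itself uses only standard V$SESSION columns:

```sql
-- Hypothetical check: list sessions for one application user.
-- After PMON cleans up a failed user process, its row no longer
-- appears in this view.
SELECT sid, serial#, username, status
FROM   v$session
WHERE  username = 'HRAPP';
```

Running the query before and after a user process fails shows the failed session disappearing from the view once PMON has finished its work.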


Exercises

  1. What is user process failure? How does the user cancel SQL statements?
  2. What background process handles memory cleanup in user process failure?

Instance Failure

The discussion of PMON’s inability to handle the failure of another background process and the subsequent fact that the instance will not be able to continue running in the event of background process failure provides a segue into the next type of failure that can occur in Oracle. This type of failure is called instance failure. Instance failure is a serious situation in most Oracle database environments. If the instance fails, then no users are able to access the database, and the DBA must determine the solution for the problem and restart the database.

Consider some of the potential causes for instance failure. There are many different things that may cause an instance to fail, including a problem with the memory of the host system running Oracle, a power outage, or disk corruption. Although instance failure requires the DBA to intervene in order to restart the database, the initiation and completion of the recovery of the instance is handled by the Oracle background process SMON. SMON handles several different components of instance recovery, including the rebuilding of the Oracle SGA, rolling forward all transactions in progress at the time of the crash, and then rolling back all uncommitted work that was in progress at the time the database crashed.

As mentioned, SMON handles instance recovery automatically, requiring only that the DBA restart the Oracle database. At the time that the DBA starts the database, SMON begins the tasks required to restore the instance to full operational capacity. The operations carried out by SMON may be somewhat extensive, depending on the transaction volumes that were occurring on the database at the time the database failure occurred.

The real effort involved in managing an instance failure on the part of the DBA is to figure out the problem or problems that caused the instance failure in the first place. If the failure is due to a problem with the hardware of the machine hosting Oracle, then the DBA may need to work with the hardware vendor or the internal hardware support organization to install new parts and make the machine that runs the Oracle database operational again. If the problem that caused instance failure is due to the corruption of a disk, a few things will happen. First, since DBWR and LGWR handle the writing of data to and from disk, they are the background processes that will detect the corruption. As with all background processes, DBWR and LGWR maintain trace files that log all activities associated with their execution. When a situation prevents either of them from writing data to the disk, the background process that experiences the problem writes a message to its own trace file. In addition, the process writes to the special alert log that is maintained for the entire database instance. If the instance fails and the DBA wants to find out why, the DBA can look in the trace files for the Oracle background process(es) and/or the alert log. There, the message produced by the background process that encountered the error will appear, telling the DBA more about the problem encountered. This information will require some interpretation in order to resolve the hardware difficulty. Once the DBA has identified the cause of the instance failure and resolved any hardware problems, the DBA should restart the instance and let Oracle's SMON process handle the rest.
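In practice, finding the relevant message usually means scanning the alert log for Oracle error codes. The sketch below simulates a tiny alert log, since the real file lives under a site-specific directory; the file name, path convention, and the ORA-01157 message are placeholders for illustration:

```shell
# Simulate a fragment of an alert log (a real alert log lives in a
# site-specific trace directory; this file name and message are
# placeholders for illustration only).
printf '%s\n' \
  'Thread 1 advanced to log sequence 42' \
  'ORA-01157: cannot identify/lock data file 3 - see DBWR trace file' \
  > alert_PROD.log

# Pull out only the Oracle error messages, as a DBA would after a crash
grep 'ORA-' alert_PROD.log
```

The grep isolates the error lines the DBA must interpret before restarting the instance.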


Exercises

  1. What is instance failure? What causes instance failure?
  2. What background process handles instance failure?

User Error

The users of an Oracle database may experience problems as a result of mistakes such as accidentally deleting data or database objects from the Oracle database. This scenario, although not high on the priority list of many organizations, can test the strength or weakness of the DBA's backup and recovery skills. Sometimes the DBA can prevent the problem in production environments by not granting users or developers any privileges that allow them to create, alter, or drop database objects in the application's production environment. Though it can't be done in every environment, this configuration prevents users other than the DBA from adding or removing objects in the production database.

In many database environments, data that was changed, removed, or added in error by a user can be corrected by the user as well. The rollback segment, a structure used in conjunction with transaction processing, plays an ideal role in the database by allowing easy recovery from data changes that were made accidentally. If, for example, the user issues a statement such as delete from tablename and inadvertently leaves off the where clause, the rollback statement allows the user to correct the mistake immediately.

There is one important thing to remember if the user has issued a statement that causes accidental alteration of data in the database. Do not commit the change. If, for example, a user issues a statement that changes the values stored in a column called LASTNAME for a table called EMPLOYEE from their own last name to "DOE," it is a simple task to discard the change in favor of the original data with the use of a rollback statement. However, if the statement is issued, then the change is committed, and then the user realizes it was a mistake, there could be a serious issue with data corruption on the database. In this situation, it pays to ensure that any user of the database is limited by the privileges and roles they are given so they can execute only the functions they are responsible for and no more.
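The scenario above can be sketched as the following SQL*Plus session; the EMPLOYEE table and LASTNAME column are the hypothetical examples from the text, not objects from a real schema:

```sql
-- Accidental update: the WHERE clause was left off by mistake,
-- so every row's LASTNAME is changed to 'DOE'
UPDATE employee
SET    lastname = 'DOE';

-- The change has not been committed, so it can be discarded entirely
ROLLBACK;

-- The original LASTNAME values are back in place
SELECT lastname FROM employee;
```

Had a COMMIT been issued between the UPDATE and the ROLLBACK, the original values would be unrecoverable by this method, and the DBA would have to fall back on a backup.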

In order to resolve this situation, the DBA must rely on a backup strategy. There are a couple of appropriate strategies for this situation. The first, and arguably most effective, involves the use of the logical backup and recovery options provided by the EXPORT and IMPORT tools. Particularly during development periods, the DBA should consider daily or weekly exports of database objects using the EXPORT tool, in addition to other database backup plans in place. Alternatively, the DBA can use only physical backups to obtain copies of database objects. However, if only the physical approach is used, the DBA should allocate enough hardware to allow for a point-in-time recovery of the database objects that were dropped or whose data was changed accidentally. After the point-in-time recovery, the DBA will need to export the database objects that must be restored on the other system. So, at some point the DBA will use EXPORT and IMPORT anyway.

The amount of time a DBA spends correcting user errors should be carefully considered. What sort of commitment does the DBA want to make for this potentially time-consuming area of work? Although it adds value to an organization to have the DBA perform this type of work, the better approach is to give users and developers minimal access to production, with progressively more access to test or development environments. In addition, the DBA may want to consider a strategy for users to develop their own database objects in a privately owned user schema, to allow for users to experiment with table creation and development in a noncritical environment.


Exercises

  1. What is user error? How can the DBA resolve user errors?
  2. How can the DBA prevent user errors in production environments?

Media Failure

Media failure means disk failure: the loss of information on a disk that has been corrupted at a hardware level, erased by mistake, or has suffered some other form of irreparable damage. The backup and recovery strategy of an organization is designed to act as an insurance policy against problems that render the database unusable. This is generally the most time-consuming problem a DBA will face in this arena. A database recovery that follows media failure is tied directly to the DBA's ability to handle database recovery both before and after the disk failure.

The success of a DBA in this area starts far in advance of a problem with losing a disk on the machine hosting the Oracle database. Good database recovery starts with sound backup strategy. The DBA must evaluate the needs of the organization and obtain buy-in from users, developers, and managers alike. Second, the DBA must execute the backup strategy like clockwork. The best backup strategy in the world means nothing if backups are not run, just as the best homeowner’s insurance policy in the world means nothing if the home is robbed before you buy the policy.

When failure occurs, however, the clutch skills of the DBA are tested. As mentioned before, the success of a database recovery depends on both the complete recovery of data to the point of failure (or before the failure, depending on the needs of the recovery) and completion of the recovery in a timely manner. The time spent in recovery mode depends on several critical factors. The list following this paragraph will identify those factors:

Accessibility of backups: The ability of a DBA to obtain data from the backup has a direct impact on the speed with which he or she can accomplish the recovery. Accessibility means both the physical availability of backups (onsite or offsite) and the accessibility of backup data on the offline storage media used to archive the data. The backup data is only available to the DBA at the speed of its storage media.
Frequency of backups: As mentioned in the last chapter, if backups are taken twice a month and the DBA must rely on the application of scores of archived redo logs to provide the data needed, then the recovery will take much longer than if the DBA backed up the database daily.
Type of failure: The amount of time the DBA will spend in recovery depends on the type of failure that has occurred. For example, if the database lost one disk drive that contained just a few read-only tablespaces, the DBA will spend far less time handling database recovery than if the DBA lost a disk containing several datafiles holding highly volatile data that was backed up infrequently.
Type of backups: The amount of time an organization spends in recovery also depends on the type of backup strategy used in the organization. If, for example, the business made an unwise choice to go with infrequent logical backups on a highly volatile database, then the entire organization may find itself keying in data manually as part of the recovery process.


Exercises

  1. What is media failure? How is media failure managed in Oracle?
  2. Name and describe the factors that determine the amount of effort required to handle recovery in the event of media failure.

Logical Backups

In this section, you will cover the following topics related to logical backups:

EXPORT modes
Backup strategy and EXPORT
Read-consistency and database export
Using the EXPORT direct path
EXPORT, IMPORT, and archived redo logs
Using the right character set

This section covers three different topic areas. The first is the use of the EXPORT utility, including many of its features and options. Following that treatment, IMPORT, the complementary utility, will be covered. After the highlights of features and usage for each tool, the discussion turns to a number of advanced topics for EXPORT and IMPORT, including some of their limitations, such as the inability to use archived redo logs in conjunction with a logical backup and recovery strategy, along with other specialized areas of knowledge.


Using EXPORT

The EXPORT tool offers many options and features for backing up a database. EXPORT accepts many parameters that tell the utility what data to export and in what fashion. The end result of an EXPORT run is a file that contains data according to the specifications given at the time the export is executed. The file produced by EXPORT is in binary format, usable only by IMPORT, a complementary tool designed to bring the database objects and/or table data back into a database from the export file. The exported data is commonly referred to as a "dump," although it does not bear much resemblance to the core dump many developers may already have exposure to. EXPORT is run from the operating system prompt. The parameters to be used can be identified in two ways. The DBA can specify them at the command prompt as command-line options:

exp userid=ORADBA/dbapass full=y file='/oracle/export/dump'

Alternatively, the DBA can run EXPORT with a list of parameter options specified in a parameter file, which is named in the PARFILE parameter and conventionally given a .par extension. There is also a graphical user interface available for EXPORT. Many UNIX users of the Oracle database may opt to stick with the command-line interface. However, the segment of Oracle databases running on Windows NT continues to climb; as such, it is expected that DBAs will become more familiar with Oracle through a graphical front end as time goes on. The following code block demonstrates the use of EXPORT in conjunction with a parameter file:

exp parfile=EXPORT_120198.par
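The parameter file itself is an ordinary text file with one parameter per line. The sketch below builds a hypothetical user-mode parameter file; the username, password, schema name, and paths are placeholders, not values tied to any particular database:

```shell
# Build a hypothetical parameter file for a user-mode export.
# The username, password, owner, and paths are placeholders.
cat > EXPORT_120198.par <<'EOF'
userid=ORADBA/dbapass
owner=HRAPL
file=/oracle/export/hrapl.dmp
grants=Y
indexes=Y
rows=Y
log=/oracle/export/hrapl.log
EOF

# The export would then be run as:
#   exp parfile=EXPORT_120198.par

# Each line is a name=value pair; count them as a sanity check
grep -c '=' EXPORT_120198.par
```

Keeping the options in a file like this makes repeated exports reproducible and avoids exposing the password in shell history as a command-line argument would.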

TIP: To use EXPORT and IMPORT, the catexp.sql utility script stored in the Oracle software home directory under rdbms/admin should be executed. This script creates several views required for the execution of EXPORT and IMPORT, as well as the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles required to run these utilities.

Within the parameter file, there may be several different parameters listed, along with specified values. The parameters given either at the command line or in the parameter file determine several things about the database export. For example, the objects that will be exported in this execution of EXPORT, the name and location of the exported data file, and the username and password under which EXPORT will execute are all options that can be defined by command-line parameters. A sample listing of the parameters EXPORT can handle is displayed in the following table:

USERID (name/pass) The user and password under which EXPORT will execute.
BUFFER (number) Defines a buffer size EXPORT will use to fetch rows for the export file.
FILE (filename) Identifies the name and location of the export file produced.
GRANTS (Y/N) Indicates whether table grants should be included in the export.
INDEXES (Y/N) Indicates whether table indexes should be included in the export.
ROWS (Y/N) Indicates whether table rows should be included in the export.
CONSTRAINTS (Y/N) Indicates whether table constraints should be included in the export.
COMPRESS (Y/N) Indicates whether EXPORT will place all rows of the table into one initial extent in the export file.
FULL (Y/N) Indicates whether EXPORT should export the entire database.
OWNER (name) Indicates a list of users whose database objects should be exported (user mode).
TABLES (list) Indicates a list of tables that should be exported (table mode).
RECORDLENGTH (number) Lists the size of each data record in the export file. If the DBA wants to import the exported file onto a database in another operating system, this value must be modified to the proper value for that operating system.
INCTYPE (keyword) Accepts the keywords complete, cumulative, or incremental to indicate the type of EXPORT executed.
HELP (Y/N) Displays a help message with all features of EXPORT described.
RECORD (Y/N) Records information about an incremental export in the SYS tables used to track export information: INCVID, INCFIL, INCEXP.
LOG (filename) Specifies the name of a file containing runtime details and error messages for the given export.
CONSISTENT (Y/N) Allows the cumulative or full export to obtain a read-consistent view of all data exported. Requires large rollback segment. Effect can be duplicated by the DBA enabling restricted session mode before beginning export.
FEEDBACK (number) When set, displays a dot to indicate progress on rows exported per table.

STATISTICS (keyword) Accepts the estimate, compute, or none keywords. Used to generate statistics for cost-based optimization on the database.
MLS (Y/N) Used for Trusted Oracle. Stores the multilayer security label for tables and rows in the export.
MLS_LABEL_FORMAT Used for Trusted Oracle. Redefines the default format for multilayer security labels.
DIRECT (Y/N) Allows the DBA to run faster exports using the direct path. Similar in function to direct path in SQL*Loader.


Exercises

  1. What are the different methods for running EXPORT?
  2. What are the different ways to specify parameters for running EXPORT?


EXPORT Modes

There are three modes for using EXPORT: user mode, table mode, and full mode. The emphasis for using each is different, and the discussion that follows will present the similarities and differences. In addition to the modes allowed with the EXPORT tool, there are several types of backups that can be used for the database to give a full range of options for backup and recovery.


USER Mode

EXPORT allows the DBA to move database objects out of a database according to categories. In order to set up EXPORT to export the objects that are owned by a user, the DBA should provide a list of one or more users whose objects will be taken in the export. This option is useful in database situations where the DBA has configured several applications within the same database that use different usernames as schema owners for their database objects.

exp userid=DBA/password owner=HRAPL file='/oracle/export/hrapl10298.dmp'

The export of database objects in user mode takes a backup of all objects specified by the other command-line parameters given. In the prior example, no values were specified for any of the parameters that determine which database objects are exported. Some of these parameters include GRANTS, INDEXES, ROWS, and CONSTRAINTS. Since the default for these parameters is Y, however, there is no need to specify the parameters unless the DBA specifically doesn’t want the objects included in the export. This point illustrates a fact about EXPORT—each parameter has a default value, and the default value is the one used for the export if the DBA doesn’t specify a value for it at either the command line or in the parameter file. Figure 12-1 illustrates EXPORT tool usage in user mode.

Figure 12-1: Exporting in user mode

Consider the impact of exporting several database objects associated with a particular owner. If, for example, the DBA knows that a particular application in the database is undergoing schema changes, the DBA may want to take daily backups of that schema to ensure that the database objects are recoverable in the event they are accidentally dropped. Figure 12-1 illustrates the principle of exporting in user mode. There are database objects owned by several users in the database, but since the OWNER parameter is used, the user specified for that parameter is the only one whose objects will be exported. This operation typically is not considered an effective overall backup strategy, but it does allow the DBA to take supplemental backups if the need arises to offer value-added service (mainly to developers who are modifying their database schemas). Alternatively, this service could be used to back up certain tables in which users make changes to the data in error.

If the DBA wants to export database objects for more than one user, this is possible as well: the schema owners whose objects the DBA wants to export can simply be listed in the OWNER parameter, separated by commas.

exp userid=DBA/dbapass owner=HRAPL,FINAPL file='/oracle/export/hrapl10298.dmp'


TABLE Mode

An alternative to exporting database objects in user mode is to export those objects on a per-table basis. This option is known as table mode. With table mode, the DBA can specify very selective exports that only draw data from a few different tables. This option is not appropriate as a full-service option for database backup and recovery. However, the DBA can use table mode EXPORT for highly supplemental, highly selective exports designed for the purpose of restoring a specific table or other object to the database.

exp userid=DBA/dbapass tables=HRAPP.EMPLOYEE indexes=N file='hrapl10298.dmp'
exp userid=DBA/dbapass tables=EMPLOYEE indexes=N file='hrapl10298.dmp'

Consider the situation presented in the beginning of the chapter, in the discussion of user errors. In those situations, the data, or the table itself, may have been changed or removed in error. In this type of situation, the DBA will find the table mode offered by EXPORT to be useful. In the code example provided above, there are two situations illustrated by the specification in the TABLES parameter. In the first example, the EMPLOYEE table is prefaced by the name of the schema that owns the table. If no schema is mentioned, EXPORT will look for the specified table in the schema of the user under which the export is running.

If, for example, there are two EMPLOYEE tables in the database, one owned by HRAPP and one owned by DBA, then the first example in the code block preceding this paragraph will export the EMPLOYEE table owned by HRAPP, while the second will export the table owned by DBA. In general, when using EXPORT in table mode, the DBA should bear in mind the schema owners of each table in order to avoid exporting unwanted information.

For these examples, also notice that the DBA has chosen not to export the indexes. Since read consistency cannot always be guaranteed on an export, the DBA may choose not to export the indexes for a table, instead opting to re-create the indexes on the database after the table and/or its data have been imported. Remember that the default value for all database object parameters is Y for every run of EXPORT. That is to say, if EXPORT runs with GRANTS=N and is then run again without GRANTS specified, the default value of Y is used for the GRANTS parameter.


FULL Mode

The final option for running EXPORT is to do so in full mode. This mode will export all tables from all user schemas. In order to use full mode, the FULL parameter must be set to Y either at the command line or in the parameter file. Unlike table or user mode, full mode for database export is used in order to provide a full database backup and recovery option with logical methods.

exp userid=DBA/password full=Y file='hrapl10298.dmp'

In order to provide a full solution to the need for backup and recovery using EXPORT, the DBA should operate the utility in full mode with the FULL=Y parameter specified. The default value for this parameter is N, however, so it is important that the DBA ensure FULL=Y is explicitly stated each time EXPORT is run, either on the command line or, preferably, in a parameter file. Neglecting to do so will simply run the export in either table or user mode.

In order to use EXPORT in full mode to provide a complete backup and recovery option using logical means, the DBA must first understand an important concept: there are several types of exports that can be executed when EXPORT is running in full mode. The types of exports that can be used are complete, cumulative, and incremental. The type of full export taken depends on setting the INCTYPE parameter to either complete, cumulative, or incremental. Together, these exports can operate in a plan to provide the complete backup solution required for databases.

There are a few other restrictions to understand before the three types of full exports can be explained. The first restriction involves the situation of parameter conflict. There are three export modes available in EXPORT, which are specified using the OWNER, TABLES, and FULL parameters, respectively. Consider the example that follows this paragraph.

exp userid=DBA/password full=Y tables=EMPLOYEE file='hrapl10298.dmp'

In this example of running EXPORT, there is a conflict between two of the parameters. The FULL=Y parameter indicates that the user wants to perform a full export of the database; however, the TABLES parameter indicates that the EMPLOYEE table is the only table the DBA wants to export. The conflict arises because EXPORT cannot simultaneously back up the full database and only the EMPLOYEE table. In general, the DBA must be careful to avoid conflicts between the parameters used on database exports. Also, no one can execute a full export without being granted the EXP_FULL_DATABASE role.
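A DBA can screen a parameter file for this kind of mode conflict before launching the utility. The following sketch is a hypothetical pre-flight check (the file name and its contents are placeholders); it flags any file that sets more than one of the FULL, TABLES, or OWNER parameters:

```shell
# Hypothetical pre-flight check for an EXPORT parameter file.
# File name and contents are placeholders for illustration.
cat > bad.par <<'EOF'
userid=DBA/password
full=Y
tables=EMPLOYEE
file=hrapl10298.dmp
EOF

# Count how many mode-selecting parameters appear (FULL, TABLES, OWNER)
modes=$(grep -icE '^(full=y|tables=|owner=)' bad.par)
if [ "$modes" -gt 1 ]; then
  echo "conflicting export modes"
fi
```

Here the check reports a conflict, since the file sets both FULL=Y and TABLES, just as in the command-line example above.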

Complete Exports

A complete export is one that produces an export file containing all database objects in the database, which allows the DBA to make a complete recovery of the database as well. Complete exports are a required component of the full backup and recovery strategy that will be discussed later in this section. In order to take a complete export while EXPORT is in full mode, the DBA should set the INCTYPE parameter, either on the EXPORT command line or in a parameter file. The following code block demonstrates usage of EXPORT to produce a complete export of data from the database:

exp userid=DBA/password full=Y inctype=complete file='010199cmpl.dmp'

The INCTYPE parameter can only be used in conjunction with full mode. The three permitted values for INCTYPE are complete, cumulative, and incremental. At the end of this section, there will be a discussion defining the use of these three types of full exports for the purpose of database backup and recovery.

Cumulative Exports

Unlike the complete export, which creates a backup of every logical database object in the database, the cumulative export creates a copy of only the database objects that have been changed in the database since the last time a cumulative or complete database export was taken.

exp userid=DBA/password full=Y inctype=cumulative file='010399cml.dmp'

Cumulative exports do not take complete exports of all objects in the database. Instead, the cumulative export will copy only the database objects that have changed since the last complete or cumulative export. Consider the following list to understand the implications of a cumulative export:

There are five tables in a database.
The database was backed up using a complete export three days ago.
Only two of the tables have had rows added or changed since then.
The cumulative export run will only contain the two tables that have changed.

Cumulative exports generally take less time to run than complete exports, because they export less data than the complete export. The two types of export are designed to work together to produce an overall recovery option. It is important to remember that if the data in a table has changed since the last complete export, then the entire table will be included in the cumulative export. EXPORT is not designed to back up individual rows for a table. The ROWS parameter simply instructs EXPORT to back up all rows for the table or none of the rows at all. Cumulative exports can be thought of as exports that bundle together the data changes saved in all incremental backups that take place between cumulative or complete exports.
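The five-table scenario above can be modeled in a small sketch. This is an illustrative Python function (the function and the table names are invented for the example), showing that a cumulative export picks up only the whole tables changed since the last complete or cumulative export:

```python
# Illustrative sketch only: a cumulative export contains every table
# changed since the most recent complete or cumulative export, and it
# always copies whole tables, never individual rows.

def cumulative_contents(table_mod_days, last_complete_or_cumulative_day):
    """table_mod_days maps table name -> day the table last changed;
    returns the tables the cumulative export would include."""
    return sorted(table for table, changed in table_mod_days.items()
                  if changed > last_complete_or_cumulative_day)

# Five tables; the complete export ran on day 0, and only two tables
# have had rows added or changed since then.
tables = {"EMP": 2, "DEPT": 1, "BONUS": 0, "SALGRADE": 0, "PROJ": 0}
```

With this data, `cumulative_contents(tables, 0)` yields only `DEPT` and `EMP`, matching the list above: the three unchanged tables are skipped entirely.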

Incremental Exports

An incremental export consists of all database objects that have changed since the last complete, cumulative, or incremental export run on the database. In other words, an incremental export saves all database objects that changed since the last backup of any type. A complete presentation of all three export types in action as part of a full-service backup and recovery strategy using logical means appears at the end of this section. For now, the DBA should fully understand the implications of incremental export. An example of using EXPORT to specify an incremental export appears in the following code block:

exp userid=DBA/password full=Y inctype=incremental file='010299inc.dmp'


  1. What objects are exported when EXPORT is run in user mode? What objects are exported when EXPORT is run in table mode?
  2. What parameters are set for each of the three modes of EXPORT?
  3. What are the three types of exports made in full mode?

Using EXPORT for Backups

With all export options presented, the DBA should now consider how the full export options work together to provide a full-service backup and recovery solution using logical means. First, the DBA must create a benchmark backup that can initiate a complete recovery, if need be. However, since complete exports can take a long time to execute and a lot of space to store, the incremental export is a viable option for conducting backups in between. A sample backup schedule using all the options presented in this discussion appears in Figure 12-2. First of all, since complete exports take a long time, an organization using EXPORT to handle its backups may prefer to run complete exports only on the weekends, when the database experiences low or nonexistent activity from end users. During the week, however, users are on the database frequently. As such, the backup strategy includes a daily incremental export to save the objects that experienced changes that day. Finally, the use for cumulative exports is illustrated by the midweek backup, which is designed to reduce the number of exports that must be applied in the event that a recovery is necessary to restore damaged files. The cumulative export collects all the data that was saved in the incremental exports taken Sunday, Monday, and Tuesday and bundles that information into one file. Essentially, a cumulative export can be considered redundant effort; however, it reduces the number of backups required for recovery.

Fig12-02.jpg (35365 bytes)

Figure 12-2: Sample backup schedule using EXPORT

The EXPORT tool is useful for devising a backup strategy based on the logical view of the database. The three types of full exports are most effective for an overall backup strategy, while the user and table modes allow for highly specialized data storage for recovery purposes. It is important to note, however, that it is only possible to recover the database to the point in time of the most recent backup. If a situation arises that requires the recovery of data entered after the most recent backup was taken, then the users must reenter that data manually.
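The weekly rotation described in Figure 12-2 can be summarized in a tiny sketch. The exact days here are an assumption drawn from the discussion (complete export Saturday, cumulative export Wednesday, incremental exports on the remaining days); the figure dictates the actual schedule:

```python
# Illustrative sketch only: map each day of the week to the type of
# export the Figure 12-2 strategy would run that evening. Days not
# listed explicitly get a daily incremental export.

SCHEDULE = {
    "Sat": "complete",     # weekend: low user activity, full copy
    "Wed": "cumulative",   # midweek: bundles Sun-Tue incrementals
}

def export_type(day):
    return SCHEDULE.get(day, "incremental")
```

So `export_type("Mon")` returns `"incremental"`, while Saturday and Wednesday return the complete and cumulative types, respectively.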


  1. Identify a backup strategy that uses EXPORT.
  2. What factors determine the type of backup that may be taken on any given day? What are some of the overall limitations of a backup strategy that uses EXPORT?


Using IMPORT

The IMPORT tool is designed to complement the functionality of EXPORT by allowing the DBA to take data stored in an export file and draw it back into the database. The only program that can read exported data is the IMPORT tool. In general, IMPORT allows the DBA to import data from an export file either into the same database or into a different one, depending on the needs of the DBA.

imp userid=DBA/password full=y file='/oracle/export/010199exp.dmp'

IMPORT works in a manner similar to EXPORT. The DBA issues the command to run IMPORT either from the command line, interactively, or with the use of a graphical user interface. IMPORT supports many of the same parameters that EXPORT does, with a few differences. A full list of the parameters supported by IMPORT appears below:

USERID (user/pass) The username and password used to run the IMPORT.
BUFFER (number) Parameter that defines the number of rows inserted into a database at one time.
FILE (filename) Determines the name of the export file to use for the input.
SHOW (Y/N) Displays the contents of the export file but doesn’t actually cause IMPORT to import anything.
IGNORE (Y/N) Specifies whether to ignore errors that occur during import.
GRANTS (Y/N) Specifies whether grants in the export file should be imported.
INDEXES (Y/N) Specifies whether indexes in the export file should be imported.
ROWS (Y/N) Specifies whether rows in the export file should be imported.
FULL (Y/N) Determines whether the import will be in full mode.
FROMUSER (name) The schema user(s) owning the objects in the export file that should be imported.
TOUSER (name) Identifies the user schema into which database objects should be placed if the IMPORT is running in user mode.
TABLES (list) Identifies the tables in the export file that should be imported.
RECORDLENGTH (number) Identifies the length in bytes of each record in the export dump. Must be specified properly.
INCTYPE (keyword) Defines the type of import that will occur. Valid values are system and restore.
COMMIT (Y/N) Specifies whether IMPORT should commit each time a buffer's worth of data is written to the database.
HELP (Y/N) Indicates whether IMPORT should display help information about the parameters and their meanings.
LOG (filename) Indicates the name of a file into which all IMPORT runtime information and errors will be stored.
DESTROY (Y/N) Indicates whether IMPORT should reuse the datafiles that exist in the database for storage of imported objects.
INDEXFILE (filename) Names a file into which IMPORT writes a script that creates the indexes for the imported tables, rather than creating the indexes itself.
FEEDBACK (number) Causes IMPORT to give the same dot notation as EXPORT to indicate progress in the importation of data.
MLS Used in conjunction with importing data into Trusted Oracle.
MLS_LISTLABELS Used in conjunction with importing data into Trusted Oracle.
MLS_MAPFILE Used in conjunction with importing data into Trusted Oracle.

Data is imported from the export dump file in the following way. First, the table definitions are brought into the database from the export file in order to create the table objects. Next, the actual row data for each table is brought into the database, and then the indexes for the table are created and populated. The last items to be brought into the database from the export file are the integrity constraints and the triggers. There are some implications for the database objects that arise from this creation order. First, the data in an index must correspond to the data in its table, or the index data could be corrupted. Second, if an integrity constraint should fail when imported, then there will be no data inserted into the table. Although the chances of this happening can be minimized by placing the database into restricted session mode, there is still the chance of a problem. Disabling integrity constraints manually during export will compound the problem.
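The fixed creation order just described can be captured in a small sketch (an illustrative Python list, not an Oracle structure), which makes it easy to see why triggers never fire for imported rows:

```python
# Illustrative sketch only: the order in which IMPORT creates objects
# from an export dump file. Triggers come last, after all row data,
# which is why they do not fire for the rows being imported.

IMPORT_ORDER = [
    "table definitions",
    "table row data",
    "indexes",
    "integrity constraints",
    "triggers",
]

def created_before(a, b):
    """True if object type a is created before object type b."""
    return IMPORT_ORDER.index(a) < IMPORT_ORDER.index(b)
```

Since `created_before("table row data", "triggers")` holds, every imported row is already in place by the time its table's triggers exist.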

However, the biggest problem the DBA may encounter with respect to data inconsistency relates to triggers. If, for example, there is a trigger on a table that populates data or applies some business rule when the data in that table changes, then that activity will not occur when the object is imported. This is because the trigger is imported last, and as such, it will not fire for each row as the row data is imported. There are ways to circumvent the problem; however, the DBA must be aware of the object import order in order to understand that there could be problems related to the consistency of data imported from an export file.

However, even though the order of object importing is fixed, the DBA can eliminate certain steps by setting certain parameters. The parameters include TABLES, which indicates a listing of tables that are to be imported. Other parameters are assigned flag values Y or N, depending on whether the DBA does (Y) want to import that type of object, or not (N). The first parameter of this type is INDEXES, which indicates whether or not the indexes that correspond to tables should be imported. Another example is the GRANTS parameter, which determines if user object privileges should be imported from the export file. The final two are CONSTRAINTS and TRIGGERS, which determine whether or not constraints and/or triggers are imported, respectively. Another condition for the import of these objects is the fact that they are present in the export file. In other words, the DBA can specify Y or N for any of these parameters, but if the object is not in the export file, the object will not be imported.

Like EXPORT, IMPORT has the ability to run in a few different modes, such as user and table mode, determined by the parameters used. The parameter for table mode has already been introduced: TABLES. The DBA can provide a list of tables that IMPORT should draw from the export dump file into the database. However, the tables listed in the TABLES parameter must be part of the export file, or else they will not be imported.

imp userid=DBA/password file='010199exp.dmp' tables=EMPLOYEE

There are some slight differences in the way IMPORT handles user mode. First of all, the parameters for IMPORT's user mode are slightly different from those for EXPORT. The parameters used in IMPORT user mode are called FROMUSER and TOUSER. In EXPORT, the user listed in the OWNER parameter determines which user's objects are exported. With IMPORT, however, two things must be determined. First, the DBA must specify the user schema whose objects will be drawn in from the export file; this is done by listing the user schema in the FROMUSER parameter. The second is to determine who will own the database objects once they are imported. This step is particularly important when the user schema that owned the database object in the database from which the export was produced doesn't exist in the database to which the export is applied. For this step, the TOUSER parameter is used.

imp userid=DBA/password file='010199exp.dmp' fromuser='MILON' touser='SHUG'

Tip: IMPORT can run in table or user mode to import database objects from dump files made by EXPORT running in full mode. IMPORT cannot run in full mode, however, when the dump file was made with EXPORT run in table or user mode.

IMPORT in full mode is also an option; however, the values available for the INCTYPE parameter are slightly different. The first is system. When INCTYPE is set to system, the import draws only the system objects from the specified export file, except for those objects owned by SYS. When INCTYPE is set to restore, all database objects in the export file are imported by the IMPORT execution.

When data is imported, the order in which the export files are applied is critical to the success of the IMPORT run. First of all, in order to recover a database using export files, the DBA must first apply the most recent complete export. If the backup schedule that was illustrated in Figure 12-2 is in place for the organization, then the last complete export file produced would generally be from the Saturday export. After the complete export is applied, the most recent cumulative export can be applied. After that cumulative export has been applied, the DBA can apply all incremental export files produced up to the time to which the data should be restored. If additional data changes were made after the last incremental export, then those changes must be entered manually.

The order in which database export files are applied is relaxed somewhat in the situation where the DBA wants only to import objects in table or user mode. In this case, only one export file is needed, the one containing the data current to the time that the DBA wants to import. Use of IMPORT is similar to use of EXPORT in this situation, with the DBA using the TABLES or FROMUSER and TOUSER parameters appropriately to identify the tables or owners in the export file that will be extracted and the user schema to which they will belong once populated into the database.

In order to find out what export file to use to draw certain data into the database, there are a couple of options. The first option is the SHOW parameter. This special parameter can be used in conjunction with the FILE parameter to find out the names of the database objects that are contained in the EXPORT dump file.

imp userid=DBA/password show=Y file='010199exp.dmp'

Another option for determining the contents of an export file, or the location of a database object in a particular export file, is as follows. There are some special dictionary views that the DBA can use to determine the location of database objects in export files, the unique tracking ID for a particular export, and the identification for the last export from the database. These three views are called INCEXP, INCFIL, and INCVID. The views are owned by the user SYS.


Using INCEXP

The INCEXP view contains a listing of all database objects that have been exported and the export files that contain them. The columns in this view include the unique ID for the owner's username, the name of the database object, and the ID of the export that contains the object.


Using INCFIL

The INCFIL view contains a listing of all database exports and their corresponding export IDs for tracking purposes. The columns in this view include the unique ID for the export, the name of the user that created it, the date and time it was created, and the type of export it is.


Using INCVID

The INCVID view is very small: one column, in fact. Its contents are the export ID number for the last export that ran, and it is used to generate export IDs.

There are some constraints on using IMPORT. First, the user must have the IMP_FULL_DATABASE role granted to her. This role is created at the time the database is created, with the execution of the catexp.sql script. Also, the user that runs IMPORT must have the appropriate privileges to create the database objects that will be brought into the database as a result of the import. Finally, the version of IMPORT must match the version of EXPORT that created the export file.

Database Recovery with IMPORT

Database recovery with IMPORT occurs in the following way. First, the DBA must take the most recent database export and re-create the data dictionary and other database internals by running IMPORT with the FULL=Y and INCTYPE=system parameters set. Following this critical step, the DBA must run IMPORT again with the FULL=Y and INCTYPE=restore parameters set, using the most recent complete database export. After that, the DBA imports all cumulative exports, in order, starting with the first cumulative export after the most recent complete export. After all cumulative exports are applied to the database, the DBA then applies all incremental exports, in order, starting with the incremental export that followed the most recent cumulative export. With the backup strategy indicated in Figure 12-2 in mind, the following steps (with examples) are presented as a final clarification of the order required for media recovery using the IMPORT option.

  1. On Friday morning, a database failure occurs that requires media recovery. The DBA ran the most recent export Thursday night, an incremental. This export is the first that should be applied.
  2. imp file='thursdayinc.dmp' userid=DBA/password full=y inctype=system

  3. The next step the DBA must accomplish is to apply the most recent complete export. In this backup strategy, complete backups are taken Saturday nights.
  4. imp file='saturdaycmpl.dmp' userid=DBA/password full=y inctype=restore

  5. Only one cumulative export is taken in this backup strategy, on Wednesday evenings. This is the next backup to be applied.
  6. imp file='wednesdaycmlt.dmp' userid=DBA/password full=y inctype=restore

  7. Since the problem occurred Friday morning, only one incremental export has taken place. Therefore, only one incremental must be applied. The following code block illustrates:
  8. imp file='thursdayinc.dmp' userid=DBA/password full=y inctype=restore

Notice that, when using IMPORT, the most recent export created in the backup strategy is applied twice. The main difference between the two executions is the value set for INCTYPE. In the first, the system import is run to re-create the database objects that are necessary for the database to run, which are stored in the SYSTEM tablespace. This includes data dictionary information that is vital to the proper execution of the Oracle database after the recovery is complete. If this step is omitted, the recovered database will be unusable, and it will need to be recovered again before usage is possible. Get it right the first time by remembering to set INCTYPE=system.
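The four recovery steps above can be sketched as a small Python function (illustrative only; the function name and file names are invented for the example) that, given the chronological list of exports, produces the sequence of IMPORT runs and their INCTYPE values:

```python
# Illustrative sketch only: derive the order of IMPORT runs for a full
# recovery. The most recent export is applied first with INCTYPE=system
# to rebuild the data dictionary; then the latest complete export, each
# later cumulative export, and each incremental export after the last
# cumulative are applied with INCTYPE=restore.

def recovery_sequence(exports):
    """exports: list of (day, kind, filename) in chronological order,
    where kind is 'complete', 'cumulative', or 'incremental'."""
    # Step 1: most recent export rebuilds the dictionary (INCTYPE=system).
    steps = [(exports[-1][2], "system")]
    # Step 2: most recent complete export (INCTYPE=restore).
    last_complete = max(i for i, (_, kind, _f) in enumerate(exports)
                        if kind == "complete")
    steps.append((exports[last_complete][2], "restore"))
    # Step 3: every cumulative export after the complete one, in order.
    pos = last_complete
    for i in range(last_complete + 1, len(exports)):
        if exports[i][1] == "cumulative":
            steps.append((exports[i][2], "restore"))
            pos = i
    # Step 4: every incremental export after the last cumulative, in order.
    for i in range(pos + 1, len(exports)):
        if exports[i][1] == "incremental":
            steps.append((exports[i][2], "restore"))
    return steps

# The Figure 12-2 week, failing Friday morning:
week = [
    ("Sat", "complete", "saturdaycmpl.dmp"),
    ("Sun", "incremental", "sundayinc.dmp"),
    ("Mon", "incremental", "mondayinc.dmp"),
    ("Tue", "incremental", "tuesdayinc.dmp"),
    ("Wed", "cumulative", "wednesdaycmlt.dmp"),
    ("Thu", "incremental", "thursdayinc.dmp"),
]
```

For this week, the sketch reproduces the four numbered steps: Thursday's incremental with INCTYPE=system, then Saturday's complete, Wednesday's cumulative, and Thursday's incremental again, all with INCTYPE=restore. The Sunday through Tuesday incrementals are skipped because the cumulative export already bundles their changes.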


  1. What are the three modes IMPORT runs in? How is the mode of an IMPORT run determined? What two options are available for parameter passing for IMPORT?
  2. What are the two types of full import? What database information does each one import? What parameter determines the type of import taken?
  3. What parameters determine which types of database objects are imported?
  4. Identify a recovery strategy that uses IMPORT. How is the overall recovery performed? What are some of the overall limitations of a recovery strategy that uses IMPORT?

Read Consistency and Database Export

Read consistency is the idea that a statement or transaction has a consistent view of data during the period of time the statement or transaction executes. Oracle provides read consistency at the statement level automatically. In addition, with the use of database locks, the user can establish transaction-level read consistency on the database as well. The use of rollback segments is crucial for the successful application of transaction-level read consistency.

Read consistency is useful at the export level as well. It can be approximated with the CONSISTENT parameter, which takes a Y or N flag value set by the DBA at run time, either on the command line or in the parameter file. In order to give an EXPORT run a read-consistent view of the database, several things must happen. First, the DBA must ensure that there is adequate rollback segment space allotted to the export. In essence, the export can be thought of as one giant "transaction" in this way. As such, it is necessary to ensure that a large rollback segment is available to the export so that its read-consistent view can be maintained while other transactions make changes to the database.

First, only the full export and the cumulative export can use the CONSISTENT parameter. This option is simply not available for incremental exports. Read consistency, or a lack of it, can be a problem for the DBA using the logical methods offered by IMPORT and EXPORT for the management of backup and recovery in the Oracle database. In some cases, data changes in the database that happen while the export is taking place may cause problems for IMPORT. This situation is a particular problem with referential integrity that is not enforced by the database constraints, but it can be a general problem related to the database as well. The best method for reducing the problem posed by the inability of EXPORT to guarantee the read consistency of the export is to put the database into restricted session mode before running the export. This step is accomplished with the alter system enable restricted session statement.


Without the critical step of locking users out of the database before running the export, there can be some serious implications. First of all, the data imported later may contain inconsistencies, such as entries in an index that do not exist in its table. Because the indexes of a database are exported after their associated tables, this may lead to queries finding no data in a table even though the corresponding values appear in the index. Another common problem with exports performed without read consistency is that data changes happening while the export runs are not saved in the export. Bear in mind that once an export has saved the data and the definition of a database object such as an index, there is no going back to it later in the run. If read consistency is not in place, there will be data inaccuracy in the export dump file.


  1. Explain the usage of the CONSISTENT parameter for database exports.
  2. What are the requirements of the CONSISTENT parameter? What are its limitations?

Using the EXPORT Direct Path

In many cases, high-performance export of database objects is a crucial factor in the overall strategy of database recovery. In order to improve the performance of a database export, the DBA can opt to use the direct path for the export. As with the direct path for SQL*Loader covered in Chapter 10, the direct path for EXPORT speeds the creation of the export file considerably by skipping many of the steps Oracle performs when SQL statements are handled conventionally, as is the case for user statements.

The DBA specifies the direct path for a run of EXPORT with the DIRECT parameter. This parameter can be set to Y or N, either on the command line of the EXPORT run or in a parameter file specified by the run. Because the default value for DIRECT is N, the direct path export will not happen automatically; it happens only when the DIRECT parameter specifies it.

The direct path of a database export works in the following way. In a conventional path export, Oracle obtains the data from the database using standard SQL statements, which run in the same way they would if a user had issued them directly from SQL*Plus. In contrast, the direct path export handles the unloading of data from the database in a more efficient way. EXPORT creates a select statement to obtain the data definition and data that is optimized to skip some of the steps involved in processing user SQL. The result of this optimization is a faster process that extracts data from the database into the export dump file.

The following factors are involved in the high performance of the direct path export. The BUFFER parameter is not used for direct path exports, because the buffer defines an area into which data is placed as part of the conventional path data fetch. There is also some information the DBA should understand about the size of the export file a direct path export produces. The size of a conventional path export file and a direct path export file will almost invariably differ, even when the objects exported are exactly the same. The differences are related to the way EXPORT pads the length of each data record in the export binary and to the way a column may be split in direct path exports. Although the differences shouldn't be of major consequence, the fact is worth bearing in mind.


  1. Explain the use of the DIRECT parameter for database exports. What value is added when this parameter is used?
  2. What parameters may not be appropriate for use in conjunction with DIRECT?

Using the Right Character Set

There are some other conditions related to national language support that the DBA should consider when using IMPORT and EXPORT. The character set of the data is saved in the export file, and the character set for the export file must be the same as that of the database. If the character set for the user's session is different from that used in the database, then the user must switch the session's character set to that of the database, or the export will fail.

With respect to importing the data later, the character set of the export file can be the same as or different from the character set of the destination database. In the latter situation, the import performs an automatic character conversion on the export file as the data is imported. This conversion degrades performance, so the DBA should expect imports to take longer when the database receiving the data has a different character set than the exported data does.


  1. Is it possible to import data from an export file if the character sets on the source database and destination database are different?
  2. What character set will be used in the export dump file, the source database’s character set or the character set of the session executing the export?

EXPORT, IMPORT, and Archived Redo Logs

In order to understand the data restoration abilities provided by the logical backup and recovery options of EXPORT and IMPORT, the DBA should recall the difference between physical and logical backups. Logical backups take the perspective of the logical database objects as they are stored within Oracle. These tools provide the DBA with the ability to store and re-create the objects in an Oracle database with accuracy up to the point in time that the export completed. From that point on, however, there is no way to recover data until the next logical backup is run.

Consider the alternative viewpoint: the space Oracle occupies to store data on disk drives. The information Oracle stores resides in files, and this file-and-directory viewpoint is the physical viewpoint of an Oracle database. The DBA can choose to back up the database from this perspective as well, using operating system methods or third-party products. As with logical backups, the physical backup allows the DBA to recover the database to the point in time the backup was taken.

However, there is another method for data backup available as well: the redo log. As the DBA knows, the redo log stores database change entries. As user and batch processes execute their database change statements, Oracle's LGWR process writes those changes to an online redo log. Oracle gives the DBA the ability to archive the redo logs produced. If a situation arises where the database fails, and the DBA has been archiving redo logs and running the right type of backup, the DBA can recover data to the last committed transaction that ran against the database before the media failure occurred. That "right type of backup," however, is the key to the whole equation.

Logical backups do not allow for recovery to the point of media failure. The fact is plain and simple. In order to provide recovery to the point of failure, the backup must work in conjunction with the archived redo logs. This is something that EXPORT and IMPORT simply cannot do.

When considering the backup strategy for a database, the DBA must be keenly aware of the need for that database to recover to the point in time of the failure. In some cases, that ability may not be deemed necessary by the organization running the database. Sometimes, the database may only be used by a few people, or may contain relatively static data. In this case, the DBA can usually guarantee that the database will not lose more than a day’s worth of changes when a disk crashes if the plan for backup and recovery presented in Figure 12-2 with IMPORT/EXPORT is used in the database. However, for those organizations that have highly volatile data and/or many users, the DBA should consider physical backup and recovery covered in Chapters 13, 14, and 15 in conjunction with archiving redo logs. Physical backups and archived redo logs are required to provide the recovery to point of failure.


  1. Should archiving redo logs be used in conjunction with the EXPORT and IMPORT backup strategy? Explain.

Chapter Summary

This chapter covers subject matter related to the function and purpose of logical backup and recovery, including identification of failure scenarios and coverage of the methods and practices used for logical backup and recovery. Topics discussed include the meaning of the various failure scenarios, such as user error, statement failure, process failure, instance failure, and media failure. For logical backup and recovery, several topic areas were presented. First came an in-depth presentation of EXPORT and IMPORT usage, along with a discussion of the impact of read consistency on the quality of exported data. After that, the usage of the direct path was presented. The incompatibility of archived redo logs with logical backup and recovery, and the considerations of the character sets used in the databases between which exported data is shared, are topics of discussion in this chapter as well. In total, this chapter comprises about 10 percent of the material tested in OCP Exam 3.

The first area of discussion involves descriptions of the various failure scenarios a DBA may encounter in the course of administering an Oracle database. The first failure scenario is that of statement failure. Statements fail for many reasons. For one thing, the user issuing the statement may have spelled a table name incorrectly, causing Oracle to return a "table does not exist" error. Another reason for statement failure is the inability of the user to see the database object he or she is trying to access. Often, the cause of this problem is that the user lacks the necessary object privilege, or that the object truly does not exist in the database.

Another type of failure scenario that the DBA may encounter is user process failure. In this situation, an entire process executing against the database fails. Oracle handles many aspects of recovery after process failure with the PMON (process monitor) background process, which monitors user processes and cleans up after those that fail. The cleanup PMON handles includes releasing any locks the failed process may have held, rolling back any transaction activity the process may have generated, and removing the process from the list of active processes maintained in the V$ dynamic performance views.

The next failure scenario considered in the chapter is user error. This type of failure is generally caused when someone inadvertently deletes or changes data in the database and commits the change. Alternately, the situation may arise when a user or developer truncates or drops a table in error. Data definition language statements such as truncations and table drops cannot be recovered with rollback segments in the way that update, insert, and delete statements can. The DBA may need to intervene in this situation by providing data or object recovery. Although a logical database backup is the ideal supplemental support for this failure scenario, it is also possible to make a point-in-time recovery using a physical backup, followed by an export of the database object that was changed or dropped.

Another scenario explored in the chapter is that of instance failure. This scenario occurs when there is some problem with the hardware of the host machine running Oracle that causes the database to shut down unexpectedly. Additionally, instance failure occurs when the shutdown abort command is used to shut down the database. Finally, instance failure can occur when a power failure on the host machine running Oracle causes the instance to terminate abnormally. The SMON (system monitor) background process handles instance recovery the next time the Oracle instance is started if the database shows signs of instance failure. The most that is expected of the DBA in this situation is to identify and resolve the cause of the instance failure; once the DBA handles the hardware situation that created the failure, Oracle handles the rest automatically when the instance is restarted.

The final situation that a DBA may encounter is media failure. Media failure, also known as disk failure, is the result of a problem with the disk drive that stores Oracle data. If there is a situation where the disk is irreparably damaged, or something happens that renders Oracle’s access to its physical database files impossible, then the DBA will have to obtain new hardware for the database to use to store information and also recover the files that were lost by using the backups the DBA keeps in the event of this type of an emergency. This situation is generally the most labor-intensive of all the failure scenarios for the DBA to handle. It is also the most destructive. In a situation of media failure, there is always the chance for permanent loss of data. The DBA must institute an effective strategy for database backup and recovery in order to guarantee that the users of the database do not experience loss of their data.

With possible failure scenarios established, the DBA should turn attention to the option of logical backup and recovery. The utility used for logical database backup is called EXPORT. This utility is included in the distribution of Oracle database software, and can be executed by the DBA or another user who has the EXP_FULL_DATABASE role granted to their username. This role is created by the catexp.sql script, which is run automatically at Oracle software installation time.

In order to manage the execution of EXPORT, the DBA passes a variety of parameters to the tool at the time the export is taken. These parameters control virtually every aspect of the export, from the name of the export dump file produced to the objects it will contain. Some of the parameter names that are crucial are FILE, USERID, OWNER, TABLES, and FULL. In order to execute, EXPORT must be able to log onto Oracle with a username and password.
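As a sketch, these logistical parameters can be collected into a parameter file and passed to the tool in one step (EXPORT supports a PARFILE parameter for this purpose). The account, file names, and buffer size below are placeholders, not values taken from this chapter:

```
# export.par -- illustrative parameter file; account and file
# names are placeholders.
# Invoked from the operating system as:  exp parfile=export.par
userid=scott/tiger
file=expdat.dmp
log=export.log
buffer=102400
```

Keeping the parameters in a file makes repeated exports consistent and keeps long parameter lists off the command line.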

The final three parameters mentioned above determine the mode that EXPORT will run in. The three modes available to EXPORT are user, table, and full. In user mode, EXPORT accepts the name of a user in the database who owns some database objects; the objects owned by that user schema are then the only objects exported. In table mode, EXPORT is passed a list of specific tables that it will store in the export file. In both of these cases, the DBA can further limit the database objects that are taken for backup with the use of parameters. These parameters specify whether or not the classes of objects they represent will be exported into the dump file. These object parameters include INDEXES, CONSTRAINTS, TRIGGERS, and ROWS.
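A hedged sketch of a table-mode export that limits the classes of objects exported follows; the table names, account, and file name are hypothetical:

```
# table_mode.par -- hypothetical table-mode export
# Invoked as:  exp parfile=table_mode.par
userid=scott/tiger
file=emp_dept.dmp
tables=(EMP,DEPT)
indexes=n
triggers=n
constraints=n
rows=y
```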

Full mode is specified by setting the FULL parameter to Y. In order to run EXPORT in full mode without conflict, it is important that the DBA not set the OWNER or TABLES parameters in conjunction with the FULL parameter. There are three types of full exports available to the DBA: complete, cumulative, and incremental. The type of EXPORT made in full mode depends on the setting of the INCTYPE parameter, which may be complete, cumulative, or incremental. Complete exports save the information one would expect them to save—the entire database, minus any database objects specifically omitted by the INDEXES, CONSTRAINTS, and other object parameters identified above. Incremental exports store only the database objects that have experienced changes since the last export of any type was taken. Note that an incremental export stores a copy of the entire object, not simply the data that was changed. In other words, if one column in one row of a million-row table was altered, the whole table is exported. Cumulative exports store all database objects that have changed since the last complete or cumulative export was taken. In a sense, cumulative exports are somewhat redundant, because they export the same data stored in the incremental exports that may have been taken after the last complete or cumulative export. However, cumulative exports provide the value of consolidating the data backed up in several incremental exports into one file.
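The three types of full export can be sketched as variations on one parameter file; only INCTYPE and the dump file name change between runs (the account and file names are illustrative):

```
# full_export.par -- illustrative full-mode export
userid=system/manager
file=full_complete.dmp
full=y
inctype=complete

# For the nightly incremental run, only these lines would change:
#   file=full_incr.dmp
#   inctype=incremental

# For the periodic cumulative run consolidating the incrementals:
#   file=full_cumulative.dmp
#   inctype=cumulative
```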

Once exported, the data in an export file can be imported to the same database or to another database. This "recovery" of backup data is handled with the IMPORT utility, which complements EXPORT. IMPORT accepts several different parameters that modify how the import will operate. Like EXPORT, IMPORT runs in three modes: user, table, and full. An important difference to note here is that IMPORT can run in user or table mode against an export dump file regardless of the mode EXPORT ran in to produce that dump. As long as the needed database objects are present in the export file, IMPORT can bring them in any way the DBA sees fit.

User mode for IMPORT is slightly different from the user mode for EXPORT. There are two parameters that manage the use of IMPORT in user mode: FROMUSER and TOUSER. The FROMUSER parameter corresponds to OWNER in EXPORT insofar as the database objects in the export dump have a schema owner. If the DBA wants to import the objects owned by a particular user stored in the export file, the DBA specifies that user schema in the FROMUSER parameter. However, the DBA must also specify the TOUSER parameter for the execution of IMPORT. This is necessary because the schema that owned the objects in the source database may not exist on the database to which the objects are being imported.
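A minimal sketch of a user-mode import that moves one schema's objects into a different schema follows; the schema names, account, and file name are assumptions for illustration:

```
# user_mode_imp.par -- hypothetical user-mode import
# Invoked as:  imp parfile=user_mode_imp.par
userid=system/manager
file=expdat.dmp
fromuser=scott
touser=jones
```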

Full imports have their own options as well. There are only two types of full imports, specified as values for the INCTYPE parameter: system and restore. The import must first be run with FULL=Y and INCTYPE=system, using the last export created on the database before the media failure; this step is critical to successful recovery because it re-creates vital data dictionary and other SYSTEM tablespace information. IMPORT must then be run to recover the database objects stored in all the different types of exports, namely complete, cumulative, and incremental. The proper order for applying exports to the database after the SYSTEM import is as follows. First, apply the most recent complete export. Next, apply all cumulative exports taken since that complete export, in least- to most-recent order. Finally, apply all incremental exports taken since the most recent cumulative export, again from least to most recent.
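The application order described above is mechanical enough to express as a short sketch. The catalog format below (export name, export type, sequence number) is hypothetical; it simply encodes the rule that recovery applies the latest complete export, then later cumulatives oldest first, then the incrementals taken after the last cumulative:

```python
def restore_order(exports):
    """Compute the order in which exports are applied after the
    initial SYSTEM import.

    exports: list of (name, type, seq) tuples, where type is
    'complete', 'cumulative', or 'incremental', and seq increases
    with time. Returns the export names in application order.
    """
    by_time = sorted(exports, key=lambda e: e[2])

    # Step 1: the most recent complete export starts the sequence.
    completes = [e for e in by_time if e[1] == 'complete']
    last_complete = completes[-1]
    order = [last_complete]

    # Step 2: all cumulative exports taken after it, oldest first.
    cumulatives = [e for e in by_time
                   if e[1] == 'cumulative' and e[2] > last_complete[2]]
    order += cumulatives

    # Step 3: incrementals taken after the most recent cumulative
    # (or after the complete, if no cumulative exists), oldest first.
    cutoff = (cumulatives[-1] if cumulatives else last_complete)[2]
    order += [e for e in by_time
              if e[1] == 'incremental' and e[2] > cutoff]

    return [name for name, _, _ in order]
```

Note that incrementals taken before the last cumulative are skipped entirely, which is exactly why cumulative exports consolidate them.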

Bear in mind that the logical backup and recovery services provided by EXPORT and IMPORT have the limitation of only being able to provide recovery to the point in time of the most recent database export. In other words, if there have been several database changes since the most recent export, those changes will be lost. To recover those changes, the users will have to reenter the data.

In order to determine which exports contain which database objects, the DBA has several dictionary tables at his or her disposal. The three tables in the data dictionary that are used to determine the contents of export files are called INCEXP, INCVID, and INCFIL, and all are owned by SYS. INCEXP contains a listing of all the database objects that are stored in exports and the schema containing the object. INCFIL is a catalog of all database exports, their ID numbers, and all information pertaining to the creation of the export, such as time and date and the user who created it. INCVID contains the information for the last export that was created. INCVID is used for the purpose of creating an ID for the next export that is created on the database.
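The contents of these tables can be inspected directly from a suitably privileged session. Column layouts vary by Oracle release, so this sketch deliberately selects all columns rather than naming any:

```
SELECT * FROM sys.incexp;   -- objects contained in each export
SELECT * FROM sys.incfil;   -- catalog of exports taken
SELECT * FROM sys.incvid;   -- ID of the most recent export
```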

Read consistency for database exports is a topic of discussion in this chapter. Read consistency has been defined so far at the statement and transaction levels as the consistency of data in the database during the period of time the statement or transaction executes. It is specified with the CONSISTENT parameter. When specified, it allows the complete or cumulative export occurring in the database to have read consistency during the period of time of the export. Depending on the number of objects in the database, the time it takes the export to complete, and the number of transactions that are changing data in the database, the rollback segment space that is required to sustain the export can be quite large. In addition, read consistency is not possible for use with incremental exports.

Read consistency with the CONSISTENT parameter promises a lot, but in reality does not always measure up. Since the parameter cannot be used with incremental exports, the DBA must secure other methods for ensuring that the read consistency of the database is not compromised during the export. One popular method for doing so involves the use of Oracle’s restricted session mode. When the DBA issues the alter system enable restricted session, only the users that have the restricted session privilege granted to them may access the database. Ideally, this privilege is only granted to the DBA. Since no other users can access the database during this time, there are two factors that should be considered. First, the export should take place when there are few users on the database, considering the full-service backup and recovery approach offered in Figure 12-2. Second, the restricted session will prevent users from making database changes while the export runs, effectively creating read consistency.
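The restricted-session approach can be sketched as the following sequence, with the export run from the operating system between the two statements:

```
-- As the DBA, bar general users from connecting:
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- ...run the export from the operating system here...

-- Reopen the database to all users when the export completes:
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```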

As mentioned on a few different occasions, the goal of a database recovery is not only to restore lost data, but to do so quickly. A parallel objective is the fast backup of data. Because the export is generally conducted on a database in restricted mode, in order to avoid the read inconsistency issues raised by leaving the database open to user changes and relying on the CONSISTENT option, the backup of a database using EXPORT should happen as quickly as possible. This can be accomplished when the DBA incorporates the DIRECT parameter into the usage of EXPORT. By default, EXPORT uses the same SQL processing mechanism used by regular user queries to obtain data from the database. The direct path eliminates some of the processing that regular queries and the conventional export path incorporate, and it optimizes the queries used to obtain data from the database as well.
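A hedged sketch of a direct path full export follows; the account and file names are placeholders, and BUFFER is deliberately omitted because it does not apply to the direct path:

```
# direct.par -- illustrative direct path full export
# Invoked as:  exp parfile=direct.par
userid=system/manager
file=full_direct.dmp
full=y
direct=y
```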

One of the biggest drawbacks of the logical backup and recovery options provided by EXPORT and IMPORT is their inability to restore data changes made after the most recent backup was taken. Generally, this function is provided with the application of archived redo logs, as will be discussed in the next two chapters. However, the export files created with EXPORT are incompatible with archived redo logs. As a result, archiving redo log information adds no value to a logical backup and recovery strategy. Logical backup is most effective in situations where the database user population can withstand the inevitable data loss.

The final area covered in this chapter involves the use of national language support. When an export takes place, EXPORT uses the character set of the source database to store the exported data. The IMPORT tool can import data from that character set even if the target database uses a different character set, because IMPORT can perform a character set conversion on the fly. However, this conversion increases the amount of time required for the import to finish. If the DBA tries to export the data in a character set other than the character set of the source database, the export will fail.
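On many platforms the client session's character set is governed by the NLS_LANG environment variable, which should match the source database's character set when exporting. The value, account, and file name below are illustrative only, assuming a Unix shell:

```
# Session setup before running the export (illustrative values)
NLS_LANG=AMERICAN_AMERICA.US7ASCII
export NLS_LANG
exp userid=scott/tiger file=expdat.dmp
```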

Two-Minute Drill

The types of database failure are user error, statement failure, process failure, instance failure, and media failure.
User error occurs when the user permanently changes or removes data from a database in error. Rollback segments give supplemental ability to correct uncommitted user errors.
Statement failure occurs when there is something syntactically wrong with SQL statements issued by users in the database. Oracle rolls back these statements automatically.
Process failure occurs when a user process running against the database is terminated either by Oracle or by the user. Statement rollback, release of locks, and other process cleanup actions are performed automatically by PMON.
Instance failure occurs when there is some problem with the host system running Oracle that forces the database to shut down. Recovery from this problem occurs when the instance is restarted. Instance recovery is handled automatically by the SMON process.
Media failure occurs when there is some problem with the disks that store Oracle data that renders the data unavailable. The DBA must manually intervene in these situations to restore lost data using backups.
Logical backup and recovery with EXPORT and IMPORT is one means by which the DBA can support backup and recovery.
EXPORT and IMPORT both accept certain parameters that will determine how the processes run.
These parameters are divided according to function.
There are parameters that handle the logistics of the database export. These parameters are USERID, FILE, CONSISTENT, and BUFFER.
There are parameters that limit the database objects that will be exported. These parameters are INDEXES, CONSTRAINTS, TRIGGERS, and GRANTS.
There are parameters that determine what mode the export will run in. These parameters are OWNER, TABLES, FULL, and INCTYPE.
Database export can happen with the EXPORT tool in three modes—table, user, and full.
In table mode, the DBA specifies a list of tables that will be exported by EXPORT.
In user mode, the DBA specifies a list of users whose database objects will be exported.
In full mode, the DBA exports all database objects, subject to the object-limiting parameters and the INCTYPE setting.
There are three types of full exports. They are complete, cumulative, and incremental.
The type of export depends on the value specified for the INCTYPE parameter. Values are complete, cumulative, and incremental.
Complete exports save all database objects in the export file.
Incremental exports save all database objects that have been altered or added since the last export of any type was taken.
Cumulative exports save all database objects that have been altered or added since the last complete or cumulative export was taken.
There are parameters that handle the logistics of the database import. These parameters are USERID, FILE, and BUFFER.
There are parameters that limit the database objects that will be imported. These parameters are INDEXES, CONSTRAINTS, TRIGGERS, and GRANTS.
There are parameters that determine what mode import will run in. These parameters are FROMUSER and TOUSER for user mode, TABLES for table mode, and FULL and INCTYPE for full mode.
Database import can happen with the IMPORT tool in three modes—table, user, and full.
In table mode, the DBA specifies a list of tables that will be imported by IMPORT.
In user mode, the DBA specifies a list of users whose database objects will be imported.
In full mode, the DBA imports all database objects, subject to the object-limiting parameters and the INCTYPE setting.
There are two types of full imports. They are system imports and restore imports.
The type of import executed depends on the value specified for the INCTYPE parameter. Values are system and restore.
Imports with FULL=y and INCTYPE=system restore the data dictionary objects contained in the export file. This system import should always be performed first in the event of a database recovery, using the most recent export.
Imports with FULL=y and INCTYPE=restore restore all other database objects in the export file to the damaged database.
There is a particular order required for database import.
First, the last export taken should be applied using the SYSTEM import.
Next, the most recent complete export should be applied using the RESTORE import.
Then, all cumulative exports taken since the complete export, starting from least to most recent, should be applied using the RESTORE import.
Finally, all incremental exports taken after the most recent cumulative export should be applied, from least to most recent incremental export, using the RESTORE import.
There are three dictionary tables used to track exported data.
INCEXP lists all exported database objects and the exports that contain them.
INCFIL lists all database exports by export ID number.
INCVID lists the most recent export ID number for the purpose of generating a new export ID number.
Read consistency is established when data doesn’t change for the life of a statement or transaction.
The CONSISTENT parameter handles read consistency for the database export.
Use of the CONSISTENT parameter is only permitted with the complete or cumulative export.
Read consistency can also be established by barring user access to the database during the time the backup is taken. This is done by issuing alter system enable restricted session.
Export obtains data from the database using the SQL execution mechanism that all other user processes use.
For better performance, the export can be run with the direct path, eliminating some of the steps required to handle standard SQL statements while optimizing the processing of other steps.
Direct path is specified using the DIRECT parameter. When using the direct path export, the BUFFER parameter is not valid.
Logical exports do not work in conjunction with archived redo logs. This has two implications. First, without archived redo logs, it is not possible to recover database changes to the point in time the database failed. Second, there is no value added by archiving redo logs.
Exports are in the same character set as the database from which the data came.
Data from a database can only be exported in the same character set as the database the data came from.
Data can be imported into another database using that database’s character set. If the character sets are different, IMPORT will execute a data conversion, which will lengthen the time required for the import.