
Chapter 20

Tuning Other Areas of the Oracle Database

Monitoring and Detecting Lock Contention

Levels of Locking in Oracle

Exercises

Identifying Possible Causes for Contention

Exercises

Using Tools to Detect Lock Contention

Exercises

Resolving Contention in an Emergency

Exercises

Preventing Locking Problems

Exercises

Identifying and Preventing Deadlocks

Exercises

Tuning Sort Operations

Identifying SQL Operations that Use Sorts

Exercises

Ensuring Sorts Happen in Memory

Exercises

Allocating Temporary Disk Space for Sorts

Exercises

Using Direct Writes for Sorts

Exercises

Optimizing Load on the Oracle Database

Configuring the SQL*Net Listener

Exercises

Configuring Dispatchers

Exercises

Configuring Shared Servers

Exercises

Chapter Summary

Two-Minute Drill


Chapter 20

Tuning Other Areas of the Oracle Database

In this chapter, you will understand and demonstrate knowledge in the following areas:

Monitoring and detecting lock contention
Tuning sorts
Optimizing load on the Oracle database

This chapter covers the final areas of tuning the Oracle database. Although each of these areas is important to understand from the perspective of day-to-day activities of the Oracle DBA, pay particular attention to tuning sorts, as the OCP Exam 4 will focus on their intricacies. All told, the materials in this chapter comprise about 20 percent of the material covered in OCP Exam 4 test questions.

Monitoring and Detecting Lock Contention

In this section, you will cover the following areas of monitoring and detecting lock contention:

Levels of locking in Oracle
Identifying possible causes for contention
Using tools to detect lock contention
Resolving contention in an emergency
Preventing locking problems
Identifying and preventing deadlocks

There are two objects in the Oracle architecture that manage control of access to the resources of an Oracle database. The first object is a latch, and its usage has already been discussed. Latches are used for control on Oracle internal resources, like redo logs, shared SQL buffers, the LRU list of buffers in the buffer cache, and other items that manage Oracle behind the scenes. The other devices used in an Oracle database to manage control of objects that users will encounter are called locks.

Levels of Locking in Oracle

Locks help to maintain transaction consistency on the Oracle database. A lock prevents one user from overwriting changes to the database that another user is making. Consider the necessity of transaction consistency. If two user processes are executing a series of procedures in order to make updates to the database system with no transaction consistency, there is no guarantee that data being updated by each user will remain the same for the life of that user’s transaction. However, with the existence of lock mechanisms comes the ability to perform transaction processing. Locking allows users to manipulate data freely during the transaction without worry that the data will change before they are done changing it.

There are several different types of locks within the Oracle architecture. It is worth taking note of what the different types of locks are and how much scope each lock has while it is enforced. Two different basic types of locks relate to Oracle data structures—they are the DDL locks and DML locks. DDL stands for data definition language—the statements used in the Oracle architecture for defining tables, indexes, sequences, and other devices used to define structures that are used by applications in data processing. These locks prevent the fundamental structure of a database object from changing as a user attempts to query or change data within it. For example, an alter table statement issued by the DBA will not complete until all user processes that are querying or changing that table have completed their query or change, and all other locks issued on that object before the DDL lock was requested have been resolved. Similarly, a DDL lock prevents a user process from querying or changing data while the object definition takes place.

The other type of locks in Oracle are the DML locks. These are the locks that user processes hold while they make changes to the database information. They allow the concept of transaction processing to exist within Oracle. Within the realm of DML locks are a few broad categories of scope that these locks will have when in action. The first scope category is the table lock. This lock, as one might guess, is over the entire contents of a table. During the period of time that a user process holds a table lock, no other process may make a change anywhere within that table. The next category of scope in the realm of DML locks is the row lock. This type of lock is more granular in that the table as a whole is left free for changes, except for certain records or rows within it. Figure 20-1 indicates the scope of table locks and row locks.


Figure 20-1: Scope of table and row locks on tables

In addition to scope, there are some different access levels to which locks permit or deny access to the data being changed to other user processes. The first access level that will be covered here is exclusive access. Exclusive access means that for as long as the lock on a row or table is held, only the user process holding the lock can see or change the data in that row or table. The other access level is called shared access. This access level means that the user process holding the lock will be the only process allowed to make changes to that locked data, but other user processes will have the ability to access the data via query at all times. Exclusive access is much more controlled than shared access; however, in many situations, shared access is suitable for application needs, and as such, most Oracle default lock capability uses shared locks in some form.

Consider another subject of transaction processing—transaction-level read consistency. This term means that as a process executes a series of data change statements that constitute a transaction, the process should have a version of the data that is consistent throughout the entire transaction. With one exception, as noted below, all types of locks within the Oracle database provide transaction-level read consistency.

With those terms defined, turn attention now to the actual types of locks available in the Oracle architecture. There are five different types of locks in Oracle, and they are: exclusive, shared, shared row exclusive, row shared, and row exclusive. The following is a more complete discussion of each type of lock in the context of the term definitions above.

exclusive A lock on the entire contents of the table, during which time no user other than the holder of the lock can select or update any row in the table.
shared A lock on the entire contents of the table that allows any user to select data in the table when another user holds this lock, but only the holder of the lock can actually update the table data.
shared row exclusive A lock on the entire contents of a table that allows the user holding this lock to update data in the table. However, other processes are allowed to acquire row locks on the table, thereby NOT providing transaction-level read consistency.
row exclusive A lock on a single row or group of rows that allows the user holding this lock to update the locked rows. No other user can query or change the rows being held in row exclusive mode until the process holding the lock has relinquished it.
row shared A lock on a single row or group of rows that allows the user holding the lock to update the locked rows. Other users can query the rows being held in a row shared lock, but only the holder of that lock can change the data.

Within the Oracle database, there are two methods used to acquire locks. Locks can be acquired implicitly by issuing a data change statement such as update, insert, or delete. Alternatively, a process can acquire locks manually using a special package provided by Oracle called DBMS_LOCK. This package is the only method available for acquiring table locks. For row locks, locking is possible in the following statements. The first statement is select for update. This statement selects rows as specified by the from and the where clauses and places those rows under a row shared lock. The other statement is the update statement, which places all rows affected by the update under a row exclusive lock. All other locks can be obtained by executing procedures in the DBMS_LOCK package.
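As a sketch of the implicit row-level locking just described, a session might acquire and release row locks as follows. The table and column names here are hypothetical, invented only for illustration:

```sql
-- Acquire a row shared lock on the selected rows;
-- other sessions can still query the locked rows.
SELECT empid, salary
FROM   emp
WHERE  dept = 'SALES'
FOR UPDATE;

-- Acquire a row exclusive lock on the rows being changed.
UPDATE emp
SET    salary = salary * 1.05
WHERE  dept = 'SALES';

-- The locks persist until the transaction ends.
COMMIT;
```

Note that the locks are held until the commit or rollback, which is why a stored procedure that issues an update without ever completing the transaction can leave locks accumulating on the table.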


Exercises

  1. What is a lock? How do locks facilitate the use of transaction processing?
  2. Describe the meaning of a shared lock and an exclusive lock. What are the two available scopes of locking within the Oracle architecture?
  3. What is the name of the Oracle package that provides several different lock functions?
  4. What are the five types of locks available in Oracle? How are these locks obtained?

Identifying Possible Causes for Contention

Users may sometimes have to wait to acquire locks on the database system, or wait to see data on the database that is held by a lock. Waiting for a lock is not in itself contention, as many times a process must simply wait until another process completes its changes. However, when something happens to hold up data processing, then a lock contention issue may be the cause. One example of contention as a result of locking may come from the unexpected placement of a database update statement in a stored procedure that is used by processes expecting to perform mainly queries against the database. In this example, the process is not expecting to perform an update to the database, so it does not contain any transaction completion statements like commits or rollbacks that would indicate to Oracle that the lock should be released. The effects can be particularly damaging: hundreds of locks may pile up on a table, causing performance on the table to degrade drastically. The cleanup will require putting the database in restricted session mode to prevent additional locks from accumulating while the DBA forces Oracle to relinquish the locks being held by user processes. Meanwhile, there is also the issue of correcting the stored procedure or embedded SQL that actually causes the problem.

Other causes of contention abound. If a process holds an exclusive lock on a database table and does not relinquish that lock, then other processes will contend with it for access to the rows of the table. The same effect will occur if the lock held is a row exclusive lock, albeit the effects are likely to be not as great, due to the limited scope of a row lock in general when compared to that of a table. As a general rule, it is ill-advised for DBAs and application administrators to start batch processes during times of heavy OLTP usage on the database because of the potential for lock contention. Figure 20-2 illustrates contention for tables that are held by table locks.


Figure 20-2: Contention for tables held by table locks

The final area to cover with respect to locking is the dangerous situation where one process holds a lock that a second process must have before the second process can give up the lock it has that the first process needs. This situation is known as a deadlock. Oracle has gone to great lengths to make sure that the database can detect situations in which deadlocking occurs. However, there are other situations where deadlocks occur and the DBA will have a great deal of difficulty detecting them.

Another possibility for contention exists within the usage of the shared row exclusive lock. Although this lock is a table lock, it allows access to the table by other processes that also have the ability to acquire row locks on the table and change data. This situation means that the holder of the original shared row exclusive lock may have to wait for other processes that acquire row exclusive locks on the table to complete their changes and relinquish the lock before the original process can proceed.

A final possibility for contention exists on client/server systems. In this environment, network problems or process errors on the client side can cause a process failure on the client. A user may be in the middle of updating a table via a row exclusive or shared lock when the client process or the SQL*Net network transport layer fails. In this situation, although the process has terminated, the lock and the update have not. After a short period, Oracle will catch up to the zombie process and handle the cleanup and rollback for that process. But in some cases there could be contention if the user does not understand that some time must pass before the lock held by the failed session is actually relinquished. The user may simply restart the client process immediately and attempt the same data change operation, requesting from the new session the same lock the old session still holds, and thereby put his or her own database activities into a lock wait.


Exercises

  1. Identify a situation involving PL/SQL where lock contention might be produced. How can the contention issue be resolved?
  2. Identify a situation involving SQL*Net where lock contention might be produced.
  3. What is a deadlock?

Using Tools to Detect Lock Contention

Once contention starts occurring on the database, it can be hard to determine what the cause of the contention is and how to resolve it without having utilities and views at the DBA’s disposal for those purposes. One tool that is commonly used by DBAs in the task of identifying contention is the UTLLOCKT utility that is provided as part of the Oracle distribution software. On most systems, this utility SQL script can be found in the rdbms/admin directory under the Oracle software home directory. UTLLOCKT queries the V$ACCESS and the V$SESSION_WAIT views to find the sessions and processes that are holding locks and the sessions and processes that are waiting for those locks to be relinquished so they can update the resource being held. UTLLOCKT places the output from that query in a readable tree graph form. This script should be run by the DBA while logged on as user SYS. Before using this script, the DBA will need to run catblock.sql. The output from this script looks something like the following code block:

WAITING_SESSION TYPE MODE_REQUESTED MODE_HELD     LOCK_ID1 LOCK_ID2
--------------- ---- -------------- ------------- -------- --------
8               NONE None           None          0        0
9               TX   Share          Exclusive (X) 604      302
7               RW   Exclusive (X)  S/Row-X (SSX) 50304040 19
10              RW   Exclusive (X)  S/Row-X (SSX) 50304040 19

One potential problem with this script is that in order to demonstrate the locks being held on the system, the script itself has to acquire some locks, thus potentially getting caught in the locking situation it is trying to help the DBA resolve. Another alternative for determining if there are contention issues that is available to the DBA, and that doesn’t require logging into the database as user SYS, is a query on the DBA_WAITERS view. This view lists session information for all sessions holding locks on a table and the processes that are waiting for that table to be free. In addition, the lock mode held by the holding session and the lock mode requested by the waiter are also available from this view. To determine if there is a lock contention issue on a particular table, the DBA can execute a query against the DBA_WAITERS view.

SELECT holding_session, mode_held,
waiting_session, mode_requested
FROM dba_waiters
ORDER BY holding_session;

Ordering the output of this query by holding session can help to identify if several processes are waiting for one session to relinquish a lock being held. The mode held will identify whether the lock holding the resource is a table or row lock, and whether the access level is exclusive or shared. This information will help the DBA resolve the contention issues in a hurry.

Another method for determining if there are locking issues on the database is to select information from V$SESSION where the value in the LOCKWAIT column is not NULL. Not only will this query show the DBA which processes on the database are in contention, the DBA can also potentially identify what data manipulation operation is involved, based on the value stored in the COMMAND column for sessions that are currently experiencing a lock wait.
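A sketch of that query might look like the following; the columns named are those of the V$SESSION view described above:

```sql
-- Sessions currently waiting on a lock, with the command in flight
SELECT sid, username, lockwait, command
FROM   v$session
WHERE  lockwait IS NOT NULL;
```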


Exercises

  1. Identify the uses for the UTLLOCKT utility. What script must be run first in order to use UTLLOCKT? What two views are used by this utility? What potential downfall does the UTLLOCKT utility have?
  2. What other database view can provide information about lock contention on the database?

Resolving Contention in an Emergency

One of the only guarantees a DBA will have in the course of regular production support on the Oracle database is that emergencies will arise that require immediate resolution of locking and contention issues. There are several ways for the DBA to combat the problem of lock contention. This section will detail some of them.

One blanket solution to resolving contention is to determine what session is holding the locks that make the whole database wait, and to kill that session. The DBA can execute the query listed above on the DBA_WAITERS view to determine the session ID (SID) of the process holding the lock. The other component required for killing a session is the serial number for that session. This information can be obtained from the V$SESSION dynamic performance view with the following query. Once the SERIAL# and SID are obtained from V$SESSION, the DBA can then issue the alter system kill session statement. Please note, however, that this method is a blanket solution that, at the very least, does not address the underlying problem of the locking situation. However, it is important to know at least how the "solution of last resort" works.

SELECT sid, serial#
FROM v$session
WHERE sid IN (SELECT holding_session FROM dba_waiters);

ALTER SYSTEM KILL SESSION 'sid,serial#';


Exercises

  1. What statement can be used to resolve lock contention in an emergency?
  2. What two pieces of information does this statement require?
  3. From which performance view can the DBA obtain this data?

Preventing Locking Problems

The better and more effective solution lies in the use of the DBMS_LOCK package, which was mentioned earlier in the chapter. This set of procedures allows the application to do many things. In addition to obtaining special table locks, this package has utilities that change the status of locks being held by user processes, and it also has a tool that allows the DBA to force a session to relinquish a lock. These procedures are used for resolving contention in emergency situations and should not be undertaken lightly. At the very least, the DBA should either try to contact users or management before pursuing the alter system kill session approach in production environments. The DBA should also ensure that the application developer follows up with a solution that ensures the locking issue will not arise in the future.

The two procedures that may be of greatest use in lock management are the convert( ) and release( ) functions of the DBMS_LOCK package. The first takes a lock of one type and converts it to another. For example, a process may be holding an exclusive lock on a table in order to update several rows in a read-consistent manner. It may be possible to obtain the same data change information with a shared lock, and by having the lock in that state, several SQL selects do not then have to wait for the process to relinquish its lock simply to select data. Or, if the application developer does not want other processes to see the changes until the transaction completes, perhaps a reduction in lock scope from table to row is in order. By default, Oracle acquires the lowest level of locking for select for update and update statements: the row shared or row exclusive lock, respectively. For acquiring all other locks, the application developer must use the allocate_unique( ) procedure, which identifies the lock with a unique lock ID. For use of the convert( ) function, that lock ID must be passed in, along with a numeric identifier for the lock mode requested and a time-out identifying the period of time after which convert( ) will no longer attempt to change the lock mode. The convert( ) function returns an integer value that details how the processing went for that execution. The release( ) function simply takes the lock ID generated by allocate_unique( ) and releases the lock. There is a return code for this function as well.

The above information about DBMS_LOCK is provided as an outline for the discussion between DBA and developer that must take place in order to prevent contention issues from occurring. Other functionality that DBMS_LOCK can provide is to ensure all processes on the system use the Oracle default locking mechanisms used in the select for update or update statements, rather than using higher levels of locking if those higher levels are not absolutely critical to the application.
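A minimal PL/SQL sketch of the allocate_unique( ), convert( ), and release( ) usage described above might look like the following. The lock name APP_BATCH_LOCK is hypothetical; the mode constants and return codes are those documented for the DBMS_LOCK package:

```sql
DECLARE
  v_handle VARCHAR2(128);
  v_status INTEGER;
BEGIN
  -- Map a lock name to a unique lock handle.
  DBMS_LOCK.ALLOCATE_UNIQUE('APP_BATCH_LOCK', v_handle);

  -- Request the lock in exclusive mode, waiting up to 10 seconds.
  v_status := DBMS_LOCK.REQUEST(v_handle, DBMS_LOCK.X_MODE, 10);

  -- Downgrade to shared mode so other sessions can read.
  v_status := DBMS_LOCK.CONVERT(v_handle, DBMS_LOCK.S_MODE, 10);

  -- Relinquish the lock; a return code of 0 means success.
  v_status := DBMS_LOCK.RELEASE(v_handle);
END;
/
```

In a real application, each return code would be checked before proceeding, since a nonzero status indicates a time-out, a deadlock, or an invalid handle.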


Exercises

  1. Identify the package that can be used to change lock status.
  2. What is the lock acquired by an update statement? By a select for update statement?

Identifying and Preventing Deadlocks

Deadlocks are situations that cause painful performance problems on the Oracle database. Situations arise where sometimes one process holds a lock on a resource while trying to obtain a lock for a second resource. A second process holds the lock for that second resource, but needs to obtain the lock for the first resource in order to release the lock on the second. This catch-22 is known as a deadlock. This situation can involve more than two processes as well. Figure 20-3 illustrates a simple deadlocking situation for better understanding. Both processes in the diagram hold a lock, but they each need the other’s lock to relinquish their own. Since neither process can proceed without the other giving up its lock, both processes are considered to be deadlocked. The figure is provided for information only, and does not illustrate a particular situation on the database that the DBA must watch out for. In fact, the Oracle database has several features built into it that prevent the occurrence of certain deadlocks, including the one illustrated in Figure 20-3. In reality, the DBA will have to identify and resolve far more challenging deadlock situations on the database.


Figure 20-3: Deadlocking in action

There is only one solution for this situation, and it is the solution of last resort: the DBA must kill one or both processes in the deadlock. This solution has already been covered. When Oracle's deadlock detection mechanisms discover a deadlocking situation on the database, they write a message to the alert log for the Oracle instance. This special log file, which is maintained by the database, contains all error messages, along with some other meaningful information about the instance. The DBA should watch the alert log for the "deadlock detected while waiting for a resource" error message, and use any process information recorded with it to determine the cause of the deadlock.

There are three final notes to make on preventing deadlocks. The DBA should recommend to developers that they should try to set their processes up such that all processes acquire locks in the same order. This will prevent the situation where processes acquire locks on resources that others need in reversed order, which has a high probability of creating deadlock situations. The second point is for applications to always specify the lowest level of locking provided by Oracle in select for update and update statements. The locking mechanisms provided by Oracle in those two data change statements should be sufficient for almost all application development needs. Finally, in the interest of preventing lock contention in OLTP systems, all long-running batch updates should be scheduled to happen outside of the normal business day’s data processing.
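The lock-ordering recommendation above can be illustrated with a hypothetical two-session sequence; the table and column names are invented for the example:

```sql
-- Session 1:
UPDATE acct SET bal = bal - 50 WHERE acct_id = 1;  -- locks a row in ACCT
-- Session 2:
UPDATE ord  SET qty = qty + 1  WHERE ord_id  = 9;  -- locks a row in ORD
-- Session 1:
UPDATE ord  SET qty = qty + 1  WHERE ord_id  = 9;  -- waits on session 2
-- Session 2:
UPDATE acct SET bal = bal - 50 WHERE acct_id = 1;  -- deadlock: ORA-00060
-- Had both sessions updated ACCT first and ORD second, one would
-- simply have waited for the other, and no deadlock would occur.
```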


Exercises

  1. What is a deadlock? Where should the DBA look to see if deadlocking is present on the database?
  2. How does the DBA resolve lock contention issues on the database in emergencies?
  3. What should the DBA do in order to prevent locking problems on the database?

Tuning Sort Operations

In this section, you will cover the following topics related to tuning sort operations:

Identify SQL operations that use sorts
Ensuring sorts happen in memory
Allocating temporary disk space for sorts
Using direct writes for sorts

Relational database design traces its roots to an area of mathematics called set theory. E. F. Codd and C. J. Date applied the ideas behind set theory, namely the creation of a mapping system of records to attributes, into a new relational design that acts as the cornerstone of relational databases. The names have changed, as records are now called rows and attributes are called columns, but the heart of the subject is still there in every relational database that applications may design. One other tenet in that early theory was that order does not matter in relational database design. Certainly that measure holds true within the records in any table on an Oracle database. The only time a certain order can be identified is perhaps in the situation where a table uses a sequence of numbers as the primary key. Even then, the database user may not care about the ordering of data in the database.

Identifying SQL Operations that Use Sorts

In the real world, though, order often does matter. Since the rows in the Oracle database usually aren’t stored in any particular order, the user may want to force some order upon them. This type of operation may be used in reports or online, or within the B-tree index creation mechanism where the indexed column on the database is stored in a particular order with the intent of allowing fast access to the table on that sorted column. Hence, in the absence of storing Oracle data in a special order, often there is a need for sorting data on the database.

Several data manipulation activities require sorts. One example is the order by operation. This option is commonly used in SQL select statements to produce query output ordered on a column specified by the query. This option improves the readability of data for the purposes of providing a more meaningful report. For example, a dump of all employee data contains the information needed to produce a comparison report to find out who the 65 highest-paid employees are. However, if the data is provided in a haphazard format, through which the reader has to search for several minutes or hours to find those highly paid employees, the data really has no meaning. Instead, the report could be designed to list every employee and salary in a department, in descending order on the SALARY column of the relevant table, using the order by clause of the select statement.


Figure 20-4: Using sorts to give data meaning
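A sketch of such an order by report follows; the table and column names are hypothetical:

```sql
-- List employees in a department from highest-paid to lowest-paid
SELECT empid, ename, salary
FROM   emp
WHERE  dept = 'ACCTG'
ORDER BY salary DESC;
```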

Another SQL operation that utilizes sorts is the group by clause. This operation is used to collect data into groups based on a column or columns. This function can be useful in various reporting situations where a set of distinct column values may appear several times, mapped with unique values in other columns. For example, a table of states, their cities, and their cities’ populations may appear in a table. To derive the population for each state, the following group by statement may be used:

SELECT state_name, SUM(city_population)
FROM state
GROUP BY state_name;

Sorts are used in several different situations on the Oracle database. Both the order by and group by operations use sorts. Sorts are also conducted as part of select distinct, minus, intersect, and union statements, as well as in the min( ), max( ), and count( ) operations. The sort join internal Oracle operation, run behind the scenes when a user executes a select statement that joins tables, also uses sorts, as does the creation of indexes.


Exercises

  1. What is a sort operation?
  2. What SQL operations use sorts?

Ensuring Sorts Happen in Memory

There are performance considerations involved in executing sorts. Oracle requires some temporary space either in memory or on disk in order to perform a sort. If Oracle cannot get enough space in memory to perform the sort, then it must obtain space in the temporary tablespace on disk to use. In most cases, the default size for this area in memory used for sorting is enough to store the entire sort; however, there can be situations where a large sort will require space on disk. Since data in memory can be accessed faster than data on a disk, it benefits the performance on sorts to keep all aspects of the sort within memory.

The DBA should monitor sort activities. The dynamic performance view that stores information about how frequently Oracle needs to access disk space to perform a sort is called V$SYSSTAT. To find the number of sorts occurring in memory versus the number of sorts occurring on disk, the DBA can select NAME and VALUE from V$SYSSTAT where the name is either 'sorts (memory)' or 'sorts (disk)'. In the output from this query, a high value for memory sorts is desirable, while the desired value for disk sorts is as close to zero as possible.
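That check might be sketched as follows, using the statistic names carried in V$SYSSTAT:

```sql
-- Compare in-memory sorts against sorts that spilled to disk
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');
```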

If there is a consistently high number of disk sorts, or if the number of disk sorts taking place on the database is increasing, then the DBA may want to consider increasing the space allocated for sorts in memory. This task is accomplished by increasing the value for the initialization parameter SORT_AREA_SIZE. This initialization parameter represents the greatest amount of memory a user process can obtain in order to perform a sort. Setting this value high allows the process to sort more data in fewer operations. However, as with increasing the size of any memory structure, the DBA will want to make sure that the additional size added to the sort area does not interfere with the amount of real memory available for the SGA. If the machine hosting Oracle starts paging the SGA into virtual memory on disk, there will be a bigger memory performance issue at hand associated with swapping information in real memory out to disk. One method the DBA can exercise to avoid memory management problems when increasing SORT_AREA_SIZE is to decrease another parameter associated with sorts, SORT_AREA_RETAINED_SIZE. This initialization parameter represents the smallest amount of space Oracle will retain in a process's sort area when the process is through using the data that was sorted. This may help memory, but at the expense of some additional disk utilization to move data around in temporary segments on disk. The DBA and application administrators may also improve database performance by ensuring that batch processing does not interfere with OLTP data usage during the normal business day.
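In the initialization parameter file, these two settings might look like the following. The values shown are illustrative only, not recommendations:

```
SORT_AREA_SIZE = 131072          # bytes of memory per process for sorts
SORT_AREA_RETAINED_SIZE = 65536  # memory retained after the sort completes
```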

Another way to improve performance with respect to sorting is to avoid sorts entirely. This method is particularly useful in the creation of indexes. As stated earlier, index creation uses a sort to build the binary search tree that can then be used to find a particular value in the indexed column and its corresponding ROWID quickly. Eliminating the sort during index creation is only possible if the data in the table is already stored in sorted order on the column that needs to be indexed. This option is useful if the operating system on the machine hosting Oracle has a particularly efficient sorting algorithm, or if there is only a tight window available for the DBA to create the index. The nosort clause allows the DBA to create an index based on table data that is already sorted properly. Important to remember in this scenario is that the table data must be sorted on the column being indexed in order for nosort to work. If the data in the table whose column is being indexed is not sorted, then the index creation will fail.

CREATE INDEX uk_emp_01
ON emp (empid) NOSORT;


Exercises

  1. For better performance, on which part of the system should sorts take place?
  2. What dynamic performance view can be used to determine how frequently sorts are using the various resources of the machine hosting Oracle?

Allocating Temporary Disk Space for Sorts

When a sort operation takes place and requires disk space to complete successfully, the disk space it uses is temporary. The appropriate tablespace to allocate this space in is the TEMP tablespace. The TEMP tablespace is used for user processes that require allocating temporary segments in order to process certain SQL statements. Sorts are one type of operation that may require temporary disk storage. The group by and order by clauses are two SQL operations that require sorts, which in turn may create segments in the user’s temporary tablespace for the purpose of sorting. Care should be taken to ensure that the user’s temporary tablespace is not set to default to the SYSTEM tablespace, as temporary allocation of segments for operations like sorts can contribute to fragmenting a tablespace. Both the default and temporary tablespaces for a user are set in the create user or alter user statements. If the temporary tablespace is not set in either of those statements, the user will place temporary segments used for sorts in the SYSTEM tablespace. Given the importance of SYSTEM to the integrity of the database, it is important for the DBA to minimize any problems that may occur with space management.
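For example, the temporary tablespace can be assigned when the user is created, or corrected afterward with alter user. The user names and password below are hypothetical:

```sql
-- Assign TEMP at user creation (hypothetical names)
CREATE USER spanky IDENTIFIED BY orange123
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp;

-- Correct an existing user whose temporary tablespace defaulted to SYSTEM
ALTER USER athena TEMPORARY TABLESPACE temp;
```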




Exercises

  1. How can the DBA ensure that users utilize the TEMP tablespace for sorts requiring temporary segments?
  2. What tablespace should never be used to store temporary segments?

Using Direct Writes for Sorts

In some situations, the machine hosting the Oracle database may have extensive disk and memory resources available, making it possible to improve performance on data sorts through the use of direct writes. This option is set up using three parameters from the Oracle initialization parameter file. Those parameters, along with an explanation of their usage, are as follows:

SORT_DIRECT_WRITES—should be TRUE or AUTO. When TRUE, Oracle will obtain buffers in memory that are designed to handle disk writes as part of the sort.
SORT_WRITE_BUFFERS—specified as an integer. When SORT_DIRECT_WRITES is TRUE, Oracle will obtain this number of buffers to handle disk I/O on sorts.
SORT_WRITE_BUFFER_SIZE—value specified as an integer in bytes. When SORT_DIRECT_WRITES is TRUE, Oracle will size each buffer obtained for disk writes to be the value specified for this parameter.

It is important to remember that using SORT_DIRECT_WRITES represents a large memory and disk resource commitment for the purpose of sorting data. Specifically, this option will require memory additional to whatever sort area has been allocated as part of the SGA, in the amount of SORT_WRITE_BUFFERS times SORT_WRITE_BUFFER_SIZE. The DBA should use extreme care to make sure that this additional memory requirement does not cause a shortage of available real memory such that the host machine pages the SGA out into virtual memory on disk. By reducing SORT_AREA_SIZE by the same amount of memory that the direct write buffers will consume, as calculated by the formula above, the DBA alleviates some of the burden on real memory produced by enabling direct writes. Oracle recommends that SORT_DIRECT_WRITES be used only when the memory calculated by the formula above is no more than one-tenth the value specified for SORT_AREA_SIZE. If the space allocated for direct writes is any larger than 10 percent of SORT_AREA_SIZE, then this option should not be used.
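A sketch of an init.ora fragment observing the 10 percent guideline might look like the following; the values are illustrative assumptions only:

```text
SORT_AREA_SIZE = 1048576        # 1M sort area
SORT_DIRECT_WRITES = TRUE
SORT_WRITE_BUFFERS = 2
SORT_WRITE_BUFFER_SIZE = 32768  # 2 * 32768 = 65536 bytes, about 6 percent of SORT_AREA_SIZE
```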


Exercises

  1. Explain how direct writes boost performance on sorts.
  2. What are the hardware considerations of using that option?
  3. What three initialization parameters are used to configure that option?

Optimizing Load on the Oracle Database

In this section, you will cover the following topics related to optimizing load on the Oracle database:

Configuring the SQL*Net listener
Configuring and monitoring dispatchers
Configuring and monitoring shared servers

The basic elements of the client/server architecture have already been covered in Chapter 17. Oracle provides interprocess communication ability with a product called SQL*Net. This application is used by both clients and servers to communicate with one another. Without the SQL*Net layer acting as the interpreter, the client process and the server process are unable to interconnect. The cornerstone of SQL*Net is a process called a listener. This process does just that—it listens to the network connection for requests from the client to come in and request data. A listener can be thought of as similar in many ways to a radio. The listener tunes into a particular "frequency" to listen for connections on the type of network being used, and "hears" requests issuing from only that network. In heterogeneous networking environments, there may be several different listeners in place on the machine running Oracle server, each tuned to the different network protocols available. Once the listener "hears" a request for information, the listener moves the user process along in the database until it is eventually connected to a server process that will assist the user process with obtaining data from the Oracle database.

Configuring the SQL*Net Listener

The SQL*Net listener has several parameters that govern its use on the Oracle server. These parameters are set for the listeners on the database in the listener.ora parameter file. The first parameter is the STARTUP_WAIT_TIME parameter. This parameter identifies the maximum time period that the listener can wait before it must respond to a status command issued by LSNRCTL. The default value for this parameter is zero, and that value should work best on all but the slowest networks. The next parameter is CONNECT_TIMEOUT. This parameter sets the length of time that the listener will wait for the client to issue a connection request once connection is established. After the time period passes, the listener will drop the connection and continue listening for new connections to appear. SQL*Net’s default value for this parameter is 10 seconds, which again should suffice for all but the slowest networks.

Another key parameter configured in the listener.ora file is the ADDRESS parameter. This parameter gives the SQL*Net connection description for the database connection information available for the specific database that the listener handles connections for. When using the LSNRCTL utility to manage the SQL*Net listener, the DBA must also specify the Oracle SID for the database the listener accepts connections for and the network protocol the host running Oracle is connected to (e.g., TCP/IP, IPX, NetBEUI, DECnet). The DBA must also specify some operating-system-specific information about the environment Oracle runs in, such as the location of the listener executable script, the Oracle software home directory, and other things.

Finally, there can be a PASSWORDS initialization parameter set into the listener.ora file that manages security by limiting who can administer SQL*Net on the host machine. It is recommended that this parameter be utilized in order to avoid security issues arising from the unauthorized administration of the SQL*Net listener.

The values for these parameters can be set in a few different ways. The first method that can be used to configure the SQL*Net listener is the LSNRCTL, or listener control, utility. This utility provides a command-line interface for making changes to listener settings. The other method is to edit the listener parameter file, listener.ora. Please note that the listener parameter file is NOT the same as the Oracle database instance parameter initialization file, generally known as init.ora. Care should be taken to differentiate between Oracle server configuration and SQL*Net configuration. In addition to the network name, node (host machine) name, and listener name (the default is listener, which must be unique if there are several listeners on the same machine hosting Oracle), there are several parameters that LSNRCTL must have in order to configure and run the listener.
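A minimal listener.ora sketch is shown below. The host name, port, SID, Oracle home path, and password are hypothetical, and the exact syntax varies by SQL*Net release; note how each parameter name carries the listener name appended to it:

```text
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclehost)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = orcl)
      (ORACLE_HOME = /u01/app/oracle)))

STARTUP_WAIT_TIME_LISTENER = 0
CONNECT_TIMEOUT_LISTENER = 10
PASSWORDS_LISTENER = (listenerpass)
```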


Exercises

  1. Name parameters that affect the listener process.
  2. What is the tool that can be used to manage the SQL*Net listener?
  3. What is the name of the initialization parameter file for the listener process?

Configuring Dispatchers

As discussed in Chapter 17, there are two ways that servers can be configured in Oracle to handle user processes. The first is the dedicated server configuration. In this setup, each user process that connects to the database via the SQL*Net listener has its own dedicated server process to read data into the buffer cache on behalf of the user process. The limitation pointed out for this setup was that allowing user processes to have a dedicated server process incurs a great deal of memory overhead on the Oracle database used to run a server process for every user process that connects.

Oracle’s response to this situation is the multithreaded server (MTS) architecture, whereby multiple user processes can have their database reads handled by a limited number of shared servers. As mentioned in Chapter 17, access to the shared server processes is brokered by a dispatcher. Once a client process establishes connection with the SQL*Net listener, the listener passes the user process along to a dispatcher that brokers the client’s access to the database. The dispatcher then connects user processes with shared server processes, which then perform operations on the Oracle database on the user processes’ behalf.

Contention for dispatcher processes is an important area of tuning. Contention is indicated to the DBA either by increased wait times for dispatcher processes to respond to user processes or by a high frequency of user processes encountering dispatchers that are busy. The number of times a user process encounters a busy dispatcher is called a busy rate, while the number of times a user process has to wait to actually obtain a dispatcher is called a wait time. As with every other resource covered, there is a dynamic performance view that maintains statistics for the busy rate of the dispatchers working on each network protocol handled by a SQL*Net dispatcher. The name of that performance view is V$DISPATCHER. The formula for calculating the busy rate ratio for each network the database machine is connected to is reflected by the following formula: (total busy time / (total busy time + total idle time)) * 100. Total busy time is reflected by the sum of values in the BUSY column for each network, and total idle time is reflected by the sum of values in the IDLE column for each network. The method for distinguishing networks on V$DISPATCHER is represented by unique network names in the NETWORK column. Both busy and idle times are measured in 1/100 seconds. To obtain the data to place in this equation, the following query can be utilized:

SELECT network,
SUM(busy) "total busy time",
SUM(idle) "total idle time"
FROM v$dispatcher
GROUP BY network;

If the busy rate ratio calculated from the data provided by the V$DISPATCHER view exceeds 50 percent, there is a problem with busy rates for dispatchers on the Oracle database. This problem can be corrected by adding dispatchers to the database, which is accomplished by increasing the number of dispatchers with the alter system set mts_dispatchers statement.

ALTER SYSTEM SET MTS_DISPATCHERS = 'protocol_name, number';

The value for the protocol_name variable corresponds to the network protocol from the query used above against the V$DISPATCHER view to obtain statistics to calculate the busy rate ratio. The value of the number variable for the alter system set mts_dispatchers statement above is the new total number of dispatchers for that protocol. Care should be taken when adding dispatchers so as not to exceed the total number of dispatchers set for the entire database, as specified in the MTS_MAX_DISPATCHERS initialization parameter. If improving dispatcher busy rates requires more dispatchers than this limit allows, then the initialization parameter MTS_MAX_DISPATCHERS should be increased as well. The highest value that can be specified for this initialization parameter is specific to the operating system hosting Oracle, but the default value for this parameter is five.
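For example, to set the dispatcher count for the TCP protocol, a statement like the following might be issued; the protocol name and count are hypothetical:

```sql
-- Raise the number of TCP dispatchers to 5 (must not exceed MTS_MAX_DISPATCHERS)
ALTER SYSTEM SET MTS_DISPATCHERS = 'TCP, 5';
```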

The other statistic relevant to monitoring dispatcher activity is the amount of time processes have to wait to get responses from the dispatcher while they sit in the dispatcher response queue. This ratio is calculated from statistics contained in the V$DISPATCHER performance view, as well as statistics for queue activity that are contained in the V$QUEUE performance view. The V$DISPATCHER view provides the name of the network protocol for which a wait ratio will be derived and the V$QUEUE view provides wait-time statistics and the total number of responses that have been queued for that instance. The common link between the two views is PADDR, the memory address of the dispatcher process. In order to prevent an error when Oracle divides by zero in the case of no processes having to enter the response queue, the following query also uses a decode( ) operation. The query that can be used to derive the wait ratio for dispatcher processes is as follows:

SELECT d.network,
DECODE(SUM(q.totalq), 0, 'NONE',
SUM(q.wait)/SUM(q.totalq)) "average wait"
FROM v$queue q, v$dispatcher d
WHERE q.type = 'DISPATCHER'
AND q.paddr = d.paddr
GROUP BY d.network;

No particular value for the wait ratio is necessarily unacceptable, although the DBA should attempt to keep the value for this statistic as low as possible and also try to prevent consistent increases in this statistic. If increases in this wait-ratio statistic occur steadily on the system, the DBA should consider adding dispatchers to the database. This step is accomplished by increasing the number of dispatcher processes using the alter system set mts_dispatchers statement with the appropriate protocol name and number included as the value for the MTS_DISPATCHERS parameter.

ALTER SYSTEM SET MTS_DISPATCHERS = 'protocol_name, number';


Exercises

  1. What is Oracle’s multithreaded server architecture? What is a dispatcher?
  2. What performance views contain statistics for dispatcher performance?
  3. What two aspects of dispatcher performance should the DBA monitor? What initialization parameters affect the number of dispatcher processes on the system?

Configuring Shared Servers

Shared servers handle data operations on behalf of many user processes at the same time. Their counterparts, dedicated servers, handle data operations for only one user process. The management of shared servers involves determining whether user processes are waiting excessively on the request queue for shared server processes. The statistics for this average wait-time calculation are derived from the V$QUEUE dynamic performance view, and the columns used are the same as the ones used for determining the average wait for dispatchers. The required statistics for the shared server processes are contained in the WAIT and the TOTALQ columns, and are associated with the type of queue in the TYPE column that is equal to ‘COMMON’. To calculate the average wait time from data in the V$QUEUE view, use the following formula: total wait time / total requests enqueued. Wait time is the statistic contained in the WAIT column associated with the row where TYPE = ‘COMMON’. This is a total wait time for all requests. The total requests enqueued statistic is equivalent to the value contained in the TOTALQ column associated with the row where TYPE = ‘COMMON’. This is the total number of requests that have been enqueued for the life of the instance. There is no particular "right" value for the quotient of this equation, but ideally, the time should be as low as possible. Also, the value for the average wait time should be tracked over time. If this average wait time increases consistently, there could be a problem with the number of shared servers on the database.
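The average wait per request for shared servers can be derived with a query like the following; the decode( ) call guards against division by zero when no requests have been enqueued:

```sql
SELECT DECODE(totalq, 0, 'NO REQUESTS',
wait/totalq) "average wait"
FROM v$queue
WHERE type = 'COMMON';
```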

Alleviating contention for shared servers is tricky. Usually, there shouldn’t be a problem with contention for shared servers because Oracle automatically increases the number of shared servers if the average wait time for the existing shared servers grows past a certain amount. However, if the maximum number of shared servers has been reached, as represented by the initialization parameter MTS_MAX_SERVERS (default 20), the DBA can increase the maximum number of shared servers by increasing this initialization parameter. After that, Oracle will automatically increase the number of shared servers. Of course, the DBA can increase the number of shared servers as well by increasing the value of another initialization parameter, MTS_SERVERS, with the alter system set mts_servers statement.
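As a sketch, the shared server parameters might be adjusted as follows; the values shown are illustrative assumptions:

```sql
-- In init.ora, raise the ceiling on shared servers (takes effect at instance startup):
--   MTS_MAX_SERVERS = 40

-- Then, up to that ceiling, add shared servers dynamically:
ALTER SYSTEM SET MTS_SERVERS = 10;
```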



Exercises

  1. What is the Oracle multithreaded server architecture?
  2. What is a shared server process? What initialization parameters affect the number of shared server processes on the Oracle database?
  3. How is shared server performance measured? What performance view contains statistics about shared server performance?

Chapter Summary

This chapter covered some miscellaneous aspects of tuning the Oracle instance. Those areas include tuning contention as demonstrated by locks, tuning sorts, and optimizing load on the Oracle database with the multithreaded server architecture. These topics cover materials that comprise 20 percent of the questions asked on OCP Exam 4.

The first area covered by this chapter was monitoring and detecting lock contention. There are five different types of locks on the Oracle database. Shared row locks entitle their holder to change data in the locked row, but allow other users to access the locked data via query at any time. These locks are acquired automatically by select for update statements. Exclusive row locks entitle the holder to exclusive select and update access to the locked row. No other user can see or change data in the locked row. These locks are acquired automatically by the update statement. Shared locks are table locks that permit the holder of the lock the ability to change any row in the table, but any user on the database can view the data being changed at any time. Exclusive locks are table locks that permit the holder of the lock exclusive access to view or change data in a table. No other user can query or change data in that table for as long as the lock is held by that user. The final type of lock is the shared row exclusive lock. Locks can also be acquired manually through calls to a special package that manages locks, called DBMS_LOCK.

Several possibilities for contention exist on the Oracle database. One area for contention arises from a process having a greater level lock than it needs to execute an update. Another example is when the application acquires a lock in an area that otherwise behaves in a "select only" manner, and as such never relinquishes the lock it is given. Another possibility for contention exists in the client/server architecture, when a user process drops off the database but still holds locks on the database. It takes some time before Oracle realizes the connection was lost and allows the transaction to roll back. During that time, there could be contention as the user process tries to connect again and execute the same process they were executing when connectivity was lost before.

There are utilities and views to discover which processes are experiencing and causing waits. The UTLLOCKT utility provides a tree diagram of all processes that are holding locks and all processes that are waiting for the lock to be released. Another method for obtaining data of this type is to query the DBA_WAITERS view of the Oracle data dictionary.

The method DBAs use to resolve contention issues in an emergency is to kill one or both of the processes in contention. For this to occur, the DBA needs the session identifier and the serial number (SID and SERIAL# columns of V$SESSION, respectively) for the process(es) that will be killed. This information comes from the V$SESSION dynamic performance view, where the session ID equals the holding session from DBA_WAITERS. The syntax for killing a session is to issue the alter system kill session statement, passing the value for session ID and serial number.
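For example, the holding session can be identified and killed as follows; the SID and serial number shown are hypothetical values returned by the first query:

```sql
-- Find the session holding the lock
SELECT sid, serial#
FROM v$session
WHERE sid IN (SELECT holding_session FROM dba_waiters);

-- Kill it using the values returned, for example SID 15 and serial number 3044
ALTER SYSTEM KILL SESSION '15,3044';
```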

The best method for preventing a locking problem is to identify and solve the problem at the application level. Typically, the problem is being caused by a process that is acquiring locks that it isn’t giving up, or the process is acquiring locks that have more scope than the process really needs. The solution is to change the application to release the locks it acquires or to use locks with the least "locking power" necessary to accomplish what it is trying to accomplish.

Deadlocks are a particularly serious locking problem. Oracle is designed to resolve certain deadlocking situations automatically. The DBA can identify if these situations are occurring by checking the alert log for the Oracle instance for the "deadlock detected while waiting for a resource" error. Preventing deadlocks is accomplished by the DBA making the following two recommendations at the application level: ensure that all processes acquire locks in the same order, and always use the lock with the least amount of "locking power" required to carry out the transaction.

The next area covered in this chapter was tuning sorts. The operations that require sorts are group by, order by, select distinct, minus, intersect, union, min( ), max( ), count( ), and sort join internal Oracle operations and the creation of indexes. Sorting should be done in memory, where possible, to ensure maximum performance. The frequency of sorts occurring in memory can be assessed using the V$SYSSTAT performance view, where the value in the NAME column equals ‘sorts (memory)’ or ‘sorts (disk)’. To increase the sorts performed in memory, the size of the sort area can be increased, although the DBA should be careful to ensure that once increased, the entire Oracle SGA still fits in real memory. See Chapter 17 for more details on what happens when the SGA is sized out of real memory. The size of the sort area is determined by the value, in bytes, of the SORT_AREA_SIZE initialization parameter.
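The memory and disk sort counts can be compared with a query such as the following, assuming the statistic names used by the Oracle release in question:

```sql
SELECT name, value
FROM v$sysstat
WHERE name IN ('sorts (memory)', 'sorts (disk)');
```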

When using sorts that require disk space, the user processes should always use the special TEMP tablespace to allocate their temporary segments for the use of sorting. The DBA can ensure this happens by configuring all users with a temporary tablespace at user creation. If the TEMP tablespace is not created or specified, the user processes will allocate temporary segments in the SYSTEM tablespace. Since temporary segments have a tendency to fragment tablespaces, and since the SYSTEM tablespace is critical to the functionality of the Oracle database, it is ill-advised to use the SYSTEM tablespace for anything other than storing data dictionary tables and the initial rollback segments of the database.

In the conclusion of tuning sorts, the topic of using direct writes for sorting was discussed. This option should only be used on systems that have extensive memory and disk resources available for improving performance on sorts. Direct writes provide the functionality of allowing Oracle to write temporary blocks for sorts directly to disk, bypassing the buffer cache of the SGA. To use this option, three initialization parameters must be used. Those parameters are SORT_DIRECT_WRITES (set to TRUE to enable), SORT_WRITE_BUFFERS (which represents the number of buffers used for sort direct writes), and SORT_WRITE_BUFFER_SIZE (which is the size of the buffers used). When using this setup for performance improvements on sorts, the DBA should ensure that there is additional memory available on the system to accommodate the SGA size plus the value of SORT_WRITE_BUFFERS * SORT_WRITE_BUFFER_SIZE. Additionally, the value of the above equation should be no more than 10 percent of the value originally specified for the SORT_AREA_SIZE or there may be little performance gain from using sort direct writes.

The last area covered in this chapter was the section on optimizing load. The first area covered in this section was the configuration of the SQL*Net listener process. Initialization parameters for this process are contained in the listener.ora file. The name of the SQL*Net listener is usually LISTENER, but on machines running multiple versions of listener corresponding to multiple networks connected to the Oracle database, the names of the listener must be unique. Listener has several parameters that must be configured, including PASSWORDS, SID_LIST, STARTUP_WAIT_TIME, CONNECT_TIMEOUT, TRACE_LEVEL, TRACE_DIRECTORY, TRACE_FILE, LOG_DIRECTORY, and LOG_FILE. Each parameter can be set using the LSNRCTL utility that handles listener configuration. Each parameter listed above should have the name of the listener appended to the end of it in the listener.ora file.

The next area to consider when optimizing load on the multithreaded server architecture is configuring the dispatchers on the Oracle server. The dispatchers act as go-betweens to map user processes coming into Oracle via the client/server architecture to shared servers that obtain data from the database on behalf of the user process. The number of dispatcher processes is determined by the MTS_DISPATCHERS initialization parameter, while the maximum number allowed is set by MTS_MAX_DISPATCHERS, whose default value is five. Determining contention for dispatcher processes is based on two statistics: the busy ratio and the wait ratio. The busy ratio is calculated based on statistics from the V$DISPATCHER dynamic performance view, while the wait ratio is calculated based on statistics from both the V$DISPATCHER and V$QUEUE views. The busy ratio is the total busy time divided by (busy time plus idle time), times 100. The wait ratio is the sum of the values in the WAIT column divided by the sum of the values in the TOTALQ column from the V$QUEUE view. If the busy ratio is over 50, or if the wait ratio is increasing steadily, the DBA can increase the number of dispatchers by increasing the value set for MTS_DISPATCHERS, executing an alter system set mts_dispatchers command and specifying the protocol to which to add dispatchers and the number of dispatchers for that protocol. The number of dispatchers for any protocol may not push the total over the number of dispatchers allowed for all protocols, as represented by the MTS_MAX_DISPATCHERS initialization parameter. If the DBA wishes to increase MTS_DISPATCHERS beyond the value of MTS_MAX_DISPATCHERS, then the DBA must increase the value set for MTS_MAX_DISPATCHERS as well.

The final aspect of optimizing load that was covered in this chapter was the configuration and monitoring of shared server processes. These are the processes that obtain data from the Oracle database on behalf of the user process. The number of shared servers on the database is determined by the MTS_SERVERS initialization parameter. The number of shared servers allowed on the system is determined by the MTS_MAX_SERVERS parameter, which defaults to 20. It is generally best to set MTS_SERVERS low, because Oracle will automatically add shared servers if the average wait time for a shared server by a user process goes up rapidly or consistently. The performance of shared servers is monitored by the DBA by viewing statistics collected by the V$QUEUE performance view. The average wait time is calculated by dividing the value in the WAIT column by the value in the TOTALQ column where the value in the TYPE column equals ‘COMMON’. If this statistic goes up consistently and the maximum number of shared servers has been reached, the DBA can increase the value for MTS_MAX_SERVERS in an attempt to alleviate high average wait times for shared servers.

Two-Minute Drill

Levels of locking include row shared, row exclusive, shared, exclusive, shared row exclusive.
Causes of lock contention are when a process doesn’t relinquish a lock it holds, when a process holds a higher level of lock than it really needs, and when a user process drops while holding a lock in the client/server architecture.
The UTLLOCKT procedure is used to detect lock contention. Additional information about lock contenders can be found in the DBA_WAITERS view.
The method to eliminate contention is to kill sessions that are deadlocked. The Session ID and serial number from V$SESSION are required for this activity. To kill a session, execute alter system kill session.
Preventing deadlocks is done at the application level by changing the application to relinquish locks it obtains or using locks with the least amount of scope required to complete the transaction.
Oracle errors arising from deadlocks can be found in the alert log, a special file the Oracle database uses to track all errors on that instance. The error "deadlock detected while waiting for a resource" corresponds to a deadlock.
Application developers can also prevent deadlocks by designing the application to acquire locks in the same order in all processes, and use the minimum locking capability required to complete the transaction.
SQL operations that use sorts include group by, order by, select distinct, minus, intersect, union, min( ), max( ), count( ), and sort join internal Oracle operations and the creation of indexes.
Sorting should be done in memory. The V$SYSSTAT view can be queried to find the number of sorts done in memory vs. the number of sorts done using disk space.
To increase the number of sorts taking place in memory, increase the value set for the SORT_AREA_SIZE initialization parameter.
If a disk sort is performed, the DBA should ensure that all temporary segments allocated for that sort are placed in a temporary tablespace called TEMP. This is ensured by creating users with a temporary tablespace named in create user. If none is named, the default tablespace used for storing temporary segments will be SYSTEM—this can lead to problems because temporary segments fragment tablespaces and SYSTEM is critical to the proper function of the database.
If memory and disk resources permit, set the database to use SORT_DIRECT_WRITES. This parameter is set to TRUE if the database is to use direct writes to the database for sorting. The SORT_WRITE_BUFFERS parameter determines the number of buffers that will be used for direct writes and the SORT_WRITE_BUFFER_SIZE parameter will be used to determine the size of the buffers.
The SQL*Net listener process manages accepting connection requests from user processes. The listener is configured by LSNRCTL utility, and its initialization parameters are stored in listener.ora.
Dispatcher processes allocate user processes to shared servers. Their performance is monitored with the V$DISPATCHER and the V$QUEUE views. Performance is measured by a busy ratio and a wait ratio. If there are contention issues with dispatchers, more can be added by increasing the MTS_DISPATCHERS setting with an alter system statement. In some cases, the value of the MTS_MAX_DISPATCHERS parameter may also need to be increased if the DBA needs to add a dispatcher for a network protocol and doing so exceeds the total number of dispatchers allowed for all protocols.
Shared server processes obtain data changes on behalf of user processes. Their performance is monitored by the DBA by calculating the average wait time for shared servers, based on statistics kept in the V$QUEUE performance view. Shared servers are added to the system automatically by Oracle. In some cases, the DBA may have to increase the value set for the initialization parameter MTS_MAX_SERVERS in order to allow Oracle to allocate more shared servers to handle additional transaction processing burden. Or, the DBA can allocate additional shared servers by executing alter system set mts_servers = num;.