In this chapter, you will understand and demonstrate knowledge in the following areas:
|Accessing and updating data|
|Managing transaction concurrency|
|Managing the database structure|
|Managing storage allocation|
Organizations use the Oracle database to manage their information. In order to work with a database, users in the organization must have some way to access the data in the database, and also some way to change it. This chapter covers how Oracle manages data access and data change. The first area to be discussed will be accessing and updating data: how Oracle manages data access and data change, and what happens when a user issues a data query or change request. Oracle also recognizes that data in the database is never entirely safe from the hardware problems that cause data loss, so this chapter also discusses how Oracle preserves data in the event of a problem. Finally, the foundations of database management are presented in a discussion of managing the database structure and the storage allocation of various objects and resources in the database. These topics comprise 22 percent of OCP Exam 2 on Oracle database administration.
In this section, you will cover the following topics related to accessing and updating data:
|Steps in SQL statement processing|
|The benefits of the shared SQL area|
|The function and contents of the buffer cache|
|Role of the server process|
|Role of the DBWR process|
|Events triggering DBWR activity|
Key to the use of an Oracle database is the ability to access and change data. Oracle simplifies the mechanics of accessing data in the database by providing mechanisms to access data and structures to manage access performance. With these built-in mechanisms handling the tasks of reading data from disk, saving it back to disk, and optimizing operation structure to improve SQL statement performance, the user can focus on more salient questions, like "What data am I looking for?" This discussion will focus both on the mechanisms that process a user's request for data and on the mechanisms that work behind the scenes to make data available quickly.
SQL provides a method for obtaining data from a database by allowing the user to define the data they need, not a process by which to obtain it. Think about this difference by considering the following statement: "I'm thirsty." To a person accustomed to dealing with thirst, the solution is obvious: get a glass of water and drink it. However, there are steps to be followed in order to fulfill that statement.
1. Get off the couch.
2. Walk to the kitchen.
3. Open a cupboard door.
4. Remove a glass from the cupboard.
5. Put the open end of it underneath the tap.
6. Turn the cold water knob on the side of the tap.
7. Fill the glass to capacity.
8. Turn off the water.
9. Raise the glass to my lips.
10. Ingest the water, being careful to not get any up my nose.
Notice, too, that each step in the process may have a procedure of its own. Getting to the kitchen may require standing up, putting the left foot in front of the right, then the right in front of the left, repeating that loop, going down the stairs, and so on. So, behind every defined "want" there is a procedure, perhaps implicit, used to obtain it. Most programming languages provide the developer with mechanisms to define a procedure to obtain desired information. Unfortunately, when someone looks at the procedures another developer has created, the original developer's "want" gets lost in the details of how the developer obtained it.
SQL alleviates this situation by allowing a developer to define a desired outcome, and letting Oracle's relational database management system figure out the steps, or operations, to obtain it. Those operations fall into the pattern detailed in Figure 7-1. The flow of operation in processing a SQL statement is as follows:
|Oracle opens the statement. Oracle first obtains a cursor, or memory address in the library cache of the shared pool, where the statement will execute. Then Oracle obtains some space in the PGA called a private SQL area, where statement return values will be stored.|
|Oracle parses the statement. Oracle creates a parse tree, or execution plan, for the statement and places it in the shared pool. The execution plan is the list of operations used to obtain data.|
|Oracle creates bind variables. For select statements, bind variables for return values are created in the parsed SQL statement. This allows Oracle to share the parsed operations, but not the data, in the shared SQL area of the library cache. For update, insert, and delete commands, this step is omitted.|
|Oracle executes the statement. At this point, Oracle performs all processing to complete the specified operation.|
Figure 7-1: SQL statement process flow
When the user is making a data change via update, insert, or delete, the data is changed as soon as the statement executes. On the other hand, if the user has issued a select statement, then the result set of the statement is available in the cursor at the end of execution, and must be fetched from the cursor in order for the user to view it.
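The open/parse/execute/fetch flow described above is common to most relational database clients. As a minimal sketch, the same phases can be walked through with Python's built-in sqlite3 module standing in for an Oracle client (sqlite3 is not Oracle, and the table and data here are invented for illustration):

```python
# Sketch of the statement-processing phases using sqlite3 as a stand-in
# for an Oracle client. The phases map loosely onto the steps in the text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
conn.execute("INSERT INTO emp VALUES (1, 'SMITH')")

cur = conn.cursor()                                     # "open": obtain a cursor
cur.execute("SELECT name FROM emp WHERE id = ?", (1,))  # parse, bind, and execute
rows = cur.fetchall()                                   # "fetch": pull results from the cursor
print(rows)                                             # [('SMITH',)]
```

Note that the select statement's results only become visible after the explicit fetch, mirroring the point above that query results sit in the cursor until fetched.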
After execution, parse information for a SQL statement stays in the shared SQL area for some time. If a user wants to reexecute a statement, the SQL processing mechanism simply jumps back to the execution phase of the previously parsed process. Thus, parse trees are reusable in Oracle. Not only can a process reuse the execution plan it creates in the parse phase of its own execution, it can also reuse any parsed execution plan that is cached in the shared pool. The criteria for reuse of an execution plan are as follows:
|To reuse an execution plan in the shared pool, the current SQL statement must be identical to the already parsed statement, including white space and commas.|
|The execution plan must be in the shared pool at the same time that the current SQL statement trying to reuse it is looking for it.|
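The exact-text-match rule above can be illustrated with a toy cache. This is not Oracle's internal implementation, only a sketch of why character-for-character identity matters: the cache is keyed on the statement text itself, so any difference in whitespace or case produces a miss and a fresh parse.

```python
# Toy model of shared SQL area reuse: plans are cached keyed on the
# exact statement text, so only identical statements get a cache hit.
shared_pool = {}

def parse(stmt):
    """Return (plan, hit) - a cached 'plan' on an exact-text match, else parse anew."""
    if stmt in shared_pool:
        return shared_pool[stmt], True      # reuse: no parse needed
    plan = ("PLAN FOR", stmt)               # stand-in for a real parse tree
    shared_pool[stmt] = plan
    return plan, False

_, hit1 = parse("SELECT * FROM emp")
_, hit2 = parse("SELECT * FROM emp")        # identical text: cache hit
_, hit3 = parse("select * from emp")        # different case: cache miss
print(hit1, hit2, hit3)                     # False True False
```

This is why coding standards that enforce consistent SQL formatting across an application can pay off in shared pool reuse.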
There are two benefits to this setup for the entire database: improved statement performance and reduced memory usage. The performance gain comes from eliminating the time it takes to open and parse a statement that has already been parsed; Oracle builds on its own work as much as possible. The other benefit is that, under the shared model, repeated executions of the same statement take up no additional space in memory. Since there is only a finite amount of memory in the shared pool, the entire database would suffer if every process had to wait for another process to finish in order to use the space in memory.
In addition to defining operations required to obtain data, Oracle has special mechanisms for managing the rapid access of data stored on disk. The first component to the method Oracle uses to manage disk data is the buffer cache, a memory structure in the Oracle instance that stores data blocks that house rows involved in recently executed SQL statements. There are several components to the buffer cache, the main one being the database block buffers. This component consists of a number of buffers, each designed to hold one data block. Another one, called the dirty buffer write queue, is a list of data block buffers containing blocks that contain data changes made by user processes.
The buffer cache improves performance of SQL statement processing by providing a copy in memory of the data being used in the statement. Accessing data stored in memory is faster than accessing information stored on disk, so queries that have the blocks they need in the buffer cache will run much faster than the processes that do not. Once those blocks are in memory, the buffer cache keeps them for a certain period of time before eliminating them. The period of time is determined by a modified "least recently used" or LRU algorithm in Oracle.
Additionally, the buffer cache keeps SQL statements from being I/O bound in many cases. Data changes are made to blocks in memory, which are then held on the dirty buffer write queue until a background process called DBWR writes them to disk. Having this special background process perform all disk writes frees user processes to do other things after making a data change, instead of waiting in line to write the change to disk immediately. In addition, the changed block stays around for other processes to access, potentially improving performance for other statements as well.
To see real performance gains, the buffer cache must be large, particularly if the database itself is large or experiences high-volume or long-running transactions. The number of buffers in the buffer cache is determined by the DB_BLOCK_BUFFERS initialization parameter of the Oracle instance. The size of the database buffer cache is determined by multiplying the number of buffers, DB_BLOCK_BUFFERS, by the size of each buffer, which is the same as DB_BLOCK_SIZE. However, the DBA must be careful when configuring the buffer cache in order to avoid sizing the buffer cache out of real memory, for reasons that will be discussed later.
The server process accesses the database on behalf of user processes to read data from datafiles on disk into the buffer cache in the SGA. When a SQL statement needs to access certain data, Oracle places data into the buffer cache in the following way. If there is an index associated with the table that can be used by Oracle, the blocks for that index will be read into the buffer cache. Oracle then tracks down the appropriate ROWID corresponding to the data needed. As stated earlier, a ROWID is an exact location on disk for a row of data. There are three components of a ROWID: the data block address, the block row number, and the block datafile number. Commit this information to memory with the following acronym: BRF, for block-row-file. Once the ROWID for a block containing table data is identified, that block is read into memory. If many rows are to be read into memory, then all blocks containing those rows are placed into data block buffers of the database buffer cache. Figure 7-2 demonstrates the format for ROWID data in Oracle.
Figure 7-2: ROWID format in Oracle
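As a quick sketch of the block-row-file idea, a ROWID string can be split into its three BRF components. The dotted block.row.file layout and hexadecimal values used here follow the description in the text; treat the exact string format as an assumption for illustration, not a definitive statement of Oracle's ROWID encoding.

```python
# Hedged sketch: pull the three BRF components out of a ROWID string.
# Assumes a dotted "block.row.file" layout with hexadecimal fields.
def split_rowid(rowid):
    """Split a 'block.row.file' ROWID string into its three numeric components."""
    block, row, datafile = rowid.split(".")
    return int(block, 16), int(row, 16), int(datafile, 16)

print(split_rowid("0000000F.0002.0001"))   # (15, 2, 1)
```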
However, Oracle is not always able to bring data into memory efficiently with the help of an index. When there is no index on the table, or when the SQL statement does not make use of the index, Oracle has no choice but to read every data block associated with the table into memory until it finds the blocks containing the data required for the user request. This method for obtaining data is called a full table scan.
Once read, a block will be held in the database buffer cache for a certain period of time. That time period depends on how the block was brought into memory. If the block was brought into memory as part of an indexed search, it will be held in memory until the space is needed by another block and it is the least recently used (LRU) block in the cache. In this way, frequently used blocks stay in the buffer cache for a long time, while infrequently used buffers are eliminated to make way for more block storage. This method is often referred to as the LRU algorithm.
The method used to keep a data block in the buffer cache is slightly different if Oracle must use a full table scan to find the data the user requests. Oracle eliminates those buffers almost immediately after they are read by placing them on the least recently used end of the list of blocks slated for elimination from the buffer cache. Oracle doesn't apply the ordinary LRU mechanism to data blocks brought in from full table scans because doing so would quickly fill the buffer cache with blocks from one table, eliminating blocks from other tables that may be in use by other queries. Without this modified LRU algorithm, full table scans would adversely impact performance not only for the query running the full table scan but for the entire database as well.
What often happens in the buffer cache is that blocks from indexes on commonly accessed tables stay in the buffer cache longest. Oracle's modified LRU algorithm facilitates this usage: index blocks are stored in the buffer cache, and frequent use keeps them there. Since an index block for a commonly used table is likely to be touched frequently by users, the blocks of the index will tend to live in the buffer cache for long periods of time. Storing the blocks of an index in memory is more effective than storing the actual rows of data in memory for two reasons. First, the index has far fewer blocks than the table, minimizing the number of blocks that need to be stored in memory in order to have easy access to the table. Second, each index block stores searchable column values for table rows, plus an address that tells Oracle almost exactly where on disk to look for the full row data. The buffer cache then acts as a "superindex" of the data in the Oracle database.
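The behavior described in the last few paragraphs can be modeled in a few lines. This is a simplified toy, not Oracle's actual code: indexed reads go to the most recently used (MRU) end of the list, while full-table-scan reads are placed at the LRU end so they are the first candidates for reuse, leaving frequently used index blocks undisturbed.

```python
# Simplified model of the modified LRU algorithm: full-scan blocks enter
# at the LRU end and age out quickly; other blocks enter at the MRU end.
from collections import deque

class BufferCache:
    def __init__(self, size):
        self.size = size
        self.lru = deque()              # left = LRU end, right = MRU end

    def read(self, block, full_scan=False):
        if block in self.lru:
            self.lru.remove(block)
            self.lru.append(block)      # a touch promotes the block to MRU
            return
        if len(self.lru) == self.size:
            self.lru.popleft()          # evict the least recently used block
        if full_scan:
            self.lru.appendleft(block)  # full-scan blocks sit at the LRU end
        else:
            self.lru.append(block)

cache = BufferCache(3)
cache.read("idx1")                      # indexed reads
cache.read("idx2")
for blk in ("t1", "t2", "t3"):          # a full table scan of three blocks
    cache.read(blk, full_scan=True)
print(list(cache.lru))                  # ['t3', 'idx1', 'idx2']
```

Notice that after the scan, the two index blocks are still cached; the scan's blocks evicted only each other, which is exactly the point of the modification.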
Oracle uses the server process to move data blocks into the buffer cache in support of select and update statements. Oracle moves data block changes out of the buffer cache by means of the DBWR background process. DBWR is designed to minimize the dependency that database processes have on disk utilization. To understand the service the DBWR provides to all user processes on the database in order to improve performance, consider how the database would behave without having a special process earmarked to do nothing other than perform disk I/O. If there were no process to handle disk writes, then every user process that made changes to data would have to access disk resources individually to write those changes to disk. Since there is only one disk controller available to access the data blocks on any given disk drive, each process that wanted to write data blocks would have to wait in line to use the disk controller to find their data blocks.
This architecture has the potential for bottlenecks unless the data for each user process could be placed on separate disks to relieve contention for the limited disk controller resources. This task is nearly impossible, however, because large numbers of disk controllers usually cannot be connected to one machine, and even if they could, there is really no way to predict accurately how user processes will access data so as to minimize contention for disk resources. Another problem inherent in this "every process for itself" architecture is that frequently run updates would line up to write exactly the same data, over and over again, bumping into one another and creating sticky situations related to read consistency with respect to the data on disk or in memory.
DBWR, then, minimizes the bottleneck or dependency that any process will have on writing data to disk. With one process putting data onto the disk resources, working with a cache of data blocks already stored in memory for easy access, other user processes run faster and complete sooner.
Unless the database buffer cache is large enough to hold every data block in the database, and no data block ever changes, chances are DBWR will be working quite frequently to write changes to disk. DBWR is triggered every three seconds to write changed blocks to disk. This trigger mechanism is called a timeout.
Another event that triggers DBWR to do its job is data change on the system. When a user process issues a data change against the database, the server process finds the data block containing the row to be changed and makes that change to the copy of the block in the buffer cache. Once the block is altered, it is called a dirty buffer. However, DBWR doesn't run right out and save each dirty buffer to disk individually. Rather, the dirty buffers are gathered into the dirty buffer write queue and written to disk together. This way, disk I/O is further minimized, reducing the disk utilization requirements of any process accessing the database. When the number of buffers on the dirty buffer write queue reaches a threshold limit, DBWR flushes the entire queue at once, writing all the buffers to disk.
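The batching idea can be sketched as a toy model: changed buffers accumulate on a queue until a threshold is reached, at which point the whole queue is flushed in a single batch of writes. The threshold value and class names here are illustrative, not Oracle's.

```python
# Toy model of the dirty buffer write queue: buffers accumulate until a
# threshold is hit, then the whole queue is flushed to "disk" in one batch.
class DirtyQueue:
    def __init__(self, threshold):
        self.threshold = threshold
        self.queue = []
        self.flushes = []           # record of batched writes to "disk"

    def mark_dirty(self, buffer):
        self.queue.append(buffer)
        if len(self.queue) >= self.threshold:
            self.flushes.append(list(self.queue))   # one batched write
            self.queue.clear()

dq = DirtyQueue(threshold=3)
for b in ["b1", "b2", "b3", "b4"]:
    dq.mark_dirty(b)
print(dq.flushes, dq.queue)         # [['b1', 'b2', 'b3']] ['b4']
```

The point of the batch is amortization: four dirty buffers produced one disk write episode rather than four.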
DBWR also writes changed blocks to disk at periodic intervals determined by the setup of another major disk resource on the Oracle architecture, called the online redo log. The periodic intervals at which the changed blocks are written to disk are referred to as checkpoints. A more complete discussion of the checkpoint process will appear later in this chapter in the discussion of the online redo logs, but for now it is important to know that the checkpoint process exists, and that at the time of a checkpoint, all data blocks that have been changed since the last checkpoint are written to disk.
In this section, you will cover the following topics related to managing transaction concurrency:
|The log writer process|
|Components and purpose of the redo log|
|The purpose of checkpoints|
|Data concurrency and statement-level read consistency|
In order to ensure the viability of any application using its database management system, the Oracle database has several special design features that allow database recovery to be as thorough and complete as restoring committed transactions up to the moment of failure. The main feature that provides this restorability is Oracle's ability to keep redo logs. Redo logs can be thought of as a scroll upon which every process that makes a data change on the Oracle database must jot a note of key facts about the change it made to the system. If a user process cannot write a redo log entry for a data change it makes, then the transaction involving that change cannot be completed. In addition to writing redo log entries, Oracle allows the DBA to choose whether or not to archive the redo log entries. If the DBA does choose to archive redo logs, then the changes made to the database are stored on an ongoing basis, making recoverability a straightforward matter of applying redo log entries to a restored database, up to and including the most recent redo log held in archive. This makes it possible to restore a database up to the moment in time the failure occurred. These mechanisms also work together to support transaction concurrency in the Oracle database.
Oracle writes redo log entries from memory to disk using the LGWR process. As with DBWR, LGWR is designed to free the user processes in the Oracle database from having to write data to disk whenever they make a change to any information in the database. Oracle forces every process that makes a change to a database data component to write a change entry so that in case of an emergency the changes can be recovered. If not for LGWR, every process would have to write its own redo log entry to disk, causing severe bottlenecks. Instead, LGWR handles all redo log writes to disk. Without a queue of user processes waiting at the disk controller, LGWR can maximize its nearly private usage of particular disk controllers if the only contents of the disk are redo logs.
In addition to writing redo log entries from memory to disk, LGWR plays a critical role in determining when data changes are actually written to the Oracle database. Every Oracle database has at least two redo logs available for the purpose of storing redo log entries. LGWR writes entries to one redo log on disk until that log fills, at which point LGWR switches over to writing redo log entries to the other log. This event is called a log switch. When a log switch occurs, LGWR signals a checkpoint to occur as well. When the checkpoint occurs, DBWR writes the blocks in the dirty buffer write queue to their respective datafiles. In this way, LGWR rather directly determines when DBWR will write database changes to disk. Since a checkpoint must occur at every log switch, LGWR influences the saving of data blocks at least as often as it switches from writing redo entries to one log to writing them to another. Checkpoints can be made to occur at other times as well, as discussed later in this section.
LGWR writes redo log information in the following way. When a transaction executes, it produces information that essentially states the changes the transaction is making to the database. This information is called redo. The redo information is collected in a part of the SGA called the redo log buffer. When the transaction commits, LGWR writes the redo information from the redo buffer to the online redo log.
The components of the redo log architecture are the redo log buffer, the online redo log files, and the LGWR process. The redo log buffer stores redo log entries created by user processes for a short period of time, until LGWR can write more permanent redo log entries to the online redo log on disk. LGWR is the only process on the Oracle database that handles disk I/O related to writing redo log entries. Together, these components comprise the memory structures and process Oracle uses to save change information so that the information can be recovered in the event that the data on a disk becomes unavailable.
Typically, there are at least two online redo log groups available on the Oracle database to which LGWR writes redo log entries. Each redo log in the database consists of one or more files that store the redo entries, called members. In general, it is wise for the DBA to create multiple members for each online redo log group, as each member of an online redo log group contains the entire set of redo entries for the online redo log. Multiplexing, or making Oracle maintain multiple members (ideally on separate disks) for each online redo log group, is an important way to ensure the integrity of the online redo log against problems with disk failure. Again, the DBA should see the logical and physical component here. The logical component is the redo log group, while the physical components are the files, or "members," that comprise the actual redo log. The location of each redo log member and the name of the redo log to which the member belongs is determined by the create database statement, where the redo log names and member names are specified. Information about the physical location of redo log files is also stored in the control file. Figure 7-3 demonstrates the logical and physical view on redo logs.
Figure 7-3: Logical and physical view of redo logs
Several important details surround how the DBA will configure the redo log files of a database. The first (and perhaps most important) detail addresses the importance of having redo logs available if a database disk failure should ever occur. In order to improve the odds of recovery in the event of disk failure, the DBA should configure Oracle to mirror redo logs. Mirroring is the act of keeping multiple copies of the redo information for each redo log, with each member placed on a separate disk so that, in the event of a problem with the disk that holds one copy, the instance can continue running with another copy of the redo log on a different disk. Figure 7-4 illustrates an important point: each redo log member in the redo log group contains a complete copy of the online redo log. Thus, having multiple members in the redo log group keeps multiple copies of the redo log available to LGWR. This process is also called multiplexing.
Figure 7-4: Mirroring online redo logs
If mirrored online redo log members are not kept on separate disks, disk failure poses significant risk to the instance. If the disk containing the redo log that LGWR is currently writing experiences media failure, LGWR will not be able to write redo log entries and the Oracle instance will fail. LGWR must write redo log entries to disk in order to clear space in the redo log buffer so that user processes can continue making changes to the database; since LGWR cannot clear that space in memory if it cannot write the entries to disk, the entire instance fails.
The placement of redo log members on separate disks benefits the database in other ways. When archiving is enabled, and the DBA has specified that Oracle should run the ARCH process to automatically move archived redo logs to an archive destination, there can be contention at the time that a log switch occurs. If all members of the online redo log are on the same disk, ARCH and LGWR will contend for the disk controller resource because both processes need to execute activities on files on the same disk. With redo log members and the archive log destination spread across different disks, there is little possibility for ARCH and LGWR to contend because ARCH can work on what it needs to do using one disk while LGWR continues the switchover on another disk.
In order to improve the recovery time of a database, the size of each redo log member should be as small as possible without causing excessive performance degradation. As stated before, when a redo log fills, LGWR conducts a log switch, moving from writing redo entries to one redo log to writing them to another. At the time of a log switch, a checkpoint occurs on the database, and DBWR writes changed data blocks to disk. These activities must occur in order to make sure that all data changes are recorded on disk, both in live datafiles and in redo log files, so that the DBA can recover in the event of a hardware problem. However, there are performance implications associated with frequent saves. If redo log members are too small, LGWR spends excessive time switching from one log to another. In heavy transaction environments, the redo log buffer may fill to capacity during the switch, causing user processes to wait for space in the buffer until LGWR starts clearing redo entries out to disk again. Also, the more time DBWR spends writing updated blocks to disk because of frequent checkpoints, the less time it has to retrieve blocks containing data requested by other user processes. In other words, saving data and redo entries to disk and avoiding the performance degradation these saves cause are both important, giving rise to a trade-off between online transaction performance and performance in the event of a database recovery.
The number of online redo log groups is also a factor in smooth redo log operation. Oracle sets the minimum number of redo log groups at two. LGWR writes to one group at a time until that group fills. When the group fills, a switch occurs and LGWR begins writing redo log entries to the other group. However, several factors may prevent LGWR from starting to write to the next redo log group. One is that the other redo log group has not yet been archived. Since there are only two groups online by default, LGWR recycles each group at switch time, overwriting the redo log entries in the other group. If archiving of redo logs is enabled, this overwriting cannot happen until ARCH has archived the redo log group. Switching may occur often on systems where redo log groups are too small or where there is a high transaction volume on the database. If switches occur frequently, the potential for contention between ARCH and LGWR over archiving and recycling the online redo log increases.
Two solutions exist for this problem. One is to increase the size of each redo log member. With larger members, switches occur less frequently because each redo log takes longer to fill. However, there is a cost to be paid when the DBA must attempt a recovery using large redo logs: it takes longer to restore the database when the redo logs are fewer and larger. The alternative way to reduce the chance that LGWR will have to wait for an online redo log when a switch occurs is to increase the number of online redo log groups available on the Oracle instance. This can be set up at database creation time, or new redo log groups can be added afterward with the alter database add logfile group statement. The only restrictions on this option are two settings established by the create database command, which limit the number of redo logs that can be associated with a database. These settings are maxlogfiles, which limits the number of redo log groups, and maxlogmembers, which limits the number of members, or copies, in each group. The only way to raise these limits is to re-create the control file. In emergency situations, and only for the lifetime of the current instance, the DBA can decrease the number of redo log files available to the instance by changing the LOG_FILES initialization parameter. However, this parameter offers no help when the goal is to add more redo log groups than maxlogfiles allows.
The checkpoint process has been mentioned several times in the discussion of transaction concurrency and the redo log architecture. During the course of regular database operation, users make changes to data. Blocks that have been changed while in the buffer cache are placed on the dirty buffer write queue, a special part of the buffer cache where changed buffers are earmarked for a database save. A checkpoint is a point in time when DBWR writes all changes made to data blocks in memory to disk. The frequency of checkpoints is determined by many factors, one of which is directly tied to the LGWR process. When LGWR fills an online redo log with redo entries from the redo log buffer, LGWR performs a log switch; that is, it stops writing redo entries to the log it just filled and begins writing them to another redo log group. During this log switch, a checkpoint occurs: LGWR signals DBWR to write the blocks in the dirty buffer write queue to disk. Checkpoints, then, are points in time at which changes to data blocks are written to disk.
Checkpoints occur at least as frequently as log switches. However, they can (and often should) occur more frequently. If a database instance fails, the changes in dirty buffers that were not written to disk must be recovered from the redo logs. Although this happens automatically at startup through Oracle's instance recovery methods, handled by the SMON background process, the process may take a long time if checkpoints happen infrequently and transaction volumes are large. The methods used to set checkpoints more frequently than log switches should be used only if the redo logs are sized to be very large. Typically, smaller redo logs fill faster, so checkpoints occur more often anyway when using small redo logs. But if the redo logs must be large, then the DBA can specify more frequent checkpoints by using the LOG_CHECKPOINT_INTERVAL and/or LOG_CHECKPOINT_TIMEOUT initialization parameters in the init.ora file.
The two parameters that can make checkpoints occur more frequently than log switches reflect two different principles on which checkpoint frequency can be based: volume-based checkpoint intervals and time-based checkpoint intervals. The LOG_CHECKPOINT_INTERVAL parameter sets checkpoint intervals on a volume basis. The checkpoint happens when LGWR has written a certain volume of redo entries to the redo log; when the redo log fills to a certain point, the checkpoint occurs, and all the database blocks sitting on the dirty buffer write queue are written to disk. The principle behind volume-based checkpoint intervals is that during periods of high transaction volume, the dirty buffer write queue should be flushed more often, because more changes are being made and thus more redo entries are being written to the online redo logs. During periods of low transaction volume, fewer redo entries are written, the LOG_CHECKPOINT_INTERVAL threshold is crossed less frequently, and as a result there are fewer checkpoints and fewer writes to disk. Volume-based checkpoints with LOG_CHECKPOINT_INTERVAL work on much the same principle as simply reducing the size of the online redo logs so that checkpoints occur at more frequent log switches. However, setting LOG_CHECKPOINT_INTERVAL so that checkpoints occur when threshold capacities of the redo log are exceeded offers an advantage over reducing the redo log size: the additional overhead of a log switch, such as archiving the redo log, is avoided.
LOG_CHECKPOINT_INTERVAL is set as a numeric value representing the number of operating system blocks LGWR should write to the redo log after a log switch before a checkpoint should occur. This value is determined in the following way. If the DBA wants checkpoints to occur only as frequently as log switches, the value set for LOG_CHECKPOINT_INTERVAL is set to a value higher than the size of the redo log. If the DBA desires to set the LOG_CHECKPOINT_INTERVAL to effect a checkpoint at various thresholds in the writing of the current redo log, then the DBA must determine a few things. First, the DBA must divide the size of the redo log in bytes by the size of each Oracle block as specified at database creation with the DB_BLOCK_SIZE initialization parameter. Once this value is established, the DBA should then determine how frequently a checkpoint should occur between log switches, when checkpoints automatically occur. The last fact--and the fact that is most operating system specific--is that the DBA must find out how large the operating system blocks are for the machine hosting Oracle. After finding out the size of each operating system block, the DBA needs to figure out how many operating system blocks fit into one Oracle database block. Only when these values are obtained can the DBA specify the value for LOG_CHECKPOINT_INTERVAL using the formula in Figure 7-5.
Figure 7-5: Use this formula to calculate checkpoint interval
Using the formula above, assume that the DBA wants five checkpoints to occur on the database between log switches. The redo log is 1M in size, and the value set for DB_BLOCK_SIZE at database creation is 2K. Therefore, the size of the redo log in Oracle blocks is 1,024,000/2,048, or 500 blocks. Assume that the size of an operating system block in this case is 512 bytes, which is not uncommon for most UNIX environments. In this example, there are four operating system blocks in every Oracle block (2,048/512 = 4). With these values, the recommended setting according to the formula for LOG_CHECKPOINT_INTERVAL is (500/5) * 4, or 400.
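The worked example above translates into an init.ora entry like the following (the value 400 comes from the calculation just shown; any other entries in the parameter file are unaffected):

# init.ora fragment: checkpoint after LGWR writes 400 OS blocks to the redo log.
# 400 = (500 Oracle blocks per log / 5 checkpoints) * 4 OS blocks per Oracle block
LOG_CHECKPOINT_INTERVAL = 400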
The other principle for identifying checkpoint times is the time-based checkpoint interval. Although the volume-based approach fits well with the practice of having checkpoints occur at log switches, a time-based checkpoint interval is far simpler to configure. The LOG_CHECKPOINT_TIMEOUT initialization parameter is used for this configuration. The value set for LOG_CHECKPOINT_TIMEOUT is the number of seconds that will transpire before another checkpoint occurs. The principle behind this method for specifying checkpoints is that the checkpoint process should occur at regular time intervals.
One concern the DBA may have when specifying checkpoints to occur at regular intervals is that a checkpoint will always occur at the time of a log switch, whether the switch itself happens at a regular time interval or not. To avoid a situation where a log switch causes a checkpoint immediately before or after one triggered by LOG_CHECKPOINT_TIMEOUT, the DBA should determine the average amount of time it takes for the redo log to fill, and then specify a time interval that is some fraction of that average, based on how many checkpoints the DBA wants to occur, on average, between the checkpoints that automatically accompany log switches.
To figure out the time it takes for a redo log to fill, a special file can be used. The special file used is called a trace file. Every process in the Oracle database, including all background processes and user processes, can generate a trace file that records the activities of the process. In order to determine the amount of time it takes for the transaction entries on the Oracle database to fill a redo log, the DBA can review the trace file associated with the LGWR background process to determine the time periods between log switches. To disable time-based checkpoints, the DBA should set the LOG_CHECKPOINT_TIMEOUT to zero (0).
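For example, if the LGWR trace file shows that a redo log fills in roughly 30 minutes on average and the DBA wants about two time-based checkpoints between log switches, a setting along the following lines could be used (the 600-second value is purely illustrative):

# init.ora fragment: time-based checkpoint every 600 seconds (10 minutes)
LOG_CHECKPOINT_TIMEOUT = 600
# Setting LOG_CHECKPOINT_TIMEOUT = 0 disables time-based checkpoints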
Finally, there may be instances when the DBA wants to specify a checkpoint to occur. There are two ways to make a checkpoint occur. One is to force a log switch, and the other is to force a checkpoint. Both activities require the DBA to issue an alter system command. For a checkpoint to occur as part of a log switch, the option specified with the alter system command is switch logfile. For checkpoints to occur by themselves, without a log switch, the option specified with the alter system command is checkpoint. In this case, as in the case of any checkpoint that occurs without a corresponding log switch, that checkpoint is referred to as a fast checkpoint, while any checkpoint that involves a log switch is called a full or complete checkpoint.
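The two commands just described look like this when issued from a DBA session:

ALTER SYSTEM SWITCH LOGFILE;   -- forces a log switch; a checkpoint occurs as part of the switch
ALTER SYSTEM CHECKPOINT;       -- forces a checkpoint by itself, without switching logs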
Data concurrency and statement-level read consistency are provided by Oracle transaction processing. A transaction is a group of data change statements that are treated as one logical unit of work. For example, imagine the storage of an invoice in a database. The invoice itself, containing the name and address of the purchaser, the total dollar amount, and the person's charge account number, is stored in one table, while an itemized breakdown of goods purchased is stored in another. Figure 7-6 illustrates this configuration. The figure displays a typical parent-child table relationship used to map a one-to-many relationship of invoices to elements of the invoice.
Figure 7-6: When two tables represent one logical data object
Say, for example, that a user must eliminate invoice #2 from the system. She issues the following statement to eliminate the data from INVOICE_ELEMENT. This is all well and good, except that if other users see the data in this intermediate state, they might think there was a problem with invoice #2 and attempt to add the invoice elements back in before the first user can issue the delete of invoice #2 from the INVOICE table.
DELETE FROM invoice_element
WHERE inv# = 2;
The concept of transaction processing can eliminate this problem. Just as the data in both tables is considered to be components in one logical object--namely an invoice--the removal of information from two tables of that logical object should also be treated logically as one act. This act can be called a transaction. Consider the following pseudocode:
DELETE FROM invoice_element
WHERE inv# = 2;
DELETE FROM invoice
WHERE inv# = 2;
In this situation, the act of deleting from two tables is considered one event, or a transaction. The beginning of the transaction may consist of some mechanism ensuring that no other user may change this data, or perhaps even view the data, before the transaction is complete. The mechanism used for this act in place of a generic "begin transaction," is a lock. The process listed above may acquire a lock on the rows or the table before the delete commands are issued in order to prevent a misunderstanding over the status of the data.
The second point to be made here is that at any point during the transaction, the changes made can be discarded in favor of the way the data looked before the transaction began. Oracle does not save changes to the database immediately. Instead, the changes are marked as made, but the option is given to discard them until the user issues a command to explicitly save the changes to the database. The mechanism through which Oracle explicitly saves data changes is called a commit, while the mechanism through which Oracle discards a data change is called a rollback. There is an associated concept in committing and rolling back transactions called a savepoint, whereby a transaction can be divided into portions for the purpose of committing or rolling back only part of a transaction at a time.
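Putting these pieces together, the invoice deletion can be sketched as a single transaction with a savepoint. The table and column names follow the earlier example; the savepoint name is illustrative:

DELETE FROM invoice_element
WHERE inv# = 2;
SAVEPOINT elements_deleted;    -- a point to which the transaction can be partially rolled back
DELETE FROM invoice
WHERE inv# = 2;
COMMIT;                        -- makes both deletes permanent

Alternatively, before committing, the user could issue ROLLBACK TO SAVEPOINT elements_deleted to discard only the second delete, or a plain ROLLBACK to discard the entire transaction.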
Transactions function as the cornerstone of data concurrency. In concurrent data change environments, locks and the commit and rollback statements provide the mechanisms necessary to allow for the logical grouping of database change activities that are transactions. With the concept of a transaction in place, the user is then able to earmark data for their exclusive access and change, and also may decide when the changes they make are finished and therefore ready for viewing by the rest of the system.
The final area of discussion concerns rollback segments and the idea of read consistency. By default, when a statement executes, Oracle ensures that the data viewed, retrieved, and changed by the statement will be consistent from the time the statement starts to the time the statement finishes. This is called statement-level read consistency. The same ability must exist at the transaction level in order for multiple statements to act as components of some other atomic data operation. Recalling the example from earlier, say that there are transactions on the system operating on the basis that invoice #2 exists in the database. During the course of activity in those other transactions, some records are written to other areas of the system that require data from the rows associated with invoice #2. Meanwhile, the user from before is on the system, currently running a transaction deleting invoice #2. In this situation, the user deleting invoice #2 may commit her transaction before the other users of invoice #2 data commit their own transactions. However, even though the data deletion is saved to the database, the old data must still exist somewhere so that the transactions operating on the assumption that the rows corresponding to invoice #2 exist can have transaction-level read consistency. This read consistency at the transaction level is provided by a disk resource on the Oracle database called a rollback segment. The rollback segment holds a record of all uncommitted transactions and changes in the process of committing in order for other statements to rely on a read-consistent view of the database. The locking mechanisms, commits, rollbacks, and the rollback segment disk resource all combine to make transactions possible on the Oracle database, which in turn allows for data concurrency and statement- and transaction-level read consistency within and between transactions.
In this section, you will cover the following topics related to managing the database structure:
|Database storage allocation|
|Customizing the database structure|
|Preparing necessary tablespaces|
|The different types of segments|
With the discussion of how data is changed in the Oracle database, the DBA should turn attention to understanding how data is stored in the Oracle database. The various ways disk resources can be viewed in the Oracle architecture were introduced as part of the last chapter. Those two views are the logical view and the physical view. The physical view of the database consists of the physical files, such as datafiles, parameter files, control files and redo log files, that house Oracle data in Oracle-readable formats. The logical view of disk resources is the perspective of the logical objects being stored. Some examples of these objects are segments and extents that comprise tables, indexes, and rollback segments. Another logical database object that is available for data storage is the tablespace. Each of these logical objects corresponds in some way to a physical object. For example, the tablespace--a logical disk resource--consists of one or more datafiles that store tablespace data on disk. Other examples are the segments and extents that hold table data. Each segment and extent consists of data blocks that store table, index, or rollback segment data. The blocks, then, are components of the physical datafiles that store the segments and extents of a tablespace. A full treatment of these items appears in this section.
Each database has space allocated to it for storage. That space is determined either in the create database statement for datafiles and redo logs or in the alter database statement where new datafiles or redo logs are created. Additionally, the objects in a database have space allocated to them for storage too, as part of the create object or alter object statement that brings the object into being. The size of an object or file is specified either in kilobytes or megabytes. Storage allocation in Oracle follows a simple underlying idea: Oracle allocates large blocks of a machine resource like memory or disk space, and manages the utilization of that resource within the architecture of the Oracle database itself. This design benefits both the DBA and the Oracle system. By simply obtaining large blocks of system resource from the machine when a database is created, Oracle minimizes its dependency on any particular operating system or machine configuration and sticks with some basic features that all machines provide, like memory, disk space, and processing power. All the rest of the resource management is handled by Oracle. In addition, Oracle avoids the "nickel and dime" effect of continuous creeping growth by getting all the space it needs for the database or an object within it, then filling that space as needed.
Logical database objects like tables and indexes generally reside within physical database objects like datafiles. Their existence and utilization are managed more directly by the Oracle architecture. As such, logical database objects generally have more extensive storage parameters for DBAs to configure and tune the objects in ways that allow applications to use the disk resources effectively. Some of the areas that can be configured in a logical database resource include the following:
|The maximum growth size of the object|
|Space management within the data block|
|Segment/extent allocation for the database object|
|How Oracle manages free space in the object|
|Default parameters for objects created within a tablespace|
The basic object of storage within the logical disk resource usage architecture is the data block. Blocks are collected into segments, which can be used to build tables, indexes, and rollback segments. When the data stored in a table exceeds the capacity of the underlying segment that houses the table, the table can obtain additional data blocks collected into a logical storage object called an extent. Segments and extents that house data blocks storing table, index, and rollback segment data are collectively stored in tablespaces, which are the largest logical units of data storage on the Oracle database.
The Oracle database offers a great deal of flexibility in how it can be configured to meet the storage needs of nearly every organization. There are some fundamental concepts behind customization of the database that the soon-to-be DBA must understand. Every Oracle database consists of several components, the first of which is the host machine. Ultimately, it is the hardware constraints of the system running Oracle that will largely determine what the Oracle database is capable of handling. For example, a machine with only one hard drive with 2 gigabytes of space will not store a 10G database, no matter how well tuned the Oracle instance may be. Customizing the Oracle database to meet the needs of the organization therefore begins with customization of the hardware running Oracle. Some elements of a hardware platform that could be customized include the following:
|Available memory on the system, including cache memory|
|Number of processors, or CPUs, on the machine hosting Oracle|
|Number of disk drives available on the machine|
|Offline storage capability; for example, tape drives or optical drives|
While it is true that every machine hosting an Oracle database should have the above components available, the quality and/or quantity of devices of each type offers ample opportunity for customization. Of particular interest to most DBAs should be the number of available CPUs and disk drives on the machine hosting Oracle, as well as the available memory on the system. The daily production performance of the Oracle database depends heavily on each of these components being powerful and, if possible, plentiful. Much of the DBA's initial work on database installation and customization involves determining how to fit Oracle's need for fast, noncontentious access to its disk resources around the reality of the available drives on the hardware that hosts Oracle. Within reason, more is better in this area of customization.
Of great importance are the offline storage devices that will assist the DBA and/or system administrator in backing up and recovering the Oracle database. Although the performance offered by the world's greatest tape drive may not be noticed by the vast majority of users of the Oracle system in day-to-day production life, everyone will notice how slow recovery is the day a disk drive crashes if a poor choice was made in this department. The recoverability of the Oracle database is the ultimate test of how well an organization can satisfy its own need for electronic storage and retrieval of its mission-critical data. Besides, as in most support roles, the DBA gets no notice when everything related to the database runs smoothly. Superhuman recovery in emergency situations, however, is expected of every DBA, and nothing less.
Once a good machine with multiple processors to speed user access and multiple disk drives to prevent excessive dependence on any one drive is purchased, the DBA customizes the Oracle database around the hardware reality of its host machine. There is an entire school of thought on the physical layout of any Oracle database and its relationship to the performance of the Oracle database. Much of that school of thought appears on OCP Exam 4, Performance Tuning. For now, the DBA should be aware of the fact that having multiple disk drives available on the database means that the DBA should place each physical object--namely datafiles, redo logs, control files, parameter files, application software, archive log destination, and EXPORT dump destination--in such a way as to minimize the possibility that a failure of any one drive causes the failure of the entire database.
Proper use of the redo logs is key to the success of database recovery, as well as the performance of the database when recovery is not taking place. Loss of the drive that holds the only copy of the redo logs becomes a serious problem when a drive holding production data fails later. As mentioned before, mirroring (multiplexing) the redo logs spares the database the risk of losing redo information to a single disk failure.
If possible, depending on the availability of multiple disk drives, the DBA may also want to store multiple copies of the control files and parameter files to minimize the risk of these important physical disk resources being lost by the failure of a disk drive on the database. Oracle recommends that the DBA maintain at least two control files for every Oracle database, because loss of the only control file or parameter file means that the instance will be unable to start. Having copies of the control file and parameter file on different disks available to the database will minimize the possibility that any one disk failure will kill the system. A duplicate copy of the parameter file can be created with the use of operating system commands for copying flat files. To create additional control files for the database, the DBA can shut down the instance, copy an existing control file to a new location using operating system commands, add the new filename to the CONTROL_FILES initialization parameter in init.ora, and restart the instance.
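After copying the control file, the resulting init.ora entry might look like the following (the directory paths and filenames are illustrative; the point is that each copy sits on a different disk):

# init.ora fragment: two control files maintained on separate disks
CONTROL_FILES = (/disk01/oracle/ctrl01.ctl, /disk02/oracle/ctrl02.ctl)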
Finally, in terms of preserving backups in any database backup situation, it is important that the DBA copy any and all backups made to the backup disk destination off the machine entirely. This idea of backing up the database to offline storage media allows the DBA the peace of mind of knowing that the database is recoverable even if the entire machine fails. Making copies of backup media will also ensure a reduced dependency on any particular backup for the success of database recovery.
After these physical considerations related to failure, backup, and recovery are made, there are other aspects of database usage of its hardware that the DBA can address to achieve better performance. These are areas for customization of the database as well. A discussion of this area of customization must be prefaced with a discussion of efficient usage of logical database resources in light of constraints on physical resources.
With the idea that logical database structures can and should be built on the constraints of physical resources, the following discussion is offered on how to identify and prepare the logical storage components of the Oracle database.
When an Oracle database is installed, one tablespace is created to store a few logical database objects that Oracle needs to function: the initial rollback segment of the database and all data dictionary objects. This tablespace is called SYSTEM. It is an incredibly important resource on the database, and its integrity should be guarded carefully. As with any tablespace, the SYSTEM tablespace can contain any type of segment or extent used for any database object created. However, putting every database object that the database applications will use in the SYSTEM tablespace leaves the objects that must be in that tablespace vulnerable to the storage demands of those other objects. It is important to remember that if any damage should come to the SYSTEM tablespace, the database will need to be recovered.
By taking database objects and placing them in other tablespaces designed to fit their storage needs, the DBA prevents the possibility of headaches related to storage needs. One of the first tasks the DBA should plan is how to store database objects like indexes, tables, temporary segments, and rollback segments. Each of these objects presents different needs that will be discussed in a moment. At this point, it is sufficient to say that these different types of database objects do exist, and that their storage needs are different. Therefore, the DBA should create appropriate tablespaces for each of these objects. A tablespace is created with the following command on the Oracle database:
CREATE TABLESPACE temp_01
DATAFILE 'db01_temp01.dbf' SIZE 10M
DEFAULT STORAGE (
INITIAL 100K NEXT 100K
MINEXTENTS 1 MAXEXTENTS 450
PCTINCREASE 10 );
Notice that in the definition of the logical tablespace resource, there is a specification of a physical resource for that tablespace, a datafile. A size value is provided at tablespace creation as well, which specifies the size of that datafile and thus the initial size of the tablespace. Additionally, there are default storage parameters associated with the tablespace. These default storage parameters are used for each database object placed in the tablespace that does not have storage parameters set explicitly by the database object create statement. When identifying default storage parameters for a tablespace, the DBA should attempt to set parameters that work well for a certain type of database object; within that type of object, the parameters set will be general guidelines for storage allocation.
The DBA must recognize that different types of objects need different types of tablespaces to fit them. In order to understand the different types of tablespaces, the DBA must understand the different types of objects that a tablespace may store. Since every logical database object used by a database application will ultimately consist of segments and extents, the discussion will focus on the different types of segments available on the Oracle database.
The first, and perhaps most obvious, type of segment is the table segment. Table segments store data from tables. Each segment contains data blocks that store the row data for that table. How a table fills is determined largely by the role the data in that table supports. For example, if a table supports an application that accepts large volumes of data insertions, then the segments, extents, and blocks that comprise that table will fill at a regular pace and rarely, if ever, reduce in size. The DBA managing that segment will want to create a structure conducive to regular growth, paying attention to the ever-increasing space requirements of that system and to how the table will handle growth. Growth in a data segment is generally handled with extents. If the segment runs out of space to handle the size of the table, then the table will acquire an extent from the remaining free space in the tablespace. If, however, the table is designed for storing validation data, then the size requirements of the table may be more static. In this case, the DBA may want to focus on ensuring that the entire table fits comfortably into one segment, thus reducing the potential fragmentation that extent allocation may cause. A more complete discussion of extents and fragmentation will appear later in the chapter.
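A growth-oriented table segment of the kind described above might be created with explicit storage parameters along these lines. The table name and columns echo the invoice example from earlier; the tablespace name data_01 and the sizes chosen are illustrative assumptions:

CREATE TABLE invoice
( inv#      NUMBER(10) PRIMARY KEY,
  buyer     VARCHAR2(40),
  total_amt NUMBER(10,2) )
TABLESPACE data_01
STORAGE ( INITIAL 10M NEXT 5M
          MINEXTENTS 1 MAXEXTENTS 100
          PCTINCREASE 0 );

Setting PCTINCREASE 0 here keeps each new extent the same size, which suits steady, predictable growth; a static validation table might instead use a larger INITIAL so the whole table fits in one segment.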
Another type of segment is the index segment. Similar to table segments, index segments can be classified according to the type of role they support in the database. If the table with which the index is associated is designed for volume transactions, as in the example above, then the index, too, should be planned for growth. What does an index consist of, exactly? The standard types of indexes available on Oracle databases prior to Oracle8 are binary search tree (B-tree) indexes and bitmap indexes. Indexes are much smaller than the tables they provide location data for, but the principles behind their growth are the same. If an index is associated with a validation table, then the size of the index may be relatively static.
Rollback segments offer a bit of a departure from the types of segments just discussed. The purpose of a rollback segment is to store change data from uncommitted transactions for the purpose of read consistency and transaction concurrency. In the previous two examples, the segments used to store data for tables and indexes are generally for one-time use; unless the data is deleted from the table, the storage allocated to the table will never be reused. In contrast, once a user process has written the changes it intends to make and commits the transaction, the space that the uncommitted transaction information held in the rollback segment is no longer needed to hold that data. Oracle's rollback segment architecture is designed to allow the rollback segment to reuse that space at a later time. A rollback segment may have a few extents allocated to it in order to store uncommitted transactions. As the number of uncommitted transactions rises and falls, so too does the number of extents allocated to the rollback segment. Where possible, the rollback segment will try to place uncommitted transaction data into the extents already allocated to it. For example, if the entire initial extent of a rollback segment contains data from committed transactions, the rollback segment will reuse that extent to store new data from uncommitted transactions.
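A rollback segment reflecting these principles might be created as follows. The segment name rbs01, the tablespace name rollback_01, and the sizes are illustrative; note that rollback segments require at least two extents at creation and accept the optimal clause (discussed later in this chapter) rather than pctincrease:

CREATE ROLLBACK SEGMENT rbs01
TABLESPACE rollback_01
STORAGE ( INITIAL 500K NEXT 500K
          MINEXTENTS 2 MAXEXTENTS 100
          OPTIMAL 2M );

The OPTIMAL setting lets the segment shrink back toward 2M after a spike in uncommitted transaction volume forces it to acquire extra extents.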
Another type of segment available in the Oracle architecture is the cluster segment. Cluster segments are designed to support the use of clusters on the database. A cluster is a physical grouping of two or more tables around a common index. This grouping is done to enhance performance on searches that perform joins on the two tables. Sizing of cluster segments should be performed carefully, as each segment will essentially be storing data from two different tables in each block. A more complete discussion of cluster segments will appear in the next chapter.
The final type of segment to consider is the temporary segment. True to its name, the temporary segment is allocated to store temporary data for a user transaction that cannot all be stored in memory. One popular use for temporary segments in user processes is for sorting data selected into a requested order. These segments are allocated on the fly, and dismissed when their services are no longer required. Their space requirement is marked by short periods of high storage needs followed by periods of no storage need.
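On Oracle releases that support it (7.3 and later), a tablespace can be dedicated to this kind of sort activity so that permanent objects cannot be placed in it. Using the temp_01 tablespace created earlier as the example:

ALTER TABLESPACE temp_01 TEMPORARY;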
Each of these segment types presents very different, and often conflicting, storage needs on a tablespace that might house them. At this point, the DBA should understand why it is a bad idea to place all database objects in the SYSTEM tablespace. Instead, the DBA should create different tablespaces designed to hold each of the various types of segments and extents, so as to keep them separated from one another and to allow storage allocation appropriate to the needs of each type of segment.
In this section, you will cover the following topics related to managing storage allocation:
|Allocating extents to database objects|
|Database storage allocation parameters|
|Using space utilization parameters|
|Displaying database storage information|
Storage allocation is an ongoing maintenance task in the Oracle database. Tables and indexes will grow, and more space will be required to store them. The ongoing nature of storage allocation is the topic of the final areas of discussion in this chapter. Once the physical storage requirements of a system are determined in the configuration of physical database disk resources like datafiles and redo logs, the remaining maintenance activity is to allocate additional storage space to the logical structures, such as adding datafiles to tablespaces or obtaining extents for tables, indexes, or rollback segments.
Database objects are stored in segments of finite size. As new rows are added to tables, the space of the segment is used to store that new data. After some time, the segment will fill. At the time a new piece of data must be added to the table, if the segment storing the table data is full, the table must allocate a new extent to store that data in. Figure 7-7 illustrates the situation where extents are acquired on an Oracle database. When an object requires a new extent, the size of the space obtained for the database object depends on the storage parameters set for the database object or for the tablespace containing the object. For reference, those parameters are listed in the next discussion.
Figure 7-7: Acquiring extents
A new extent will be acquired for a database object only if there is no room in any of the current extents for the data being inserted or updated. Once acquired, an extent is never relinquished unless the DBA reorganizes the table manually with the use of database tools like EXPORT and re-creates the object with only one extent. Oracle bases the size of the second extent on the value in the next clause. For the third extent and beyond, the size of each new extent is the size of the previous one increased by the percentage specified in the pctincrease clause. If the DBA would like a different size for the next extent than the one that will be calculated, the DBA can issue the alter object statement to change the value specified for the next storage clause, substituting object with the type of the database object in question. The next clause informs Oracle of the size it should use for the next extent it allocates. Another way the DBA can manipulate the size of subsequent extents is to change the pctincrease clause, which changes the percentage by which each newly allocated extent grows over the size of the last one.
ALTER TABLE employee
STORAGE (NEXT 10M);
ALTER TABLE employee
STORAGE (PCTINCREASE 50);
There are several different storage parameters associated with logical database objects. These parameters govern how the object manages segments and extents. Other parameters relating to how a database object manages data at the more granular block level will be discussed later in the treatment of each database object. Each tablespace can be created with default values set for each of these parameters. If a database object is created with no value explicitly named for any or all of the storage parameters, the storage parameter for the object will be whatever the default value for that parameter is in the tablespace.
|initial||Segment storage clause that determines the size (either in kilobytes or megabytes as determined by specifying K or M) of the initial extent comprising the database object.|
|next||Segment storage clause that determines the size (either in kilobytes or megabytes as determined by specifying K or M) of the next extent allocated to the database object; for a newly created object, this is the second extent.|
|minextents||Segment storage clause that determines the minimum number of extents a database object may have.|
|maxextents||Segment storage clause that determines the maximum number of extents a database object may have. The special keyword unlimited can be used to allow an unlimited number of extents for the object.|
|pctincrease||Segment storage clause that specifies the permitted percentage of growth of each subsequent extent allocated to the database object. For example, if a table is created with initial 1M, next 1M, minextents 2 and pctincrease 20 storage clauses, the third extent created for this object will be 1.2M in size, the fourth will be 1.44M in size, etc. Oracle rounds up to the nearest block when use of this option identifies an extent size that is not measurable in whole blocks. The default value for this option is 50 percent. Note: This option is NOT available for rollback segment creation.|
|optimal||Segment storage clause that specifies the optimal size the rollback segment should shrink back to after extending; Oracle deallocates extents to return the rollback segment to this size. Note: This is ONLY available for rollback segment creation.|
|freelists||Database object storage clause that specifies the number of lists in each freelist group that Oracle will maintain of blocks considered "free" for the table. A free block is one that either has not yet filled to its pctfree threshold or has fallen below its pctused threshold. A more complete discussion of pctfree and pctused will appear in the next chapter.|
|freelist groups||Database object storage clause that specifies the number of groups of freelists for database objects that Oracle will maintain in order to know which blocks have space available for row storage. Note: this option is used in conjunction with the Parallel Server Option.|
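The storage clauses above can be combined at object creation. The following sketch creates a table using the example values from the pctincrease discussion above (the EMPLOYEE table and its columns are illustrative, not from a real schema):

```sql
CREATE TABLE employee
( empid    NUMBER(10),
  lastname VARCHAR2(25) )
STORAGE ( INITIAL     1M    -- first extent is 1M
          NEXT        1M    -- second extent is 1M
          MINEXTENTS  2     -- allocate two extents at creation time
          MAXEXTENTS  100   -- never allocate more than 100 extents
          PCTINCREASE 20 ); -- each subsequent extent is 20 percent larger
```

With these values, the third extent Oracle allocates would be 1.2M and the fourth 1.44M, as described above.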
One critical way the DBA can limit the space usage of a particular database object is to restrict the number of extents it may allocate. For example, an object limited to a maximum of 20 extents is far less likely to exceed its intended size than one allowed a maximum of 100 extents. Another possibility for limiting space usage is to set the tablespace's default storage parameters to appropriate values and use those defaults for every object created in the tablespace by omitting explicit storage parameter values from the object creation statement. This method consolidates all values specified for storage, and it also allows the DBA to set standards for database object space allocation.
The best approach for designing storage on a database object is to give it fewer, larger extents. There is some overhead involved in allocating extents, and it can be minimized by reducing how frequently a database object needs to extend. When extents are larger, Oracle can place more data in them, which in turn reduces the need to extend. Frequent allocation and deallocation of extents also tends to fragment a tablespace, which causes problems for the DBA when the tablespace requires reorganization. Although there are tools available for that task, it is time-consuming. Ultimately, larger extents are easier to manage.
A final area of space management that can be used to limit the amount of space allocated to the database object is limiting the percentage increase for each extent allocated to it. Specifying a percentage increase for each additional extent has both positive and negative effects on the maintenance burden of database object storage management. Consider the following example. Table EMP has been created with a percentage increase of 50 percent on each extent allocated to it. The first extent is 1M, which fills quickly as employees are added to the company. The next extent is 1.5M. If the rate at which records are added to the database stays constant, it will take longer for the second extent to fill than it took the first. Each extent allocated will be 50 percent larger than the one before it, so the third extent will have 2.25M allocated to it and will take even longer to fill. The benefit of specifying a percentage increase in this situation is that fewer extents (three) are allocated to store 4.75M of data. If no percentage increase had been permitted, the same data would have required five 1M extents.
However, consider the following situation. The size of EMP has stabilized, with only a few rows needing to be added monthly. The size of the table has reached the 4.75M capacity of the three extents, but only a few more rows need to be added. With the percentage increase specified for each extent allocated, the next extent will be 3.375M. However, most of that space will be empty, as only a few rows will occupy that final extent allocated to the table. In some cases, specifying a percentage increase can cause too much space to be allocated to a database object. Of course, the only way a DBA could know that the high percentage increase was no longer necessary would be to carefully monitor the database tables to determine growth patterns and adjust values accordingly. In fact, careful monitoring is the only way a DBA can effectively manage the storage allocation in a database.
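The extent-size arithmetic in the EMP example can be sketched with a quick query (illustrative only; it models a 50 percent increase applied from the second extent onward, matching the sizes quoted above, and uses Oracle's CONNECT BY LEVEL row-generation idiom):

```sql
-- Each extent is 1.5 times the size of the previous one
SELECT LEVEL                               AS extent_no,
       ROUND(1 * POWER(1.5, LEVEL - 1), 3) AS size_mb
  FROM dual
CONNECT BY LEVEL <= 4;
-- extent sizes come out to 1, 1.5, 2.25, and 3.375 megabytes
```

The first three extents hold the 4.75M of data; the fourth, 3.375M extent is the one that sits mostly empty once growth levels off.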
The DBA should not only be concerned about the amount of space granted to a database object; the DBA must also see that objects use the space they are given appropriately. Space usage is determined at the block level through use of the pctfree and pctused options. The appropriate manner for a database object to manage its space depends on how the object itself is used by the database applications. If the database object experiences a high update transaction volume that changes many fields from NULL to actual values, the storage within that database object should allow for additional growth per row. Conversely, if the object activity is marked by a high insert volume of fairly static-length rows, then the space usage within each block may try to place as many rows as possible into each block before allocating another one. Finally, if a table's size is static and rows are infrequently added to it, the DBA may want to set the block space usage options to put as many rows as possible in each block, while also making sure that the entire table fits into one segment.
The pctfree option is used to specify the amount of space in each block Oracle should leave free for the growth of rows added to the block. For example, if a table has pctfree specified to be 10 percent, then Oracle will stop adding new rows to a block when only about 10 percent free space is left in it. A DBA may set this value high if it is determined that each row in the block will be updated often and each update will add several bytes to the total row length. Setting pctfree high prevents performance killers like chaining (where Oracle breaks a row apart because no single data block can store the entire row) and row migration (where Oracle moves an entire row to another block because the original block doesn't have the room to store it anymore). Conversely, if the rows in the block will not be updated frequently, or if the updates that do occur will not affect the size of each row, the DBA may set the value for pctfree low on that database object.
The pctused option is used to specify a threshold by which Oracle determines whether it is acceptable to start adding rows to a block again. As data is deleted from a table, the space utilization of its blocks may fall. There is a trade-off inherent in specifying this option: although it is good to use free space in a block to add row data, there is some overhead involved in marking a block as free and adding it to a freelist for that database object, especially if only one or two rows can be added to the block. In many cases, the DBA may want any block added to the freelist to have space available for 20 or more rows, in which case the DBA will set the pctused option low. There are other performance trade-offs inherent in the use of pctfree and pctused that will be discussed in the next chapter.
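As a sketch of how these block-level options are specified at object creation (the ORDER_HIST table and its columns are hypothetical):

```sql
CREATE TABLE order_hist
( order_no NUMBER,
  status   VARCHAR2(10) )
PCTFREE 20   -- stop inserting new rows once only 20 percent of a block is free
PCTUSED 40;  -- return a block to the freelist once its usage falls below 40 percent
```

A high pctfree suits heavily updated rows that grow over time; a low pctused avoids the overhead of repeatedly freelisting blocks that can accept only a row or two.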
Several means exist for the DBA to determine storage information for database objects in the database, and they center on the use of the data dictionary. There are three different classifications for data dictionary views associated with tracking information on logical database objects like tables, indexes, clusters, rollback segments, and temporary segments. The names of dictionary views are usually taken from the objects represented by the data in the view, preceded by a prefix indicating the scope of the data (USER_, ALL_, or DBA_). Each segment type has its own data dictionary view that displays its storage information. Assuming that the DBA wants to know the storage parameters set for all objects in the database, the DBA may use the following views to determine storage information for the segment types already discussed:
|DBA_TABLES View contains information about all tables in the database.|
|DBA_INDEXES View contains information about all indexes in the database.|
|DBA_ROLLBACK_SEGS View contains information about all rollback segments in the database.|
|DBA_CLUSTERS View contains information about all clusters in the database.|
|DBA_SEGMENTS This summary view contains all types of segments listed by the data dictionary views above and their storage parameters.|
|DBA_TABLESPACES The DBA can use this view to see the default storage settings for the tablespaces in the database.|
|DBA_TS_QUOTAS The DBA can use this view to identify the tablespace quotas assigned for users to create objects in their default and temporary tablespaces.|
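For example, storage information for every segment a given user owns might be pulled from the summary view like this (the SCOTT schema is illustrative; the column names are per the Oracle data dictionary):

```sql
SELECT segment_name, segment_type, tablespace_name,
       extents, bytes
  FROM dba_segments
 WHERE owner = 'SCOTT'   -- hypothetical schema owner
 ORDER BY segment_name;
```

Comparing the EXTENTS and BYTES columns over time is one way to spot the growth patterns discussed earlier in this section.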
This chapter covered a great deal of information related to database administration on Oracle. The topics covered at a glance were how Oracle supports the accessing and updating of data, how Oracle manages transaction concurrency, how the DBA manages database structure, and how the DBA manages storage allocation. These areas consist of 22 percent of material tested on OCP Exam 2 and are key functions for understanding both how the Oracle database works and how to manage its efforts.
Accessing and updating data was the first area covered. Oracle allows users to access and change data via the SQL language. SQL is a unique language in that it allows users to define the data they want in terms of what they are looking for, rather than as a procedure for obtaining it. Oracle handles obtaining the data by translating SQL into a series of procedures Oracle will execute to fetch the data the user requested. The steps Oracle uses in SQL statement processing are opening the statement cursor, which is a memory address Oracle will use for storing the statement operation; parsing the statement into a series of data operations; binding variables in place of hard-coded values to allow for parse tree sharing; executing the statement; and fetching the results (query only). After the statement is executed, the parse information is left behind in the library cache of the shared pool in order to reduce the amount of memory required to handle user processes and also to boost the performance of SQL statement processing.
In order to further boost performance, Oracle maintains an area of the SGA called the buffer cache, which is used to store data blocks containing rows from recently executed SQL statements. Part of this buffer cache contains an area called the dirty buffer write queue, which is a list of blocks containing row data that has been changed and needs to be written to disk. When users issue statements that require Oracle to retrieve data from disk, obtaining that data is handled by the server process. Another database process, DBWR, eliminates I/O contention on the database by freeing user processes from having to perform disk writes associated with the changes they make. Since users only deal directly with blocks that are in the buffer cache, they experience good performance while the server and DBWR processes handle all disk utilization behind the scenes.
The server process does its job whenever user processes need more blocks brought into the cache. In order to make room for the incoming data, the server process eliminates blocks from the buffer cache according to which ones were used least recently. One exception to this rule is made for blocks that were brought into the buffer cache to support full table scans. These buffers are eliminated almost immediately after they are scanned. The DBWR process writes buffers back to the database when triggered to do so by another process, called LGWR, during a special database event called a checkpoint. DBWR also writes data to the database every three seconds in a timeout.
Oracle handles the tracking of changes in the database through the use of the online redo log. There are several components to the online redo log. The first is an area in memory where user processes place the redo log entries they have to make when they write a change to the database. This area is called the redo log buffer. Another component of the online redo log is a set of files on disk that store the redo log entries. This is the actual "online redo log" portion of the architecture. There is a minimum of two online redo logs in the Oracle database. They consist of one or more files, called "members," that each contain the entire contents of the redo log. For safety purposes, it is best to put each redo log member on a separate disk so as to avoid the failure of one disk causing the failure of an entire Oracle instance. The final component of the online redo log is the log writer process (LGWR), a background process that writes redo entries from the memory buffer to the online redo log.
The use of checkpoints has already been mentioned, and will now be explained. A checkpoint is performed every time LGWR fills an online redo log with redo entries and has to switch to writing entries to another redo log. A checkpoint is when LGWR sends a signal to DBWR to write all changed data blocks in the dirty buffer write queue out to their respective datafiles on disk. By default, checkpoints happen once every log switch, but can happen more often, depending on the values set for LOG_CHECKPOINT_INTERVAL or LOG_CHECKPOINT_TIMEOUT. These two parameters allow for transaction volume-based or time-based checkpoint intervals.
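As a sketch, the two checkpoint frequency parameters mentioned above might appear in the instance's parameter file as follows (the values shown are illustrative, not recommendations):

```
# init.ora fragment -- illustrative values only
LOG_CHECKPOINT_INTERVAL = 10000   # checkpoint after this many redo blocks are written
LOG_CHECKPOINT_TIMEOUT  = 1800    # checkpoint at least once every 1800 seconds
```

The first parameter drives checkpoints by transaction volume, the second by elapsed time; log switches still trigger checkpoints regardless of these settings.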
In multiple-user environments, it must be remembered that there are special considerations required to ensure that users don't overwrite one another's changes on the database. In addition, users must also be able to have read-consistent views of the data, both for individual statements and for collections of statements treated as one operation. The key to transaction concurrency without overwriting another user's changes is the concept of transaction processing. Transactions are made possible in the Oracle database with the use of mechanisms that allow one and only one user at a time to make a change to a database table. These mechanisms are called locks. In addition, when a user makes a change to the database, that change isn't recorded on disk right away. Instead, the change is noted in a database object called a rollback segment. This mechanism allows the user to make a series of changes to the database and save or commit them once as one unit of work. Another feature this architecture allows for is the ability to discard the changes made in favor of the way the data used to look. This act is called a rollback. The rollback segment allows for read-consistent views of the data on the database at the transaction level.
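The commit and rollback mechanics described above can be sketched in SQL (the EMPLOYEE table, column, and values are illustrative):

```sql
UPDATE employee SET lastname = 'SMITH' WHERE empid = 49392;
-- the change is recorded in a rollback segment; other sessions still
-- see the old value until the transaction commits
ROLLBACK;   -- discard the change, restoring the prior image of the data

UPDATE employee SET lastname = 'SMITH' WHERE empid = 49392;
COMMIT;     -- save the change as one unit of work, visible to all users
```

Until the COMMIT, the row remains locked against changes by other sessions, while their queries read the pre-change image from the rollback segment.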
Database structure was another aspect of OCP Exam 2 this chapter covered. There is a physical and a logical view of the database. The physical structure consists of the actual files and disk resources that store the database and limit how large it can grow, while the logical structure organizes those resources into objects such as tablespaces and segments. Storage is governed by parameters set at object creation, and these parameters can be changed at various points in the maintenance of the object. Storage allocation should work with the realities of the physical database design: the DBA should attempt to spread objects over several disks to better utilize the physical resources available to Oracle.
At database creation, Oracle creates a special tablespace called SYSTEM to hold the data dictionary and the initial rollback segment of the database. There are several different types of segments on the database that correspond to the various types of database objects. Some examples are tables, indexes, rollback segments, clusters, and temporary segments. For the most part, these objects have different storage needs, and as such it is usually best for them to be in separate tablespaces.
When the data in a database object grows too large for the segment to store all the data, Oracle must acquire another extent for the object. The size of the initial extent, the acquired extent, the number of extents allowed, and possible percentage increases for each extent of an object are all governed by the use of storage parameters. Another aspect of database usage that is governed by storage parameters is how the data in each data block owned by the object will be stored. In order to find out the storage parameters and the overall space usage for a database object, the DBA can utilize several views in the data dictionary.
|The steps in SQL statement processing are as follows:|
|Open cursor in the shared pool library cache and create a private SQL area in PGA.|
|Parse statement into an execution plan.|
|Bind variables in place of hard-coded values.|
|Execute statement execution plan.|
|Fetch values from cursor (select only).|
|Leave execution plan available in shared pool library cache.|
|The shared SQL area, or library cache of the SGA's shared pool, has the following benefits:|
|Improves performance of subsequent executions of the same query by saving the previous work of developing an execution plan.|
|Saves space in memory by not storing duplicate execution plans.|
|Oracle's buffer cache allows user processes to avoid disk I/O when the data they need is already contained in the buffer cache.|
|The buffer cache contains blocks selected or changed in recent SQL statements.|
|The server process reads data into the buffer cache when user processes request it. Once a block enters the cache, it stays there as long as it continues to be recently used.|
|Blocks in the buffer cache that contain changed row data are stored in the dirty buffer write queue.|
|Blocks that are least recently used, or blocks read into the buffer cache as a result of full table scans, are eliminated from the buffer cache.|
|Blocks are read into the buffer cache by the server process.|
|The server process draws blocks into the cache whenever they are required by user processes.|
|DBWR writes changed blocks out of the dirty buffer write queue to disk whenever signaled to do so by LGWR; this event is called a checkpoint.|
|DBWR also writes dirty blocks to disk every three seconds in an event called a timeout.|
|The Oracle redo log architecture consists of the following components:|
|The LGWR process moves redo entries from memory onto disk.|
|The redo memory buffer stores redo entries from user processes, eliminating the need for user processes to handle I/O directly.|
|The online redo log is a group of one or more files, called members, where LGWR writes the redo log entries from memory. There are at least two online redo logs in the Oracle instance.|
|Checkpoints are events in which LGWR tells DBWR to write all changed blocks to disk.|
|Oracle handles data concurrency and statement- and transaction-level read consistency by using transaction processing.|
|Components of transaction processing are as follows:|
|Locks These prevent more than one user process from making data changes, and sometimes from even looking at changing data, at one time.|
|The rollback segment architecture This architecture allows a user to determine when the changes they have made are acceptable for saving. A commit saves data changes, while a rollback discards data changes.|
|Database storage allocation is handled in physical disk resource creation and logical disk resource creation.|
|Physical database disk resources are as follows:|
|Datafiles|
|Redo log files|
|Logical database disk resources are as follows:|
|Tablespaces|
|Segments and extents|
|The DBA can work around resource conflicts and improve performance with the right hardware available.|
|It is important to customize a database around the need for good backups.|
|The DBA should ensure that the redo log is adequately mirrored, or spread across disks so as to minimize the dependency of Oracle on any one disk on the machine hosting it.|
|At database creation, there is one tablespace, SYSTEM. The DBA should NOT place all database objects into that tablespace. Instead, the DBA should create multiple tablespaces for the various types of objects on the database and place the objects accordingly.|
|There are several different types of segments. They are as follows:|
|Table Stores table row data|
|Index Stores index row data|
|Rollback Stores data from uncommitted transactions until the transaction completes|
|Cluster Stores cluster row data. A cluster is a special database object designed to enhance performance on selecting data from two closely related tables.|
|Temporary Stores data on a temporary basis from user SQL statements. In certain cases, Oracle needs extra space to perform special operations on selected data (such as a sort).|
|When the segment containing a database object cannot store any more data for that object, Oracle will obtain another extent to store the data.|
|There are several logical storage options that govern how Oracle obtains extents for segments. They are as follows:|
|initial The size in K or M of the initial extent of a data object.|
|next The size in K or M of the next extent allocated to a data object.|
|minextents The minimum number of extents a data object will have.|
|maxextents The maximum number of extents a data object will have.|
|pctincrease A percentage increase in the next extent size that will be allocated for a database object.|
|freelists The number of lists of free data blocks that Oracle will maintain in each freelist group (see below).|
|freelist groups The number of groups of freelists Oracle will maintain for a data object. The default is one.|
|If the DBA wants to size the next extent granted to a database object, she can alter the object to change the next option to set an explicit size for the next extent created. The change will take effect on the next extent granted.|
|If the DBA wants to change the percentage increase of each extent obtained, she can alter the object to change the pctincrease option. The percentage increase change will take effect on the next extent created.|
|Oracle allows the DBA to control how an object uses its space allocation at the block level. The options are as follows:|
|pctfree The amount of space left free for size increases of rows already added to the block via updates.|
|pctused A threshold of space usage that a filled block must fall below before Oracle will begin adding new row data to the block again.|
|Information on storage parameters and space usage for each type of database object is available in the data dictionary views DBA_TABLES, DBA_INDEXES, DBA_ROLLBACK_SEGS, DBA_CLUSTERS, and the summary view DBA_SEGMENTS.|
|Default settings for storage parameters as set by each tablespace can be found in the DBA_TABLESPACES view.|
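A sketch of pulling those tablespace defaults from the dictionary (column names are per the Oracle data dictionary):

```sql
SELECT tablespace_name, initial_extent, next_extent,
       min_extents, max_extents, pct_increase
  FROM dba_tablespaces;
```

Any object created in a tablespace without explicit storage clauses inherits the values this query reports.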