Execute the alter database open resetlogs; command from Oracle Server*Manager. Issue the recover database until time 'YYYY-MM-DD:HH24:MI:SS' command from within Oracle Server*Manager.
If the structure of a database table is changed—using an alter table command, for example—the programs that create the SQL*Loader-readable files must be changed. If this is not done, proper backups of all data within the database object will not be taken.
For some sites, however, this type of backup is practical despite the obvious constraints. For a more detailed discussion of the implementation of SQL*Loader, refer to the Oracle7 Server Utilities User's Guide.
Types of Database Failure
Every DBA experiences a database failure at some point. It might be a minor failure in which the users never even know that they lost service, or it might be a severe loss that lasts for several days. Most failures fall somewhere in the middle.
Most failures result primarily from the loss or corruption of a physical data file. Of course, many other factors can cause database problems. Indeed, problems can occur in the memory structures (the SGA), the system hardware, or even the Oracle software that prevent the DBA from starting up the database. The following sections describe the most common types of failures.
Tablespace
If a loss or corruption takes place in any of the database files that make up a tablespace, media recovery is required. The extent of the media recovery needed depends largely on the extent of the data file loss or corruption. The three types of recovery available for this kind of failure are
● Database recovery
● Tablespace recovery
● Data file recovery
The database recovery method is generally chosen if the SYSTEM tablespace has been damaged, in which case it syncs all the data files within the database during the recovery procedure. The tablespace recovery method is used if recovery is needed for multiple tablespaces that have become damaged, such as from the loss of a disk drive. The data file recovery method is performed if only a single database file has been damaged. The commands used to implement these methods are
recover database;
recover tablespace users;
recover datafile '/u03/oradata/norm/rbs01.dbf';
Control File
Whenever a database loses a control file, there is generally little impact on the database itself as long as the DBA has mirrored the control files. To recover the control file, follow these steps (which assume that the control file has been mirrored):
1. From Oracle Server*Manager, do connect internal and perform a shutdown (or shutdown immediate) on the database.
2. Copy one of the existing control files over the corrupted file. If it is not possible to do this, copy it to another location and reflect the change in the CONTROL_FILES parameter of the INIT.ORA parameter file, or remove the entry completely.
3. From Oracle Server*Manager, do connect internal and perform a startup on the database.
The database will bring the control file in sync with the database, and the users will experience no loss of service or downtime.
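Assuming the control files are mirrored, the three steps above can be sketched as a short Server*Manager session; the control file paths here are illustrative only:

```sql
connect internal
shutdown immediate
REM ** Overwrite the damaged copy with a surviving mirror (host command)
!cp /u02/oradata/norm/control02.ctl /u01/oradata/norm/control01.ctl
startup
```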
If a control file has been lost and there is no backup, Oracle continues to run until it attempts to access the control file. At that point, the Oracle instance aborts. Two options are available to the DBA:
● Create a new control file
● Restore the control file from backup
To create a control file, you must first create a SQL script that will adequately recreate the existing control file. If a backup to trace is part of regular backups, the script already exists in the USER_DUMP_DEST directory. Use ls -lt in UNIX to find the most recent one. (Use view to make sure that it creates a control file and is not simply SQL*Trace output.) Perform the following steps:
1. Locate or create a SQL script.
2. From Oracle Server*Manager, do connect internal.
3. If a new create script was created, issue the startup nomount command and execute the SQL script. Then execute the commands
recover database;
alter system archive log all;
alter database open;
4. If the create control file script is from a backup to trace, execute the script from a shutdown database. It executes all the intermediate steps and opens the database.
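As a rough illustration, a create control file script produced by a backup to trace looks something like the following sketch; the database name, file names, and limits here are hypothetical and would be replaced by the values in the actual trace file:

```sql
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "NORM" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 254
LOGFILE
    GROUP 1 '/u02/oradata/norm/redo01.log' SIZE 1M,
    GROUP 2 '/u04/oradata/norm/redo02.log' SIZE 1M
DATAFILE
    '/u01/oradata/norm/system01.dbf',
    '/u01/oradata/norm/users01.dbf';
RECOVER DATABASE
ALTER SYSTEM ARCHIVE LOG ALL;
ALTER DATABASE OPEN;
```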
If you choose to use a backup control file, issue the following recover command in place of the standard recover database command:

recover database using backup controlfile;
If all the members of a redo log group become corrupted or are lost, the scenario is entirely different. Dealing with the loss of an entire redo log group is the same as dealing with an unmirrored redo log. The two possibilities are
● The redo logs were not the active group
● The redo logs were the active group
If the redo log group was not the active group, the corrupt group and its members eventually cause the database to shut down. The trick is to recognize that damage has been done and to react before the database shuts down. Restore the online redo log from tape, or copy it from an existing redo log group if they are the same size. If the disk itself is corrupt and unavailable, rename the redo log group. If you are lucky enough to catch the database at this time, this is the best alternative. Otherwise, if the database attempts to access the corrupted redo log, the redo log must be recovered as if the active redo log were lost (see below).
The more likely scenario is that the database aborted because it lost an inactive online redo log. The recovery steps are basically the same, but they are done in an offline fashion. Recover the offending redo log group, or make a copy of an existing group if they are the same physical size. From Oracle Server*Manager, do connect internal and start up the database. The downtime involved should be minimal.
A loss of the current online redo log requires a limited recovery scenario. Although a full database recovery is not actually applied, you must make the database think that one has occurred. Only then can processing continue. The steps are
1. From Oracle Server*Manager, do connect internal. Use shutdown, shutdown immediate, or shutdown abort to shut down the database.
2. Execute startup mount on the database instance.
3. Once the database has been mounted, issue the recover database command. At the next prompt, enter cancel.
4. Issue an alter database rename command to move the corrupted redo logs to a new location. The new files are created automatically.
5. Execute the alter database open resetlogs; command from Oracle Server*Manager. The database is brought back online for continued operations.
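Taken together, the steps above can be sketched as the following Server*Manager session; the redo log locations are illustrative only:

```sql
connect internal
shutdown abort
startup mount
recover database
REM ** At the next prompt, enter: cancel
alter database rename file '/u02/oradata/norm/redo02a.log'
    to '/u04/oradata/norm/redo02a.log';
alter database open resetlogs;
```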
Operations that require restarting an aborted Oracle database instance can be quite complex. The complications that can arise during an operation as sensitive as a recovery are numerous. If the recovery process does not seem to work properly, stop and contact Oracle technical support immediately.
Even though the loss of an archive log does not affect the continued operations of the database—which is why NOARCHIVELOG mode is available—if anything occurs that requires database recovery before the next backup, it will be impossible to recover the database.
Because archive logs facilitate recovery, their loss is often realized only when it is too late. It is a difficult position for a DBA, and there is no clear right or wrong solution. It depends on the backup schedule. It is easier to wait a few hours until the next hot backup than to wait several days for the next cold backup.
We recommend that you immediately initiate a hot backup of the database. It will slow things down and cause the system to choke a little on processing, especially during peak usage time. It is far better, though, than waiting and hoping that nothing will happen.
Recovery Methods
There are several methods for performing database recovery. Each method offers a trade-off between speed and simplicity. The following sections describe the major types of recovery available through the Oracle RDBMS.
Cold Restore
In a cold restore, all the database files, control files, and redo logs are restored from tape or disk, and the database is restarted. It is the simplest, most complete recovery operation to perform. The primary drawback is that anything done to the database since the last backup is lost.
The steps in a cold restore are
1. Make sure that the current Oracle database instance is shut down.
2. Replace all the existing database files, control files, and redo logs with earlier versions from tape or disk.
The time and date stamps on all of the restored files should be from the same period of time. If they are not, the database will be out of sync and will not open properly.
3. From Oracle Server*Manager, do connect internal and issue a startup command. The database is now ready for use.
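Assuming the backup was written to tape with cpio, as in the sample cold backup script later in this chapter, the restore can be sketched as follows; the tape device is illustrative:

```sql
connect internal
REM ** With the instance down, restore the files from tape (host command)
!cpio -icvdB < /dev/rmt/0hc
startup
```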
Full Database Recovery
In a full database recovery, also called a complete recovery, data changed since the last backup can be restored. One or more database files are restored from backup. Archive logs are then applied to them until they are in sync with the rest of the database.
The steps in a full database recovery are
1. Make sure that the database instance is shut down.
2. Restore the data file from tape or disk.
3. From Oracle Server*Manager, do connect internal and perform startup mount on the database instance.
4. Issue the recover database automatic command from within Oracle Server*Manager. Oracle Server*Manager responds by applying all the required changes to the database instance. Depending on the length of time since the last backup and the size and number of the archive logs, this can take from a few seconds to several hours.
5. After the SVRMGR> prompt returns, issue the alter database open noresetlogs; command. The database is now completely recovered and available for use.
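The steps above condense to a short Server*Manager session, assuming the damaged data file has already been restored from backup:

```sql
connect internal
startup mount
recover database automatic
REM ** Wait while the archive logs are applied
alter database open noresetlogs;
```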
There are several variations of the recover database command, including recover datafile and recover tablespace.
Time-Based Recovery
Sometimes a recovery is required, but not everything in the archive logs is necessary. Suppose, for example, that an overzealous developer deploys a job that deletes every other row in a transaction processing table. In this case, a full recovery will not work. Because the transactions that corrupted the table are in the archive logs, a full recovery simply restores from the last backup and processes all the transactions, including the haphazard delete. If you know that the job ran at 2:30 p.m., you can use time-based recovery to recover until 2:29 p.m. That way, the table is exactly as it appeared before the job ran. This is also called an incomplete recovery.
A time-based recovery is performed exactly like a full recovery, with the exception of the recover database command. The steps are
1. Make sure that the database instance is shut down.
2. Restore the data file from tape or disk.
3. From Oracle Server*Manager, do connect internal and perform startup mount on the database instance.
4. Issue the recover database until time 'YYYY-MM-DD:HH24:MI:SS' command from within Oracle Server*Manager. This is a mask for the time and day at which the recovery should stop. Oracle Server*Manager responds by applying all the required changes to the database instance. Depending on the length of time since the last backup and the size and number of the archive logs, this can take from a few seconds to several hours.
5. After the SVRMGR> prompt returns, issue the alter database open resetlogs; command. The database is now completely recovered and available for use.
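For the 2:30 p.m. example above, the session might look like this; the date itself is illustrative:

```sql
connect internal
startup mount
recover database until time '1997-06-14:14:29:00'
alter database open resetlogs;
```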
Cancel-Based Recovery
Even if you do not know the exact time when an error occurred, you might feel reasonably certain that you can isolate when to terminate the recovery based on the thread/sequence number. Perhaps there was a break in the archive logs because you had the database out of ARCHIVELOG mode for a short time, or perhaps you want more control over which archive logs are applied as part of the recovery. The solution is cancel-based recovery.
Under cancel-based recovery, you are prompted after each archive log is applied. The recovery process continues until either the recovery is complete or you enter cancel at the prompt. The prompt appears within Oracle Server*Manager as
Specify log: [<RET> for suggested | AUTO | FROM logsource | CANCEL]
Once you enter cancel at the prompt, the recovery stops.
The steps in a cancel-based recovery are
1. Make sure that the database instance is shut down.
2. Restore the data file from tape or disk.
3. From Oracle Server*Manager, do connect internal and perform startup mount on the database instance.
4. Issue the recover database until cancel command from within Oracle Server*Manager. Oracle Server*Manager responds by prompting you before each archive log is applied. The recovery ends when the database encounters the final archive log or when you enter cancel.
5. The SVRMGR> prompt will return. If the recovery ran until completion, issue the alter database open noresetlogs; command. If you entered cancel to end the recovery, issue the alter database open resetlogs; command. The database is recovered to the point of completion or cancellation.
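The whole procedure can be sketched as follows, assuming the restored data file is already in place:

```sql
connect internal
startup mount
recover database until cancel
REM ** Enter cancel at the Specify log prompt to stop applying logs
alter database open resetlogs;
```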
Sample Database Backup Scripts
The code examples in the following sections show you how to set up and execute hot and cold backup schemes. These are not highly intensive processing modules. There are certainly ways to make them more sophisticated. For example, you could use the Oracle data dictionary to determine which files to back up. Figure 14.7 shows the sample database that the scripts back up.
Figure 14.7 Sample Oracle database layout.
Cold Backup

#!/bin/sh
# Oracle RDBMS Cold Backup
# shut down the database
echo "connect internal
shutdown immediate" | $ORACLE_HOME/bin/svrmgrl
# make copies of all database files, control files, and redo logs
find /u*/oradata/norm/*.dbf /u*/oradata/norm/*.ctl /u*/oradata/norm/*.log | cpio -ocvB > /dev/rmt/0hc
# start up the database
echo "connect internal
startup" | $ORACLE_HOME/bin/svrmgrl
Hot Backup
This hot backup script shows a backup that occurs to disk instead of to tape, as in the cold backup. Whereas the cold backup shuts down the database and does a mass file copy, the hot backup tediously copies the database files for each tablespace. The cold backup is more dynamic than the hot backup because it uses wildcards and the OFA. Whenever a new database file is added or changed, the hot backup script must be changed. If it is not changed, an adequate backup is not taken.
Unlike the cold backup script, which makes its copies to tape, the hot backup script makes copies of the Oracle files to disk. Either type of copy is acceptable for either backup method.
#!/bin/sh
# Oracle Hot Backup Script
$ORACLE_HOME/bin/svrmgrl << EOF
connect internal
REM ** Backup System Tablespace
alter tablespace system begin backup;
!cp /u01/oradata/norm/system01.dbf /b01/oradata/norm/system01.dbf
!cp /u03/oradata/norm/system02.dbf /b03/oradata/norm/system02.dbf
alter tablespace system end backup;
REM ** Backup TOOLS tablespace
alter tablespace tools begin backup;
!cp /u01/oradata/norm/tools01.dbf /b01/oradata/norm/tools01.dbf
alter tablespace tools end backup;
REM ** Backup RBS Tablespace
alter tablespace rbs begin backup;
!cp /u01/oradata/norm/rbs01.dbf /b01/oradata/norm/rbs01.dbf
alter tablespace rbs end backup;
REM ** Backup USERS Tablespace
alter tablespace users begin backup;
!cp /u01/oradata/norm/users01.dbf /b01/oradata/norm/users01.dbf
alter tablespace users end backup;
REM ** Backup PROD Tablespace
alter tablespace prod begin backup;
!cp /u03/oradata/norm/prod01.dbf /b03/oradata/norm/prod01.dbf
!cp /u05/oradata/norm/prod02.dbf /b03/oradata/norm/prod02.dbf
alter tablespace prod end backup;
REM ** Perform Control file backup
alter database backup controlfile to '/b01/oradata/norm/control.ctl';
alter database backup controlfile to trace;
REM ** Backup OnLine Redo Logs
Cold backups shut down the database. Hot backups take backups while the database is functioning. There are also supplemental backup methods, such as exports. Each type of backup has its advantages and disadvantages. The major types of instance recovery are cold restore, full database recovery, time-based recovery, and cancel-based recovery.
This chapter also contains sample scripts that you can use to build your own backup scripts
● 15
❍ Performance Tuning and Optimizing
■ General Concepts in Database Tuning
■ Applications Tuning
■ Database Tuning
■ Operating System Tuning
■ Performance Tools
■ Viewing SGA and Parameter Settings
■ UTLBSTAT and UTLESTAT
■ EXPLAIN PLAN
■ SQL*Trace and TKPROF
■ Dynamic Performance (V$) Tables
■ Tuning Database SGA
■ Examining the Current SGA
■ Changing the SGA Size
■ Database Block Size
■ Database Buffer Cache
■ Shared Pool Size
■ Sort Area Size
■ Ramifications of SGA Changes
■ Contention Issues
■ I/O Contention and Load Balancing
■ Rollback Segment Contention
■ Redo Log Contention
■ Checkpoints
■ Database Objects
■ Tables and Indexes
■ Migrated and Chained Rows
■ Types and Classes of Locks
■ Unresolved Locking Issues
■ Checking Locking Situation
■ Summary
15
Performance Tuning and Optimizing
Give a user an inch, and he wants a mile. If you change a database query so that it runs in one minute instead of five, the user will want it to work in 30 seconds. No matter how fast a database runs, there is always the need to make it go faster. Ultimately, this task falls to the DBA. A DBA really has two levels of responsibility: actual and perceived.
Actual responsibility means the tasks for which a DBA is genuinely responsible: keeping the database available for day-to-day business needs, creating new user accounts, monitoring the overall health of the database, and so on. Perceived responsibility means the responsibility incurred when there is any problem with the database—or even a conflict in the corporate IS structure. A DBA is often asked why the database is down when a link has broken in the WAN, or why the database is performing slowly when a poorly written application is deployed into a production environment.

Because all database problems are perceived to be the responsibility of the DBA, it falls to him—whether he likes it or not—to validate the claims or dispel the rumors. The DBA must have a solid foundation of knowledge on which to base his decisions. In many larger IS departments, the DBA may not be responsible for performance tuning. In others, the DBA may be responsible only for database—but not application—performance tuning. At some sites, the DBA is responsible for all performance tuning functions of the database.
This chapter deals with the art of performance tuning.
For more information about performance considerations while designing a database, see Chapter 17, "Designing a Database."
General Concepts in Database Tuning
When you are called on to optimize or tune a system, it is of paramount importance that you distinguish between the two levels of performance tuning: applications tuning and database tuning. They are distinct areas of expertise and are often handled by different people. The DBA should have at least an overview of the importance and functions of each type of tuning.
At the base of everything is the operating system, which drives the physical functionality—such as how to access the physical disk devices. On top of this level rests the RDBMS, which interacts with the operating system to store information physically. Applications communicate with the RDBMS to perform business tasks.
Applications Tuning
Applications tuning deals with how the various applications—forms, reports, and so on—are put together to interact with the database. Previous chapters discussed how a database is little more than a series of physical data files. Essentially, an application is nothing more than a program that issues calls to the database, which in turn are interpreted as physical reads and writes from the physical data files. Applications tuning means controlling the frequency and amount of data that the application requests from or sends to the database.
Here are some general guidelines for tuning applications:
● Generate an EXPLAIN PLAN on all the queries in the application. This helps you determine whether a query has been properly optimized. The EXPLAIN PLAN is discussed later in this chapter.
● Check the EXPLAIN PLAN of database views. This is important because views are indistinguishable from tables when they are used in queries. Because the SQL for a view is not executed until it is queried, an inefficient view can drastically slow down the performance of an otherwise efficient application. Be especially wary of joining views with other views.
● If an application that was performing acceptably begins to perform slowly, stop and determine what has changed. In many cases, queries run fine in test environments and in the first few months of production until data accumulates; an index might now be needed to expedite the database searches. In other cases, however, an index that invalidates existing EXPLAIN PLANs might have been added. This is a real danger when too many people can create indexes on production tables. The more indexes a table has, the longer it takes to load or change data in the table; it also impacts the speed with which the database returns query results.
● Match SQL where possible. Applications should use the same SQL statements wherever possible to take advantage of Oracle's Shared SQL Area. The SQL must match exactly to take advantage of this.
● Be as specific as possible. The more specific a database query is, the faster it executes. For example, querying a table by a ROWID is far more specific than querying with the LIKE clause. Unless it is necessary to use less specific queries in an application, always write queries that can use the PRIMARY KEY or other indexed information.
● Be aware of how often queries are made against the database and whether they are necessary. Avoid too frequent or unnecessary calls, such as a loop that queries the DUAL table for the name of the user. Each time the loop executes, the query is executed. Other types of queries are even more expensive. Whenever possible, process data in memory and refrain from querying the database.
● SQL is not a file handler. One of the most common mistakes in SQL programming is made by people who have previous programming experience using file handlers, such as BTRIEVE or ISAM. Software developers should be wary of writing two separate queries for master/detail relationships—that is, one query for the master and another for the details of that master—instead of just a single query. Separate queries involve extra processing that can impose a substantial overhead on application programs.
● Tuning does not solve the problems of poor design. This is the most essential truth in applications tuning. It emphasizes what everyone who has ever worked in systems development knows: spend time proactively, not reactively. No matter how many indexes are created, how much optimization is done to queries, or how many caches and buffers are tweaked and tuned—if the design of a database is faulty, the performance of the overall system suffers.
These are only guidelines for applications tuning. Each site has its own specific problems and issues that affect its applications. More often than not, it is the duty of the developers to tune and modify their own programs without the involvement of the DBA. Because of perceived responsibility, however, the DBA must work with the applications development staff to resolve these problems.
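As an illustration of the specificity guideline, consider a hypothetical employees table whose emp_id column is the indexed primary key; the table and column names are assumptions for the example:

```sql
-- Specific: the optimizer can use the PRIMARY KEY index on emp_id
SELECT emp_name FROM employees WHERE emp_id = 1042;

-- Less specific: a leading wildcard forces a full table scan
SELECT emp_name FROM employees WHERE emp_name LIKE '%SMITH%';
```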
Database Tuning

At the database level, there are three kinds of tuning: memory tuning, I/O tuning, and contention tuning. Memory tuning deals with the caches, buffers, and shared pools that reside in memory and compose the core memory structures for the Oracle RDBMS. I/O tuning is concerned with maximizing the speed and efficiency with which the RDBMS accesses the physical data files that make up its basic storage units. Contention tuning seeks to resolve problems in which the database fights against itself for database resources.
There are only four basic steps involved in database tuning. They hold true for all three types of tuning:
1 Gather information
2 Determine optimal changes
3 Implement changes
4 Monitor the database
As with applications tuning, the more proactively the process is done, the more effective it is. The process is seldom effective when it is done on the fly or without the proper amount of research.
Operating System Tuning
Tuning at the operating system level is beyond the scope of this chapter. This task falls to the system administrator—only in rare cases to the DBA. However, it is often the role of the DBA to offer suggestions. Some issues to consider are
● Paging and swapping. At the operating system level, paging and swapping are used to transfer information from the system's memory (RAM) to disk and back again. This enables the system to manipulate more information than it could normally handle in real memory. However, excessive paging and swapping can cause system performance to degrade. The DBA and the system administrator should work together to optimize memory use and reduce or eliminate paging and swapping.
● Striping, mirroring, and RAID. In many cases, the disk drives write a piece of information across several disks (striping), write all the information across a pair of disks (mirroring), or write all the information across every disk in a fixed-number set (RAID). These disk drive configurations can help make I/O more efficient by distributing reads and writes across many disks. In some cases, they increase the fault tolerance of the system itself. It is important for the DBA to be aware that many of these configurations have an impact on I/O performance. RAID drives, for example, must access each disk in the set for every read and write operation.
● Shared memory. Used for communication between processes, shared memory settings are usually configured within the operating system. The Oracle Installation and Configuration Guide gives the minimum settings for configuring shared memory for an Oracle RDBMS. These settings are the minimum required for running the RDBMS; in practice, they should generally be set higher.
● Maximum processes. One of the drawbacks of Oracle (or any other RDBMS) is the amount of overhead it requires from the system on which it runs. One of the areas in which Oracle takes extended overhead is processes. The database itself consists of background processes—PMON, SMON, LGWR, DBWR, and so on—plus an additional process for each user who connects to the database. Although this value can be limited at the database level through the PROCESSES parameter in the INIT.ORA parameter file, it is important to make certain that the operating system supports the number of processes. It is also important to ensure that the number of available processes allows for growth.
● Maximum open files. A particular issue for an Oracle RDBMS is the maximum number of open files that a single process can hold. The number of files is defined at both the operating system and RDBMS levels. The RDBMS sets this limit with the MAXDATAFILES parameter of the create database statement. There is also a limit at the operating system level, which depends on the configuration of the operating system. In both cases, the DBA can change the value.
It is not advisable to change the operating system priority of the Oracle background processes. If these values are altered, the database might process information less efficiently. If you must modify them, set all database processes to the same value.
Performance Tools
In tuning a database, the first and most crucial step is gathering statistics on current database performance. These tools give a benchmark of how the database is currently performing and enable the DBA to gauge progress by measuring improvement.
Viewing SGA and Parameter Settings
Use Oracle Server*Manager to view current parameter settings for an Oracle RDBMS instance. The show sga command shows the current size and makeup of the SGA. You can also display the INIT.ORA parameters with the show parameter command. To display only a particular parameter, add its name to the command. For example,
% svrmgrl
SVRMGR> Connect internal
Connected
SVRMGR> show parameter block
All the database parameters are shown, even ones that have not been explicitly set in the INIT.ORA parameter file. Parameters that the DBA has not set are shown with their default values. By spooling this list to a file, the DBA can get an accurate snapshot of a database's settings.
UTLBSTAT and UTLESTAT
To determine what needs to be fixed in an Oracle RDBMS instance, you must first determine what is broken. In some cases, performance problems occur sporadically; usually, however, they have a specific pattern. Do they occur around lunch time? At night? Early in the morning? One of the keys to successful performance tuning is being able to identify when the problem is occurring.
Oracle provides tools that enable you to examine in detail what the Oracle RDBMS was doing during a specific period of time: the begin statistics utility (utlbstat) and the end statistics utility (utlestat). These scripts enable you to take a snapshot of how the instance was performing during an interval of time. They use the Oracle dynamic performance (V$) tables to gather information.
It is important to use the utlbstat and utlestat utilities only against a database instance that has been running for a while. Because an Oracle RDBMS instance reinitializes its dynamic performance tables during database startup, information gathered from a database that has not been running long enough to accumulate statistics is inconclusive.
To use utlbstat and utlestat, the database must have been started with the value of TIMED_STATISTICS in the INIT.ORA parameter file set to TRUE. Oracle does not collect some of the information required for the report if this parameter is not set to TRUE. Setting TIMED_STATISTICS to TRUE, however, causes the database instance to incur overhead. The amount is small—only about 4 to 8 percent in quantitative terms—and it is necessary to take an accurate snapshot of the database performance. Many DBAs set this parameter to TRUE only when they gather statistics.
Once you have set the required parameters, the database has run for a sufficient period of time, and you have identified the window, take the snapshot by using utlbstat. To execute either script, you must be able to connect internal to the database. Running utlbstat tells the RDBMS instance to begin gathering statistics until told otherwise. It is executed as follows:
% svrmgrl
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlbstat
From the moment this script is executed, the Oracle RDBMS instance gathers performance statistics. It continues to do so until you run the utlestat script, which stops the gathering of performance statistics. It is important that the database remain active and not be shut down while utlbstat is running.
% svrmgrl
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlestat
When you run utlestat, the database creates a report called REPORT.TXT in the current directory, which contains the statistical information gathered. Each report contains the following information:
● Library cache statistics
● System summary statistics
● System-wide wait event statistics
● The average length of the dirty buffer write queue
● File I/O statistics
● SGA and cache statistics
● Latch statistics
● Rollback segment statistics
● Current initialization parameter settings
● Dictionary cache statistics
● Start and stop time statistics
A sample report called REPORT.TXT is included on the CD-ROM and shows what a report produced by utlestat might look like.
Generating the report is simple; interpreting it is another matter entirely. The rest of this chapter looks at what this information means. The report itself gives some brief hints. When in doubt, always remember to keep hit rates high and wait times low.
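One of the most-watched hit rates is the buffer cache hit ratio, which can also be computed at any time from the system statistics; here is a sketch (the statistic names are those found in V$SYSSTAT):

```sql
SELECT 1 - (phy.value / (cur.value + con.value)) "CACHE HIT RATIO"
FROM v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE phy.name = 'physical reads'
AND cur.name = 'db block gets'
AND con.name = 'consistent gets'
/
```

A value near 1 means most reads are satisfied from the buffer cache; a persistently low value suggests the database buffer cache may be too small.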
EXPLAIN PLAN
Performance tuning does not always have to happen on a global, database-level view. In theory, most tuning should take place at much lower, scalable levels where the performance impact is more easily measured. A fundamental truth of database tuning and optimization is that performance tuning is not sorcery or magic. Optimizing a database will not make a poorly tuned application run faster; the reverse is also true, though less common. It is important to examine how the database handles processing at the application, or SQL, level.
To do this, Oracle provides a tool in the form of the EXPLAIN PLAN, which enables the DBA to pass a SQL statement through the Oracle optimizer and learn how the statement will be executed by the database—the execution plan. That way, it is possible to learn whether the database is performing as expected—for example, whether it uses an index on a table instead of scanning the entire table.
Several factors can affect the results returned by an EXPLAIN PLAN They include
● Changes in statistics when running the database under the Cost-Based Optimizer
● The use of HINTS under the Rule-Based Optimizer that cause the query to select a particular execution path
● The addition or deletion of new indexes on one of the tables in the SQL statement when running the database under the Rule-Based Optimizer
● Subtle changes in the WHERE or FROM clause of a SQL SELECT statement when running the database under the Rule-Based Optimizer
● The presence of database objects with the same name as the object being referenced in the schema of the user executing the query
It is important to understand that the results of an EXPLAIN PLAN are, therefore, by no means fixed and final. The DBA must be aware of changes made to database objects—such as adding new indexes—and of how fast the tables are growing.
The Oracle RDBMS uses the EXPLAIN PLAN by storing information about how a query is executed in a table within the user's schema. The table must exist for the EXPLAIN PLAN to work. To create the table, the user must execute the following script. Of course, he must have the CREATE TABLE privilege and RESOURCE or quota privileges on his default tablespace.
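In standard Oracle7 distributions, the script that creates this table is utlxplan.sql, run in the same manner as the statistics scripts:

```sql
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlxplan
```

Running it creates a table named PLAN_TABLE in the current user's schema, which is the table used in the following example.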
EXPLAIN PLAN
SET STATEMENT_ID = 'QUERY1'
INTO PLAN_TABLE FOR
SELECT O.ORDER_DATE, O.ORDERNO, O.PARTNO, P.PART_DESC, O.QTY
FROM ORDER O, PART P
WHERE O.PARTNO = P.PARTNO
/
Note the SET STATEMENT_ID and INTO clauses of the EXPLAIN PLAN. The value of SET STATEMENT_ID is used to make the execution of the EXPLAIN PLAN stored within the table unique; it can be virtually any string up to 30 characters in length. Specifying a table in the INTO clause, on the other hand, tells the EXPLAIN PLAN where to place information about the query execution. In the previous example, the execution of the query is identified as QUERY1 and has its information stored in the table PLAN_TABLE.
Now that the EXPLAIN PLAN has loaded the table with information, there is the obvious question of how to retrieve and interpret the information provided. Oracle provides a script in the Oracle7 Server Utilities Guide that displays the information in a tree-like fashion. It is
SELECT LPAD(' ', 2*(LEVEL-1))||operation||' '||
       options, object_name "QUERY PLAN"
FROM plan_table
START WITH id = 0 AND statement_id = 'QUERY1'
CONNECT BY PRIOR id = parent_id
/
By running a SQL query through the EXPLAIN PLAN, a pseudo-graph similar to the following is produced:
QUERY PLAN
SORT ORDER BY
  NESTED LOOPS
    FILTER
      NESTED LOOPS OUTER
        TABLE ACCESS FULL HEADER
        TABLE ACCESS BY ROWID DETAIL
          INDEX RANGE SCAN DETAIL_PK
    INDEX RANGE SCAN DETAIL_PK
When you interpret the output, it is important to understand that all operations, as reported by the EXPLAIN PLAN, are basically operation/option combinations. There is no way to discuss all these combinations or the possible interpretations of all the EXPLAIN PLAN scenarios. As with many aspects of the IS industry—especially relational databases—the only true teacher is experience. However, here are some of the more common operation/option pairs that EXPLAIN PLAN returns:
FILTER Eliminates rows from a table by conditions specified in the WHERE clause of a SQL statement
MERGE/JOIN Combines two sorted lists of data into a single, sorted list; used on multi-table queries
SORT/GROUP BY Sorts table data as specified in a GROUP BY clause of the SQL statement
SORT/JOIN Performs a sort on the data from the tables before a MERGE JOIN operation
SORT/ORDER BY Sorts table data as specified in an ORDER BY clause of a SQL statement
SORT/UNIQUE Performs a sort on table data being returned and eliminates duplicate rows
TABLE ACCESS/FULL Performs a full scan of the database table to locate and return required data
TABLE ACCESS/ROWID Locates a row in a database table by using its unique ROWID
VIEW Returns information from a database view
The EXPLAIN PLAN is a powerful tool for software developers because it enables them to ensure that their queries are properly tuned. Of course, changes made to database objects can adversely affect the results of the EXPLAIN PLAN, but it is useful in determining where the performance drains on an application will occur.
SQL*Trace and TKPROF
Oracle SQL*Trace and EXPLAIN PLAN are similar in that both are used to do performance tuning at the application level and both show the manner in which the Oracle RDBMS executes a query. Unlike the EXPLAIN PLAN, which simply shows how the database optimizer chooses to execute a query to return specified information, SQL*Trace reveals the quantitative numbers behind the SQL execution: in addition to an execution plan, it reports factors such as CPU and disk resource consumption. This is often considered a lower-level view of how a database query is performing, for it shows factors at both the operating system and RDBMS levels.
To use SQL*Trace, you must first set some parameters in the INIT.ORA parameter file:
MAX_DUMP_FILE_SIZE Denotes the maximum size for an Oracle-generated trace file. This value is expressed in operating system blocks (which may differ in size from database blocks)
SQL_TRACE Causes a trace file to be written for every user who connects to the database when it is set to TRUE. Because of disk space requirements and database overhead, it should be used judiciously
TIMED_STATISTICS Causes the database to gather timing statistics when this value is set to TRUE. It causes overhead of 4-8 percent
USER_DUMP_DEST The directory path where trace files will be written
Once you have set the INIT.ORA parameters, you can invoke the SQL*Trace utility manually. If the SQL_TRACE parameter is set to TRUE, it is not necessary to invoke SQL*Trace manually because a trace file is written automatically; however, it is more common to call it manually. To invoke SQL*Trace, use either SQL or PL/SQL.
Use SQL when there is a specific query to be analyzed.
You can either type in the query at the SQL prompt or source it in from an external file that contains the query
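For example, a minimal session-level trace of the ORDER/PART query used earlier might look like this (ALTER SESSION SET SQL_TRACE is the standard SQL-level switch):

```sql
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
SQL> SELECT O.ORDER_DATE, O.ORDERNO, O.PARTNO, P.PART_DESC, O.QTY
  2  FROM ORDER O, PART P
  3  WHERE O.PARTNO = P.PARTNO;
SQL> ALTER SESSION SET SQL_TRACE = FALSE;
```

Everything the session executes between the two ALTER SESSION statements is recorded in a trace file in the USER_DUMP_DEST directory.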
In many cases, especially with applications such as SQL*Forms, it is necessary to invoke the trace facility by using PL/SQL. This is especially helpful when you are dealing with a third-party application for which the SQL syntax is not readily obvious. To invoke SQL*Trace, use the following PL/SQL statement:
BEGIN
   DBMS_SESSION.SET_SQL_TRACE (TRUE);
   /* PL/SQL code goes here */
   DBMS_SESSION.SET_SQL_TRACE (FALSE);
END;
As with SQL*Plus, the trace gathers information until the session disconnects or tracing is deactivated.
After the trace file has been generated, it must be converted into a readable format. Oracle provides the TKPROF utility to accomplish this task. Using TKPROF, you can convert the raw trace file into a readable report.
Locating a trace file in the dump directory can be quite a task, especially if many other files exist. Two tricks speed this process. The first is to use the UNIX command ls -lt to list the files in date order, with the newest file listed first. The other is to issue a statement such as SELECT 'USERNAME' FROM DUAL as part of the trace and then use grep USERNAME *.trc to find the trace file.
Once the trace file has been located, it is necessary to run the TKPROF utility against it to produce readable output. This information is statistical and shows how queries perform at the database and operating system levels. The report produced by TKPROF contains CPU usage, disk utilization, and the count of rows returned by the query (or queries) enclosed in the trace file output. You can also have TKPROF return EXPLAIN PLAN information for each query in the trace. TKPROF is invoked as follows:
% tkprof ora_4952.trc ora_4952.log
This statement takes the trace output from the ORA_4952.TRC SQL*Trace file and generates its output in the file named ORA_4952.LOG. This particular statement does not generate an EXPLAIN PLAN for any of the queries contained in the trace file. Supplemental options enable you to control, to a certain extent, the information that is produced. They are
EXPLAIN Enables you to specify a username and password that will generate an EXPLAIN PLAN for each query
RECORD Enables you to specify an output file that will contain all the statements in the trace file
SORT Enables you to control the order in which the analyzed queries are displayed
SYS Indicates whether to include queries run against the SYS tables (the data dictionary) in the trace output
TABLE Specifies the schema.tablename to use when generating a report with the EXPLAIN option
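Combining these options, a hypothetical invocation that adds execution plans, sorts the queries by CPU time, and suppresses data dictionary queries might look like this (the username, password, and file names are placeholders):

```shell
% tkprof ora_4952.trc ora_4952.log explain=merlin/password sort=execpu sys=no
```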
When you run the trace file through TKPROF, it generates a report. For example,
************************************************************
select o.ordid, p.partid, o.qty, p.cost, (o.qty * p.cost)
from part p, order o
where o.partid = p.partid
call count cpu elapsed disk query current rows
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer hint: CHOOSE
Parsing user id: 22 (MERLIN)
● In most cases, the execute value should be higher than the parse value. If the number of parses is the same as the number of executions, you should consider increasing the size of the shared pool.
● Compare the amount of physical I/O (the total of the disk column) to the amount of logical I/O (the sums of the totals of the query and current columns). The logical I/O should be much higher than the physical I/O.
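The same logical-versus-physical comparison can be made per session while the database is running; here is a sketch using the V$SESS_IO view described later in this chapter:

```sql
SELECT sid, block_gets + consistent_gets "LOGICAL", physical_reads "PHYSICAL"
FROM v$sess_io
ORDER BY physical_reads DESC
/
```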
Dynamic Performance (V$) Tables
Another useful tool for database tuning is the dynamic performance tables, also called the v$ tables (which are really views despite this name). The v$ views are views on the Oracle x$ tables, which are SGA-held memory structures created by the database at startup. These tables—and their views—are updated in real time as the database runs and provide the DBA a good view of the current status of the database. Several third-party applications use the v$ tables to access statistical or performance monitoring data, and the views are used by the monitor component of Oracle Server*Manager. After database creation, the v$ tables can be accessed only by the user SYS, who can make grants on them to other users.
Oracle Corporation has stated that it is not committed to supporting the v$ tables in future product releases. Although that seems unlikely, given the number of products on the market that use these views, be aware of the possibility.
The v$ views are useful in many applications, such as backup and recovery, administrative monitoring, and performance tuning. Here are some of the more commonly used views as they apply to performance tuning:
V$DB_OBJECT_CACHE Contains information about all the database objects that currently exist in the library cache
of the SGA
V$FILESTAT Contains the number of physical reads and writes taking place on a specific data file
associated with the database
V$LATCH Contains a current statistical picture of all the latches within the database
V$LATCHHOLDER Contains the name of the current holder of each latch specified in V$LATCH
V$LATCHNAME Contains the name of each latch in the V$LATCH view
V$LIBRARYCACHE Contains statistics that represent the overall performance of the library cache area of the
SGA
V$ROLLSTAT Contains statistics on all the online rollback segments in the database
V$ROLLCACHE Contains statistical information about the performance of the data dictionary cache of the
SGA
V$SESSION_WAIT Provides information on the event or resource that each session is currently waiting
for
V$SESSTAT Contains current statistical information for each active database session
V$SESS_IO Contains current logical and physical I/O information for each active database session
V$SGASTAT Summarizes statistical information on the overall SGA
V$SQLAREA Contains statistical information on the cursor cache of the SGA
V$STATNAME Contains the names of all the statistics from V$SESSTAT
V$WAITSTAT Contains information on block contention—active only when TIMED_STATISTICS is set
to TRUE
Because statistics are gathered over a period of time, a database that has just started up does not have sufficient statistics to provide any sort of tuning information. Statistics gathered from a newly started database are misleading and cannot pinpoint actual performance problems. As a rule of thumb, a database should run for several hours before you gather any performance statistics.
Here is a query that uses the V$ views. It displays the name and current value of each database statistic and is useful for quickly seeing how the database is performing:
select n.statistic# , n.name , s.value
from v$statname n , v$sysstat s
where n.statistic# = s.statistic#
and value > 0
/
There are many more V$ views than are mentioned here. Many gather I/O, cache, and buffering statistics that are invaluable for performance tuning. Consult the Oracle7 Server Administrator's Guide and the Oracle7 Applications Developer's Guide for more information on these views.
SQL*DBA—the predecessor to Oracle Server*Manager—is being made obsolete in mid-1996. It contains several scripts that its monitor utility uses. Located in the $ORACLE_HOME/rdbms/sqldba directory, they give you insight into how to use some of the V$ views. They also show how many of the V$ views have relationships with other V$ views.
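A typical example of such a relationship is the join between V$LATCH and V$LATCHNAME on their common latch# column; here is a sketch:

```sql
SELECT n.name, l.gets, l.misses, l.sleeps
FROM v$latch l, v$latchname n
WHERE l.latch# = n.latch#
ORDER BY l.misses DESC
/
```

A high ratio of misses to gets on a latch points to contention for that latch.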
Tuning Database SGA
Problems with database applications often involve memory and disk drives. When the CPU runs faster than the throughput of the input/output devices (such as the disk drives), the system is called I/O bound. When the throughput of the input/output devices is faster than the CPU, the system is called CPU bound.
Most systems are I/O bound, so it is easy to dismiss poor performance as a by-product of poor throughput. Many DBAs, therefore, perform load balancing and contention analysis to optimize performance. What is forgotten, however, is that poor memory management can often aggravate performance problems. Poor use of the available memory can contribute to throughput that is less than superior. For example, sorts that could be optimized to run in memory instead run to disk, or the operating system pages and swaps processes to disk.
It is important, therefore, to understand the single most memory-intensive part of the Oracle RDBMS—the system global area (SGA). By definition, the SGA is simply a combination of buffers and caches stored in virtual memory that enables the database to function. To be efficient, Oracle performs many of its operations in memory, writing to disk only in bulk so as to minimize performance hits. This is good because, from the standpoint of software development, accessing the disk drive is "expensive" in terms of performance cost, whereas running a process in memory is "inexpensive."
As Figure 15.1 shows, the SGA is composed of three primary units:
● The redo log buffer
● The database buffer cache
● The shared SQL area (the shared pool)
Figure 15.1 Architecture of the Oracle SGA.
It is important to ensure that the SGA is large enough to fit comfortably into the system's existing memory. It should also be small enough to coexist with other applications and not allocate more memory than it requires. It is equally important to make certain that there is enough shared memory and that enough semaphores are available to support the database instance. Like all other aspects of performance tuning, memory management means balancing available resources against needed resources and reaching an effective compromise.
Examining the Current SGA
To tune the SGA of a database instance, you must first determine its current size. There are several ways to do this, including extracting the information from the DBA views or V$ tables or calculating it based on values in the INIT.ORA parameter file. The simplest method, however, is to issue the show sga command from Oracle Server*Manager. For example,
% svrmgrl
SVRMGR> connect internal
Connected
SVRMGR> show sga
Total System Global Area 95243632 bytes
Fixed Size 46384 bytes
Variable Size 70588480 bytes
Database Buffers 24576000 bytes
Redo Buffers 32768 bytes
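The same total can be derived from the V$ views; for instance, summing V$SGASTAT approximates the figure reported by show sga:

```sql
SVRMGR> SELECT SUM(bytes) FROM v$sgastat;
```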
The size of the SGA remains constant as long as the database is running, although the DBA can change it when the database is restarted.
Changing the SGA Size
The sum of the parts equals the whole, and sizing the SGA is no exception. Changing the size of the SGA requires that the values of some INIT.ORA parameters be modified, which in turn alters the overall size of the SGA. The following parameters control the size of the SGA:
DB_BLOCK_BUFFERS The number of database blocks (of size DB_BLOCK_SIZE) allocated to the database buffer cache
SHARED_POOL_SIZE The size, in bytes, of the shared SQL area (the shared pool)
LOG_BUFFER The size, in bytes, of the redo log buffer
Some of the parameters require their arguments in bytes, whereas others require their arguments in blocks.
Once the size of an SGA buffer is set, the size of the SGA remains constant as long as the database continues to run. If the values of these three parameters are changed and the database is restarted, they take effect immediately. You should make a backup of the INIT.ORA parameter file before you make considerable changes.
You should also ensure that these values are always set appropriately high, but not inordinately high. The ramifications of changing the size of the SGA are discussed later in this chapter.
Database Block Size
The "building blocks" of any database are its blocks. You set their size with the DB_BLOCK_SIZE parameter, and its range is operating system-specific—approximately 512 bytes to 16M.
This value represents how data "pieces" are transferred to and from the instance's SGA during an operation. The more data the database can transfer in a single operation, the fewer operations it has to perform; consequently, the overall performance of the instance improves. The value of DB_BLOCK_SIZE should be a multiple of the operating system block size. On some systems, the default operating system block size is sufficient; on others, the best speed comes from twice that value. The best way to determine this is to generate a test instance, use different block sizes, and conduct benchmark testing. Always keep in mind the limits imposed by the operating system when you do this. As with all other areas of performance tuning, there is a trade-off—setting the block size too high can actually degrade performance.
Once a database is created, the only way to change the value of DB_BLOCK_SIZE is to recreate the database. This makes sense. Whenever a database instance is created, Oracle physically allocates several database files of size X in which it will store various forms of information—the data dictionary, tables, indexes, and so on. These files are created with blocks of size DB_BLOCK_SIZE and are mapped so that the database can recognize each one. If the value of DB_BLOCK_SIZE is changed, the blocks no longer begin and end where the database expects. The RDBMS cannot correctly manipulate data if it cannot recognize the blocks.