ORACLE8i – Part 22

Document information:
- Title: Monitoring and Tuning Latches, Locks, and Waits
- Publisher: Sybex Inc.
- Subject: Database Management
- Year: 2002
- City: Alameda
- Pages: 40
- Size: 470.7 KB


CHAPTER 17 • MONITORING AND TUNING LATCHES, LOCKS, AND WAITS


If your system administrator manages these systems, you may not be privy to the mapping of physical disks. Perhaps the administrator doesn't understand the importance of such mapping, or doesn't know how to create such a map. Either way, it's imperative that you make clear to your system administrator that you need to know where this data resides, and with what other systems it interacts.

Correcting I/O problems can make a huge difference in performance. This action alone can transform you into a hero. Trust me; I've been there.

How’s the Shared Pool Doing?

After tuning I/O, you may still be seeing a good deal of latch contention. Now's the time to look at the shared pool and take advantage of the scripts provided in Chapter 15. A poorly tuned shared pool can cause all sorts of latch issues. Run the various scripts that monitor the hit ratios of the shared pool. Are your hit ratios low? If so, you must add memory.
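If you don't have the Chapter 15 scripts at hand, a quick check can be sketched like the following. (The ~0.95 threshold mentioned in the comment is a rule-of-thumb assumption, not an Oracle-prescribed value.)

```sql
-- Rough library cache hit ratio from V$LIBRARYCACHE
-- (values persistently below ~0.95 suggest shared pool pressure)
SELECT SUM(pinhits) / SUM(pins) AS lib_cache_hit_ratio
  FROM v$librarycache;

-- Dictionary cache misses, for comparison
SELECT SUM(getmisses) / SUM(gets) AS dict_cache_miss_ratio
  FROM v$rowcache;
```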

WARNING: Any time you add memory to any Oracle memory structure such as the shared pool, make sure you don't add so much that you cause the system to start thrashing memory pages between the swap disks and memory. This will inevitably result in performance problems. You'll only wind up worse off than before.

If the hit ratios are not low and you see thrashing, make sure you have not allocated memory to the point that you are paging or excessively swapping memory over to and from the disk. If you are, you must correct this problem immediately. You'll need to enlist the help of your system administrator to determine if you're having system memory contention issues.

In keeping with the theme of reducing I/O: the fewer blocks you have to deal with, the less work the database has to do, and the less latch contention you will see. Do everything you can to reduce I/Os during queries. Make sure your tables are allocating block storage correctly (look at PCTUSED/PCTFREE). Making sure tables load in order of the primary key (or most often used) index columns, and tuning your SQL to return the result set in the fewest number of block I/Os (logical or physical), will reduce latch contention; we guarantee it.

Finally, don’t just throw CPUs, disks, and memory at the problem: that’s the wrongkind of lazy solution and often doing so prolongs the problem As your database sys-tem grows, even the faster hardware will not be able to handle the load The onlyexception to this rule is if you simply do not have enough disk space to properly dis-tribute the I/O of the database If this is the case then you simply have to buy moredisk pronto


NOTE: In some cases, lack of memory in the shared pool or the database buffer cache actually is the problem. This can be true if you see low hit ratios in any of the memory areas.

Large memory chunks (those bigger than SHARED_POOL_RESERVED_MIN_ALLOC) are candidates for the reserved area of the shared pool, sized by SHARED_POOL_RESERVED_SIZE. If there isn't enough memory available for that chunk to be stored in reserved memory, it will be stored in the normal memory area of the shared pool.

You can positively affect shared pool fragmentation by increasing the SHARED_POOL_RESERVED_MIN_ALLOC parameter so that your largest PL/SQL programs are loaded there. This approach will eliminate fragmentation issues.

Another method that can be used to limit fragmentation of the shared pool is the use of the DBMS_SHARED_POOL.KEEP procedure to pin often-used PL/SQL objects in the shared pool. (See Chapter 20 for more on DBMS_SHARED_POOL.) You might consider pinning commonly used objects in the SGA every time the database starts up.
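A pinning call might look like the following sketch. The package name SCOTT.MY_PKG is a placeholder; DBMS_SHARED_POOL must be installed (via dbmspool.sql) before these calls will work.

```sql
-- Pin a frequently used package so it is not aged out of the shared pool
-- ('P' is the flag for packages/procedures/functions)
EXECUTE DBMS_SHARED_POOL.KEEP('SCOTT.MY_PKG', 'P');

-- To release the pin later:
EXECUTE DBMS_SHARED_POOL.UNKEEP('SCOTT.MY_PKG', 'P');
```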

Doing so will help improve performance, and will go a long way toward reducing performance problems.

Tune Up Your SQL

If everything looks hunky-dory with the shared pool, then make sure you are using reusable SQL statements with bind variables as much as possible. If you aren't, you can cause all sorts of problems, including latching contention. See Chapter 16 for more information on how to write reusable SQL and how to determine if SQL needs to be rewritten. You may also want to take advantage of cursor sharing in Oracle8i, which is also discussed in Chapter 16.

General Tuning for Latch Contention

Because there are various levels of latches, contention for one latch can cause contention against other, lower-level latches. A perfect example is the attempt to acquire a redo copy latch to quickly allocate memory in the redo log buffer. Depending on the size of the redo entry, Oracle may have to acquire one of the redo allocation latches rather than use the one redo copy latch. Having acquired the redo allocation latch, Oracle will then quickly try to acquire the level-six redo copy latch. Oracle needs this latch only long enough to allocate space in the redo log buffer for the entries it needs to write; then it releases the latch for other processes to use. Unfortunately, a delay in getting the redo copy latch can keep other processes from acquiring the available redo allocation latches. The bottom line is that you must always deal with latch contention level by level, tuning from the highest level (15) to the lowest (0).

Consider increasing the _SPIN_COUNT parameter if you are seeing excessive sleeps on a latch. On many systems it defaults to 2000, but yours might be different. If you're seeing problems with redo copy latches or other latch sleeps, see what you can do by playing with this parameter.

You can use the ALTER SYSTEM command to reset the spin count as well, which means you don't have to shut down the database. Here's the syntax for this:

ALTER SYSTEM SET "_SPIN_COUNT" = 4000;

After you have reset the spin count, let the system run normally for a few minutes and then check to see if the number of spins has dropped. Also, has there been any change in the number of sleeps? (Note that sleeps for the redo copy latch are not unusual.)
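One way to watch those numbers is a quick query against V$LATCH. (The LIKE filter on redo-related latches is just an example; widen or drop it as needed.)

```sql
-- Gets, misses, sleeps, and spin-gets for the redo-related latches
SELECT name, gets, misses, sleeps, spin_gets
  FROM v$latch
 WHERE name LIKE 'redo%'
 ORDER BY sleeps DESC;
```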

WARNING: Remember that hidden or undocumented parameters are not supported by Oracle in most cases. That includes _SPIN_COUNT (though it was a documented parameter until Oracle8). With this in mind, test all hidden parameters before you decide to use them in production, and find out about any bugs by checking Oracle's registered information.

TIP: In conjunction with your latch contention tuning, keep in mind that tuning bad SQL statements can have a huge impact on latching overall. So by all means tune the instance as best you can, but often your best results will come from SQL tuning.

Tuning Redo Copy Latch Problems

Oracle's multiple redo copy latches are designed to relieve the hard-pressed single redo allocation latch. When using the redo copy latch, Oracle acquires the redo allocation latch only long enough to get memory in the redo log buffer allocated. Once that operation is complete, it releases the redo allocation latch and writes the redo log entry through the redo copy latch. Note that sleeps for the redo copy latches are normal and unique to this latch.

The process will sleep if it fails to acquire one of the redo copy latches. When it wakes up, it tries to acquire the next redo copy latch in order, trying one at a time until it is successful. Oracle executes a sleep operation between each acquisition attempt, so you see the increases in the SLEEPS column of the V$LATCH data dictionary view. That being the case, if you get multiple processes fighting for this latch, you are going to get contention. You can do a couple of things to try to correct this problem.

Increase the number of redo copy latches by increasing the default value of the parameters LOG_SIMULTANEOUS_COPIES and LOG_ENTRY_PREBUILD_THRESHOLD. Check your operating system documentation for restrictions on increasing these values.
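In init.ora terms, that tuning might look like the following sketch. The values shown are illustrative assumptions, not recommendations; a common rule of thumb is to allow up to twice as many redo copy latches as CPUs.

```
# init.ora -- hypothetical values; size to your CPU count and redo volume
LOG_SIMULTANEOUS_COPIES = 8          # number of redo copy latches
LOG_ENTRY_PREBUILD_THRESHOLD = 2048  # prebuild redo entries up to this size (bytes)
```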

Tuning Redo Allocation Latch Problems

Oracle's lone redo allocation latch serializes access to the redo log buffer, allocating space in it for the server processes. Sometimes this latch is held for the entire period of the redo write, and sometimes just long enough for the allocation of memory in the redo log buffer. The parameter LOG_SMALL_ENTRY_MAX_SIZE sets a threshold for whether the redo allocation latch will be acquired for the duration of the redo log buffer write. If the size of the redo is smaller (in bytes) than LOG_SMALL_ENTRY_MAX_SIZE, the redo allocation latch will be used. If the redo is larger, a redo copy latch will be used. So if you see latch contention in the form of sleeps or spins on the redo allocation latch, consider reducing LOG_SMALL_ENTRY_MAX_SIZE.
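As a sketch (the value 80 is a hypothetical starting point; tune it against your own latch statistics):

```
# init.ora -- hypothetical: push more entries toward the redo copy latches
LOG_SMALL_ENTRY_MAX_SIZE = 80   # entries larger than 80 bytes use a redo copy latch
```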

NOTE: There is a school of opinion for setting LOG_SMALL_ENTRY_MAX_SIZE to 0 and always using the redo copy latches. We contend that things Oracle are rarely so black and white. Always test a setting like this, and always be willing to accept that something else will work better.

Other Shared Pool Latching Problems

Latching issues in the shared pool are usually caused by insufficient memory allocation. As far as database parameters go, there isn't a lot to tune with respect to the shared pool beyond memory. Of course, maintain your typical vigilance over I/O distribution and bad SQL.


Tuning Buffer Block Waits

Data block waits listed in the V$WAITSTAT view can indicate an insufficient number of free lists available on the table or index where the wait is occurring. You may need to increase it. In Oracle8i, you can dynamically increase or decrease the number of free lists in an object by using the ALTER TABLE statement with the FREELISTS keyword in the STORAGE clause.
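A minimal sketch, assuming a table named ORDERS that is suffering data block waits (both the table name and the free list count are assumptions):

```sql
-- Check where block waits are occurring
SELECT class, count, time
  FROM v$waitstat
 ORDER BY count DESC;

-- Raise the free list count on the suspect table (dynamic in Oracle8i)
ALTER TABLE orders STORAGE (FREELISTS 4);
```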

NOTE: By the way, don't expect to see waits in V$WAITSTAT for the FREELIST class. This statistic applies only to free list groups. Free list groups are used in Oracle Parallel Server (OPS) configurations, so it's unlikely that you'll use them if you are not using OPS.

Another concern is the setting of the object's INITRANS parameter. The INITRANS parameter defaults to 1, but if there is substantial DML activity on the table, you may need to increase the setting. This parameter, as well, can be adjusted dynamically. Note that by increasing either free lists or INITRANS for an object, you are reducing the total space available in a block for actually storing row data. Keep this in mind.
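For example (the table name and value are assumptions; the change affects blocks used after the statement, not blocks already formatted):

```sql
-- Reserve room for four concurrent transactions per block
ALTER TABLE orders INITRANS 4;
```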

It can be hard to identify exactly what object is causing the buffer block wait problems. Perhaps the easiest way is to try to capture the waits as they occur, using the V$SESSION_WAIT view. Remember that this view is transitory, and you might need to create a monitoring script to try and catch some object usage trends. Another way to monitor object usage is to enable table monitoring and watch the activity recorded in the SYS.DBA_TAB_MODIFICATIONS view. You can create a job to copy those stats to a permanent table before you update the statistics. See Chapter 16 for more on table monitoring.

WARNING: Carefully measure the performance impact of monitoring. It generally is insignificant, but with a system that is already "performance challenged," you may further affect overall performance by enabling monitoring. Nevertheless, the potential gains from knowing which tables are getting the most activity may well override performance concerns. Remember that short-term pain for long-term gain is not a bad thing. You can pay me now, or you can pay me later.
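The V$SESSION_WAIT capture described above can be sketched as follows. The join to DBA_EXTENTS maps a file/block pair to its segment; note that this join can be slow on databases with many extents.

```sql
-- Catch buffer busy waits as they happen (P1 = file#, P2 = block#)
SELECT sid, p1 AS file_id, p2 AS block_id
  FROM v$session_wait
 WHERE event = 'buffer busy waits';

-- Map a captured file/block pair to its segment
-- (substitute the values caught above)
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = &file_id
   AND &block_id BETWEEN block_id AND block_id + blocks - 1;
```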

It's the same old saw: The easiest way to reduce buffer block waits is to tune your I/O and then tune your SQL. Statements and databases that run efficiently will reduce the likelihood of buffer block waits.


Beyond Simple Database Management (Part III)

By the way, others suggest that reducing the block size of your database is another solution. This approach might or might not reduce waiting; in any case, it's not a good idea. The overwhelming advantages of larger block sizes cannot be ignored.

Last Word: Stay on Top of I/O Throughput!

We discussed proper placement of database datafiles in Chapter 4. If you're running multiple databases, I/O distribution becomes critical. In addition, file placement, partitioning, tablespaces, and physical distribution all affect I/O throughput to an even greater degree. Thus the concepts discussed in Chapter 4 have direct application in tuning methodologies. We'll close this chapter with seven tenets for maximizing I/O throughput:

1. Separate data tablespaces from index tablespaces.
2. Separate large, frequently used tables into their own tablespaces. If you partition tables, separate each partition into its own tablespace.
3. Determine which tables will frequently be joined together and attempt to distribute them onto separate disks. If you can manage to put them on separate controllers as well, so much the better.
4. Put temporary tablespaces on their own disks, particularly if intense disk sorting is occurring.
5. Beware of the system tablespace. Oftentimes DBAs think it is not heavily used. You might be surprised how frequently it is read from and written to. Look at V$FILESTAT and see for yourself.
6. Separate redo logs onto different disks and controllers. This is for both performance reasons and recoverability reasons.
7. Separate your archived redo logs onto different disks. The archiving process can have a significant impact on the performance of your system if you do not distribute the load correctly.
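The V$FILESTAT check in tenet 5 can be sketched as a join to V$DATAFILE for readable file names:

```sql
-- Physical reads/writes per datafile; watch for surprises on SYSTEM
SELECT df.name, fs.phyrds, fs.phywrts
  FROM v$filestat fs, v$datafile df
 WHERE fs.file# = df.file#
 ORDER BY fs.phyrds + fs.phywrts DESC;
```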

TIP: One last bit of advice. On occasion, it's the actual setup of the disk and file systems that is hindering performance. Make sure you ask your system administrator for help if you are having serious performance problems. He or she might have additional monitoring tools on hand that can help you solve your problem. A lazy DBA takes advantage of all resources at his or her disposal, always.


CHAPTER 18

Oracle8i Parallel Processing

FEATURING:
• Parallelizing Oracle operations
• Using parallel DML and DDL
• Executing parallel queries
• Performing parallel recovery operations
• Tuning and monitoring parallel processing operations


In Oracle, a single database operation can be divided into subtasks, which are performed by several different processors working in parallel. The result is faster, more efficient database operations.

In this chapter, we discuss several options for effectively implementing parallel processing. The chapter begins with some basics of parallelizing operations, and then discusses how to use parallel DML and DDL, execute parallel queries, and perform parallel recovery operations. Finally, you will learn about the parallel processing parameters and how to monitor and tune parallel processing operations.

Parallelizing Oracle Operations

In Oracle8i, parallel processing is easy to configure, and it provides speed and optimization benefits for many operations, including the following:

• Batch bulk changes
• Temporary rollup tables for data warehousing
• Data transfer between partitioned and nonpartitioned tables
• Queries
• Recovery operations

Prior to Oracle8i, you needed to configure your database instance for parallel DML operations. Oracle8i automatically assigns these values. Setting the init.ora parameter PARALLEL_AUTOMATIC_TUNING to TRUE will establish all of the necessary parameters to default values that will work fine in most cases. (Oracle recommends that PARALLEL_AUTOMATIC_TUNING be set to TRUE whenever parallel execution is implemented.)
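In init.ora, that is a one-line change; the commented overrides below it are optional and shown only as hypothetical examples.

```
# init.ora -- let Oracle derive the parallel execution defaults
PARALLEL_AUTOMATIC_TUNING = TRUE

# Optional, hypothetical manual overrides:
# PARALLEL_MAX_SERVERS = 8
# PARALLEL_MIN_SERVERS = 2
```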

NOTE: Setting the PARALLEL_AUTOMATIC_TUNING parameter to TRUE automatically sets other parallel processing parameters. If necessary, you can adjust individual parallel processing parameters in the init.ora file to tune your parallelized operations. See the "Tuning and Monitoring Parallel Operations" section later in this chapter for details.

When you use parallel processing, the database evaluates the number of CPUs on the system to determine the default degree of parallelism (DOP). The default degree of parallelism is determined by two initialization parameters. First, Oracle estimates the number of blocks in the table being accessed (based on statistics in the data dictionary) and divides that number by the value of the initialization parameter PARALLEL_DEFAULT_SCANSIZE. Next, you can limit the number of query servers to use by default by setting the initialization parameter PARALLEL_DEFAULT_MAX_SCANS. The smaller of these two values is the default degree of parallelism. For example, if you have a table with 70,000 blocks and the parameter PARALLEL_DEFAULT_SCANSIZE is set to 1000, the default degree of parallelism is 70.

Rather than accepting the default degree of parallelism, you can tell Oracle what the degree of parallelism should be. You can assign a degree of parallelism when you create a table with the CREATE TABLE command or modify a table with the ALTER TABLE command. Also, you can override the default degree of parallelism by using the PARALLEL hint (as explained in the "Using Query Hints to Force Parallelism" section later in this chapter). To determine the degree of parallelism, Oracle will first look at the system parameters, then the table settings, and then the hints in the SQL statements.

NOTE: When you are joining two or more tables and the tables have different degrees of parallelism associated with them, the highest value represents the maximum degree of parallelism.
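Setting a table-level degree of parallelism might look like this sketch (the table name and degree are assumptions):

```sql
-- Give CUSTOMER a default degree of parallelism of 4
ALTER TABLE customer PARALLEL (DEGREE 4);

-- Revert the table to serial execution
ALTER TABLE customer NOPARALLEL;
```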

Using Parallel DML and DDL

You can use parallel DML to speed up the execution of INSERT, DELETE, and UPDATE operations. Also, any DDL operations that both create and select can be placed in parallel. The PARALLEL option can be used for creating indexes, sorts, tables—any DDL operation, including SQL*Loader.

Enabling and Disabling Parallel DML

When you consider what is involved in performing standard DML statements for INSERT, UPDATE, and DELETE operations on large tables, you will be able to see the advantage of parallel processing. To use parallel DML, you first need to enable it. Use the following statement to enable parallel DML:

ALTER SESSION ENABLE PARALLEL DML;


When you're finished with the task you want to perform in parallel, you can disable parallel DML, as follows:

ALTER SESSION DISABLE PARALLEL DML;

Alternatively, simply exiting the session disables parallel DML.

You can also use the ALTER SYSTEM command to enable and disable the PARALLEL option.

Creating a Table with Parallel DML

Let's work through an example to demonstrate parallel DML in action. In this example, you will use parallel DML to create a table that combines information from two other existing tables. The example involves data from a product-buying club of some sort (such as a music CD club or a book club). The two tables that already exist are PRODUCT and CUSTOMER.

The PRODUCT table contains information about the products that a customer has ordered. It has the following columns:

CUST_NO, PROD_NO, PROD_NAME, PROD_STAT, PROD_LEFT, PROD_EXPIRE_DATE, PROD_OFFERS

The CUSTOMER table contains information about each customer, including the customer's name, address, and other information. It has the following columns:

CUST_NO, CUST_NAME, CUST_ADD, CUST_CITY, CUST_STATE, CUST_ZIP, CUST_INFO


Using parallel DML, you will create a third table named PROD_CUST_LIST. This table will contain combined information from the PRODUCT and CUSTOMER tables, based on specific criteria. It will have the following columns:

CUST_NO, PROD_NAME, PROD_NO, CUST_NAME, CUST_ADD, CUST_CITY, CUST_STATE, CUST_ZIP, PROD_LEFT, PROD_OFFERS

The CUSTOMER table has a one-to-many relationship with the PRODUCT table. Using parallel DML is a fast way to create the PROD_CUST_LIST table, since this method uses the power of multiple CPUs.

For the PROD_CUST_LIST table, you will identify all customers who have five products left prior to their club contract's expiration (PROD_LEFT = 5). You might use this information to send a discounted renewal option to these customers. You also will include customers whose contract has expired and who have been sent three or fewer renewal offers (PROD_OFFERS <= 3). You may want to offer these customers a special incentive to get them back. Listing 18.1 shows the code to create the table.

Listing 18.1: Creating a Table with Parallel DML

SQL> ALTER SESSION ENABLE PARALLEL DML;

SQL> INSERT INTO prod_cust_list
       (SELECT b.cust_no, a.prod_name, a.prod_no, a.prod_left,
               a.prod_offers, b.cust_name, b.cust_add, b.cust_city,
               b.cust_state, b.cust_zip
          FROM product a, customer b
         WHERE a.cust_no = b.cust_no
           AND a.prod_left = 5
           AND a.prod_expire_date <= SYSDATE
           AND a.prod_offers <= 3);

SQL> COMMIT;

SQL> ALTER SESSION DISABLE PARALLEL DML;

The first ALTER SESSION command enables parallel processing. The INSERT statement that follows processes the operation in parallel. The final ALTER SESSION statement disables the PARALLEL option.

What you accomplished here with the INSERT statement can be done with the DELETE statement and UPDATE statement as well. You can see how this feature could be useful for creating and updating data warehousing applications, as well as for reporting from them.

Using Parallel DDL

Parallel DDL (PDDL) is a misnomer; rather than literally creating an object in parallel, PDDL loads the data in parallel. PDDL applies to situations in which you build the object in the same statement in which you define it. The example in Listing 18.1 created a third table by selecting from existing tables in the same operation. This is parallel DDL, because the code creates the table and populates it with data at the same time. Similarly, the SELECT command and associated INSERT command can be split into parallel processes. Thus, any of your DDL operations that both create and select can be placed in parallel. Here is an example of using parallel DDL to create an index:

SQL> CREATE INDEX product_cust_no_n1
     ON product(cust_no)
     PARALLEL;

In this example, index creation results in a full table scan, but it is accomplished in parallel. The SELECT portion is run using PARALLEL.


Parallel Loading with SQL*Loader

If you have a lot of data to load via SQL*Loader (discussed in depth in Chapter 22), using the PARALLEL option can reduce the load time. However, if you think about what SQL*Loader is accomplishing in a direct-path load operation, you will see that parallel loading is a bit more complex. In a direct-path load, SQL*Loader skips the step of trying to look for space in existing blocks. It goes to the table's high-water mark and starts inserting rows in new blocks, periodically reestablishing the new high-water mark.

Multiple processes will need to know what each of the other processes is accomplishing. Since only one process can access the table header block at a time, setting PARALLEL=TRUE for multiple direct loads changes what the loading processes do. In this case, each process creates its own temporary segment in the tablespace that is being loaded. The indexes are not maintained. Once each process has completed the load it is assigned, each temporary segment is merged into the table by adding the extents to the header block of the table.

Because the parallel load is accomplished in this fashion, each process needs to have its own input file. This requires some planning on your part, rather than leaving the task up to the Parallel Manager. You must drop all indexes and re-create them when the load is complete.
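A hypothetical pair of parallel direct-path invocations might look like the following. The username, control files, and datafiles are placeholders; as noted above, each session gets its own input file.

```
sqlldr userid=scott/tiger control=load1.ctl data=part1.dat direct=true parallel=true
sqlldr userid=scott/tiger control=load2.ctl data=part2.dat direct=true parallel=true
```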

Executing Parallel Queries

The ideal environment for implementing a parallel query (PQ) is one in which tables and indexes are partitioned, so that the query can access multiple partitions. Using partitioning in conjunction with parallelism is the best way to speed up the execution of your SQL code. Parallel queries on partitioned tables execute quickly through the use of partition pruning—a very large table is separated into many different sections, and then the parallel operations access only the parts needed, rather than the whole table. (See Chapter 24 for details on Oracle partitioning.)

Parallel query operation (PQO) employs a producer-consumer input/output approach. A SQL statement is handled through a client process, the Query Coordinator (QC), and parallel execution (PX) processes. The client process sends the SQL statement to the QC. The QC takes the original SQL, breaks it apart, and sends it to the various CPUs. The PX slave receives the SQL from the QC and then gathers the data from the desired tables. The producer part of the PX slave accesses the database (produces data). The consumer part of the PX slave accepts (consumes) data from the producer. The PX slave returns the results to the QC, where the results are combined into one return set and returned to the client process.

The total number of PX slaves on the instance is controlled by the PARALLEL_MAX_SERVERS parameter in init.ora. PX slaves are borrowed from a pool of slaves in the instance as needed. Communication between slaves is handled by exchanging messages through a message queue in the SGA. For example, if the PARALLEL_MAX_SERVERS parameter is set to 8, the number of PX slaves borrowed from the pool is a maximum of eight. The PARALLEL_MAX_SERVERS parameter might also be set to a lower value, depending on how the memory pool is set up and how many other processes are presently running with the PARALLEL option on. You can adjust the size of the message buffer and the PARALLEL_MAX_SERVERS parameter, as described in the "Tuning and Monitoring Parallel Operations" section later in this chapter.
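To see how many slaves are busy or idle at any moment, a quick look at V$PQ_SYSSTAT can be sketched as:

```sql
-- Instance-wide parallel query statistics
-- (servers busy, idle, started, shut down, and so on)
SELECT statistic, value
  FROM v$pq_sysstat
 WHERE statistic LIKE 'Servers%';
```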

Using One PX Slave

The following is an example of a parallel query executed using one PX slave, using the PRODUCT table from the example presented in the "Creating a Table with Parallel DML" section earlier in this chapter:

SQL> SELECT COUNT(1) FROM product;

Figure 18.1 shows the process flow when the parallel query is invoked. First, the server process that communicates with the client process becomes the QC. The QC is responsible for handling communications with the client and managing an additional set of PX slave processes to accomplish most of the work in the database. The QC enlists multiple PX slave processes, splits the workload between those slave processes, and passes the results back to the client. The example shown in Figure 18.1 depicts three PX slave processes, which together are considered one set of PX slaves.


FIGURE 18.1: Parallel query execution with one PX slave

Here's how the query is handled behind the scenes:

1. The client process sends the SQL statement to the Oracle server process.

2. The server process, which is controlled by PARALLEL_MAX_SERVERS, does the following:
   • Develops the best parallel access path to retrieve the data
   • From the original SQL, creates multiple queries that access specific table partitions and/or table ROWID ranges
   • Becomes the QC to manage PX slave processes
   • Recruits PX slave processes to execute the rewritten queries
   • Assigns partitions and ROWID ranges to the PX slave processes

3. The PX slave processes do the following:
   • Accept queries from the QC
   • Process the partition and ROWID ranges assigned by the QC
   • Communicate results to other PX slave processes via messages through message queues


4. The QC does the following:
   • Receives the result sets from the PX slave processes
   • Performs final aggregation if necessary
   • Returns the final result set to the client process

Using Two PX Slaves

Two PX slaves are used for each parallel query execution path for a merge or a hash join operation, or when a sorting or an aggregation operation (functions such as AVG, COUNT, MAX, and MIN) is being accomplished in the original query. In this case, each slave acts as both producer and consumer in the relationship. The slaves that access the database produce data, which is then consumed by the second set of PX slaves.

Here is an example of a parallel query executed using two PX slaves, using the PRODUCT table from the example presented in the "Creating a Table with Parallel DML" section earlier in this chapter:

SQL> SELECT prod_name, COUNT(1)
     FROM product
     GROUP BY prod_name;

In a sorting operation (GROUP BY), the first set of PX slave processes will select rows from the database and apply limiting conditions. The result will be sent to the second set of PX slave processes for sorting. The second set of PX slave processes has the task of sorting rows within a particular range. Each of the PX (producer) slave processes that retrieved data directly sends its results to the designated slave process, according to the sort key.

Figure 18.2 shows the process flow when two PX slaves are used. This figure depicts two columns of PX slave processes with communication between each column, representing the two slave sets. The row of PX slave processes next to the database is considered the producer, because these slave processes get the data, and the second row of PX slave processes is considered the consumer, because these processes receive the data from the first set of PX slave processes to send back to the QC.


FIGURE 18.2: Parallel query execution with two PX slaves

Using Query Hints to Force Parallelism

Using hints to override the degree of parallelism established on a table is an easy way to manage the work to be performed. For example, suppose you have a table that has the PARALLEL option turned on and set to 2. Now you need to get the SUM(AMOUNT) for each quarter from this table, so you want to increase the degree of parallelism to get the results faster. For example, if you raise the degree of parallelism to 6, you divide the work six ways and speed up the operations.

NOTE: As noted earlier in the chapter, the order of operation is system parameter first; then table settings, which can override system parameters; then hints, which override table settings.

You can enable parallelism with the PARALLEL hint and disable it with the NOPARALLEL hint. The PARALLEL hint has three parameters:

• The table name
• The degree of parallelism
• The instance setting

Since there are no keywords in the hint, you must be careful with the syntax. In the following example, a hint is used to tell the Optimizer to use a degree of parallelism of 3 when querying the CUSTOMER table.



SELECT /*+ PARALLEL(customer,3) */ *
  FROM customer;

You can also use hints to specify the number of instances that should be involved in performing the query. Simply include a value for the instances after the degree of parallelism parameter. To turn parallelism off for a query, use the NOPARALLEL hint, as in this example:

SELECT /*+ NOPARALLEL(customer) */ *
  FROM customer;
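The instances variant can be sketched as follows; the combination of a degree of 3 across 2 instances is a hypothetical example and presumes a multi-instance (OPS) configuration.

```sql
-- Degree of 3 on each of 2 instances (hypothetical OPS configuration)
SELECT /*+ PARALLEL(customer,3,2) */ *
  FROM customer;
```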

Performing Parallel Recovery Operations

During an Oracle database recovery operation, the recovery server process has a lot to do. It reads a recovery record from the log file; reads the block from the datafile, if necessary, and places it into the buffer cache; and then applies the recovery record. It repeats this process until all of the recovery records have been applied. This means that the recovery server process is busy performing a great deal of reading, writing, and blocking on input/output during database recovery. Given that database recovery is often a time-pressured operation, the ability to speed it up by parallelization is clearly welcome.

NOTE Prior to Oracle 7.1, the only form of parallel recovery was to start up multiple user sessions to recover separate datafiles at the same time. Each session read through the redo logs independently and applied changes for its specified datafile. This method depended on the ability of the I/O subsystem to parallelize the separate operations. If the operating system couldn't parallelize them, you would see little improvement in performance.

Oracle 7.1 and later offer true parallel recovery capabilities. With this feature, the recovery server process acts as a coordinator for several slave processes. The recovery server process reads a recovery record from the redo log file and assigns the recovery record to a slave process, continuing until all of the recovery records have been applied. The slave process performs the other steps: It receives each recovery record from the recovery server process, reads a block into the buffer cache if necessary, and applies the recovery record to the data block. The slave process continues until it is told that the recovery is complete.

You can invoke parallel recovery in either of two ways:

• Set RECOVERY_PARALLELISM in the init.ora file.

• Supply a PARALLEL clause with the RECOVER command in Server Manager. (See Chapter 10 for more information about the RECOVER command.)

You must set PARALLEL_MAX_SERVERS above 0 before you can enable parallel recovery, because parallel recovery uses the parallel servers as the recovery slaves. The RECOVERY_PARALLELISM parameter specifies the number of processes that will participate in parallel recovery. The RECOVERY_PARALLELISM setting cannot be greater than the PARALLEL_MAX_SERVERS setting; Oracle will not exceed the value of PARALLEL_MAX_SERVERS for recovery, even if the DBA requests a higher degree of parallelism. The PARALLEL_MAX_SERVERS parameter is discussed in more detail in the next section.
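The two approaches can be sketched as follows; the values shown are placeholders, not recommendations:

```sql
-- Approach 1: init.ora settings (configuration lines, not SQL).
-- RECOVERY_PARALLELISM may not exceed PARALLEL_MAX_SERVERS.
--   PARALLEL_MAX_SERVERS = 8
--   RECOVERY_PARALLELISM = 4

-- Approach 2: request parallel recovery explicitly with the
-- RECOVER command in Server Manager.
RECOVER DATABASE PARALLEL (DEGREE 4);
```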

Contrary to Oracle's claim that there is little benefit to using parallel recovery with a setting of less than 8, personal experience reveals that the best performance is achieved with RECOVERY_PARALLELISM set to two times the CPU count. Systems with faster disk channels may benefit from a higher setting, perhaps three times the CPU count. We recommend tuning toward disk channel saturation.
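For example, on a hypothetical 8-CPU server, the rule of thumb above works out to the following init.ora fragment:

```
# 2 x CPU count on an 8-CPU machine
RECOVERY_PARALLELISM = 16
# Must be at least as large as RECOVERY_PARALLELISM
PARALLEL_MAX_SERVERS = 16
```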

NOTE If your system is using asynchronous I/O, there will be little benefit to using parallel recovery.

Tuning and Monitoring Parallel Operations

Oracle8i uses many initialization parameters to control how various parallel-processing operations will operate. As explained earlier in the chapter, when you set the init.ora parameter PARALLEL_AUTOMATIC_TUNING to TRUE, Oracle automatically sets other parallel processing parameters. If necessary, you can adjust the other parameters individually to tune parallel operations on your system. Table 18.1 lists the init.ora parameters for parallel server processes, along with a brief description, the default value, and the valid values for each parameter.
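As a minimal illustration, the init.ora fragment below enables automatic tuning; the commented alternative sets a few of the parameters by hand (the values are placeholders, not recommendations):

```
# Let Oracle derive the other parallel execution settings.
PARALLEL_AUTOMATIC_TUNING = TRUE

# Or tune manually instead:
# PARALLEL_MIN_SERVERS = 2
# PARALLEL_MAX_SERVERS = 16
# PARALLEL_MIN_PERCENT = 50
```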

TABLE 18.1: INIT.ORA PARAMETERS FOR PARALLEL SERVER PROCESSES

FAST_START_PARALLEL_ROLLBACK
  Default: FALSE (no parallel recovery). Specifies the number of processes spawned to perform parallel recovery. You can set this to LOW (the number of recovery servers may not exceed 2 × the CPU count) or HIGH (the number of recovery servers may not exceed 4 × the CPU count).

LARGE_POOL_SIZE
  Default: 0. Controls whether large objects are stored in the large pool section of the shared pool. The minimum value is 600KB.

PARALLEL
  Default: FALSE. Controls whether direct loads are performed using parallel processing.

PARALLEL_ADAPTIVE_MULTI_USER
  Default: FALSE. Varies the degree of parallelism based on the total perceived load on the system.

PARALLEL_AUTOMATIC_TUNING
  Default: FALSE. When set to TRUE, Oracle automatically sets the other parallel processing parameters.

PARALLEL_BROADCAST_ENABLED
  Default: FALSE. Optimizes parallelized joins involving very large tables joined to small tables.

PARALLEL_EXECUTION_MESSAGE_SIZE
  Default: Depends on operating system; about 2KB. Controls the amount of shared pool space used by parallel query operations.

PARALLEL_INSTANCE_GROUP
  Default: NULL. Defines the instance group (by name) used for query server processes.

PARALLEL_MAX_SERVERS
  Default: 0. Sets the maximum number of parallel processes that can be created for the instance. The maximum setting is 3599.

PARALLEL_MIN_PERCENT
  Default: 0. Sets the minimum percentage of requested parallel processes that must be available in order for the operation to execute in parallel. The maximum setting is 100.

PARALLEL_MIN_SERVERS
  Default: 0. Sets the minimum number of parallel processes created at instance startup to be used by parallel operations in the database. Valid values are 0 to the value of PARALLEL_MAX_SERVERS.

PARALLEL_SERVER
  Default: FALSE. Enables Oracle Parallel Server (OPS).
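Once parallel operations are running, you can watch their behavior through the V$PQ_SYSSTAT dynamic performance view. A minimal sketch follows (the exact statistic names can vary by release):

```sql
-- How many parallel server processes are busy or idle,
-- and how many parallel queries have been started.
SELECT statistic, value
FROM   v$pq_sysstat
WHERE  statistic IN ('Servers Busy', 'Servers Idle', 'Queries Initiated');
```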
