OCA/OCP Oracle Database 11g All-in-One Exam Guide - P8



TIP Determining the optimal size is a matter for performance tuning, but it is probably safe to say that most databases will need a shared pool of several hundred megabytes. Some applications will need more than a gigabyte, and very few will perform adequately with less than a hundred megabytes.

The shared pool is allocated at instance startup time. Prior to release 9i of the database it was not possible to resize the shared pool subsequently without restarting the database instance, but from 9i onward it can be resized up or down at any time. This resizing can be either manual or (from release 10g onward) automatic according to workload, if the automatic mechanism has been enabled.

EXAM TIP The shared pool size is dynamic and can be automatically managed.
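
For illustration (this example is not from the book's exercise), the shared pool can be resized manually with an ALTER SYSTEM command; the 400M figure is an arbitrary value, and if automatic SGA management is enabled the parameter acts as a minimum rather than a fixed size:

alter system set shared_pool_size = 400m;
-- Confirm the new size:
select component, current_size from v$sga_dynamic_components
  where component = 'shared pool';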

The Large Pool

The large pool is an optional area that, if created, will be used automatically by various processes that would otherwise take memory from the shared pool. One major use of the large pool is by shared server processes, described in Chapter 4 in the section "Use the Oracle Shared Server Architecture." Parallel execution servers will also use the large pool, if there is one. In the absence of a large pool, these processes will use memory from the shared pool. This can cause contention for the shared pool, which may have negative results. If shared servers or parallel servers are being used, a large pool should always be created. Some I/O processes may also make use of the large pool, such as the processes used by the Recovery Manager when it is backing up to a tape device.

Sizing the large pool is not a matter for performance. If a process needs large pool memory, it will fail with an error if that memory is not available. Allocating more memory than is needed will not make statements run faster. Furthermore, if a large pool exists, it will be used: it is not possible for a statement to start off by using the large pool, and then revert to the shared pool if the large pool is too small.

From 9i release 2 onward it is possible to create and to resize a large pool after instance startup. With earlier releases, it had to be defined at startup and was a fixed size. From release 10g onward, creation and sizing of the large pool can be completely automatic.

EXAM TIP The large pool size is dynamic and can be automatically managed.
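
A minimal sketch of creating a large pool at runtime (the 64M value is arbitrary, not a recommendation from the book):

alter system set large_pool_size = 64m;
-- In SQL*Plus, confirm the setting:
show parameter large_pool_size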

The Java Pool

The Java pool is only required if your application is going to run Java stored procedures within the database: it is used for the heap space needed to instantiate the Java objects. However, a number of Oracle options are written in Java, so the Java pool is considered standard nowadays. Note that Java code is not cached in the Java pool: it is cached in the shared pool, in the same way that PL/SQL code is cached.

The optimal size of the Java pool is dependent on the Java application, and how many sessions are running it. Each session will require heap space for its objects. If the Java pool is undersized, performance may degrade due to the need to continually reclaim space. In an EJB (Enterprise JavaBean) application, an object such as a stateless session bean may be instantiated and used, and then remain in memory in case it is needed again: such an object can be reused immediately. But if the Oracle server has had to destroy the bean to make room for another, then it will have to be reinstantiated next time it is needed. If the Java pool is chronically undersized, then the applications may simply fail.

From 10g onward it is possible to create and to resize the Java pool after instance startup; this creation and sizing of the Java pool can be completely automatic. With earlier releases, it had to be defined at startup and was a fixed size.

EXAM TIP The Java pool size is dynamic and can be automatically managed.
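
One hedged way to check whether the Java pool is adequately sized (this query is illustrative, not part of the book's exercise) is to look at its free memory in V$SGASTAT:

select pool, name, bytes
  from v$sgastat
  where pool = 'java pool' and name = 'free memory';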

The Streams Pool

The Streams pool is used by Oracle Streams. This is an advanced tool that is beyond the scope of the OCP examinations or this book, but for completeness a short description follows.

The mechanism used by Streams is to extract change vectors from the redo log and to reconstruct statements that were executed from these—or statements that would have the same net effect. These statements are executed at the remote database. The processes that extract changes from redo and the processes that apply the changes need memory: this memory is the Streams pool. From database release 10g it is possible to create and to resize the Streams pool after instance startup; this creation and sizing can be completely automatic. With earlier releases it had to be defined at startup and was a fixed size.

EXAM TIP The Streams pool size is dynamic and can be automatically managed.

Exercise 1-3: Investigate the Memory Structures of the Instance
In this exercise, you will run queries to determine the current sizing of various memory structures that make up the instance. Either SQL Developer or SQL*Plus may be used.

1. Connect to the database as user SYSTEM.

2. Show the current, maximum, and minimum sizes of the SGA components that can be dynamically resized:

select COMPONENT,CURRENT_SIZE,MIN_SIZE,MAX_SIZE
from v$sga_dynamic_components;


This illustration shows the result on an example database. The example shows an instance without Streams, hence a Streams pool of size zero. Neither the large pool nor the Java pool has changed since instance startup, but there have been changes made to the sizes of the shared pool and database buffer cache. Only the default pool of the database buffer cache has been configured; this is usual, except in highly tuned databases.

3. Determine how much memory has been, and is currently, allocated to program global areas:

select name,value from v$pgastat where name in ('maximum PGA allocated','total PGA allocated');

Instance Process Structures

The instance background processes are the processes that are launched when the instance is started and run until it is terminated. There are five background processes that have a long history with Oracle; these are the first five described in the sections that follow: System Monitor (SMON), Process Monitor (PMON), Database Writer (DBWn), Log Writer (LGWR), and Checkpoint Process (CKPT). A number of others have been introduced with the more recent releases; notable among these are Manageability Monitor (MMON) and Memory Manager (MMAN). There are also some that are not essential but will exist in most instances. These include Archiver (ARCn) and Recoverer (RECO). Others will exist only if certain options have been enabled. This last group includes the processes required for RAC and Streams.

Additionally, some processes exist that are not properly documented (or are not documented at all). The processes described here are those that every OCP candidate will be expected to know.


Figure 1-6 provides a high-level description of the typical interaction of several key processes and SGA memory structures. The server process is representative of the server side of a client-server connection, with the client component consisting of a user session and user process described earlier. The server process interacts with the datafiles to fetch a data block into the buffer cache. This may be modified by some DML, dirtying the block in the buffer cache. The change vector is copied into the circular log buffer that is flushed in almost real time by the log writer process (LGWR) to the online redo log files. If archivelog mode of the database is configured, the archiver process (ARCn) copies the online redo log files to an archive location. Eventually, some condition may cause the database writer process (DBWn) to write the dirty block to one of the datafiles. The mechanics of the background processes and their interaction with various SGA structures are detailed in the sections that follow.

There is a platform variation that must be cleared up before discussing processes. On Linux and Unix, all the Oracle processes are separate operating system processes, each with a unique process number. On Windows, there is one operating system process (called ORACLE.EXE) for the whole instance: the Oracle processes run as separate threads within this one process.
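
To see which background processes are actually running in an instance, a query such as the following can be used (illustrative; in V$BGPROCESS, a PADDR other than '00' indicates a running process):

select name, description
  from v$bgprocess
  where paddr <> '00'
  order by name;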

SMON, the System Monitor

SMON initially has the task of mounting and opening a database. The steps involved in this are described in detail in Chapter 3. In brief, SMON mounts a database by locating and validating the database controlfile. It then opens a database by locating and validating all the datafiles and online log files. Once the database is opened and in use, SMON is responsible for various housekeeping tasks, such as coalescing free space in datafiles.

Figure 1-6 Typical interaction of instance processes and the SGA


PMON, the Process Monitor

A user session is a user process that is connected to a server process. The server process is launched when the session is created and destroyed when the session ends. An orderly exit from a session involves the user logging off. When this occurs, any work done will be completed in an orderly fashion, and the server process will be terminated.

If the session is terminated in a disorderly manner (perhaps because the user's PC is rebooted), then the session will be left in a state that must be cleared up. PMON monitors all the server processes and detects any problems with the sessions. If a session has terminated abnormally, PMON will destroy the server process, return its PGA memory to the operating system's free memory pool, and roll back any incomplete transaction that may have been in progress.

EXAM TIP If a session terminates abnormally, what will happen to an active transaction? It will be rolled back, by the PMON background process.

DBWn, the Database Writer

Always remember that sessions do not as a general rule write to disk. They write data (or changes to existing data) to buffers in the database buffer cache. It is the database writer that subsequently writes the buffers to disk. It is possible for an instance to have several database writers (up to a maximum of twenty), which will be called DBW0, DBW1, and so on: hence the use of the term DBWn to refer to "the" database writer. The default is one database writer per eight CPUs, rounded up.

TIP How many database writers do you need? The default number may well be correct. Adding more may help performance, but usually you should look at tuning memory first. As a rule, before you optimize disk I/O, ask why there is any need for disk I/O.
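
Should you decide to change the number, DB_WRITER_PROCESSES is the controlling parameter. It is static, so the change takes effect only at the next startup (the value 2 here is purely an example):

alter system set db_writer_processes = 2 scope=spfile;
-- Restart the instance for the new value to take effect.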

DBWn writes dirty buffers from the database buffer cache to the datafiles—but it does not write the buffers as they become dirty. On the contrary: it writes as few buffers as possible. The general idea is that disk I/O is bad for performance, so don't do it unless it really is needed. If a block in a buffer has been written to by a session, there is a reasonable possibility that it will be written to again—by that session, or a different one. Why write the buffer to disk, if it may well be dirtied again in the near future? The algorithm DBWn uses to select dirty buffers for writing to disk (which will clean them) will select only buffers that have not been recently used. So if a buffer is very busy, because sessions are repeatedly reading or writing it, DBWn will not write it to disk. There could be hundreds or thousands of writes to a buffer before DBWn cleans it. It could be that in a buffer cache of a million buffers, a hundred thousand of them are dirty—but DBWn might only write a few hundred of them to disk at a time. These will be the few hundred that no session has been interested in for some time.

DBWn writes according to a very lazy algorithm: as little as possible, as rarely as possible. There are four circumstances that will cause DBWn to write: no free buffers, too many dirty buffers, a three-second timeout, and when there is a checkpoint.


EXAM TIP What will cause DBWn to write? No free buffers, too many dirty buffers, a three-second timeout, or a checkpoint.

First, when there are no free buffers. If a server process needs to copy a block into the database buffer cache, it must find a free buffer. A free buffer is a buffer that is neither dirty (updated, and not yet written back to disk) nor pinned (a pinned buffer is one that is being used by another session at that very moment). A dirty buffer must not be overwritten because if it were changed, data would be lost, and a pinned buffer cannot be overwritten because the operating system's memory protection mechanisms will not permit this. If a server process takes too long (this length of time is internally determined by Oracle) to find a free buffer, it signals the DBWn to write some dirty buffers to disk. Once this is done, these will be clean, free, and available for use.

Second, there may be too many dirty buffers—"too many" being another internal threshold. No one server process may have had a problem finding a free buffer, but overall, there could be a large number of dirty buffers: this will cause DBWn to write some of them to disk.

Third, there is a three-second timeout: every three seconds, DBWn will clean a few buffers. In practice, this event may not be significant in a production system because the two previously described circumstances will be forcing the writes, but the timeout does mean that even if the system is idle, the database buffer cache will eventually be cleaned.

Fourth, there may be a checkpoint requested. The three reasons already given will cause DBWn to write a limited number of dirty buffers to the datafiles. When a checkpoint occurs, all dirty buffers are written. This could mean hundreds of thousands of them. During a checkpoint, disk I/O rates may hit the roof, CPU usage may go to 100 percent, end user sessions may experience degraded performance, and people may start complaining. Then when the checkpoint is complete (which may take several minutes), performance will return to normal. So why have checkpoints? The short answer is, don't have them unless you have to.

EXAM TIP What does DBWn do when a transaction is committed? It does absolutely nothing.

The only moment when a checkpoint is absolutely necessary is as the database is closed and the instance is shut down—a full description of this sequence is given in Chapter 3. A checkpoint writes all dirty buffers to disk: this synchronizes the buffer cache with the datafiles, the instance with the database. During normal operation, the datafiles are always out of date, as they may be missing changes (committed and uncommitted). This does not matter, because the copies of blocks in the buffer cache are up to date, and it is these that the sessions work on. But on shutdown, it is necessary to write everything to disk. Automatic checkpoints only occur on shutdown, but a checkpoint can be forced at any time with this statement:

alter system checkpoint;


Note that from release 8i onward, checkpoints do not occur on log switch (log switches are discussed in Chapter 14).

The checkpoint described so far is a full checkpoint. Partial checkpoints occur more frequently; they force DBWn to write all the dirty buffers containing blocks from just one or more datafiles rather than the whole database: when a datafile or tablespace is taken offline; when a tablespace is put into backup mode; when a tablespace is made read only. These are less drastic than full checkpoints and occur automatically whenever the relevant event happens.

To conclude, the DBWn writes on a very lazy algorithm: as little as possible, as rarely as possible—except when a checkpoint occurs, when all dirty buffers are written to disk, as fast as possible.

LGWR, the Log Writer

LGWR writes the contents of the log buffer to the online log files on disk. A write of the log buffer to the online redo log files is often referred to as flushing the log buffer.

When a session makes any change (by executing INSERT, UPDATE, or DELETE commands) to blocks in the database buffer cache, before it applies the change to the block it writes out the change vector that it is about to apply to the log buffer. To avoid loss of work, these change vectors must be written to disk with only minimal delay. To this end, the LGWR streams the contents of the log buffer to the online redo log files on disk in very nearly real time. And when a session issues a COMMIT, the LGWR writes in real time: the session hangs, while LGWR writes the buffer to disk. Only then is the transaction recorded as committed, and therefore nonreversible. LGWR is one of the ultimate bottlenecks in the Oracle architecture. It is impossible to perform DML faster than LGWR can write the change vectors to disk. There are three circumstances that will cause LGWR to flush the log buffer: if a session issues a COMMIT; if the log buffer is one-third full; if DBWn is about to write dirty buffers.

First, the write-on-commit. To process a COMMIT, the server process inserts a commit record into the log buffer. It will then hang, while LGWR flushes the log buffer to disk. Only when this write has completed is a commit-complete message returned to the session, and the server process can then continue working. This is the guarantee that transactions will never be lost: every change vector for a committed transaction will be available in the redo log on disk and can therefore be applied to datafile backups. Thus, if the database is ever damaged, it can be restored from backup and all work done since the backup was made can be redone.

TIP It is in fact possible to prevent the LGWR write-on-commit. If this is done, sessions will not have to wait for LGWR when they commit: they issue the command and then carry on working. This will improve performance but also means that work can be lost. It becomes possible for a session to COMMIT, then for the instance to crash before LGWR has saved the change vectors. Enable this with caution! It is dangerous, and hardly ever necessary. There are only a few applications where performance is more important than data loss.
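
For reference, release 11g exposes this behavior through the COMMIT_LOGGING and COMMIT_WAIT parameters (a hedged sketch; check the documentation for your release before relying on it):

-- COMMIT now returns without waiting for LGWR to flush the log buffer,
-- so a crash at the wrong moment can lose committed work.
alter session set commit_wait = 'NOWAIT';
alter session set commit_logging = 'BATCH';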


Second, when the log buffer is one-third full, LGWR will flush it to disk. This is done primarily for performance reasons. If the log buffer is small (as it usually should be) this one-third-full trigger will force LGWR to write the buffer to disk in very nearly real time even if no one is committing transactions. The log buffer for many applications will be optimally sized at only a few megabytes. The application will generate enough redo to fill one third of this in a fraction of a second, so LGWR will be forced to stream the change vectors to disk continuously, in very nearly real time. Then, when a session does COMMIT, there will be hardly anything to write: so the COMMIT will complete almost instantaneously.

Third, when DBWn needs to write dirty buffers from the database buffer cache to the datafiles, before it does so it will signal LGWR to flush the log buffer to the online redo log files. This is to ensure that it will always be possible to reverse an uncommitted transaction. The mechanism of transaction rollback is fully explained in Chapter 8. For now, it is necessary to know that it is entirely possible for DBWn to write an uncommitted transaction to the datafiles. This is fine, so long as the undo data needed to reverse the transaction is guaranteed to be available. Generating undo data also generates change vectors. As these will be in the redo log files before the datafiles are updated, the undo data needed to roll back a transaction can be reconstructed should this ever be necessary.

Note that it can be said that there is a three-second timeout that causes LGWR to write. In fact, the timeout is on DBWn—but because LGWR will always write just before DBWn, in effect there is a three-second timeout on LGWR as well.

EXAM TIP When will LGWR flush the log buffer to disk? On COMMIT; when the buffer is one-third full; just before DBWn writes.

CKPT, the Checkpoint Process

The purpose of the CKPT changed dramatically between release 8 and release 8i of the Oracle database. In release 8 and earlier, checkpoints were necessary at regular intervals to make sure that in the event of an instance failure (for example, if the server machine should be rebooted) the database could be recovered quickly. These checkpoints were initiated by CKPT. The process of recovery is repairing the damage done by an instance failure; it is fully described in Chapter 14.

After a crash, all change vectors referring to dirty buffers (buffers that had not been written to disk by DBWn at the time of the failure) must be extracted from the redo log, and applied to the data blocks. This is the recovery process. Frequent checkpoints would ensure that dirty buffers were written to disk quickly, thus minimizing the amount of redo that would have to be applied after a crash and therefore minimizing the time taken to recover the database. CKPT was responsible for signaling regular checkpoints.

From release 8i onward, the checkpoint mechanism changed. Rather than letting DBWn get a long way behind and then signaling a checkpoint (which forces DBWn to catch up and get right up to date, with a dip in performance while this is going on), from 8i onward the DBWn performs incremental checkpoints instead of full checkpoints. The incremental checkpoint mechanism instructs DBWn to write out dirty buffers at a constant rate, so that there is always a predictable gap between DBWn (which writes blocks on a lazy algorithm) and LGWR (which writes change vectors in near real time). Incremental checkpointing results in much smoother performance and more predictable recovery times than the older full checkpoint mechanism.

TIP The faster the incremental checkpoint advances, the quicker recovery will be after a failure. But performance will deteriorate due to the extra disk I/O, as DBWn has to write out dirty buffers more quickly. This is a conflict between minimizing downtime and maximizing performance.
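
This trade-off is controlled with the FAST_START_MTTR_TARGET parameter, which sets a target instance recovery time in seconds (the value 60 below is only an example):

alter system set fast_start_mttr_target = 60;
-- DBWn will now advance the incremental checkpoint quickly enough
-- that crash recovery should take roughly a minute or less.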

The CKPT no longer has to signal full checkpoints, but it does have to keep track of where in the redo stream the incremental checkpoint position is, and if necessary instruct DBWn to write out some dirty buffers in order to push the checkpoint position forward. The current checkpoint position, also known as the RBA (the redo byte address), is the point in the redo stream at which recovery must begin in the event of an instance crash. CKPT continually updates the controlfile with the current checkpoint position.

EXAM TIP When do full checkpoints occur? Only on request, or as part of an orderly database shutdown.

MMON, the Manageability Monitor

MMON is a process that was introduced with database release 10g and is the enabling process for many of the self-monitoring and self-tuning capabilities of the database. The database instance gathers a vast number of statistics about activity and performance. These statistics are accumulated in the SGA, and their current values can be interrogated by issuing SQL queries. For performance tuning and also for trend analysis and historical reporting, it is necessary to save these statistics to long-term storage. MMON regularly (by default, every hour) captures statistics from the SGA and writes them to the data dictionary, where they can be stored indefinitely (though by default, they are kept for only eight days).
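
The snapshot interval and retention can be adjusted with the DBMS_WORKLOAD_REPOSITORY package (an illustrative call; both arguments are in minutes):

execute dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 14*24*60);
-- Snapshots every 30 minutes, kept for 14 days rather than the default.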

Every time MMON gathers a set of statistics (known as a snapshot), it also launches the Automatic Database Diagnostic Monitor, the ADDM. The ADDM is a tool that analyses database activity using an expert system developed over many years by many DBAs. It observes two snapshots (by default, the current and previous snapshots) and makes observations and recommendations regarding performance. Chapter 5 describes the use of ADDM (and other tools) for performance tuning.

EXAM TIP By default, MMON gathers a snapshot and launches the ADDM every hour.


As well as gathering snapshots, MMON continuously monitors the database and the instance to check whether any alerts should be raised. Use of the alert system is covered in the second OCP exam and discussed in Chapter 24. Some alert conditions (such as warnings when limits on storage space are reached) are enabled by default; others can be configured by the DBA.

MMNL, the Manageability Monitor Light

MMNL is a process that assists the MMON. There are times when MMON's scheduled activity needs to be augmented. For example, MMON flushes statistical information accumulated in the SGA to the database according to an hourly schedule by default. If the memory buffers used to accumulate this information fill before MMON is due to flush them, MMNL will take responsibility for flushing the data.

MMAN, the Memory Manager

MMAN is a process that was introduced with database release 10g. It enables the automatic management of memory allocations.

Prior to release 9i of the database, memory management in the Oracle environment was far from satisfactory. The PGA memory associated with session server processes was nontransferable: a server process would take memory from the operating system's free memory pool and never return it—even though it might only have been needed for a short time. The SGA memory structures were static: defined at instance startup time, and unchangeable unless the instance was shut down and restarted.

Release 9i changed that: PGAs can grow and shrink, with the server passing out memory to sessions on demand while ensuring that the total PGA memory allocated stays within certain limits. The SGA and the components within it (with the notable exception of the log buffer) can also be resized, within certain limits. Release 10g automated the SGA resizing: MMAN monitors the demand for SGA memory structures and can resize them as necessary.

Release 11g takes memory management a step further: all the DBA need do is set an overall target for memory usage, and MMAN will observe the demand for PGA memory and SGA memory, and allocate memory to sessions and to SGA structures as needed, while keeping the total allocated memory within a limit set by the DBA.
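
In 11g this is enabled by setting the MEMORY_TARGET parameter (the 800M figure is arbitrary; MEMORY_MAX_TARGET must be at least this large and can only be set at startup):

alter system set memory_target = 800m;
-- MMAN now balances memory between the SGA and the PGAs automatically,
-- keeping the total within the 800MB target.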

TIP The automation of memory management is one of the major technical advances of the later releases, automating a large part of the DBA's job and giving huge benefits in performance and resource utilization.

ARCn, the Archiver

This is an optional process as far as the database is concerned, but usually required by the business. Without one or more ARCn processes (there can be from one to thirty, named ARC0, ARC1, and so on) it is possible to lose data in the event of a failure. The process and purpose of launching ARCn to create archive log files is described in detail in Chapter 14. For now, only a summary is needed.
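
As a brief illustration, the number of archiver processes is set with the dynamic LOG_ARCHIVE_MAX_PROCESSES parameter (the value 4 is an arbitrary example):

alter system set log_archive_max_processes = 4;
-- Up to four archivers (ARC0 through ARC3) may now run concurrently.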
