OCA: Oracle Database 11g Administrator Certified Associate – P23


12 To recover a data file from the SYSTEM or UNDO tablespace, the instance must be in which database state?

13 The STATUS column of the dynamic performance view V$LOGFILE contains what value if

one of the redo log file group members has been lost because of a media failure?

A INVALID

B STALE

C DELETED

D The column contains a NULL value.

14 Place the following events or actions leading up to and during instance recovery in the correct order.

1. The database is opened and available

2. Oracle uses undo segments in the undo tablespace to roll back uncommitted transactions

3. The DBA issues the STARTUP command at the SQL*Plus prompt

4. Oracle applies the information in the online redo log files to the data files

15 You noticed that when your instance crashes, it takes a long time to start up the database

Which advisor can be used to tune this situation?

A The Undo Advisor

B The SQL Tuning Advisor

C The Database Tuning Advisor

D The MTTR Advisor

E The Instance Tuning Advisor

16 If a data file is missing when the instance is started, where is the error message recorded?

A Only in the alert log.

B All missing files are returned directly to the administrator in the SQL*Plus session.

C The first missing file is returned directly to the administrator in the SQL*Plus session,

and the rest of the missing files are identified in V$RECOVER_FILE


17 In ARCHIVELOG mode, the loss of a data file for any tablespace other than the SYSTEM or

UNDO tablespace affects which objects in the database?

A The loss affects only objects whose extents reside in the lost data file.

B The loss affects only the objects in the affected tablespace, and work can continue in

other tablespaces

C The loss will not abort the instance but will prevent other transactions in any

tablespace other than SYSTEM or UNDO until the affected tablespace is recovered

D The loss affects only those users whose default tablespace contains the lost or damaged

data file

18 Which dynamic performance view shows the data files either needing media recovery or

missing at instance startup?

19 A fire breaks out in the server room near the routers, and the operations manager cuts off power to all servers, including the database servers. Before the fire is put out, the disk drive containing the SYSTEM tablespace and both network cards on the Oracle Database 11g server are destroyed. The user SCOTT was about to create a new table, but the connection was dropped after the power was disconnected from the server. This scenario is primarily an example of what kind of failure?

20 Which of the following conditions prevents the instance from progressing through the

NOMOUNT, MOUNT, and OPEN states?

A One of the redo log file groups is missing a member.

B The instance was previously shut down uncleanly with SHUTDOWN ABORT.

C Either the spfile or init.ora file is missing.

D One of the five multiplexed control files is damaged.

E The USERS tablespace is offline, with one of its data files deleted.


Answers to Review Questions

1 D The distance (in bytes) between the checkpoint position in a redo log group and the end

of the current redo log group can never be more than 90 percent of the size of the smallest redo log group

2 C The failure of one statement is considered a statement failure, and one way to solve the problem is to enable resumable-space allocation. When resumable space is enabled, Oracle generates an alert and places the session in a suspended state.

3 C The parameter FAST_START_MTTR_TARGET specifies the desired time, in seconds, to recover a single instance from a crash or instance failure. The parameters LOG_CHECKPOINT_TIMEOUT and FAST_START_IO_TARGET can still be used in Oracle 11g but should be used only together with an advanced-tuning scenario or for compatibility with older versions of Oracle. MTTR_TARGET_ADVICE and FAST_START_TARGET_MTTR are not valid initialization parameters.

4 D The PMON process periodically polls server processes to make sure their sessions are

still connected

5 C A DBA's disconnection of a session is an intentional process termination, not a failure. If a user's PC reboots, the user does not get a chance to log off, and the session is cleaned up by PMON; similarly, disconnecting from the application or SQL*Plus before logging out is considered a user-process failure. A network problem can prematurely disconnect a user session, causing a user-process failure. In all cases, PMON performs the session cleanup, whether the disconnection was intentional or not.

6 A, C In addition to configuring a backup listener process and installing multiple network

cards, you can implement connect-time failover and a backup network connection to reduce the possibility of network failures

7 B The instance must be shut down, if it is not already down, to repair or replace the missing

or damaged control file

8 B, C Media failure, physical corruption, logical corruption, and missing data files all can

be identified by the Data Recovery Advisor, which also provides recommendations for repair

9 B, E If a tablespace is taken offline because a data file is missing, the instance can still be

started as long as the missing data file does not belong to the SYSTEM or UNDO tablespace

10 A If a network card fails, the failure type is network; the actual media containing the

database files are not affected

11 B The Data Recovery Advisor in Oracle 11g Release 1 does not support RAC databases. It is integrated with EM Database Control and with RMAN. CHANGE FAILURE and other commands can be executed using RMAN. The ADVISE FAILURE command must be run before you can perform REPAIR FAILURE.


12 D Unlike recovery of non–system-critical tablespaces other than SYSTEM or UNDO that can

be recovered with the database in OPEN state, the database must be in MOUNT state to recover either the SYSTEM or UNDO tablespace

13 A If the redo log file group member has been lost because of a media failure or inadvertent deletion, the STATUS column is set to INVALID when an attempt is made to write redo information to that member.

14 B Instance recovery, also known as crash recovery, occurs when the DBA attempts to open the database but the files were not synchronized to the same SCN when the database was shut down. Once the DBA issues the STARTUP command, Oracle uses information in the redo log files to restore the data files (including the undo tablespace's data files) to the state before the instance failure. Oracle then uses undo data in the undo tablespace after the database has been opened and made available to users to roll back uncommitted transactions.

15 D The MTTR Advisor can tell the DBA the most effective value for the FAST_START_MTTR_TARGET parameter. This parameter specifies the maximum time required, in seconds, to perform instance recovery.

16 C In addition to reporting the first missing file to the administrator and listing all the

missing files in the dynamic performance view V$RECOVER_FILE, the missing data file(s) are noted in the DBWR background-process trace files

17 B The loss of one or more of a tablespace's data files does not prevent other users from doing their work in other tablespaces. Recovering the affected data files can continue while the database is still online and available.

18 A The dynamic performance view V$RECOVER_FILE contains a list of the data files that

either need media recovery or are missing when the instance is started

19 B The primary failure in this scenario is an instance failure. Subsequently, a network failure will occur when connections are attempted through the burned-out router. However, no connections are possible until the network card in the server is replaced; the instance cannot start because of a media failure on the disk containing the SYSTEM tablespace.

20 D All copies of the control files as defined in the spfile or the init.ora file must be identical and available. If one of the redo log file groups is missing a member, a warning is recorded in the alert log, but instance startup still proceeds. If the instance was previously shut down with SHUTDOWN ABORT, instance recovery automatically occurs during startup. Only an spfile or an init.ora file is needed to enter the NOMOUNT state, not both. If a tablespace is offline, the status of its data files is not checked until an attempt is made to bring it online; therefore, it will not prevent instance startup.


- Describe and use methods to move data (Directory objects, SQL*Loader, External Tables)
- Explain the general architecture of Oracle Data Pump
- Use Data Pump Export and Import to move data between Oracle databases

Intelligent Infrastructure Enhancements

- Use the Enterprise Manager Support Workbench
- Managing Patches


use these tools to back up data from a table or a schema before making changes, for quick recovery. Oracle Data Pump is a high-performance data-movement tool that you can use to unload and load data between Oracle databases, and you can use the SQL*Loader tool to load data received from external sources such as flat files.

In this chapter you will also learn about contacting Oracle Support through the Enterprise Manager Support Workbench. EM Support Workbench is new in Oracle 11g and can be used to examine a database problem and contact Oracle Support for a resolution. EM can also alert you when database patches are ready. You will learn to use EM to stage and apply a patch.

Understanding Data Pump

The Data Pump facility is a high-speed mechanism for transferring data or metadata from one database to another or from operating-system files. Data Pump employs direct path unloading and direct path loading technologies. Unlike the older export and import programs (exp and imp), which operated on the client side of a database session, the Data Pump facility runs on the server. Thus, you must use a database directory to specify dump-file and log-file locations.

You can use Data Pump to copy data from one schema to another between two databases or within a single database. You can also use it to extract a logical copy of the entire database, a list of schemas, a list of tables, or a list of tablespaces to portable operating-system files. Data Pump can also transfer or extract the metadata (DDL statements) for a database, schema, or table.

You can call Data Pump from the command-line programs expdp and impdp or through the DBMS_DATAPUMP PL/SQL package, or you can invoke it from EM.

Data Pump export extracts data and metadata from your database, and Data Pump import loads this extracted data into the same database or into a different database, optionally transforming metadata along the way. These transformations let you, for example, copy tables from one schema to another or remap a tablespace from one database to another.

These are some of the key features of Data Pump:

- Fine-grained object selection using the INCLUDE and EXCLUDE parameters
- An option to specify a lower-compatibility version so only supported object types are exported
- The ability to perform export and import using parallel processes
- The ability to detach from and attach to a job from the client session, allowing the DBA to close the export/import session and yet have the ability to administer the jobs
- An option to change target table names, tablespace names, and schema names
- An option to compress metadata or data or both during export
- A tablespace metadata export to support the transportable tablespace feature of the database
- An option to append data to an existing table or to truncate and load data to an existing table
- The automatic use of direct path export whenever possible
- The ability to copy data from one database to another using a network
- The ability to specify a sample percentage to unload only a subset of data
- The ability to monitor job progress; job status can be queried from the database or using EM
- An option to restart or terminate failed export and import jobs

Architecture of Data Pump

In Oracle 11g Data Pump, the database does all the work. This is a major deviation from the architecture of the export/import utilities, which previously ran as clients and did the major part of the work. The dump files for export/import were stored at the client, whereas the Data Pump files are stored at the server. Figure 17.1 shows the Data Pump architecture.

Data Pump Components

Data Pump consists of the following components:

Data Pump API DBMS_DATAPUMP is the PL/SQL API for Data Pump, which is the engine. Data Pump jobs are created and monitored using this API.

Metadata API The DBMS_METADATA API provides the database object definition to the Data Pump processes

Client Tools The Data Pump client tools expdp and impdp use the procedures provided by the DBMS_DATAPUMP package. These tools make calls to the Data Pump API to initiate and monitor Data Pump operations.

Data-movement APIs Data Pump uses the Direct Path API (DPAPI) to move data. Certain circumstances do not allow the use of DPAPI; in those cases, the Oracle external table with the ORACLE_DATAPUMP access driver API is used.


Figure 17.1 Data Pump architecture: the export and import dump clients (expdp, impdp) and other clients (Enterprise Manager, SQL*Plus) call DBMS_DATAPUMP, the data- and metadata-movement engine in the database; it uses the Metadata API (DBMS_METADATA) and moves data through either the Direct Path API or the ORACLE_DATAPUMP external table API.

Data Pump Processes

Oracle Data Pump jobs, once started, are performed by various processes on the database server. The following are the processes involved in the Data Pump operation:

Client process This process is initiated by the client utility—expdp, impdp, or other clients—to make calls to the Data Pump API. Since Data Pump is completely integrated into the database, once the Data Pump job is initiated, this process is not necessary for the progress of the job.

Shadow process When a client logs into the Oracle database, a foreground process is created (a standard feature of Oracle). This shadow process services the client Data Pump API requests. This process creates the master table and creates the Advanced Queuing (AQ) queues used for communication. Once the client process ends, the shadow process goes away too.

Master control process (MCP) The master control process controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the Data Pump job into various metadata and data-load or -unload jobs and hands them over to the worker processes. The MCP has a process name of the format <ORACLE_SID>_DMnn_<PROCESS_ID>. It maintains the job state, job description, restart information, and file information in the master table.


Worker process The MCP creates the worker processes based on the value of the PARALLEL parameter. The workers perform the tasks requested by the MCP, mainly loading or unloading data and metadata. The worker processes have the format <ORACLE_SID>_DWnn_<PROCESS_ID>. The worker processes maintain the current status in the master table, which can be used to restart a failed job.

Parallel query (PQ) processes The worker processes can initiate parallel-query processes if an external table is used as the data-access method for loading or unloading. These are standard parallel-query slaves of the parallel-execution architecture.

Oracle Data Pump cannot be used to load data into a database from data exported using the exp utility.

Let's consider the example of an export Data Pump operation and see all the activities and processes involved. Say user A invokes the expdp client, which initiates the shadow process. The client calls the DBMS_DATAPUMP.OPEN procedure to establish the kind of export to be performed. The OPEN call starts the MCP process and creates two AQ queues.

The first queue is the status queue, used to send the status of the job, which includes logging information and errors. Clients interested in the status of the job can query this queue. This is strictly a unidirectional queue—the MCP posts the information to the queue, and the clients consume the information. The second queue is the command-and-control queue, which is used to control the worker processes established by the MCP and to perform API commands and file requests. This is a bidirectional queue where the MCP listens and writes. The commands are sent to this queue by the DBMS_DATAPUMP methods or by using the parameters of the expdp client.

Once all the components (parameters and filters) of the job are defined, the client (expdp) invokes DBMS_DATAPUMP.START_JOB. Based on the number of parallel processes requested, the MCP starts the worker processes. The MCP directs one of the worker processes to do the metadata extraction using the DBMS_METADATA API.
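As an illustration of this flow, here is a minimal PL/SQL sketch of a schema-mode export driven through the DBMS_DATAPUMP API. It assumes a directory object named DUMPLOCATION already exists; the job, file, and schema names are illustrative only.

DECLARE
  h1        NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job; this starts the MCP and creates the AQ queues
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                           job_mode  => 'SCHEMA',
                           job_name  => 'SCOTT_EXP_DEMO');
  -- Add a dump file and a log file in the directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'scott_demo.dmp',
                         directory => 'DUMPLOCATION',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'scott_demo.log',
                         directory => 'DUMPLOCATION',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Restrict the job to the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h1, name => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');
  -- Start the job; the MCP hands the work to the worker processes
  DBMS_DATAPUMP.START_JOB(h1);
  -- Block until the job reaches a terminal state
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/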

During the operation, a master table is maintained in the schema of the user who initiated the Data Pump export. The master table has the same name as the name of the Data Pump job. This table maintains one row per object with status information. In the event of a failure, Data Pump uses the information in this table to restart the job. The master table is the heart of every Data Pump operation; it maintains all the information about the job. Data Pump uses the master table to restart a failed or suspended job. The master table is dropped (by default) when the Data Pump job finishes successfully.

The master table is written to the dump file set as the last step of the export dump operation and is removed from the user's schema. For an import dump operation, the master table is loaded from the dump file set to the user's schema as the first step and is used to sequence the objects being imported.

While the export job is underway, the original client who invoked the export job can detach from the job without aborting it. This is especially useful when performing long-running data export jobs. Users can attach to the job at any time using the DBMS_DATAPUMP methods and query the status or change the parallelism of the job.
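For example, assuming a running job that used the default job name from a schema-mode export, a DBA could reattach from a new session, check progress, change the degree of parallelism, and detach again, roughly like this (the job name and value are illustrative):

$ expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01

Export> STATUS
Export> PARALLEL=4
Export> EXIT_CLIENT

STATUS reports the current job progress, PARALLEL changes the number of active workers, and EXIT_CLIENT leaves the job running while returning to the operating-system prompt; these interactive commands are listed in the expdp help output later in this chapter.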


Since the master table is created in the Data Pump user's schema as a table, if there is an existing table in the schema with the Data Pump job name, the job fails. The user must have appropriate privileges to create the table and must have appropriate tablespace quotas.
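To see which Data Pump jobs (and therefore which master tables) currently exist, you can query the data dictionary; a simple sketch:

SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;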

Data Access Methods

Data Pump chooses the most appropriate data-access method. Two methods are supported: direct path access and external table access. Direct path export has been supported since Oracle 7.3. External tables were introduced in Oracle9i, and support for writing to external tables has been available since Oracle 10g. Data Pump provides an external-tables access driver (ORACLE_DATAPUMP) that can be used to read and write files. The format of the file is the same as that of the direct path method; hence, it's possible to load data that was unloaded using the other method. Data Pump uses the Direct Load API whenever possible. The following are the exceptions, when the external tables method will be used:

- Tables with fine-grained access control enabled in insert and select operations
- A domain index exists for a LOB column
- A global index on a multipartition table exists during a single-partition load
- A clustered table or a table with an active trigger exists during import
- A table contains a VARRAY column with an embedded opaque type
- Loading and unloading very large tables and partitions, where the PARALLEL clause can be used to an advantage
- Loading tables that are partitioned differently at load time and unload time

Using Data Pump Clients

Oracle 11g comes with the expdp utility to invoke Data Pump for export and with impdp for import. The Data Pump export utility (expdp) unloads data and metadata to a set of OS files called dump files. The Data Pump import utility (impdp) loads the data and metadata stored in an export dump file to a target database. expdp and impdp accept parameters that are then passed to the DBMS_DATAPUMP program. The command-line executable name for Data Pump export is expdp and for Data Pump import is impdp on Windows as well as Unix platforms. For a user to invoke expdp/impdp, they need to set up a directory where the dump files will be stored, and they must have appropriate privileges to perform Data Pump export/import. In the next section, I will discuss how to set up the export dump location.


Setting Up the Dump Location

Since Data Pump is server-based, directory objects must be created in the database where the Data Pump files will be stored. Directory objects are named directory locations on the database server representing the physical location on the server's file system. Directories are used with several database features, including BFILEs, external tables, utl_file, SQL*Loader, and Data Pump.

The directory object contains the location of a specific operating-system directory. By using a named directory object, you do not have to hard-code the directory path in programs, and you get file-management flexibility.

Under Unix, you create directories with the CREATE DIRECTORY statement, like this:

CREATE DIRECTORY dump_dir AS '/oracle/data_pump/dumps';

CREATE DIRECTORY log_dir AS '/oracle/data_pump/logs';

Under Windows, you create directories like this:

CREATE DIRECTORY dpump_dir AS 'G:\datadumps';

Directories are not schema objects, like tables or synonyms, because they are not owned by a schema. Instead, directories are like profiles or roles in that they are owned by the database. To control access to a directory, you need to grant the READ or WRITE object privilege on that directory, like this:

GRANT read,write ON DIRECTORY dump_dir TO PUBLIC;

To create directories, you must have the CREATE ANY DIRECTORY system privilege. By default, only the users SYSTEM and SYS have this privilege. Be careful in granting this system privilege to users, because the database employs the operating-system credentials of the database-instance owner.

Directory objects are owned by the SYS user; thus, the directory names must be unique across the database.

The user executing Data Pump must have been granted permissions on the directory. READ permission is required to import, and WRITE permission is required to export and to create log files or SQL files.

Note that the oracle user (who owns the software installation and database files) must have read and write OS privileges on the directory. The user SCOTT, for example, need not have any OS privileges on the directory for Data Pump to succeed.

A default directory can be created for Data Pump operations in the database. Privileged users (with the EXP_FULL_DATABASE or IMP_FULL_DATABASE privilege) need not specify a directory object name when performing the Data Pump operation. The name of the default directory must be DATA_PUMP_DIR. Also, the privileged users need not have explicit READ or WRITE permission on DATA_PUMP_DIR.
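To check where the default directory points on a particular database, you can query the data dictionary; a quick sketch:

SQL> SELECT directory_path FROM dba_directories WHERE directory_name = 'DATA_PUMP_DIR';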


Using EM Database Control, you can create and edit directory objects. On the Database Control Schema page, click Directory Objects under Database Objects. Figure 17.2 shows the Directory Objects screen that appears.

Figure 17.2 Directory Objects screen of EM

Click the Edit button to change the physical directory. You can also use the Delete button to delete an existing directory and the Create button to create a new directory.

Data Pump can write three types of files to the OS directory defined in the database. Remember that absolute paths are not supported; Data Pump can write only to a directory defined by a directory database object. The file types are as follows:

Dump files These contain data and metadata information.

Log files These record the standard output to a file and contain job progress and status information.

SQL files Data Pump import can extract the metadata information from a dump file, which can be used to create database objects without using the Data Pump import utility.

You can specify the location of the files to the Data Pump clients using three methods (given in the order of precedence):

- Prefix the filename with the directory name separated by a colon; for example, DUMPFILE=dumplocation:myfile.dmp.
- Use the DIRECTORY parameter.
- Define the default DATA_PUMP_DIR directory in the database for privileged users.
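As a quick sketch of the first two methods (the directory and file names are illustrative), either of the following gives the client the same dump-file location:

$ expdp scott/tiger DUMPFILE=dumplocation:scott1.dmp LOGFILE=dumplocation:scott1.log
$ expdp scott/tiger DIRECTORY=dumplocation DUMPFILE=scott1.dmp LOGFILE=scott1.log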

The export and import done using the expdp and impdp tools can run in different modes based on the requirement. The next section discusses these modes.


Specifying Export and Import Modes

Export and import using the Data Pump clients can be performed in five different modes to unload or load different portions of the database. When performing a dump-file import, specifying the mode is optional; when no mode is specified, the entire dump file is loaded, with the mode automatically set to the one used for export.

Table 17.1 describes the export and import modes

Table 17.1 Export and Import Modes in Data Pump

Database    Performed by specifying the FULL=Y parameter.

Tablespace    Performed by specifying the TABLESPACES parameter. Export: data and metadata for only those objects contained in the specified tablespaces are unloaded; the export user requires the EXP_FULL_DATABASE role. Import: all objects contained in the specified tablespaces are loaded; the import user requires the IMP_FULL_DATABASE privilege. The source dump file can be exported in database, tablespace, schema, or table mode.

Schema    Performed by specifying the SCHEMAS parameter; this is the default mode. Export: only objects belonging to the specified schema are unloaded; the EXP_FULL_DATABASE role is required to specify a list of schemas. Import: all objects belonging to the specified schema are loaded; the source can be a database- or schema-mode export, and the IMP_FULL_DATABASE role is required to specify a list of schemas.

Table    Performed by specifying the TABLES parameter. Export: only the specified table, its partitions, and its dependent objects are unloaded; the export user must have the SELECT privilege on the tables. Import: only the specified table, its partitions, and its dependent objects are loaded; the IMP_FULL_DATABASE role is required to specify tables belonging to a different user.

Transport tablespace    Performed by specifying the TRANSPORT_TABLESPACES parameter. Export: only metadata for tables and their dependent objects within the specified set of tablespaces is unloaded; use this mode to transport tablespaces from one database to another. Import: metadata from a transport-tablespace export is loaded.


In a database-mode export, the entire database is exported to operating-system files, including user accounts, public synonyms, roles, and profiles. In a schema-mode export, all data and metadata for a list of schemas is exported. At the most granular level is the table-mode export, which includes the data and metadata for a list of tables. A tablespace-mode export extracts both data and metadata for all objects in a tablespace list, as well as any object dependent on those in the specified tablespace list. Therefore, if a table resides in your specified tablespace list, all its indexes are included whether or not they also reside in the specified tablespace list. In each of these modes, you can further specify that only data or only metadata be exported. The default is to export both data and metadata.
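The following command-line sketches illustrate one way to invoke each export mode; the credentials, directory, and file names are illustrative:

$ expdp system/password FULL=y DIRECTORY=dumplocation DUMPFILE=fulldb.dmp
$ expdp system/password SCHEMAS=hr,scott DIRECTORY=dumplocation DUMPFILE=schemas.dmp
$ expdp scott/tiger TABLES=emp,dept DIRECTORY=dumplocation DUMPFILE=tabs.dmp
$ expdp system/password TABLESPACES=users DIRECTORY=dumplocation DUMPFILE=users_ts.dmp
$ expdp system/password TRANSPORT_TABLESPACES=users DIRECTORY=dumplocation DUMPFILE=users_tts.dmp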

With some objects, such as indexes, only the metadata is exported; the actual internal structures contain physical addresses and are always rebuilt on import

The files created by a Data Pump export are called dump files, and one or more of these files can be created during a single Data Pump export job. Multiple files are created if your Data Pump job has a parallel degree greater than 1 or if a single dump file exceeds the FILESIZE parameter. All the export dump files from a single Data Pump export job are called a dump-file set.

Using expdp

You use the expdp utility to perform Data Pump exports. Any user can export objects or a complete schema owned by the user without any additional privileges. Nonprivileged users must have WRITE permission on the directory object and must specify the DIRECTORY parameter or specify the directory object name along with the dump filename.

Here is an example of an export performed by the user SCOTT. Since SCOTT is not a privileged user, he must specify the DIRECTORY object name.

$ expdp scott/tiger

Export: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008 13:50:05

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file
ORA-39145: directory object parameter must be specified and non-null

Let's create a directory for user SCOTT and grant read and write privileges on this directory:

SQL> CREATE DIRECTORY dumplocation AS '/u02/dpump';

Directory created.


SQL> GRANT READ, WRITE on DIRECTORY dumplocation TO scott;

Grant succeeded

Now, let’s try the export specifying the directory:

$ expdp scott/tiger directory=dumplocation

Export: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008 16:04:22

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** directory=dumplocation
Estimate in progress using BLOCKS method
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."DEPT"        5.914 KB       4 rows
. . exported "SCOTT"."EMP"         8.570 KB      14 rows
. . exported "SCOTT"."SALGRADE"    5.867 KB       5 rows
. . exported "SCOTT"."BONUS"           0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  /u02/dpump/expdat.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 16:04:55

$

Since you did not specify any other parameters, expdp used default values for the filenames (expdat.dmp and export.log), performed a schema-level export (login schema), calculated the job estimate using the blocks method, used the default job name (SYS_EXPORT_SCHEMA_01), and exported both data and metadata.


Data Pump Export Parameters

You can use various parameters while invoking expdp. You can obtain a list of the parameters by specifying expdp help=y:

$ expdp help=y

Export: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008 16:54:49

Copyright (c) 2003, 2007, Oracle. All rights reserved.

The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
            or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line

Keyword Description (Default)
ATTACH Attach to existing job, e.g. ATTACH [=job name]

COMPRESSION Reduce size of dumpfile contents where valid keyword

values are: ALL, (METADATA_ONLY), DATA_ONLY and NONE

CONTENT Specifies data to unload where the valid keyword values are: (ALL), DATA_ONLY, and METADATA_ONLY

DATA_OPTIONS Data layer flags where the only valid value is:
XML_CLOBS-write XML datatype in CLOB format
DIRECTORY Directory object to be used for dumpfiles and logfiles

DUMPFILE List of destination dump files (expdat.dmp), e.g DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp

ENCRYPTION Encrypt part or all of the dump file where valid keyword values are: ALL, DATA_ONLY, METADATA_ONLY,


ENCRYPTION_PASSWORD Password key for creating encrypted column data.

ESTIMATE Calculate job estimates where the valid keyword values are: (BLOCKS) and STATISTICS

ESTIMATE_ONLY Calculate job estimates without performing the export

EXCLUDE Exclude specific object types, e.g EXCLUDE=TABLE:EMP

FILESIZE Specify the size of each dumpfile in units of bytes

FLASHBACK_SCN SCN used to set session snapshot back to

FLASHBACK_TIME Time used to get the SCN closest to the specified time

FULL Export entire database (N)

HELP Display Help messages (N)

INCLUDE Include specific object types, e.g INCLUDE=TABLE_DATA

JOB_NAME Name of export job to create

LOGFILE Log file name (export.log)

NETWORK_LINK Name of remote database link to the source system

NOLOGFILE Do not write logfile (N)

PARALLEL Change the number of active workers for current job

PARFILE Specify parameter file

QUERY Predicate clause used to export a subset of a table

REMAP_DATA Specify a data conversion function, e.g REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO

REUSE_DUMPFILES Overwrite destination dump file if it exists (N)

SAMPLE Percentage of data to be exported;

SCHEMAS List of schemas to export (login schema)

STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available

TABLES Identifies a list of tables to export - one schema only

TABLESPACES Identifies a list of tablespaces to export

TRANSPORTABLE Specify whether transportable method can be used where valid keyword values are: ALWAYS, (NEVER)

TRANSPORT_FULL_CHECK Verify storage segments of all tables (N)

TRANSPORT_TABLESPACES List of tablespaces from which metadata will be

unloaded

VERSION Version of objects to export where valid keywords are:

(COMPATIBLE), LATEST, or any valid database version

The following commands are valid while in interactive mode

Note: abbreviations are allowed

Command Description
ADD_FILE Add dumpfile to dumpfile set
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle


EXIT_CLIENT Quit client session and leave job running.

FILESIZE Default filesize (bytes) for subsequent ADD_FILE commands

HELP Summarize interactive commands

KILL_JOB Detach and delete job

PARALLEL Change the number of active workers for current job

PARALLEL=<number of workers>

REUSE_DUMPFILES Overwrite destination dump file if it exists (N)

START_JOB Start/resume current job

STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available

STATUS[=interval]

STOP_JOB Orderly shutdown of job execution and exits the client

STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job

$

FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive parameters

The DUMPFILE parameter can specify more than one file. The filenames can be comma-separated, or you can use the %U substitution variable. If you specify %U in the DUMPFILE filename, the number of files initially created is based on the value of the PARALLEL parameter. Preexisting files that match the names of the files generated are not overwritten; an error is flagged. To forcefully overwrite the files, use the REUSE_DUMPFILES=Y parameter. The FILESIZE parameter determines the size of each file. Table 17.2 shows some examples.

You can specify all the parameters in a file and specify the filename with the PARFILE parameter. The only exception is the PARFILE parameter itself: it cannot appear inside the parameter file, because recursive PARFILE is not supported.
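For example, a small parameter file might look like the following sketch (all names and values are illustrative); it would be invoked with expdp scott/tiger PARFILE=exp_scott.par:

DIRECTORY=dumplocation
DUMPFILE=scott_%U.dmp
LOGFILE=scott_exp.log
SCHEMAS=scott
PARALLEL=2
FILESIZE=100M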

The SAMPLE parameter is useful to get a subset of data unloaded from the source table. Specify the percentage of rows that need to be unloaded using this parameter. The SAMPLE parameter is not valid for network exports.

In the next section, I will discuss the impdp utility, which does the import from a dump file created using expdp


Table 17.2 Data Pump DUMPFILE Examples

DUMPFILE=exp%U.dmp FILESIZE=200M
Initially the exp01.dmp file will be created; once the file is 200MB, the next file will be created.

DUMPFILE=exp%U_%U.dmp PARALLEL=3
Initially three files will be created: exp01_01.dmp, exp02_02.dmp, and exp03_03.dmp. Notice that every occurrence of the substitution variable is incremented each time. Since there is no FILESIZE, no more files will be created.

DUMPFILE=DMPDIR1:exp%U.dmp, DMPDIR2:exp%U.dmp
The dump files are created in the directories DMPDIR1 and DMPDIR2.

Using impdp

impdp has several modes of operation, including full, schema, table, and tablespace. In the full mode, the entire content of an export file set is loaded. In a schema-mode import, all content for a list of schemas in the specified file set is loaded. The specified file set for a schema-mode import can be from either a database- or schema-mode export. With a table-mode import, only the specified table and dependent objects are loaded from the export file set. With a tablespace-mode import, all objects in the export file set that were in the specified tablespace list are loaded.

With all these modes, the source can be a live database instead of a set of export files

Table 17.3 shows the supported mapping of export mode to import mode

Table 17.3 Export to Import Modes

Import mode: Full — Source export mode: database, schema, table, tablespace, or live database
Import mode: Schema — Source export mode: database, schema, or live database
Import mode: Table — Source export mode: database, schema, table, tablespace, or live database
Import mode: Tablespace — Source export mode: database, schema, table, tablespace, or live database

The IMP_FULL_DATABASE role is required if the source is a live database or the export session required the EXP_FULL_DATABASE role.
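As a sketch (directory, file, and object names are illustrative), typical impdp invocations for these modes look like this:

$ impdp system/password FULL=y DIRECTORY=dumplocation DUMPFILE=fulldb.dmp
$ impdp system/password SCHEMAS=hr DIRECTORY=dumplocation DUMPFILE=schemas.dmp
$ impdp system/password TABLES=hr.employees DIRECTORY=dumplocation DUMPFILE=schemas.dmp
$ impdp system/password TABLESPACES=users DIRECTORY=dumplocation DUMPFILE=fulldb.dmp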

Data Pump Import Parameters

You can use various parameters while invoking impdp. You can obtain a list of the parameters by specifying impdp help=y:

$ impdp help=y

Import: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008 21:13:53

Copyright (c) 2003, 2007, Oracle. All rights reserved.

The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:

Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line



Keyword Description (Default)
ATTACH Attach to existing job, e.g. ATTACH [=job name]

CONTENT Specifies data to load where the valid keywords are:

(ALL), DATA_ONLY, and METADATA_ONLY

DATA_OPTIONS Data layer flags where the only valid value is:

SKIP_CONSTRAINT_ERRORS-constraint errors are not fatal

DIRECTORY Directory object to be used for dump, log, and sql

files

DUMPFILE List of dumpfiles to import from (expdat.dmp), e.g DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp

ENCRYPTION_PASSWORD Password key for accessing encrypted column data.

This parameter is not valid for network import jobs

ESTIMATE Calculate job estimates where the valid keywords are:

(BLOCKS) and STATISTICS

EXCLUDE Exclude specific object types, e.g EXCLUDE=TABLE:EMP

FLASHBACK_SCN SCN used to set session snapshot back to

FLASHBACK_TIME Time used to get the SCN closest to the specified time

FULL Import everything from source (Y)

HELP Display help messages (N)

INCLUDE Include specific object types, e.g INCLUDE=TABLE_DATA

JOB_NAME Name of import job to create

LOGFILE Log file name (import.log)

NETWORK_LINK Name of remote database link to the source system

NOLOGFILE Do not write logfile

PARALLEL Change the number of active workers for current job

PARFILE Specify parameter file

PARTITION_OPTIONS Specify how partitions should be transformed where the valid keywords are: DEPARTITION, MERGE and (NONE)
QUERY Predicate clause used to import a subset of a table

REMAP_DATA Specify a data conversion function, e.g. REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO
REMAP_DATAFILE Redefine datafile references in all DDL statements

REMAP_SCHEMA Objects from one schema are loaded into another schema

REMAP_TABLE Table names are remapped to another table, e.g REMAP_TABLE=EMP.EMPNO:REMAPPKG.EMPNO

REMAP_TABLESPACE Tablespace object are remapped to another tablespace

REUSE_DATAFILES Tablespace will be initialized if it already exists (N)

SCHEMAS List of schemas to import

SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state

SQLFILE Write all the SQL DDL to a specified file

STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available


STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION Action to take if imported object already exists.

Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE

TABLES Identifies a list of tables to import

TABLESPACES Identifies a list of tablespaces to import

TRANSFORM Metadata transform to apply to applicable objects

Valid transform keywords: SEGMENT_ATTRIBUTES, STORAGE, OID, and PCTSPACE

TRANSPORTABLE Options for choosing transportable data movement

Valid keywords: ALWAYS and (NEVER)

Only valid in NETWORK_LINK mode import operations

TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode

TRANSPORT_FULL_CHECK Verify storage segments of all tables (N)

TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded

Only valid in NETWORK_LINK mode import operations

VERSION Version of objects to export where valid keywords are:

(COMPATIBLE), LATEST, or any valid database version

Only valid for NETWORK_LINK and SQLFILE

The following commands are valid while in interactive mode

Note: abbreviations are allowed

Command Description (Default)
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle

EXIT_CLIENT Quit client session and leave job running

HELP Summarize interactive commands

KILL_JOB Detach and delete job

PARALLEL Change the number of active workers for current job

PARALLEL=<number of workers>

START_JOB Start/resume current job

START_JOB=SKIP_CURRENT will start the job after skipping any action which was in progress when job was stopped

STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available

STATUS[=interval]

STOP_JOB Orderly shutdown of job execution and exits the client

STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job

$


You must include one parameter to specify the mode: either full, schemas, tables, or tablespaces. You can include several other parameters on the command line, or you can place them in a file and use the parfile= parameter to instruct impdp where to find them. Here are some examples of imports:

Import across the database link prod only the metadata for the HR schema, remapping schema HR to HR_TEST and writing the log file HR_TEST.imp to the database directory dumplocation:

impdp system/password network_link=prod schemas="HR"
  remap_schema="HR:HR_TEST" content=metadata_only
  logfile=dumplocation:HR_TEST.imp

The combinations of parameters you can use in copying data and metadata give you, the DBA, flexibility in administering your databases

When using the schema-level import with the SCHEMAS parameter, if the schema does not exist in the target database, the import operation creates it with the same attributes from the source. The schema created by the import operation will need to have its password reset.
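For example, after such an import you might reset the new schema's password (the user name and password here are illustrative):

SQL> ALTER USER hr_test IDENTIFIED BY new_password;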

You can use the CONTENT, INCLUDE, and EXCLUDE parameters in the impdp utility to filter the metadata objects. Their behavior is the same as in the expdp utility. I'll discuss them in detail in the "Data and Metadata Filters" section. In the next section, I will discuss methods to use a different target for tablespaces, schemas, and data files.

Import Transformations

While performing the import, you can specify a different target name for data files, tablespaces, or schemas. These transformations are possible because the object metadata is stored in the dump file as XML. The REMAP_ parameters are used to specify this. When any one of the three REMAP_ parameters is used, Data Pump makes transformations to the metadata DDL during import. The IMP_FULL_DATABASE role is required to use these parameters.

You can use these parameters multiple times if there is more than one transformation to be made, but the same source cannot be repeated more than once. The following are the parameters you can use to specify a different target name for each type of object:

REMAP_DATAFILE    Using this parameter, you can specify a different name for the data file. The filename referenced could be in a CREATE TABLESPACE, CREATE LIBRARY, or CREATE DIRECTORY statement. REMAP_DATAFILE is especially useful when performing a full database import, when the tablespaces are being created by impdp and the source directories do not exist on the target database server, or when the source and target platforms are different (VMS, Windows, Unix). The syntax is as follows:

REMAP_DATAFILE=source_datafile:target_datafile

REMAP_SCHEMA    Using this parameter, you can load all the objects belonging to a source schema into a target schema. Multiple source schemas can map to the same target schema. If the target schema specified does not exist, the import operation creates the schema and performs the load. The syntax is as follows:

REMAP_SCHEMA=source_schema:target_schema
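For instance, a dump-file import that loads SCOTT's objects into a schema named SCOTT_DEV (created automatically if it does not exist) could look like this sketch; the file and schema names are illustrative:

$ impdp system/password DIRECTORY=dumplocation DUMPFILE=expdat.dmp REMAP_SCHEMA=scott:scott_dev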

REMAP_TABLE    Using this parameter, you can rename a table while performing the import. Only the table is renamed; its dependent indexes, triggers, constraints, and columns are not renamed. The syntax is as follows:

REMAP_TABLE=source_table:target_table

TRANSFORM    Using this parameter, you can change the object-creation DDL for applicable objects during import. The syntax is as follows:

TRANSFORM=name:boolean_value[:object_type]

The name of the transform can be either SEGMENT_ATTRIBUTES or STORAGE. STORAGE removes the STORAGE clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespaces, logging, and storage attributes. boolean_value can be Y or N; the default is Y. The type of object is optional; the valid values are TABLE and INDEX. For example, if you want to ignore the storage characteristics during the import and use the defaults for the tablespace, you can do the following:

impdp dumpfile=scott.dmp transform=storage:N:table exclude=indexes

The next example will remove all the segment attributes; the import will use the user's default tablespace and its default storage characteristics:

impdp dumpfile=scott.dmp transform=segment_attributes:N

In the next section, I will discuss how data can be copied from one database to another without using a dump file


Network-Mode Import

NETWORK_LINK enables a network-mode import using a database link. The database link must be created before performing the import. The export is performed on the source database based on the various parameters; the data and metadata are passed over the database link to the target database and loaded. To get a consistent export from the source database, you can use the FLASHBACK_SCN or FLASHBACK_TIME parameter.

Using FLASHBACK_SCN, FLASHBACK_TIME, ESTIMATE, or TRANSPORT_TABLESPACES requires the NETWORK_LINK parameter to also be specified. Here is an example of how to copy the SCOTT schema in the source (remote) database to LARRY in the target (local) database. Scott's objects are stored in the USERS tablespace; in the target, you will create Larry's objects in the EXAMPLE tablespace. The database link name is NEW_DB.

$ impdp schemas=scott network_link=new_db remap_schema=scott:larry  remap_tablespace=users:example

The network mode import is different from using SQL*Net to perform the import: impdp username/password@database.

In the next example, data is read via the database link PROD, and it imports only the data from HR.DEPARTMENTS into schema HR_TEST.DEPARTMENTS, writing a log file:

impdp system/password network_link=prod schemas="HR"
  remap_schema="HR:HR_TEST" content=data_only include=TABLE:"= 'DEPARTMENTS'"
  logfile=dumplocation:HR_TEST.imp

Using Network Mode to Refresh Test Data from Production

Consider that you periodically refresh the Oracle 10g test database with production data. Since you have to preserve all the grants on the test schema, you can perform the following steps using SQL*Plus and the exp/imp tools to perform the data refresh:

1. Disable all the foreign keys.

2. Disable all the primary keys.

3. Drop the indexes so that the import goes faster.

4. Truncate the tables.

5. Export the data from the production database.
