
Microsoft Press – Configuring SQL Server 2005 (Exam 70-431), Part 5


Document information

Title: Configuring SQL Server 2005
Publisher: Microsoft Press
Subject: Computer Science / Database Management
Type: Textbook
Year: 2005
City: Redmond
Pages: 98
Size: 2.76 MB

Contents


SET @ContactType = CASE
    -- Check for employee
    WHEN EXISTS(SELECT * FROM [HumanResources].[Employee] e
        WHERE e.[ContactID] = @ContactID)
        THEN 'Employee'
    -- Check for vendor
    WHEN EXISTS(SELECT * FROM [Purchasing].[VendorContact] vc
        INNER JOIN [Person].[ContactType] ct
        ON vc.[ContactTypeID] = ct.[ContactTypeID]
        WHERE vc.[ContactID] = @ContactID)
        THEN 'Vendor Contact'
    -- Check for store
    WHEN EXISTS(SELECT * FROM [Sales].[StoreContact] sc
        INNER JOIN [Person].[ContactType] ct
        ON sc.[ContactTypeID] = ct.[ContactTypeID]
        WHERE sc.[ContactID] = @ContactID)
        THEN 'Store Contact'
    -- Check for individual consumer
    WHEN EXISTS(SELECT * FROM [Sales].[Individual] i
        WHERE i.[ContactID] = @ContactID)
        THEN 'Consumer'
    END;

-- Return the information to the caller
IF @ContactID IS NOT NULL
BEGIN
    INSERT @retContactInformation
    SELECT @ContactID, @FirstName, @LastName, @JobTitle, @ContactType;
END;

RETURN;
END;

SELECT * FROM dbo.ufnGetContactInformation(1);

Deterministic vs Nondeterministic Functions

When working with functions, it's important to know whether the function you are using is deterministic or nondeterministic. Deterministic functions return, for the same set of input values, the same value every time you call them. The SQL Server built-in function COS, which returns the trigonometric cosine of the specified angle, is an example of a deterministic function. In contrast, a nondeterministic function can return a different result every time you call it. An example of a nondeterministic function is the SQL Server built-in function GETDATE(), which returns the current system time and date. SQL Server also considers a function nondeterministic if the function calls a nondeterministic function or if the function calls an extended stored procedure.

Whether a function is deterministic or not also determines whether you can build an index on the results the function returns and whether you can define a clustered index on a view that references the function. If the function is nondeterministic, you cannot index the results of the function, either through indexes on computed columns that call the function or through indexed views that reference the function.
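You can check how SQL Server classifies a function by querying the IsDeterministic object property. A minimal sketch, assuming the dbo.GetModelNameForProduct function created in the practice later in this lesson already exists:

SELECT OBJECTPROPERTY(OBJECT_ID('dbo.GetModelNameForProduct'), 'IsDeterministic') AS IsDeterministic;
-- Returns 1 if SQL Server considers the function deterministic, 0 if not.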

Quick Check

■ What are the two types of UDFs, and how are they used?

Quick Check Answer

■ Scalar functions return a single value and are generally used in column lists and WHERE clauses.

■ Table-valued functions return a table variable and are used in the FROM clause.

PRACTICE Create a Function

In this practice, you create a scalar function to return the model name for a product given a particular product ID. You then create a table-valued function to return the contents of the Product table for a given model ID.

1 Launch SQL Server Management Studio (SSMS), connect to your instance, open

a new query window, and change the context to the AdventureWorks database.

2 Create and test the GetModelNameForProduct scalar function by executing the following code:

CREATE FUNCTION dbo.GetModelNameForProduct (@ProductID int) RETURNS nvarchar(50)
AS BEGIN
DECLARE @ModelName nvarchar(50)
SELECT @ModelName = Production.ProductModel.Name FROM Production.Product INNER JOIN Production.ProductModel
ON Production.Product.ProductModelID = Production.ProductModel.ProductModelID
WHERE Production.Product.ProductID = @ProductID
RETURN(@ModelName) END;

GO

SELECT dbo.GetModelNameForProduct(717);

3 Create and test the table-valued function GetProductsForModelID by executing

the following code:

CREATE FUNCTION dbo.GetProductsForModelID (@ProductModelID int) RETURNS @Products TABLE

(

ProductNumber nvarchar(25) NOT NULL,

FinishedGoodsFlag dbo.Flag NOT NULL,

SafetyStockLevel smallint NOT NULL,

SizeUnitMeasureCode nchar(3) NULL,

WeightUnitMeasureCode nchar(3) NULL,

ProductSubcategoryID int NULL,

DiscontinuedDate datetime NULL,

rowguid uniqueidentifier NOT NULL

) WITH EXECUTE AS CALLER

AS BEGIN
INSERT INTO @Products
SELECT ProductID, Name, ProductNumber, MakeFlag, FinishedGoodsFlag, Color, SafetyStockLevel, ReorderPoint, StandardCost, ListPrice, Size, SizeUnitMeasureCode, WeightUnitMeasureCode, Weight, DaysToManufacture, ProductLine, Class, Style,
ProductSubcategoryID, ProductModelID, SellStartDate, SellEndDate, DiscontinuedDate, rowguid, ModifiedDate
FROM Production.Product
WHERE Production.Product.ProductModelID = @ProductModelID


RETURN
END;

Lesson Summary

■ Scalar functions return a single value.

■ Table-valued functions return a table variable.

■ Computed columns or views based on deterministic functions, which return the same value every time they are called, can be indexed. Those using nondeterministic functions, which can return different results every time they are called, cannot be indexed.

Lesson Review

The following questions are intended to reinforce key information presented in this lesson. The questions are also available on the companion CD if you prefer to review them in electronic form.

NOTE Answers

Answers to these questions and explanations of why each answer choice is right or wrong are located in the “Answers” section at the end of the book.

1 Which of the following are valid commands to use within a function?

A UPDATE Table1 SET Column1 = 1

B SELECT Column1 FROM Table2 WHERE Column2 = 5

C EXEC sp_myproc

D INSERT INTO @var VALUES (1)


Lesson 2: Implementing Stored Procedures

Stored procedures are the most-used programmatic structures within a database. A procedure is simply a name associated with a batch of SQL code that is stored and executed on the server. Stored procedures, which can return scalar values or result sets, are the primary interface that applications should use to access any data within a database. Not only do stored procedures enable you to control access to the database, they also let you isolate database code for easy maintenance instead of requiring you to find hard-coded SQL statements throughout an application if you need to make changes. In this lesson, you see how to create a stored procedure, recompile a stored procedure, and assign permissions to a role for a stored procedure.

After this lesson, you will be able to:

■ Create a stored procedure.

■ Recompile a stored procedure.

■ Assign permissions to a role for a stored procedure.

Estimated lesson time: 20 minutes

Creating a Stored Procedure

Stored procedures can contain virtually any construct or command that is possible to execute within SQL Server. You can use procedures to modify data, return scalar values, or return entire result sets.

Stored procedures also provide a very important security function within a database. You can grant users permission to execute stored procedures that access data without having to grant them the ability to directly access the data. Even more important, stored procedures hide the structure of a database from a user as well as only permit users to perform operations that are coded within the stored procedure.

The general Transact-SQL syntax for creating a stored procedure is the following:

CREATE { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]

[ { @parameter [ type_schema_name ] data_type }


<procedure_option> ::=

[ ENCRYPTION ] [ RECOMPILE ] [ EXECUTE_AS_Clause ]

<sql_statement> ::=

{ [ BEGIN ] statements [ END ] }

<method_specifier> ::=

EXTERNAL NAME assembly_name.class_name.method_name

Each procedure must have a name that is unique within the database and that conforms to the rules for object identifiers.

Procedures can accept any number of input parameters, which are used within the stored procedure as local variables. You can also specify output parameters, which let a stored procedure pass one or more scalar values back to the routine that called the procedure.

You can create procedures with three options. When you create a procedure with the ENCRYPTION option, SQL Server encrypts the procedure definition. Specifying the RECOMPILE option forces SQL Server to recompile the stored procedure each time the procedure is executed. The EXECUTE AS option provides a security context for the procedure.
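As a hedged illustration of the syntax, the following sketch combines all three options in one procedure; the procedure name is hypothetical, and dbo.ErrorLog is the table used in the example later in this lesson:

CREATE PROCEDURE dbo.usp_GetErrorCount
WITH ENCRYPTION, RECOMPILE, EXECUTE AS OWNER
AS
BEGIN
    -- Return the number of rows currently logged in the error table
    SELECT COUNT(*) AS ErrorCount FROM dbo.ErrorLog;
END;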

BEST PRACTICES Recompilation

Stored procedures are compiled into the query cache when executed. Compilation creates a query plan as well as an execution plan. SQL Server can reuse the query plan for subsequent executions, which conserves resources. But the RECOMPILE option forces SQL Server to discard the query plan each time the procedure is executed and create a new query plan. There are only a few extremely rare cases when recompiling at each execution is beneficial, such as if you add a new index from which the stored procedure might benefit. Thus, you typically should not add the RECOMPILE option to a procedure when you create it.
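If you only occasionally need a fresh plan, there are lighter-weight alternatives to creating the procedure WITH RECOMPILE; a brief sketch, using the uspLogError procedure shown later in this lesson:

-- Recompile the plan for a single execution only
EXECUTE dbo.uspLogError WITH RECOMPILE;

-- Mark the procedure so its plan is recompiled the next time it runs
EXECUTE sp_recompile 'dbo.uspLogError';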

The body of the stored procedure contains the batch of commands you want to execute within the procedure. The following are the only commands that you cannot execute within a stored procedure:

USE <database>
SET SHOWPLAN_TEXT ON
SET SHOWPLAN_ALL ON


The following code shows a sample stored procedure that logs errors in a table called

ErrorLog:

CREATE PROCEDURE [dbo].[uspLogError]
    @ErrorLogID [int] = 0 OUTPUT -- Contains the ErrorLogID of the row
                                 -- inserted by uspLogError in the ErrorLog table
AS
BEGIN
    SET NOCOUNT ON;

    -- Output parameter value of 0 indicates that error
    -- information was not logged
    SET @ErrorLogID = 0;

    BEGIN TRY
        -- Return if there is no error information to log
        IF ERROR_NUMBER() IS NULL
            RETURN;

        -- Return if inside an uncommittable transaction.
        -- Data insertion/modification is not allowed when
        -- a transaction is in an uncommittable state.
        IF XACT_STATE() = -1
        BEGIN
            PRINT 'Cannot log error since the current transaction is in an uncommittable state. '
                + 'Rollback the transaction before executing uspLogError in order to successfully log error information.';
            RETURN;
        END

        INSERT [dbo].[ErrorLog]
            ( [UserName], [ErrorNumber], [ErrorSeverity], [ErrorState],
              [ErrorProcedure], [ErrorLine], [ErrorMessage] )
        VALUES
            ( CONVERT(sysname, CURRENT_USER), ERROR_NUMBER(), ERROR_SEVERITY(),
              ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE() );

        -- Pass back the ErrorLogID of the row inserted
        SET @ErrorLogID = @@IDENTITY;
    END TRY
    BEGIN CATCH
        PRINT 'An error occurred in stored procedure uspLogError: ';
        EXECUTE [dbo].[uspPrintError];
        RETURN -1;
    END CATCH
END;

Assign Permissions to a Role for a Stored Procedure

As with all objects and operations in SQL Server, you must explicitly grant a user permission to use an object or execute an operation. To allow users to execute a stored procedure, you use the following general syntax:

GRANT EXECUTE ON <stored procedure> TO <database principal>

Chapter 2, “Configuring SQL Server 2005,” covers the GRANT statement and database principals.

The use of permissions with stored procedures is an interesting security mechanism. Any user granted EXECUTE permission on a stored procedure is automatically delegated permissions to the objects and commands referenced inside the stored procedure, based on the permission set of the user who created the stored procedure.

To understand this delegation behavior, consider the previous example code. The stored procedure dbo.uspLogError inserts rows into the dbo.ErrorLog table. UserA has INSERT permission on dbo.ErrorLog and also created this stored procedure. UserB does not have any permissions on dbo.ErrorLog. However, when UserA grants EXECUTE permission on the dbo.uspLogError procedure, UserB can execute this procedure without receiving any errors because the SELECT and INSERT permissions necessary to add the row to the dbo.ErrorLog table are delegated to UserB. However, UserB receives those permissions only when executing the stored procedure and still cannot directly access the dbo.ErrorLog table.
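A minimal sketch of this scenario, assuming the UserA and UserB principals described above already exist:

-- Executed by UserA (the creator of the procedure)
GRANT EXECUTE ON dbo.uspLogError TO UserB;

-- Executed by UserB: succeeds because the required table permissions are delegated
EXECUTE dbo.uspLogError;

-- Executed by UserB: fails with a permission error because there is no direct grant
SELECT * FROM dbo.ErrorLog;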

The permission delegation possible with stored procedures provides a very powerful security mechanism within SQL Server. If all data access—insertions, deletions, updates, or selects—were performed through stored procedures, users could not directly access any table in the database. Only by executing the stored procedures would users be able to perform the actions necessary to manage the database. And although users would have the permissions delegated through the stored procedures, they would still be bound to the code within the stored procedure, which can perform actions such as the following:

■ Allowing certain operations to be performed only by users who are on a specified list, which is maintained in another table by a user functioning in an administrative role

■ Validating input parameters to prevent security attacks such as SQL injection

Quick Check

1 What is a stored procedure?

2 Which operations can a stored procedure perform?

Quick Check Answers

1 A stored procedure is a name for a batch of Transact-SQL or CLR code that is stored within SQL Server.

2 A procedure can execute any commands within the Transact-SQL language except USE, SET SHOWPLAN_TEXT ON, and SET SHOWPLAN_ALL ON.

PRACTICE Create a Stored Procedure

In this practice, you create two stored procedures that will update the hire date for all employees to today’s date and then compare the procedures.

1 If necessary, launch SSMS, connect to your instance, open a new query window,

and change the context to the AdventureWorks database.

2 Create a stored procedure to update the hire date by executing the following

code:

CREATE PROCEDURE dbo.usp_UpdateEmployeeHireDateInefficiently
AS
DECLARE @EmployeeID int
DECLARE curemp CURSOR FOR
    SELECT EmployeeID FROM HumanResources.Employee
OPEN curemp
FETCH curemp INTO @EmployeeID
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE HumanResources.Employee SET HireDate = GETDATE()
    WHERE EmployeeID = @EmployeeID
    FETCH curemp INTO @EmployeeID
END
CLOSE curemp
DEALLOCATE curemp

3 Create a second stored procedure to update the hire date by executing the

following code:

CREATE PROCEDURE dbo.usp_UpdateEmployeeHireDateEfficiently

AS DECLARE @now DATETIME

SET @now = GETDATE()

UPDATE HumanResources.Employee SET HireDate = @now

4 Compare the execution between the two procedures by executing each of the

queries in the following code separately:

EXEC dbo.usp_UpdateEmployeeHireDateInefficiently EXEC dbo.usp_UpdateEmployeeHireDateEfficiently

BEST PRACTICES Code efficiency

Databases are built and optimized for set-oriented processes instead of row-at-a-time processes. When constructing stored procedures, you always want to use the minimum amount of code that also minimizes the amount of work performed. Although both of the procedures in this practice accomplish the requirement to change all employees’ hire dates, the second procedure executes significantly faster. The first procedure not only reads in the entire list of employees, but it also executes an update as well as a call to a function for each employee. The second procedure executes the GETDATE() function only once and performs a single update operation.

Lesson Summary

■ Stored procedures are stored batches of code that are compiled when executed.

■ Procedures can be used to execute almost any valid command while also providing a security layer between a user and the tables within a database.

Lesson Review

The following questions are intended to reinforce key information presented in this lesson. The questions are also available on the companion CD if you prefer to review them in electronic form.


Lesson 3: Implementing Triggers

A trigger is a specialized implementation of a Transact-SQL or CLR batch that automatically runs in response to an event within the database. You can create two types of triggers in SQL Server 2005: data manipulation language (DML) triggers and data definition language (DDL) triggers. DML triggers run when INSERT, UPDATE, or DELETE statements modify data in a specified table or view. DDL triggers, which run in response to DDL events that occur on the server such as creating, altering, or dropping an object, are used for database administration tasks such as auditing and controlling object access. In this lesson, you see how to create AFTER and INSTEAD OF DML triggers, how to identify and manage recursive and nested triggers, and how to create DDL triggers to perform administration tasks.

After this lesson, you will be able to:

■ Create DML triggers.

■ Create DDL triggers.

■ Identify recursive and nested triggers.

Estimated lesson time: 20 minutes

DML Triggers

Unlike stored procedures and functions, DML triggers are not stand-alone objects, and you cannot directly execute them. A DML trigger is attached to a specific table or view and defined for a particular event. When the event occurs, SQL Server automatically executes the code within the trigger, known as “firing the trigger.” The events that can cause a trigger to fire are INSERT, UPDATE, and DELETE operations.

Triggers can fire in two different modes: AFTER and INSTEAD OF.

An AFTER trigger fires after SQL Server completes all actions successfully. For example, if you insert a row into a table, a trigger defined for the INSERT operation fires only after the row passes all constraints defined by primary keys, unique indexes, constraints, rules, and foreign keys. If the insert fails any of these validations, SQL Server does not execute the trigger. You can define AFTER triggers only on tables, and you can create any number of AFTER triggers on a table.

An INSTEAD OF trigger causes SQL Server to execute the code in the trigger instead of the operation that caused the trigger to fire. If you were to define an INSTEAD OF trigger on the table in the previous example, the insert would not be performed, so none of the validation checks would be performed; SQL Server would execute the code in the trigger instead. You can create INSTEAD OF triggers on views and tables. The most common usage is to use INSTEAD OF triggers on views to update multiple base tables through a view. You can define only one INSTEAD OF trigger for each INSERT, UPDATE, or DELETE event for a view or table.
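A minimal sketch of that common pattern; the view dbo.vCustomer and base table dbo.Customer are hypothetical names used only for illustration:

CREATE TRIGGER tio_vCustomer_Insert
ON dbo.vCustomer
INSTEAD OF INSERT
AS
BEGIN
    -- Redirect the insert from the view to the underlying base table
    INSERT INTO dbo.Customer (CustomerName, Region)
    SELECT CustomerName, Region
    FROM INSERTED;
END;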

The code within a trigger can be composed of any statements and constructs valid for a batch, with some exceptions. Following is a brief list of some of the more important commands or constructs that you cannot use within a trigger:

■ Databases cannot be created, altered, dropped, backed up, or restored

■ Structural changes cannot be made to the table that caused the trigger to fire,

such as CREATE/ALTER/DROP INDEX, ALTER/DROP TABLE, and so on.

MORE INFO Trigger exceptions

You can find the full list of commands and constructs that are not allowed within a trigger in the SQL Server 2005 Books Online article “CREATE TRIGGER (Transact-SQL).”

SQL Server does not support triggers against system objects such as system tables and dynamic management views. Also, triggers fire only in response to logged operations. Minimally logged operations such as TRUNCATE TABLE and WRITETEXT do not cause a trigger to fire.

BEST PRACTICES Referential integrity

You can use triggers to enforce referential integrity. However, you should not use triggers in place of declarative referential integrity (DRI) via a FOREIGN KEY constraint. DRI is enforced when the modification is made, before the change is part of the table, and is much more efficient than executing trigger code. However, you cannot define FOREIGN KEY constraints across databases. To enforce referential integrity across databases, you must use triggers.

Triggers have access to two special tables that are dynamically generated: INSERTED and DELETED. The INSERTED and DELETED tables are visible only within a trigger and cannot be accessed by any other construct such as a stored procedure or function. The structure of the INSERTED and DELETED tables exactly matches the column definition of the table on which the trigger was created. Therefore, you can reference columns by using the same names as in the table for which the trigger was defined.

When you execute an INSERT operation, the INSERTED table contains each row that was inserted into the table, whereas the DELETED table does not contain any rows. When you execute a DELETE statement, the DELETED table contains each row that was deleted from the table, whereas the INSERTED table does not contain any rows. When you execute an UPDATE statement, the INSERTED table contains the after image of each row you updated, and the DELETED table contains the before image of each row that you updated. The before image is simply a copy of the row as it existed before you executed the UPDATE statement. The after image reflects the data in the row after the UPDATE statement has changed the appropriate values.
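For instance, inside the body of an UPDATE trigger on the Production.Product table, a hedged sketch of comparing the before and after images might look like this:

-- Join the before (DELETED) and after (INSERTED) images on the primary key
SELECT d.ProductID,
       d.ListPrice AS ListPriceBefore,
       i.ListPrice AS ListPriceAfter
FROM DELETED AS d
    INNER JOIN INSERTED AS i ON i.ProductID = d.ProductID;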

The general Transact-SQL syntax for creating a DML trigger is as follows:

CREATE TRIGGER [ schema_name. ]trigger_name
ON { table | view }
[ WITH <dml_trigger_option> [ ,...n ] ]
{ FOR | AFTER | INSTEAD OF }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
[ WITH APPEND ]
[ NOT FOR REPLICATION ]
AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME <method specifier [ ; ] > }

<dml_trigger_option> ::=

[ ENCRYPTION ] [ EXECUTE AS Clause ]

Every trigger must have a name that conforms to the rules for object identifiers.

You use the ON clause to specify the table or view that the trigger will be created against. If the table or view is dropped, any triggers that were created against the table are also dropped.

Using the WITH clause, you can do the following:

■ Specify whether the code in the trigger will be encrypted when it is created

■ Specify an execution context

The FOR clause specifies whether the trigger is an AFTER or INSTEAD OF trigger, as well as the event(s) that cause the trigger to fire. You can specify more than one event for a given trigger if you choose.

Most people can ignore the WITH APPEND clause, which applies only to 65 compatibility mode, because most organizations should have upgraded their SQL Server 6.5 databases by now. The NOT FOR REPLICATION clause is covered in Chapter 19, “Managing Replication.”

Following the AS clause, you specify the code that you want to execute when the trigger is fired.

Let’s look at an example of how to use triggers. Human Resources has a strict policy that requires any changes to an employee’s pay rate to be audited. The audit must include the prior pay rate, the current pay rate, the date the change was made, and the name of the person who made the change. You could accomplish the audit process within an application, but you cannot guarantee that all pay rate changes take place through applications that you control. So you decide to implement a trigger on the Employee table that fires on an UPDATE operation and logs pay-rate audit information into the dbo.EmployeeAudit table:

DECLARE @now DATETIME
SET @now = getdate()

BEGIN TRY
    INSERT INTO dbo.EmployeeAudit (RowImage, PayRate, ChangeDate, ChangeUser)
    SELECT 'BEFORE', DELETED.PayRate, @now, suser_sname()
    FROM DELETED

    INSERT INTO dbo.EmployeeAudit (RowImage, PayRate, ChangeDate, ChangeUser)
    SELECT 'AFTER', INSERTED.PayRate, @now, suser_sname()
    FROM INSERTED
END TRY
BEGIN CATCH
    -- Some error handling code
    ROLLBACK TRANSACTION
END CATCH

Recursive and Nested Triggers

Because triggers fire in response to a DML operation and can also perform additional DML operations, there is the possibility for a trigger to cause itself to fire or to fire additional triggers in a chain.

A trigger causing itself to fire is called recursion. For example, suppose that an UPDATE trigger is created on the Customers table that modifies a column in the Customers table. The modification in the trigger causes the trigger to fire again. The trigger modifies the Customers table again, causing the trigger to be fired yet again. Because this recursion can lead to an unending chain of transactions, SQL Server has a mechanism to control recursive triggers. The RECURSIVE_TRIGGERS option of a database is normally set to OFF, preventing recursion by default. If you want triggers to fire recursively, you must explicitly turn on this option.

NOTE INSTEAD OF triggers

An INSTEAD OF trigger does not fire recursively.

Recursion can also occur indirectly. For example, suppose that an UPDATE operation on the Customers table causes a trigger to fire to update the Orders table. The update to the Orders table then fires a trigger that updates the Customers table. Indirect recursion is a subset of the cases referred to as nested triggers.

The most general case of nested triggers is when a trigger makes a change that causes another trigger to fire. By setting the NESTED TRIGGERS option to 0 at the server level, you can disable all forms of nested triggers.
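As a brief, hedged illustration of both settings (substitute whichever database applies in your environment):

-- Allow triggers in the AdventureWorks database to fire recursively
ALTER DATABASE AdventureWorks SET RECURSIVE_TRIGGERS ON;

-- Disable nested triggers for the entire instance
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;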

DDL Triggers

New in SQL Server 2005 is the ability to create triggers for DDL operations, such as when a table is created, a login is added to the instance, or a new database is created. The main purposes of DDL triggers are to audit and regulate actions performed on a database. DDL triggers let you restrict DDL operations even if a user might normally have the permission to execute the DDL command.

For example, you might want to prevent anyone, including members of the sysadmin fixed server role, from altering or dropping tables in a production environment. You can create a DDL trigger for the ALTER TABLE and DROP TABLE events that causes the commands to be rolled back and a message returned telling the users that approval is needed before they can alter or drop the table.

The general syntax for creating a DDL trigger is as follows:

CREATE TRIGGER trigger_name
ON { ALL SERVER | DATABASE }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type | event_group } [ ,...n ]
AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME < method specifier > [ ; ] }

<ddl_trigger_option> ::=

[ ENCRYPTION ] [ EXECUTE AS Clause ]

<method_specifier> ::=

assembly_name.class_name.method_name


MORE INFO Event groups

You can find the events that are valid for DDL triggers in the SQL Server 2005 Books Online article

“Event Groups for Use with DDL Triggers.”

An example of a DDL trigger to prevent the dropping or altering of a table is as follows:

CREATE TRIGGER tddl_tabledropalterprevent
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
PRINT 'Tables cannot be dropped or altered!'
ROLLBACK ;

1 What are the two types of triggers?

2 What are they generally used for?

Quick Check Answers

1 SQL Server 2005 provides DML and DDL triggers.

2 DML triggers fire in response to INSERT, UPDATE, and DELETE statements executed against a specific table. DML triggers are generally used to perform operations against the data that was modified in a table. DDL triggers fire in response to DDL commands being executed on the server. DDL triggers are used mainly for security and auditing purposes.

PRACTICE Creating DML and DDL Triggers

In these practices, you create a DML trigger that audits list-price changes and a DDL trigger that prevents dropping tables in a database.

 Practice 1: Create a DML Trigger

In this practice, you create a DML trigger on the Production.Product table that audits

when the list price changes

1 If necessary, launch SSMS, connect to your instance, open a new query window,

and change the database context to the AdventureWorks database.


2 Create an auditing table by executing the following command:

CREATE TABLE Production.ProductAudit
(AuditID int identity(1,1) PRIMARY KEY,
ProductID int NOT NULL,
ListPriceBefore money NOT NULL,
ListPriceAfter money NOT NULL,
AuditDate datetime NOT NULL,
ChangeUser sysname NOT NULL);

3 Create a trigger against the Production.Product table that logs all changes in the audit table:

CREATE TRIGGER tuid_ProductAudit
ON Production.Product
FOR UPDATE
AS
INSERT INTO Production.ProductAudit
    (ProductID, ListPriceBefore, ListPriceAfter, AuditDate, ChangeUser)
SELECT INSERTED.ProductID, DELETED.ListPrice, INSERTED.ListPrice, getdate(), suser_sname()
FROM INSERTED INNER JOIN DELETED ON INSERTED.ProductID = DELETED.ProductID;

4 Change a row of data in the Production.Product table.

5 Observe the effect of the trigger by selecting the data from the audit table.

6 Can you explain why there are two rows of data in the Production.ProductAudit

table for each row that is changed?

 Practice 2: Create a DDL Trigger

In this practice, you create a DDL trigger that prevents any table from being dropped

1 If necessary, launch SSMS, connect to your instance, open a new query window,

and change the database context to the AdventureWorks database.

2 Create the DDL trigger by executing the following code:

CREATE TRIGGER tddl_tabledropprevent
ON DATABASE
FOR DROP_TABLE
AS
PRINT 'Tables cannot be dropped!'
ROLLBACK;

3 Create a table for testing purposes, as follows:

CREATE TABLE dbo.DropTest (ID int NOT NULL);


4 Try to drop the table you just created by executing the following code:

DROP TABLE dbo.DropTest;

5 Verify that the table still exists by executing the following code:

SELECT ID FROM dbo.DropTest;

Lesson Summary

■ SQL Server supports two types of triggers: DML and DDL.

■ DML triggers can be either AFTER or INSTEAD OF triggers. You can create any number of AFTER triggers for a table, but you can create only one INSTEAD OF trigger for each data-modification operation for a table or view.

■ When DML triggers fire, they have access to special tables named INSERTED and DELETED.

■ DDL triggers fire in response to DDL events that occur on the server, such as creating, altering, or dropping an object. The main purposes of DDL triggers are to provide an additional means of security and to audit any DDL commands issued against a database.

Lesson Review

The following questions are intended to reinforce key information presented in this lesson. The questions are also available on the companion CD if you prefer to review them in electronic form.


Chapter Review

To further practice and reinforce the skills you learned in this chapter, you can

■ Review the chapter summary

■ Review the list of key terms introduced in this chapter

■ Complete the case scenario. This scenario sets up a real-world situation involving the topics of this chapter and asks you to create a solution.

■ Complete the suggested practices.

■ Take a practice test

Chapter Summary

■ You use stored procedures to perform any programmatic actions on a server. Stored procedures, which can return scalar values or result sets, are the primary interface that applications should use to access any data within a database.

■ Triggers are a special type of stored procedure that you use to execute code in response to specified actions. DML triggers execute in response to INSERT, UPDATE, and DELETE operations; DDL triggers execute in response to DDL commands.

Key Terms

Do you know what these key terms mean? You can check your answers by looking up the terms in the glossary at the end of the book.

■ data definition language (DDL) trigger

■ data manipulation language (DML) trigger

■ deterministic function

■ function

■ input parameter


Suggested Practices

To help you successfully master the exam objectives presented in this chapter, complete the following tasks.

Creating Functions

Practice 1 Within your existing databases, locate a calculation or result set that is generated on a frequent basis and that isn’t straightforward to re-create each time. Encapsulate this code into a function and adjust your application code to use the function instead of using ad hoc SQL code.


Creating Stored Procedures

Practice 1 Move all the ad hoc SQL code from your applications into stored procedures and call the procedures to perform the actions. Once all access (INSERT/UPDATE/DELETE/SELECT) is through stored procedures, remove all direct permissions to any base tables from all users.

Creating Triggers

Practice 1 Create DDL triggers that capture CREATE and ALTER actions and that also roll back those operations. This process creates a structure that prevents any accidental changes to objects within any database on the server. To perform these operations, a sysadmin would have to disable the DDL trigger first. Make sure that you do not prevent yourself from altering a DDL trigger; if you do, you won’t be able to make any changes.

Take a Practice Test

The practice tests on this book’s companion CD offer many options. For example, you can test yourself on just the content covered in this chapter, or you can test yourself on all the 70-431 certification exam content. You can set up the test so that it closely simulates the experience of taking a certification exam, or you can set it up in study mode so that you can look at the correct answers and explanations after you answer each question.

MORE INFO Practice tests

For details about all the practice test options available, see the “How to Use the Practice Tests” section in this book’s Introduction.


Chapter 10

Working with Flat Files

A common task when working with a database is importing data from other sources. One of the most frequently used methods of transferring data is by using one or more flat files. A flat file is a file that is not hierarchical in nature, or a file that contains data meant for a single table in the database. Using flat files for data import and export is beneficial because the format is often common between the source and destination systems. Flat files can also provide a layer of abstraction between the source and destination. This chapter covers the factors you need to consider before performing any data-load operations. It then covers the different methods you can use to efficiently import files into SQL Server, including the bulk copy program (bcp), the BULK INSERT Transact-SQL command, the OPENROWSET Transact-SQL function, and the SQL Server Integration Services (SSIS) Import/Export Wizard.

Exam objectives in this chapter:

■ Import and export data from a file

❑ Set a database to the bulk-logged recovery model to avoid inflating the transaction log.

❑ Run the bcp utility.

❑ Perform a Bulk Insert task.

❑ Import bulk XML data by using the OPENROWSET function.

❑ Copy data from one table to another by using the SQL Server 2005 Integration Services (SSIS) Import/Export Wizard.

Lessons in this chapter:

■ Lesson 1: Preparing to Work with Flat Files 381

■ Lesson 2: Running the bcp Utility 387

■ Lesson 3: Performing a BULK INSERT Task 393

■ Lesson 4: Importing Bulk XML Data 398

■ Lesson 5: Using the SSIS Import/Export Wizard 402


Before You Begin

To complete the lessons in this chapter, you must have

■ A computer that meets the hardware and software requirements for Microsoft SQL Server 2005

■ SQL Server 2005 Developer, Workgroup, Standard, or Enterprise Edition installed

Real World

Daren Bieniek

My work since the mid-1990s has focused mostly on business intelligence (BI) and data warehousing, so I have loaded a lot of flat files into many databases. In fact, I have loaded hundreds of terabytes of data from flat files (nearly all into SQL Server), and I consider flat files an excellent choice for loading databases large or small.

From my experience, here is a quick story about the importance of using the appropriate file formats for data loads. I was working with a client who was bringing in data from several systems, more than 25 GB a week in flat files, and the client suggested that we leave behind the “old” flat files and move to the “newer” XML files. The client could not give me any good reasons why he wanted to change, other than saying it was a general industry direction. I protested and told the client that this is not one of XML’s strengths and that the company would incur unnecessary overhead. However, the client insisted that we run a test, and I did.

First, the client’s 25 GB in flat files grew to more than 100 GB as XML files because of XML’s tag overhead, so we now needed four times the storage, and bringing the files across the network took more than four times as long. Second, while loading from the XML files, processor utilization increased substantially (from the overhead of XML tag parsing, because tags now made up more than 75 percent of the files’ size), and other resources were also more heavily taxed during this time. Additionally, the load time tripled, causing the load to now extend past the maintenance window. Having learned his lesson, the client immediately decided that it was best to stay with the flat files. The moral of this story is that you should use the format that best fits the data you are loading, not switch to the “latest” format just because it is there.


Lesson 1: Preparing to Work with Flat Files

Before starting the file imports, it is important to review the factors that influence logging behavior and performance of the bulk data loads. You need to consider factors related to the source of the import, the import mechanism you are using, and the destination of the data. You also need to make sure the database you’re loading into is set to the Bulk-Logged recovery model.

After this lesson, you will be able to:

■ List items that affect the logging and performance of bulk operations.

■ Explain the impact of recovery models during bulk loads.

■ Change the recovery model for a database in preparation for a bulk load.

Estimated lesson time: 15 minutes

Source File Location

The source of the data is important because it is a major determining factor in the speed and complexity of the import. For example, if the source is a flat file on a network share, other factors outside of the import server’s control can influence performance. These factors include network performance and file server performance. Regardless of how fast the import mechanisms and data destination are, the import will run only as fast as the source data can be read. Therefore, it is important to consider the performance of the data source as a factor in determining overall import performance. As with any operation on a computer, the process is only as fast as the slowest component involved.

Import Mechanism

The import mechanism (bcp, BULK INSERT, OPENROWSET, or SSIS) you choose is important in many ways, most of which we will explore later in this chapter. However, keep in mind that although there is substantial overlap in the functionality of the different import mechanisms, they each have their place for certain types of imports.

Data Destination

The destination of the data is probably the single most important factor in determining not only the performance of your import, but also its overall impact on the server. Included in the definition of data destination are the database server, the database, and the data structure. The database server in general is important because its overall design and usage plays a major role in determining the method and performance of the data load. However, a discussion of server design is outside the scope of this book. The next factor is the database itself. You need to ask many questions about your database to determine which data-load mechanism works best. What level of uptime is needed? Is there a maintenance window during which you can load the data? What recovery model is being used? Many other database factors can affect your decision. The last data destination item that affects the data import is the data structure, or table design, itself. Does the table have clustered and/or nonclustered indexes? Does the table have active constraints or triggers? Does the table already have several million rows, or is it empty? Is the table a source for replication?

A Best-Case Scenario

The best-case scenario is bulk-loading data into an empty heap (a table with no indexes) that is not involved in replication, with no constraints or triggers, with the database placed into the Bulk-Logged recovery model, and during a maintenance window. Here is what makes this a best-case scenario.

First, the database is using the Bulk-Logged recovery model. This model differs from the Full recovery model in many ways, one of which is that bulk-load operations are minimally logged, so the transaction log will not be filled by the bulk-load operation. There are several caveats surrounding minimal logging. For example, if the table that is being bulk-loaded already has data and has a clustered index, the bulk load will be fully logged, even if the database is using the Bulk-Logged recovery model. (See the sidebar titled “Ensuring Minimal Logging” for more information.)

Ensuring Minimal Logging

You use the Bulk-Logged recovery model to minimize bloating the transaction log during bulk loads. However, it is important to remember that simply setting the recovery model to Bulk-Logged is not enough. Other conditions must be met for minimal logging to occur. The following conditions are necessary for minimal logging:

■ Database recovery model is set to Bulk-Logged

■ Table is not replicated

■ TABLOCK hint is used

■ Destination table meets population and indexing requirements (as shown in Table 10-1)


Table 10-1 shows the level of logging (Minimal, Index, or Full) that will occur under different circumstances.

Note that the table population and indexing criteria are applied at the batch level, not the load level. Therefore, if you load 100,000 rows in 10 batches with 10,000 rows per batch into an empty table with a clustered index, SQL Server logs the first 10,000 rows minimally and fully logs the remaining rows (90,000).

Quick Check

■ Why is it useful to switch the recovery model to Bulk-Logged before bulk-loading data?

Quick Check Answer

■ Switching from the Full to the Bulk-Logged recovery model lets the database possibly perform minimal logging during the data load. Data that is loaded during a bulk load usually has no need for the point-in-time recovery capability of the Full recovery model. Decreasing the volume of log writes improves performance and helps alleviate the log bloat that occurs during bulk loads.

It is important to performance that you complete the bulk load during a maintenance window. The obvious reason is so that the bulk load won’t have to contend with users for server resources. But the less obvious reasons are that the bulk load can use a table lock, and the recovery model can be altered. The load operation can acquire a table lock instead of the more granular locks that it would acquire otherwise. A table lock is not only more efficient, it is also required for minimal logging to occur. Additionally, most databases operate using the Full recovery model during normal usage. Therefore, if you perform the bulk load during a maintenance window, you can switch the database to the Bulk-Logged recovery model. Although you can switch the database to the Bulk-Logged recovery model during normal usage, certain recovery capabilities are lost, such as point-in-time recovery.

Table 10-1   Logging Level Under Different Conditions

                   Clustered Index          Nonclustered Indexes
                   Yes         No           Yes         No
Table Empty        Minimal     Minimal      Minimal     Minimal
Table Has Data     Full        Minimal      Index       Minimal

To switch to the Bulk-Logged recovery model, use either the ALTER DATABASE Transact-SQL command or SQL Server Management Studio (SSMS). An example of using ALTER DATABASE to set the recovery model to Bulk-Logged follows:

ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED;

After you complete the bulk loads, you should switch the database back to the Full recovery model and immediately perform a transaction log backup. Doing so reenables point-in-time recovery from the time of the log backup forward. This log backup not only stores the minimal logging that occurred during the bulk load but also places a copy of the bulk-loaded data into the log backup. This distinction is important because the log backup needs access to the log files and the data files that were the destination of the bulk load. Starting a log backup while a bulk load is occurring to the same data file might introduce contention, which causes both operations to occur more slowly than they would separately. Therefore, it is usually wise to wait until you have finished the bulk loads and placed the database back into Full recovery mode before starting the log backup.
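A minimal sketch of those two steps; the backup file path is illustrative only:

ALTER DATABASE AdventureWorks SET RECOVERY FULL;

BACKUP LOG AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_log.trn';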

SQL Server Recovery Models

SQL Server provides three recovery models: Full, Bulk-Logged, and Simple. For the most part, the recovery model affects the way SQL Server uses and manages a database’s transaction log.

The Full recovery model records every change caused by every transaction at a granular level, which allows for point-in-time recovery. You must back up the transaction log to allow SQL Server to reuse log space.

The Bulk-Logged recovery model is similar to the Full recovery model, but varies when you bulk load data. If certain conditions are met, the Bulk-Logged recovery model does not record the row inserts at a granular level; instead, it logs only extent allocations, which saves a significant amount of log space. Like the Full recovery model, you must perform a transaction log backup for SQL Server to reuse log space.

The Simple recovery model is the same as Bulk-Logged, except that you do not need to back up the transaction log for space to be cleared and reused. Therefore, when you use the Simple recovery model, transaction log backups are not possible. For more information about recovery models, see Chapter 2, “Configuring SQL Server 2005.”


PRACTICE Change the Recovery Model

In this practice, you will change the recovery model of the AdventureWorks database from Full to Bulk-Logged and back again.

1 Open SSMS.

2 In the Connect To Server window, specify a Server type of Database Engine, enter the appropriate Server name, and use the appropriate Authentication information for your environment. Click Connect.

3 Press Ctrl+N to open a new query window.

4 To see the current recovery model that AdventureWorks is using, type the following command:

SELECT DATABASEPROPERTYEX('AdventureWorks', 'Recovery');

If you are still using the default recovery model, the query should return ‘FULL’. If anything else is returned, just use the command from step 7 to change the recovery model back to Full.

5 In the query window, above the SELECT command from step 4, type the following command to set the recovery model to Bulk-Logged:

ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED;

Now, the query window should look like the following:

ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED;

SELECT DATABASEPROPERTYEX('AdventureWorks', 'Recovery');

6 Click Execute, and the result set should now show ‘BULK_LOGGED’, which means that you have successfully changed the recovery model to Bulk-Logged.

7 In the query window, replace the words BULK_LOGGED with FULL so that the

query window now reads as follows:

ALTER DATABASE AdventureWorks SET RECOVERY FULL;

SELECT DATABASEPROPERTYEX('AdventureWorks', 'Recovery');

8 Click Execute, and the result set should now show ‘FULL’, meaning that you have successfully changed the recovery model back to Full.

Lesson Summary

■ Many factors are involved in efficiently bulk-loading data, including the characteristics of the data source, the bulk-load mechanism, and the destination of the import.

■ Placing a database into the Bulk-Logged recovery model helps to minimize the bloating of the transaction log during a bulk load, but only if several other requirements are met.

Lesson Review

The following questions are intended to reinforce key information presented in this lesson. The questions are also available on the companion CD if you prefer to review them in electronic form.

A It is safer to set the recovery model of a database to Bulk-Logged when it is

not in use by end users

B Minimal logging requires that a table have a clustered index, and clustered

indexes can be created only when the database is in single-user mode

C A table lock must be acquired to minimize logging, and this is not practical

during regular usage

D bcp can be run only when the database is in single-user mode.


Lesson 2: Running the bcp Utility

One of the oldest and most well-known methods of bulk loading data into a SQL Server database is by using the bcp command-line utility. Many people consider bcp to be the “quick and easy” method of bulk loading data, and they are mostly right. In this lesson, you learn what bcp is good for and what it is not good for. Then you will see how to use bcp to import data into SQL Server.

After this lesson, you will be able to:

■ Explain the use of the bcp command-line utility.

■ Explain certain situations when bcp should not be used.

■ List certain common bcp parameters and explain their use.

■ List the permissions necessary for a user to bulk-load data into a table by using bcp.

■ Execute the bcp command to import data.

Estimated lesson time: 15 minutes

What Is bcp?

The abbreviation bcp stands for bulk copy program. Because bcp is a program, you do not execute it from within a query window or batch, but rather from the command line. It is an external program, which means it runs outside of the SQL Server process. As its name indicates, you use bcp to bulk copy data either into or out of SQL Server. However, this lesson primarily explores the import or loading of data.

Here are two limitations to keep in mind about bcp:

■ bcp has limited data-transformation capabilities. If the data that you are loading needs to go through complex transforms or validations, bcp is not the correct tool to use.

■ bcp has limited error-handling capabilities. bcp might know that an error occurred while loading a given row, but it has limited reaction options. Based on the settings you use during the bcp load, bcp can react to an erroneous row by either erroring out of the bcp load or by logging the row and error (up to a user-specified maximum count) and then erroring out of the bcp load. The program does not have the native capability to recover and retry a given row or set of rows during the same load process, as SSIS might do, or to send a notification to someone about the errors that occurred.


bcp Command-Line Syntax

The syntax for the bcp command is as follows:

bcp {[[database_name.][owner].]{table_name | view_name} | "query"}

{in | out | queryout | format} data_file [-mmax_errors] [-fformat_file] [-x] [-eerr_file]

[-Ffirst_row] [-Llast_row] [-bbatch_size]

[-n] [-c] [-w] [-N] [-V (60 | 65 | 70 | 80)] [-6]

[-q] [-C { ACP | OEM | RAW | code_page } ] [-tfield_term]

[-rrow_term] [-iinput_file] [-ooutput_file] [-apacket_size]

[-Sserver_name[\instance_name]] [-Ulogin_id] [-Ppassword]

[-T] [-v] [-R] [-k] [-E] [-h"hint [, n]"]

As you can see, there are many parameters and options. The following discussion centers on the most frequently used bcp parameters.

MORE INFO bcp parameters

For a full description of all the parameters available for bcp, see the SQL Server 2005 Books Online

topic “bcp Utility.” SQL Server 2005 Books Online is installed as part of SQL Server 2005. Updates for SQL Server 2005 Books Online are available for download at www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx.

Flat files can come in many formats: with or without header rows, varying field delimiters or row delimiters, and so on. Some of the parameters that help with these variances are -t, -r, and -F.

IMPORTANT Parameters are case-sensitive

Note that bcp parameters are case-sensitive. Therefore, -t and -T are different and unrelated parameters.

-t defines the column delimiter or field “t”erminator. The default for this parameter is \t (tab character), or tab delimited. If you are familiar with importing and exporting files in Microsoft Office Excel, you are probably familiar with tab-delimited files.

-r defines the “r”ow delimiter or “r”ow terminator. The default for this parameter is \n (newline character).

-F defines the number of the “F”irst row to import from the data file. This parameter can be useful in many ways, including telling bcp to skip the first row because it is the file header. You can also use -F in a case in which part of a file has been processed and you want to restart processing where it left off.


NOTE Most common bcp parameters

The bcp parameters -t, -r, and -F are the most commonly used parameters for bulk importing an

ASCII character file.

bcp Hint Parameter

In addition to the previously mentioned commonly used bcp parameters, the -h or “h”int parameter can have a substantial impact on both performance and logging overhead of the data-load operation. Unlike some of the other bcp parameters, you use the -h parameter to specify a set of hints for use during the bulk import. There are several hints you can use, including TABLOCK and ORDER. You use the TABLOCK hint to tell the bcp command to use a table lock while loading data into the destination table. As noted before, using a table lock decreases locking overhead and allows the Bulk-Logged recovery model to perform minimal logging. Use the ORDER hint to specify that the records in the data file are ordered by certain columns. If the order of the data file matches the order of the clustered index of the destination table, bulk-import performance is enhanced. If the order of the data file is not specified or does not exactly match the ordering of the clustered index of the destination table, the ORDER hint is ignored.
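A hedged sketch combining both hints; the data file path is illustrative, and the ORDER clause assumes the destination table’s clustered index is on ProductID:

bcp AdventureWorks.Production.Product in "C:\Imports\product.dat" -T -c -h "TABLOCK, ORDER(ProductID ASC)"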

Exam Tip The Hint parameter applies only to importing data from a file to a table. When used with out, queryout, or format, the Hint parameter is ignored.

Exam Tip Both the bcp TABLOCK and ORDER hints are important for import performance. But ORDER is useful only if it exactly matches the sort order of the destination table’s clustered index.

bcp Permissions

The minimum security permissions a user needs to successfully import data into a table by using bcp are the SELECT and INSERT permissions. However, unlike SQL Server 2000, SQL Server 2005 requires that the user have ALTER TABLE permission to suspend trigger execution, to suspend constraint checking, or to use the KEEPIDENTITY option.
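A minimal sketch of the corresponding grants; the table and user names are illustrative only:

-- SELECT and INSERT are always required for a bcp import;
-- ALTER is needed only when triggers/constraints are suspended or KEEPIDENTITY is used
GRANT SELECT, INSERT, ALTER ON dbo.Exam TO BulkLoadUser;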


Quick Check

What permissions are needed to run the following bcp command?

bcp Table1 in c:\test.txt -T -c

Quick Check Answer

First, the -T parameter instructs bcp to use a trusted connection, which means that all database work will be done using the permissions granted to the Microsoft Windows user executing the command. Second, to import data with bcp, the user must have SELECT and INSERT permissions on the target table. Finally, the defaults that are implied by the command are that triggers and constraints will be disabled; therefore, the user also needs ALTER TABLE permission.

PRACTICE Importing Data by Using bcp

In this practice, you create the necessary objects and run a bcp import to a table.

NOTE See the companion CD for practice files

Lessons 2, 3, and 4 in this chapter use the files in the \Practice Files\Chapter 10 folder on the companion CD.

 Practice 1: Prepare the Environment

In this practice, you create a database, a table, a folder, and a file to be used for testing purposes. The folder stores the import file and text files that contain the script to create the table and some commands that are pretyped to help you move quickly through the exercise.

1 In the root folder of the C drive, create a folder named FileImportPractice.

2 Copy all the files in the \Practice Files\Chapter 10 folder on the companion CD to the folder you just created.

3 Open SSMS and connect to the Database Engine.

4 Create a database named FileImportDB. It does not need to be very large (10 MB should be enough), and for our learning purposes, you should configure the database to use the Simple recovery model.

5 Using Windows Explorer, in the FileImportPractice folder, double-click the

ExamTableCreateScript.sql file


6 A Connect To Database Engine dialog box opens. Make sure that you connect to the test server in which you created the FileImportDB database.

7 Click Execute to run the script and create a table named Exam within the FileImportDB database.

8 Verify that the script ran without error and that the Exam table was created.

9 Become familiar with the ExamImportFile.txt file. (Open the file in Notepad.) It is ANSI character data, with four columns separated (delimited) by tabs, and rows delimited by the newline character. Also note that the fourth column is always empty (NULL in our case); you will use the fourth column in a later practice. The four columns in the file are ExamID, ExamName, ExamDescription, and ExamXML, in that order.

10 Don't open any of the bcp, BulkInsert, or OpenRowSet command files yet. They are included in case you have difficulty in later practices.
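The following is a minimal Transact-SQL sketch for step 4; sizes and file locations are left at SQL Server's defaults, so adjust them if your environment requires it:

-- Create the practice database and switch it to the Simple recovery model
CREATE DATABASE FileImportDB;
ALTER DATABASE FileImportDB SET RECOVERY SIMPLE;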

 Practice 2: Run bcp

In this practice, you run the bcp command to import 500 rows into the new Exam table that you created in Practice 1.

1 Open Notepad.

2 Try to formulate the proper bcp command to copy the ExamImportFile.txt into the FileImportDB..Exam table. Remember that the defaults for column and row terminators are \t (tab) and \n (newline), respectively.

3 When you think you have the right command, paste it into a command prompt and run it. It doesn't matter if the data is imported more than once, so you do not need to clear the table between attempts.

4 Hints: If you are having trouble, remember that there are actually several ways to form the bcp command properly. However, the quickest is to use the -c parameter, which means that the import file is character data and defaults to using \t (tab) and \n (newline) as column and row terminators.

5 You also need to specify how to connect to the SQL Server. The easiest and best way to do this is to simply use the -T parameter, which instructs bcp to connect using a trusted connection (Windows Authentication).

6 Therefore, here is the simplest command:

bcp FileImportDB..Exam in "c:\FileImportPractice\ExamImportFile.txt" -T -c

Trang 37

7 If you like, the command is also available in the bcpImportCommand.txt file. Simply copy it to the command prompt and run it. You should get a message saying that 500 rows were imported. The message also tells you how long the import took and how many rows per second it extrapolates to.

Lesson Summary

■ bcp is an out-of-process command-line utility for importing or exporting data quickly to or from a file.
■ bcp has extremely limited data-transformation and error-handling capabilities.
■ bcp provides numerous parameters that give you substantial flexibility in using the utility. The -t, -r, and -F parameters are the most commonly used parameters for bulk importing an ASCII character file.
■ Certain bcp hints, such as TABLOCK, must be used for minimal logging to occur.

Lesson Review

The following questions are intended to reinforce key information presented in this lesson. The questions are also available on the companion CD if you prefer to review them in electronic form.

NOTE Answers

Answers to these questions and explanations of why each answer choice is right or wrong are located in the “Answers” section at the end of the book.

1 When loading data from a file that uses a comma for the field delimiter and newline for the row delimiter, and the file has a header row at the beginning, which arguments MUST you specify? (Choose all that apply.)

A -T

B -t

C -r

D -F



Lesson 3: Performing a BULK INSERT Task

BULK INSERT is the in-process brother to the out-of-process bcp utility. The BULK INSERT Transact-SQL command uses many of the same switches that bcp uses, although in a less cryptic format. For example, instead of using -t to designate the column terminator, as it is in bcp, you can use FIELDTERMINATOR =, which is much easier to read and remember. In this lesson, you learn the differences between bcp and BULK INSERT, and see how to use BULK INSERT to insert data into a SQL Server table.
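As a quick illustration, the same comma-delimited import could be written either way; the database, table, and file names below are hypothetical:

bcp SalesDB.dbo.Orders in "c:\orders.csv" -T -c -t ","

BULK INSERT SalesDB.dbo.Orders
FROM 'c:\orders.csv'
WITH (DATAFILETYPE = 'char', FIELDTERMINATOR = ',');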

After this lesson, you will be able to:

■ Explain the differences between bcp and BULK INSERT.
■ Explain certain situations when BULK INSERT should not be used.
■ List certain common BULK INSERT parameters and explain their use.
■ List the permissions necessary for a user to bulk load data into a table by using BULK INSERT.
■ Execute a BULK INSERT command to import data into SQL Server.

Estimated lesson time: 15 minutes

Differences Between BULK INSERT and bcp

Two of the biggest differences between bcp and BULK INSERT are that BULK INSERT can only import data and it executes inside SQL Server. Whereas bcp can either import or export data, BULK INSERT (as its name implies) can only import (insert) data into a SQL Server database. Also, bcp is executed from the command line and runs outside of the SQL Server process space, meaning that all communications between bcp and SQL Server are done via InterProcess Communications (IPC). In contrast, BULK INSERT runs inside the SQL Server process space and is executed from a query window or query batch. Other than these two differences and some minor variations in security, the commands behave almost exactly the same.

NOTE bcp vs BULK INSERT

bcp runs out-of-process and is executed from the command line, whereas BULK INSERT runs in-process and is executed from a query window or Transact-SQL batch.

Trang 39

All of the caveats that apply to bcp for minimal logging—for example, there can be no clustered index on a populated table and you must use the TABLOCK hint—also apply to BULK INSERT.

Following is the syntax for the BULK INSERT command:

BULK INSERT
   [ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]
      FROM 'data_file'
     [ WITH
      (
   [ [ , ] BATCHSIZE = batch_size ]
   [ [ , ] CHECK_CONSTRAINTS ]
   [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
   [ [ , ] DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' } ]
   [ [ , ] FIELDTERMINATOR = 'field_terminator' ]
   [ [ , ] FIRSTROW = first_row ]
   [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
   [ [ , ] LASTROW = last_row ]
   [ [ , ] MAXERRORS = max_errors ]
   [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
   [ [ , ] ROWS_PER_BATCH = rows_per_batch ]
   [ [ , ] ROWTERMINATOR = 'row_terminator' ]
   [ [ , ] TABLOCK ]
      ) ]

Any files you extract from a SQL Server database by using bcp, including those you extract in native formats, you can load into a SQL Server database by using BULK INSERT.

MORE INFO BULK INSERT parameters

For a detailed description of the BULK INSERT command's many options, see the SQL Server 2005 Books Online topic “BULK INSERT (Transact-SQL).”

Let's look at the same parameters we discussed for bcp to compare what they look like in BULK INSERT.



■ FIELDTERMINATOR Specifies the field or column terminator or delimiter. As with the bcp -t parameter, the default value is \t (tab character). To explicitly declare a different field terminator, such as | (the pipe character), you would specify the following as part of the BULK INSERT command:

FIELDTERMINATOR = '|'

■ ROWTERMINATOR Specifies the row terminator or delimiter. As with the bcp -r parameter, the default value is \n (newline character). To explicitly declare a different row terminator, such as |>| (the pipe, greater than, and pipe characters concatenated together), you specify the following as part of the BULK INSERT command:

ROWTERMINATOR = '|>|'

■ FIRSTROW Specifies the first row in the file that will be inserted into the table. As with the bcp -F parameter, FIRSTROW can be used to skip a header row or to restart the loading of a file at a certain row number. To explicitly declare a row to start at, such as row 2, you would specify the following as part of the BULK INSERT command:

FIRSTROW = 2
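Putting these options together, a statement along the following lines would load the practice file from Lesson 2; it assumes the same FileImportDB database, Exam table (in the dbo schema), and file path used earlier, so adjust the names for your environment:

BULK INSERT FileImportDB.dbo.Exam
FROM 'c:\FileImportPractice\ExamImportFile.txt'
WITH
(
    DATAFILETYPE = 'char',    -- character data, the BULK INSERT counterpart of bcp -c
    FIELDTERMINATOR = '\t',   -- tab-delimited columns (also the default)
    ROWTERMINATOR = '\n',     -- newline-delimited rows (also the default)
    TABLOCK                   -- table lock, one of the requirements for minimal logging
);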

BULK INSERT Permissions

When it comes to BULK INSERT security, there are a few things to note, especially because SQL Server 2005 handles security differently from SQL Server 2000. SQL Server 2005 varies from SQL Server 2000 in how it verifies file access permissions. In SQL Server 2000, it didn't matter what type of login was used (Windows user or SQL login); the BULK INSERT command would access the import file by using the security privileges of the SQL Server service account. This was a potential security issue that might allow users to get access to a file that their Windows user accounts could not get to directly. In SQL Server 2005, using an integrated login, the BULK INSERT command uses the file access privileges of the user account that is executing the query, not the SQL Server service account. The only exception to this is if SQL Server is operating in Mixed Mode, and the BULK INSERT command is executed by a SQL Server login that does not map to a Windows user account. In this case, SQL Server still uses the file access permissions of the SQL Server service account.

In addition, to use the BULK INSERT command, the user executing the BULK INSERT command must have at least INSERT and ADMINISTER BULK OPERATIONS permissions. And if the BULK INSERT command will suspend trigger execution, suspend constraint checking, or use the KEEPIDENTITY option, the user must also have ALTER TABLE permissions.
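A rough sketch of those grants might look like the following; the login, user, and table names are hypothetical, and ADMINISTER BULK OPERATIONS is a server-level permission granted to the login rather than to the database user:

-- Server-level permission for bulk operations (run in master, granted to the login)
GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\ImportUser];
-- Database-level permission on the target table
GRANT INSERT ON dbo.Exam TO ImportUser;
-- Needed only when triggers/constraints are suspended or KEEPIDENTITY is used
GRANT ALTER ON dbo.Exam TO ImportUser;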
