SQL Server Tacklebox – P18



…the data in them needs to be regularly resynched with the source. The added benefit of using a tool such as log shipping is that you can segregate reporting processes from transactional processes, which can offer performance gains in some scenarios.

Log shipping vs mirroring vs replication

There are three main tools that you can choose from when implementing an "HA" solution for data migration:

• Log Shipping – I have used log shipping for a variety of data migration tasks and it is probably the most reliable mechanism that I have found. However, it does have its downsides, and a big one is that refreshing the data requires everyone to be out of the target database, and the data is only as fresh as the last restore on the target.

Log shipping can be set up natively, or using third party tools like Red Gate SQL Backup, which sets up all of the automated schedules for you and also compresses the backed-up logs, making for much faster data loads. In terms of financial cost, the only significant consideration is the secondary server instance, though you can also log ship to the same server, as I will show. The space required to store log backups is much less than the space required to implement a solution that performs a full backup and restore of a database, and the time to synch two databases via log shipping is drastically lower.

• Native SQL Server Replication – This is one of the few solutions that provide near-real-time data on a target. However, to be quite honest, I have avoided native SQL replication for a long time. It is not so much the overhead of maintenance or administration that has prevented me from deploying it to production, but the learning curve of the technology and the need for "compliant schemas". In order to use native replication, the database schema has to fit into a normalized structure, with unique primary keys defined for each replicated table (a quick way to spot non-compliant tables is sketched after this list). For whatever reason, many third party vendors do not always adhere to this requirement.

• Database Mirroring (SQL Server 2005, with database snapshots) – This technology was introduced in SQL Server 2005. Mirroring is a way of introducing high availability to SQL Server by allowing the secondary server to become the main server, instantly. In a mirrored setup, the secondary database is never online, or accessible to the end user, until failure occurs on the source. The only way to get around this, if you wish to offload reporting to the secondary server, is to set up database snapshots. Unfortunately, snapshots are only available in the Enterprise edition of SQL Server, so cost is definitely a factor for this solution. As such, Database Mirroring is primarily used for high availability, rather than as a more casual data migration technique.
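Before committing to native replication, it is worth checking how "compliant" a given schema really is. The following query is a minimal sketch (run in the database being evaluated, on SQL Server 2005 or later) that lists user tables with no primary key, which would keep them out of a transactional publication:

-- List user tables that have no primary key and therefore cannot be
-- published for transactional replication.
SELECT s.name AS schema_name,
       t.name AS table_name
FROM   sys.tables AS t
       JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE  OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
ORDER BY s.name, t.name;

A third party database that returns a long list here is a strong hint that log shipping will be less painful than retrofitting keys.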


It is when you start using these techniques that the issues of cost, space and time really come to the fore, and it is important to understand what problems each will solve, compared to their cost. Native replication and database mirroring, while certainly valid solutions, come with a higher price tag if you want to cross-breed a high availability solution with a reporting solution.

While I may choose replication or mirroring as options at some stage, so far I have found that, in my career as a stolid, bang-for-the-buck DBA and manager, log shipping has come out ahead of the three solutions nearly 100% of the time when considering cost, space and time. Therefore, it is the solution I will focus on here.

Log shipping considerations

Log shipping is a method, based on SQL Server Agent jobs, by which the transaction log backups from a primary server are applied to secondary servers. In this way, one can keep one or more spare "warm standby" servers in a state of readiness to take over in the event of failure of the primary server.
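Under the hood, those Agent jobs simply repeat a backup, copy and restore cycle. The T-SQL below is a minimal sketch of one iteration, assuming the DBA_Rep source database used later in this chapter, a hypothetical share \\FileServer\LogShip, and a secondary database DBA_Rep1 that was initialized from a full backup restored WITH NORECOVERY:

-- On the primary: back up the transaction log to the share
-- (the share path and file name are placeholders).
BACKUP LOG DBA_Rep
TO DISK = N'\\FileServer\LogShip\DBA_Rep_20090704_0200.trn';

-- On the secondary: apply the log backup to the warm standby copy.
-- WITH STANDBY keeps DBA_Rep1 readable between restores; use
-- WITH NORECOVERY instead if nobody needs to query the copy.
RESTORE LOG DBA_Rep1
FROM DISK = N'\\FileServer\LogShip\DBA_Rep_20090704_0200.trn'
WITH STANDBY = N'\\FileServer\LogShip\DBA_Rep1_undo.tuf';

The native log shipping jobs, or a tool such as Red Gate SQL Backup, schedule exactly this cycle and keep track of which log files have already been applied.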

Log shipping is a solution that sounds like it would be a breeze to set up, but there are often complications. Let's reconsider our (slightly modified) original requirements:

• Migrating roughly 15 GB worth of data a month

• Data needs to be refreshed daily

• Need to migrate the whole database

• Developers/Analysts need access permission to the target database

• Indexes do not need to be applied independently of the source

• Source databases are both SQL Server 2005

In this case the log shipping solution sounds straightforward until … you discover that the source database is in Simple recovery mode, so you can't take log backups. Also, wait a second, how are we going to add permissions that are different from the source, when the target database will be in read-only/standby mode, meaning I cannot add users to it? This has gotten a bit more complex than I may have anticipated.
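The first of those snags can at least be confirmed, and corrected if the application can tolerate full recovery, with a couple of lines of T-SQL. This is a minimal sketch; DBA_Rep stands in for whatever the real source database is called:

-- Check the current recovery model of the source database.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = N'DBA_Rep';

-- Switch to FULL recovery so that log backups (and log shipping) become
-- possible, then take a full backup to start the log chain.
ALTER DATABASE DBA_Rep SET RECOVERY FULL;
BACKUP DATABASE DBA_Rep
TO DISK = N'\\FileServer\LogShip\DBA_Rep_full.bak';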

The time required for log shipping is not insubstantial. On a 1 Gigabit network, whether the source and target are on different servers or the source and target databases are on the same SQL Server instance, it is going to take time to back up and restore the data on the target. However, this time is negligible if the work is done in off-peak hours, such as in the early AM before the start of business operations. Also, it is easy to gauge the time it takes to back up, transfer and restore the log file. Furthermore, you can reduce the required time by compressing the log backups with a tool such as Red Gate SQL Backup. However, one then needs to add the cost of this tool to the overall cost of the solution.
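Past behavior is a reasonable guide when gauging that time. A sketch like the one below, run against the msdb backup history (the DBA_Rep database name is again just a stand-in), shows how long recent log backups took and how large they were:

-- Duration and size of the most recent log backups for the source
-- database, from the msdb backup history.
SELECT TOP (20)
       bs.database_name,
       bs.backup_start_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_seconds,
       bs.backup_size / 1048576.0 AS backup_size_mb
FROM   msdb.dbo.backupset AS bs
WHERE  bs.database_name = N'DBA_Rep'
  AND  bs.type = 'L'              -- 'L' = transaction log backup
ORDER BY bs.backup_start_date DESC;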

However, what if there were 2 GB worth of log data, and the target server was reached via a 3 MB WAN connection? What if the request was for more than one target? What could have taken 15 minutes, on first analysis, is now taking 45 minutes or more, and pushing past the start of business. DBAs constantly find themselves making accommodations based on unexpected changes to the original requests. Proper communication and expectations need to be set upfront so that planning follows through to execution as seamlessly as possible, with contingencies for circumstances of failure.

Don't forget also that if the target database ever gets out of synchronization with the source logs for whatever reason (it happens), then the entire database needs to be restored from a full backup to reinstate log shipping. If the full database is over 200 GB and you have a slow WAN link, then this could be a big issue. Those 45 minutes just became hours. No one likes seeing a critical, tier 1 application down for hours.
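Reinitializing the secondary is the same restore sequence used to set it up in the first place. A minimal sketch, reusing the hypothetical names from earlier (the logical file names and paths are placeholders), looks like this:

-- Rebuild the secondary from a fresh full backup of the source, leaving
-- it in a restoring state so that log backups can be applied again.
RESTORE DATABASE DBA_Rep1
FROM DISK = N'\\FileServer\LogShip\DBA_Rep_full.bak'
WITH MOVE N'DBA_Rep' TO N'D:\Data\DBA_Rep1.mdf',
     MOVE N'DBA_Rep_log' TO N'L:\Logs\DBA_Rep1_log.ldf',
     NORECOVERY, REPLACE;

-- From here, the log shipping restore job (or manual RESTORE LOG
-- statements) can pick up the chain of transaction log backups again.

It is the time taken to move that full backup across the slow link, not the T-SQL itself, that turns minutes into hours.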

Finally, there will be a need to store this redundant data. As you add servers to the mix, the amount of space required grows proportionately. Soon, the 200 GB database, growing at a rate of 2 GB per day, becomes a space management nightmare. It is always best to perform upfront capacity planning, and to overestimate your needs. It is not always easy to add disk space on the fly without bringing down a server, which is especially true of servers with local disk arrays rather than SAN-attached storage. If you have SAN storage, the process is more straightforward, but it comes with issues as well. Also, consider whether the disk subsystem is using slower SATA drives (often used for QA and Dev environments) or faster SCSI drives, which are more expensive per GB, to the tune of thousands of dollars.
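The same msdb history can drive the capacity estimate. This sketch (again assuming the DBA_Rep database name) totals the log backup volume per day, which gives a defensible figure for how much space each log-shipped copy, and its retained log files, will consume:

-- Total transaction log backup volume per day for the source database,
-- as a rough basis for log shipping capacity planning.
SELECT CONVERT(char(8), bs.backup_start_date, 112) AS backup_day,   -- YYYYMMDD
       SUM(bs.backup_size) / 1073741824.0          AS log_backup_gb
FROM   msdb.dbo.backupset AS bs
WHERE  bs.database_name = N'DBA_Rep'
  AND  bs.type = 'L'
GROUP BY CONVERT(char(8), bs.backup_start_date, 112)
ORDER BY backup_day DESC;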


Setting up log shipping

With all of the considerations for log shipping out of the way, I want to take a quick look at how to set up a log shipping solution. Fortunately, Microsoft SQL Server has made it very easy to implement. One requirement is that the source database must be in Full recovery mode. As you can see in Figure 3.17, showing the Properties dialogue of the DBA_Rep database, Transaction Log Shipping has its own set of properties, including the backup settings for the source database.

Figure 3.17: Transaction log shipping for DBA_Rep database

If you click the Backup Settings button, you can specify:

• A network share to store the transaction log backups for the source database

• A retention policy, expressed as the number of hours to keep the backup log files

• A backup job name and schedule. Transaction Log Shipping uses the SQL Server Agent service to schedule both the source log backups and the restores on the target server (which can be the same as the source)

The settings I chose are shown in Figure 3.18


Figure 3.18: Selecting Transaction Log Backup settings

Having specified the backup location, it is time to add the information for the secondary database, where the source transaction logs will be applied. A nice feature, when setting up log shipping for the first time, is the ability to let Management Studio ready the secondary database for you, by performing the initial backup and restore, as seen in Figure 3.19. The secondary database, non-creatively enough, is DBA_Rep1.

Most often, you will want to transfer the log files to a different server from the source. By selecting the "Copy Files" tab in Figure 3.19, you can specify where the copied transaction log backup files will reside for the subsequent restores to the target database. However, in our simple example, both the backup and restore will reside on the same server, as shown in Figure 3.20.
