
Oracle Essbase 9 Implementation Guide - P73




The Data file cache setting is the size of the buffer used to load the database page files into memory. This setting is only relevant if you have your database set to Direct I/O. Buffered I/O is the default, and only on the largest of data loads has tweaking this setting made any noticeable difference. The Data file cache setting can be set as large as the combined total size of all of the database page files. On a 30 GB database, setting the page file cache that large may not be practical, or even possible, especially if you only have 16 GB of system RAM!

The Data cache setting is usually fine left where it is, at the default setting of 3072 KB. The recommended maximum size is 0.125 times the size of the Data file cache setting. Only change this setting if you are experiencing performance issues and have many concurrent users accessing the database.
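
If you do decide to adjust these caches, the changes can be scripted in MaxL rather than made through EAS. This is only a minimal sketch; the Esscar.Esscar database name is borrowed from the partitioning example later in this section, and the sizes are purely illustrative, so size them from your own page file totals and available RAM:

    /* The data file cache is only used when the database is set to Direct I/O */
    alter database Esscar.Esscar set data_file_cache_size 300mb;

    /* Data cache sized at roughly 0.125 times the data file cache, per the guideline above */
    alter database Esscar.Esscar set data_cache_size 37mb;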

The Index page setting is a static number and cannot be set by you. Oracle has determined that 8 KB is sufficient for a database index page size.

For better performance, Oracle advises keeping all of the databases on the same physical server set to either Buffered I/O or Direct I/O, and not a combination of both settings.

Data load and storage settings

We will now briefly cover the available options for optimizing your system's data loading and data storage capabilities. As is always recommended, the default settings are, in most cases, more than adequate, so you should make any changes carefully and test each one fully to determine whether the change is warranted.


On the Transactions tab of the Database Properties screen (seen in the previous screenshot), you will see options for Committed access and Uncommitted access. If you select Committed access, Essbase will hold all of the data blocks involved in a transaction until the changes or updates are committed to the system. This can be a problem for you because Essbase keeps duplicates of the data blocks until they are committed, so you will temporarily need double the space you actually need for data storage. Essbase does this in case a rollback is needed.

The default setting for transactional access is Uncommitted access. As is true with most Essbase settings, the default in this case is more than adequate for most systems, and the default Commit Blocks setting of 3000 works well too. Even in larger systems, we haven't been able to notice measurable differences when playing with this setting.

The 3000 Commit Blocks setting means that Essbase will commit, or make permanent, updates to data blocks three thousand data blocks at a time. This means that if a calculation is interrupted, for example, not all of the work up to the point of interruption will be lost. All work that has completed, in 3000 data block increments, will actually be saved to the database.
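
These transaction settings can also be applied in MaxL. The following is just a sketch against the Esscar.Esscar example database, mirroring the defaults described above:

    /* Uncommitted access (the default), committing updates every 3000 blocks */
    alter database Esscar.Esscar disable committed_mode;
    alter database Esscar.Esscar set implicit_commit after 3000 blocks;

    /* Or switch to committed access; remember this temporarily doubles the block storage needed */
    alter database Esscar.Esscar enable committed_mode;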

Finally, on the Storage tab of the Database Properties screen (seen above), we are allowed to configure the data loading I/O method and the data compression for storage.


As usual, the default Essbase setting for data load I/O, Buffered I/O, is more than adequate for most data load operations. Buffered I/O takes advantage of the file cache settings discussed earlier, and only in cases where there is an extremely large amount of data to load will it be noticeable that the system needs to swap in and out of virtual memory.

The Direct I/O setting is best for extremely large data loads. Direct I/O bypasses the cache and accesses system memory directly. If your system has lots of extra memory available, this option can provide a real boost to data loading performance.
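
If you do decide a database warrants Direct I/O, the switch can be made with a MaxL statement along these lines (a sketch only; the new mode generally takes effect the next time the database is started):

    /* Change the data load I/O method; use buffered to return to the default */
    alter database Esscar.Esscar set io_access_mode direct;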

Data compression can also be a factor in system performance, and once again Essbase has several options for you. Obviously, the No Compression setting can be the quickest for I/O because there is no extra processing time required to compress or uncompress the data as it is read. Even so, it is not recommended at all because the size of even an average database would grow to unmanageable proportions very quickly.

Can you guess what the best all-around compression setting is? Yes, it is the Essbase default of Bitmap encoding, what else? Overall, this setting uses space the most efficiently when compared to the other available compression types and has a lower than average I/O cost as well.

Essbase does offer Run-Length encoding as well, and this setting may be preferable for databases that have very low block density. Of course, you will need to do some experimenting to see if this type of data compression is right for your situation.

Lastly, Essbase offers you the choice of ZLIB compression. ZLIB compression can be useful if the density of your data blocks is extremely high. Again, you will need to experiment with this setting.
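
Before experimenting with compression types, it is worth checking your average block density, and both the check and the change can be done in MaxL. A minimal sketch against the Esscar.Esscar example database:

    /* Report block statistics, including average block density */
    query database Esscar.Esscar get dbstats data_block;

    /* Change the compression type; valid choices are bitmap (the default), rle, zlib, and no_compression */
    alter database Esscar.Esscar set compression rle;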

Partitioning databases

If you are at all familiar with database terminology, you know that partitioning a database, either relational or multidimensional, is almost always done as a performance consideration. In the relational database world, you are usually taking one very large database and partitioning it into smaller, more manageable databases. In the Essbase multidimensional world, there are several reasons for partitioning:

• To split a large, cumbersome database into smaller, more manageable pieces or slices

• To create, in one database, selected pieces or slices of data from several similar but unrelated databases

• To provide a consolidated look at an overall enterprise process


• To control data level security more effectively

• To increase system performance when retrieving high-use data

Caution:

Partitioning databases is a very real method of improving performance. You must be very careful not to get carried away and have too many source databases included in your partitioned target database. Essbase will load into memory all source databases in a transparent partition, and this can actually have a negative effect on system performance!

Essbase offers three types of database partitioning options. They are:

• Replicated: A replicated database partition copies a portion of the source database to be stored in a target database. Users can access the target database as if it were the source. The database administrator must occasionally refresh the target database from the source database (a MaxL sketch of this refresh follows the list).

• Transparent: A transparent partition allows users to manipulate data that is stored in a target database as if it were part of the actual source database. The remote data is retrieved from the source database each time the users of the target database request it. Write-backs to the target database also flow through to the source database.

• Linked: A linked partition enables users to navigate from one data value in one database to a subset of the data in another database. The two databases may contain very different outlines.
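
For the replicated type, the periodic refresh mentioned in the list above can be scripted in MaxL. This is only a sketch; the application and database names (SrcApp.SrcDb and TgtApp.TgtDb) and the host are placeholders:

    /* Push only the data that has changed since the last refresh from the source to the replicated target */
    refresh replicated partition SrcApp.SrcDb to TgtApp.TgtDb at localhost updated data;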

As you can see, there are three very different partitioning methods available to you with your Essbase system. This may sound tired by now, but truly, even partitioning your databases is something that is really only needed on the largest of systems. Partitioning is a valid performance tuning consideration for sure, but its use should be governed more by your Essbase knowledge and experience than by any sort of formula that says if your database is this size, then you should do this or that.

Let us consider the first scenario, where the database is large and cumbersome and you need to split the data. In this scenario, we have five years' worth of data in the database. For the earliest three years of the data, the users do not need it on a day-to-day basis for analysis, but only need it once in a while. This scenario seems to be best suited for the transparent partition, where we partition the data by the time dimension. We are going to have the Current and Prior years in one cube and the remaining three years in a different cube. Let us call the Current and Prior year cube our ESSCAR cube and the prior years cube the ESSCARP cube. Current and Prior year data will be loaded into ESSCAR and the prior three years' data will be loaded into the ESSCARP cube. In this example, the ESSCARP database, or cube, is the source and the ESSCAR database is the target. Now, let's see step-by-step how we set up the transparent partition.


1. Using EAS, open the ESSCAR application and expand the ESSCAR database. Select the Partitions menu pick and then click on Action | Create new partition on "ESSCAR". You will then see the Create Partition for Block Storage Application screen as shown below. On the Type tab, select the Partition type. In this case, select Transparent partition and then click on the Connection tab.

2. On the Connection tab, you will need to enter the information about the server, source database, target database, username, and password. We suggest that, to comply with separation of duties policies, you create a separate batch ID for this process. Here, we have selected EsscarP as the source database and Esscar as the target database.
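
For completeness, the same transparent partition can also be defined with a MaxL statement instead of the EAS screens. The sketch below assumes both cubes live in the Esscar application, uses a placeholder administrator id and password, and uses made-up year member names in the shared area definition, so adjust these to match your own outline and security setup:

    /* Source is Esscar.EsscarP (the prior three years), target is Esscar.Esscar (Current and Prior year) */
    create or replace transparent partition Esscar.EsscarP
        area 'FY2005, FY2006, FY2007'
        to Esscar.Esscar at localhost
        as admin identified by 'password'
        area 'FY2005, FY2006, FY2007';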
