
Ebook Information technology auditing and assurance (Third edition): Part 2


Contents

Ebook Information technology auditing and assurance (Third edition): Part 2 includes the following content: Chapter 7: Computer-assisted audit tools and techniques; Chapter 8: Data structures and CAATTs for data extraction; Chapter 9: Auditing the revenue cycle; Chapter 10: Auditing the expenditure cycle; Chapter 11: Enterprise resource planning systems; Chapter 12: Business ethics, fraud, and fraud detection.


CHAPTER 7

Computer-Assisted Audit Tools and Techniques

LEARNING OBJECTIVES

After studying this chapter, you should:

• Be familiar with the classes of transaction input controls used by accounting applications.

• Understand the objectives and techniques used to implement processing controls, including run-to-run, operator intervention, and audit trail controls.

• Understand the methods used to establish effective output controls for both batch and real-time systems.

• Know the difference between black box and white box auditing.

• Be familiar with the key features of the five CAATTs discussed in the chapter.

This chapter examines several issues related to the use of computer-assisted audit tools and techniques (CAATTs) for performing tests of application controls and data extraction. It opens with a description of application controls. These fall into three broad classes: input controls, processing controls, and output controls. The chapter then examines the black box and white box approaches to testing application controls. The latter approach requires a detailed understanding of the application's logic. Five CAATT approaches used for testing application logic are then examined: the test data method, base case system evaluation, tracing, the integrated test facility, and parallel simulation.


APPLICATION CONTROLS

Application controls are programmed procedures designed to deal with potential exposures that threaten specific applications, such as payroll, purchases, and cash disbursements systems. Application controls fall into three broad categories: input controls, processing controls, and output controls.

Input Controls

The data collection component of the information system is responsible for bringing data into the system for processing. Input controls at this stage are designed to ensure that these transactions are valid, accurate, and complete. Data input procedures can be either source document-triggered (batch) or direct input (real time).

Source document input requires human involvement and is prone to clerical errors. Some types of errors that are entered on the source documents cannot be detected and corrected during the data input stage. Dealing with these problems may require tracing the transaction back to its source (such as contacting the customer) to correct the mistake. Direct input, on the other hand, employs real-time editing techniques to identify and correct errors immediately, thus significantly reducing the number of errors that enter the system.

Classes of Input Control

For presentation convenience and to provide structure to this discussion, we have divided input controls into the following broad classes:

• Source document controls

• Data coding controls

• Batch controls

• Validation controls

• Input error correction

• Generalized data input systems

These control classes are not mutually exclusive divisions. Some control techniques that we shall examine could fit logically into more than one class.

Source Document Controls. Careful control must be exercised over physical source documents in systems that use them to initiate transactions. Source document fraud can be used to remove assets from the organization. For example, an individual with access to purchase orders and receiving reports could fabricate a purchase transaction to a nonexistent supplier. If these documents are entered into the data processing stream, along with a fabricated vendor's invoice, the system could process these documents as if a legitimate transaction had taken place. In the absence of other compensating controls to detect this type of fraud, the system would create an account payable and subsequently write a check in payment.

To control against this type of exposure, the organization must implement control procedures over source documents to account for each document, as described next:

Use Pre-numbered Source Documents. Source documents should come prenumbered from the printer with a unique sequential number on each document. Source document numbers permit accurate accounting of document usage and provide an audit trail for tracing transactions through accounting records. We discuss this further in the next section.


Use Source Documents in Sequence. Source documents should be distributed to the users and used in sequence. This requires that adequate physical security be maintained over the source document inventory at the user site. When not in use, documents should be locked away. At all times, access to source documents should be limited to authorized persons.

Periodically Audit Source Documents. Reconciling document sequence numbers should identify missing source documents. Periodically, the auditor should compare the numbers of documents used to date with those remaining in inventory plus those voided due to errors. Documents not accounted for should be reported to management.

Data Coding Controls. Coding controls are checks on the integrity of data codes used in processing. A customer's account number, an inventory item number, and a chart of accounts number are all examples of data codes. Three types of errors can corrupt data codes and cause processing errors: transcription errors, single transposition errors, and multiple transposition errors. Transcription errors fall into three classes:

• Addition errors occur when an extra digit or character is added to the code. For example, inventory item number 83276 is recorded as 832766.

• Truncation errors occur when a digit or character is removed from the end of a code. In this type of error, the inventory item above would be recorded as 8327.

• Substitution errors are the replacement of one digit in a code with another. For example, code number 83276 is recorded as 83266.

There are two types of transposition errors. Single transposition errors occur when two adjacent digits are reversed. For instance, 83276 is recorded as 38276. Multiple transposition errors occur when nonadjacent digits are transposed. For example, 83276 is recorded as 87236.

Any of these errors can cause serious problems in data processing if they go undetected. For example, a sales order for customer 732519 that is transposed into 735219 will be posted to the wrong customer's account. A similar error in an inventory item code on a purchase order could result in ordering unneeded inventory and failing to order inventory that is needed. These simple errors can severely disrupt operations.

Check Digits. One method for detecting data coding errors is a check digit. A check digit is a control digit (or digits) added to the code when it is originally assigned that allows the integrity of the code to be established during subsequent processing. The check digit can be located anywhere in the code: as a prefix, a suffix, or embedded someplace in the middle. The simplest form of check digit is to sum the digits in the code and use this sum as the check digit. For example, for the customer account code 5372, the calculated check digit would be

5 + 3 + 7 + 2 = 17

By dropping the tens column, the check digit 7 is added to the original code to produce the new code 53727. The entire string of digits (including the check digit) becomes the customer account number. During data entry, the system can recalculate the check digit to ensure that the code is correct. This technique will detect only transcription errors. For example, if a substitution error occurred and the above code were entered as 52727, the calculated check digit would be 6 (5 + 2 + 7 + 2 = 16, drop the tens column), and the error would be detected. However, this technique would fail to identify transposition errors. For example, transposing the first two digits yields the code 35727, which still sums to 17 and produces the check digit 7. This error would go undetected.
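To make the digit-sum technique concrete, here is a minimal Python sketch (the function names are ours, not the book's); it reproduces the 5372 example and shows why a transposition slips through:

```python
def sum_check_digit(code: str) -> str:
    """Sum the digits of the code and keep only the ones column (drop the tens)."""
    return str(sum(int(d) for d in code) % 10)

def append_check_digit(code: str) -> str:
    """Append the check digit to produce the coded value, e.g. 5372 -> 53727."""
    return code + sum_check_digit(code)

def passes_check(coded: str) -> bool:
    """Recalculate the check digit from the code body and compare it to the suffix."""
    body, check = coded[:-1], coded[-1]
    return sum_check_digit(body) == check

print(append_check_digit("5372"))  # 53727
print(passes_check("52727"))       # False: the substitution 5372 -> 5272 is caught
print(passes_check("35727"))       # True: the transposition 5372 -> 3572 slips through
```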


There are many check-digit techniques for dealing with transposition errors. A popular method is modulus 11. Using the code 5372, the steps in this technique are as follows:

1. Assign weights. Each digit in the code is multiplied by a different weight. In this case, the weights used are 5, 4, 3, and 2 (5 × 5 = 25, 3 × 4 = 12, 7 × 3 = 21, 2 × 2 = 4).
2. Sum the products (25 + 12 + 21 + 4 = 62).
3. Divide by the modulus. We are using modulus 11 in this case, giving 62/11 = 5 with a remainder of 7.
4. Subtract the remainder from the modulus to obtain the check digit (11 - 7 = 4 [check digit]).
5. Add the check digit to the original code to yield the new code: 53724.

Using this technique to recalculate the check digit during processing, a transposition error in the code will produce a check digit other than 4. For example, if the preceding code were incorrectly entered as 35724, the recalculated check digit would be 6.
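The modulus 11 steps translate directly into a short sketch, assuming the same weights 5, 4, 3, and 2 used in the example (the function name is illustrative):

```python
def mod11_check_digit(code: str, weights=(5, 4, 3, 2)) -> int:
    """Weight each digit, sum the products, and subtract the remainder from 11."""
    total = sum(int(d) * w for d, w in zip(code, weights))
    return 11 - (total % 11)  # a result of 10 would need two character positions

print(mod11_check_digit("5372"))  # 4 -> coded value 53724
print(mod11_check_digit("3572"))  # 6 -> the transposed code 35724 fails the recheck
```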

When Should Check Digits Be Used? The use of check digits introduces storage and processing inefficiencies and therefore should be restricted to essential data, such as primary and secondary key fields. All check digit techniques require one or more additional spaces in the field to accommodate the check digit. In the case of modulus 11, if step three above produces a remainder of 1, the check digit of 10 will require two additional character spaces. If field length is a limitation, one way of handling this problem is to disallow codes that generate the check digit 10. This would restrict the range of available codes by about 9 percent.

Batch Controls. Batch controls are an effective method of managing high volumes of transaction data through a system. The objective of batch control is to reconcile output produced by the system with the input originally entered into the system. This provides assurance that:

• All records in the batch are processed

• No records are processed more than once

• An audit trail of transactions is created from input through processing to the output stage of the system

Batch control is not exclusively an input control technique. Controlling the batch continues through all phases of the system. We are treating this topic here because batch control is initiated at the input stage.

Achieving batch control objectives requires grouping similar types of input transactions (such as sales orders) together in batches and then controlling the batches throughout data processing. Two documents are used to accomplish this task: a batch transmittal sheet and a batch control log. Figure 7.1 shows an example of a batch transmittal sheet. The batch transmittal sheet captures relevant information such as the following about the batch:

• A unique batch number

• A batch date


• A transaction code (indicating the type of transactions, such as a sales order or cash receipt)

• The number of records in the batch (record count)

• The total dollar value of a financial field (batch control total)

• The total of a unique nonfinancial field (hash total)

FIGURE 7.1 Batch Transmittal Sheet (ABC Company; the sheet captures the batch number, batch date, transaction code, record count, batch control total, hash total, and the name of the preparer)

Usually, the batch transmittal sheet is prepared by the user department and is submitted to data control along with the batch of source documents. Sometimes, the data control clerk, acting as a liaison between the users and the data processing department, prepares the transmittal sheet. Figure 7.2 illustrates the batch control process.

The data control clerk receives transactions from users assembled in batches of 40 to 50 records. The clerk assigns each batch a unique number, date-stamps the documents, and calculates (or recalculates) the batch control numbers, such as the total dollar amount of the batch and a hash total (discussed later). The clerk enters the batch control information in the batch control log and submits the batch of documents, along with the transmittal sheet, to the data entry department. Figure 7.3 shows a sample batch control log.

The data entry group codes and enters the transmittal sheet data onto the transaction file, along with the batch of transaction records. The transmittal data may be added as an additional record in the file or placed in the file's internal trailer label. (We will discuss internal labels later in this section.) The transmittal sheet becomes the batch control record and is used to assess the integrity of the batch during processing. For example, the data entry procedure will recalculate the batch control totals to make sure the batch is in balance. The transmittal record shows a batch of 50 sales order records with a total dollar value of $122,674.87 and a hash total of 4537838. At various points throughout and at the end of processing, these amounts are recalculated and compared to the batch control record. If the procedure recalculates the same amounts, the batch is in balance.

After processing, the output results are sent to the data control clerk for reconciliation and distribution to the user. The clerk updates the batch control log to record that processing of the batch was completed successfully.

FIGURE 7.2 The Batch Control Process (user departments group documents into batches and send the batch of documents with a transmittal sheet to data control; data control records the batch in the batch control log and passes it to the data processing department for data input and application processing against the transaction file; the clerk reconciles the processed batch with the control log, corrects errors, files the transmittal sheet, and returns source documents to the user area; error reports go back to the end user)

FIGURE 7.3 Batch Control Log (columns include the batch number, time received, received by, control total, hash total, record count, error code, and reconciled by)

Hash Totals. The term hash total, which was used in the preceding discussion, refers to a simple control technique that uses nonfinancial data to keep track of the records in a batch. Any key field, such as a customer's account number, a purchase order number, or an inventory item number, may be used to calculate a hash total. In the following example, the sales order number (SO#) field for an entire batch of sales order records is summed to produce a hash total:

SO#
14327
67345
19983
  ·
  ·
  ·
88943
96543
4537838 (hash total)

Let's see how this seemingly meaningless number can be of use. Assume that after this batch of records leaves data control, someone replaced one of the sales orders in the batch with a fictitious record of the same dollar amount. How would the batch control procedures detect this irregularity? Both the record count and the dollar amount control totals would be unaffected by this act. However, unless the perpetrator obtained a source document with exactly the same sales order number (which would be impossible, since they should come uniquely prenumbered from the printer), the hash total calculated by the batch control procedures would not balance. Thus, the irregularity would be detected.
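As a hedged illustration of how this reconciliation might be coded, the sketch below recomputes the record count, dollar control total, and SO# hash total and compares them with the transmittal record; the record layout is invented for the example:

```python
def batch_control_figures(records):
    """Recompute the record count, dollar control total, and SO# hash total for a batch."""
    return {
        "record_count": len(records),
        "control_total": round(sum(r["amount"] for r in records), 2),
        "hash_total": sum(r["so_number"] for r in records),
    }

# Control figures recorded on the transmittal sheet when the batch left data control.
transmittal = {"record_count": 3, "control_total": 610.00,
               "hash_total": 14327 + 67345 + 19983}

# The same batch later, with a fictitious record (same dollar amount) substituted
# for sales order 67345.
batch = [
    {"so_number": 14327, "amount": 200.00},
    {"so_number": 99999, "amount": 210.00},
    {"so_number": 19983, "amount": 200.00},
]

recalculated = batch_control_figures(batch)
out_of_balance = {k: (transmittal[k], recalculated[k])
                  for k in transmittal if transmittal[k] != recalculated[k]}
print(out_of_balance)  # only the hash total is out of balance
```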

Validation Controls. Input validation controls are intended to detect errors in transaction data before the data are processed. Validation procedures are most effective when they are performed as close to the source of the transaction as possible. However, depending on the type of technology in use, input validation may occur at various points in the system. For example, some validation procedures require making references against the current master file. Systems using real-time processing or batch processing with direct access master files can validate data at the input stage. Figure 7.4(a) and (b) illustrate these techniques.

If the system uses batch processing with sequential files, the transaction records being validated must first be sorted in the same order as the master file. Validating at the data input stage in this case may require considerable additional processing. Therefore, as a practical matter, each processing module prior to updating the master file record performs some validation procedures. This approach is shown in Figure 7.5. The problem with this technique is that a transaction may be partially processed before data errors are detected. Dealing with a partially complete transaction will require special error-handling procedures. We shall discuss error-handling controls later in this section. There are three levels of input validation controls:

1. Field interrogation
2. Record interrogation
3. File interrogation

Field Interrogation. Field interrogation involves programmed procedures that examine the characteristics of the data in the field. The following are some common types of field interrogation.


Missing data checks are used to examine the contents of a field for the presence of blank spaces. Some programming languages are restrictive as to the justification (right or left) of data within the field. If data are not properly justified or if a character is missing (has been replaced with a blank), the value in the field will be improperly processed. In some cases, the presence of blanks in a numeric data field may cause a system failure. When the validation program detects a blank where it expects to see a data value, this will be interpreted as an error.

Numeric-alphabetic data checks determine whether the correct form of data is in a field. For example, a customer's account balance should not contain alphabetic data. As with blanks, alphabetic data in a numeric field may cause serious processing errors.

Zero-value checks are used to verify that certain fields are filled with zeros. Some program languages require that fields used in mathematical operations be initiated with zeros prior to processing. This control may trigger an automatic corrective control to replace the contents of the field with zero if it detects a nonzero value.

Limit checks determine if the value in the field exceeds an authorized limit. For example, assume the firm's policy is that no employee works more than 44 hours per week. The payroll system validation program can interrogate the hours-worked field in the weekly payroll records for values greater than 44.

Range checks assign upper and lower limits to acceptable data values. For example, if the range of pay rates for hourly employees in a firm is between 8 and 20 dollars, all payroll records can be checked to see that this range is not exceeded. The purpose of this control is to detect keystroke errors that shift the decimal point one or more places. It would not detect an error where a correct pay rate of, say, 9 dollars is incorrectly entered as 15 dollars.
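A rough sketch of how several of these field interrogation checks could be programmed, using the chapter's payroll examples (the 44-hour limit and the 8-to-20-dollar range); the function and field names are assumptions:

```python
def interrogate_payroll_fields(record: dict) -> list:
    """Apply missing data, numeric-alphabetic, limit, and range checks to one record."""
    errors = []
    hours = record.get("hours_worked", "")
    rate = record.get("hourly_rate", "")

    if str(hours).strip() == "":                      # missing data check
        errors.append("hours_worked is blank")
    elif not str(hours).isdigit():                    # numeric-alphabetic data check
        errors.append("hours_worked is not numeric")
    elif int(hours) > 44:                             # limit check: 44-hour policy
        errors.append("hours_worked exceeds the 44-hour limit")

    try:
        if not (8.0 <= float(rate) <= 20.0):          # range check: $8-$20 pay rates
            errors.append("hourly_rate outside the 8 to 20 dollar range")
    except (TypeError, ValueError):
        errors.append("hourly_rate is not numeric")

    return errors

print(interrogate_payroll_fields({"hours_worked": "48", "hourly_rate": "21.50"}))
```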

FIGURE 7.4 Validation during Data Input: (a) Validation in a Real-Time System (individual transactions are validated and processed against the master file as they are entered); (b) Validation in a Batch-Direct Access System (a batch of source documents is validated at data input to create a transaction file, which is then used to update the master file)

Validity checks compare actual values in a field against known acceptable values. This control is used to verify such things as transaction codes, state abbreviations, or employee job skill codes. If the value in the field does not match one of the acceptable values, the record is determined to be in error.

This is a frequently used control in cash disbursement systems. One form of cash disbursement fraud involves manipulating the system into making a fraudulent payment to a nonexistent vendor. To prevent this, the firm may establish a list of valid vendors with whom it does business exclusively. Thus, before payment of any trade obligation, the vendor number on the cash disbursement voucher is matched against the valid vendor list by the validation program. If the code does not match, payment is denied, and management reviews the transaction.

Check digit controls identify keystroke errors in key fields by testing the internal validity of the code. We discussed this control technique earlier in the section.

Record Interrogation. Record interrogation procedures validate the entire record by examining the interrelationship of its field values. Some typical tests are discussed below.

FIGURE 7.5 Validation in Batch Systems Using Sequential Files (a batch of source documents is validated at data input to create a transaction file; successive processes (#1, #2, #3) then validate the transactions and update the old production master files to create new versions)

Reasonableness checks determine if a value in one field, which has already passed a limit check and a range check, is reasonable when considered along with other data fields in the record. For example, an employee's pay rate of 18 dollars per hour falls within an acceptable range. However, this rate is excessive when compared to the employee's job skill code of 693; employees in this skill class never earn more than 12 dollars per hour.

Sign checks are tests to see if the sign of a field is correct for the type of record being processed. For example, in a sales order processing system, the dollar amount field must be positive for sales orders but negative for sales return transactions. This control can determine the correctness of the sign by comparing it with the transaction code field.

Sequence checks are used to determine if a record is out of order. In batch systems that use sequential master files, the transaction files being processed must be sorted in the same order as the primary keys of the corresponding master file. This requirement is critical to the processing logic of the update program. Hence, before each transaction record is processed, its sequence is verified relative to the previous record processed.
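The record interrogation tests can be sketched in the same spirit; the skill-code table, transaction codes, and record layouts below are illustrative assumptions rather than the book's design:

```python
MAX_RATE_BY_SKILL = {693: 12.00}  # assumed lookup: skill class 693 never earns over $12/hour

def reasonableness_check(payroll_rec: dict) -> bool:
    """Pay rate must be plausible given the employee's job skill code."""
    cap = MAX_RATE_BY_SKILL.get(payroll_rec["skill_code"], float("inf"))
    return payroll_rec["pay_rate"] <= cap

def sign_check(sales_rec: dict) -> bool:
    """Sales orders (SO) must be positive; sales returns (SR) must be negative."""
    if sales_rec["trans_code"] == "SO":
        return sales_rec["amount"] > 0
    if sales_rec["trans_code"] == "SR":
        return sales_rec["amount"] < 0
    return True

def sequence_check(batch: list, key: str = "primary_key") -> bool:
    """The batch must be sorted in the same order as the master file's primary keys."""
    keys = [rec[key] for rec in batch]
    return keys == sorted(keys)

print(reasonableness_check({"skill_code": 693, "pay_rate": 18.00}))  # False
```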

File Interrogation. The purpose of file interrogation is to ensure that the correct file is being processed by the system. These controls are particularly important for master files, which contain permanent records of the firm and which, if destroyed or corrupted, are difficult to replace.

Internal label checks verify that the file processed is the one the program is actually calling for. Files stored on magnetic tape are usually kept off-line in a tape library. These files have external labels that identify them (by name and serial number) to the tape librarian and operator. External labeling is typically a manual procedure and, like any manual task, prone to errors. Sometimes, the wrong external label is mistakenly affixed to a file when it is created. Thus, when the file is called for again, the wrong file will be retrieved and placed on the tape drive for processing. Depending on how the file is being used, this may result in its destruction or corruption. To prevent this, the operating system creates an internal header label that is placed at the beginning of the file. An example of a header label is shown in Figure 7.6.

To ensure that the correct file is about to be processed, the system matches the file name and serial number in the header label with the program's file requirements. If the wrong file has been loaded, the system will send the operator a message and suspend processing. It is worth noting that while label checking is generally a standard feature, it is an option that can be overridden by programmers and operators.

Version checks are used to verify that the version of the file being processed is correct. In a grandparent–parent–child approach, many versions of master files and transactions may exist. The version check compares the version number of the files being processed with the program's requirements.

An expiration date check prevents a file from being deleted before it expires. In a GPC system, for example, once an adequate number of backup files is created, the oldest backup file is scratched (erased from the disk or tape) to provide space for new files. Figure 7.7 illustrates this procedure.

To protect against destroying an active file by mistake, the system first checks the expiration date contained in the header label (see Figure 7.6). If the retention period has not yet expired, the system will generate an error message and abort the scratch procedure. Expiration date control is an optional measure. The length of the retention period is specified by the programmer and based on the number of backup files that are desired. If the programmer chooses not to specify an expiration date, the control against such accidental deletion is eliminated.


Input Error Correction. When errors are detected in a batch, they must be corrected and the records resubmitted for reprocessing. This must be a controlled process to ensure that errors are dealt with completely and correctly. There are three common error handling techniques: (1) correct immediately, (2) create an error file, and (3) reject the entire batch.

Correct Immediately. If the system is using the direct data validation approach (refer to Figure 7.4(a) and (b)), error detection and correction can also take place during data entry. Upon detecting a keystroke error or an illogical relationship, the system should halt the data entry procedure until the user corrects the error.

Create an Error File. When delayed validation is being used, such as in batch systems with sequential files, individual errors should be flagged to prevent them from being processed. At the end of the validation procedure, the records flagged as errors are removed from the batch and placed in a temporary error holding file until the errors can be investigated.

Some errors can be detected during data input procedures. However, as was mentioned earlier, the update module performs some validation tests. Thus, error records may be placed on the error file at several different points in the process, as illustrated by Figure 7.8. At each validation point, the system automatically adjusts the batch control totals to reflect the removal of the error records from the batch. In a separate procedure, an authorized user representative will later make corrections to the error records and resubmit them as a separate batch for reprocessing.

FIGURE 7.6 File Header Label (internal label fields include the file name, expiration date, control totals, and number of records, followed by Record 1, Record 2, and so on)

Errors detected during processing require careful handling. These records may already be partially processed. Therefore, simply resubmitting the corrected records to the system via the data input stage may result in processing portions of these transactions twice. There are two methods for dealing with this complexity. The first is to reverse the effects of the partially processed transactions and resubmit the corrected records to the data input stage. The second is to reinsert corrected records to the processing stage in which the error was detected. In either case, batch control procedures (preparing batch control records and logging the batches) apply to the resubmitted data, just as they do for normal batch processing.

Reject the Batch. Some forms of errors are associated with the entire batch and are not clearly attributable to individual records. An example of this type of error is an imbalance in a batch control total. Assume that the transmittal sheet for a batch of sales orders shows a total sales value of $122,674.87, but the data input procedure calculated a sales total of only $121,454.32. What has caused this? Is the problem a missing or changed record? Or did the data control clerk incorrectly calculate the batch control total? The most effective solution in this case is to cease processing and return the entire batch to data control to evaluate, correct, and resubmit.

FIGURE 7.7 Grandparent–Parent–Child Backup Generations (for Application A, Payroll, and Application B, Accounts Receivable, the update program reads a transaction file and the original master file to produce a new master file, creating grandparent, parent, and child generations of backup master files; the obsolete backup file of Application A is scratched, that is, written over, by Application B and used as an output child file; before scratching, the operating system checks the expiration date in the file's header label)

Batch errors are one reason for keeping the size of the batch to a manageable number. Too few records in a batch make batch processing inefficient. Too many records make error detection difficult, create greater business disruption when a batch is rejected, and increase the possibility of mistakes when calculating batch control totals.

Generalized Data Input Systems. To achieve a high degree of control and standardization over input validation procedures, some organizations employ a generalized data input system (GDIS). This technique includes centralized procedures to manage the data input for all of the organization's transaction processing systems. The GDIS approach has three advantages. First, it improves control by having one common system perform all data validation. Second, GDIS ensures that each AIS application applies a consistent standard for data validation. Third, GDIS improves systems development efficiency. Given the high degree of commonality in input validation requirements for AIS applications, a GDIS eliminates the need to recreate redundant routines for each new application. Figure 7.9 shows the primary features of this technique. A GDIS has five major components:1

1. Generalized validation module
2. Validated data file
3. Error file
4. Error reports
5. Transaction log

1 Ron Weber, EDP Auditing: Conceptual Foundations and Practice, 2nd ed. (McGraw-Hill, 1988), pp. 424–427.

FIGURE 7.8 Error File Creation at Multiple Validation Points (a batch of source documents, sales orders, is validated at data input to create a transaction file; subsequent runs validate the transactions and update the old accounts receivable and inventory production master files to create new versions; errors flagged at each point go to error files, and corrected data are resubmitted at the point where they were removed)

Generalized Validation Module. The generalized validation module (GVM) performs standard validation routines that are common to many different applications. These routines are customized to an individual application's needs through parameters that specify the program's specific requirements. For example, the GVM may apply a range check to the HOURLY RATE field of payroll records. The limits of the range are 6 dollars and 15 dollars. The range test is the generalized procedure; the dollar limits are the parameters that customize this procedure. The validation procedures for some applications may be so unique as to defy a general solution. To meet the goals of the generalized data input system, the GVM must be flexible enough to permit special user-defined procedures for unique applications. These procedures are stored, along with generalized procedures, and invoked by the GVM as needed.
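A minimal sketch of the parameter-driven idea behind a GVM: one generalized range-check routine customized per application through stored parameters (the parameter table and dispatch logic are assumptions; only the 6-to-15-dollar payroll limits come from the text):

```python
# Assumed parameter store: each application names the generalized procedure to run,
# the field it applies to, and the limits that customize it.
STORED_PARAMETERS = {
    "payroll": {"procedure": "range_check", "field": "HOURLY_RATE",
                "low": 6.00, "high": 15.00},
}

def range_check(value, low, high):
    """The generalized procedure; the limits are the customizing parameters."""
    return low <= value <= high

def gvm_validate(application: str, record: dict) -> bool:
    p = STORED_PARAMETERS[application]
    if p["procedure"] == "range_check":
        return range_check(record[p["field"]], p["low"], p["high"])
    return True  # special user-defined procedures would be dispatched here

print(gvm_validate("payroll", {"HOURLY_RATE": 17.25}))  # False: outside 6 to 15 dollars
```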

Validated Data File. The input data that are validated by the GVM are stored on a validated data file. This is a temporary holding file through which validated transactions flow to their respective applications. The file is analogous to a tank of water whose level is constantly changing, as it is filled from the top by the GVM and emptied from the bottom by applications.

FIGURE 7.9 Generalized Data Input System (input transactions such as sales orders, purchase orders, payroll time cards, and cash receipts flow into the generalized validation module, which draws on stored parameters and stored validation procedures; validated transactions pass through the validated data file to the sales, purchases, payroll, and cash receipts systems, and error reports go to users)

Error File. The error file in the GDIS plays the same role as a traditional error file. Error records detected during validation are stored in the file, corrected, and then resubmitted to the GVM.

Error Reports. Standardized error reports are distributed to users to facilitate error correction. For example, if the HOURLY RATE field in a payroll record fails a range check, the error report will display an error message stating the problem. The report will also present the contents of the failed record, along with the acceptable range limits taken from the parameters.

Transaction Log. The transaction log is a permanent record of all validated transactions. From an accounting records point of view, the transaction log is equivalent to the journal and is an important element in the audit trail. However, only successful transactions (those that will be completely processed) should be entered in the journal. If a transaction is to undergo additional validation testing during the processing phase (which could result in its rejection), it should be entered in the transaction log only after it is completely validated. This issue is discussed further in the next section under Audit Trail Controls.

Processing Controls

After passing through the data input stage, transactions enter the processing stage of the system. Processing controls are divided into three categories: run-to-run controls, operator intervention controls, and audit trail controls.

Run-to-Run Controls

Previously, we discussed the preparation of batch control figures as an element of input control. Run-to-run controls use batch figures to monitor the batch as it moves from one programmed procedure (run) to another. These controls ensure that each run in the system processes the batch correctly and completely. Batch control figures may be contained in either a separate control record created at the data input stage or an internal label. Specific uses of run-to-run control figures are described in the following paragraphs.

Recalculate Control Totals. After each major operation in the process and after each run, dollar amount fields, hash totals, and record counts are accumulated and compared to the corresponding values stored in the control record. If a record in the batch is lost, goes unprocessed, or is processed more than once, this will be revealed by the discrepancies between these figures.

Transaction Codes. The transaction code of each record in the batch is compared to the transaction code contained in the control record. This ensures that only the correct type of transaction is being processed.

Sequence Checks. In systems that use sequential master files, the order of the transaction records in the batch is critical to correct and complete processing. As the batch moves through the process, it must be re-sorted in the order of the master file used in each run. The sequence check control compares the sequence of each record in the batch with the previous record to ensure that proper sorting took place.

Figure 7.10 illustrates the use of run-to-run controls in a revenue cycle system. This application comprises four runs: (1) data input, (2) accounts receivable update, (3) inventory update, and (4) output. At the end of the accounts receivable run, batch control figures are recalculated and reconciled with the control totals passed from the data input run. These figures are then passed to the inventory update run, where they are again recalculated, reconciled, and passed to the output run. Errors detected in each run are flagged and placed in an error file. The run-to-run (batch) control figures are then adjusted to reflect the deletion of these records.

FIGURE 7.10 Run-to-Run Controls in a Revenue Cycle System (four runs: input of sales orders, AR update against the AR master file, inventory update against the inventory master file, and output reporting of a sales summary report; transactions plus control totals pass from run to run, and errors flagged in each run go to error files)
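A hedged sketch of a run-to-run check: after each run the batch's control figures are recomputed and compared with the control record carried with the batch (record layout and field names are hypothetical):

```python
def verify_run(control_record: dict, batch: list) -> list:
    """Recompute control figures after a run and compare them to the control record."""
    problems = []
    if len(batch) != control_record["record_count"]:
        problems.append("record count out of balance")
    if round(sum(t["amount"] for t in batch), 2) != control_record["control_total"]:
        problems.append("dollar control total out of balance")
    if any(t["trans_code"] != control_record["trans_code"] for t in batch):
        problems.append("wrong transaction type in the batch")
    keys = [t["account_key"] for t in batch]
    if keys != sorted(keys):  # sequence check before the next sequential update run
        problems.append("batch not sorted in master file order")
    return problems
```

Each run would perform a check like this before passing the batch, with its adjusted control figures, to the next run.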

Operator Intervention Controls

Systems sometimes require operator intervention to initiate certain actions, such as entering control totals for a batch of records, providing parameter values for logical operations, and activating a program from a different point when reentering semiprocessed error records. Operator intervention increases the potential for human error. Systems that limit operator intervention through operator intervention controls are thus less prone to processing errors. Although it may be impossible to eliminate operator involvement completely, parameter values and program start points should, to the extent possible, be derived logically or provided to the system through look-up tables.

Audit Trail Controls

The preservation of an audit trail is an important objective of process control. In an accounting system, every transaction must be traceable through each stage of processing, from its economic source to its presentation in financial statements. In an automated environment, the audit trail can become fragmented and difficult to follow. It thus becomes critical that each major operation applied to a transaction be thoroughly documented. The following are examples of techniques used to preserve audit trails in computer-based accounting systems.

Transaction Logs. Every transaction successfully processed by the system should be recorded on a transaction log, which serves as a journal. Figure 7.11 shows this arrangement.

There are two reasons for creating a transaction log. First, the transaction log is a permanent record of transactions. The validated transaction file produced at the data input phase is usually a temporary file. Once processed, the records on this file are erased (scratched) to make room for the next batch of transactions. Second, not all of the records in the validated transaction file may be successfully processed. Some of these records may fail tests in the subsequent processing stages. A transaction log should contain only successful transactions—those that have changed account balances. Unsuccessful transactions should be placed in an error file. The transaction log and error files combined should account for all the transactions in the batch. The validated transaction file may then be scratched with no loss of data.

The system should produce a hard copy transaction listing of all successful transactions. These listings should go to the appropriate users to facilitate reconciliation with input.

Log of Automatic Transactions. Some transactions are triggered internally by the system. An example of this is when inventory drops below a preset reorder point, and the system automatically processes a purchase order. To maintain an audit trail of these activities, all internally generated transactions must be placed in a transaction log.

Listing of Automatic Transactions. To maintain control over automatic transactions processed by the system, the responsible end user should receive a detailed listing of all internally generated transactions.

FIGURE 7.11 Transaction Log to Preserve the Audit Trail (transactions flow through the application process to output reports and a transaction log; valid transactions equal successful transactions plus error transactions; the scratch file is erased after processing)

Unique Transaction Identifiers. Each transaction processed by the system must be uniquely identified with a transaction number. This is the only practical means of tracing a particular transaction through a database of thousands or even millions of records. In systems that use physical source documents, the unique number printed on the document can be transcribed during data input and used for this purpose. In real-time systems, which do not use source documents, the system should assign each transaction a unique number.

Error Listing. A listing of all error records should go to the appropriate user to support error correction and resubmission.

Output Controls

Output controls ensure that system output is not lost, misdirected, or corrupted and that privacy is not violated. Exposures of this sort can cause serious disruptions to operations and may result in financial losses to a firm. For example, if the checks produced by a firm's cash disbursements system are lost, misdirected, or destroyed, trade accounts and other bills may go unpaid. This could damage the firm's credit rating and result in lost discounts, interest, or penalty charges. If the privacy of certain types of output is violated, a firm could have its business objectives compromised, or it could even become legally exposed. Examples of privacy exposures include the disclosure of trade secrets, patents pending, marketing research results, and patient medical records.

The type of processing method in use influences the choice of controls employed to protect system output. Generally, batch systems are more susceptible to exposure and require a greater degree of control than real-time systems. In this section, we examine output exposures and controls for both methods.

Controlling Batch Systems Output

Batch systems usually produce output in the form of hard copy, which typically requires the involvement of intermediaries in its production and distribution. Figure 7.12 shows the stages in the output process and serves as the basis for the rest of this section. The output is removed from the printer by the computer operator, separated into sheets and separated from other reports, reviewed for correctness by the data control clerk, and then sent through interoffice mail to the end user. Each stage in this process is a point of potential exposure where the output could be reviewed, stolen, copied, or misdirected. An additional exposure exists when processing or printing goes wrong and produces output that is unacceptable to the end user. These corrupted or partially damaged reports are often discarded in waste cans. Computer criminals have successfully used such waste to achieve their illicit objectives.

Following, we examine techniques for controlling each phase in the output process. Keep in mind that not all of these techniques will necessarily apply to every item of output produced by the system. As always, controls are employed on a cost–benefit basis that is determined by the sensitivity of the data in the reports.

Output Spooling. In large-scale data-processing operations, output devices such as line printers can become backlogged with many programs simultaneously demanding these limited resources. This backlog can cause a bottleneck, which adversely affects the throughput of the system. Applications waiting to print output occupy computer memory and block other applications from entering the processing stream. To ease this burden, applications are often designed to direct their output to a magnetic disk file rather than to the printer directly. This is called output spooling. Later, when printer resources become available, the output files are printed.


The creation of an output file as an intermediate step in the printing process presents an added exposure. A computer criminal may use this opportunity to perform any of the following unauthorized acts:

• Access the output file and change critical data values (such as dollar amounts on checks). The printer program will then print the corrupted output as if it were produced by the output run. Using this technique, a criminal may effectively circumvent the processing controls designed into the application.

• Access the file and change the number of copies of output to be printed. The extra copies may then be removed without notice during the printing stage.

• Make a copy of the output file to produce illegal output reports.

• Destroy the output file before output printing takes place.

The auditor should be aware of these potential exposures and ensure that proper access and backup procedures are in place to protect output files.

Print Programs. When the printer becomes available, the print run program produces hard copy output from the output file. Print programs are often complex systems that require operator intervention. Four common types of operator actions follow:

1. Pausing the print program to load the correct type of output documents (check stocks, invoices, or other special forms)
2. Entering parameters needed by the print run, such as the number of copies to be printed
3. Restarting the print run at a prescribed checkpoint after a printer malfunction
4. Removing printed output from the printer for review and distribution

FIGURE 7.12 Stages in the Output Process (the output run spools output to an output file; the print run produces output reports; bursting, data control, and report distribution deliver the reports to the end user, and aborted output goes to waste)

Print program controls are designed to deal with two types of exposures presented by this environment: (1) the production of unauthorized copies of output and (2) employee browsing of sensitive data. Some print programs allow the operator to specify more copies of output than the output file calls for, which allows for the possibility of producing unauthorized copies of output. One way to control this is to employ output document controls similar to the source document controls discussed earlier. This is feasible when dealing with prenumbered invoices for billing customers or prenumbered check stock. At the end of the run, the number of copies specified by the output file can be reconciled with the actual number of output documents used. In cases where output documents are not prenumbered, supervision may be the most effective control technique. A security officer can be present during the printing of sensitive output.

To prevent operators from viewing sensitive output, special multipart paper can be used, with the top copy colored black to prevent the print from being read. This type of product, which is illustrated in Figure 7.13, is often used for payroll check printing. The receiver of the check separates the top copy from the body of the check, which contains readable details. An alternative privacy control is to direct the output to a special remote printer that can be closely supervised.

Bursting. When output reports are removed from the printer, they go to the bursting stage to have their pages separated and collated. The concern here is that the bursting clerk may make an unauthorized copy of the report, remove a page from the report, or read sensitive information. The primary control against these exposures is supervision. For very sensitive reports, bursting may be performed by the end user.

Waste. Computer output waste represents a potential exposure. It is important to dispose of aborted reports and the carbon copies from multipart paper removed during bursting properly. Computer criminals have been known to sift through trash cans searching for carelessly discarded output that is presumed by others to be of no value. From such trash, computer criminals may obtain a key piece of information about the firm's market research, the credit ratings of its customers, or even trade secrets that they can sell to a competitor. Computer waste is also a source of technical data, such as passwords and authority tables, which a perpetrator may use to access the firm's data files. Passing it through a paper shredder can easily destroy sensitive computer output.

Data Control. In some organizations, the data control group is responsible for verifying the accuracy of computer output before it is distributed to the user. Normally, the data control clerk will review the batch control figures for balance; examine the report body for garbled, illegible, and missing data; and record the receipt of the report in data control's batch control log. For reports containing highly sensitive data, the end user may perform these tasks. In this case, the report will bypass the data control group and go directly to the user.

Report Distribution. The primary risks associated with report distribution include reports being lost, stolen, or misdirected in transit to the user. A number of control measures can minimize these exposures. For example, when reports are generated, the name and address of the user should be printed on the report. For multicopy reports, an address file of authorized users should be consulted to identify each recipient of the report. Maintaining adequate access control over this file becomes highly important. If an unauthorized individual were able to add his or her name to the authorized user list, he or she would receive a copy of the report.

FIGURE 7.13 An Example of Multipart Check Stock with a Blackout Top Copy (the payroll check stub shows net pay, check date, current period and year-to-date totals, federal withholding, accrued vacation and floating holidays, and the check number and amount, drawn on a payroll account)

For highly sensitive reports, the following distribution techniques can be used:

• The reports may be placed in a secure mailbox to which only the user has the key.

• The user may be required to appear in person at the distribution center and sign for the report.

• A security officer or special courier may deliver the report to the user.

End User Controls. Once in the hands of the user, output reports should be reexamined for any errors that may have evaded the data control clerk's review. Users are in a far better position to identify subtle errors in reports that are not disclosed by an imbalance in control totals. Errors detected by the user should be reported to the appropriate computer services management. Such errors may be symptoms of an improper systems design, incorrect procedures, errors inserted by accident during systems maintenance, or unauthorized access to data files or programs.

Once a report has served its purpose, it should be stored in a secure location until its retention period has expired. Factors influencing the length of time a hard copy report is retained include:

• Statutory requirements specified by government agencies, such as the IRS

• The number of copies of the report in existence. When there are multiple copies, certain of these may be marked for permanent retention, while the remainder can be destroyed after use

• The existence of magnetic or optical images of reports that can act as permanent backup

When the retention date has passed, reports should be destroyed in a manner consistent with the sensitivity of their contents. Highly sensitive reports should be shredded.

Controlling Real-Time Systems Output

Real-time systems direct their output to the user's computer screen, terminal, or printer. This method of distribution eliminates the various intermediaries in the journey from the computer center to the user and thus reduces many of the exposures previously discussed. The primary threat to real-time output is the interception, disruption, destruction, or corruption of the output message as it passes along the communications link. This threat comes from two types of exposures: (1) exposures from equipment failure; and (2) exposures from subversive acts, whereby a computer criminal intercepts the output message transmitted between the sender and the receiver. Techniques for controlling communications exposures were discussed previously in Chapter 3.

TESTING COMPUTER APPLICATION CONTROLS

This section examines several techniques for auditing computer applications. Control testing techniques provide information about the accuracy and completeness of an application's processes. These tests follow two general approaches: (1) the black box (around the computer) approach and (2) the white box (through the computer) approach. We first examine the black box approach and then review several white box testing techniques.

Black-Box Approach

Auditors testing with the black-box approach do not rely on a detailed knowledge of the application's internal logic. Instead, they seek to understand the functional characteristics of the application by analyzing flowcharts and interviewing knowledgeable personnel in the client's organization. With an understanding of what the application is supposed to do, the auditor tests the application by reconciling production input transactions processed by the application with output results. The output results are analyzed to verify the application's compliance with its functional requirements. Figure 7.14 illustrates the black box approach.

FIGURE 7.14 Auditing Around the Computer—The Black Box Approach (the auditor reconciles input transactions with output produced by the application)

The advantage of the black-box approach is that the application need not be removed from service and tested directly. This approach is feasible for testing applications that are relatively simple. However, complex applications—those that receive input from many sources, perform a variety of operations, or produce multiple outputs—require a more focused testing approach to provide the auditor with evidence of application integrity.

White-Box Approach

The white-box approach relies on an in-depth understanding of the internal logic of the application being tested. The white-box approach includes several techniques for testing application logic directly. These techniques use small numbers of specially created test transactions to verify specific aspects of an application's logic and controls. In this way, auditors are able to conduct precise tests, with known variables, and obtain results that they can compare against objectively calculated results. Some of the more common types of tests of controls include the following:

• Authenticity tests, which verify that an individual, a programmed procedure, or a message (such as an EDI transmission) attempting to access a system is authentic. Authenticity controls include user IDs, passwords, valid vendor codes, and authority tables.

• Accuracy tests, which ensure that the system processes only data values that conform to specified tolerances. Examples include range tests, field tests, and limit tests.

• Completeness tests, which identify missing data within a single record and entire records missing from a batch. The types of tests performed are field tests, record sequence tests, hash totals, and control totals.

• Redundancy tests, which determine that an application processes each record only once. Redundancy controls include the reconciliation of batch totals, record counts, hash totals, and financial control totals.


• Access tests, which ensure that the application prevents authorized users from unauthorized access to data. Access controls include passwords, authority tables, user-defined procedures, data encryption, and inference controls.

• Audit trail tests, which ensure that the application creates an adequate audit trail. This includes evidence that the application records all transactions in a transaction log, posts data values to the appropriate accounts, produces complete transaction listings, and generates error files and reports for all exceptions.

• Rounding error tests, which verify the correctness of rounding procedures. Rounding errors occur in accounting information when the level of precision used in the calculation is greater than that used in the reporting. For example, interest calculations on bank account balances may have a precision of five decimal places, whereas only two decimal places are needed to report balances. If the remaining three decimal places are simply dropped, the total interest calculated for the total number of accounts may not equal the sum of the individual calculations.

Figure 7.15 shows the logic for handling the rounding error problem. This technique uses an accumulator to keep track of the rounding differences between calculated and reported balances. Note how the sign and the absolute value of the amount in the accumulator determine how the customer account is affected by rounding. To illustrate, the rounding logic is applied in Table 7.1 to three hypothetical bank balances. The interest calculations are based on an interest rate of 5.25 percent.

FIGURE 7.15 Rounding Error Algorithm (for each account, read the account balance, calculate the interest and the new balance, round the new balance to the nearest cent, subtract the rounded balance from the unrounded balance, and add the remainder to an accumulator; when the accumulator reaches .01, add .01 to the new rounded balance and subtract .01 from the accumulator; when it reaches -.01, subtract .01 from the new rounded balance and add .01 to the accumulator)

TABLE 7.1 The Rounding Logic Applied to Three Hypothetical Bank Balances (Records 1, 2, and 3)

Failure to properly account for this rounding difference can result in an imbalance between the total (control) interest amount and the sum of the individual interest calculations for each account. Poor accounting for rounding differences can also present an opportunity for fraud.
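The accumulator logic of Figure 7.15 can be sketched roughly as follows (a simplified reading of the flowchart; floating-point values and the sample balances are used only for illustration):

```python
def apply_interest(balances, annual_rate=0.0525):
    """Round each new balance to the cent; carry the rounding residue in an accumulator."""
    accumulator = 0.0
    reported = []
    for balance in balances:
        exact = balance * (1 + annual_rate)   # high-precision calculation
        rounded = round(exact, 2)             # two decimal places for reporting
        accumulator += exact - rounded        # signed rounding residue
        if accumulator >= 0.01:               # return a cent to this customer
            rounded = round(rounded + 0.01, 2)
            accumulator -= 0.01
        elif accumulator <= -0.01:            # recover a cent from this customer
            rounded = round(rounded - 0.01, 2)
            accumulator += 0.01
        reported.append(rounded)
    return reported, accumulator

reported, residue = apply_interest([1000.13, 250.46, 733.99])
print(reported, round(residue, 5))
```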

Rounding programs are particularly susceptible to salami frauds. Salami frauds tend to affect a large number of victims, but the harm to each is immaterial. This type of fraud takes its name from the analogy of slicing a large salami (the fraud objective) into many thin pieces. Each victim assumes one of these small pieces and is unaware of being defrauded. For example, a programmer, or someone with access to the preceding rounding program, can perpetrate a salami fraud by modifying the rounding logic as follows: at the point in the process where the algorithm should increase the customer's account (that is, the accumulator value is > .01), the program instead adds one cent to another account—the perpetrator's account. Although the absolute amount of each fraud transaction is small, given the thousands of accounts processed, the total amount of the fraud can become significant over time.

Operating system audit trails and audit software can detect excessive file activity. In the case of the salami fraud, there would be thousands of entries into the computer criminal's personal account that may be detected in this way. A clever programmer may disguise this activity by funneling these entries through several intermediate temporary accounts, which are then posted to a smaller number of intermediate accounts and finally to the programmer's personal account. By using many levels of accounts in this way, the activity to any single account is reduced and may go undetected by the audit software. There will be a trail, but it can be complicated. A skilled auditor may also use audit software to detect the existence of unauthorized intermediate accounts used in such a fraud.
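As a hedged illustration of the kind of scan just described, audit software might group transaction-log postings by account and flag any account that receives an unusually large number of very small credits. The account numbers, thresholds, and log layout below are assumptions made for the sketch, not prescriptions.

```python
from collections import Counter

def flag_salami_candidates(transaction_log, max_amount=0.01, min_count=1000):
    """Flag accounts receiving an excessive number of tiny credits,
    a pattern consistent with a salami fraud. Thresholds are assumptions."""
    tiny_credits = Counter(
        entry["account_no"]
        for entry in transaction_log
        if entry["type"] == "credit" and entry["amount"] <= max_amount
    )
    return [(acct, count) for acct, count in tiny_credits.items() if count >= min_count]

# Example: a log in which thousands of one-cent credits are funneled to account 99120.
log = [{"account_no": 99120, "type": "credit", "amount": 0.01} for _ in range(5000)]
log += [{"account_no": 10007, "type": "credit", "amount": 250.00}]
print(flag_salami_candidates(log))   # [(99120, 5000)]
```

The same scan, run at the intermediate-account level, is one way an auditor might surface the layered accounts mentioned above.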

COMPUTER-AIDED AUDIT TOOLS AND TECHNIQUES FOR TESTING CONTROLS

To illustrate how application controls are tested, this section describes five CAATT approaches: the test data method, which includes base case system evaluation and tracing, integrated test facility, and parallel simulation.

Test Data Method

The test data method is used to establish application integrity by processing specially prepared sets of input data through production applications that are under review. The results of each test are compared to predetermined expectations to obtain an objective evaluation of application logic and control effectiveness. The test data technique is illustrated in Figure 7.16. To perform the test data technique, the auditor must obtain a copy of the current version of the application. In addition, test transaction files and test master files must be created. As illustrated in the figure, test transactions may enter the system from magnetic tape, disk, or via an input terminal. Results from the test run will be in the form of routine output reports, transaction listings, and error reports. In addition, the auditor must review the updated master files to determine that account balances have been correctly updated. The test results are then compared with the auditor's expected results to determine if the application is functioning properly. This comparison may be performed manually or through special computer software.

FIGURE 7.16 (the test data technique). The auditor prepares test transactions (from various input sources), test master files, and expected results; after the test run, the auditor compares the test results with the predetermined results.

Figure 7.17 lists selected fields for hypothetical transactions and accounts receivable records prepared by the auditor to test a sales order processing application. The figure also shows an error report of rejected transactions and a listing of the updated accounts receivable master file. Any deviations between the actual results obtained and those expected by the auditor may indicate a logic or control problem.
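The comparison step lends itself to simple software support. The sketch below is a hypothetical illustration, not a commercial tool: it compares the updated test accounts receivable balances against the auditor's predetermined balances, using the customer numbers and updated balances that appear in Figure 7.17.

```python
# Expected balances are the auditor's predetermined results; actual balances are
# read from the updated test AR master file produced by the test run.
# Customer 267991 is unchanged because its transaction was rejected (credit limit error).
expected = {"231893": 420.00, "256519": 860.00, "267991": 2900.00}
actual   = {"231893": 420.00, "256519": 860.00, "267991": 2900.00}

def compare_results(expected, actual):
    """List every deviation between predetermined and actual balances."""
    exceptions = []
    for cust_no, exp_balance in expected.items():
        act_balance = actual.get(cust_no)
        if act_balance is None:
            exceptions.append((cust_no, "missing from updated master file"))
        elif abs(act_balance - exp_balance) > 0.005:
            exceptions.append((cust_no, f"expected {exp_balance:.2f}, found {act_balance:.2f}"))
    return exceptions

deviations = compare_results(expected, actual)
print(deviations or "no deviations -- application appears to function as expected")
```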

Creating Test Data

When creating test data, auditors must prepare a complete set of both valid and invalid transactions. If test data are incomplete, auditors might fail to examine critical branches of application logic and error-checking routines. Test transactions should test every possible input error, logical process, and irregularity.

Gaining knowledge of the application's internal logic sufficient to create meaningful test data frequently requires a large investment of time. However, the efficiency of this task can be improved through careful planning during systems development. The auditor should save the test data used to test program modules during the implementation phase of the SDLC for future use. If the application has undergone no maintenance since its initial implementation, current audit test results should equal the test results obtained at implementation. However, if the application has been modified, the auditor can create additional test data that focus on the areas of the program changes.

Base Case System Evaluation

There are several variants of the test data technique. When the set of test data in use is comprehensive, the technique is called the base case system evaluation (BCSE). BCSE tests are conducted with a set of test transactions containing all possible transaction types. These are processed through repeated iterations during systems development testing until consistent and valid results are obtained. These results are the base case. When subsequent changes to the application occur during maintenance, their effects are evaluated by comparing current results with base case results.

Tracing

Another type of the test data technique, called tracing, performs an electronic walkthrough of the application's internal logic. The tracing procedure involves three steps:

1. The application under review must undergo a special compilation to activate the trace option.

2. Specific transactions or types of transactions are created as test data.

3. The test data transactions are traced through all processing stages of the program, and a listing is produced of all programmed instructions that were executed during the test.

Implementing tracing requires a detailed understanding of the application's internal logic. Figure 7.18 illustrates the tracing process using a portion of the logic for a payroll application. The example shows records from two payroll files—a transaction record showing hours worked and two records from a master file showing pay rates. The trace listing at the bottom of Figure 7.18 identifies the program statements that were executed and the order of execution. Analysis of trace options indicates that Commands 0001 through 0020 were executed. At that point, the application transferred to Command 0060. This occurred because the employee number (the key) of the transaction record did not match the key of the first record in the master file. Then Commands 0010 through 0050 were executed.
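In a modern environment, an auditor can approximate a trace option with a language's built-in tracing hooks rather than a special compilation. The sketch below re-expresses the payroll matching logic of Figure 7.18 in simplified Python (it is an illustration, not the production program) and uses sys.settrace to produce a listing of the lines executed for the time card shown in the figure.

```python
import sys

def match_and_pay(transaction, master_records):
    """Simplified stand-in for the Figure 7.18 payroll logic: find the master
    record whose employee number matches the transaction, then compute wages."""
    for master in master_records:                       # read records from the master file
        if transaction["emp_no"] == master["emp_no"]:
            wage = (transaction["reg_hrs"] + transaction["ot_hrs"] * 1.5) * master["hourly_rate"]
            master["ytd_earnings"] += wage
            return wage
    return None

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "match_and_pay":
        print("executed line", frame.f_lineno)          # the trace listing
    return tracer

master_file = [
    {"emp_no": 33276, "hourly_rate": 15, "ytd_earnings": 12050},
    {"emp_no": 33456, "hourly_rate": 15, "ytd_earnings": 13100},
]
time_card = {"emp_no": 33456, "reg_hrs": 40.0, "ot_hrs": 3.0}

sys.settrace(tracer)
wage = match_and_pay(time_card, master_file)
sys.settrace(None)
print("wage computed:", wage)   # (40 + 3 x 1.5) x 15 = 667.50
```

As in the figure, the listing shows the loop body executing once for the non-matching record before the matching record is found and the wage is computed.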

FIGURE 7.17 (test transactions and accounts receivable records for the sales order processing test, selected fields). The figure presents the auditor's test sales order transactions (customer number, customer name, description, quantity, unit price, total price), the original and updated test AR master files for customers 231893, 256519, and 267991 (credit limits 1,000.00, 5,000.00, and 3,000.00; current balances updated from 400.00, 850.00, and 2,900.00 to 420.00, 860.00, and 2,900.00), and an error report of rejected transactions with explanations such as "record out of sequence" and "credit limit error."

Advantages of Test Data Techniques

There are three primary advantages of test data techniques. First, they employ through-the-computer testing, thus providing the auditor with explicit evidence concerning application functions. Second, if properly planned, test data runs can be employed with only minimal disruption to the organization's operations. Third, they require only minimal computer expertise on the part of auditors.

Disadvantages of Test Data Techniques

The primary disadvantage of all test data techniques is that auditors must rely on computer services personnel to obtain a copy of the application for test purposes. This entails a risk that computer services may intentionally or accidentally provide the auditor with the wrong version of the application and may reduce the reliability of the audit evidence. In general, audit evidence collected by independent means is more reliable than evidence supplied by the client.

A second disadvantage of these techniques is that they provide a static picture of application integrity at a single point in time. They do not provide a convenient means of gathering evidence about ongoing application functionality. There is no evidence that the application being tested today is functioning as it did during the year under test.

A third disadvantage of test data techniques is their relatively high cost of implementation, which results in audit inefficiency. The auditor may devote considerable time to understanding program logic and creating test data. In the following section, we see how automating testing techniques can resolve these problems.

FIGURE 7.18 Tracing. The figure shows a payroll transaction record (time card 8945, employee number 33456, Jones, J.J., year 2004, pay period 14, 40.0 regular hours, 3.0 overtime hours), two payroll master file records (employee numbers 33276 and 33456, hourly rate 15, with YTD earnings, dependents, YTD withholding, and YTD FICA fields), and the program logic being traced: read a record from the transaction file; read a record from the master file; if the transaction employee number equals the master employee number, compute wage = (regular hours + overtime hours x 1.5) x hourly rate, add the wage to YTD earnings, and go to 0001; else go to 0010.

The Integrated Test Facility

The integrated test facility (ITF) approach is an automated technique that enables the auditor to test an application's logic and controls during its normal operation. The ITF is one or more audit modules designed into the application during the systems development process. In addition, ITF databases contain "dummy" or test master file records integrated with legitimate records. Some firms create a dummy company to which test transactions are posted. During normal operations, test transactions are merged into the input stream of regular (production) transactions and are processed against the files of the dummy company. Figure 7.19 illustrates the ITF concept.

ITF audit modules are designed to discriminate between ITF transactions and routine production data. This may be accomplished in a number of ways. One of the simplest and most commonly used is to assign a unique range of key values exclusively to ITF transactions. For example, in a sales order processing system, account numbers between 2000 and 2100 can be reserved for ITF transactions and will not be assigned to actual customer accounts. By segregating ITF transactions from legitimate transactions in this way, routine reports produced by the application are not corrupted by ITF test data. Test results are produced separately on storage media or hard copy output and distributed directly to the auditor. Just as with the test data techniques, the auditor analyzes ITF results against expected results.
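A minimal sketch of how an embedded ITF module might use a reserved key range to keep test activity out of production balances and reports follows. The module, file layout, and dummy-company ledger are hypothetical; only the reserved range of 2000 to 2100 follows the example in the text.

```python
ITF_RANGE = range(2000, 2101)   # account numbers reserved for the dummy company

def process_sales_order(order, production_ledger, itf_ledger, itf_log):
    """Post an order to the production files or, if it carries a reserved
    ITF account number, to the dummy-company files and the auditor's log."""
    is_itf = order["account_no"] in ITF_RANGE
    ledger = itf_ledger if is_itf else production_ledger
    ledger[order["account_no"]] = ledger.get(order["account_no"], 0) + order["amount"]
    if is_itf:
        itf_log.append(order)        # test results reported separately to the auditor
    return ledger[order["account_no"]]

production_ledger, itf_ledger, itf_log = {}, {}, []
process_sales_order({"account_no": 1153, "amount": 400.00}, production_ledger, itf_ledger, itf_log)
process_sales_order({"account_no": 2050, "amount": 120.00}, production_ledger, itf_ledger, itf_log)
print(production_ledger)    # {1153: 400.0} -- routine reports unaffected by test data
print(itf_ledger, itf_log)  # dummy-company balance and the auditor's test results
```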

Advantages of ITF

The ITF technique has two advantages over test data techniques. First, ITF supports ongoing monitoring of controls as required by SAS 78. Second, applications with ITF can be economically tested without disrupting the user's operations and without the intervention of computer services personnel. Thus, ITF improves the efficiency of the audit and increases the reliability of the audit evidence gathered.

Disadvantages of ITF

The primary disadvantage of ITF is the potential for corrupting the data files of the organization with test data. Steps must be taken to ensure that ITF test transactions do not materially affect financial statements by being improperly aggregated with legitimate transactions. This problem is remedied in two ways: (1) adjusting entries may be processed to remove the effects of ITF from general ledger account balances, or (2) data files can be scanned by special software that removes the ITF transactions.

FIGURE 7.19 The ITF Technique. The auditor enters ITF transactions along with production transactions and calculates expected results; the production application with embedded ITF modules processes both streams against the production master files and the ITF master files, producing production reports and ITF test results; after testing, the auditor compares the ITF results with the expected results.

Parallel Simulation

Parallel simulation requires the auditor to write a program that simulates key features or processes of the application under review. The simulated application is then used to reprocess transactions that were previously processed by the production application. This technique is illustrated in Figure 7.20. The results obtained from the simulation are reconciled with the results of the original production run to establish a basis for making inferences about the quality of application processes and controls.

Creating a Simulation Program

A simulation program can be written in any programming language. However, because of the one-time nature of this task, it is a candidate for fourth-generation language generators. The steps involved in performing parallel simulation testing are outlined here.

1. The auditor must first gain a thorough understanding of the application under review. Complete and current documentation of the application is required to construct an accurate simulation.

2. The auditor must then identify those processes and controls in the application that are critical to the audit. These are the processes to be simulated.

3. The auditor creates the simulation using a 4GL or generalized audit software (GAS).

4. The auditor runs the simulation program using selected production transactions and master files to produce a set of results.

5. Finally, the auditor evaluates and reconciles the test results with the production results produced in a previous run.

FIGURE 7.20 Parallel Simulation. The auditor uses generalized audit software (GAS) and the application specifications to produce a simulation program of the application under review; production transactions and production master files are processed by both the actual production application and the simulation program; the auditor then reconciles the simulation output with the production output.

Simulation programs are usually less complex than the production applications they represent. Because simulations contain only the application processes, calculations, and controls relevant to specific audit objectives, the auditor must carefully evaluate differences between test results and production results. Differences in output results occur for two reasons: (1) the inherent crudeness of the simulation program and (2) real deficiencies in the application's processes or controls, which are made apparent by the simulation program.
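As a hypothetical illustration of Steps 3 through 5, the sketch below simulates a single critical calculation, extending each invoice as quantity times unit price, and reconciles the simulated amounts with the totals on the production output file. The data values are invented, and the third invoice is deliberately misstated in the production output so the reconciliation has something to report.

```python
# Hypothetical parallel simulation of one critical process: recomputing invoice
# extensions from production transactions and reconciling them with production output.
production_transactions = [
    {"invoice": "A-1001", "qty": 3,  "unit_price": 15.00},
    {"invoice": "A-1002", "qty": 20, "unit_price": 20.00},
    {"invoice": "A-1003", "qty": 5,  "unit_price": 25.00},
]
production_output = {"A-1001": 45.00, "A-1002": 400.00, "A-1003": 120.00}  # A-1003 is misstated

def simulate(transactions):
    """The simulation program: only the calculation under audit is reproduced."""
    return {t["invoice"]: round(t["qty"] * t["unit_price"], 2) for t in transactions}

def reconcile(simulated, produced):
    """List invoices where the simulation and the production run disagree."""
    return [(inv, simulated[inv], produced.get(inv))
            for inv in simulated if simulated[inv] != produced.get(inv)]

differences = reconcile(simulate(production_transactions), production_output)
print(differences or "simulation agrees with production output")  # reports A-1003
```

Whether such a difference reflects the crudeness of the simulation or a real deficiency in the application is exactly the judgment the auditor must make at Step 5.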

SUMMARY

This chapter examined issues related to the use of computer-assisted audit tools and techniques (CAATTs) for performing tests of application controls and data extraction. The chapter began by describing three broad classes of application controls: input controls, processing controls, and output controls. Input controls, which govern the gathering and insertion of data into the system, attempt to ensure that all data transactions are valid, accurate, and complete. Processing controls attempt to preserve the integrity of individual records and batches of records within the system and must ensure that an adequate audit trail is preserved. The goal of output controls is to ensure that information produced by the system is not lost, misdirected, or subject to privacy violations.

Next, the black box and white box approaches to testing application controls were reviewed. The black box technique involves auditing around the computer. The white box approach requires a detailed understanding of the application's logic. Five types of CAATT that are commonly used for testing application logic were then examined: the test data method, base case system evaluation, tracing, integrated test facility, and parallel simulation.

input controls, integrated test facility (ITF), operator intervention controls, output controls, output spooling, parallel simulation, processing controls


REVIEW QUESTIONS

1 What are the broad classes of input controls?

2 Explain the importance of source documents and

associated control techniques

3 Give one example of an error that is detected by a

check digit control

4 What are the primary objectives of a batch control?

f Expiration date check

g Numeric-alphabetic data check

7 What are the five major components of a GDIS?

8 What are the three categories of processing controls?

9 If all of the inputs have been validated before processing, then what purpose do run-to-run controls serve?

10 What is the objective of a transaction log?

11 How can spooling present an added exposure?

DISCUSSION QUESTIONS

1 The field calls for an "M" for married or an "S" for single. The entry is a "2." What control will detect this error?

2 The firm allows no more than 10 hours of overtime a week. An employee entered "15" in the field. Which control will detect this error?

3 The password was "CANARY"; the employee entered "CAANARY." Which control will detect this error?

4 The inventory item number was omitted on the purchase order. Which control will detect this error?

5 The order entry system will allow a 10 percent variation in list price. For example, an item with a list price of $1 could be sold for 90 cents or $1.10 without any system interference. The cost of the item is $3 but the cashier entered $2. Which control would detect this error?

6 How does privacy relate to output control?

7 What are some typical problems with passwords?

8 What are the three categories of processing control?

9 Output controls ensure that output is not lost, misdirected, or corrupted and that privacy is not violated. What are some output exposures, or situations where output is at risk?

10 Input validation includes field interrogation that examines the data in individual fields. List four validation tests and indicate what is checked in each.

11 What is record interrogation? Give two examples.


MULTIPLE-CHOICE QUESTIONS

1 CMA 685 5-28

Routines that use the computer to check the validity

and accuracy of transaction data during input are

An edit of individual transactions in a direct

access file processing system usually

a takes place in a separate computer run

b takes place in an online mode as transactions

are entered

c takes place during a backup procedure

d is not performed due to time constraints

e is not necessary

3 CMA Adapted 686 5-13

An example of an input control is

a making sure that output is distributed to the

proper people

b monitoring the work of programmers

c collecting accurate statistics of historical transactions while gathering data

d recalculating an amount to ensure its accuracy

e having another person review the design of a

business form

4 A control designed to validate a transaction at the

point of data entry is

a recalculation of a batch total

In a manual system, records of current activity

are posted from a journal to a ledger In a

computer system, current records from a(n)

a table file are updated to a transaction file

b index file are updated to a master file

c transaction file are updated to a master file

d master file are updated to a year-to-date file

e current balance file are updated to an index

file

6 CMA 1287 5-1
The primary functions of a computerized information system include

a input, processing, and output

b input, processing, output, and storage

c input, processing, output, and control

d input, processing, output, storage, and control

e collecting, sorting, summarizing, and reporting

7 CMA 1287 5-16

An employee in the receiving department keyed

in a shipment from a remote terminal and inadvertently omitted the purchase order number. The best systems control to detect this error would be a

In an automated payroll processing environment, a department manager substituted the time card for a terminated employee with a time card for a fictitious employee. The fictitious employee had the same pay rate and hours worked as the terminated employee. The best control technique to detect this action using employee identification numbers would be a

of virtually all accounting processing systems, and much of the information generated by the accounting system is used for preventive control purposes. Which one of the following is not an essential element of a sound preventive control system?

a separation of responsibilities for the recording, custodial, and authorization functions

b sound personnel practices

c documentation of policies and procedures

d implementation of state-of-the-art software

and hardware

e physical protection of assets

10 Which of the following is not a test for identifying

application errors?

a reconciling the source code

b reviewing test results

c retesting the program

d testing the authority table

11 Which of the following is not a common type of

white-box test of controls?

a completeness tests

b redundancy tests

c inference tests

d authenticity tests

12 All of the following are examples of source document control except

a prenumbering source documents

b limiting access to source documents

c supervising the bursting of source documents

d checking the sequence of numbers to identify

missing documents

13 The correct purchase order number, 123456, was

incorrectly recorded as shown in the solutions. All of the following are transcription errors except

a 1234567

b 12345

c 124356

d 123457

14 Which of the following is correct?

a Check digits should be used for all data

codes

b Check digits are always placed at the end of

data codes

c Check digits do not affect processing efficiency

d Check digits are designed to detect transcription errors

15 Which statement is NOT correct? The goal of

batch controls is to ensure that during processing

a transactions are not omitted

b transactions are not added

c transactions are processed more than once

d an audit trail is created

16 The data control clerk performs all of the following duties except

a maintaining the batch control log

b computing (or recomputing) batch controldata

c reconciling the output results to the batchcontrol log

d destroying batch control logs when reconciled

17 An example of a hash total is

a total payroll checks—$12,315

b total number of employees—10

c sum of the social security numbers—12,555,437,251

d none of the above

18 Which statement is NOT true? A batch control log

a is prepared by the user department

b records the record count

c indicates any error codes

d is maintained as a part of the audit trail

19 Which of the following is an example of a fieldinterrogation?

a numeric/alphabetic data check

b sign check

c limit check

d missing data check


23 A specific inventory record indicates that there are twelve items on hand and a customer purchased two of the items. When recording the order, the data entry clerk mistakenly entered twenty items sold. Which check would detect this error?

25 Which statement is not correct?

a The purpose of file interrogation is to ensure

that the correct file is being processed by the

system

b File interrogation checks are particularly

important for master files

c Header labels are prepared manually and

affixed to the outside of the tape or disk

d An expiration date check prevents a file from

being deleted before it expires

26 A computer operator was in a hurry and accidentally used the wrong master file to process a transaction file. As a result, the accounts receivable master file was erased. Which control would prevent this from happening?

a header label check

b expiration date check

c version check

d validity check

27 Which of the following is NOT a component of

the generalized data input system?

a generalized validation module

b validated data file

c updated master file

d error file

28 Advantages of the generalized data input system include all of the following except

a control over quality of data input

b automatic calculation of run-to-run totals

c company-wide standards for data validation

d development of a reusable module for datavalidation

29 Run-to-run control totals can be used for all of the following except

a to ensure that all data input is validated

b to ensure that only transactions of a similar type are being processed

c to ensure the records are in sequence and are not missing

d to ensure that no transaction is omitted

30 Methods used to maintain an audit trail in a computerized environment include all of the following except

a transaction logs

b unique transaction identifiers

c data encryption

d log of automatic transactions

31 Risk exposures associated with creating an output file as an intermediate step in the printing process (spooling) include all of the following actions by a computer criminal except

a gaining access to the output file and changing critical data values

b using a remote printer and incurring operating inefficiencies

c making a copy of the output file and using the copy to produce illegal output reports

d printing an extra hard copy of the output file

32 Which statement is NOT correct?

a Only successful transactions are recorded on

a transaction log

b Unsuccessful transactions are recorded in an error file

c A transaction log is a temporary file

d A hard copy transaction listing is provided to users

PROBLEMS

1 Input Validation

Identify the types of input validation techniques for the

following inputs to the payroll system. Explain the

controls provided by each of these techniques.

a Operator access number to payroll file

b New employee

c Employee name

d Employee number

e Social Security number

f Rate per hour or salary

g Marital status

h Number of dependents


i Cost center

j Regular hours worked

k Overtime hours worked

l Total employees this payroll period

2 Processing Controls

CMA 691 4-2

Unless adequate controls are implemented, the rapid advance of computer technology can reduce a firm's ability to detect errors and fraud. Therefore, one of the critical responsibilities of the management team in firms where computers are used is the security and control of information service activities.

During the design stage of a system, information system controls are planned to ensure the reliability of data. A well-designed system can prevent both intentional and unintentional alteration or destruction of data. These data controls can be classified as (1) input controls, (2) processing controls, and (3) output controls.

Required:

For each of the three data control categories listed, provide two specific controls and explain how each control contributes to ensuring the reliability of data. Use the following format for your answer.

3 Input Controls and Data Processing

You have been hired by a catalog company to computerize its sales order entry forms. Approximately 60 percent of all orders are received over the telephone, with the remainder either mailed or faxed in. The company wants the phone orders to be input as they are received. The mail and fax orders can be batched together in groups of fifty and submitted for data entry as they become ready. The following information is collected for each order:

• Customer number (if a customer does not have one, one needs to be assigned)

• Customer name

• Address

• Payment method (credit card or money order)

• Credit card number and expiration date (if necessary)

• Items ordered and quantity

• Unit price

Required:

Determine control techniques to make sure that all orders are entered accurately into the system. Also, discuss any differences in control measures between the batch and the real-time processing.

4. Write an essay explaining the following three methods of correcting errors in data entry: immediate correction, creation of an error file, and rejection of the batch.

5. Many techniques can be used to control input data. Write a one-page essay discussing three techniques.

6. The presence of an audit trail is critical to the integrity of the accounting information system. Write a one-page essay discussing three of the techniques used to preserve the audit trail.

7. Write an essay comparing and contrasting the following audit techniques based on costs and benefits:

• test data method

• base case system evaluation

• tracing

• integrated test facility

• parallel simulation


C H A P T E R 8

Data Structures and CAATTs for Data Extraction

LEARNING OBJECTIVES

After studying this chapter, you should:

• Understand the components of data structures and how these are used to achieve data-processing operations.

• Be familiar with structures used in flat-file systems, including sequential, indexes, hashing, and pointer structures.

• Be familiar with relational database structures and the principles of normalization.

• Understand the features, advantages, and disadvantages of the embedded audit module approach to data extraction.

• Know the capabilities and primary features of generalized audit software.

• Become familiar with the more commonly used features of ACL.

This chapter examines data structures and the use of CAATTs for data extraction and analysis. The chapter opens with a review of data structures, which constitute the physical and logical arrangement of data in files and databases. Flat-file, navigational database, and relational database structures are examined. Considerable attention is devoted to relational databases, since this is the most common data structure used by modern business organizations. The coverage includes relational concepts, terminology, table-linking techniques, database normalization, and database design procedures.

Understanding how data are organized and accessed is central to using a data extraction CAATT. Auditors make extensive use of these tools in gathering accounting data for testing application controls and in performing substantive tests. In the previous chapter we studied how CAATTs are used to test application controls directly. The data extraction tools discussed in this chapter are used to analyze the data processed by an application rather than the application itself. By analyzing data retrieved from computer files, the auditor can make inferences about the presence and functionality of controls in the application that processed the data.


Another important use of data extraction software is in performing substantive tests. Most audit testing occurs in the substantive-testing phase of the audit. These procedures are called substantive tests because they are used for, but not limited to, the following:

• Determining the correct value of inventory

• Determining the accuracy of prepayments and accruals

• Confirming accounts receivable with customers

• Searching for unrecorded liabilities

CAATTs for data extraction software fall into two general categories: embedded audit modules and general audit software. The chapter describes the features, advantages, and disadvantages of the embedded audit module (EAM) approach. It then outlines typical functions and uses of generalized audit software (GAS). The chapter closes with a review of the key features of ACL (audit command language), the leading product in the GAS market.

DATA STRUCTURES

Data structures have two fundamental components: organization and access method. Organization refers to the way records are physically arranged on the secondary storage device. This may be either sequential or random. The records in sequential files are stored in contiguous locations that occupy a specified area of disk space. Records in random files are stored without regard for their physical relationship to other records of the same file. Random files may have records distributed throughout a disk. The access method is the technique used to locate records and to navigate through the database or file. While several specific techniques are used, in general, they can be classified as either direct access or sequential access methods.

Since no single structure is best for all processing tasks, different structures are used for storing different types of accounting data. Selecting a structure, therefore, involves a trade-off between desirable features. The criteria that influence the selection of the data structure are listed in Table 8.1.

In the following section, we examine several data structures. These are divided between flat-file and database systems. In practice, organizations may employ any of these approaches in various combinations for storing their accounting data.

TABLE 8.1
1. Retrieve a record from the file based on its primary key.
2. Insert a record into a file.
3. Update a record in the file.
4. Read a complete file of records.
5. Find the next record in the file.
6. Scan a file for records with common secondary keys.
7. Delete a record from a file.


Flat-File Structures

Recall from Chapter 4 that the flat-file model describes an environment in which individual data files are not integrated with other files. End users in this environment own their data files rather than share them with other users. Data processing is thus performed by standalone applications rather than integrated systems. The flat-file approach is a single view model that characterizes legacy systems. Data files are structured, formatted, and arranged to suit the specific needs of the owner or primary user. Such structuring, however, may omit or corrupt data attributes that are essential to other users, thus preventing successful integration of systems across the organization.

Sequential Structure

Figure 8.1 illustrates the sequential structure, which is typically called the sequential access method. Under this arrangement, for example, the record with key value 1875 is placed in the physical storage space immediately following the record with key value 1874. Thus, all records in the file lie in contiguous storage spaces in a specified sequence (ascending or descending) arranged by their primary key.

Sequential files are simple and easy to process. The application starts at the beginning of the file and processes each record in sequence. Of the file-processing operations in Table 8.1, this approach is efficient for Operations 4 and 5, which are, respectively, reading an entire file and finding the next record in the file. Also, when a large portion of the file (perhaps 20 percent or more) is to be processed in one operation, the sequential structure is efficient for record updating (Operation 3 in Table 8.1). An example of this is payroll processing, where 100 percent of the employee records on the payroll file are processed each payroll period. However, when only a small portion of the file (or a single record) is being processed, this approach is not efficient.

The sequential structure is not a practical option for the remaining operations listed in Table 8.1. For example, retrieving a single record (Operation 1) from a sequential file requires reading all the records that precede the desired record. On average, this means reading half the file each time a single record is retrieved. The sequential access method does not permit accessing a record directly. Files that require direct access operations need a different data structure. The following data structures address this need.
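The trade-off just described can be demonstrated with a small sketch that treats a file as a Python list ordered by primary key: reading the complete file (Operation 4) touches each record once, while retrieving a single record (Operation 1) must scan every record that precedes it. The key values below are arbitrary.

```python
def read_entire_file(records):
    """Operation 4: process every record once, in key sequence (efficient)."""
    for record in records:
        yield record

def retrieve_by_key(records, key):
    """Operation 1: a sequential search must read every record that precedes
    the target, about half the file on average (inefficient)."""
    reads = 0
    for record in records:
        reads += 1
        if record["key"] == key:
            return record, reads
    return None, reads

file_ = [{"key": k, "balance": k * 10.0} for k in range(1870, 1880)]   # keys 1870..1879

total = sum(record["balance"] for record in read_entire_file(file_))   # one pass over the file
record, reads = retrieve_by_key(file_, 1875)
print(total, record, "records read:", reads)   # 6 records read to reach key 1875
```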

FIGURE 8.1 (the sequential structure): records are read sequentially, and keys are in sequence (in this case, ascending order).

