
Design of the evaluation


A program evaluation has to be designed, whether it uses a qualitative or a quantitative method of investigation. First, the evaluator has to decide which sites to study: large or small, few or several (classrooms within a school).

Page 1

George C. Homans (1949): "People who write about methodology often forget that it is a matter of strategy, not of morals. There are neither good nor bad methods, but only methods that are more or less effective under particular circumstances in reaching objectives on the way to a distant goal."

Design of the evaluation here includes:

Page 2

The design indicates:

• Which people or units will be studied?
• How are they to be selected?
• What kinds of comparisons will be drawn?
• What is the timing of the investigation?

So the key elements in design are:

Status
Method
Comparison
Change/Longitudinal

Page 3

Process Evaluation:

Is the program being conducted and producing output as planned?

How can the process be improved?

• In fact, not very different from monitoring; a similar kind of inquiry underlies both.

But there are some differences:

• Often conducted for the benefit of the program.
• Helps to understand the program: what it has been doing and how.
• Leads staff to reflect on how the program might improve its operations.
• More systematic, relying more on data and less on intuitive judgments.

Page 4

A program evaluation has to be designed, whether it uses a qualitative or a quantitative method of investigation.

First, the evaluator has to decide:

- which sites to study (large or small, few or several; classrooms within the school) and which people to query
- which sampling strategy to use
- the time period: what data, at what intervals, how intensively

In qualitative work (Yin): make design an integral feature.

So a major advantage of process evaluation is the "opportunity to find the unexpected":

- The evaluator can follow the trail wherever it leads.
- She also learns which aspects of the program matter to staff and to participants.
- What elements of the program are relevant to whom at which time?

Page 5

When should an evaluator conduct a process evaluation?

• When she knows little about the nature of the program and its activities.
• When the program represents a marked departure from ordinary methods of service.
• When the theories underlying the program, implicit or explicit, are doubtful, disputable, or problematic.
• In such cases a wider-gauge inquiry is worthwhile.

However, there is again the option of quantitative measures of program process: design forms, interviews, surveys.

So, whichever method or combination of methods is used for process evaluation, the evaluator has to decide the frequency with which information will be collected.

Page 6

Design components of data collection:

Timing of data collection: when and how often data will be collected. Example: observe classroom activities at least twice per semester, with at least 2 weeks of observation; conduct focus groups with participants in the last month of the intervention.

Data sources: sources of information (for example, who will be surveyed, observed, interviewed). Both qualitative and quantitative; sources include participants, teachers/staff delivering sessions, records, the environment, etc.

Data collection tools: both qualitative and quantitative; tools include surveys, checklists, observation forms, interview guides, etc.

Data management: procedures for getting data from the field and entered, plus quality checks. Example: staff turn in participant sheets weekly; the evaluation coordinator collects and checks surveys and gives them to data-entry staff; interviews are transcribed and tapes submitted at the end of the month.

Data analysis: statistical and/or qualitative methods used to analyse or summarise data; the statistical analysis and software that will be used for the quantitative data, and the types of qualitative analysis used.

Page 7

Outcome evaluation: to what extent have a program's goals been met?

Two measurements: before the program and after.

Diagram of an experiment:

Random assignment divides units into program participants and a control group.

                Before   After
Participants      a        b
Control group     c        d

b − a = outcomes for participants = y
d − c = outcomes for the control group = z
y − z = the net outcome of the program

If positive, the program has a positive net outcome. More in Chapter 9.
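The diagram's arithmetic can be sketched as follows (all scores are hypothetical, purely to illustrate the y − z computation):

```python
# Net outcome of a randomized experiment, per the diagram:
#   b - a = y (change for participants), d - c = z (change for controls),
#   y - z = net outcome attributable to the program.
# All scores below are hypothetical.
participants_before = [52, 48, 60, 55]   # a: pre-program scores
participants_after  = [70, 65, 72, 68]   # b: post-program scores
controls_before     = [51, 50, 58, 54]   # c
controls_after      = [56, 53, 61, 57]   # d

def mean(xs):
    return sum(xs) / len(xs)

y = mean(participants_after) - mean(participants_before)  # gain for participants
z = mean(controls_after) - mean(controls_before)          # gain for controls
net_outcome = y - z
print(f"y = {y}, z = {z}, net outcome = {net_outcome}")   # positive => positive net outcome
```

Because assignment was random, subtracting the control group's gain z removes change that would have happened without the program.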

Page 8

In the absence of an experimental design, conditions other than the program can be responsible for observed outcomes; chief among them are selection, attrition, and outside events.

If there is no reason to suspect that selection, attrition, outside events, or any other threats are getting in the way, then an experimental design is a luxury.

The evaluator is doing an evaluation to inform the policy community about the program and to help decision makers make wise decisions. Her task is to provide the best information possible.

What the evaluator needs to be concerned about is countering credible threats to valid conclusions.

Page 9

Important concepts

• Validity: the extent to which the indicator actually captures the concept that it aimed to measure.
• Valid findings describe the way things actually are.
• Internal validity: the causal link between independent variables (program inputs, describing the participants or the features of the service they receive) and dependent variables (observed outcomes of the program).
• External validity/generalizability: whether the findings of one evaluation can be generalized to apply to other programs of a similar type.

Page 10

Unit of analysis and unit of sampling

UOA: the unit that the evaluation measures and enters into statistical analysis; that is, the unit that figures in data analysis. The unit that receives a program is usually a person, but it can be a group, an organization, a community, or any other definable and observable unit.

Researchers used to worry about the appropriate UOA: for example, analysis at the higher level of a department of health cannot reach sound conclusions about the individuals within those departments (the ecological fallacy).

UOS: the entity that is selected into the program, e.g., a "study of a nutrition education program carried out in supermarkets."

In programs conducted in schools, classrooms are usually the units sampled. There may be two stages of sampling: first, schools are sampled, followed by a choice of classrooms within the selected schools.

However, techniques of multi-level analysis make it possible to analyze data at several levels simultaneously. A matter that requires continuing care is the unit of measurement.
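The two-stage sampling just described, schools first and then classrooms within the selected schools, can be sketched as follows (school names and sample sizes are hypothetical):

```python
import random

random.seed(7)  # fixed seed so the illustrative draw is reproducible

# Hypothetical sampling frame: schools and the classrooms inside them.
schools = {
    "School A": ["A-1", "A-2", "A-3", "A-4"],
    "School B": ["B-1", "B-2", "B-3"],
    "School C": ["C-1", "C-2", "C-3", "C-4", "C-5"],
    "School D": ["D-1", "D-2"],
}

# Stage 1: sample schools from the frame.
sampled_schools = random.sample(sorted(schools), k=2)

# Stage 2: sample classrooms within each selected school.
sampled_classrooms = {
    school: random.sample(schools[school], k=2)
    for school in sampled_schools
}
print(sampled_classrooms)
```

Note that the classroom is the unit sampled here, while analysis may happen at the student, classroom, or school level, which is exactly why the UOA/UOS distinction needs care.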

Page 11

Designs: 2 key questions to keep in mind

a) What comparisons are being made, and will these comparisons provide sound conclusions?
b) Will the findings from a study like this be persuasive to potential audiences?

Self-evaluation

Expert judgment

Formal:
1. One-group design: a) after only; b) before and after
2. Extending the one-group design: a) more data during the program; b) DRD: Dose-Response Design; c) time-series design
3. Comparison groups…

Page 12

Self-evaluation: participants report what they like or dislike about the program.

• Shortcoming: if people know it is for an evaluation, they give the most favorable report.
• Despite the weaknesses, these kinds of judgments have value.

Page 13

Expert judgment: inspect, review, evaluate the program…

• Teams and panels
• No single person can impose her unique standard
• Decisions have to survive the group's scrutiny
• Wider range of experience and skill

Who are the experts? What do they do? In which phase, and where?

Page 14

Formal: i) One-group design

• Does not include any comparison with other units.
• Two categories of evaluation: after only (AO) and before and after (BA).

A) After only: the evaluator finds out what the situation is after the program by examining records; if records are unavailable, the evaluator can use historical comparisons, e.g., compare the test results at the end of the semester (3rd) with the history of test results (1st and 2nd).

This adds useful information, but there are drawbacks: the changing situation of the school, different populations, test systems, teachers…

Questions about the validity of the data and the reliability of people's memory make it genuinely difficult to reconstruct a reliable baseline without records.

BUT certain matters of fact, such as age, when schooling was completed, whether people were employed or unemployed, and what kind of work they are doing, are fairly trustworthy responses.

Another way is through the "use of experienced judgment."

Page 15

B) Before and after: look at program recipients before they enter the program and then again at the end.

One group. Cohort 1: Before → After.

Result: was the change in their skills or health or income due to the program? Maybe, or maybe not.

But if the data are collected carefully and systematically, they offer considerable information about the program and the people who participate in it. The evaluation will then be able to say much about how the program is working.

Page 16

Advantages and disadvantages of one-group designs

• In the evaluator's mind's eye: higher expectations.
• Cannot always provide quick results.
• Some agencies demand a one-time, ex-post-facto investigation: short-term needs require quick results.
• Evaluators will have to exploit every opportunity…

Page 17

ii) Extending the one-group design: the one-group design can be elaborated in 2 basic ways:

- during the program
- much before and after the program

That is:

MDDP: More Data During the Program
DRD: Dose-Response Design

Page 18

MDDP: More Data During the Program

• One way: a "during" measure, or a series of "during-during-during" (3D) measures.
• Qualitative and quantitative data can be collected on program services and on participants' progress as they move through the program.
• The data can be analyzed to elaborate the picture of what happens during the program.
• It can identify associations between program events and participant outcomes.
• It can probe the relationships among rules, recruitment, strategies, mode of organization, auspices, services, leadership, communication, feedback loops… and whatever else people are concerned about.

DRD: Dose-Response Design

• The evaluation can compare participants who received a large quantity of service (a high dose) with those who received a lower dose.
• The notion that more is better may not always be right.
• In social programs, the evaluator can make many internal comparisons.
• It is a highly versatile design.
• It can be used not only in one-group designs but also when the evaluation includes a comparison or control group… more in Chapter 9.
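A minimal sketch of the dose-response comparison just described (all participants, scores, and the dose threshold are hypothetical):

```python
# Dose-response comparison: group participants by the amount of service
# received, then compare average outcomes. All numbers are hypothetical.
participants = [
    {"sessions": 12, "outcome": 78},
    {"sessions": 3,  "outcome": 60},
    {"sessions": 10, "outcome": 74},
    {"sessions": 2,  "outcome": 62},
    {"sessions": 11, "outcome": 80},
    {"sessions": 4,  "outcome": 58},
]

HIGH_DOSE = 8  # assumed threshold separating high- from low-dose groups

high = [p["outcome"] for p in participants if p["sessions"] >= HIGH_DOSE]
low = [p["outcome"] for p in participants if p["sessions"] < HIGH_DOSE]

mean_high = sum(high) / len(high)
mean_low = sum(low) / len(low)
print(f"high-dose mean = {mean_high:.1f}, low-dose mean = {mean_low:.1f}")
```

As the slides caution, a gap in favor of the high-dose group does not by itself show that more service causes better outcomes; more motivated participants may simply attend more.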

Page 19

[Figure: Program Theory Model for MDDP, a flow diagram for a housing-security program linking: tenant/resident representatives; tenant watch; greater demand for security staff; more security staff hired; more patrol, surveillance, and arrests; more people's eyes on the street; a greater sense of individual responsibility for the project; a culture of mutual concern; more neighborliness and social networks; inhibition of violence; and improved security. For each link, the evaluator locates data sources and time intervals.]

Page 20

We are discussing a variety of research designs for evaluating the process and outcomes of social programs.

The evaluator will want to use designs that are less vulnerable to bias and incompleteness…

Page 21

TIME-SERIES DESIGNS

Presented by: MODESTUS

Page 22

TIME-SERIES DESIGNS

Time-series data enable the evaluator to interpret the pre-to-post changes in the light of additional evidence. They show whether the measures immediately before and after the program simply continue an existing trend or whether they mark a decisive change.
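One minimal way to make that pre-to-post comparison against a longer series (monthly counts are hypothetical; the simple mean-and-trend check is an illustration, not the text's prescribed method):

```python
# Minimal interrupted-time-series check: compare the average level of a
# series before and after the program begins, alongside the pre-trend.
# The monthly counts are hypothetical.
series = [40, 41, 39, 42, 41, 40,   # six months before the program
          31, 30, 29, 28, 30, 29]   # six months after it starts
start = 6  # index of the first post-program observation

pre, post = series[:start], series[start:]
pre_mean = sum(pre) / len(pre)
post_mean = sum(post) / len(post)

# Crude pre-trend: last pre-program value minus the first. A flat
# pre-trend plus a sharp level drop suggests a decisive change rather
# than the continuation of an existing trend.
pre_trend = pre[-1] - pre[0]
print(f"pre mean = {pre_mean}, post mean = {post_mean}, pre trend = {pre_trend}")
```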

Page 23

This type of evaluation has the ability to follow participants for more than a year after their departure from the program; it can be applied to crime, welfare caseloads, hospital stays, medical costs, births, refugee resettlement, etc.

The main criticism is that some outside event may have coincided with the program and been responsible for whatever change was observed between before and after. For example, it may not have been the program that accounted for the decline in smoking; it could have been the television series on the risks of smoking that came along at the same time.

Another way to cope with the possibility of contamination by outside events is to study what happened over the same time period to people who did not receive the program but were exposed to the same external influences.

Page 24

Comparison Group

When these groups are not selected randomly from the same population as program recipients, we call them comparison groups. The main purpose of the comparison is to see whether receiving the program adds something extra that the comparison units do not get: the comparison shows what program recipients would have been like if they had not been in the program.

Before-and-After with Comparison Group

The comparison group is selected to be as much like the client group as possible, through any of a variety of procedures. The evaluator can locate a similar site where the program does not take place, compare the status of both groups before the program began, and test how similar they were. However, each comparison will compensate for differences that the other leaves uncontrolled.
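The baseline-similarity check just described can be sketched as follows (the scores and the comparability threshold are hypothetical):

```python
# Before-and-after with a comparison group: first check that the client
# and comparison groups looked alike before the program began.
# Baseline scores are hypothetical.
clients_before = [55, 58, 52, 60, 57]
comparison_before = [56, 54, 53, 59, 58]

def mean(xs):
    return sum(xs) / len(xs)

baseline_gap = mean(clients_before) - mean(comparison_before)

# Illustrative rule of thumb (assumed): treat the groups as comparable
# when their baseline means differ by less than 2 points.
comparable = abs(baseline_gap) < 2.0
print(f"baseline gap = {baseline_gap:+.2f}, comparable = {comparable}")
```

A check like this supports, but cannot guarantee, comparability: groups alike on measured baselines can still differ on unmeasured ones, which is why the slides treat comparison groups as weaker than random assignment.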

Page 25

Multiple Time Series

These are especially useful for evaluating policy changes, when a jurisdiction passes a law. Example: evaluators of a crackdown on highway speeding collected reports of traffic fatalities for several periods before and after the new program went into effect. They found that fatalities went down after police began strict enforcement of penalties for speeding.

Constructing a Comparison Group

Effect can be assumed (Program → effect).

Posted: 27/03/2016, 14:44
