



Measurement in the Evaluation Context: Trustable, Feasible, Usable 1

John F. Stevenson

Department of Psychology, University of Rhode Island,

10 Chafee Road, Kingston, RI 02881 (401)874-4240; jsteve@uri.edu

I spend one week on measurement in my introductory graduate course on evaluation research methods in a psychology department. In that class I try to hit on some high spots with practical examples – it is not a session on psychometrics, although I touch on those issues. Every student is working on a semester-long, multi-part assignment, planning an evaluation, so the measurement issues are directly related to decisions they are making for their own projects. The role of logic models in identifying what must be measured is already an ongoing organizing theme. Here are the high spots:

Measurement is a cultural thing! I feature culture-embedded case stories to illustrate points. First, professional cultures approach the task of measuring important variables in evaluation from very different perspectives. Epidemiologists like population-level archival statistics, often dichotomous variables (e.g., crime, health diagnoses, mortality), logistic regression, and elegant modeling of temporal change; sociologists like individual items on surveys, and pay a great deal of attention to sample characteristics and how to measure demographics; psychologists like multi-item self-report scales, with an emphasis on psychometric properties (reliability and validity) and unobservable “intrapsychic” constructs (knowledge, attitudes, skills/behaviors). Each has strengths and weaknesses, and they may lend themselves to different parts of a logic model – sociological for process measures, psychological for intermediate or mediating outcomes, and epidemiological for longer-term outcomes.

Second, the assumptions underlying many of our measures reflect the culture in which they were developed. I include a lengthy section on cultural issues in measurement in evaluation, using my own case-study paper on the topic as a basis (Stevenson, 1996). This paper illustrates some basic psychometric steps, puts them in the context of a real-world project scenario with Latino social service agency staff and directors as the clients, and makes observations about promise and pitfalls in cross-cultural forays into quantitative measurement. I don’t spend time on qualitative alternatives (e.g., Lincoln and Guba, 1986) in this class, but they are discussed in a session on qualitative methods.

Selecting measures in a program evaluation context is about much more than psychometric properties. James Ciarlo worked with a group of expert consultants on an NIMH contract to prepare a manual on measuring mental health outcomes (Ciarlo et al., 1986), and I organize some of my class around his recommendations. His list of criteria for mental health outcome measures can be grouped under three headings: trustworthiness (reliability, validity, cultural and literacy-level appropriateness); practicality (stakeholder interest and understanding, time to completion, training and other costs, social values and biases); and usefulness (sensitivity to change, validity for plausible program effects, policy relevance). Selection for convergence across multiple measures, from different sources (self, significant others, clinical observers, objective test, structured observation) with differing strengths and limitations, is often wise.
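As one concrete reference point for the reliability piece of trustworthiness, the internal consistency of a multi-item scale with $k$ items is conventionally summarized by Cronbach’s alpha:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)$$

where $\sigma^{2}_{Y_i}$ is the variance of responses to item $i$ and $\sigma^{2}_{X}$ is the variance of the total scale score; values approaching 1 indicate that the items cohere as a single scale.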

I conclude with a class exercise: “For your project, (1) select three important stakeholders, (2) identify the most important outcome for each, and (3) propose one suitable approach to measuring each outcome.”

1 Paper presented as part of a session titled “Critical Concepts for Introductory Evaluation Courses: Multiple Perspectives - Part 2” at the annual meeting of the American Evaluation Association, San Antonio, Texas, November 2010.

References

Ciarlo, J.A., Broskowski, A., Cox, G.B., Goldman, H.H., Hargreaves, W.A., Mintz, J., Waskow, I.E., & Zinober, J.W. (1986). Ideal criteria for development, selection, and use of client outcome measures. In J.A. Ciarlo, T.R. Brown, D.W. Edwards, T.J. Kiresuk, & F.L. Newman, Assessing Mental Health Outcome Measurement Techniques. NIMH Series FN No. 9, DHHS Pub. No. (ADM)86-1301. Washington, DC: Superintendent of Documents, US Government Printing Office.

Lincoln, Y.S., & Guba, E.G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. In D.D. Williams (Ed.), Naturalistic Evaluation (New Directions for Program Evaluation, No. 30). San Francisco: Jossey-Bass.

Patton, M.Q. (2008). Chapter 7, Focusing on outcomes: Beyond the goals clarification game. In Utilization-Focused Evaluation (3rd ed.). Thousand Oaks, CA: Sage.

Posavac, E.J., & Carey, R.G. (2007). Chapter 4, Developing measures. In Program Evaluation: Methods and Case Studies (7th ed.). Englewood Cliffs, NJ: Prentice-Hall.

Stevenson, J. (1996, November). The social construction of constructs: Measuring Hispanic Pride. Paper presented at the annual meeting of the American Evaluation Association, Atlanta, GA.

