
Is Critical Thinking the All-purpose Outcome in Higher Education?[1]

John F. Stevenson[2]
November 13, 2010

[1] Paper presented at the annual meeting of the American Evaluation Association, San Antonio, TX.
[2] Department of Psychology, University of Rhode Island, 10 Chafee Road, Kingston, RI 02881; 401-874-4240; jsteve@uri.edu.

Introduction

How can evaluators in higher education work with administrators and faculty to select, implement, and learn from institution-level measures of crucial learning outcomes? Critical thinking is a pervasive choice, and this paper explores issues in its definition and measurement, drawing on the experiences of one university along with the published literature.

In definitions of learning outcomes in higher education, critical thinking is ubiquitous. For evaluators who work with administrators and faculty, considering ways to define and measure critical thinking is a likely role. At both the departmental level and the institution-wide level, academic skills have been a central feature in higher education renewal efforts across the country. Allen (2006), for example, lists “a common core of broad expectations” including “written and oral communication, critical thinking, and information literacy” and characterizes them as “virtually universal” (p. 34). Employers want them, state legislators want them … they are a part of the accountability movement in higher education that higher education evaluators know has been gaining momentum over the past 15 years or so (AAC&U, 2005). An emphasis on skills like critical thinking is also consistent with the effort to find ways to measure student learning outcomes; on the face of it, skills lend themselves to operationalizable definitions. In contrast to other intended outcomes that appear more abstract (e.g., global awareness, self-reflective and ethical decision-making, commitment to lifelong learning), skills seem straightforward, measurable entities with an easy-to-grasp practical significance.

A survey of 433 Chief Academic Officers of higher education institutions conducted in early 2009 found that 74% of the institutions reported that critical thinking was one of the areas of intellectual skills addressed in their common learning goals for students, second only to writing skills in frequency (Hart Research Associates, 2009). This skill is recommended as part of AAC&U’s (2007) LEAP Initiative as one of six “Intellectual and Practical Skills” to be incorporated in a set of essential learning outcomes, and 73% of a sample of employers interviewed in 2006 recommended that colleges and universities place more emphasis on “critical thinking and analytical reasoning skills.” Reflecting this interest in critical thinking as an important learning outcome, national attention has been paid to its measurement. For example, the Collegiate Learning Assessment (CLA) developed by the Council for Aid to Education (Shavelson, 2009) features a complex real-world problem context for assessing this skill; the Collegiate Assessment of Academic Proficiency (CAAP) developed by ACT uses real-world stimulus scenarios with a set of structured follow-up questions; and the VALUE Project sponsored by AAC&U (AAC&U, 2009) presents a “metarubric” for assessing it, distilling the common core from rubrics used by a large group of institutions across the country. This paper will address alternative definitions and measurement approaches, with attention to ongoing definitional issues and the usefulness of alternate approaches in actual curricular change.



Defining and measuring “critical thinking” at the University of Rhode Island

Table 1 presents a summary of methods used at the University of Rhode Island to assess aspects of student learning related to critical thinking. As the table makes clear, various methods, selected or developed by various entities within the institution, are employed. At the program (departmental) level we do not yet have a classification system that would allow us to count the number of departments that use some measure of critical thinking. In Psychology this is one of ten identified learning outcome goals for our undergraduate program, and it has been given early priority.

Across the university, FSSE and NSSE results can be used to track the perceived availability of opportunities for learning that are associated with critical thinking, and to compare that availability with norms for similar institutions. However, the data for these items are rarely made accessible to faculty, and faculty do not generally seek them out. How the psychometric strengths and available norms could affect internal reflection on ways to enhance student learning is unknown, as the data have not been used in that way.

A faculty committee responsible for assessing the general education program devised a set of questions for students to complete along with end-of-semester course evaluations. These are directly relevant for assessing the implementation of skill-focused aspects of the courses, and the results for a stratified sample of general education courses have been reported within the committee and summarized for external consumption.

CAAP results have not been shared beyond the small group of staff and faculty who have worked to implement the Wabash study at URI. The first cohort to receive senior-year testing is due this spring.

Critical thinking as defined by a series of cognitive outcome objectives (similar to Bloom’s taxonomy of cognitive objectives) has been promoted as a means of assessing the impact of general education. The faculty committee responsible for assessing general education anticipated that each major knowledge area (social sciences, natural sciences, etc.) would approach these learning outcomes in different ways, but that all would share commitment to them at a higher level. Student assignments have been used to explore the value of the model, and rubrics have been drafted by the committee. Local interest may be strongest when accreditation pressures rise; in the meantime, internal differences of opinion about the utility of the model as applied across subject areas have slowed its application.

Defining critical thinking: generic definitions

Historically, there have been a few major threads in efforts to define the concept. Scriven and Paul (1987) authored a definition that has much in common with Bloom’s taxonomy. They stated, “Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.” They went on to say that it can and should transcend separate disciplines. They also pointed to two basic aspects of the process: the skills for doing critical thinking and the inclination to use them. Paul and Elder (2008) emphasized a humanistic theme in their definition: “Critical thinking is that mode of thinking … in which the thinker improves the quality of his or her thinking by skillfully taking charge of the structures inherent in thinking and imposing intellectual standards upon them.”

Thus we have (1) a set of cognitive skills for analyzing information and addressing problems across a wide range of contexts; (2) the disposition to approach new situations from that critical perspective (thinking before acting); and (3) the habit of applying those same critical faculties to one’s own thought processes in a meta-cognitive way. Halpern (1993), with a focus on the teaching of critical thinking, adds the important qualification that the skills should generalize beyond the context in which they were acquired and be reflexively transferred to the “real world” contexts where they matter.

AAC&U’s VALUE rubric for critical thinking invokes the transdisciplinary nature of the activity, and defines a set of five important facets: (1) clear and comprehensive understanding of the problem; (2) careful development of information needed to address the problem; (3) critical analysis of assumptions and contexts; (4) creatively synthesized, clearly articulated position on the issue, with recognition of complexities and limits; and (5) logical derivation of consequences and implications with appropriate weighting of considerations.

Choices for measurement: Does a one-size-fits-all approach work?

Both personal experience and the literature suggest that there are very different ways to define critical thinking, and issues in measurement extend from these definitional conundrums. One dimension of difference concerns the need for domain-specific approaches to measurement. In psychology, several authors have empirically explored the question of how generic these skills really are. Renaud and Murray (2008) provide an illustration of the discipline-linked ways of thinking about critical thinking, as they cite a definition in the psychology literature that emphasizes evaluation of evidence and logical links of the evidence to claims in order to conclude whether the claims have been supported. That is certainly how I have always thought of it. Renaud and Murray go on to demonstrate that subject-specific tests of critical thinking do better than generic measures at demonstrating sensitivity to teaching effects; put another way, transfer to other contexts is not so easy to find. Indeed, the published literature suggests that it has been difficult to show that course experiences transfer to generic skills.

In addition to the choice of generic vs. context-related measurement, does it work better to use a test (whether general or course-related), or does it make more sense to use a rubric to evaluate the quality of actual work in classes? In the latter case, is it practical to use a generic rubric across many courses (as the AAC&U suggests), or do such rubrics fail to capture the intended effects of particular courses? Here the conversation is partly about measurement sensitivity, partly about finding good comparison norms, and partly about the likelihood of impact of findings on local pedagogy.

Another dimension I have encountered is related to the content to which the critical thinking is applied. Much of the definitional language seems aimed at the subject’s ability to examine the work of someone else (for example, a published study or essay or opinion piece) and critically deconstruct it to arrive at defensible conclusions about its merit and better alternative conclusions. A different context for critical thinking is presented when the subjects are responsible for asking and answering a question of their own, thus applying the critical perspective to their own evidence, reasoning, and conclusions. In psychology, the design and conduct of research studies can be a part of the teaching/learning process from very early on.

In disciplines farther removed from the social sciences, such as the fine arts, I have encountered even more dramatically different ways of thinking about what constitutes critical thinking, and what problem contexts are addressed. Think about how an actor takes on a new role, or an artist “sees” an object in a new way. Perhaps the generic measures (e.g., CLA, CAAP) are a bit too far removed from many of our disciplinary contexts to reflect much transfer from the educational experiences those contexts provide.

Who gets to decide what critical thinking is, and does that matter?

As Table 1 displays, at the University of Rhode Island many players have a bit part in assessing critical thinking. The institutional research office, the assessment office, the Faculty Senate and its committees, and many departments are all engaged in defining and measuring some version, or versions, of this construct. As a faculty member with long service on committees dealing with assessment and general education, as well as service as a department chair and graduate program director, I have many reasons to be interested in how well we are doing at enhancing critical thinking, and what we could do to improve. Yet I have very little to show for it, despite all of the interesting approaches to measurement presented in Table 1. My own view is that we need ongoing conversation across the disciplines about what we mean by critical thinking, and these conversations should be informed by the actual assignments we use to develop and measure it. Indeed, our taxonomy of cognitive objectives is intended to lead to exactly that kind of conversation. It may be less elegant than an externally normed measure, but it represents what faculty actually do in their classes to promote “higher-order” information processing (Renaud & Murray, 2007), and how they think about student learning. With a group of faculty fellows trying out new interdisciplinary first-year seminars, we are taking another look at how students do on those cognitive learning objectives, and we hope to have a receptive audience to which we can report the results.

Our local conversation points to a larger issue: which of the several considerations in making measurement choices matters the most? For external accountability, and possible pride among peer institutions, the generic, nationally normed test has clear advantages. These advantages may be most salient for administrators. For an academic department, a context-specific test may provide the most efficient and reliable way to track improvement over time in student skills. Validity and pedagogical impact are still open questions for this approach, and portfolios or samples of assignments reviewed by a faculty committee with a departmental rubric may have some advantages for those values. However, for the general education context, a special challenge is that there is no faculty constituency readily at hand. We are betting that the development of a locally owned rubric, applied to assignments from a variety of general education courses from across many disciplines, can have an internal impact on courses and on the structure of the requirements themselves. The availability of a group of faculty teaching first-year seminars and meeting to discuss how that is going may provide the constituency group necessary for “completing the loop” on the inside. If that fails, administrators may have an easier time calling for generic testing. As an evaluator, I hope for a logic model linking process (i.e., curricular elements and pedagogical methods) to short-term outcomes (i.e., rubric-assessed assignments), medium-term outcomes (i.e., department-wide tests), and long-term outcomes (i.e., generic tests like the CAAP and CLA used to measure value-added across the institution). See Figure 1 for a schematic representation of this logic. That may just be a step too far, and too costly in money and person-hours. Still, it’s a pretty picture!


References

Adams, M.H., Whitlow, J.F., Stover, L.M., & Johnson, K.W. (1996). Critical thinking as an educational outcome: An evaluation of current tools of measurement. Nurse Educator, 21(3), 23-32.

Allen, M.J. (2006). Assessing general education programs. San Francisco: Anker/Jossey-Bass.

Association of American Colleges (1994). Strong foundations: Twelve principles for effective general education programs. Washington, DC: Author.

Association of American Colleges and Universities (2004). Taking responsibility for the quality of the baccalaureate degree: A report from the Greater Expectations Project on Accreditation and Assessment. Washington, DC: Author.

Association of American Colleges and Universities (2005). Liberal education outcomes: A preliminary report on student achievement in college. Washington, DC: Author.

Association of American Colleges and Universities (2007). College learning for the new global century: Executive summary. Washington, DC: Author.

Association of American Colleges and Universities (2009). The VALUE Project overview. Peer Review, Winter 2009, 4-7.

Driscoll, A., & Wood, S. (2007). Developing outcomes-based assessment for learner-centered education: A faculty introduction. Sterling, VA: Stylus.

Ewell, P. (2004). General education and the assessment reform agenda. Washington, DC: Association of American Colleges and Universities.

Ferguson, M. (2005). Advancing liberal education: Assessment practices on campus. Washington, DC: Association of American Colleges and Universities.

Gaff, J. (2001). The academy in transition: General education in an age of student mobility. Washington, DC: Association of American Colleges and Universities.

Gaston, P.L., & Gaff, J.G. (2009). Revising general education – and avoiding the potholes. Washington, DC: Association of American Colleges and Universities.

Halpern, D.F. (1993). Assessing the effectiveness of critical thinking instruction. The Journal of General Education, 42(4), 238-254.

Hicks, S.J., & Hubbard, A.M. (2009, February). Using course-based assessment to transform general education. Paper presented at the AAC&U Conference on General Education, Assessment, and the Learning Students Need, Baltimore, MD.

Humphreys, D. (2006). Making the case for liberal education. Washington, DC: Association of American Colleges and Universities.

Kanter, S.L., Gamson, Z.F., & London, H.B. (1997). Revitalizing general education in a time of scarcity. Boston: Allyn & Bacon.

Kuh, G.D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: AAC&U.

Kuh, G.D., Kinzie, J., Schuh, J.H., Whitt, E.J., & Associates (2005). Student success in college: Creating conditions that matter. San Francisco: Jossey-Bass.

Lawson, T.J. (1999). Assessing psychological critical thinking as a learning outcome for psychology majors. Teaching of Psychology, 26(3), 211-213.

Leskes, A., & Miller, R. (2005). General education: A self-study guide for review and assessment. Washington, DC: Association of American Colleges and Universities.

Leskes, A., & Wright, B.D. (2005). The art and science of assessing general education outcomes: A practical guide. Washington, DC: Association of American Colleges and Universities.

McMillan, J.H. (1987). Enhancing college students’ critical thinking: A review of studies. Research in Higher Education, 26(1), 3-29.

National Survey of Student Engagement (2008). Promoting engagement for all students: The imperative to look within, 2008 results. Bloomington, IN: Indiana University Center for Postsecondary Research.

Pace, D., & Middendorf, J. (Eds.) (2004). Decoding the disciplines: Helping students learn disciplinary ways of thinking. New Directions for Teaching and Learning, 98 (Summer).

Paul, R., & Elder, L. (2008). The miniature guide to critical thinking concepts and tools. Dillon Beach, CA: Foundation for Critical Thinking Press.

Renaud, R.D., & Murray, H.G. (2007). The validity of higher-order questions as a process indicator of educational quality. Research in Higher Education, 48(3), 319-351.

Renaud, R.D., & Murray, H.G. (2008). A comparison of a subject-specific and a general measure of critical thinking. Thinking Skills and Creativity, 3(2), 85-93.

Schlesinger, M.A. (1984). The road to teaching thinking: A rest stop. The Journal of General Education, 36(3), 182-273.

Scriven, M., & Paul, R. (1987). Critical thinking as defined by the National Council for Excellence in Critical Thinking. Cited at www.criticalthinking.org/aboutCT.

Shavelson, R.J. (2007). A brief history of student learning assessment: How we got where we are, and a proposal for where to go next. Washington, DC: Association of American Colleges and Universities.

Stevenson, J.F., & Scarnati, B.S. (2010, February). Comparing approaches to the critical thinking dilemma. Workshop presented at the AAC&U Conference on General Education and Assessment, Seattle, WA.

Stevenson, J.F., Grossman-Garber, D., & Peters, C.B. (2007, March). Assessing the core: Learning outcome objectives for general education at the University of Rhode Island. Roundtable presented at the AAC&U Conference on General Education and Assessment, Miami, FL.

Terenzini, P.T., Springer, L., Pascarella, E.T., & Nora, A. (1995). Influences affecting the development of students’ critical thinking skills. Research in Higher Education, 36(1), 23-39.

Wagner, T.A., & Harvey, R.J. (2006). Psychological Assessment, 18(1), 100-105.

Williams, R.L., Oliver, R., & Stockdale, S. (2004). Psychological versus generic critical thinking as predictors and outcome measures in a large undergraduate human development course. The Journal of General Education, 53(1), 37-58.

Table 1. Institution-Wide Assessment Methods for Critical Thinking at the University of Rhode Island

| Assessment type | Measure | Mechanism | Advantages of the method | Disadvantages of the method |
| --- | --- | --- | --- | --- |
| Indirect; nationally normed and standardized | NSSE, FSSE (items on “thinking critically” and “solving complex real-world problems”) | Selected by university administration; administered by institutional research office | External comparisons/accountability; reliable psychometrics; self-reported views have been shown to be predictive | Not directly connected to local intentions; rarely discussed with faculty; may lack pedagogical relevance, credibility |
| Indirect; locally developed | Student course evaluation (items on chance to practice cognitive tasks) | Faculty committee for assessment of general education | Targets locally identified concerns, Senate-approved outcomes | May prove poorly designed |
| Direct; nationally normed and standardized | CAAP, institution-wide for samples of freshmen and seniors (Wabash Study) | Selected by university administration with faculty input; administered by assessment office | External comparisons/accountability; reliable psychometrics; elegantly conceptualized outcomes | Not directly connected to local intentions; may lack pedagogical relevance, credibility |
| Direct; locally developed | Taxonomy of cognitive skills with expectation of differences by discipline | Developed and administered by faculty committee | Local control of definitions; direct link to pedagogy | May prove poorly designed; easy to contest on psychometric grounds |


Figure 1. All Should Have Prizes! Schematic of the logic model: Process (curricular elements, pedagogical methods; e.g., NSSE, local survey) → short-term outcomes (rubric-assessed assignments) → medium-term outcomes (department-wide tests) → long-term outcomes (generic national tests, e.g., CLA, CAAP).
