
Chapter 5

Academic Assessment of Critical Thinking in Distance Education Information Technology Programs

Mina Richards
Trident University, USA

Indira R. Guzman
Trident University, USA

ABSTRACT

The purpose of this book chapter is to elucidate the process of assessment in higher education with a focus on distance learning and information technology programs. Its mission is to bring awareness of academic assessment concepts and best practices in the use of standard assessment tools and direct measures to evaluate student learning. The chapter provides definitions of academic assessment and presents the application of signature assignments and rubrics in the Computer Science and Information Technology Management programs to demonstrate student learning results.

INTRODUCTION

While the growth of distance education programs has been significant in the past two decades, the assessment of student learning in online schools is still emerging. The topic of assessment is important because, in the last few years, academic assessment has become an integral part of student learning at traditional and online institutions (WASC, 2013; Miller, 2012). From the perspective of accrediting bodies, an assessment process must be implemented to demonstrate continuous improvement in the quality of teaching programs. In the context of globalization, American universities teaching abroad also transfer the assessment process and best practices to their foreign branches (Noori & Anderson, 2013). The role of faculty is critical through taking ownership, developing strategies, and ensuring that educational goals are met and constantly refined. When assessment methods and techniques are incorporated into the classroom, student learning is evaluated and measured against a set of learning outcomes.



Formative and summative assessments are used to collect data and compare results against some form of metrics or benchmarks. Formative assessments capture students' knowledge at some point in time during a class, whereas summative assessments are conducted at the end of the course or program of study. These evaluations also detect possible gaps and opportunities to drive improvement of courses and learning activities. Assessment is a continuous improvement process that should be aligned with the institution's mission to provide an exceptional educational experience for students (Walvoord & Banta, 2010, p. 27). This outcome-based approach benefits faculty and students equally by institutionalizing a culture of instruction and learning.

An important element in assessment is Learning Outcomes (LOs). These are defined as educational objectives concisely articulated to convey the fundamental learning and skills that students should achieve during a course or program of study (WASC, 2013, p. 12). LOs must be task specific and measurable. They are widely recognized as the prerequisite for a "learner-centered" approach. To measure LOs, rubrics are among the most common direct methods used to collect assessment data and evaluate student learning. There are a variety of rubric designs, all of which are meant to inform mastery of skills or areas needing improvement. The Association of American Colleges and Universities (AAC&U) has been recognized as the leading developer of rubrics. The association has designed a series of Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics for the purpose of measuring student achievement in 15 intellectual and practical skill areas of study (AAC&U, 2014).

To emphasize the application of rubrics to learning outcomes, this chapter discusses a university assessment process in terms of signature assignments selected for the Computer Science (CS) and Information Technology Management (ITM) programs. The university is an online institution granting undergraduate, master's, and Ph.D. degrees. During the assessment planning, a curriculum map was prepared to determine alignment between courses and specific Institutional Learning Outcomes (ILOs). Within the ILOs to be measured, the critical thinking ILO was chosen because it was the academic theme for the year. Signature assignments were then carefully selected by course to ensure that critical thinking could be measured at the introduced, reinforced, or emphasized levels. The assessment results are discussed as evidence that critical thinking abilities can be assessed through rubrics across the first and last course of each program. The findings also indicate that relevant gains were obtained by students attending the emphasized-level courses when compared to the students sampled in the introduced-level course. The study methodology is explained in this chapter.

This chapter develops standard practices in the assessment process and presents conclusions about the advantages and challenges of assessment tools and practices from the perspective of faculty and university administration.

The authors of the chapter have experience serving on the University's Assessment Committee and as Program Chairs of these academic programs.

ASSESSMENT OF STUDENT LEARNING

Assessment Lifecycle

The adoption of Internet technologies has provided the opportunity to extend higher education to a larger portion of adult learners through e-learning modalities (WASC, 2013; Miller, 2012). Similar to traditional universities, online schools must also assess the quality of instruction, educational activities, and teaching strategies.


In recent years, the connection between academic assessment and teaching institutions has been more apparent than ever. While assessment is multifaceted, it requires a framework to guide student learning and curricula. Accrediting bodies and pressure from peer institutions compel universities to adopt an assessment framework. Assessment is also shifting the emphasis from a teaching-centered approach to a student-centered experience to prepare students for professional careers.

Assessment is also considered an ongoing process of refinement through the planning and operation of instructional methods. The literature on assessment conveys many different rationales serving different ends, but all lead to a common purpose of improving student learning. For example, a program chair uses assessment to evaluate degree programs and curricula to enhance program competencies and meet accreditation requirements. An administrator considers assessment a way to meet accreditation requirements. Therefore, assessment is an essential component of virtual instruction, and it is the basis for determining whether a learner meets educational goals. Assessment becomes central to the scope of academic roles and a direct link to responsibilities for student grades, curriculum planning, program evaluation, and program auditing. Among the described dimensions, an educator focuses on assessment as a means to improve a course or how a subject can be better taught (Bell & Farrier, 2008). Capacho (2014) concurs that a systematic approach is needed in assessment to evaluate online learning and to converge evaluation methods for e-learners with best practices in instruction. As with any process, assessment requires factual evidence to measure the internalization of student knowledge, and those results are then used to design instructional objectives.

The purpose of student learning can be defined in terms of learning outcomes. Proitz (2010) described learning outcomes from a historical perspective, beginning with the "objective movement" of the 1900s to articulate learning goals and its refinement through "mastery learning" theories along with Bloom's models in the 1950s. To anchor learning outcomes to assessment, the literature describes assessment as a lifecycle to help understand its process (New York University, 2014). Although there has been a tendency to see assessment as disjointed phases, in the context of completeness each phase feeds into the next, as shown in Figure 1.

Figure 1 Ongoing cycle of academic assessment

Adapted from New York University (2014).


The basic assessment process is organized into a four-phase cycle consisting of plan, inquire, analyze, and apply (UC Davis, 2015). To illustrate the process and accountabilities for each of the assessment phases, Table 1 provides a summary of the assessment lifecycle. The UC Davis assessment stages were adapted and expanded to include monitoring quality and to delineate processes and responsibilities.

As noted in Table 1, the assessment lifecycle for a single learning outcome can take about two years to evaluate, since assessment is usually conducted once a year. The first two phases can take one year to complete, while implementing improvements and monitoring quality can take one extra year or more. Phases 3 and 4 need careful attention and can be the most challenging in terms of analysis, adoption, and tracking of results. Having access to accurate data from a Learning Management System is crucial. Documentation and benchmarking are also needed to compare previous and current results. An institution needs to establish an ongoing and consistent culture of assessment, since accreditors need 3-5 years of assessment data to demonstrate continuous improvement. Consequently, assessing student learning provides viable evidence to support the quality of learning throughout a degree program and also a measure of degree quality and university rigor. The board of directors of the Association of American Colleges and Universities (AAC&U, 2014) affirmed this notion by stating that "concerns about college affordability and return on investment are legitimate, but students and families also need good information as they make decisions and evaluate schools."

Table 1 The assessment lifecycle

Planning Assessment
Activities: Align learning outcomes with curriculum at the program and course level; develop a timeline of assessment deliverables.
Responsibilities: Identify the academic strategic plan and role of ILOs (Accreditation Liaison Officer); train faculty and provide guidance in planning for the assessment cycle (Assessment Office).

Collecting Program Data
Activities: Gather program data from direct and indirect assessments through dashboards and instruments; review curriculum maps to match data to courses.
Responsibilities: Collect and store student learning data (Assessment and Institutional Research Offices); gather evidence by program (Institutional Research, Faculty, Program Chair).

Analyzing Program Data
Activities: Examine and evaluate data against learning outcomes and previous improvements, and plan for future improvements; identify the analytical approach and interpret results from data dashboards.
Responsibilities: Review analysis and plans for improvement (Faculty and Program Chair); review completeness of reports and report variances (Assessment Office).

Implementing Improvements
Activities: Evaluate student learning outcomes and program data to determine gaps in student learning; plan programmatic and learning improvements with faculty and course developers; establish a timeline to implement improvements and the LOs to be assessed in the next cycle.
Responsibilities: Coordinate improvement plans and implementation (Faculty, Program Chair, Assessment Office, and in some cases the Curriculum Office if changes to the curriculum have been identified).

Enforcing Quality
Activities: Review improvements and plan to gather relevant data for the next assessment cycle.
Responsibilities: Collect and store student learning data to determine improvement and measure against the baseline (Assessment Office, Institutional Research, Program Chair, and Faculty).

Adapted from UC Davis (2015).
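Because the lifecycle in Table 1 is essentially a small data model (phases, activities, and responsible roles), a short Python sketch can make the structure and the roughly two-year duration of one cycle explicit. The phase names follow Table 1, but the Phase class and the duration figures are hypothetical illustrations, not material from the chapter.

from dataclasses import dataclass

@dataclass
class Phase:
    """One stage of the assessment lifecycle summarized in Table 1."""
    name: str
    activities: list[str]
    responsible: list[str]      # offices or roles accountable for the phase
    duration_months: int        # hypothetical planning estimate

lifecycle = [
    Phase("Planning Assessment",
          ["Align learning outcomes with curriculum", "Develop timeline of deliverables"],
          ["Accreditation Liaison Officer", "Assessment Office"], 6),
    Phase("Collecting Program Data",
          ["Gather direct and indirect assessment data", "Review curriculum maps"],
          ["Assessment Office", "Institutional Research", "Program Chair"], 6),
    Phase("Analyzing Program Data",
          ["Evaluate data against learning outcomes", "Interpret dashboard results"],
          ["Faculty", "Program Chair", "Assessment Office"], 4),
    Phase("Implementing Improvements",
          ["Plan programmatic and learning improvements", "Set timeline for next cycle"],
          ["Faculty", "Program Chair", "Assessment Office"], 8),
    Phase("Enforcing Quality",
          ["Collect follow-up data and measure against baseline"],
          ["Assessment Office", "Institutional Research", "Faculty"], 6),
]

# A single learning outcome takes roughly two years to move through the cycle.
total_months = sum(phase.duration_months for phase in lifecycle)
print(f"Estimated cycle length: {total_months} months ({total_months / 12:.1f} years)")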


STUDENT LEARNING OUTCOMES AND CURRICULUM MAPPING

Student Learning Outcomes

The first step in assessment is to define a list of Program Learning Outcomes (PLOs) to support student-centered teaching. Through learning outcomes, the assessment process evaluates not only the quality of programs and student learning but also teaching effectiveness. Learning outcomes are similar to goals, and for this reason the SMART criteria (specific, measurable, attainable, realistic, and timely) are used to guide their writing. Asking questions such as "What is a program meant to teach?" or "What competencies and skills should students demonstrate at the end of the program of study?" is helpful to begin formulating the initial list of learning outcomes. Davis and Wong (2007) conducted studies into the factors that contribute to a student-centered experience, indicating that student learning is an interaction between learning activities and approaches to learning. Building outcomes that meet those criteria is an iterative process among faculty until a consensus is reached and final outcomes are endorsed. When developing a list of learning outcomes, the statements must be clear and specific, they should be aligned with the curriculum, and they should focus on learning products, not processes (University of Richmond, 2008). The following examples illustrate how program learning outcomes should be articulated by incorporating active verbs.

Example 1

Poor: The students will understand computing hardware configurations and application software.

Better: Analyze computing hardware configurations and application software to identify information technology solutions that meet business needs.

Example 2

Poor: Faculty will teach communication skills.

Better: Communicate effectively with a range of audiences to propose information technology management solutions.

Bloom's Taxonomy is also a commonly used framework to define educational levels and build learning outcomes. The levels of performance are identified and classified into multiple constructs (Allen, 2006). Bloom's Taxonomy assists in categorizing how students learn across domains at the cognitive, affective, and behavioral levels. The cognitive domain establishes what students should know; the affective domain represents their way of thinking; and the behavioral domain describes how students should perform (University of Richmond, 2008). Bloom's cognitive criteria are shown in Table 2.

Table 2 Bloom's taxonomy

Knowledge: To know specific facts, terms, concepts, principles, or theories.

Comprehension: To understand, interpret, compare and contrast, explain.

Application: To apply knowledge to new situations, to solve problems.

Analysis: To identify the organizational structure of something; to identify parts, relationships, and organizing principles.

Synthesis: To create something, to integrate ideas into a solution, to propose an action plan, to formulate a new classification scheme.

Evaluation: To judge the quality of something based on its adequacy, value, logic, or use.

Bloom's Taxonomy, CSU Bakersfield PACT Outcomes Assessment Handbook (as cited in University of Richmond, 2008).
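As a rough illustration of the active-verb guidance above, the short sketch below checks the leading verb of a draft outcome against a small verb-to-Bloom-level mapping and flags vague verbs. The verb lists and the review_outcome helper are hypothetical and deliberately tiny; real Bloom verb lists are much longer.

# Hypothetical helper: map a draft outcome's leading verb to a Bloom level
# and flag vague verbs that are difficult to measure directly.
BLOOM_VERBS = {
    "define": "Knowledge", "list": "Knowledge",
    "explain": "Comprehension", "compare": "Comprehension",
    "apply": "Application", "solve": "Application",
    "analyze": "Analysis", "differentiate": "Analysis",
    "design": "Synthesis", "propose": "Synthesis",
    "evaluate": "Evaluation", "judge": "Evaluation",
}
VAGUE_VERBS = {"understand", "know", "learn", "appreciate"}

def review_outcome(statement: str) -> str:
    """Return a short note on whether the outcome starts with a measurable verb."""
    first_word = statement.split()[0].lower().strip(",.")
    if first_word in VAGUE_VERBS:
        return f"'{first_word}' is hard to measure; start with an active Bloom verb."
    if first_word in BLOOM_VERBS:
        return f"'{first_word}' maps to Bloom level: {BLOOM_VERBS[first_word]}."
    return f"'{first_word}' is not in the sample verb list; review manually."

# The 'Better' outcome from Example 1 and a vague counterpart:
print(review_outcome("Analyze computing hardware configurations and application software"))
print(review_outcome("Understand computing hardware configurations"))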

Once learning outcomes are identified, assessment tools such as rubrics are particularly helpful to collect program data and benchmark results against a baseline to measure the strength of an academic program (Capacho, 2014). The outcome results must show which outcomes rate above or below the baseline, thereby providing insight into the learning gaps that must be addressed. A learning outcome can also be measured in a given course to identify course strength.


For institutions with a mature assessment process, practicing assessment creates a forum of inquiry to seek improvement among faculty, students, and other stakeholders (Bell & Farrier, 2008). These forums provide a venue to enhance curriculum development and improve the evaluation cycle of student learning. By engaging all parties in assessment units, the institution demonstrates collaboration and fosters a culture of student learning and academic improvement across colleges.

Curriculum Mapping

Evidence of student learning must be shown at various stages during the program of study to demonstrate competence and knowledge growth. For this purpose, a curriculum map is developed by analyzing the learning outcomes and curriculum at the introduced (I), reinforced (R), and emphasized (E) levels. A curriculum map is a tool, in the form of a matrix, used to graphically represent the extent to which program learning outcomes are addressed across the curriculum. With such a matrix, faculty can determine which courses are aligned as introduced, reinforced, or emphasized and ensure that programs offer several learning opportunities related to each outcome. Curriculum maps are helpful tools to show the relationship between ILOs and PLOs or the alignment between courses and PLOs. Faculty can also evaluate how each course fits into the program and which courses provide the most support for mastering a concept, as in the case of capstone courses. Lam and Tsui (2013) offered the perspective that curriculum maps are widely used to evaluate the extent of curriculum courses and their connection to learning outcomes. An example of a conventional curriculum map format showing the alignment of courses with PLOs is illustrated in Table 3.

The process of developing a curriculum map includes the following steps:

1. Once the program learning outcomes have been developed, create a curriculum map to illustrate how these outcomes are addressed across the curriculum.

2. Enter the courses within the program on the vertical axis and the learning outcomes on the horizontal axis.

3. Specify the level to which each course contributes to a learning outcome:

I = Introduced: Students are introduced to key concepts or skills related to the outcome.



R = Reinforced: Students build on introductory knowledge related to the outcome. They are expected to demonstrate their knowledge or ability at increasingly proficient levels.

E = Emphasized: Students are expected to demonstrate their ability to perform the outcome with a reasonably high level of independence and sophistication.

Curriculum maps can be created automatically through assessment software or prepared manually, as shown in Table 3. Regardless of method, one of the most essential tasks is for faculty to prepare the map each year and analyze the skills, content, and assessment opportunities within a degree program. Krajcik (2011) noted that standard curriculum maps align curriculum development, instruction, and assessments. Accrediting institutions also audit curriculum maps to ensure that specific learning outcomes are covered in the coursework. Additionally, the following guidelines are useful as a starting point for building curriculum maps and identifying student learning outcomes:

• Coverage of Learning Outcomes: Each learning outcome should be reinforced and emphasized across multiple courses.

• Avoidance of Redundancy: A curriculum map makes redundancies visible, e.g., when all courses address a specific learning outcome.

• Alignment of Courses: Each course should support at least one (and ideally more) learning outcomes.

Table 3 Hypothetical example of curriculum map showing alignment between PLOs and courses (rows list the program's courses; columns are Program Learning Outcomes 1-7; each cell marks the course as I, R, or E for that outcome)


It is difficult to meaningfully address all learning outcomes in a single course, unless it is an introductory or capstone course. If a course does not appear to address any learning outcomes, the course should be reviewed further to determine whether it should be required, eliminated, or whether an important learning outcome has been missed.
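The three guidelines above lend themselves to simple automated checks once a curriculum map exists in machine-readable form. The sketch below is illustrative only: the course codes and the map contents are invented, not taken from the CS or ITM programs discussed in this chapter.

# Hypothetical curriculum map: course code -> {PLO number: "I", "R", or "E"}
curriculum_map = {
    "ITM501": {1: "I", 2: "I"},
    "ITM517": {1: "R", 3: "R"},
    "CSC520": {2: "R", 4: "I"},
    "ITM595": {1: "E", 2: "E", 3: "E"},   # capstone course
}
all_plos = range(1, 5)

# Coverage: each outcome should be reinforced (R) and emphasized (E) somewhere.
for plo in all_plos:
    levels = {levels_by_plo[plo] for levels_by_plo in curriculum_map.values() if plo in levels_by_plo}
    missing = {"R", "E"} - levels
    if missing:
        print(f"PLO {plo} is never covered at level(s): {', '.join(sorted(missing))}")

# Redundancy: an outcome addressed by every single course may be over-assigned.
for plo in all_plos:
    if all(plo in levels_by_plo for levels_by_plo in curriculum_map.values()):
        print(f"PLO {plo} is addressed by every course; review for redundancy")

# Alignment: every course should support at least one outcome.
for course, levels_by_plo in curriculum_map.items():
    if not levels_by_plo:
        print(f"{course} does not address any learning outcome; review the course")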

CRITICAL THINKING SKILLS AS MEASURE OF STUDENT LEARNING

Critical Thinking Definition

Critical thinking, as defined in the literature, is a multidimensional process that is better understood if its parts are examined for assessment purposes. The many definitions are trans-disciplinary in nature. For example, the Critical Thinking Community (2014) webpage includes an excerpt from a seminal study conducted in education in 1941 by Edward Glaser. In arguing for the need to develop students' critical thinking skills, Glaser defined critical thinking as follows:

The ability to think critically, as conceived in this volume, involves three things: (1) an attitude of being disposed to consider in a thoughtful way the problems and subjects that come within the range of one’s experiences, (2) knowledge of the methods of logical inquiry and reasoning, and (3) some skill in applying those methods. Critical thinking calls for a persistent effort to examine any belief or supposed form of knowledge in the light of the evidence that supports it and the further conclusions to which it tends. It also generally requires ability to recognize problems, to find workable means for meeting those problems, to gather and marshal pertinent information, to recognize unstated assumptions and values, to comprehend and use language with accuracy, clarity, and discrimination, to interpret data, to appraise evidence and evaluate arguments, to recognize the existence (or non-existence) of logical relationships between propositions, to draw warranted conclusions and generalizations, to put to test the conclusions and generalizations at which one arrives, to reconstruct one’s patterns of beliefs on the basis of wider experience, and to render accurate judgments about specific things and qualities in everyday life (p. 14).

In the last ten years, higher education has engaged in educating students through a series of learning outcomes ranging from communication to knowledge acquisition. One of the most popular outcomes in practice is training students to think critically, that is, to evaluate the pros and cons of something and come up with an informed opinion or a recommendation to solve a problem. Therefore, critical thinking is the ability to be curious and look at a situation or argument from different angles while providing the best workable solution. Such systematic thinking is only achieved through training, application, and practice. Prior beliefs about critical thinking held that only individuals with logical and mathematical intelligences made better critical thinkers; however, newer pedagogical views have changed that belief, and it is possible to turn any student into a well-cultivated critical thinker (Paul & Elder, 2008). These authors described the elements of thought in eight dimensions, as noted in Table 4.

Critical thinking (CT) has become an organizing core of instruction for the development of students' reasoning skills. Although each teaching discipline has its own perspective on critical thinking, online schools use classroom discussions to enhance it.


For example, the use of Socratic questioning helps elevate discussions and uncover issues and assumptions in a logical manner. Faculty can also contribute by formulating questions that touch on analysis and evaluation. Through this exchange, discussions help students conduct analytical reasoning while evaluating an argument from multiple viewpoints. A well-designed discussion forum always considers Bloom's Taxonomy. Similarly, critical thinking must be prominent in quizzes, summaries, discussions, and assignments. Thus, the enhancement of this skill must be evaluated and measured through assessment techniques to determine the extent of student practice at the introduced, reinforced, and emphasized levels. The next sections illustrate the critical thinking learning outcome through signature assignments and assessment measures.

Assessing Student Learning through Formative Assessment

Assessments provide evidence of the knowledge, skills, and values learned and of what remains to be learned. The most accurate way to assess student learning is to combine multiple types of assessments. Accrediting bodies require at least one direct measure for each learning outcome, and course grades are not accepted as evidence of student learning. Baroudi (2007) and Johnson and Jenkins (2013) define two types of assessments, formative and summative, which provide two basic approaches to measuring learning:

• Formative Assessment: Activities used by the instructor to determine students’ knowledge while providing them with developmental feedback, which can also be used to improve instruction.

• Summative Assessment: Activities to evaluate and grade student learning at some point in time during the course or program of study.

Two types of measures are used to collect assessment evidence:

• Direct Measures: Samples of actual student work, including reports, exams, demonstrations, performances, and completed works. The strength of direct measures is that faculty members can capture a sample of what students can do, which can be very strong evidence of student learning.

• Indirect Measures: Reports of perceived student learning. These reports are available from many sources, including students, faculty, and employers. Indirect measures are not as strong as direct measures because one needs to make assumptions about the context of the report.

Direct and indirect measures are both used to improve programs and courses in a continuous improvement cycle. Table 5 provides a summary of assessment measures and types.

Table 4 Elements of thought

Question at Issue: Problem, issue

Purpose: Goal and objective

Information: Data, facts, observations, experiences

Concepts: Theories, definitions, axioms, laws, principles, models

Assumptions: Presuppositions taken for granted

Implications: Consequences

Point of View: Frame of reference, perspective, orientation

Interpretation and Inference: Conclusions, solutions

Elements of Thought (Paul & Elder, 2008).


Using Rubrics for Formative Assessment

Part of the assessment strategy to examine learning outcomes is to use a set of rubrics. The term rubric has been mostly associated with grading rubrics that monitor students' knowledge and assist instructors with ongoing feedback; however, rubrics have multiple uses. Andrade (2000) defined rubrics as instructional tools that give students informative feedback on course assignments such as essays or class projects. Conversely, analytical rubrics are considered assessment tools to measure student learning of a given learning outcome at the institutional, program, or class level (AAC&U, 2014). Rubrics have been widely adopted in education and are very popular among students and instructors. Wolf and Stevens (2007) stated that among the most significant properties of rubrics is that they clearly identify performance assessment as observable and measurable.

Rubrics consist of four components. The levels signify the progression of performance quality, ranging from excellent to emerging, and weights can be assigned. Levels are associated with score points to measure performance according to the weights. The criteria are short statements describing the elements being measured in each assignment. Each criterion includes descriptors according to gradation and performance rank. Table 6 is an example of a rubric component designed exclusively for this chapter and related to critical thinking, which could be adapted to online classrooms and other learning outcomes.

Table 6 Example of critical thinking component in rubrics

Criterion, Critical Thinking: The writer's conclusions are logical and reflect informed evaluations.

Highest level: Demonstrates mastery conceptualizing the problem; viewpoints and assumptions of experts are analyzed, synthesized, and evaluated thoroughly. Conclusions are logically presented with appropriate rationale.

Second level: Demonstrates considerable proficiency conceptualizing the problem; viewpoints and assumptions of experts are analyzed, synthesized, and evaluated proficiently. Conclusions are presented with the necessary rationale.

Third level: Demonstrates partial proficiency conceptualizing the problem; viewpoints and assumptions of experts are analyzed, synthesized, and partially evaluated. Conclusions are somewhat consistent with the analysis and findings.

Lowest level: Demonstrates limited or poor proficiency conceptualizing the problem; viewpoints and assumptions of experts are analyzed, synthesized, and poorly evaluated. Conclusions are either absent or poorly conceived and supported.
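Because rubric levels carry score points and criteria can carry weights, scoring a signature assignment reduces to a small weighted-sum computation. The sketch below is a hypothetical illustration: the criterion names, weights, and level points are invented for this example and are not taken from the chapter's rubric.

# Hypothetical analytic rubric: each criterion has a weight, and each level
# maps to score points (4 = highest ... 1 = emerging).
rubric = {
    "Critical Thinking":  {"weight": 0.4, "levels": {"mastery": 4, "proficient": 3, "partial": 2, "limited": 1}},
    "Evidence & Sources": {"weight": 0.3, "levels": {"mastery": 4, "proficient": 3, "partial": 2, "limited": 1}},
    "Communication":      {"weight": 0.3, "levels": {"mastery": 4, "proficient": 3, "partial": 2, "limited": 1}},
}

def score_submission(ratings: dict[str, str]) -> float:
    """Weighted rubric score for one signature assignment submission."""
    total = 0.0
    for criterion, spec in rubric.items():
        level = ratings[criterion]                  # e.g. "proficient"
        total += spec["weight"] * spec["levels"][level]
    return round(total, 2)

# Example: one student's ratings assigned per criterion by the instructor.
ratings = {"Critical Thinking": "proficient", "Evidence & Sources": "mastery", "Communication": "partial"}
print(score_submission(ratings))   # 0.4*3 + 0.3*4 + 0.3*2 = 3.0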

Table 5 Direct and indirect measures

Direct measures:
• Faculty grades and feedback on course assignments, quizzes, and papers that students submit in each course.
• Signature assignments in selected courses across the program: assignments that assess student achievement of learning outcomes in selected courses at the beginning, middle, and end of the program.
• Grading and student feedback rubrics used to assess the achievement of the PLOs.
• Capstone course: a course assignment that spans the course duration, with rubrics for grading and feedback.

Indirect measures:
• Student self-reflective essays and end-of-course reflective online discussions.
• Email communications with students; CAFA (Course and Faculty Assessment) Survey.
• Student Exit Survey, Persistence Study, Complaint Resolution Systems, Priorities Survey for Online Learners (PSOL), Student Satisfaction Tracker Survey, Survey of Non-persisting Students, and Alumni Survey.
• Capstone course: self-reflective essays and end-of-course reflective online discussions.
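Once rubric scores from signature assignments are collected as direct measures, the program-level comparison described earlier (introduced-level versus emphasized-level courses) is a simple aggregation. The sketch below illustrates that step only; every number is invented for the illustration and does not represent the chapter's assessment results.

from statistics import mean

# Hypothetical rubric scores (1-4 scale) from signature assignments in the
# first (introduced-level) and last (emphasized-level) course of a program.
scores = {
    "introduced": [2.0, 2.5, 3.0, 2.0, 2.5],
    "emphasized": [3.0, 3.5, 3.0, 4.0, 3.5],
}

for level, values in scores.items():
    print(f"{level}: mean critical thinking score = {mean(values):.2f} (n={len(values)})")

gain = mean(scores["emphasized"]) - mean(scores["introduced"])
print(f"Observed gain from introduced to emphasized courses: {gain:.2f} points")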
