
Predicting Online Student Outcomes From a Measure of Course Quality

Shanna Smith Jaggars and Di Xu
Community College Research Center, Teachers College, Columbia University
Working paper, New York, 2013

Address correspondence to: Shanna Smith Jaggars, Assistant Director, Community College Research Center, Teachers College, Columbia University.


Contents

  • 1. Introduction
  • 2. The Literature on Online Learning Quality
    • 2.1 Course Organization and Presentation
    • 2.2 Learning Objectives and Assessments
    • 2.3 Interpersonal Interaction
    • 2.4 Technology
    • 2.5 The Practical Utility of Existing Online Rubrics
  • 3. Methods
    • 3.1 Analysis Sample
    • 3.2 Assessing Each Area of Quality
    • 3.3 Additional Qualitative Data
  • 4. Quantitative Results
    • 4.1 Descriptive Statistics for Each Area
    • 4.2 Predicting Course Grades
    • 4.3 Sensitivity Analysis With Prior GPA
  • 5. Qualitative Data on Interpersonal Interaction
    • 5.1 Student–Instructor Interaction
    • 5.2 Student–Student Interaction
  • 6. Discussion and Conclusion
  • A.1 Organization and Presentation
  • A.2 Learning Objectives and Alignment
  • A.3 Interpersonal Interaction
  • A.4 Technology


While many online learning quality rubrics do exist, thus far there has been little empirical evidence establishing a clear link between specific aspects of course quality and concrete student-level outcomes.

Introduction

Online coursework has become increasingly popular in postsecondary education, particularly at community colleges, where distance education and online degree programs are expanding (Parsad & Lewis, 2008). Yet a majority of community college faculty remain skeptical about the quality of online learning (Allen & Seaman).

Evidence indicates that community college students tend to perform worse in online courses than in traditional face-to-face settings (Jaggars, 2013; Xu & Jaggars, 2011a; Xu & Jaggars, 2013). To boost course quality and student outcomes, some colleges are considering peer-review processes informed by established online course quality measures. However, as they try to define and adopt a specific online course quality rubric, they confront a broad and often bewildering array of quality indicators, and there is as yet no clear empirical link demonstrated between these indicators and concrete student-level outcomes, complicating the choice among the many options for measuring course quality (e.g., Benson, 2003).

Drawing on the online course quality literature, this study develops a simple holistic rubric comprising four quality subscales and applies its guidelines to assess 23 online courses taught at two community colleges in Spring 2011.

By combining rubric ratings with supplementary quantitative and qualitative data, this study addresses two key questions: first, how do course quality subscales relate to end-of-semester grades; and second, among subscales that significantly impact student outcomes, what practices and techniques distinguish higher-quality courses from lower-quality ones.

First, the article surveys the online quality literature, which defines quality across four domains: organization and presentation, learning objectives and assessments, interpersonal interaction, and use of technology. It then discusses the limitations of existing online course quality rubrics, introduces our own rubric, and explains how we linked each of the four quality dimensions to the rubric's criteria and evaluation measures to support a coherent assessment of online courses.

In this study, online coursework is defined as semester-length, asynchronous, fully online college courses—the most typical type of online course offered by community colleges, as described by Jaggars and Xu (2010) and Xu and Jaggars (2011b). We then present analyses linking ratings in the four quality areas to concrete student outcomes. Finally, we discuss the results and implications of this study.

The Literature on Online Learning Quality

Course Organization and Presentation

Across the quality rubrics, Quality Matters places the strongest emphasis on course organization and presentation, arguing that a well-structured course supports student learning. The Quality Matters standards, for example, require that students be introduced to the course’s structure and purpose, and that instructions clearly specify how to get started and where to locate course components (Quality Matters Program, 2011). In the practitioner literature, Grandzol and Grandzol (2006) likewise contend that a consistent and transparent course structure—comprising navigational documents and explicit guidance on where to go and what to do next—is vital to student success.

Several surveys have also emphasized the importance of a well-organized course structure with intuitive navigation. A study of two online criminal justice courses by Fabianic (2002) indicates that students regard ease of navigation as a key quality criterion. An institutional survey by Young (2006) found that students appreciate instructors who make a strong effort to create a thoughtful, well-organized, and carefully structured course. Together, these findings suggest that clear navigation and well-designed course structure are central to students' satisfaction with their learning experience.

“Ease of use,” defined as an intuitive, user-friendly interface and straightforward navigation, was identified as one of the three most important quality criteria by students, faculty, and staff. Similarly, Ralston-Berg (2010) found that students regard clear instructions on how to get started, how to locate various course components, and how to access resources online as key success factors in online courses. Beyond these survey findings, there has been relatively little empirical research in this area.

Learning Objectives and Assessments

Online course quality rubrics consistently emphasize clearly stated, well-aligned learning objectives, a direct link between course objectives and assessments, and explicit, transparent grading criteria. For example, the Institute for Higher Education Policy recommends providing students with supplemental course information that outlines course objectives, concepts, and ideas (Merisotis & Phipps).

2 Found under the first Quality Matters general standard, entitled “Course Overview and Introduction.”

Quality Matters provides a detailed set of standards for course design, stressing that learning objectives should be measurable, clearly communicated, and consistent across learning modules. The standards also require that assessments align with the stated learning objectives and match the course level, ensuring valid measurement of student learning. Additionally, students must receive clear instructions on how they will be evaluated, so that expectations and grading criteria are transparent. These elements (Quality Matters Program, 2011) are intended to ensure that objectives and assessments form a coherent whole.

Naidu (2013) argues that carefully designed learning goals are important in all educational settings, but especially critical in distance education, where students often study independently with limited opportunities to interact with peers and tutors. Moore (1994) discusses learning objectives within online programs, emphasizing responsiveness to the needs of individual learners. He notes that while some autonomous learners require little help from teachers, others need assistance in formulating and measuring their learning objectives.

Evidence from surveys and qualitative research indicates that clearly stated and sequenced learning objectives, paired with relevant assessments and a transparent grading policy, are important for online courses. In Ralston-Berg (2011), students rating 68 online course benchmark items placed four items concerning course objectives and their measurement among the top ten, underscoring the value of explicit objectives and assessment alignment. Respondents also described a high-quality online course as one with information presented in a logical progression, broken into spaced lessons, and content that is straightforward and aligned with what will be on the tests. Conversely, Hara and Kling (1999) show that unclear course objectives can hinder student performance: an instructor who did not specify objectives or expectations allowed flexibility but left many students frustrated and uncertain about what was expected.

3 Found under the general standards “Learning Objectives (Competencies)” and “Assessment and Measurement.”

4 The four specific items were: “How work will be evaluated,” “Grading policy,” “Assessments measure learning objectives,” and “Assessment time, appropriate to content.”

Interpersonal Interaction

Nearly every published online quality framework emphasizes the importance of interpersonal communication and collaboration. For example, the Middle States guidelines (2002) explicitly state that courses and programs should be designed to promote effective communication, teamwork, and collaborative learning among students and instructors.

Effective learner interaction hinges on appropriate exchanges between the instructor and students and among learners themselves. Across many frameworks, interpersonal interaction is identified as a key area with specific best practices designed to boost engagement. The Quality Matters guidelines specify four items under the general standard of learner interaction and engagement, plus two items focused on self-introductions by both the instructor and the students.

Online learning theory has long highlighted interpersonal interaction as a central driver of student learning, with two key impacts. First, collaborative work helps build a learning community that encourages critical thinking, problem solving, analysis, integration, and synthesis, while also providing cognitive supports to learners. Together, these effects promote a deeper understanding of the material (Kearsley, 1996; Friesen & Kuskis, 2013; Picciano, 2001; Salmon, 2002, 2004).

Interpersonal interaction in online learning can also strengthen students’ psychological connection to the course by boosting social presence—the degree to which participants in mediated communication are perceived as real and engaging individuals. This sense of authenticity enhances engagement and connectedness in the online environment, a concept developed in foundational work on social presence (Gunawardena & Zittle, 1997; Short, Williams, & Christie, 1976; Young, 2006) and refined by later studies (Shearer, 2013). At the same time, collaborative knowledge building, as described by Scardamalia and Bereiter (2006) and Sherry (1995), supports deeper learning when learners interact meaningfully, reinforcing both the social and cognitive dimensions of online course participation.

Survey research consistently shows that effective student–instructor and student–student interactions are central to successful online learning (Fredericksen et al., 2000; Smissen & Sims, 2002; Baker, 2003; Ralston-Berg, 2010, 2011). Perhaps more importantly, a substantial body of empirical studies has also focused on interpersonal interaction, including student–instructor interaction (Arbaugh, 2001; Picciano, 2002).

5 Under the Course Overview and Introduction standard, two items are highlighted: the instructor’s self-introduction should be appropriate and available online, and students are asked to introduce themselves to the class.

In online learning, both instructor–student interaction and student–student interaction contribute to improved learning outcomes. Research from Young (2006) on instructor–student engagement and from studies on peer interaction—including Bangert (2006), Matthew, Felvegi, and Callaway (2009), and Balaji and Chakrabarti (2010)—emphasizes the value of ongoing communication within the course. Bernard et al. (2009) conducted a meta-analysis of 74 studies on interaction in online learning and concluded that increased interpersonal interaction, whether with the instructor or with student peers, positively affects student learning.

Recently, theorists and researchers have shifted from measuring the extent of interaction to assessing its quality, arguing that mere communication and collaboration do not automatically boost student learning. Instead, these activities must have a clear instructional purpose and actively facilitate content delivery, as emphasized by Baran and Correia (2009) and Naidu (2012). Supporting this view, Ho and Swan (2007) found that the quality of participation in online discussions—characterized by new contributions, alignment with the student’s opinions, and the use of sufficient evidence where needed—predicted students’ course grades. Another study (Balaji & Chakrabarti, 2010) asked students to rate their course’s online discussion forum in terms of the perceived quality, interactivity, and participative nature of the discussion. Perceived quality of discussion was positively related to students’ participation and interaction, as well as to self-perceived learning.

Technology

Online learning quality rubrics consistently emphasize the availability and usability of technology. For example, the Quality Matters rubric requires that course technologies be current and that students have ready access to the necessary tools (Quality Matters Program, 2011). Survey research supports this emphasis: in the Ralston‑Berg (2011) study, two of the top ten student preferences related to easily accessible and downloadable technology. Grandzol and Grandzol’s (2006) review of best practices likewise shows that students prefer to interact with technology.

6 Each study compared a treatment versus control condition, where the treatment involved a stronger element of interpersonal interactivity and the control included a lesser or non-existent level of interactivity.

Key requirements are that technology is available and easily downloadable, and that course components are web-based or readily downloadable. Course content should be delivered using current technologies—such as PowerPoint presentations with voiceover narration—rather than relying solely on textual explanations, so that materials remain accessible and engaging whether students are online or offline.

An emerging literature emphasizes that technology's value lies in how it is used to support learning, not merely in its existence. As Fulton noted, "dazzling technology has no value unless it supports content that meets the needs of learners" (2001, p. 22). A recent review of the effectiveness of widely used online-course tools—such as discussion boards, online quizzes, and embedded video—found that the mere presence of these tools did not automatically affect student learning, underscoring that outcomes depend on how tools are integrated into instructional activities rather than on their mere availability.

Educational technology is most effective when it gives students greater control over how they interact with media and when it fosters reflective learning, as shown by Balaji and Chakrabarti (2010) and Roschelle et al. (2010). These findings imply that simply introducing new technology into a course does not automatically boost student success; the tools must be deliberately integrated and aligned with clear learning objectives to meaningfully support student learning.

The Practical Utility of Existing Online Rubrics

Existing quality rubrics align with theoretical, survey, and empirical research, and all four strands consistently indicate that a high-quality online course is well organized, has clearly specified learning objectives, provides an appropriate level of interpersonal interaction, and uses current technologies. However, the rubrics also exhibit two key limitations that merit attention.

First, specific rubric items have been validated only in terms of student and faculty ratings, perceptions, and opinions (e.g., Ralston-Berg, 2011), rather than in terms of student outcomes. Thus it is unclear whether a course deemed “high-quality” by a specific rubric would have stronger student success rates than a “low-quality” course.

Second and perhaps more importantly, the rubrics’ grading criteria tend to focus on the presence or absence of surface-level characteristics. For example, while the literatures on interpersonal interaction and technology usage increasingly emphasize the quality of these activities and tools, most existing rubrics merely indicate whether or not they are available. This tactic is understandable: it is much easier to quickly and reliably assess “course quality” if the grader has only to mark the presence or absence of various characteristics. Yet this method may not provide much insight into whether the course provides a high-quality learning experience. For example, an instructor may succeed in creating a highly engaging and interactive learning environment without necessarily using every strategy on a quality checklist. Such an instructor would receive a lower score than a second teacher who mechanically adhered to each item on the checklist, even if the second instructor’s course seemed sterile, boring, and impersonal. A deeper and more nuanced examination of quality, however, may seem infeasible to many colleges, which are concerned about the staffing and resource requirements of peer evaluation.

This analysis of existing online quality rubrics points to the value of a rubric that assesses not only the presence of key quality elements but also how effectively those elements are leveraged to support student learning, while remaining quick and efficient to apply. It would be especially helpful to validate each rubric area against student outcomes.

Methods

Analysis Sample

In spring 2011, instructors teaching online courses within the selected subject areas were invited to participate in the study, resulting in 19 faculty participants who taught 23 courses across 35 online course sections. After the term concluded, the state system provided anonymized data for the 678 students who completed at least one of the 35 sections, including transcript information and demographic characteristics. End‑of‑semester grades were standardized to a 0–4 scale, with 0 representing an F and 4 representing an A.

The sample was predominantly female (76%), mostly White (56%) or Black (34%), and mostly under age 25 (54%). Most students were on a transfer-oriented track (62%) or a career-technical track (32%), with the remainder undeclared or of unknown status. Overall, 87% were continuing students (having enrolled at the community college for at least one previous semester), 69% had previously taken an online course, and 51% studied full time during the semester under study, carrying an average load of 10.46 credits. Continuing students had earned an average of 28 credits prior to the studied semester, with an average GPA of 2.74. For the courses included in this study, the average end-of-semester grade was 2.32, with continuing and new students earning similar grades (2.32 vs. 2.30). In the interviewed subsample (N = 43), about 75% were employed and roughly one third reported childcare responsibilities.

Assessing Each Area of Quality

To assess the quality of each course, we developed a rubric that addresses four areas:

• Organization and presentation, which examines ease of navigation, as well as clarity and organization of materials;

• Learning objectives and assessments, which evaluates whether the course clearly outlines course-level and unit-level objectives, along with clear expectations for assignments;

• Interpersonal interaction, which assesses the effectiveness of interpersonal interaction in reinforcing course content and objectives; and

• Use of technology, which examines the effectiveness of the chosen technology in supporting learning objectives.

The Appendix provides a detailed description of the quality expectations for each of the four areas; for concision, we henceforth refer to these areas as organization, objectives, interaction, and technology.

Our rubric is designed to help a rater navigate a complex set of quality characteristics within each area, promoting deeper reflection than a simple yes/no checklist while remaining comparably quick to apply. To assess a course’s quality in each area, a member of the seven-person research team logged into the course website several times during the semester to observe ongoing activities (for example, announcements, discussion board postings, and chat sessions), collected course documents (including but not limited to syllabi, assignments, and other written materials), commented in detail on the extent to which the course met expectations in each of the four areas, and then assigned a numeric rating for each area. A second researcher then reviewed the original documentation and provided an independent rating.

To produce a final set of numeric ratings for each course, we developed scoring guidelines designed to promote strong inter-rater reliability. In pilot scoring trials, raters struggled to reach agreement on a fine-grained 5-point scale, whereas a 3-point scale reduced disagreements for objectives, interaction, and technology; disagreements about organization persisted more often. Any remaining discrepancies were resolved through conference discussions involving the two raters and the research director. Accordingly, the final rating scale for each area ranges from 1 (lower quality) to 3 (higher quality), with detailed descriptions provided in the Appendix.

Across multisection courses taught by the same instructor, sections were largely identical: the same materials and teaching structures were used in each section, and the levels and types of interaction did not vary markedly from one section to another.

From a practical standpoint, all activities were archived and could have been viewed at the end of the semester; we chose instead to monitor the courses throughout the semester to inform our ongoing interviews and other research activities. Accordingly, the unit of analysis was the specific course rather than the specific section.

Additional Qualitative Data

As part of a larger study, the research team conducted in-depth interviews with 24 instructors (including all 19 who taught one of the courses used in this analysis) and 47 students (43 of whom were enrolled in one of these courses). Interviews followed semi-structured protocols focused on experiences and perceptions related to online learning and, for students, particularly on their learning experiences in the online courses in which they were currently enrolled. All interviews were transcribed and then coded in the ATLAS.ti qualitative data analysis software, using an array of codes related to different research topics of interest (for other analyses produced using these data, see Bork & Rucks-Ahidiana, 2013; Jaggars, 2013; Edgecombe, Barragan, & Rucks-Ahidiana, 2013). For the current analysis, we used four codes (one for each area) to flag instructors’ reflections and students’ experiences relevant to the given area. We also drew on the observed characteristics of each course, using the raters’ course descriptions. In Section 5, we use both interview and observation data to explore the qualitative differences between courses that received a high versus a low rating in a given area.

Quantitative Results

Descriptive Statistics for Each Area

Table 1 summarizes the rubric score means and intercorrelations across the 23 observed online courses. Descriptively, courses tended to receive somewhat higher ratings for interaction and objectives and somewhat lower ratings for technology and organization. The four rating dimensions were moderately intercorrelated, with the strongest correlation between technology and objectives.

Table 1. Means and Correlations for the Four Quality Ratings (N = 23 courses)

Subscale  Mean (SD)  Organization  Objectives  Interaction

Predicting Course Grades

Across the 678 students, initial bivariate analyses showed that course ratings of interaction (r = 0.15, p < .001) and technology (r = 0.12, p < .01) were significantly related to end-of-semester grades, while organization (r = −0.05, not significant) and objectives (r = 0.05, not significant) had negligible associations. To better isolate these relationships while controlling for student-level characteristics, we used a multilevel model (also known as a random-effects, mixed, or hierarchical linear model), placing each predictor at its appropriate analytic level.

In a multilevel model, the variance in the outcome is partitioned into two pieces:

Course-level variation (τ00) and student-level variation (σ²) quantify, respectively, the differences in average grades across courses and the dispersion of grades among students within a course. In rough terms, course-level variation reflects how average performance shifts from one course to another—for example, one course might average a B+ while another sits around a C+. Student-level variation captures the spread of individual grades within a course: in a class averaging a C, some students perform above that mean and others below it. Typically, a course-level predictor can explain only between-course differences, while a student-level predictor can account for both between-course and within-course variation, so an effective analysis should incorporate predictors at both levels.
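
In standard two-level notation, this decomposition corresponds to the usual unconditional-means specification (a generic sketch; the exact parameterization may differ from the one estimated here):

$$Y_{ij} = \gamma_{00} + u_{0j} + r_{ij}, \qquad u_{0j} \sim N(0, \tau_{00}), \qquad r_{ij} \sim N(0, \sigma^2),$$

where $Y_{ij}$ is the end-of-semester grade of student $i$ in course $j$, $\gamma_{00}$ is the grand mean grade, $u_{0j}$ is the course-specific deviation from that mean, and $r_{ij}$ is the student-specific deviation within the course.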

More precisely, the estimate for any course-level predictor is adjusted to account for differences between courses that arise from student-level variables. In this study, all student-level predictors were grand-mean-centered. The sample size available to assess the predictive capability of the course-level quality ratings was quite small (N = 23 courses).

A first step in multilevel modeling is to determine whether there is significant variation in student grades across courses; if not, there is nothing to explain. In this study, the unconditional-means model indicated meaningful variation in student grades across online courses (τ00 = 0.48, SE = 0.17, p < .01), as well as significant variation in grades among students within each course (σ² = 1.79, SE = 0.10, p < .001).
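
From these reported components, one can derive the intraclass correlation (a standard derived quantity, not reported above):

$$\rho = \frac{\tau_{00}}{\tau_{00} + \sigma^2} = \frac{0.48}{0.48 + 1.79} \approx 0.21,$$

indicating that roughly one fifth of the total variation in grades lay between courses rather than between students within a course.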

In the second step, we added student-level predictors to the model—such as prior online course experience, accrued prior credits, current semester credit load, age 25 or older, race (White), gender (female), and type of academic track—to reduce unexplained variation at both the student and course levels. Together, these predictors explained 6% of the within-course (student-level) variation and 8% of the course-level variation.
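
Consistent with the grand-mean centering noted above, this second-step model takes the standard form (again a generic sketch, with covariate coding left abstract):

$$Y_{ij} = \gamma_{00} + \sum_{k} \gamma_{k0}\,(X_{kij} - \bar{X}_k) + u_{0j} + r_{ij},$$

where the $X_{kij}$ are the student-level characteristics listed above.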

In a third step, we added the four course-level quality ratings as predictors, which substantially improved the explanation of the course-level variation, to 23 percent.
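
These percentages follow the usual proportional-reduction-in-variance (pseudo-R²) convention for multilevel models (e.g., Raudenbush & Bryk, 2002):

$$R^2_{\text{course}} = \frac{\tau_{00}^{\text{null}} - \tau_{00}^{\text{conditional}}}{\tau_{00}^{\text{null}}},$$

so the 23 percent figure implies the course-level variance fell from 0.48 in the unconditional model to roughly $0.48 \times (1 - 0.23) \approx 0.37$ once the predictors were included (an implied value; the conditional components themselves are not reported above).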

Subsequent tests conducted on each of the four ratings separately revealed that the additional explained variance arose entirely from the interaction area.10 Consequently, in the final model we discarded the other three ratings and focused exclusively on the interaction rating.

The final model explained 23 percent of the variance in course-level grades and indicated that the interaction rating had a significant positive effect on student grades (b = 0.40, SE = 0.19, p < .05). For the average student (defined as having mean scores on the covariates included in the model), taking a course with the typical interaction rating of 2 results in an estimated course grade of 2.27, or about a C+. When the same student takes a course with an interaction rating of 3, the estimated course grade increases to 2.67, or about a B−.
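
In generic form, the final model adds the course-level interaction rating $W_j$ to the student-level specification (a sketch, not necessarily the exact estimated equation):

$$Y_{ij} = \gamma_{00} + \gamma_{01} W_j + \sum_{k} \gamma_{k0}\,(X_{kij} - \bar{X}_k) + u_{0j} + r_{ij}, \qquad \gamma_{01} = 0.40.$$

The two estimated grades are consistent with the reported coefficient, since moving from an interaction rating of 2 to 3 adds one rating unit times the coefficient:

$$\hat{Y}(3) = \hat{Y}(2) + 0.40 \times (3 - 2) = 2.27 + 0.40 = 2.67.$$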

For more details on the application of multilevel models in educational contexts, see Raudenbush and Bryk (2002), Kreft and de Leeuw (1998), and Singer (1998).

10 That is, when each of the other three ratings was added separately to the model, it explained 0 percent additional course-level variance over and above the student predictors.

Sensitivity Analysis With Prior GPA

Of the 678 students in the sample, 591 had attended college for at least one prior semester, making prior GPA available for them in the dataset. A sensitivity analysis on this reduced sample tested prior GPA as a predictor. The unconditional-means model for the reduced sample showed similarly sized variance components at the course level (τ00 = 0.50, SE = 0.18, p < .05) and at the student level (σ² = 1.75, SE = 0.10, p < .001). In the second step, including prior GPA along with the other student-level predictors markedly increased explained variance, accounting for 37 percent of the course-level variance. Adding the interaction rating in the third step raised the explained variance by about 6 percentage points, to 43 percent. The final model indicated a marginally significant positive effect for the interaction rating (b = 0.30, SE = 0.17, p < .10).
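
By the same proportional-reduction convention, these figures imply that the conditional course-level variance in the reduced sample fell from 0.50 to about $0.50 \times (1 - 0.37) \approx 0.32$ with the student-level predictors, and to about $0.50 \times (1 - 0.43) \approx 0.29$ once the interaction rating was added (implied values; the conditional components are not reported above).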

Qualitative Data on Interpersonal Interaction

Student–Instructor Interaction

Courses that scored highly on interpersonal interaction had instructors who posted often, encouraged student questions through a variety of channels, responded to queries quickly, and actively solicited and incorporated student feedback; these interrelated practices, explored in more detail below, supported stronger engagement and learning outcomes.

High-interaction instructors consistently posted announcements to remind students about assignment requirements, looming deadlines, newly posted documents, examinations, and other logistical details. When announcements were frequent, students tended to be more satisfied with the course, whereas limited announcements were associated with dissatisfaction stemming from unclear expectations and coordination problems. As one student explained:

Distance learning has me studying hard, but this course feels too lenient The instructor doesn’t set explicit due dates the way my other class does, leaving me with less structure and accountability Still, everything is supposed to be completed by the end of the term, which puts the pressure on to stay on track despite the lax pace of online learning.

In high-interaction courses, students reported that instructors answered questions promptly—typically within 24 hours—and provided multiple communication channels, such as email, telephone, discussion board postings, synchronous chat, and in-person office hours. This multi-channel approach helped keep students engaged and supported their learning. Struggling students particularly valued opportunities for face-to-face interaction, with one student noting the benefits of in-person discussions:

With the upcoming test, I plan to locate her office and seek extra help in person rather than emailing her and waiting for a response.

Another student noted that both in-person office hours and video chats were helpful:

But I think once a month we should all meet with the instructor … Because you know, you can talk to a person over the computer all day long. But there’s something about sitting in front of that person that does it for me. … You know, I just don’t want you to tell me something to shut me up. I want to see what your face looks like when you give me the answer.

High-interaction instructors were also more likely to ask for student feedback and to respond to it, creating a more engaging online learning environment and higher student satisfaction. For example, one student, asked to rate his course experience on a five-point scale from one (least satisfactory) to five (most satisfactory), explained why he chose the highest rating:

She’s emailed us twice, you know, saying, “People didn’t do well, what can I do?” And she’s going to implement one of the suggestions next semester. … I say “excellent” because she already does a good job, and she wants to know what she can do to fix it. To me that says a lot.

The strategies described above helped students feel that the instructor cared about the course and their performance, which in turn led students to personalize their connection with the instructor, feel more connected to the course, and strengthen their motivation to learn and succeed. Effective teacher interaction, and the sense that the teacher was genuinely invested in students’ progress, emerged as key drivers of course engagement and academic success. The sense that an instructor “cares” seemed to carry a lot of weight in students’ assessments. One student noted:

I value educators who treat students as individuals, not numbers, and who truly care about our learning, whether in face-to-face classes or online. In person, there’s a chance to be more personal, to put a face to the name and get a sense of who you are. Online courses may lose that immediacy, but what matters to me is having instructors who are accessible and willing to talk when I’m struggling or have a question. That kind of responsiveness makes learning meaningful in any format.

It seemed that students could easily distinguish between instructors who cared and those who did not; as one student explained:

Some teachers just don’t care, leaving students to fend for themselves in an online learning environment as if it doesn’t matter You’re still getting a grade, and you end up doing the teacher’s work online and calling it a day.

Many teachers want students to understand the differences between online learning and in-class education and to get the help they need when they can’t attend in person. They recognize barriers to participation and try to offer support, flexible guidance, and resources for distance learning. But some educators still seem detached, as if their compensation were the only priority.

Several students made explicit the link between teacher interaction and caring One student who felt that the instructor really cared described her experiences with the instructor during office hours:

During her on-campus office hours, she’ll study with you for about an hour and a half. We can set up the meeting through chat or email, and I’ll go to her office to sit down and review any questions I have. She’s very personable. I actually study with her. It’s hard to find teachers like that.

Another student in a humanities course appreciated the helpfulness evident in the narrated videos the instructor had created. When asked whether she sensed that the instructor cared about her learning, she responded:

His video tutorials clearly convey what he expects and come across as a genuine effort to help, with practical examples to illustrate each point When you’re unsure about what to do, you can click on someone else’s answer to read what they’ve said and learn from their explanations.

Another student was able to sense the instructor’s passion through live chat and the discussion board.

Student–Student Interaction

Across the 23 courses studied, student engagement with peers through online discussion boards remained weak. Although nearly every course included a discussion board, the majority of student postings were superficial, offering only brief acknowledgments like “I agree” or “good job.” Even where a minimum posting requirement existed, content tended to be minimal, undermining opportunities for meaningful peer-to-peer interaction and collaborative learning. Instructors encouraged the use of discussion boards, but students seldom engaged beyond surface-level responses. As one instructor explained:

I’m still figuring out how to bring this together, and the piece feels a bit up in the air. I used to run online discussion boards for each chapter—chapter one, chapter two—where students could post thoughts about the chapter, and either I or someone else would respond. … No one ever did. … So at this point, with this conversion over the spring, I didn’t even bother … because it hadn’t worked in the past.

In a subset of high-interaction online courses, students engaged more regularly and thoughtfully on the discussion board. Participation was not only mandatory and graded; instructors also clearly articulated what constituted high-quality posts and insightful replies. For example, one instructor noted in the syllabus that “discussion board posts and replies will be closely evaluated for depth of thought and insight into the question.

Replies to peers must be thoughtful, and should not simply agree with the author, or state ‘me too.’” He also made explicit that each post would be assessed in terms of focus, specificity, support, thoughtfulness, and use of language, with each criterion counting for 2 points.

Under the focus rubric, students earn 2 points for making vividly clear references to specific readings, 1 point for making some reference to readings, and 0 points for making no reference to readings In addition, required responses to other students’ initial postings are graded in terms of the extent to which they address points or questions raised by the initial post and draw upon readings to validate their response The instructor felt that the grading rubric helped encourage discussions that could build “a real, personal connection” among students in an online class, “like there is in a real, traditional classroom.”

Building community in any way you can is a powerful driver of student success: it fosters a sense of belonging that boosts engagement and participation, strengthens retention, and supports motivation, persistence, and overall academic performance.
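
Taken together, the five grading criteria this instructor described, at 2 points each, imply a simple additive score with a 10-point maximum per post (our arithmetic from the description above, not a formula quoted from the syllabus):

$$\text{post score} = \text{focus} + \text{specificity} + \text{support} + \text{thoughtfulness} + \text{language}, \qquad \text{each} \in \{0, 1, 2\}, \quad \text{maximum} = 10.$$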

Despite encouragement from instructors to participate in peer discussions, most online learners in the study showed little interest in engaging with their online peers. When asked whether interacting with other online students mattered, one student said he did not need it for that course. Many participants saw online discussions as forced, artificial communication that neither mirrored the spontaneous personal connections of a face-to-face classroom nor fostered active learning. As one student explained:

Face-to-face in math class, we joke around and help each other, and when someone needs notes she asks—often starting with an email and then a follow-up request We interact more in person because we see each other at least twice a week, while online interactions are largely what we're told to do If it were just the teacher giving me an assignment for myself, I probably wouldn’t engage with my fellow students at all Direct communication feels different, and I participate mainly because the assignment requires it.

In addition to a general indifference to online peer interactions, some students reported negative experiences with required group work in their online courses One student provided an example:

During another online class, I had to participate in group work, and I really don’t like it I don’t mind group work in a classroom setting, but online collaboration is much more difficult.

When asked why online group work is more difficult to complete, the student cited two main obstacles: scheduling a common time for everyone and a lack of commitment from group members.

Not having daily or regular in-person meetings makes it really difficult for us to coordinate and work together; when one member doesn’t contribute, the rest have to pick up the slack, yet that person still gets credit because the work is done online and it’s hard to see who did what This isn’t a good system, and in my online classes this semester there hasn’t been any group work, which I’m grateful for, because online collaboration is extremely difficult and not really worth it.

Discussion and Conclusion

Well-organized courses with clearly defined learning objectives are desirable, but these aspects did not necessarily affect student grades in this study. Students preferred courses that used appropriate learning technologies over those that were overly reading-heavy (Edgecombe et al., 2013), yet this preference did not reliably translate into higher grades. The only factor that predicted course grades was the level of interpersonal interaction.

Although our small sample size makes definitive conclusions difficult, the data suggest that courses in which the instructor posted frequently, invited student questions through multiple modalities, responded promptly to queries, actively solicited and incorporated student feedback, and, most importantly, conveyed a genuine sense of presence and “caring” created an online environment that encouraged students to commit themselves to the course and perform more strongly academically. This finding aligns with Young (2006), who suggests that students regard an effective online instructor as actively involved, delivering timely feedback, adapting to students’ needs, and encouraging interaction among classmates, the instructor, and the course material. This emphasis on an engaged teacher also aligns with Holmberg’s (1995) belief that instructors must foster a personal relationship with students to motivate them to succeed in online learning.

Qualitative data indicate that students in our sample prioritized instructor–student interaction over peer-to-peer engagement in online courses, with many viewing peer collaboration as a required obligation rather than a meaningful part of learning. This general indifference to peer interaction may reflect the nontraditional characteristics of our participants, many of whom juggle professional or family obligations that limit their schedules and complicate participation in collaborative tasks. Enforced peer interaction in distance education may thus undermine student autonomy in choosing when, where, and how to learn, and may not necessarily benefit learners, a view advanced by scholars such as May (1993) and Ragoonaden and Bordeleau.

Across the literature on online interpersonal interaction, there is some disagreement about whether student–instructor or student–student interaction is more critical to learning. Some studies align with the view that student–instructor interaction is more salient in online learning. For example, Fredericksen et al. (2000) found that student–instructor interaction contributed more to perceived learning in distance education than did student–student interaction. Similarly, Sener (2000) reported that the online “tutorial model,” which emphasizes strong student–instructor interaction without requiring student–student interaction, yielded student success rates comparable to those of courses adopting a model that includes student–student interaction.

The comparison model (a “mandatory interaction model”) required students to engage with both the instructor and peers, suggesting that the addition of student–student interaction conferred no advantage. In contrast, Bernard et al.’s (2009) meta-analysis found the opposite pattern: online interventions that increase student–student interaction produce a larger learning impact (effect size of 0.49) than those that boost student–instructor interaction (effect size of 0.32).

One influential theory of online learning, proposed by Anderson (2003), helps explain these divergent findings by suggesting that high-quality learning can emerge from either strong student–instructor interaction or strong student–student interaction. He goes further, proposing that in some cases neither type of interpersonal interaction is necessary if course technology is robust enough to support high levels of student–content interaction (Anderson, 2003, p. 4):

Deep, meaningful formal learning is supported when at least one of the three interaction forms—student–teacher, student–student, or student–content—is highly engaged; the other two forms may be offered at minimal levels or even omitted without degrading the educational experience.

Using more than one of the three learning modes typically yields a more satisfying educational experience, though such interactive, multi-mode experiences are often less cost- and time-efficient than simpler, less interactive learning sequences.

An initial association was observed between the effective use of technology to support learning objectives and higher course grades, but this link disappeared after controlling for student characteristics. This may indicate that the learning technologies incorporated into these courses were not robust enough to promote strong student–content interaction (for details on the technologies used in these courses, see Edgecombe et al., 2013).

Course technologies were not, however, entirely disconnected from student grades: some tools contributed to higher performance by strengthening interpersonal interaction. For example, video- or audio-recorded narrations helped instructors establish a stronger sense of personal presence, which in turn enhanced a course’s interpersonal interaction rating.

Overall, students in our study seemed most concerned with their connection to their instructor, a focus that may be due in part to the profile of the community college population. In an in-depth qualitative examination of community college students, Cox (2009) found that many were disheartened or even paralyzed by the fear that they could not succeed in college. Cox noted that:

Some students began the semester overwhelmed by fear, but with proactive college intervention they learned to manage their anxiety and successfully complete their courses This progress depended on someone at the college who could reassure them about their academic competence and potential to succeed In the interviews, professors emerged as that pivotal support, especially when they could come down to students’ level and genuinely relate to them.

Cox’s findings align with our observation that the sense of caring, which was communicated through interpersonal interaction, seemed particularly salient to students in their conversations about course quality.

Beyond interpersonal interaction and effective use of technology, we also evaluated course organization and structure, as well as the clarity of learning objectives and assessments. The organization subscale showed weak predictive power, which may be tied to measurement challenges: this was the aspect on which our team found it hardest to reach consensus. Some courses were clearly weak in organization and navigability, others clearly stronger, but the majority fell somewhere in between, exhibiting a mix of confusing and clearer features. These courses typically scored a 2 on our scale, suggesting that certain organizational features may matter more to course quality than others.

Negligible associations with course outcomes for the organization and learning-objectives scales could also reflect restricted range rather than a true lack of relationship. Although there is some variability across courses, it is possible that only a minimal threshold is necessary in each area, making a yes/no checklist sufficient to determine whether a course meets that threshold. In this view, rubrics such as Quality Matters may adequately capture some aspects of course quality. However, based on the findings of this study, quality ratings of interpersonal interaction should include not only a checklist of whether certain types of interaction are present but also a more nuanced assessment of how the teacher communicates interpersonal presence and caring. Likewise, quality ratings for technology should focus not only on the use of current technologies but also on how those technologies are used to support student interaction, confidence, motivation, and learning.

A.1 Organization and Presentation

The course features an easy-to-navigate, self-explanatory interface that helps students identify and manage course requirements. Materials are clearly labeled and consistently organized, with the course homepage highlighting central resources aligned to learning objectives and minimizing extraneous content. Links to internal and external web-based materials are integrated seamlessly with other course materials and kept up to date. A clear step-by-step guide enables students to locate key documents, assignments, and assessments quickly.

1 – Unclear navigation in the presentation of the course, with no instructions on how to approach navigation.

2 – Clear navigation in the presentation of the course, but no instructions on how to approach navigation.

3 – Clear navigation in the presentation of the course, with step-by-step instructions on how to approach course navigation.

A.2 Learning Objectives and Alignment

Learning objectives and performance standards for the course and each instructional unit are clearly defined, giving students a precise understanding of what they need to know and what they will be asked to do. Objectives are published on the course site and syllabus, with explicit connections among them to create coherence across instructional activities and a clear rationale for how content fits together. These objectives are specific and transparent, detailing how student performance will be measured both overall and within individual units, and the grading criteria are clearly stated.

1 – Unclear course-level or unit-level objectives, along with unclear expectations for assignments.

2 – Some course-level objectives, unit-level objectives, or expectations for assignments are clear, while others are not.

3 – Course-level objectives, unit-level objectives, and expectations for assignments are clear and well aligned with one another.

A.3 Interpersonal Interaction

This course offers abundant opportunities for meaningful interaction with the instructor and with other students, designed to enhance knowledge development and foster productive learning relationships. Instructor feedback is specific, actionable, and timely, clearly identifying what students do well and where they can improve. Instructors employ strategies to increase instructor presence, helping students become familiar with the instructor’s personality and teaching style. Student–student interactions are embedded in well-designed instructional activities that are relevant, engaging, and aligned with specified learning objectives. The types and nature of interactivity are determined by the desired learning goal rather than by arbitrary criteria for collaboration or communication. These interactions emphasize knowledge and skill application, not mere recitation.

1 – Little or no meaningful interpersonal interaction.

2 – Moderate meaningful interaction with the instructor and/or among students.

3 – Strong meaningful interaction with the instructor and among students.
