Boling, E.C., Hough, M., Krinsky, H., Saleem, H., & Stevens, M. (2012). Cutting the distance in distance education: Perspectives on what promotes positive online learning experiences. Internet and Higher Education, 15, 118-126.
In the journal Internet and Higher Education, Boling et al. (2012) offer student and instructor perspectives on what constitutes an effective online learning experience. They explore two questions: what hinders online learning, and what supports it? To address these questions, the authors conducted a descriptive, qualitative case study. They performed 60-minute interviews with six instructors and ten students from various fields of study about their online learning experiences, and examined materials and artifacts from the online courses. They coded the interview transcripts and compared themes in order to draw conclusions. The themes used for coding were drawn from the Cognitive Apprenticeship Model (CAM) and included “four dimensions that constitute any learning environment”: content, method, sequencing, and sociology (120). After examining the coded interviews and course materials, the authors concluded that online courses incorporating multimodal learning and interpersonal interaction were the most effective. According to Boling et al. (2012), participants preferred “those courses and programs that were more interactive and incorporated the use of multimedia” (120). Courses that relied heavily on text-based content (reading and writing) and lacked student-student and student-instructor connections were not viewed as favorably.
The question posed by Boling et al. (2012) is certainly an appropriate one, but they could have made a better argument as to its value. The authors justify their particular study by pointing to the rapid expansion of online programs and courses in recent years, explaining that while “numerous studies have shown that teaching online requires a different pedagogy and unique set of skills from that of the traditional classroom,” most instructors are not trained or equipped for online teaching (118). That justification is not particularly convincing to me, especially since the study was conducted in 2012 and previous studies likely would have addressed the same research question. Instead, Boling et al. (2012) could have argued that their study has value because (1) it addresses the question from a unique angle by exploring both student and instructor perspectives, or (2) it fills a gap in the existing literature (perhaps a greater emphasis on their use of the CAM lens would have accomplished this).
The article also includes a few unnecessary details (such as a definition of a “case study”) and is occasionally repetitive. For example, the authors repeat a few interview quotes multiple times, which seems unnecessary given the number of interviews completed. The study also presents one example of an “effective” online course; however, this example was an entire program. In my opinion, comparing programs to other programs can be problematic, since the elements students like about a program may have nothing to do with online course design (such as the capstone in-person work required by the “favorable” online program).
Despite these shortcomings, the study by Boling et al. (2012) offers a few valuable contributions. First, the inclusion of both instructor and student perspectives is noteworthy. Boling et al. (2012) explain this choice, stating that, “We also wanted to understand teaching and learning from instructors’ perspectives to help inform what was learned from students” (119). I appreciate that the authors acknowledge and account for the ways that instructors, too, learn during an online course. The use of the CAM lens for coding the interviews was also unique; as the authors point out, few studies have used CAM to examine online learning specifically (119).
Lastly, the study by Boling et al. (2012) has value because it compares online courses to other online courses. This represents a shift away from much of the literature, which tends to compare the effectiveness of online courses to that of face-to-face courses. Because of this approach, I gleaned a few practical applications for online course design. For example, the authors explain that one source of student frustration was not knowing to whom they were speaking during synchronous course activities (121). Knowing this, I would be more likely to recommend that instructors start all synchronous course experiences with brief student and instructor introductions. Insights such as this can spur simple improvements to online course design or facilitation, leading to a more positive student experience.