Article Review #4

Biard, N., Cojean, S., and Jamet, E. (2018) Effects of segmentation and pacing on procedural learning by video. Computers in Human Behavior, 89, 411-417.

In this study, Biard, Cojean, and Jamet (2018) explore how “the pace of presentation of an instructional video affects the learning of professional skills” (413). In particular, they are interested in the effect of system-directed versus learner-directed pauses in instructional videos designed to teach procedural skills.

The authors justify their study by accurately pointing out that videos are becoming more frequently used in education. In particular, videos are often used to teach procedural skills: “learning to perform a series of actions to achieve a particular goal” (411). According to the authors, some studies show that instructional videos are more effective than static images, text, or drawings, while other studies show the opposite because video information is “transient, and more cognitively costly processing is thus required to extract the relevant items and hold them in memory” (412). A continuous flow of information, like that encountered in an instructional video, imposes a heavy cognitive load on the learner. The most current recommendation in the literature for this problem is to present “learner-paced segments” (412). If learners have control over the video, the theory goes, they can halt this flow of information and lessen the cognitive burden.

The authors hypothesize that “the pace of presentation of an instructional video affects the learning of professional skills” (413). In particular, they expect that novices have difficulty identifying where they should pause in order to memorize the content. They explain that, “Students spontaneously make little use of a pause button, as they do not know when to halt the video, so learner-paced pausing can only improve procedural learning when it is combined with system-paced segmentation” (413). Having a pause button does not mean students (especially novices) will actually use it and thus gain the cognitive load benefit, and so system-directed pauses will be more useful.

To test this hypothesis, 68 occupational therapy students viewed a video on a clinical procedure that consisted of 7 main steps and additional sub-steps. The students were split into 3 groups: (1) the noninteractive video group was shown a video they could not pause, (2) the interactive video group was shown a video they could pause, and (3) the segmented interactive video group could pause their video, but the video also had a system-directed pause after each step. The authors recorded the number, timing, and length of the pauses during student viewings, and had students complete a recall and procedural learning test after watching the video.

The total pause duration in the segmented interactive video group was ten times longer than the spontaneous use of the pause button in the interactive video group. All three groups had the same performance on the recall tests, but the segmented interactive video group performed the best on the procedural test. Why is this the case? The authors postulate that it is partly due to Mayer’s segmentation principle (the idea that segmenting or halting the continuous flow of information can help prevent cognitive overload), but also because splitting the video after each of the 7 steps helps students build a “relevant mental model” of the task. For procedural knowledge (knowledge that is step-by-step, in other words), a “well-structured mental representation of the procedure is needed to succeed” (415).

My primary critique of this article relates to the authors’ claim that novices in particular do not know when to pause videos, and so it is even more important to have system-directed pauses. Their entire sample (68 students) consisted of “novice” learners. I think that in order to make this claim, they needed to include “expert” learners. Would an expert really pause a video any more frequently than a novice? The assumption is that it is the novice’s lack of metacognition that prevents them from pausing the video, but both procedural novices and experts can have poor metacognitive skills. Perhaps, too, the lack of learner-directed pauses could be less about metacognition and more about engagement — a student may be more inclined to “power through” a boring video as compared to pausing and engaging with an interesting one. In other words, I am not sure that the claim that novices in particular need system-directed pauses was adequately supported by the experimental design.

Overall, I appreciate that this article combines both theory and practical applications. The literature review is extensive, and the authors take care to build on previous research, particularly through their references to Mayer’s multimedia theories. Similarly, when designing procedural learning videos, designers and instructors can practically apply the findings of this article. As a designer, I often see faculty push back against “short” videos, as they are used to much longer face-to-face class sessions and would prefer to simply put their 50-minute lecture into their online course. Articles such as this one are beneficial because they offer research paired with a recommendation: videos that can be paused by the learners are good, but segmented videos or videos with system-directed pauses are even better — particularly for procedural learning.


Article Review #3

Guo, P.J., Kim, J., and Rubin, R. (2014, March). How video production affects student engagement: An empirical study of MOOCs. Paper presented at the ACM Conference on Learning at Scale (pp. 41-50), Atlanta, GA.

In this article, the authors seek to identify what kinds of videos are most engaging to students. They aptly point out that online courses heavily rely on video as a means of communicating content, but these videos package and present content in a variety of ways. Consequently, they choose to explore this question: “Which kinds of videos lead to the best student learning outcomes in a MOOC?” They focused their study on MOOCs because of the large amount of data available. In fact, they claim that their study is the largest-scale study of video engagement to date, as they are able to quantitatively analyze data from 6.9 million watching sessions. They pair this data with qualitative analysis of interviews with 6 staff who were involved in the video production. The “video watching sessions” consist of a single instance of a student watching a video, and they measured engagement by (1) the length of time a student spends on a video, and (2) whether or not the student attempted the post-video assessment.

While this article doesn’t delve deep into theory or offer an extensive review of the literature, it does provide accessible, practical, research-based applications. This can be invaluable to the designer or instructor who is looking for guidance on developing the best possible instructional videos for their course. Their recommendations are as follows:

  1. Shorter videos are more engaging. In fact, “video length was by far the most significant indicator of engagement” (4). Students rarely made it through videos longer than 9 minutes, and so the authors recommend keeping videos to 6 minutes or less.
  2. Talking heads are more engaging than PowerPoint-style presentations. Students tend to respond well to content that is “personalized.”
  3. High production value might not matter. They found students preferred a more natural, informal video to a formal, high-production lecture-style video. Again, the “personalization” element can prove powerful.
  4. Khan-style tutorials are more engaging compared to PowerPoint-style presentations or typed explanations. According to the authors, the “natural motion of human handwriting can be more engaging than static computer-rendered fonts” (6).
  5. Pre-production improves engagement. Even when recording live classroom lectures, pre-production helps create more engaging videos.
  6. Speaking rate affects engagement. Interestingly, students tended to engage better with faster talkers. The authors hypothesize that this is a reflection of the speaker’s enthusiasm rather than speed itself.
  7. Students engage differently with lectures versus tutorials. Students experience lectures as a continuous watching experience, while with tutorials, they tend to re-watch, skip around, etc.

As you can see, each of these findings leads directly to specific recommendations for video development. This chart summarizes each finding and the associated recommendations:

The authors themselves point out that this study has a few significant limitations. First, since they are only examining MOOCs, the participants are more likely to be self-motivated and comfortable with educational technology, which means their findings will not necessarily apply to all online learners.

The most significant limitation, in my opinion, is that their proxies for engagement (length of time spent watching a video and attempts at the post-video assessment) might not actually measure true engagement. (To be fair, the authors also point this out.) Similarly, while engagement is an important component of student learning, I don’t think we can necessarily say that an engaged student is also a student who is able to achieve the learning outcomes. (In other words, performance and engagement are linked but not the same thing.)

A related element that is missing from this study is qualitative data that captures the student experience. While the interviews with the 6 staff members involved in the video production can provide some insight, I think this study would be more robust if it also included qualitative data from the stakeholder at the center of this question — the students themselves. This could also perhaps address a few important hypotheses that the authors bring up but aren’t able to answer, such as (1) their hypothesis that shorter videos are more engaging because the content has to be more meticulously planned and is thus of higher quality, or (2) their hypothesis that it is the level of enthusiasm — not the actual talking speed — that leads to greater student engagement.

The recommendations for practice provided in this article are easy to understand, practical, and grounded in research. The authors close with this excellent point: “To maximize student engagement, instructors must plan their lessons specifically for an online video format. Presentation styles that have worked well for centuries in traditional in-person lectures do not necessarily make for effective online educational videos” (10). As we design videos for online learning, we will do well to take this comment to heart and consider their recommendations for practice.
