Article Review #4

Cross, T., & Palese, K. (2015). Increased learning: Classroom assessment techniques in the online classroom. The American Journal of Distance Education, 29, 98–108.


Cross and Palese (2015) begin by outlining the basic concept of formative assessments, or low-stakes checks of student understanding. According to the authors, Fisher and Frey (2008) describe a method of teaching called the Gradual Release of Responsibility (GRR), which follows this pattern:

Direct instruction → guided practice → independent practice

Cross and Palese (2015) explain that formative assessments represent the “guided practice” stage. (Formative assessments differ from summative assessments, which are graded and designed to measure student mastery of specific learning outcomes at a given point in time.) According to the authors, formative assessments are important because they inform both the teacher and the student about the student’s progress.

In this article, the authors analyze the impact of formative assessments on the frequency of student discussion board postings and on quiz scores in 69 fully online course sections. Five math instructors participated, implementing a specific type of formative assessment, classroom assessment techniques (CATs), into their weekly discussion boards. The instructors compared the resulting data to data from their own previous semesters, when they had not implemented CATs. The authors found that, “in both cases of posting frequencies and quiz scores, CATs sections of classes had significantly higher means” (104).

According to the authors, CATs are ungraded formative assessments designed to gauge learning in real time as part of a face-to-face, in-class activity. While the authors do give a few examples of the types of CATs implemented as part of this study, I found their definition of CATs to be a little thin. Without a clear understanding of what distinguishes CATs from other types of formative assessments, it was harder for me to see what was unique about this particular research design. In addition, there was little mention of what the discussion boards looked like prior to implementing CATs. If the discussions were not originally designed with student engagement in mind, it could be the revision of the prompts alone that led to improved outcomes, and not formative assessments/CATs specifically. In other words, without a clear definition of these variables, it is hard to tell what is “doing the work” in these improved outcomes. (To be fair, the authors do state that they cannot make any claims of causation based on their current data.)

In addition, to argue the importance of implementing formative assessments in online classes, the authors provide this quote from Bergquist and Holbeck: “Traditionally, online courses have been designed with only summative assessments in place, such as graded discussion questions, participation, weekly assignments, quizzes, and exams. However, formative assessments are also necessary to check for student understanding in the online classroom prior to the summative assessment” (99). As I considered this quote, it occurred to me that perhaps the authors and I have different ideas about what really constitutes a “formative assessment.” While it is true that online courses have been designed primarily to consist of graded discussion questions, participation, weekly assignments, and quizzes, I would argue that in many cases these activities function as formative assessments. Do formative assessments have to be completely ungraded in order to “count” as formative instead of summative? Or do they simply need to be low-stakes? Or do they simply need to offer students a chance at “guided practice”? In my opinion, assigning at least a few points even to an assignment designed to be “formative” encourages greater participation and buy-in from students. Even a graded activity can function as “guided practice” if it is designed so that students know the intention of the assignment is to “practice” or to “work it out,” rather than to “do” or “perform.”

These critiques aside, the most valuable part of this study was the section in which the authors offered theories about why implementing CATs worked. According to the authors, using CATs within online discussion boards offers more peer-to-peer and instructor-guided learning, a “safe space within which students who are more advanced can guide more novice peers” (106). It was also able to “help faculty members give more individualized attention to students” and to shorten the feedback loop (106). All of these outcomes could indeed stem from a judicious use of formative assessments in online courses. Building a discussion board as a “safe space” for students to practice new concepts could lead to some very useful and engaging discussions. Formative assessments certainly have a place in the online classroom, and putting them into discussions seems like a great place to start.