Online Assessment 01

Online assessment and the LMS

The usual starting place in discussions about online assessment at the college level is the now-traditional, integrated LMS. The standard LMS is reasonably strong at supporting textual queries and at storing quantitative records of activity within its proprietary construct. This assumes, for the most part, that answers are multiple choice or numeric. Within the LMS “gradebook,” scores from other kinds of items can be entered manually in one fashion or another, though none of this is as easy as the fully automated side.

One way or another, the scores find their way into the instructor’s gradebook and are visible to the administration via the campus SIMS (student information management system) as directed. As shared in the introduction, computer-assisted technologies have been around for some time. Electronic gradebook functionality became available to the ordinary instructor over thirty years ago with the creation of the VisiCalc spreadsheet, which was fundamental in accelerating the PC revolution. Blackboard came on the scene in 1997, moving this functionality into a browser window and the databases into centralized locations. This allowed administrators, whose accountability for student attendance and progress was keenly felt, to share ongoing record-keeping data with instructors in real time. For compliance with their governing bodies, this was an important step.

As noted social blogger and commentator Jane Hart reports, LMS use may be driven by compliance concerns or by learning concerns.[1] But she and many others argue that compliance sets the bar too low for quality and for use cases that should be addressing student learning, hence the name “learning management system.” It is as the Italian innovator of preschool design observed: schools were designed with the janitor in mind first. Keeping the school tidy can come to be seen as more important than messy learning.

The common complaint about “teaching to the test” becomes even worse when one designs the test to accommodate a static LMS. On his “Innovative Learning” website, Richard Culatta describes LMS environments as “all-or-nothing propositions for institutions, teachers, and students. That is, even if you use an open source CMS like Moodle, you are (without significant customization) bound to use Moodle’s content publishing tool, Moodle’s quiz tool, Moodle’s gradebook, etc.” These tools impose limitations and hearken back to the exams that might be run off on a school mimeograph in blue ink. One may add to this the concern about unproctored environments that allow, or at least do not prevent, answer sharing.

Worse still, since online quizzes are often given only for summative purposes, the Department of Education study states that “the providing of simple multiple-choice quizzes did not appear to enhance online learning.”[2] This does not rule out the multiple-choice quiz as a fair evaluative tool, but such quizzes must be written with great attention to the craft of assessment and to the specific nuances of this kind of query.[3] A poorly constructed multiple-choice test can actually “be detrimental because it exposes students to misinformation in the form of lures. The selection of lures can lead students to acquire false knowledge.”[4]


[1] Jane Hart, “Where are you on the LMS adoption curve?,” Social Media for Working and Learning, http://janeknight.typepad.com/socialmedia/2010/11/where-are-you-on-the-lms-adoption-curve.html.

[2] U.S. Department of Education, 48.

[3] Katrien Struyven et al., “Overall effects of end-of-course assessment on student performance: A comparison between multiple choice testing, peer assessment, case-based assessment and portfolio assessment,” Studies in Educational Evaluation 32, no. 3 (2006): 202–222.

[4] Andrew C. Butler and Henry L. Roediger III, “Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing,” Memory & Cognition 36, no. 3 (April 2008): 613.