This week’s assignment has had me spending A LOT of time staring at my notes, trying to think through the challenges of assessing learning in online, asynchronous modules. If I were teaching an online course, I would still have opportunities to interact with students, and that opens up so many more possibilities for active learning assignments and quality assessment tools. When you’re designing stand-alone online learning modules, the challenges are both conceptual AND technological. I could probably dream up a lot of interesting assessments, but can they be implemented?
That said, here are my thoughts on Fink’s Procedures for Educative Assessment:
As I noted last week, for the purposes of this class I am working on an introductory research skills module aimed at freshmen, covering the library catalogs and journal databases. Here are two forward-looking assessment situations:
1. A student needs to locate a specific book chapter that is listed as required reading on a class syllabus.
2. A student needs to locate three peer-reviewed journal articles on a specific topic for a class assignment.
One of my learning goals relates specifically to situation 2: evaluate three major databases and choose the appropriate one for starting journal article searches. I’m having a harder time coming up with criteria that are more than just binary. For example, one criterion for meeting this goal would be: the student selects the correct starting database for the discipline in the assignment (e.g., WoS for a hard-science topic). Another criterion might be: the student selects WoS or ASP if the most recent research (i.e., the last 3-5 years) is required by the assignment. A third criterion would be: the student selects at least one additional database for the search and compares the results.
I’m going to talk about self-assessment, FIDeLity feedback, and the active learning stuff together, because I think this is really the tricky part of designing online learning modules. Let’s face it: most online tutorials are still fairly passive, either a series of text-heavy pages, a narrated PowerPoint, or screencasts. We’ve been seeing more going on with tools like pop-up quizzes, which helps, but the simple fact is that it is much harder to engage students in active experiences, and, relatedly, harder to give immediate feedback (or any feedback at all) when there is no instructor and the modules are accessed asynchronously. In most cases, you are back to delivering information to learners, so there is no opportunity to actually see what they are doing, and therefore it is difficult to assess their learning. The actual proof of learning might come when they turn their assignments in to their classes, and the librarian might never find out whether an online module was helpful to students or not.
However, I don’t think all hope is lost. There is technology that is making it easier to embed quizzes into screencasts, so in addition to doing some simple multiple-choice knowledge checks, I thought one type of self-assessment might be to ask students to rate their confidence in carrying out the task on a Likert-type scale. In addition, I think the use of simulations may help in online library instruction. For example, I recently heard about the Guide on the Side tool from the University of Arizona (where our course leaders are!), and it looks very promising (does it work with resources that are behind a login, like databases?). In the case of my specific learning goal, if I had a tool that would allow learners to run simulated searches with feedback, like some of the samples on the Guide on the Side page, that would seem to be a more active experience, and would hopefully allow for more accurate assessment. I am also interested in exploring the possibilities of games for designing online instruction in libraries, but it is not a subject I know a lot about. So if anybody here has ever used games or other simulations, I would love to hear about it! For those of us working on asynchronous online modules, I think the real challenge lies in what tools are available at your particular institution, because unfortunately that is going to limit the kinds of instructional and assessment activities you can design. The related constraint is that all the feedback has to be pre-programmed, so there is no flexibility for individualization or spontaneity.
Speaking of feedback, I want to talk a little bit about the FIDeLity model. As I already noted, frequent and immediate feedback are big hurdles in asynchronous online learning, particularly in self-contained modules. Something I think does not get enough attention, but that is really important, is how emotion is communicated online, particularly in text-based environments. I’m coming at this as somebody with a background in online communication research, and based on some recent research I’ve seen, as well as a decade-plus of observing and studying online conversation, my current working hypothesis is that people are generally really bad at accurately decoding the emotional intent of text-based communication. For online learning, I think the implication is that your words may be perceived as harsher if they are delivered through text. Even feedback that is meant to sound positive may at best be perceived as neutral. So if a goal is to design online learning that contains feedback delivered with empathy, then I think we have to come up with ways to move beyond text and incorporate audio and/or video feedback in our modules. Of course, that adds to the workload and the technical challenges, which is also an inescapable part of online instructional design. It’s not something I’ve seen much of, so I’m interested if anybody knows of good examples.
I’m not sure if I’ve quite answered all the questions, but I’d love to hear what people think, particularly if you are also trying to create stand-alone online learning content.