Curriculum Delivery

Overview

Universal Design frameworks and the Web Content Accessibility Guidelines (WCAG) recommend presenting information in multiple formats: text should be optimized for screen readers; non-text visuals should have text equivalents; audio information should be captioned or transcribed. This section discusses the Cognitive Theory of Multimedia Learning and how it can be applied to optimize student learning when using multiple formats. Two samples of multimedia curriculum are provided: one showcasing content delivery and one providing instructional support for an assignment.

Cognitive Theory of Multimedia Learning (CTML)

The Cognitive Theory of Multimedia Learning was developed by Richard Mayer and others through empirical research on how humans learn. Fundamentally, the theory is built upon three key concepts:

  1. Humans primarily process information through visual and auditory channels (Dual Coding Theory)
  2. These channels have limited capacity (Cognitive Load Theory)
  3. Learning occurs when information is actively processed (Levels of Processing Model)

Chart illustrating the Cognitive Theory of Multimedia Learning, showing information moving through the visual and auditory channels into working memory, where it is processed and integrated with prior knowledge.
Cognitive Theory of Multimedia Learning

The goal of the Cognitive Theory of Multimedia Learning is to understand how best to use the visual and auditory channels to enhance essential and generative processing (which leads to learning) and minimize extraneous processing (which inhibits learning). Although Mayer and his fellow researchers have identified multiple principles (Mayer, 2014; Clark & Mayer, 2016), two are particularly relevant to applying the Universal Design principle of presenting information in multiple formats.

  • The Multimedia Principle states that humans learn better from words and pictures than from words alone.
  • The Redundancy Principle states that humans learn better when the same information is not presented in more than one format.

At first glance, the Redundancy Principle seems to conflict with both Universal Design recommendations and WCAG, which recommend that all visual text be convertible to sound and vice versa. However, research suggests that when words appear as captions running across the bottom of a video presentation, they enhance learning (Dallas, McCarthy, & Long, 2016; Costley & Lange, 2017).

Recommendations

Provide Options

Whenever possible, present information using multiple methods. Provide multiple forms of multimedia instruction: static text combined with meaningful images is one form; video narration with captions is another. Encourage students to customize their experience: playback speed can be adjusted faster or slower, and segments can be replayed as needed. The more control students have over their interaction with the material, the more likely they are to learn from it (Stiller et al., 2009; Wang et al., 2018).

Audio Accessibility

Use captions for videos with narration and transcripts for audio-only recordings. Keep in mind that spoken words are processed differently from other auditory signals, such as music or ambient noise.

Consider whether narration is necessary. Students do not always learn best from spoken words, especially when the spoken words are redundant with the visuals (captioning excepted). Spoken words noticeably enhance learning when they combine with visuals to create meaning, such as when narrating a worked example and explaining what is being done at each step.

Audio is not necessary to create a multimedia experience. A multimedia experience exists when words are combined with images. Alphabetic text next to a graphic is multimedia in the same way a narrated video demonstration is multimedia.

Thoughts on Video

Many people assume that online learning environments must include video lectures. Just as faculty should consider whether audio narration is necessary, they should also consider whether videos are necessary. If you are creating “talking head” videos, you may be creating suboptimal learning experiences for your students.

Students learn best from video when:

  • Narration and visual images are both required to generate meaning (see Audio Accessibility above)
  • Narration is delivered in an animated, conversational style
  • Any on-screen video of the speaker shows varied facial expressions, eye contact, and natural gestures

Sample Video Lecture

Multimedia lecture on Enclosure

The video above is an excerpt from a lecture in which I explain the concept of Enclosure. In this excerpt, I am capturing a drawing that illustrates how fields went from common land shared by peasants tied to a lord to smaller enclosed plots rented to individual families. The narration explains the visual, and the visual complements the narration. After this section, the video returns to a PowerPoint presentation that includes words as text, an image, and an inset video of me talking. The PowerPoint is the visual focus: the words provide students with an outline of the lecture, and the image shows the phenomenal wealth that could be acquired through Atlantic System trading (in the full-length lecture, the image is explicitly discussed in relation to this theme; I did not keep that part for this sample). The “talking head” portion is not the focus, but it conveys that I am a real person interested in the topic I am discussing. In this layout, I left empty space at the bottom of the video so that captions would not overlay the PowerPoint or my face.

The video lecture is offered to students as an option: they are instructed to either read the text-based document I provide or watch the lecture. I also embed the PowerPoint file on the same page as the lecture.

Screenshot of the text lecture accompanied by the embedded PowerPoint

Sample Instructional Support

Video demonstration of the web app Coggle

The video above is a short demonstration I made to help my students with a new technology I was asking them to use in an online class. I start with the website that gives them access to the technology: Coggle, a free web-based mind map app. I then lead them through the process of accessing the app, saving a copy of the templates I created for them for their own personal use, and sharing their personal mind map with me. The visual focus shows students what they can expect when they start clicking, while my words narrate the steps they should take and explain what to expect. A small video of me as a talking head appears in the corner so that students have a sense of my presence, but it does not dominate the demonstration. Note that I use a conversational tone and facial expressions as I narrate the steps. Finally, the video is captioned.

Sample Multimedia Assignment

If you would like to see a sample of my multimedia instruction in action, this link leads to the overview I provide for a final project. I’ve included a captioned video describing the project, two versions of a transcript (for those who would prefer to just read what I say), and brief instructions. Each page contains a small amount of information, with links to navigate to other pages.

Concluding Remarks

Although I present the information on this page as text (with images to illustrate points), as a narrated PowerPoint video, and as a downloadable PowerPoint, I did not narrate the sample videos, instead describing them with alphabetic text. This was a conscious choice to avoid splitting your attention and creating extraneous processing. My other option was to mute the original audio tracks and talk over what I was doing and why, but this would have detracted from the original intent of the videos: to demonstrate multimedia learning objects.

Presenting information in multiple formats can enhance learning if it is done thoughtfully, keeping learning theories in mind!