Online Course Quality Assurance: Struggles and Successes

Published: Friday, October 16, 2020
Summary:

Digital Learning’s Continuous Improvement team strives to ensure that UArizona’s online courses meet high quality standards.

Online course evaluations utilizing the Quality Matters rubric have been one of Digital Learning's core quality assurance processes since 2015. As we approached the half-decade mark in applying this rubric, we thought it might be a good idea to step back and see what we could learn from past evaluations.

More specifically, we wanted to know:

  • What’s going on with the courses that we evaluate? Are there any trends in how rubric standards have been rated across evaluations?
  • How can we provide more targeted support to faculty? If certain standards are more challenging to implement, how can we create trainings focused on those standards?
  • How can DL create internal processes that help faculty build better courses? In other words, how can the Continuous Improvement team improve our own practices?

So, in a nutshell, we amalgamated data from our internal course evaluations from 2015 to 2019 and ran some simple statistical tests (ANOVA and chi-square) to see what instructors are doing well, where they have improved over time, and where they still need support in terms of online course design. As we uncovered these trends, we reflected on the contextual changes that have occurred over the past five years to make sense of the story the numbers were telling us.
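For readers curious about the mechanics, here is a minimal sketch of how such tests could be run in Python with pandas and SciPy. The file names and column names are hypothetical placeholders for illustration, not our actual data pipeline.

```python
# A minimal sketch of the kind of analysis described above, using two
# hypothetical CSV exports: one row per review (reviews_2015_2019.csv, with
# year and total_score columns) and one row per review-and-SRS pair
# (srs_ratings_2015_2019.csv, with year, srs_id, and a boolean met column).
# File and column names are illustrative, not our actual schema.
import pandas as pd
from scipy import stats

reviews = pd.read_csv("reviews_2015_2019.csv")
srs_ratings = pd.read_csv("srs_ratings_2015_2019.csv")

# One-way ANOVA: did overall review scores differ across evaluation years?
scores_by_year = [group["total_score"].values
                  for _, group in reviews.groupby("year")]
f_stat, p_anova = stats.f_oneway(*scores_by_year)
print(f"ANOVA across years: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Chi-square test of independence: was an individual standard's Met/Not Met
# outcome related to the evaluation year? (Shown here for SRS 2.4.)
srs_24 = srs_ratings[srs_ratings["srs_id"] == "2.4"]
contingency = pd.crosstab(srs_24["year"], srs_24["met"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)
print(f"SRS 2.4 Met vs. year: chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
```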

The findings were promising. Though the effect of year was statistically significant only for the 2015 scores (likely due to the low sample size), there were general trends in the data. As you can see in Figure 1, scores rose sharply from 2015 to 2019, generally peaking in 2018. This was largely due to the roll-out of the Wildcat Design Set, which provided educators with a template that laid the foundation for solid course design and also met a handful of criteria in the rubric. With these foundational criteria met, instructors could focus on other aspects of course design, such as creating high-quality content or learning to use new digital tools in their courses. Another factor that may have played a role was our growing Continuous Improvement team. With more instructional designers hired to the team in 2018, we were able to offer more extensive faculty support.

Figure 1: Most scores peaked in 2018

While most standards peaked in 2018, there were a few oddballs. For example, Standard 3, Assessment & Measurement, rose rapidly in 2016 before taking a dip and finding its plateau. While there is no single obvious reason for this, instructors generally claim that this is one of the easiest standards for them to apply. The Accessibility & Usability standard, on the other hand, did not reach its pinnacle until 2019, perhaps due to increased emphasis on accessibility and UDL in our own office trainings and faculty support processes. Also, Quality Matters’ transition from the 5th to the 6th edition of its rubric in late 2018 separated video captioning from text and image accessibility, which made this standard easier to meet. Finally, there were more campus-wide workshops and Faculty Learning Communities devoted to raising awareness of how to improve the accessibility of text and images within a course.

Figure 2: General Standards that did not peak in 2018

In the Quality Matters rubric, each main standard is divided into sub-standards (also called Specific Review Standards, or SRSs). Our analysis of those SRSs shed some light on specific areas where instructors are struggling. The good news is that almost 80% of the SRSs were met in more than 80% of the reviews. The SRSs that courses tended not to meet, along with our brief explanations of why, are listed in Table 1 below; they involve computer skills and digital information literacy, alignment with learning objectives, and accessibility. A short sketch of how these rates could be tallied follows the table.

Table 1: Least frequently "Met" Specific Review Standards
Each Specific Review Standard below is listed with the percentage of reviews in which it was met, followed by our explanation of why educators struggled with it.

SRS 1.6 Computer skills and digital information literacy skills expected of the learner are clearly stated. (66% Met)

Reviewers often didn’t think that what we included in our design template was adequate to meet the standard. There is disagreement about how to interpret this standard as it applies to what we provide.

SRS 2.2 The module/unit-level learning objectives describe outcomes that are measurable & consistent with course-level objectives or competencies. (73% Met)

Module Learning Outcomes are not required by the institution, and many instructors have never received training on how to write them, or they question their value.

SRS 2.4 The relationship between learning objectives or competencies and learning activities is clearly stated. (62% Met)

We built an overview module template that is meant to address this standard, but not everyone thinks it meets the SRS. There is also the problem that meeting one standard may be to the detriment of another. For example, adding a numbering system might detract from readability/white space (which is SRS 8.2). 

SRS 3.3 Specific and descriptive criteria are provided for the evaluation of learners’ work, and their connection to the course grading policy is clearly explained. (71% Met)

Rubrics are a controversial issue among some educators, and if reviewers do not see rubrics in a course, they often feel this standard is not met, even though “descriptive criteria” can be more than just rubrics.

SRS 8.2 The course design facilitates readability. (68% Met)

Instructors without design backgrounds were perhaps unaware of how to use built-in styles to format text and chunk information on a page.

SRS 8.3 The course provides accessible text and images in files, documents, LMS pages, and web pages to meet the needs of diverse learners. (74% Met)

This is a time- and training-intensive standard that is often not widely supported by the institution. Instructors want to do better but don’t know how or where to begin, and many find it overwhelming. One of our solutions is our Accessible Syllabus Workshop.
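To show where numbers like the "% Met" figures in Table 1 could come from, here is a short companion sketch under the same hypothetical data layout described earlier; again, the file and column names are placeholders rather than our actual export.

```python
# Sketch of how per-standard "% Met" rates like those in Table 1, and the
# "almost 80% of SRSs met in more than 80% of reviews" summary, could be
# tallied from the hypothetical srs_ratings_2015_2019.csv export
# (one row per review-and-SRS pair, with a boolean met column).
import pandas as pd

srs_ratings = pd.read_csv("srs_ratings_2015_2019.csv")

# Percentage of reviews in which each Specific Review Standard was met
met_rate = (srs_ratings.groupby("srs_id")["met"]
                       .mean()
                       .mul(100)
                       .sort_values())
print(met_rate.head(6).round(0))  # least frequently met SRSs, cf. Table 1

# Share of SRSs that were met in more than 80% of reviews
share_above_80 = (met_rate > 80).mean() * 100
print(f"{share_above_80:.0f}% of SRSs were met in more than 80% of reviews")
```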

In brief, our qualitative data analysis, rooted in the numerical trends, led us to identify three main contextual factors that explain why certain standards were met less frequently: (1) the introduction of the Wildcat Design Set template, (2) institutional influences such as Learning Objectives policies across departments, and (3) reviewers’ application of the rubric.

Figure 3: Contextual factors influencing low scores on SRSs

So what did we learn?

Both the qualitative and the quantitative data suggest that we can’t design and improve the quality of our online courses in a vacuum. We can’t just hire great instructional designers. Instead, we have to advocate for institutional change. We can’t just offer a workshop on the rubric. Rather, we have to discuss and implement the myriad ways we can design to meet the best practices that we teach. For example, in our Wildcat Design Set, we prompt instructors to show the relationship outlined in SRS 2.4, which looks at the connection between Learning Outcomes and learning activities, but this prompt often goes unrecognized by instructors, and even when it is followed, reviewers may still interpret the standard as Not Met based on their application of the rubric. Or when we consider issues surrounding readability (SRS 8.2), we must admit that aspects of our own design template may need to better model contrast and font size. Also, despite the recent increase in training and attention to issues of accessibility and UDL, there may still be a limited institutional understanding of how to design content that meets web accessibility standards for readability.

We can’t underscore enough how important the role of context is in determining what quality assurance and continuous improvement will look like for each institution. We all face national, institutional, and departmental strengths and challenges that impact our work. Our focus over the last five years has been on learning the rubric and integrating the course revisions that result from our evaluations. These processes have also shaped the unique culture surrounding UArizona policies, interoffice collaborations, and developments within the different teams at Digital Learning who work on a course throughout its lifecycle. The good news is that, despite the occasional hurdle, this analysis shows that the majority of the courses we review meet at least 80% of the standards that we evaluate. If you have to start somewhere, it may as well be at 80%.

Authored By:

Nicole Schmidt
Assistant Director, Technology, Innovation and Research