Measuring Online Learning Readiness in Higher Education

Concurrent Session 6
Streamed Session


Brief Abstract

This panel features researchers from two universities who are collaborating on a validation study of an online learning readiness instrument. Panelists will discuss the current state of the online learning readiness literature, their validation methodology, study results, and plans for using the instrument for intervention and student support.

Presenters

Mary Ellen Dello Stritto is the Director of Research for Oregon State University Ecampus, where she designs and conducts research studies on online teaching and learning, provides support for faculty research on online education, and produces tools to promote research literacy. Her background is in psychology with a specialization in quantitative methodologies, survey design, and statistical analysis.
Rebecca Arlene Thomas is currently a Postdoctoral Scholar in the Oregon State Ecampus Research Unit (ECRU). The ECRU conducts original research in online higher education and promotes collaboration and research literacy in the field. Before working at Oregon State, Rebecca earned a master's degree in Instructional Psychology & Technology from Brigham Young University and a PhD in Psychology from the University of Texas at San Antonio. In addition to her work in online education, she enjoys conducting research about college student relationships and aggressive behavior.

Extended Abstract

Overview

In this panel, three researchers from this project will discuss the following:

a) what they have learned about the current state of the published literature on online learning readiness

b) the methodology involved in validating the readiness instrument

c) preliminary study results and plans for using the instrument for intervention and student support efforts.

Audience engagement:

This presentation will include three sections: a discussion of the literature, the methodology, and the results and application. At the end of each section, participants will be asked to contribute to a shared Google Slides deck, writing down ideas and thoughts to share. After the first section (literature), participants will reflect on how readiness is, or might be, assessed at their own institution and will write readiness-related considerations for their institution on a slide. After the second section (methods), participants will be asked to think about barriers to measuring readiness at their own institutions and to share them on a slide. After the third section (results and application), participants will be asked to write down a few ideas about how they might use a readiness instrument for student support in their own role. The slides will remain available to participants after the presentation. If the online platform allows, participants will also be invited to share their thoughts verbally during these activities.

Learning objectives

By attending this session, attendees will be able to:

  1. Describe how online learner readiness applies to their institution
  2. Discuss barriers to measuring online learner readiness
  3. Apply the online learner readiness research to their own role

Context

As online courses have become common on college campuses, higher education personnel have been interested in the factors that prepare students for online coursework. While several measures of online learner readiness are currently available, most were developed more than five years ago and include outdated questions about technology. Few published online readiness measures have undergone rigorous validation (e.g., Yu, 2018), and few studies have assessed the predictive validity of these measures.

Methods

Our study sought to fill this gap by building on Dray et al.'s (2011) work in combination with other literature in this area and consultation with student success professionals and academic advisors. Two universities collaborated to develop an online learning readiness instrument that measures students' self-efficacy, locus of control, self-directed learning, and technological capability. In an initial phase, we conducted cognitive testing with student participants to assess the validity of each item in the instrument. In the second phase, we validated the instrument with a large sample of newly enrolled students at both campuses.

At University A, we recruited online students in their first enrolled term via email and online orientation. First-year campus-based students who had not taken online courses were recruited via email as a comparison group. At University B, the instrument was embedded in all online courses, and students consented to have their responses included in the study.

Students were asked to fill out an online survey containing the consent form, questions about courses taken in the past, the online readiness instrument, and a few demographic questions. Students were also asked to provide their name and student ID number in the online survey. If they agreed to this component of the study, their identifying information was used to request their GPA, grades, and enrollment status for four terms at University A and for two semesters at University B.

The 41-item readiness instrument has seven subscales:

  1. Locus of control – perceived control over one's education
  2. Self-regulation efficacy – how well students can regulate their thoughts and behaviors in order to complete their coursework
  3. Educational skills efficacy – how well students can perform specific skills that are important to coursework
  4. Communication efficacy – how well students communicate with classmates, group members, and instructors
  5. Efficacy challenges and commitments – how well students handle challenges and personal commitments outside of coursework
  6. Locus of control technology – how students approach challenges related to technology
  7. Efficacy technology – comfort with common technologies students may need to use in online courses
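As a rough illustration of how an instrument like this might be scored, the sketch below averages each student's item responses within a subscale. The item labels, the item-to-subscale mapping, and the mean-scoring rule are our own assumptions for illustration; they are not the study's published scoring criteria. Only the item counts per subscale (7 + 8 + 3 + 10 + 4 + 4 + 5 = 41) come from the results reported below.

```python
import pandas as pd

# Hypothetical item labels grouped by subscale. Item counts match the
# instrument, but the labels themselves are illustrative assumptions.
SUBSCALE_ITEMS = {
    "locus_of_control":       [f"loc_{i}" for i in range(1, 8)],   # 7 items
    "self_regulation":        [f"sre_{i}" for i in range(1, 9)],   # 8 items
    "educational_skills":     [f"edu_{i}" for i in range(1, 4)],   # 3 items
    "communication":          [f"com_{i}" for i in range(1, 11)],  # 10 items
    "challenges_commitments": [f"ecc_{i}" for i in range(1, 5)],   # 4 items
    "locus_of_control_tech":  [f"lct_{i}" for i in range(1, 5)],   # 4 items
    "efficacy_tech":          [f"eft_{i}" for i in range(1, 6)],   # 5 items
}

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per subscale for each student (row)."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1)
         for name, items in SUBSCALE_ITEMS.items()}
    )
```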

Participants

A total of 615 students completed the survey at University A. Of these, 493 (80%) were online students and 122 (20%) were campus-based students. At University B, 9,760 online students consented to their responses being used in the study. Data analysis was conducted separately for University A and University B.

Results

University A: The mean age of participants was 28.2 years (range 18-64). A total of 368 (60%) identified as female, 217 (35%) as male, 12 (2%) as genderqueer/gender non-conforming or other identities, 10 (1%) as trans male or trans female, and 8 (1%) chose not to identify. The majority of participants identified their race/ethnicity as White (446, 73%). The next largest group identified as two or more races (66, 11%), while 48 (8%) identified as Hispanic/Latino, 36 (6%) as Asian, and 15 (2%) as Black or African American. Finally, less than 1% identified as American Indian or Alaska Native, or as Native Hawaiian or Other Pacific Islander. The largest major represented was Computer Science (167, 27%). A total of 124 (20%) participants indicated they were currently a parent or guardian of at least one child under the age of 18.

Statistical analysis revealed adequate reliability (Cronbach's α > .70) for five of the seven subscales:

  1. Locus of control (7 items, α = .741)
  2. Self-regulation efficacy (8 items, α = .813)
  3. Educational skills efficacy (3 items, α = .559)
  4. Communication efficacy (10 items, α = .828)
  5. Efficacy challenges and commitments (4 items, α = .507)
  6. Locus of control technology (4 items, α = .760)
  7. Efficacy technology (5 items, α = .729)
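For readers who want to run the same kind of reliability check on their own data, a minimal sketch of Cronbach's α follows. This is the standard textbook formula, not the study's analysis code; the function and variable names are our own.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: alpha for a 7-item subscale answered by 615 respondents.
# Random, uncorrelated data is used here, so alpha will be near zero
# rather than the .741 reported for the actual locus of control subscale.
rng = np.random.default_rng(0)
print(cronbachs_alpha(rng.integers(1, 6, size=(615, 7)).astype(float)))
```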

Discussion

The data collected informed the scoring criteria, subscale creation, reliability, and validity of the new instrument. The predictive value of the instrument is being tested by tracking participants' academic outcomes and enrollment over one academic year. The intended outcome of this study is a validated instrument that success counselors and advisors can use as a tool to support student success in online learning.
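One common way to test predictive validity, consistent with the outcome tracking described above, is to regress a later academic outcome on the subscale scores. The sketch below assumes a hypothetical file of subscale scores joined with follow-up GPA; the file name, column names, and the choice of ordinary least squares are our assumptions, not the study's analysis plan.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per student with mean subscale scores and
# the GPA collected in the follow-up term(s). All names are illustrative.
df = pd.read_csv("readiness_scores_with_gpa.csv")

subscales = ["locus_of_control", "self_regulation", "educational_skills",
             "communication", "challenges_commitments",
             "locus_of_control_tech", "efficacy_tech"]

X = sm.add_constant(df[subscales])                    # add an intercept term
model = sm.OLS(df["term_gpa"], X, missing="drop").fit()
print(model.summary())  # coefficients show which subscales predict GPA
```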

References:

Dray, B. J., Lowenthal, P. R., Miszkiewicz, M. J., Ruiz Primo, M. A., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29-47.