Potential of Bias in AI Generated Learner Profiles with Feedback Loops in Online Curriculum: What Should We Know?

Concurrent Session 5


Brief Abstract

Incorporating the dopamine-driven feedback loops found in social media platforms into online curriculum design presents an opportunity. However, AI-generated learner profiles carry a potential for bias that affects accessibility and universal design, as well as diversity, equity, and inclusion. What is the role of educators in safeguarding the system?

Extended Abstract

Potential of Bias in AI Generated Learner Profiles with Feedback Loops in Online Curriculum: What Should We Know?

There is a plethora of research on effective online teaching and the tools that help educators engage and teach students. One intriguing technique used by modern industry on social media sites is the application of Artificial Intelligence (AI)-fueled algorithms to create addictive, dopamine-driven feedback loops. These systems provide a stream of curated content that holds the user's attention and focus, compelling continued consumption of media. Yet there are few studies that apply these industry-driven techniques to learning tools that adapt to the individual user and are universally accessible 24/7.

However, there is a dearth of academic research on the potential bias of AI-generated learner profiles. This matters to online educators who see an opportunity to create online curricula that are as immersive and engaging as mobile applications while ensuring accessibility and universal design in support of diversity, equity, and inclusion. Yet we have little influence on the industry creators of social media feedback loops.

This OLC discovery session will 1) introduce the idea and purpose of commercial feedback loops, 2) briefly discuss how we as educators might channel these in positive directions through their application in learning management systems and instructional design, 3) explore the potential impact of bias in AI-generated learner profiles on access and equity, and 4) engage OLC conference participants on the role of educators in safeguarding the system.

As educators, we face an innate tension: static resources and uniform presentation modalities must serve many different learners. What works for some may prove ineffective for others. Because time and resources are limited, and educators do not necessarily have a clearly defined profile for each student, individualized customization is not feasible within traditional systems. Yet social media sites, news feeds, and other online platforms have demonstrated that AI and machine learning can rapidly overcome this obstacle.
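The kind of individualized customization described above can be made concrete with a minimal, hypothetical sketch. The function names, topics, and update rule below are illustrative assumptions, not a description of any real platform: a learner profile is just per-topic mastery estimates, updated after each response and used to pick what comes next.

```python
def choose_next_topic(profile: dict[str, float]) -> str:
    """Pick the topic where the learner's estimated mastery is lowest."""
    return min(profile, key=profile.get)

def update_profile(profile: dict[str, float], topic: str,
                   correct: bool, rate: float = 0.3) -> None:
    """Update mastery as an exponential moving average of correctness."""
    profile[topic] = (1 - rate) * profile[topic] + rate * (1.0 if correct else 0.0)

# A toy profile: the adaptive loop would serve "geometry" next,
# then revise its estimate based on the learner's answer.
profile = {"algebra": 0.8, "geometry": 0.4, "fractions": 0.6}
topic = choose_next_topic(profile)
update_profile(profile, topic, correct=True)
```

Even this caricature shows why the profile itself matters: every later decision the system makes flows from these stored estimates, so any bias in how they are built propagates forward.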

This Discovery Session will be interactive and facilitated so that participants leave with a better understanding of the opportunities and challenges of AI-generated learner profiles with feedback loops. The primary takeaway will be a list of tools and ideas for ensuring that industry-driven technology partners keep students safe.

  • How do we serve and advise?
  • How do we establish oversight, or a council from higher education, to ensure the well-being of learners?
  • What do we need to examine, and what are the goals? Which perspectives and concerns are critical today?
  • How do we recognize potential bias in AI-generated learner profiles, and how do we remove those biases with inclusive curricula?

Educational Applications of Feedback Loops

The process through which social media sites gain and retain the undivided attention of their users is a relatively new phenomenon that is only beginning to be fully explored. Katambwe (2020) examined social network systems as dialogical communication tools, defining addiction as the "loss of control over the use or consumption of something which was a way out of current experience" (Katambwe, 2020). He found that Facebook and other social media platforms provide social validation of oneself, accord with others who emerge from the digital dialog, and pleasure from the short-lived experience, which in turn ensures that the social validation loop grows indefinitely (Katambwe, 2020).

While most social media companies are reluctant to discuss their products in these terms, in a 60 Minutes interview former Google product manager Tristan Harris held up his smartphone and stated: "This thing is a slot machine . . . every time I check my phone, I'm playing the slot machine to see, 'What did I get?' This is one way to hijack people's minds and create a habit, to form a habit. What you do is you make it so when someone pulls a lever, sometimes they get a reward, an exciting reward. And it turns out that this design technique can be embedded inside of all these products" (Harris, 2017).

Harris (2017) goes on to explain that the primary goal of these companies is to hold users' attention at all costs. Another interviewee, computer programmer Ramsay Brown, claimed that "a computer programmer who understands how the brain works knows how to write code that will get the brain to do certain things" (Harris, 2017).

Stepping back from the motivations and goals of social media sites and looking objectively at the tools they employ, these companies have found a way to work with brain chemistry and hold a user's attention on a particular subject. Holding attention is also a goal of effective instructional design, and provided these systems can be implemented in a healthy and constructive way, they have the potential to revolutionize digital learning resources.
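The "slot machine" mechanism Harris describes is, at its core, a variable-ratio reward schedule: an action sometimes pays off, unpredictably. A minimal, hypothetical sketch of that schedule applied to a learning context follows; the function, parameters, and reward probability are illustrative assumptions, not a documented feature of any product.

```python
import random

def run_session(num_exercises: int, reward_probability: float = 0.4,
                seed: int = 0) -> list[bool]:
    """Simulate a variable-ratio reward schedule: completing an
    exercise only *sometimes* triggers a reward (a badge, an
    animation, a streak bonus), and the learner cannot predict when."""
    rng = random.Random(seed)
    rewards = []
    for _ in range(num_exercises):
        # Each completed exercise is the "lever pull"; the reward
        # arrives intermittently, which is what makes the loop sticky.
        rewards.append(rng.random() < reward_probability)
    return rewards

rewards = run_session(10)
```

The design question the session raises is precisely about this knob: the same intermittent-reward pattern that drives compulsive scrolling could instead pace encouragement through a curriculum, and who tunes it matters.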

Potential of Bias in AI Generated Learner Profiles with Feedback Loops

While some research has examined feedback loops in curriculum and assessment, the recent challenges of potential bias in sophisticated, adaptive AI systems, and its effects on accessibility and universal design, remain largely unaddressed. Recent news has covered Amazon's AI bias against women and Google's bias against Black women. In an excerpt from the World Economic Forum Annual Meeting (January 18, 2019), Ann Cairns, Vice Chair of Mastercard, raised the question of "why AI is failing the next generation of women." The article discussed how prevalent commercial AI is in the modern world, from applications that influence what we buy, what we eat, and the media we watch, to more controversial areas such as hiring decisions and criminal justice. Indeed, AI has many benefits, but as Cairns (2019) states, "we need to develop standards and testing for AI that enable us to identify bias and work against it." It is well documented that AI algorithms have been trained on data that carries existing biases forward. For example, when recruiting for a company, ".... male candidates will find that their AI rejects female candidates, as they don't fit the mold of past successful applicants." As educators, we must be deliberate when working with industry partners to ensure those biases do not carry over into the feedback loops of online learning.
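The mechanism behind the hiring example above is simple enough to caricature in a few lines. The data, names, and "model" below are entirely hypothetical: a system that learns the majority outcome per group from biased historical records will faithfully reproduce that bias as a rule.

```python
from collections import Counter

# Hypothetical historical hiring records as (gender, hired) pairs.
# Past decisions favored men, so the labels encode that bias.
history = [("M", True)] * 80 + [("M", False)] * 20 \
        + [("F", True)] * 20 + [("F", False)] * 80

def train_naive_model(records):
    """'Learn' the majority outcome per group: a caricature of a model
    that treats a protected attribute as a predictive feature."""
    by_group = {}
    for group, hired in records:
        by_group.setdefault(group, []).append(hired)
    # The most common historical outcome becomes the decision rule.
    return {group: Counter(labels).most_common(1)[0][0]
            for group, labels in by_group.items()}

model = train_naive_model(history)
```

Real systems are far more indirect (the bias usually enters through proxy features rather than the attribute itself), but the principle is the same: skewed training data becomes a skewed learner profile unless someone tests for it, which is exactly the standards-and-testing point Cairns makes.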

Another article, "How Artificial Intelligence Bias Affects Women and People of Color" (Berkeley School of Information, December 8, 2021), describes AI as a "…. computational problem-solving mechanism." The article states that "bias, or prejudice for or against a thing, person, or group, is traditionally thought of as part of human decision-making. But, when left unchecked, bias can extend beyond individual actions and infiltrate the systems created by people designed to protect everyone." It goes on to say that "members of marginalized groups, such as women and people of color, are often those adversely affected by erroneous algorithms." This statement prods today's online educators to ask what we need to know about the application of AI-generated learner profiles with feedback loops in online curricula, and how we can partner with social media companies to address potential bias from the outset.

References

Berkeley School of Information. (2021, December 8). How artificial intelligence bias affects women and people of color. https://ischoolonline.berkeley.edu/blog/artificial-intelligence-bias/

Cairns, A. (2019, January 18). Why AI is failing the next generation of women. World Economic Forum. https://www.weforum.org/agenda/2019/01/ai-artificial-intelligence-failin...

Harris, T. (2017, April 9). What is "brain hacking"? Tech insiders on why you should care (A. Cooper, Interviewer) [60 Minutes interview].

Katambwe, J. (2020). Is dialogue addictive? Of loops, pride, and tensions in social media construction. Language & Dialogue, 10(1), 49-73. https://doi-org.ezp.lib.cwu.edu/10.1075/ld.00059.kat