Alec Couros promotes a playful, creative, and curious, yet critical, exploration of AI. These images are examples of AI-generated avatars he has shared on social media.
Alec Couros is a distinguished professor of educational technology and media and the Director of the Centre for Teaching and Learning at the University of Regina. He is an expert in the field of educational technology and was among the first to develop concrete examples for integrating AI into education. He has given several talks on the topic of AI in teaching and learning, including a session offered by the CTL in January 2023. In his trainings and talks, he explores the potential and implications of using generative AI, such as ChatGPT, in teaching and learning. He provides instructors with the tools, resources, and ideas needed to better understand and embrace generative AI in the classroom. At the same time, Alec Couros is cognizant of the threats AI poses for society by increasing cybercrime and by making it exceedingly difficult to discern fact from fiction.
In the interview we explore topics such as the role of AI in assessment, the potential for AI-generated learning content, the shifting role of instructional designers, the importance of media and information literacy, and the ethical considerations and imperatives surrounding AI technology, specifically critical thinking, intellectual property, trust-building, and equitable access.
Alec points out that traditional assignments and assessments will need to change: “We won’t be able to detect AI-generated writing anytime soon”. Assessments will need to focus more on process-oriented and AI-assisted approaches. He warns against creating a silo in higher education where people work and write differently than in the real world: “We have to learn how to learn. But we have to also learn how to learn with AI”. Alec views the integration of AI from the perspective of universal design for learning, aiming to offer tools that are advantageous for all students and are both standardized and easily accessible. He points out that while certain vendors might have higher costs, there are open-source alternatives that are worth exploring. These options could potentially deliver AI solutions that could be implemented across campuses in the near future.
Alec brings up the intriguing (and scary) idea that AI could potentially generate new educational content, such as courses or lectures, in the style of a specific professor based on their past lectures and publications. This would involve using AI to analyze an instructor’s existing body of work, including published papers and recorded lectures, and then synthesizing that information to create new educational materials that mimic the instructor’s style, approach, and expertise. This raises questions about the implications for preserving and extending an instructor’s influence even after their retirement or passing. It also prompts considerations about intellectual property rights, ethical concerns, and the potential impact on the authenticity of education.
AI can assist in generating various learning materials and artifacts, be it instructional videos, questions, quizzes, or interactive content, making it easier and faster to develop educational resources that align with sound instructional strategies. Alec predicts that the role of instructional designers may shift towards supervising and ensuring the accuracy and pedagogical appropriateness of AI-generated content: “supervising the content that’s developed, making sure that it’s logically sound, and it’s pedagogically sound”.
Alec highlights the concern that AI can be used by cybercriminals to create sophisticated phishing emails and scams, emphasizing the need for public education to enhance media and information literacy: “Seeing is no longer believing. … If we understand that and we can help students and the general public to have a skeptic’s mindset… we’ll have a better society.”
Watch the complete interview here:
Edited Text Version
Getting Started with AI
Stefanie Panke: What initially sparked your interest in AI, and what led you to take swift action in compiling these early training resources?
Alec Couros: AI has been present for decades in various forms. My engagement with AI began through social media and the early internet era, dealing with algorithms and smart devices. In late 2022, I discovered ChatGPT through a Reddit forum. While it quickly gained popularity, I recognized its potential for personalized assistance in education, aligning with my focus on personalized learning, particularly from my experience with social media. The interest in educational technology, particularly from the personalized learning perspective, was what drew me in. It felt like there was instant promise. Now, a few months later, the world has changed, and things are rapidly evolving.
Stefanie Panke: I’m curious about the AI tools you’re currently using. Could you share what’s open in your browser tabs right now?
Alec Couros: I’m currently using Midjourney for images, experimenting and having fun on social media. Perplexity is quite interesting for its research assistance. Of course, ChatGPT with its plugins and code interpreter is a significant one; I’m using the paid pro version. I also use Bing on my Edge browser occasionally, and every now and then, I will use Google Bard, though I still find the results quite poor in comparison. Occasionally, I explore tools like Synthesia to create avatars or WellSaid Labs for voice generation. I’m also using Tome, which helps create small presentations. I’m constantly using tools for various purposes, some for practical reasons and others to push the limits and understand these tools better, so I can guide others who might use them.
Stefanie Panke: In my role as a digital pedagogy coach, I’ve noticed a significant divide among faculty. Some, like you, are early adopters and tech-savvy, while many others, across different subjects, have yet to use any AI tools. To overcome the fear barrier, I often emphasize how these tools can enhance efficiency and productivity in their teaching. Are there specific tools you find effective for faculty, and how do you suggest they approach AI in terms of gaining teaching benefits and efficiency gains?
Alec Couros: You’re right, productivity and efficiency are important aspects of these tools. I’ve found that some administrative tasks can be streamlined using AI. These tools can help automate certain tasks, freeing up more time for creative and engaging aspects of teaching. I’ve experienced increased efficiency by using AI tools for tasks like sending messages to campus and conducting preliminary research on articles. It’s given me more time to do the things that I typically use a lot of labor for. I often hear the expression ‘outsource your labor, not your thinking’. For instance, in preparing for my teaching, I can quickly update my syllabus and improve it with assistance from tools like ChatGPT or Claude. Spending time with my children co-creating stories with AI, exploring everything from early literacy to middle and high school literacy, has been a lot of fun. This approach aligns with the idea of outsourcing labor but not critical thinking.
AI in Higher Ed
Stefanie Panke: That’s a great perspective! In a recent conversation with master’s students about AI, I heard the concern that generative AI might devalue academic credentials. Since AI can complete tasks such as applying for programs, passing tests, and submitting essays, this could potentially undermine the significance of degrees. How do you respond to these concerns?
Alec Couros: Our assessments and assignments must evolve. Faculty should understand that detecting AI-generated writing won’t be feasible in the near future. OpenAI ceased work on their text classifier tool designed to distinguish between human- and AI-generated text. Other platforms like Turnitin focus on reducing false positives, which implies lingering issues. These tools are susceptible to manipulation, rendering long-term effectiveness questionable. We need to shift towards process-oriented assessments with AI assistance, offering insights into AI-supported learning, so we don’t have to create this silo of higher ed, where we are learning and working in a way that we don’t in the real world. We have to learn how to learn. But we have to also learn how to learn with AI. Rethinking creativity and evaluating the authenticity of student work become essential factors in this context.
At the same time, we have to think about equity issues. Obviously, we’re going to have students in the classroom who have varied levels of access to some of the AI tools. Some can afford the premium products, others cannot. Those who can afford the premium products will have an even greater advantage on top of the socioeconomic advantages that they already have within that equation. So we have to be thoughtful about how we actually make the class equitable, and make the tools equitable in our assessments and in the way that we rethink learning. We have to think from a universal design for learning perspective: try to provide tools that will benefit all learners and make them standard and accessible, and I think we can do that. Some of the vendors, of course, will be more expensive, but there are open-source solutions that we can look at, so that we can potentially provide AI solutions across campus in the very near future.
Information Literacy, Deep Fakes and Trust Networks
Stefanie Panke: What I love about your work is that it is optimistic and playful around technologies, but not uncritical. I’ve also learned from your social media postings and your writings around AI that you are definitely concerned that AI will allow cybercriminals to create extremely sophisticated phishing emails and elaborate scams.
Alec Couros: There have been studies on how generative AI amplifies disinformation and false information to a greater extent. Deepfakes, for example, complicate the authentication of individuals, which I’m familiar with due to my encounters with catfishing scams. It just gets easier and easier to take anyone’s face, anyone that’s successful on the Internet. I can put you into any situation. I can make a video of you and so on. There are really very few safeguards to detect this. Companies have been talking about digital watermarks, but it’s going to be very difficult to do, because there’s no standard protocol for the way that these tools are developed.
Seeing is no longer believing. Ultimately, the safeguards need to rely on better public education. This always comes down to media and information literacy education. If we can help students and the general public to have a skeptic’s mindset, we will have a better society.
Stefanie Panke: How will we evaluate the information that we encounter online in a future of AI-saturated content?
Alec Couros: We can’t fit knowledge into checklists anymore. There are people like Michael Caulfield who put together a really great free book called “Web Literacy for Student Fact Checkers.” He looks at better strategies, such as lateral reading, for instance, and not focusing so much on the artifact, but focusing much more on the source. I think that can be applied to audiovisual media: What is the source of these images? Are they credible sources? Focusing more on the communicator rather than the communication, that’s going to be really important. If we spend time trying to analyze the accuracy of every single artifact that emerges, we’ll never get anywhere, and we’ll always have doubt. So we have to learn how to find trusted sources. There will be technologies down the road that will be controversial but will provide an added layer of trust. If you look at OpenAI’s venture into Worldcoin, where people are doing retinal scans, it has some benefits to it, but there are also many privacy implications about having your retinal scan available to a company to identify you across the network. So there are always issues with that because our biometrics are going to play a big part and they can be compromised and used in very bad ways. At the same time, they’re also essential to authenticating trust through networks.
Future of Instructional Design
Stefanie Panke: Let’s talk about the job market. How will AI change the field and profession of instructional design? And what other job sectors will have similar severe transformations? Are we doing enough to prepare our students for this?
Alec Couros: So instructional design is an interesting area. It’s a place where we are seeing types of tools that are assisting in the development of learning artifacts and learning tools that used to be created exclusively by instructional designers. But now it can be done through AI. For example, someone can upload an instructional video and create a number of artifacts from it that will align with sound instructional strategies or reinforcement strategies around the content itself. I think the shift for instructional designers is less about the actual development of content and much more about AI supervision, reviewing the content that AI develops, making sure that it is logically and pedagogically sound.
I think we’ll see more of that. The accuracy and pedagogical appropriateness will be increasingly entrusted to instructional designers, while deferring more to AI capabilities for the mechanics of developing artifacts, especially for e-learning courses.
When you think about academics, consider what’s happening in Hollywood. The concept of generating new episodes for shows like South Park, Seinfeld, or Friends using AI is analogous. AI can place human actors in novel situations they’ve never encountered or agreed to. Extending this idea to e-learning, we might reach a point where retired instructors could have AI-generated courses created using their expertise, even if they’re no longer available.
Generating new content, courses, or updating existing courses based on an instructor’s published papers or previous artifacts is indeed feasible. Instances where students continue learning from videos after a professor’s passing have been documented. This raises questions about personnel, licensing rates, content ownership, and the exclusivity of rights. Both instructors and universities, along with their respective unions, need to consider how to license content when developed by instructors that may extend beyond the purpose and creation of discrete artifacts.
I think there are incredible applications for that type of thing. But at the same time we have to think about whether the output is accurate, respectful, whether this is ethical practice, whether the intellectual property aspects are covered.
The practices around distance learning in particular are going to be muddied by AI. We’re going to have to rethink what it means to work in an instructional design and online learning setting.
Stefanie Panke: Thank you so much. This was extremely insightful. For people who want to learn more, where can they find you, follow you and read about your work?
Alec Couros: You can find me on Twitter (@courosa), LinkedIn (Alec Couros), or email me at [email protected]. My website is couros.ca.
Dr. Alec Couros is widely recognized as an international leader in the field of educational technology as well as a pioneer in the area of open education. In his 31 years as an educator, Alec has worked as a teacher, youth worker, educational administrator, IT coordinator, consultant, and professor, with employment in K-12 schools, youth justice facilities, technical institutes, and universities. Thanks to this wide spectrum of experiences, Alec has built a reputation as a leading and influential keynote speaker in the areas of digital citizenship, networked learning, social media in education, media literacy, and open education, and he has given hundreds of workshops and presentations across North America and around the world. In addition to his work as an internationally renowned speaker, Alec is a professor of educational technology and media in the Faculty of Education, University of Regina, Canada.