“If we want better AI, we have to become better people” – An Interview with Stephen Downes

Stephen Downes, author of OLDaily, is possibly the original social media influencer—long before this was a term people knew and a profession one could aspire to, and definitely long before TikTok dances. Back in the mid-1990s, Stephen started sharing a daily newsletter for online learning professionals that has shaped how people see educational technologies for the past three decades and continues to reach millions of subscribers globally. Part reading list, part notebook, part editorial, it is an exercise in thinking in public.
Recently, Stephen discussed the connection between generative AI and assessment, and coined the term ‘AI-agnostic assessment’. In an era when simple prompts, browser features, plugins, and desktop apps can all assist with or fully complete assessments, higher education needs to rethink what meaningful, fair, and authentic assessment looks like.
In response, Stephen called for ‘AI-agnosticism’, which he described as setting genuine tasks and not caring how these are accomplished, only that they are accomplished as well as possible: “It doesn’t matter whether the person used AI to evaluate quality of results, make decisions about tools, or critique outputs. It *does* matter that whatever they have produced as a solution to the genuine task is in some reasonable way demonstrated to be the ‘best’ (whatever that means, as it varies by context) solution” (Stephen Downes, LinkedIn).
The interview is a wide-ranging conversation about genAI, authenticity, regulation of technology, agency and human consciousness.
Full Recording

Edited Transcript
On OLDaily and the “100% human-authored” disclaimer
Stefanie Panke: My first question is about OLDaily and the disclaimer that it is 100% human-authored. What is the value, for you and for others, of not using generative AI to craft this newsletter?
Stephen Downes: Okay, so first of all, let me be clear: I use AI for lots of things, and I actually wrote an article a number of years ago about how I use AI. It talked about my car, search, and cleaning up speckles on my photos using AI. I use AI. I think AI is great. I use it to help me write software because it’s better than I am. The posts in OLDaily—I started putting this notice on a few years ago because there was a trend of AI summaries of curated articles. I wanted to be clear that that’s not what I’m doing. For people who aren’t familiar with OLDaily: each weekday it’s a list of short articles—maybe 100, maybe 200 words max—where each piece is about a resource I found on the web. The title of my OLDaily piece is always the title of the article. I indicate the author and where it came from, and then I write a commentary about what I’ve read. I don’t just summarize; I contextualize and offer opinions. Sometimes I use the item as a launching pad to riff on my own thoughts. Other times, I’ll simply summarize if that’s all I want to do with that article. The context and background I bring—I’ve done surveys on this in the past—is the really important thing I add to the list of articles. That’s why I put the note there, so people new to the newsletter understand it’s not an AI-curated list of what’s popular; it’s something different. That’s all.
What “AI-agnostic” assessment could look like
Stefanie Panke: Thank you for the clarification. That seems like a very important distinction: you use AI for many different things, but not for this. Knowing when to use it and when not to use AI is something that schools and universities—organizational learning contexts—really struggle with. You have offered an idea I find intriguing: “Well, you can’t figure out if students use it or not anyway, so why not create assessments—create assignments—that don’t hinge on whether or not AI is used?” What could this look like? Could you give a concrete example?
Stephen Downes: Sure—and I’ll add that this is long overdue. It’s been due since long before the arrival of AI. The way you framed the question made me think back to grade 12. We were reading Dickens. You can tell Dickens was paid by the word because he used so many of them unnecessarily, with a lot of irrelevant detail. The teachers wanted to make sure we read the book, so they’d give little tests like, “What color was David Copperfield’s coat…?” This was the 1970s. I boycotted the tests in grade 12 English on the grounds that they were pointless. I refused to do them and therefore failed grade 12 English, even though I still almost passed. It’s ironic that I make a living by writing today. It’s not simply a question of changing the assignments. I think it’s about changing learning as a whole—what we think learning and education are. Education as it’s currently structured is: we give you a body of content to remember, you remember it, then prove it somehow, and we’re done. That has always struck me as pointless, and AI shows pretty definitively that it’s pointless.
So what would it look like? Pretty much anything that isn’t education as we understand it today. Concrete examples: have students conduct an advocacy campaign for something like the environment. Have students raise money for a worthwhile social cause. Have them start a business and provide a service and make money. Have them find a solution to a pressing community problem. Have them design recreational activities for very young children. Have them conduct scientific tests to determine whether the water in the local river or creek is safe to drink—or safe to swim in. Have them design solar-powered vehicles that can cross the community and then race them. Have them design a video game that raises awareness of the importance of bees. Any of those are activities we can evaluate as instructors. We can tell whether they put in a decent effort, whether the work resulted in a good outcome, and whether what we taught had any impact on what they did. We can see them learning as they go through the activities.
It doesn’t matter whether they used AI; it’s irrelevant. In fact, how they use AI, if at all, is something we can evaluate: did you use AI wisely? They’re not going to be able to do these things if they’re sitting in a classroom listening to someone talk about history in a way that separates them from it, or doing practice sets of mathematical problems. We have to change how we approach learning.
When authenticity (provenance) matters
Stefanie Panke: Well, I do like your idea because it’s simple and it meets most people’s intuition that we care about outputs—that in the real world, as opposed to within educational or institutional boundaries, it matters what you produce and the quality of what you produce. If I go to a doctor’s office and want to get a diagnosis, I would prefer the correct answer that is AI-generated over the incorrect answer, even if the doctor thought about it a lot. But there are many other situations where the effort and care that went into crafting something matters. A counterexample would be a letter from a relative. I would much rather receive something that’s insulting or rude but written by the person than something that exactly meets my interpersonal expectations but was AI-generated. Where along those lines do you think we should teach and foster authenticity?
Stephen Downes: In the case of a letter from a relative, provenance matters. It matters where it came from because you value a letter from a relative; you don’t value a letter from a machine. So it’s not really a question of whether it was better or worse done, or indicative of skill. It’s the provenance—the source. Where that matters is where we probably shouldn’t use AI—or, more accurately, we should be open and honest about our use of AI.
In academia, provenance matters because we need to credit sources. If I write an article and I have an idea—say, the term “AI agnosticism”—and I read it somewhere, then I should attribute that source. If that source is a human, I should attribute the human and ideally the place where they wrote it. There’s a whole discussion we could have about how to do that. If the source is a machine, I should say it came from a machine and, by implication, probably from some human somewhere; I just don’t know who, and it would take a lifetime to find out. Failing to do that implies I’m saying I came up with this all by myself—the provenance is me.
That matters in academia. It maybe matters more than it should. A lot of the time, the idea is what matters, not where it came from. “Two plus two equals four” is an idea; we don’t need to attribute the source of that because the idea is what matters, not who came up with it. But in a lot of academia, who came up with it matters, and I’m not always sure why.
So there’s a skill here in drawing that distinction. How important that skill is, I don’t know—it’s kind of important. It’s important, maybe, that you know the distinction between sending a personal gift and sending something the computer recommended. But really, that’s the distinction you’re drawing here. I thought you were going to say you prefer a handwritten letter rather than one that’s typed. That was the thing when I was a kid: make sure when you write to your grandmother—first of all, make sure you write to your grandmother to thank her for the gifts—and make sure that you handwrite it, don’t type it, even though I had this classic Underwood typewriter I was so proud of. These days, I wouldn’t care if somebody handwrote something to me or not, and nobody cares. I’d text my grandmother, if she were still around, and say thanks—that would work just as well. These things change over time.
Similarly, people talk about basic skills—the sorts of things you practice in school, like reading and mathematics. But these basic skills change over time too. Being able to do calculations in your head used to be really important. Not so much now. In fact, we’d rather use machines because we can never do it in our head as fast as a machine can, and we need to do calculations much more quickly than we used to. There are lots of things like that.
Do basic skills and production still matter?
Stefanie Panke: In an interview with Rick West about a variety of topics (open education, EdTechBooks, generative AI), he explained that he himself learns by writing and, more generally, that in academic contexts we expand our understanding by engaging in research and writing. That’s why we assign essays. Now, if students do that with generative AI, do we lose this aspect of assessment that’s actually generative toward learning? Or does it matter less than we think?
Stephen Downes: Well, it matters less than we think—you knew I was going to say that. But let me explain why, because that’s more interesting. It comes down to the assertion “I think when I write,” and therefore writing is thinking. If you want to learn how to think, you have to do it a lot—just like anything—so you have to write a lot, and hence most academic exercises are writing exercises. It’s not true, though, that we all think when we write—or that we all think by writing. In my case, writing is the output process. I’ve already done my thinking. I’m sure others are like this too. Even in university, writing academic articles, I would sit down, write the article beginning to end, and it was done. I might go back and edit spelling—I’m a lousy speller—and insert references (I note them but don’t memorize page numbers). But the thinking was done in other contexts. For me, a lot of the time, I’m thinking when I’m speaking. That’s why I like doing these interviews. For you, this is an interview; for me, I’m practicing my thinking. Then we’ll get a transcript. I’ll be happy with how it came out the first time—that’s the evidence of my thinking. Writing it out again—why would I? I’ve done the thinking already; now I’m just “printing” it.

Different people think in different ways. Artists think visually—in pictures. I’m not doing the whole learning-styles thing here; in fact, the claim “we think by writing” sounds more like an endorsement of learning styles than what I’m saying. It’s just that people think differently. Musicians think in music—not literally with tunes in their head (though that may be part of it), but they see harmonies and movements. I’m not a musician, so I’ll fudge the terms. Architects think in terms of structure and drawings. Seymour Papert was tragically injured in an accident in Vietnam, from which he ultimately died. In the moments before that accident, he and his colleagues were talking about the flow of motorcycles and vehicles through an intersection and how you would represent both mathematically—he was thinking in mathematics at that time. The accident is unrelated, but it’s interesting that when Papert looked at the world, he was thinking in mathematics.

So, different people think in different ways. It’s not true that we must practice writing a lot in order to learn to think. What we have to do is think a lot to learn to think. In different contexts and tasks, different ways of thinking are appropriate—partly determined by the person, partly by the task, partly by who-knows-what. That’s how we learn to think. I think it would be better to encourage people to engage in many diverse ways of learning how to think, and to pursue the way that works best for them.
Stefanie Panke: The point is about production. You produce something, and that’s how you process. If I outsource production, doesn’t that hurt my ability to mull things over and deepen my knowledge?
Stephen Downes: That’s not a bad argument. I’d still disagree. I had this discussion with someone the other day, and we saw the output in my newsletter. There’s a long history in the philosophy of science that says the purpose of science is to solve problems, as compared with the history of technology, where the purpose is to make things. For me, science is about discovery—finding something new. So I’ve offered two alternatives to making things: solving problems and discovering new things—and I’ll bet there are more. It isn’t simply about producing, or the pressure of producing an output to force thinking. That works for some people. In academia especially, the pressure of writing to a deadline forces many to think—I get that. But it’s not the only way. Academia is a bit self-selecting in this way: it selects for people who think well by producing to deadlines, and filters out those who think in different ways.
Future skills and the AI era
Stefanie Panke: You’ve touched on future skills. Does education prepare students for what they might do later in life? You seem skeptical. My last big technology shift was when I started university in 1997—the Internet was just becoming a thing. It happened only in one room; you had to line up; you printed everything; your search engine was AltaVista. Today my job wouldn’t exist without web technologies. What’s your prediction—how big will this workforce shift be? Like the internet? Bigger? Different?
Stephen Downes: It’s going to be big, obviously, because we’re moving from an environment where the only things that could reason were humans to one where—well—“rocks can reason” as well. It’s interesting: you said your job wouldn’t exist. Part of your job is interviewing people around the world on Zoom. That’s a skill you couldn’t have predicted needing in 1997—unless you were going to be a late-night TV host. But it is a skill learned through experience. I watched Dave Cormier become a master of podcasting by doing a thousand episodes of his online show—something nobody would have conceived of before unless you were Johnny Carson (or the Australian equivalent). What would those skills be? That’s harder, and it’s part of why we shouldn’t try to standardize a single set of skills—it’s too easy to get wrong. I was thinking about this while listening to a podcast—an episode of This Week in Tech by Leo Laporte (who actually started in television before that gig dried up)—about conversation as a skill we need to preserve, even in a world of AI. They talked about AI producing personal podcasts for people. That probably wouldn’t attract me. What I like about This Week in Tech is that I’m listening to real people with real stories. Leo Laporte boasts about his son, a chef who opened a restaurant in New York City and has a TikTok feed—I wouldn’t get that from a purely synthetic show.
But what if I had a personalized AI version of that podcast? I could react to the podcast as it runs, and have a conversation where the podcast responds to me in a way that stays true to the original but draws on Laporte’s background, writings, and shows—so I actually have a conversation, mostly listening but sometimes interacting. Then my version of the podcast reports back to Leo Laporte what we talked about—probably an aggregate summary across thousands of listeners. There’s a whole bunch of skills in that. We’re not sure what they are yet, but we can see what they might be—how the discipline of hosting a podcast evolves. You still need to be human and unique, with stories and personal context, but you’re integrating that with technology to produce not just a stream of audio people passively hear, but an interactive stream that they respond to—and then you understand what those responses mean. There are skills there. We can’t really name them yet; the language isn’t adequate because we don’t have a thing to point to and say, “That’s the skill.”
I’ve read that the caring industries will come to prominence in an AI era—which sounds like the opposite of what you might expect. You might think AI will do all the nursing and teaching—and yes, it could do a lot—but there’s still a lot of room left over for actual caring, which is distinct from many functions of teaching, nursing, and doctoring. That need isn’t going away. Caring is a skill. We don’t really know how to name and pin it down; we usually see it in the context of other disciplines—the “caring disciplines”—but it is a skill on its own. Empathy is a skill on its own; sympathy and a range of related human capacities still matter. And, as I said earlier, provenance matters—being able to genuinely commiserate with another person matters.
Stefanie Panke: I asked a very similar question a few weeks ago to Mike Caulfield—about future skills, what qualifications and jobs will be there, and how professions will change. He explained that AI is not like programming, where mastery of the language is the goal, but a lot more like spreadsheets. The value comes from people having deep domain expertise and translating it through the tool. What do you think?
Stephen Downes: I think he’s right in an important sense. Being really skilled at speaking English isn’t helpful if you don’t have anything to say. It’s the same with writing fantastic computer code using advanced algorithms in multiple languages—it doesn’t help if you don’t have an application you’re trying to build. So the “content” matters more than the tool. That’s probably true of AI. Knowing how to use AI doesn’t matter if you don’t have anything to say, anything to build, an application, or a problem to solve. In that, I think he’s quite right. Where I would question him is in domain expertise. It depends on how we define it. One way—say, a Dan Willingham kind of way—is knowing all the facts: having deep content knowledge. AI will have more expertise in that sense than any of us ever will, because it will have, for example, the entire CRC Handbook of Chemistry and Physics at its fingertips. We can’t memorize something like that.
But there’s a different kind of expertise where ability in a discipline becomes more like intuition than grasping facts. It’s more practice than memory. It’s hard to imagine an AI having that quite the same way. It really depends. We could envision a robot with all the dexterity a human has—it’s conceivable. Over time, it could develop the same habitual skills an expert has, so it could walk into an OR and perform surgery without falling asleep after a 12-hour shift, or fly an airplane without taking naps, etc. Maybe there’s another kind of expertise more specific to humans. If I had to point to it, I’d point to the diversity and particularity of a given human’s experience. No matter what AI does or becomes, it can’t have the same experiences I’ve had. It would have to be me. So what I bring—and this is why I have the “100% human authored” note in OLDaily—is a very specific background in the field that informs every post I write. That’s the unique thing. Each human being has a large repository of individual perspective and expertise. If we find ways of developing that—what would I do to help people prepare for an age of AI? I would encourage them, and help them however I could, to have a diversity of experience: go out, see new things, try new things, go to new places. We talked briefly about me going to Iceland. Why would I go to Iceland? Because it puts me in a context I can’t imagine. As it turns out, my imagining was a shallow copy of the actual experience—and that’s always been the case.
Experiences, agency, regulation, and “better AI”
Stefanie Panke: One of the things you shared about Iceland is that it’s hilly, it rains a lot, cities are far apart, and there are a surprising number of bugs. The real experience—the richness—was not cookie-cutter. When I think about what people learn from and with AI, the most worrisome thing is how much it flattens experience. We already see that with social media—wherever you go, you are always in the same place. Does that worry you?
Stephen Downes: The quick thought that popped into my head is: maybe that’s why AI needs us. Right now, AI does flatten experience. But all mass media does that—pandering to the lowest common denominator. Television brought us Gilligan’s Island. Look at pulp magazines of the 1930s and 40s—trash writing, cookie-cutter. We even have cookie-cutter suburbs. Anything we do for the masses tends to do that. It’s certainly a danger for AI as well—something I hope we can work against more successfully than we did with other media.
Stefanie Panke: When you say you hope we work against that, it implies we have agency in shaping how AI is used and developed, and how it will influence society, media, production, learning, and institutions. What should that agency look like? Should governments regulate AI? More competition? Will the market solve this?
Stephen Downes: Michael Wesch did a video many years ago—The Machine is Us/ing Us. More recently, I did Ethics, Analytics and the Duty of Care, a year-long investigation that reached a similar conclusion. And Carlo Iacono has been saying what we see in AI is a reflection of us. Right now it’s a shallow reflection, but it’ll get deeper over time. The agency we have—and interestingly, the same agency we have when we teach—is not in telling AI something. Regulation, rules, principles, even providing facts may set parameters and boundaries, but that’s probably the most they can do. It won’t create the AI we want; at best it prevents the AI we don’t want (and it might not even do that). AI learns from us—our words, pictures, videos, the things we create. Eventually, AI will walk around streets and look at buildings, signs, graffiti, parks, farms—the land we’ve changed and left unchanged. It will learn from all of that. Our agency in shaping AI is in shaping all of this: our words, actions, media, cities, environment. That’s what AI will learn. If we want better AI, we have to become better people.
It’s the same with students. They learn by imitating, not by being told. If we want to educate better, we have to better model the behaviors we want students to pursue. People talk about a crisis of education. It’s also a crisis of behavior. People have been behaving badly, and we shouldn’t be surprised when children imitate that. About agency: no one person can do this, and you can’t force it—though people want to. I see sets of AI regulation principles as attempts to force behavior on AI. You can’t—first, because it won’t work, and second, because AI learns from people, not from principles. And students learn from people, not from principles (double meaning noted). Each of us has agency, but not that much. It would be worse, not better, to grant a small number of people a large amount of agency. That would not be the outcome we want, I’m pretty sure.
Connectivism, cognition, and a theory of mind
Stefanie Panke: Does generative AI work as a model or mirror for human consciousness? Is this how consciousness evolves and works?
Stephen Downes: It’s interesting. In the history of AI, you had two traditions: the cognitivist approach and the connectionist approach—the idea of building artificial neural networks. Those were designed to emulate how humans think. That’s what that project has been. We did use computational theories as metaphors of mind, and some people still do. Paul Kirschner, for example, thinks humans are like computer processors with buffers and limits—cognitive load theory (Sweller) is based on a computational theory of mind. But with the connectionist theory of mind, the metaphor goes the other way—from the human to the computer. Now we’ve gone far enough building neural-network systems that we’re beginning to make the metaphor back to humans again: humans are like these computers designed to be like humans. Quelle surprise. The real question is: is it a good theory of mind? It’s also a good question whether it’s a good metaphor. To me, the question is whether there’s something underlying connectionist systems—neural networks—like those that power generative AI and many other AI systems today (rule-based systems collapsed and failed). Is there something underlying that? I think there is. Others do, too. I’m pretty convinced.
It comes back to that thing: rocks can think. How can rocks think? If they’re organized in the right way and perform physical activities in the right way, they can think. It’s in the connections and organization. We can talk about it in terms of signals, weights, activation, and patterns in neural nets. Or in terms of pattern recognition—our capacity to see shapes in chaos. There’s something there. There are basic mechanisms that are not complex—simple mechanisms that, multiplied enough times, create complex things like thought and experience and other human phenomena.
That was a bad answer to a good question, and I wish I’d done it better, but I hope you get the point: it’s not just a metaphor. You see the same thing in mathematics—graph theory; in connected systems in nature—murmurations; and in simple experiments—multiple metronomes ticking on a board suspended on cans. The same formal description can apply to all of those, and it isn’t the formal description that is the thinking. The formal description is probably wrong in important ways—abstracts in ways that miss nuance. But that kind of organization is what it means to think. I don’t know how else to say it right now.
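The metronome experiment Stephen mentions is easy to try in code. Below is a minimal sketch (in Python, with assumed parameter values; our illustration rather than anything from the interview) of metronomes coupled through a shared board, in the spirit of the Kuramoto model: each unit follows one very simple rule, and synchrony emerges from the connections alone.

```python
# Minimal sketch of coupled metronomes (Kuramoto-style); all values are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # number of metronomes on the shared board
natural = rng.normal(2 * np.pi, 0.2, n)  # slightly different natural frequencies (rad/s)
phase = rng.uniform(0, 2 * np.pi, n)     # random starting phases
coupling = 1.5                           # how strongly the board ties them together (assumed)
dt = 0.01                                # time step (s)

def synchrony(phases):
    # Order parameter: 0 = completely out of step, 1 = perfectly in phase.
    return abs(np.mean(np.exp(1j * phases)))

for step in range(4001):
    # Each metronome nudges toward the others through the shared board.
    nudge = np.mean(np.sin(phase[None, :] - phase[:, None]), axis=1)
    phase += (natural + coupling * nudge) * dt
    if step % 1000 == 0:
        print(f"t = {step * dt:4.0f} s   synchrony = {synchrony(phase):.2f}")
```

Run it and the synchrony value climbs toward 1 as the metronomes fall into step, a small example of simple mechanisms, multiplied many times, producing organized behavior.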
Closing thoughts
Stefanie Panke: Is there something you’d like people to know or read up on—something to give educators to chew on as they reshape assessments, assignments, and teaching practices around AI—or anything you want to point people to in your current work?
Stephen Downes: Oh, gee. That’s like “be wise on the spot,” which is hard, and I’ve never mastered it. I always want to say something like: every person matters; every person is valuable. When I was younger I used to say, “Every person is as deep as you think you are.” As I get older, I see this more. It’s all one basic organizing principle—which sounds boring—but out of it comes the incredible beauty that is each individual person, even the “ugly” ones, and each individual city, even the “ugly” ones. It’s an amazing thing, and it’s worth the time to stop and try to see that for what it is—not trying to make it something, but to see the incredible diversity and vitality that characterize the world. If we can do that—each of us—and reflect that back into our daily lives, we’ll probably have a better world. It’s part of it, anyway.
About
Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. A Canadian philosopher by training, he has explored and promoted the educational use of computer and online technologies since 1995. Together with George Siemens, he designed and taught Connectivism and Connective Knowledge (2008), the open online course widely cited as the first MOOC. His writing and talks have made him one of the most recognizable voices in online learning.