New AACE Open Access Journal ‘AI Enhanced Learning’: A Conversation with Theo Bastiaens and Mike Searson

AI Enhanced Learning is a new, open-access journal from the Association for the Advancement of Computing in Education (AACE). As editors and visionaries behind this publication, Theo Bastiaens and Mike Searson offer a behind-the-scenes look at the goals of the journal, which aims to explore the many ways artificial intelligence is transforming formal and informal learning, inside and outside the classroom. We explore questions around AI policies in scholarly publishing, the evolving definition of authorship, and the potential—and limitations—of AI in educational contexts. We talk about the editors’ personal experiences with generative AI, their reflections on how AI is reshaping academic writing and peer review (in academic terms, “a hot mess”), their hopes for increased equity, and their fears of HAL 9000-level events looming in our future.
Watch the interview on AACE’s YouTube channel or read the edited transcript below.
Edited Interview Transcript
Stefanie Panke: It is my absolute pleasure today to speak with Theo Bastiaens and Mike Searson about an exciting new opportunity—a new journal titled AI Enhanced Learning. It’s going to be open access. It’s going to be fantastic. My first question for the editors and initiators of this journal is: What is the vision behind this publication? What kind of submissions do you want to attract, and what kind of readership do you envision for your journal?
Mike Searson: Okay. So I think it’s important to understand that we are within the AACE community, which is an international organization that hosts many publications. I think the Digital Library within AACE is one of the largest of its kind in the entire world. There are literally hundreds of institutions that subscribe to AACE publications.
So we’re very fortunate to have this international community of practitioners, scholars, and academics within AACE who will hopefully both seek to publish in our journal and also be readers of it—as they do research and share their findings with students.
Theo Bastiaens: Indeed. We organize conferences, and AACE has several journals—like EdMedia and E-Learn. So it’s really a broad audience of scholars, based all over the world. We are lucky to have such an audience, and hopefully many participants and proposals from all over the globe.
Mike Searson: If I could add one more point to the vision: We struggled a lot—more than we should have—over the title. Specifically, the term “learning.” SITE, for example, is an organization dedicated to teacher education, which represents learning in formal environments, like classrooms. Many people within SITE are trainers of future teachers.
But we wanted to not only include that audience, but also learners in other environments—from afterschool programs to museums to informal learning. So we’re focusing on learning and the ways that AI can enhance it, both inside and outside the classroom.
Personal AI Histories
Stefanie Panke: I’m curious about your individual, personal perspectives on the topic of generative AI. Do you recall the first time you encountered or used a generative AI tool? How has your use evolved since then, and when did you realize, “Wow, this is really going to change education”?
Theo Bastiaens: It’s difficult. We’ve wanted to use AI for many years. I started working in the early 1990s on intelligent tutoring systems. At that time, we wanted to make software more intelligent. But really, the turning point was recently—with the launch of ChatGPT. That was amazing. The first time I used it, I knew it would be a definite game-changer for the whole world.
Mike Searson: My background is in cognitive psychology, and I did my doctoral work in the 1980s. I was aware of researchers who were looking at AI and models of the mind—Herbert Simon, Marvin Minsky, and others. So I’ve been familiar with AI for decades. But I agree with Theo: the release of ChatGPT in November 2022 really shook things up. And Stefanie, you asked earlier about the vision of the journal. We did agree that it would look at AI broadly—not just generative AI in particular.
AI and Academic Writing
Stefanie Panke: I know the journal is focused on AI-enhanced learning, but I have a question about writing and scholarly publications. How do you think generative AI in particular is going to change academic writing, peer review processes, and academic publishing?
Theo Bastiaens: The change already happened. Mike and I, we do a lot of reviewing and editing for journals and conferences. In the past, we often had to review poorly written publications, sometimes from non-native speakers, which required a lot of corrections. That need has almost disappeared completely. Magically. Why? I think almost everyone nowadays uses some form of AI to help with writing—both style and grammar. That is a comparatively small change that has already happened. But there are many more changes. We all work at universities, and we see how students work with publishing and academic writing. It’s a struggle, a temptation, and an improvement—all at once. Society, universities, and academic publishers are still figuring out how to deal with it. I think the academic term for it is: it’s a hot mess.
Mike Searson: I can’t write an email today without AI jumping in. We now have prompts that go far beyond spellcheck and make all kinds of suggestions. I’ve been thinking about writing an article titled “Did I Just Write This Article?” A lot of us throw writing ideas into gen AI tools. So it’s something we’re struggling with as a society, and in academia, it’s even more problematic. We’ve seen submissions with AI-generated datasets and citations that simply don’t exist. I’ve come across some myself. So it’s hard to know what we’re even receiving. And the issue of intellectual property is really problematic.
Policies and Norms for Academic Publishing
Stefanie Panke: Do you have an AI policy for submissions and review? Is it encouraged? Forbidden? Left to the discretion of the author or reviewer?
Theo Bastiaens: As you can hear, it is not forbidden. Definitely not. But of course we have a policy. We think it’s important that if you use AI—ChatGPT or any other tool—you mention it in your paper. It should help with writing style, grammar, and structure—but it shouldn’t write your entire paper for you. We’ll find out how this evolves.
Mike Searson: We don’t have an AI-specific policy yet, but we are in dialogue with colleagues who are developing them—for example, within SITE. It came up recently that people were reviewing proposals using AI, and they are probably moving toward forbidding that. One of the concerns we have—because we’ll have many editors and reviewers—is that some might be tempted to use AI to do the entire review. That is likely something we will discourage. As for authors, the extent to which they disclose their use of AI is an ongoing discussion in publishing.
Theo Bastiaens: Exactly. Forbidding it is difficult. AI can now adapt to your writing style, and it’s becoming harder and harder to tell whether something was written with AI or not. It’s a continuous struggle—but also an improvement in how we work as a journal and as editors.
Stefanie Panke: I teach in an international program at the Asian University for Women, and I have to say, I’m incredibly excited about the opportunities this creates for my students. These women are academically brilliant, but many are not native English speakers. Generative AI has really helped level the playing field. It gives them the tools to contribute to global academic conversations. Do you think this will spark more equity and more engagement internationally?
Mike Searson: At the SITE conference I mentioned earlier, advocates for the use of gen AI tools among second-language English learners and people with disabilities were very vocal. They made it clear why these tools should be used. The debate was really about the extent to which AI use should be cited. We understand and support why many communities benefit from these tools. But then we also have to consider the requirements and expectations of academic publishing.
Stefanie Panke: Theo, you have a particularly interesting perspective on this because you are a second-language English speaker who’s been publishing for decades. Could you share your experience with this?
Theo Bastiaens: I’ve been in the field of educational technology for over 30 years. I’m originally from the Netherlands. When I was a young researcher, I really struggled to learn the academic writing style that journals expect. Reviewers would often comment on my writing style or suggest changes. But now, people can write strong papers more easily. It took me decades to reach that level, and now the tools make it possible much sooner. It’s really opening up the world. We’re also starting to think about other languages. Right now, we always write in English. But it’s becoming much easier to translate and adjust style. So you could take your English publication and submit it to a German journal, for example—something you might never have considered before because of the work involved. It really makes the world more open.
Mike Searson: Stefanie, if you did something like that—used a tool to translate your paper—would you cite it? What’s your view as an author?
Stefanie Panke: I actually have a recent experience. I did a small autoethnographic study where I kept a journal of my AI use over six months and wrote an article about it. For that piece, I saved every query and AI output, just to analyze what I used and how. I was going for complete transparency, and it was a pain. Hundreds of pages of output. I don’t think people will do that going forward. It’s not realistic. It’s like citing Microsoft Word or a dictionary. We use all kinds of tools that enhance cognition, and we don’t cite them. So no, I wouldn’t cite AI for translation in a typical academic article.
Mike Searson: That’s a really important point. We’ve never cited spellcheckers or thesauruses. So at what point does a tool become more than just a helper?
Theo Bastiaens: Exactly. And people already use AI tools. Reviewers think they can detect ChatGPT writing, and they might plug a paragraph into a detector. If it says “likely AI-generated,” some will reject it outright. We’ve even heard of doctoral candidates being rejected based on suspicion alone. But these tools give false positives, especially for non-native speakers. It’s a real struggle for editors to figure out how and when to use them.
Mike Searson: And it’s evolving quickly. What you could detect six months ago, you can’t detect anymore. That’s how fast AI is improving.
Theo Bastiaens: And Stefanie, you said something important earlier: in the end, if the paper is high-quality, you don’t care whether AI was used. I think that reflects how most of society feels. So maybe the answer is to openly say, “You can use AI—as long as you tell us what tools you used.” People have used tools to improve performance for centuries. This is just another tool. A powerful one, yes, but still a tool. And we have to adapt to it—as scholars, editors, and as a society.
Mike Searson: Right. And the key question is: when does it cross a threshold? When is it “too much” AI use? We don’t have a clear answer yet. In the past, we didn’t cite tools like spellcheck. But this is different, and we haven’t defined that line yet.
Open Access and AACE’s Publishing Model
Stefanie Panke: Let me change topics for a moment and talk a little bit about open access. This is an open access publication—AI Enhanced Learning—and I’m excited about that. Is this a trajectory that might be on the horizon for other AACE journals?
Theo Bastiaens: We can only speak for this journal. And for AI Enhanced Learning, it was important to go with the flow of open access. Mike has some strong opinions on what open access should look like, so maybe he can share.
Mike Searson: Sure. Let me start with AACE’s general policy. When you publish something online, there are typically no charges to authors, and the journal is freely available. I recently co-edited a book on generative AI in education—released in early 2024—that follows this model.
However, if the publication is held by an academic library, AACE charges a fee for that. So we’re guided by AACE’s model: open access for individuals publishing and reading articles—no author fees—but institutional holdings are charged.
Theo Bastiaens: And I can say, on behalf of Gary Marks, the founder of AACE, that open access was very important to him. He made that very clear.
Mike Searson: The funny thing is, many AACE participants may not even realize that the Digital Library is a subscription. Most universities subscribe, and if you’re logged in through your work account, you don’t notice. You just access everything as if it were free.
Theo Bastiaens: Exactly. But that’s a good point—we need to do a better job of educating our colleagues. Hundreds of institutions participate in AACE conferences and access journals through institutional subscriptions without knowing that their university is covering it.
Stefanie Panke: And in the end, AACE is a nonprofit organization. Somebody has to cover server costs, maintenance, and so on. That makes sense.
Theo Bastiaens: Yes, absolutely. But for AI Enhanced Learning, Mike, maybe you can underline that there are no charges at all?
Mike Searson: Correct. For this journal, there are no publishing charges and no subscription is needed to access it online. If a physical copy is held by a library, that’s the only case where a fee applies.
What Makes a Strong Submission?
Stefanie Panke: Fantastic. So it’s open access, no processing charges for authors, and it will be indexed in Scopus. It sounds like an extremely exciting publishing opportunity for edtech scholars globally. Do you have any tips for successful manuscripts?
Theo Bastiaens: Yes. Everybody wants to know: what do I need to do to get published? First, a submission should not be purely theoretical, but it should still be grounded in learning or technical theories.
What we really love—Mike and I—is empirical work. We need more data in this field. We want to see the field grow.
Mike Searson: By definition, if it’s empirical, it should be original. That’s what we want to see. At SITE, for example, we’ve noticed that empirical contributions are lacking. I’ve written a lot myself in the past and probably wish more of it had been empirical. So yes, we favor empirical articles. But if you’re going for a more theoretical or white-paper approach, make it original. We know a lot of people have thoughts and ideas, but we’re looking for original research and insights to share with a global audience.
Getting Involved: Pathways to Publication
Stefanie Panke: How about leadership positions in the journal? Are you still looking for editorial board members or reviewers?
Theo Bastiaens: Reviewers—always! We already see a lot of submissions coming in, so we need reviewers. As for the editorial board, we’re still building it. We have two editors-in-chief and two or three associate editors so far. We are looking for more, but you need to apply. Self-nominations are possible, but we’re looking for people with experience in the field.
Mike Searson: Yes, we’re building the plane while flying it. Reviewers—as many as possible—are welcome. We’re reviewing applications now and will bring people on board. We’re also actively reaching out to individuals we believe can shape the editorial board.
For example, Stefanie, we’ve seen your work at SITE conferences and think you can make a great contribution to the journal.
Stefanie Panke: Thank you! And speaking of conferences, we have two major ones represented: the SITE conference in Orlando, chaired by Mike, just wrapped up, and the AACE EdMedia conference—chaired by Theo—is coming up in Barcelona. If someone is thinking, “I’d like to submit to the new journal,” would submitting a high-quality conference paper be a good first step?
Theo Bastiaens: Yes. That’s always the best pathway. If you have a high-quality conference paper and you think it’s worth publishing, submit it to an AACE journal. If it’s about AI, we’d be especially happy to see it.
Mike Searson: That’s the goal of conferences—to get feedback and then expand your work into a journal article. I don’t think it happens enough, but we really hope it will. I attended every single AI-focused session I could at SITE. I approached several presenters and encouraged them to write an article. I think Theo will do the same in Barcelona. If we get enough strong articles, we might even do a special issue. That’s already in the works.
Theo Bastiaens: Exactly. That’s what we do. At SITE, I heard some great presentations. If we think something is strong, we invite the presenter to submit their paper for review. It works well. I’ll definitely be doing that in Barcelona too.
Mike Searson: And the audiences are quite different. SITE focuses more on teacher educators, while EdMedia includes more pedagogical and technical innovators. The journal will reflect a nice mix of both.
Stefanie Panke: When can we expect the inaugural issue of AI Enhanced Learning? Do you have a timeline in mind?
Mike Searson: We’re looking toward the end of the second quarter. Beyond that, we hope to publish on a regular basis. We’re not ready to commit to a fixed calendar just yet, but we recognize the need for consistency and plan to implement that.
Futures of AI: Hopes, Risks, and Ethical Questions
Stefanie Panke: Let me end with a look into the future. One of the submission categories for AI Enhanced Learning is “Possible AI Futures”—a future studies-oriented track. So I’d like to ask you both: What do you think lies ahead? Will generative AI increase or decrease equity, quality, and access?
Theo Bastiaens: In general, I think it will increase equity, quality, and access. But there will be exceptions. You can already see people using AI to generate content for social media that’s not necessarily high quality. It can go both ways. But in our field—education, research, teaching—I hope and expect that it will increase quality.
Mike Searson: We have to distinguish between the potential of the tools and how they are actually used. Take the internet, for example. In theory, it connects us all, but in practice, social media created tribes. People stay in their groups and rarely go beyond. It reinforced division instead of expanding perspectives. When someone says, “I’m going to the internet to do research,” I worry. Because often they’re not really researching—they’re just reinforcing a particular viewpoint. Generative AI is similar. Technically, it draws from a broad knowledge base. But in practice, a lot of that content is Western, white, and English-speaking. So we get biased output. That’s something we need to be aware of and teach our students to navigate. Just like we used to say, “You can’t trust everything on the internet,” now we have to say, “You can’t trust everything generated by AI.”
Theo Bastiaens: Exactly. But the good news is, with this journal, we’re trying to bring different communities and perspectives together—to promote understanding across “tribes.”
Mike Searson: Yes, come to America and help us fix that! The divides are deep.
Stefanie Panke: I find that really interesting. Because on the one hand, you have this exciting tool that is creative, that can generate and do so many things. On the other hand, you have, in some cases, very heavy-handed algorithms that filter the answers and push for a specific perspective. This does remind me of social media. Personally, I was so excited when Web 2.0 was a buzzword. I remember the vision for change in education and society, and being so sure this change would be for the better. In hindsight, that was wrong. So, in education specifically, what role do you think AI will play? In some cases, people get excited—like, “this might even replace teachers,” at least in underserved communities or in specific situations. Do you think that’s possible? And if not, what elements make education a fundamentally human endeavor?
Mike Searson: This is a dangerous one. But for me, this is very easy. I think what we’re discussing is a giant Turing test. To the extent that people believe the AI-enhanced robot is a teacher—then, it’s a teacher. You can read some of the research on people who have robotic pets—really cute robotic dogs that crawl into your lap. Their reported experience is that they have a living pet. Now, on one hand, I would say: it cannot ever replace an actual teacher. On the other hand, if the people—kids, parents—believe it’s a teacher, then it passes the Turing test.
Also, I want to acknowledge—in the U.S. at least—publishers have played too large a role in the development and presentation of educational content. And I know they will be doing the same with AI. As media—printed or electronic content—becomes AI-embedded, that influence will grow.
Theo Bastiaens: I think AI cannot replace the teacher. In certain areas where teachers are not available, or if you want something at home quickly, you can get used to AI. You can really learn from it. But look at our world—at conferences, at distance teaching, at universities.
During COVID, we thought, “There we go—we’ll be online forever.” And yet what we have seen instead is that attendance at in-person conferences is going up again. People want to meet, because we are a social species. Online conferences still exist, but now they’re an add-on, not the main way to communicate, share, and talk with colleagues. I think AI will be like that—an add-on. It can help a lot. It can even replace a teacher in certain situations. But in the end, we need people—to communicate with, to have contact with.
Mike Searson: I think what Theo and I want to clarify is—we’re answering your question from a technical perspective. Can AI replace teachers? Yes. Can it pass the Turing test? Yes. Should it replace teachers? That’s something different.
Stefanie Panke: I’m curious—did either of you use AI in a teacher role? To learn something from it?
Theo Bastiaens: Myself? Sure—all the time. If I want to know more about a topic, I can ask AI and read about it. But it’s still just reading so far.
Mike Searson: I do all the time. I’m astounded when people say, “I don’t use AI.” My response is, “It uses you.” It’s inescapable—to write anything, interact with anything, AI is present. It helps guide thinking, bounce ideas. I try to be aware of that and note it when I’m doing informal writing. It’s a useful aid in much of my work.
Stefanie Panke: I love it for things I barely remember—for example, something I heard during a talk at a conference, like, “Oh yeah, I heard this at a keynote. It was interesting.” It can help me retrieve the full paper. I will say, sometimes it makes up the title, and then you spend half an hour searching for what didn’t exist in the first place. But in many cases, with just a few words you recall, you can retrieve the full work of a scholar that otherwise you’d have lost. Personally, I also tried using it as a French tutor. And I’ve got to admit—I lacked motivation. Yes, it works really well. I can talk to ChatGPT in French, it talks back to me. I could practice. But I have no interest in talking to an AI. Zero. It lacks all the characteristics that make a human conversation interesting for me.
Mike Searson: But it’s getting better. There are mobile devices now that will do that for you—a live translation of someone speaking to you, let’s say in Spanish, and in their voice you hear English, with all the nuances. So I think that’s why we’re hesitant to make firm predictions. Things are going to evolve and change. Could there be a time when we can’t tell the difference between a person and a translated voice? Very possible. It will pick up on all the nuances of our speech patterns.
Theo Bastiaens: When I learn with AI, it’s usually little things that make my work more convenient. I was trained as a programmer. I don’t do it much anymore, but if I need a bit of code, I ask AI. I think, “Yeah, that’s correct.” At least I can review it and say whether it’s right or not.
Same with SPSS or statistics. Going through manuals is inconvenient. You ask AI, and it gives you a test or an approach. If you have the knowledge to check whether it’s correct, it’s incredibly convenient. You can’t trust it blindly, but it’s getting better and better.
Mike Searson: That’s a discussion happening in computer science departments. Students throw things into gen AI to generate code, then test it themselves. A twist on your question—will there be a point where code can be fully generated by AI? I think yes. But as Theo said, we’ll still need ways to verify if it’s accurate and valid. That’s a complex and evolving discourse. And Stefanie, you mentioned something earlier—recommendation letters.
Stefanie Panke: Yes, I have to write a lot of recommendation letters. And I’ve loved using AI as a helper. When we talked earlier you sighed, “These AI packages in admissions…” I understand the concern. But from my point of view, it’s such a specific genre. Especially as a non-native English speaker, I want to get it right. I want to use the right catchphrases and the right words that will make sure the student doesn’t end up at the bottom of the pile just because I didn’t know what the admissions office was looking for—or didn’t use all the buzzwords they want these days. So I think it’s a two-way street—how much people are using it, but also how computer-driven our expectations have become.
Theo Bastiaens: That’s true. You use AI to write it, and the admissions office uses AI to check whether the right words are in it. That’s the bad part. But what can you do?
Mike Searson: I know colleagues who use it all the time just for that purpose. So I’d say—check yourself. As you use it, is it saying what you would have said about that individual?
If you put together a bunch of bullet points about the person and feel like the letter states that accurately, then go with it. I use gen AI as a conversational tool. I really cannot remember the last time I used just one product. I go back and forth. That’s what you’re describing. I know people use it all the time. You can say, “Add a more personal touch,” or “Make it more formal.” If you believe it captures the essence of what that person is about—and it’s accurate, in the right tone—then it’s a tool. And we’ve always used tools. Throughout history, humans have used tools for two reasons: to save physical effort or to save time. So if it saves time and still says what you believe in your heart about that person, I see no reason not to use it.
Theo Bastiaens: Yes. But Mike, you would also underline the importance of having the knowledge—the competence—to reflect on AI output. If you can’t do that, it’s worthless.
Mike Searson: Of course. That’s something we should teach our students. It’s a dialogue. Go back and forth. Look for hallucinations. Don’t just settle for the first draft.
Stefanie Panke: Don’t stick with the first draft! That’s the one that sounds very polished—but has a narrow argument or a rigid phrasing. You have to coax AI toward a broader, more authentic view of the world.
Mike Searson: That’s why we use the term “generative.” But people are trained by the Internet to enter one phrase and move on. We have to teach them that using AI is a different approach.
Hope, Caution and the Important Question of Conference Locations
Stefanie Panke: So, all these things considered—are you more worried or more excited about the AI future we’re all stepping into?
Theo Bastiaens: You can only be excited. From where we are to what it could do—it’s coming anyway. And this is our field of research and study. I choose to be excited.
Mike Searson: Oh, I’m both. I’m excited. And I’m scared to death. I’m one of those people who can’t believe we haven’t instituted a kill switch. My background is in cognitive psychology—how knowledge is constructed. I see no reason why, at some point, AI robots wouldn’t want to go after that. I imagine the HAL 9000 from 2001: A Space Odyssey. Statistically, that kind of thing is bound to happen. It’s a ridiculously powerful tool. On the other hand, I look at what’s happening in medicine—it’s just incredible. AI is powering prosthetics that learn how you move. It starts to anticipate movement. There are real opportunities for humans and educators. But Stefanie, going back to what you said earlier—what frightens me is how many of our colleagues believed what we did: that social media would be transformative. Look where we are now. I just hope we don’t end up in the same situation with AI. I hope we use it in more constructive ways.
Stefanie Panke: Thank you both so much for this conversation.
Mike Searson: Wait, Stefanie—You forgot to ask one question. Why do I get to go to Orlando, and Theo gets to go to Barcelona?
Stefanie Panke: Well, I’ll go to Barcelona too! My sympathies, Mike.
Mike Searson: [Laughs] Fair enough.
Stefanie Panke: I am also excited that AACE now has a conference in Asia. The first one was in Singapore, and the next will be in Bangkok. That’s a big step.
Theo Bastiaens: We actually had Global Learn 2020 scheduled in Shanghai, but COVID canceled it. We’ve been working toward global access for a while.
Stefanie Panke: It’s incredible. And having a chair like Curt Bonk makes a huge difference. He brings a big network and deep expertise. I look forward to what’s next.
Mike Searson: Absolutely.
Stefanie Panke: Thank you again!
