Explainable and Trustworthy GenAI: An Interview with Aras Bozkurt

Aras Bozkurt (Image Source: Personal Blog) 

Aras Bozkurt, Professor of Distance Education and Educational Technology at Anadolu University (Türkiye), has been an early and influential voice in the discourse on generative AI in education. He has published numerous highly collaborative works with other notable international experts such as Helen Crompton and Jon Dron.

His work wrestles with the implications, the theoretical underpinnings, and the ethical boundaries of generative AI. His recent article Trust, Credibility and Transparency in Human-AI Interaction: Why We Need Explainable and Trustworthy AI and Why We Need It Now, written together with Ramesh C. Sharma, argues that the opaque “black box” nature of generative AI raises concerns about decision-making, bias, and accountability, especially in high-stakes domains like education. The authors propose a framework for responsible AI development that adheres to principles such as transparency through open-source models, interpretability via user-friendly tools, fairness through diverse datasets, and social accountability through adherence to societal norms.

Prof. Bozkurt also led The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future, which provides a comprehensive and critical evaluation of generative AI. As the authors collectively note, the manifesto “marks the initial steps of an inquiry, endeavors not to be lured by clichéd discourses, aims to raise awareness, suggests a cautious approach, encourages critical perspectives and delivers a wake-up call with the introduction of GenAI into our lives.”

In an early collaborative paper, you and your co-authors use speculative future narratives to explore the impact of GenAI on educational contexts. It was an intriguing piece. Do we need more explorative futurology-driven techniques in EdTech?

Absolutely—I believe we do need more explorative, futurology-driven techniques in EdTech. In my experience working with speculative methods, I’ve found that creating narrative fictions allows us to stand in the present while making sense of potential futures. It frees us to imagine scenarios that might otherwise never occur to us, and this process can be incredibly helpful for revealing our subconscious hopes and fears.

Sometimes, our usual ways of thinking can put us in an echo chamber; speculative storytelling pushes us out of that box. By weaving creative narratives, we’re able to expose different facets of complex phenomena and explore “what if” scenarios in a way that’s both critical and innovative. As Jean-Jacques Rousseau so beautifully put it, “The world of reality has its limits; the world of imagination is boundless.”

This imaginative freedom also has a tangible benefit: it spurs more radical and creative thinking about how technology might develop and impact education. In my view, EdTech research benefits from this approach because it’s not just about predicting what will happen; it’s about actively designing futures we either desire or want to avoid. By tapping into fictional storytelling and other inventive techniques, we can envision directions that might seem far-fetched today but could become very real in tomorrow’s classrooms.

So, yes—using speculative methods doesn’t just make for an intriguing read; it expands our intellectual horizons and encourages us to challenge existing boundaries in EdTech research. I’m all for it, and I’d love to see more scholars and practitioners embrace these approaches to spark fresh insights and guide more thoughtful technological developments in education.

You argue that GenAI should be explainable, meaning that the inner workings of AI systems and their decision-making processes should be transparent and understandable to users. In a high-tech society, we constantly interact with technologies we don’t understand. Is that a problem beyond AI?

I do think this issue goes well beyond AI. In our high-tech world, we’ve grown used to interacting with countless devices and systems—everything from simple apps to complex algorithms—without fully grasping how they work. That’s not necessarily a problem when the stakes are low. But once these technologies start making decisions that affect our lives in bigger ways—like shaping public opinion, influencing educational outcomes, or even making healthcare recommendations—it becomes critical that we understand at least the basics of how and why they do what they do.

When I talk about GenAI needing to be explainable, I’m essentially arguing that we can’t afford to have “black box” systems steering major aspects of our society without giving us a window into their logic. Yes, the details can get incredibly technical, but offering transparent, user-friendly explanations fosters trust and empowers individuals to make informed choices. If we accept opaque technologies too readily, we risk handing over control and losing sight of how decisions are being made on our behalf.

That said, it’s definitely not just AI: everything from online recommendation systems to our smartphone apps often remains a mystery to most of us. So while I focus on GenAI because of its rapidly evolving impact, the same principle applies across the board—we all benefit from technology that provides clarity rather than confusion. Ultimately, this isn’t about opening the “black box” for everyone to see every line of code. It’s about ensuring meaningful transparency and trust so we’re not living in a world run by forces we don’t understand and can’t hold accountable. 

How do you use generative AI in your teaching and how well do you think your students understand both the inner workings and the societal consequences of AI?

I mainly use GenAI to provide examples or outline complex topics, but I also encourage students to question how AI arrives at its responses. Honestly, not all of them fully grasp the ethical or societal implications; I try to remind them that AI is trained on existing data and can reinforce biases. This helps them see that GenAI isn’t just a neutral tool—it reflects real-world inequities and power structures.

A point that you have made repeatedly in your work is that GenAI is neither ideologically nor culturally neutral. In a keynote talk at the last EdMedia conference, Mike Sharples [link to AACE conference report] described ChatGPT as imbued with a liberal, slightly left-leaning US-American persona. Would you agree, and what norms, if any, should be inscribed into generative technologies?

I’d agree that generative AI tools can carry cultural or ideological influences from their training data, so labeling them as “left-leaning” or “right-leaning” isn’t surprising. Large parts of the internet reflect specific cultural norms; the AI then reproduces those same viewpoints.

As for norms, I think explainability, transparency, and accountability are key. If a system is prone to certain biases, or if it’s trained on a dataset that represents only a narrow slice of the world, that should be disclosed. Another helpful practice might be giving users the ability to adjust, or at least be aware of, the AI’s default assumptions. Ultimately, we want to ensure users understand that the system doesn’t exist in a cultural vacuum.

Open source and open standards have been incredibly important for preserving what Hall & O’Hara (2009) called “the essential invariants of the Web experience: decentralization to avoid social and technical bottlenecks, openness to the reuse of information in unexpected ways, and freedom and equality of information as it passes across the Web”. How do you envision an increase in transparency and diversity through open-source models in the AI sector?

I believe open-source initiatives encourage community involvement and scrutiny. When code and training data are open, people from different backgrounds can test, audit, and improve the models. This broader collaboration can help catch biases or blind spots and may also foster innovation along with transparency.
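
To see what that scrutiny looks like in practice, consider a minimal sketch of loading and probing an open-weights model locally. It assumes the Hugging Face transformers library, with the small open GPT-2 checkpoint standing in for any open model; the model choice and prompts are purely illustrative.

```python
# A minimal sketch of community scrutiny of an open-weights model.
# Assumes: pip install transformers torch. GPT-2 stands in for any
# model whose weights and tokenizer are publicly downloadable.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; any open checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# With the weights local, anyone can inspect the architecture...
print(model.config)                          # layers, vocabulary size, etc.
print(f"{model.num_parameters():,} parameters")

# ...and probe behavior on chosen inputs, for example comparing
# completions across gendered prompts as a crude bias audit.
for prompt in ["The doctor said that he", "The doctor said that she"]:
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(inputs["input_ids"], max_new_tokens=10,
                            do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```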

However, I’d like to consider transparency from a slightly different angle. I don’t think we’ll see a drastic change in how open companies are, mainly because there’s a lot at stake in a large and competitive market. Organizations typically don’t disclose everything they do or how they do it, for fear of losing their market share. It’s a bit like cooking with a secret family recipe—nobody really wants to give it all away. What we’ll likely see instead are varying degrees of openness, and most for-profit enterprises will use “transparency” strategically, as a way to shape public opinion rather than truly reveal all their inner workings.

To me, your concerns around generative AI strike a chord similar to the theme of Biden’s farewell address: a wariness, or maybe even a fear, of the influence that massively proliferating web technologies – be it generative AI or social media – exert on societies. Do you think the concerns around social media and generative AI are somewhat similar, or do you see substantive differences?

There’s definitely overlap between social media and generative AI in terms of rapidly spreading misinformation and shaping public opinion. On social media, users create posts that can go viral, but with generative AI, the system itself churns out compelling—even if entirely fabricated—content. That fundamental shift from human-generated to automated creation raises the stakes for regulating false information and underscores the need for robust digital literacy.

At the same time, I’d also highlight a McLuhanian perspective: we invent technology, yet it reshapes us in return through our symbiotic relationship with it. Given the language-processing power of generative AI, its influence may extend far beyond social media’s current scope and deeply alter how societies perceive the world. Put differently, the way we integrate this technology into daily life directly impacts how our future will look. Each decision we make now—whether in policy, education, or everyday use—can reverberate into tomorrow. We should, therefore, proceed with caution and thoughtful intent, rather than letting the technology’s capabilities run ahead of our collective wisdom.

From your point of view, what is the correct level of regulation of powerful web technologies and what form should it take?

I favor a balanced approach that doesn’t crush innovation but does protect the public interest. Regulators should set minimum standards for transparency, especially around data sources and potential biases. Multi-stakeholder collaboration is important—governments, educators, tech companies, and civil society groups all have perspectives that matter.

One obvious concern is the control of fake, yet convincing content – be it in the form of news stories, phishing or cyber scams. How can individuals evaluate the information that they encounter in Web channels filled with AI-saturated content – and if they can’t, who should be tasked with filtering out or flagging this content?

I think individuals need strong AI literacy skills to evaluate sources, cross-check facts, and understand how AI-generated content can appear credible yet be totally false. This includes knowing basic strategies like reverse image searches, looking at metadata, and verifying references. At the same time, platforms that benefit from user engagement have a responsibility to detect and flag blatantly false content. They could use watermarks or create policies that label AI-generated pieces. Ultimately, though, a well-informed public remains the best defense—people who are curious, skeptical, and willing to question what they read.
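
As a small, concrete example of one such check, here is a sketch of reading an image’s embedded EXIF metadata, assuming the Pillow library; missing or odd metadata proves nothing on its own, but it is one verification signal among several.

```python
# One basic AI-literacy check: inspect an image's EXIF metadata.
# Assumes: pip install Pillow. Metadata can be stripped or forged,
# so treat the result as a signal, not proof of authenticity.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    """Print whatever EXIF metadata the image file carries."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (common for AI-generated or "
              "re-encoded images, though not conclusive by itself).")
        return
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)  # map numeric IDs to readable names
        print(f"{tag}: {value}")

print_exif("suspicious_photo.jpg")  # hypothetical file name
```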

I’d like to stress again the importance of AI literacy in helping individuals stay in control rather than relinquishing decisions to secondary stakeholders. Yes, there will undoubtedly be regulations governing these processes, and companies will bear certain responsibilities. But we can’t assume these measures will address every potential pitfall. Instead, we need AI literacy to guide us in discovering our own answers and making informed choices—even when formal rules and oversight fall short.

How should we view the users of generative AI: as targets of technology who, at least to an extent, need to be protected by gatekeepers (be it regulators or designers), or as agents who actively orchestrate tools and artifacts for specific goals?

I believe it’s most productive to view users as active agents, equipped with the knowledge and confidence to employ AI tools effectively rather than being treated as passive targets who just need protection from designers or regulators. Yet it’s important to recognize a tension here: in many real-world scenarios, users are not only “agents” in the ideal sense—actively orchestrating AI for their own goals—but also potential customers or data sources feeding new training algorithms. 

Which AI tools do you personally frequently use and how much has AI changed your work routines? Does it give you more time?

I’ve experimented with various AI tools for benchmarking, but I generally stick to one of the first-generation GenAI platforms because it offers pioneering features that really streamline my workflow. Since I started using it, I’ve noticed it not only saves me time but also boosts my overall capacity—I can get through tasks much faster and with less effort. Instead of spending hours on repetitive work, I’m able to focus on more creative or analytical aspects, which I find really rewarding. So yes, it absolutely gives me more time in my day and makes my work routines more efficient. 

Finally, what advice would you give faculty who have rarely touched any AI tools, or who have tried them once or twice and then thought, “Oh, that’s not for me”?

I’d suggest dipping a toe back in, but start with low-stakes experiments. For instance, try using AI to generate discussion prompts or quiz questions, then evaluate how accurate or helpful they are. That helps you see AI’s potential without feeling overwhelmed.
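
Such a first experiment can be just a few lines of code. The sketch below assumes the OpenAI Python client and the gpt-4o-mini model, both as illustrative stand-ins; any comparable chat API would serve.

```python
# A low-stakes first experiment: have a model draft quiz questions,
# then evaluate the draft yourself. Assumes: pip install openai and
# an OPENAI_API_KEY environment variable; model choice is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You write clear, factually careful quiz questions."},
        {"role": "user",
         "content": "Draft three multiple-choice questions on "
                    "photosynthesis for first-year biology students, "
                    "and mark the correct answer for each."},
    ],
)

# The exercise is to read this critically: check accuracy, keep what
# works, and discard what doesn't, rather than accepting it wholesale.
print(response.choices[0].message.content)
```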

Also, staying curious is important. AI isn’t going away, so I find it helpful to approach it with a mindset of continuous exploration. You don’t have to become an AI expert overnight—just keep an open mind and be ready to critique and refine what the tool produces.

Many thanks! Anything else you would like to mention?

My main parting thought is that generative AI is clearly here for the long haul, and we’re all in the process of figuring it out together. In my opinion, it’s better to engage critically and shape how AI gets used than to ignore it or fear it outright.

 

 
