AI-Driven Interventions for Teaching Students with Autism – An Interview with Aaron Jones

Generative Artificial Intelligence (AI) is transforming education across various sectors, including adaptive education programs. For AACE Review, I had the pleasure of speaking with Aaron Jones, a researcher at Marshall University (USA), about his work on AI-powered learning interventions for autistic students in Cyprus. His project, EdQuestGPT, leverages AI-driven gamification, adaptive learning, and therapeutic best practices to address critical gaps in special education services. Aaron Jones will present his work at the 2025 EdMedia conference in Barcelona. I got an early sneak peek of his paper. In the interview, we explore the development process, technical challenges, pedagogical alignment, and future potential of AI in ASD education.
Do you recall when you first encountered generative AI and how your own personal use has evolved since then?
It’s all a bit hazy now, but I believe I started engaging with AI right around the time I began my doctoral program at Marshall. That would be about two years ago now. Some students and I had gotten into a discussion about it, though none of us had genuinely begun to actively engage with it or test its capabilities in our work. To remedy that, we started researching what AI learning options were available. One student brought to our attention a webpage called “Human or Not?”, which essentially involved a chat-based conversation with a human or AI chatbot. After a few rounds of reciprocal dialog, the user was prompted with the question, “Were you chatting with a Human or AI?” The user makes their choice and is told the correct answer. My students often found the human and AI chats to be nearly indiscernible. They were blown away when I stepped up and proceeded to get 21 in a row correct, compared to the study’s reported average of only 68% accuracy. So, we discussed why I made my choices, and we attempted to critically break down the differences between human and AI-generated responses. Interestingly, it turns out that this was an open-participation study performed by Daniel Jannai, Amos Meron, Barak Lenz, Yoav Levine, and Yoav Shoham of AI21 Labs. After that, we completed some composition work to reveal the linguistic limitations of earlier AI chatbot models and to assure the students that their teachers would almost certainly be able to detect if they attempted to pass off AI writing as their own. From there, I found Teachable Machine, which we used for several projects.
My personal use has undoubtedly changed. Whereas in the beginning, it seemed a novelty tool, now I use it for a wide range of projects with students, in my doctoral work, and in my personal life. My research on gamified, personalized learning using AI came out of a practical need, very organically. More recently, and absolutely influenced by my research, I’ve been working with one student to create a personal tutor aimed at learning MIT’s App Inventor tools. As with EdQuestGPT, we took a very granular and targeted approach to building its tutoring persona, its knowledge base, and its education and computer science CV, including its preferred pedagogical models. My student actually prompted it to take on the character of Ron Weasley of Harry Potter fame, so it interacts in a British accent and peppers in Harry Potter-based humor and allusions. It is quite entertaining, and the progress the student is making is remarkable.
The human-AI hybrid tutoring that goes on pushes the limits of both him and me, as well as the AI agent. In developing the tutor’s background, I used a lot of information on pedagogical models of teaching, and as the student learns, we occasionally shift models or employ model amalgams to target learning preferences or to hone some of our weak areas. We might use student inquiry, mastery learning, or inductive models, or, when student skill and confidence are high, near-total Socratic questioning. It is so stimulating to see how both the AI tutor and the student adapt and change communication styles while preserving a similar level of warmth and character in the interaction. There is definitely another research paper being developed within this fun project!
In my personal and professional development, I’m using AI to learn how to code, as a sounding board and revision partner for my research writing, to connect with clinical trials for medical issues for a family member, and even to generate recipe ideas so I can cook interesting meals for my family without the pressure of creating or researching them. The exceptional variety of ways to engage with these digital multitools is inspiring, and I find it genuinely fascinating how each user creates and deploys their own use cases.
Personally, I commence all interactions with AI with one relatively strict rule: I only employ AI after I have already done the creative legwork. Whether using it as a research assistant, a compositional aid, for thought experiments, or for image or video creation, I am resolute in ensuring that the intellectual heavy lifting and critical thinking have been performed before engaging AI. This guarantees intellectual integrity, forces active metacognition throughout the process, and ensures ethical adherence. Perhaps most importantly, it safeguards my “voice.”
Tell me a little bit more about the context of your project. What is the connection of Marshall University to Cyprus?
There isn’t an inherent connection between Cyprus and Marshall, aside from me. I’ve been living and working in Cyprus for over 4 years at this point and am in year two of my doctoral studies.
Here in Cyprus, as in the majority of my career, my work in education has been dedicated to students with special educational needs, specifically autism spectrum disorders. Coincidentally, I’ve also been living outside of the United States for nearly 15 years, and in that time, I’ve always felt the importance of integrating into the home culture wherever I was living. The process of not just fitting into greater society but acculturating with it, and thus becoming a respectful participant and actor for positive change, has always remained imperative. Thinking of myself as an immigrant rather than an expat, accepting that growing roots and building cultural bridges is requisite to living in others’ homelands, has been a hallmark of my outlook.
This stems from my time teaching in Chicago Public Schools, where I developed deep connections to the schools and communities where I taught and lived. Participating in social justice action and civic engagement outside of classroom hours was so significant for connecting with my diverse groups of learners and made me a better educator with a profound desire to have an impact.
My lifelong determination to help provide inclusive, high-impact, and expectation-driven education to underserved populations connects Marshall to Cyprus. The midnight to three a.m. classes are still tough, though.
When did AI become a research interest, and what inspired you to develop an AI-powered intervention for autistic students?
As I mentioned, striving for acculturation and civic engagement in all the communities in which I’m lucky enough to work is but one aspect of how this project took root. I’ve been teaching in non-traditional contexts for many years, working almost exclusively with autistic students of widely disparate skills, abilities, and challenges, the expression of which is anything but monolithic. As with diverse learners generally, but in even more acute and demanding ways, educators of autistic children must be willing to adapt. Content delivery, skill acquisition, targeted interactions, reinforcement learning for therapeutics and academics, pedagogical models, and curriculum design methods will often need to be rethought, scaffolded for student needs, or, if possible, reciprocally designed with students.
This presents significant challenges, of course, though most educators manage to the best of their ability. That said, in building my education network, I often engaged in discussions with educators and other stakeholders about the challenges faced by special needs students, their parents, and educators. In these discussions, I heard about scarcity. A scarcity of resources, a shortage of expertise, and, most deafeningly, a lack of interest in filling those roles. As I expanded those networks and broadened them into higher education, NGOs and non-profits serving those with special needs, and therapeutic experts, I realized that, quite honestly, there just weren’t many of them. Service provision was slow while the need was great, and skilled practitioners in the education and interventional arenas were overwhelmed.
This raises a few questions about creating and implementing truly individual educational and treatment plans, especially within inclusive environments with 20+ other students and little help from aides or other professionals. In addition to these challenges, genuine personnel shortages and a need for ongoing professional development have made it evident that some support is needed for all stakeholders in the hierarchy.
This got me thinking about what kinds of force multipliers could be leveraged, and AI revealed itself as the obvious answer. At that time, I had been experimenting little by little with Human-AI hybrid tutoring as a novel method of studying for exams, quizzes, grammar practice, and feedback on student writing. Prior to that, I had been a major advocate for Project-Based and place-based, experiential learning modalities, both of which are adaptive, student-led styles that suit diverse learners and the neurodivergent well. AI stood out as a tool designed for the kind of adaptability that autistic students need to thrive.
There was an epiphany moment there, in which I realized that under expert guidance, AI-driven personalized learning could be that hyper-adaptable educational agent, providing student-centered instruction and being a force multiplier for educators and therapeutic service providers. At that point, I dove in headfirst, and quickly, I also began to become cognizant of how AI could be used to track student behavior data, analyze trigger behaviors, harness computer vision to detect facial expression and eye contact metrics, etc., and analyze these massive amounts of data in ways human practitioners could not. That is when the EdQuest project truly began in earnest. It dawned on me quite early that this could become a “life’s work” kind of project, which is both inspiring and daunting.
How is K-12 education delivered in Cyprus, and what challenges do autistic students face in the current Cypriot education system?
In Cyprus, there are essentially two systems: traditional Greek-language Cypriot public schools and English-language private schools. Private Greek-language and Russian-language schools also exist as a tiny but available minority option. Despite this linguistic difference, and the demographic discrepancies that come with it, they share some significant positive and negative commonalities. Each setting, with few exceptions, implements school structure and curricula based on, or directly taken from, the UK national curriculum, so students who make the change from a private English-speaking school to a Cypriot secondary school, or vice versa, will enter with comparable subject matter knowledge and skill development. This curricular continuity is a boon to many children in such a linguistically and ethnically rich environment.
Autistic students face similar challenges in many nations, including, to a degree, the US. However, regarding awareness, advocacy, and action toward educating autistic students, the US is undoubtedly a leader. This is not to say that Cyprus has no legislation toward inclusivity, but that it may not be performing as well as intended in readying the educational system to meet the need. That means we see shortages of Special Educational Needs Coordinators (SENCOs), SEN/SPED teachers and aides, resources for autism spectrum students, and general education teachers who feel unprepared for the wholesale inclusion of SEN/SPED/ASD students. This coincides with the presence of very few Board Certified Behavior Analysts to provide therapy supports and interventions, actions we know have a significant return on investment.
Speculation as to why this scarcity exists is not my field, though obviously, system-wide evaluation is a requisite step for long-term improvement. Education stakeholders, most of the Cypriot autism community, and I are more focused on shoring up service gaps as quickly and effectively as possible. With my project, I see not so much a workaround for quality professional development and the education of a new generation of better-prepared teachers, but rather a tool that can be implemented more or less immediately: one that works with students who need extra support in a highly personalized way, lightens the load on overburdened teachers and therapists by multiplying the impact of their expertise, and can also be used by parents, whose involvement in the autism intervention process has a massive impact on outcomes.
You followed an iterative design process, starting out with different models like Google Gemini and ChatGPT until ultimately settling on a custom GPT. I think that’s a common challenge for instructional design: there are so many large language models. How do I decide whether it’s better to use Claude, ChatGPT, Gemini, or something else for my project? And while I’m trying to decide, how will I keep up with the constant release of new features? What is your advice for other educators?
Honestly, like anything, you may not always choose the model best suited to your goals in a technical sense, instead opting for a more engaging or friendlier UI/UX. Personal preference is often as much a deciding factor as the use case, and given the sophistication of the currently available models versus the sophistication of the average user, you’re still going to be able to handle most needs with whatever model you choose. In my process, I will often run the same prompt through multiple models to get a sense of their competencies for my needs. That’s how I initially settled on Gemini Advanced for the first iterations, as its ability to explain things to the students with a “warmth” of tone far surpassed the OpenAI, Anthropic, and Meta options available at the time. However, when superior multimodal functionality and greater customization options became available, I switched to OpenAI’s models. Even now, I find, for instance, that Gemini’s customizable Gems do not provide the level of functionality that custom GPTs do.
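The run-the-same-prompt-everywhere workflow described above can be automated with a small harness. The sketch below is a minimal, hypothetical version: the model names and the stub reply functions are placeholders, and in practice each entry would wrap a real SDK call (for example, OpenAI’s `client.chat.completions.create` or Gemini’s `generate_content`).

```python
from typing import Callable, Dict

def compare_models(prompt: str,
                   models: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    """Send one prompt to every model and collect the replies side by side."""
    return {name: ask(prompt) for name, ask in models.items()}

if __name__ == "__main__":
    # Stub "models" standing in for real API calls, for illustration only.
    replies = compare_models(
        "Explain fractions to a 7-year-old.",
        {
            "model-a": lambda p: "Imagine cutting a pizza into equal slices…",
            "model-b": lambda p: "A fraction is a part of a whole…",
        },
    )
    for name, text in replies.items():
        print(f"--- {name} ---\n{text}\n")
```

Reading the outputs next to each other makes qualitative differences, such as the “warmth” of tone mentioned above, much easier to judge than switching between browser tabs.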
I personally don’t opt for School.ai, MagicSchool, or other packaged options, as I find them to fall short in a number of categories despite being built specifically for educators. However, I’d suggest those as a starting point for a novice user trying to gain efficiency in their daily workflows. Additionally, there are some great webinars and trainings from organizations like Day of AI, The Rundown AI, and Section AI. Some are free and some are paid, but given how fast the AI freight train is moving, I think upskilling in these areas is an unbelievably worthy investment either way. If that isn’t an option, I’ve found some great social media follows and newsletters (The Rundown AI and Robotics newsletters, Evolving AI on Instagram, and many more) to keep me updated without feeling like I’m constantly digging for news. Other excellent resources are the AI subs on Reddit. So many people are doing super cool things, and a quick scroll through the subs can be hugely helpful in sussing out your underdeveloped ideas or connecting with users who may have more expertise.
If nothing else, open a few models and throw some thought experiments at them. Wax philosophical. Ask them to code a simple game for you. Have them write a lesson plan. Try Deep Research across a few platforms. Simple actions like these can elucidate so much about each model; best of all, they require almost no technical ability.
AI is prone to hallucinations and can amplify false information. For example, in education and lesson planning, AI output will frequently emphasize the role of learning styles. How did you ensure that EdQuestGPT aligns with evidence-based pedagogical and therapeutic best practices for ASD students?
The data curation process was vital for this exact reason. The knowledge base documents were meticulously chosen, which was no small feat. Locating resources that I felt covered the necessary depth and breadth of best practices for teaching autistic students was painstaking and required many hours of search time. Throughout that process, I likely combed through thousands of pages of literature on ASD intervention and treatment methodologies for differing expressions of autism symptomatology, educational strategies for students with varying levels of communication delays, and procedural guidelines for educators and BCBAs implementing empirically sound interventions that are industry standards based on relative expert consensus.
Once this was done, to maximize reliability, I had experts in autism pedagogy and therapies for neurodivergent individuals audit the material that EdQuest would use for at least some of its response behavior. It must be said that after some initially super promising beta testing, even before the external audit, I had returned to the curated data sets myself with an inkling that they were not as solid as I thought. That instinct turned out to be correct: revisiting the data contributed to a more potent, more reliable variant of EdQuest and surfaced other interesting technical challenges that, once resolved, should make the move from EdQuestGPT to the standalone application at least a tad smoother. Fingers crossed.
What were some frustrations you experienced with AI tools?
I’ve had many frustrating, humorous, and bizarre experiences thus far, and they often become points of realization for me. While AI is truly a fascinating and paradigm-shattering technology, its limitations remain highly evident.
In doing some thought experiments around the action of algorithms, both the AI and I realized that while algorithms and neural networks are capable of so much, they don’t have much of our capacity to make meaningful connections between multimodal stimuli. This was not a frustration but rather a reminder that creative, critical thinking work is still a human domain. The ability to process extremely nuanced, subtle cues of behavior, vocal tone, musical notes, variance in genres, intuition based on nearly imperceptible data, etc., is an irrepressible gift of the human mind.
Recently, I took a few hours of video observation data from a paper on Virtual Reality in education that I’m revising for publication. I have a video of the student and a separate but synchronous video from the VR session via screen mirroring. I wanted to synchronize the student’s physical behaviors and connect them to the VR stimuli with which they were interacting. I decided to see if AI could help me synchronize and analyze the videos, as it would be a massive time saver. After uploading the videos, prompting the AI for the format I wanted regarding time stamps, behaviors, and cross-referencing, and waiting just a few minutes for the analysis, I was provided with a table of behaviors observed at regularly occurring but specific intervals and the associated screen-mirrored VR environment data. Or so I thought. I, of course, went through line by line and started to notice that some of the VR environmental scenes were far more intricate than I remembered the experience being. I looked up all the time stamps and realized they weren’t more elaborate; they were false. The AI had taken the first minute or so of video and was relatively accurate in what it could glean. From there, it made many assumptions about what data the video might hold. I’d call it creative extrapolation. I addressed the fabrication issue and asked whether the AI agent was technically capable of completing the task. It insisted it could and apologized. Long story short, soon after, it had to admit it could not handle this type of data analysis. It was simply too complex, given its sensory inabilities. Given the time, I was able to synchronize all the video data and connect stimuli to behaviors with relative ease. This exposed another limitation of current AI systems, one that humans simply don’t share.
Another example occurred when I created an AI course for the MEd program at Marshall. I admit I was feeling overwhelmed, so I asked whether ChatGPT could identify some sources for a module. It quickly produced some journal articles, YouTube links, podcasts, and the like. The titles sounded oh-so reliable. Upon clicking the links, most were dead. So, I dug deeper, attempting to locate the papers through other databases. Sometimes, the authors were real, but the papers were not. Others were nowhere to be found and were perhaps fictitious. The same was true for the video and podcast links. I addressed the issue with ChatGPT and received the standard apology and a regenerated set of links, all dead or nonexistent again. I responded angrily this time, and the AI indicated that the next set of links would be verified. Again, the titles looked good, but upon clicking the first one, a YouTube video of Rick Astley’s hit “Never Gonna Give You Up” played. ChatGPT had Rickrolled me!
Given the diversity of ASD presentations, how can an AI system adapt to varying student needs in terms of academic support and socio-behavioral development?
Truth be told, given significant “time in the tank” with various AI models during the development of EdQuest, I remain skeptical about AI’s ability to manage the necessary adaptability. The newest models are incredibly flexible multimodal tools. However, their neuroplasticity can’t touch that of human experts. Because of this, expert practitioners must be involved in any AI-powered application or intervention development, in training those intending to deploy or use AI interventions, and in the intervention process itself. That is not to say an autistic student with sufficient technical skills cannot interact with EdQuest or applications like it, but rather that an informed human expert should be involved with interpreting the efficacy of the intervention and possibly revising the methods the AI should employ.
In my work, it has become extremely clear that AI models can identify patterns within massive data sets and follow set protocols while maintaining some creativity. However, they are quite limited in finding or making the connections and associations humans can naturally make between objects and ideas. An expert educator can take real-time classroom observational data and reframe their approach with almost no direct “prompting.” AI is unable to make the informed yet intuitive leaps that we can. The human neural network was constructed to create multimodal connections, essentially webs of connections that often hinge on data from widely disparate sensory stimuli. LLMs and reasoning models are developing somewhat similar abilities. Still, this ability to connect things that may not be connected by hard data is what makes high-quality therapists what they are.
Combining the human expert skill set and ability to intuit vital information from interactions with AI’s capabilities to collect and analyze massive datasets and find important patterns is where the magic will take place regarding how adaptive systems enhance the outputs of expert knowledge and skills.
Another feature of AI whose value cannot be overstated is that its energy for engagement is infinite, while human experts have challenging days, need breaks, and lose focus. This very human fact can unintentionally introduce variability into an expert’s responses, which can be problematic for autistic students/clients, given their often rigid communication styles and difficulty with language and information processing.
Your study leveraged hypothetical ASD student profiles for beta testing. What insights did this process provide in refining EdQuestGPT’s features and usability?
I don’t want to say that this was my stroke of genius moment, but it was an enormous factor in the iterative refinement of the project. In the infancy stage of the project, I was able to test some of my early ideas with actual students, and this was highly informative, as they gave feedback about the sessions that would later be the foundation for EdQuest. This was, however, a minimal sample representing a narrow set of expressed autism-related socio-behavioral and linguistic characteristics. From a research design standpoint, this was a clear and deleterious limitation on any outcomes and made scalability extremely risky. To combat this issue, I began searching for anonymized/redacted sample IEPs, 504s, BIPs, or positive student profiles online. I rapidly realized there were precious few of these to be found. This limited sample of online IEPs and other resources used in autism education would not be sufficient, and accessing more student participants would raise ethical issues and require review board approvals that EdQuest was unlikely to receive.
My way around this was to provide an AI agent with a large amount of data on how autism is expressed. As I was already in the data curation process, I had many quality sources available. In addition, I gave the AI redacted/anonymized IEPs and other documents as examples of the therapeutic and academic goals regularly created for autistic students with different learning and behavioral intervention needs.
The process of creating these hypothetical student profiles was relatively swift and insightful. I created 100 unique student profiles in the initial batches, each with specific behaviors and traits drawn from real-world ASD student cases. I manually reviewed each to audit their accuracy, particularly the cross-referencing of particular behavioral or cognitive challenges with the academic skills most commonly impacted by those challenges. There were, of course, some inconsistencies here and there, but overall, I was pleasantly surprised. The AI model readily made logical connections between the social, behavioral, linguistic, and communicative issues faced by autistic students and the therapeutic options and goals called for. I was even more impressed by how well the academic impacts were matched to the cognitive and behavioral characteristics of each hypothetical student: the model was not simply connecting areas of need, but also areas of strength typical of different severities and expressions of autism, while keeping track of developmental stage or age and their effects on learning needs. Essentially, these student profiles, which were exceedingly concise, one to two paragraphs in length each, exhibited nuance and surprising depth.
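The manual cross-referencing audit described above lends itself to a simple structured check. The sketch below is purely illustrative: the real profiles in the project are prose paragraphs, and the field names and the challenge-to-impact mapping here are assumptions made up for the example, not the project’s actual taxonomy.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StudentProfile:
    """Hypothetical, simplified shape for one generated profile."""
    profile_id: int
    age: int
    communication_level: str                      # e.g. "minimally verbal"
    behavioral_challenges: List[str] = field(default_factory=list)
    academic_impacts: List[str] = field(default_factory=list)

# Assumed cross-reference table: academic areas most often impacted by a
# given challenge. A real table would come from the curated literature.
EXPECTED_IMPACTS = {
    "executive function deficits": {"task completion", "multi-step math"},
    "language processing delays": {"reading comprehension", "written expression"},
}

def audit_profile(profile: StudentProfile) -> List[str]:
    """Flag challenges whose expected academic impacts are missing,
    i.e. profiles whose halves don't cross-reference plausibly."""
    flags = []
    for challenge in profile.behavioral_challenges:
        expected = EXPECTED_IMPACTS.get(challenge, set())
        if expected and not expected & set(profile.academic_impacts):
            flags.append(challenge)
    return flags
```

A batch of generated profiles can then be filtered to just the ones needing a closer human read, which is essentially a mechanized version of the line-by-line review described above.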
Specifically, creating these hypothetical students forced me to dig more deeply into the available knowledge and documentation related to different expressions of autism and the therapy best practices associated with them. That process itself increased my awareness of autism-related issues. Additionally, the sheer range of cognitive and behavioral abilities and challenges of these hypothetical students pushed EdQuest in each iteration to accurately call for tailored therapeutic practice, which could often work synchronously at multiple skills and/or both therapy and academic IEP goals within a single gameplay session.
It’s funny how simple it seems in hindsight to look back on the process. This is one of the most exciting things about AI tools for me. They can help me create something painstakingly detailed and time-consuming, saving me immeasurable time and prompting me to consider things I would not have. At the same time, they trigger deep metacognition and critical thinking; they are thus a strong impetus to expand my knowledge base to check the work, reframe it, and redirect when necessary. It’s also profoundly affirming when something exceptional emerges, because it is the depth and detail of my prompting that shapes such remarkable outputs. Since those prompts stem from my expert suppositions, this often confirms that, “Yes, I actually do know a thing or two about this!”
What are next steps for EdQuestGPT?
Frankly, I am at a challenging point in the development process. The variant I am working with now would, I suppose, be version 4.0.2. I have what I believe to be a working model that is consistent in its choices of therapeutic or academic retrieval practice sessions, provides a narrative arc appropriate for a role-playing game tailored to the interests of the user, and is able to collect and track student progress data, giving the practitioner an output that can help guide future in-person or AI sessions. However, it is very clear to me that in the near to medium term, this needs integrated cloud storage, complete and secure user authentication, a database of visual elements that the integrated AI can retrieve in real time, and massive upgrades in UI/UX. These are pretty significant developer projects in a classical sense, but as I mentioned, I’m currently learning to code with AI, and together, we have walked through creating a Flask environment, the Python work necessary to integrate EdQuestGPT with the Google Firebase API, and some basics of gameplay.
In the long term, I’m looking to move everything to a standalone iOS/Android application with engaging visual elements and a user-friendly UI/UX for improved gameplay. To bring an application of this complexity to the enterprise level, a significant amount of developer time is required, at least classically. Given the rise of “vibe-coding,” though, maybe I won’t need quite so much. At this point, given my outlook on AI as an aide for my own cognitive work, and the recognition that it’s likely to produce a lower-quality application, I’m not counting on completely automated coding help from AI models. To that end, I am looking into grant opportunities that could help me fund some dev time or connect with developers who may want to contribute to a worthy cause.
From that point, I’m hoping to scale. In its final functional form, EdQuest could be a difference maker beyond my local autism community. It is a tool that could be leveraged first nationwide here, in Cyprus, and likely beyond. I also believe this would be a fantastic tool to deploy in a Virtual Reality/Augmented Reality modality, with which I’ve done extensive research for special needs populations as well, but that is a dream, for now.
Beyond EdQuestGPT, what potential do you see in AI for reaching students with special support needs?
I believe it holds spectacular potential for neurodivergent students, but not only that population. AI interventions can aid a broad array of developmental and behavioral issues. I think that the way I’ve created EdQuestGPT could be a guide for creating other similar applications. If the AI models are purposefully and carefully trained on evidence-based practices, they should provide outputs that are, at best, highly efficacious and reliable and, at worst, non-harmful. I know that isn’t the awe-inspiring, outcome-enhancing result we’d like, but it is the line we draw in good academic research.
However, the future does seem bright for AI used for special needs projects. As mentioned, VR/AR is a ripe area for integrated AI learning tools. These can be specifically designed for students with cognitive or sensory challenges and provide opportunities for multisensory skill development in a way that theoretically will help with transferability to the real world.
AI’s ability to monitor progress and track real-time analytics is also likely to be increasingly harnessed for predictive and/or preventative interventions, offering behavioral insights that can identify psychological distress or environmental triggers for problem behaviors and perhaps take preventative action.
This same analytical ability will allow greater multimodal interface capabilities. This means learning or therapeutic applications could leverage speech, haptic, gesture, and perhaps even emotional recognition to create more intuitive learning experiences with maximum accessibility.
The sky truly is the limit.
What challenges remain in making AI-driven interventions more transparent, explainable, and trustworthy for educators and parents, especially when working with children?
This is an issue I worry about, but I admit I don’t have all the answers. While an experienced user, I am by no means an expert at what happens inside the “black boxes” of AI. Experimenting with them and running into their cognitive roadblocks has illuminated for me to a reasonable degree how these algorithms “think,” but even so, it is tremendously complex.
I look at the issues surrounding social media and how what we choose to incentivize can have immense impacts, skewing both positively and negatively. Incentivizing profit and winning in the race toward AGI is problematic, to say the least. While I realize there is a remarkable amount of money, time, and expertise being put into safeguards, including value alignment research, predictive, preventative measures by stress testing systems against adversarial inputs, policy involvement, and academic partnerships, we are essentially shooting first and asking questions later with the current incentivization model.
Overall, accountability is the primary concern, with transparency, of course, being central to it. However, without comprehensive and comprehensible ethical frameworks in place and made public, taking real action toward that accountability will be arduous.
Unfortunately, there is no pause button on this to let all of society grasp what these tools are and what is likely coming. So, one answer could be creating cohorts of collaborators, including technologists, educators, psychologists, and policymakers. These cohorts should all be intimately involved with the development of these ethical, human value-aligned frameworks so that the development of future AI models and tools is well thought out prior to deployment. Legislation will be a part of this, but I worry that most government officials know far less about AI than an average citizen.
I am also committed to the belief that lower-level stakeholders across spheres and populations should receive training produced by these multidisciplinary teams so that a large section of society understands the power of these tools and how to contribute to guiding their future direction. A well-informed public will force transparency initiatives in AI development, encouraging accountability among its creators and demanding innovation that prioritizes humanity’s collective well-being above market share. Undoubtedly, the money will come either way, so why not do this thing right?
About:
Aaron has been an educator for nearly 20 years, specializing in technology integration and its benefits for students with special educational needs, particularly the neurodivergent. Having taught in many nontraditional education roles, Aaron has become particularly adept at designing and deploying cutting-edge, student-centered curricula that are highly personalized and targeted toward the full actualization of his students as agentic learners and future change makers.