
Beyond Buzzwords—Rules of the Game: Governing AI and EdTech in the ASEAN Region

In this episode, we’re diving into one of the most pressing questions in education today: How do we govern AI responsibly, inclusively, and sustainably? AI is advancing at a remarkable speed. And while the opportunities are exciting, the risks—from privacy concerns to inequitable access—are impossible to ignore. Across the region, governments are racing to craft policies that protect learners without stifling innovation.

This episode draws from the expertise of SEAMEO INNOTECH’s Centre Director, Dr. Majah-Leah V. Ravago, and EdTech Hub’s Asia Lead, Haani Mazari. Throughout the discussion, we will unpack how governments are responding to the rapid advancements in the AI space, what inclusive AI governance looks like, and how initiatives like the Regional AI Framework may guide the region forward.

Watch the episode

Read the play-by-play of the conversation below:

Question 1: In a world where AI is moving faster than policy, how are governments in the region responding? Who is setting the rules, and who do you think is getting left behind?

Dr. Ravago began the conversation by noting that Southeast Asia's readiness varies widely, reflecting existing development gaps. Countries like Singapore and Malaysia are far ahead, while others, such as the Philippines and Brunei, are still formulating national AI frameworks. She notes that governments remain the primary rule-setters, but many are struggling to catch up, especially since universities had already begun creating their own internal guidelines even before national policies took shape. AI regulation continues to evolve alongside the technology itself, with each ministry responsible for integrating AI into its own sector, such as the Department of Education in the Philippines, which is actively engaging with SEAMEO INNOTECH.

Haani builds on the above points by highlighting that while governments lead AI policy, a critical issue is which part of government takes ownership, since most national AI roadmaps are intentionally sector-agnostic and often emphasise economic growth and industry over education. She stresses the importance of initiatives like SEAMEO INNOTECH’s regional AI framework for education, which elevates education as a dedicated focus area rather than an afterthought in broader AI policy documents.

Question 2: How do you see policy keeping up with the reality of AI use in classrooms, and what do you think governments and schools can do to ensure that AI is being used safely and responsibly?

Dr. Ravago explains that long before national AI frameworks were formalised, universities had already begun creating their own internal guidelines for AI use—but these early rules, which often tried to ban AI outright, quickly proved unrealistic and costly to enforce. She argues that strict regulation is difficult with a technology that evolves so rapidly, and over-regulation risks stifling innovation. Instead, she advocates for adaptable guidelines that still uphold essential protections, especially around data privacy, ethical use, and safeguarding children as a vulnerable population in education. She also stresses the need for regulation that prevents monopolies, since unequal access to AI tools can widen disparities for students in remote or low-income communities. For her, the balance lies in creating flexible frameworks rather than rigid restrictions.

Haani expands the conversation by emphasising the centrality of data privacy, noting how inaccessible and overly complex the terms and conditions of AI providers often are—even for adults—let alone for schools or students. She argues that the debate between regulation and guidelines must consider what is being regulated, especially when safety concerns may require firmer boundaries than innovation-focused areas. For Haani, responsible AI use is inseparable from equitable use: marginalised learners face greater risks, and current assumptions that AI should directly serve students can deepen inequality, because only well-resourced schools can fully benefit. She highlights alternative models, such as Pakistan’s pilot that used AI to generate lesson plans and reduce teacher workload, showing how AI can strengthen systems around learners rather than exacerbate gaps. In her view, responsible AI in education must shift mindsets toward inclusivity and system-level support.

Question 3: What does inclusive regulation look like in practice, and how can we ensure it remains relevant in an ever-changing AI landscape? In particular, how do we ensure it doesn't just serve the most connected and well-resourced schools, and that the urban-rural divide is also bridged?

Dr. Ravago explains that inclusive AI regulation must start with fairness and equity, but it cannot be viewed solely through the lens of education. Access to AI depends on broader ICT infrastructure, and in countries like the Philippines, rural and remote areas still lack basic connectivity. Without stable internet access or devices, learners cannot benefit from AI-enabled tools, regardless of what education policies say. She highlights the newly passed “Konektadong Pinoy” law, which aims to bring data connectivity to the last mile, as a crucial first step in bridging the urban-rural divide. For her, inclusive regulation requires recognising that education, technology, and national development are interconnected, and that AI access can only be equitable when foundational infrastructure is in place.

Haani builds on the infrastructure argument by raising concerns about whose data and knowledge systems AI tools are actually built on. She notes that AI models often rely on English-language, Global North–dominated datasets, which risks marginalising local languages and cultural perspectives across Southeast Asia. She also points to emerging approaches—such as Malaysia’s DELIMa platform, where AI pulls from Ministry-approved learning resources—to illustrate different ways countries are grappling with content filtering and data provenance. For her, inclusive regulation must consider not just access to infrastructure, but access to culturally and linguistically relevant content, ensuring that AI tools reflect diverse contexts and do not inadvertently reinforce inequalities in information access.

Returning to the discussion, Dr. Ravago notes that in the Philippines, language is less of a barrier because most educational materials are in English, but this is not the case across the region. She points to countries like Thailand that are building localised AI systems capable of processing content in their own languages—a necessary step since AI can only produce meaningful outputs from the data it has been trained on. She also highlights another layer of inequity: the shortage of local AI developers and data scientists, which drives up costs and makes it harder for small companies or institutions to adopt AI. From an economic perspective, she argues that expanding the AI-skilled workforce is essential for lowering costs, improving access, and ensuring that AI development is not concentrated among a few well-resourced actors.

Question 4: We know that SEAMEO INNOTECH is currently developing a regional AI framework to guide the development around AI in the region. What are the toughest trade-offs or challenges that you see policymakers facing? Is it a question of innovation versus oversight, local priorities versus global frameworks, or is it something else?

Dr. Ravago highlights that the biggest challenge in creating a regional AI framework is the wide variation in priorities, capacities, and bureaucratic processes across Southeast Asian countries. Some ministries are already working on AI, while others are focused on more basic education needs, making it essential for the framework to function as a flexible policy guide rather than a rigid mandate. She stresses the importance of co-ownership, with ministries directly involved, so the framework is not seen as externally imposed. Another major issue is the mismatch between rapidly evolving AI technologies and slow-moving government systems. To address this, she argues that national AI policies must embed adaptability—clear review cycles and feedback loops—so policies can evolve as technology changes and withstand shifts in political leadership. Looking ahead, she emphasises that the regional framework must be followed by careful contextualisation for countries with different levels of readiness, especially low-income countries, balancing foundational reforms with the need to keep pace with AI developments.

Haani emphasises that policymakers face a core tension: each country has different priorities, concerns, and contexts surrounding AI in education, making regional harmonisation complex. She agrees that adaptability is key, pointing to the Philippines’ plan to update its AI guidance annually as an example of governance designed to keep pace with fast-moving technologies. For her, the harder task is not drafting an AI policy but building a learning governance system—one that iterates, reviews, and stress-tests policies through sandboxes and distributes responsibility across ministries, industry, and schools. Drawing from work in the Philippines and Indonesia, she notes a growing regional shift toward evidence-based, iterative policymaking, with ministries more willing to pilot and refine before scaling. This shift, she argues, is essential for aligning education systems with technological innovation.

Question 5: What do you think it takes for governance to remain adaptable and not outdated at the moment it’s written?

Both speakers emphasise that adaptable governance for AI in education requires integration, balance, and sustained buy-in across the whole system—not just within classrooms.

Dr. Ravago highlights that ministries of education must learn to use AI for governance itself, not only for teaching and learning. She points to examples from the Philippines, where AI-driven use cases—such as SIGLA (an AI system that analyses student health data to inform feeding programmes) and tools that improve scholarship targeting—help ministries make more efficient, evidence-based decisions. These innovations show how AI can strengthen policy design and resource allocation. However, she stresses that such governance applications rely heavily on sensitive student data, making data privacy protections and careful risk mitigation essential.

Haani builds on this by noting that adaptable governance depends on embedding AI within broader policy priorities rather than treating it as a standalone initiative. Policies last longer and remain relevant when they are woven into national priorities—such as foundational learning or distance education—rather than placed in silos that may shift with political cycles. Yet, she cautions that AI policies must also avoid being so rigid that they stifle innovation. The challenge is creating coherence across multiple strategies (AI, EdTech, foundational learning) so they reinforce—rather than compete with—each other.

Both agree that public understanding and trust are crucial. Dr. Ravago adds that engaging the media is an overlooked but important strategy: journalists are often parents themselves and may naturally focus on AI’s risks. Providing them with dedicated learning sessions could help ensure balanced coverage and broader support for responsible, sustainable AI adoption in education.

Question 6: What is one policy or regulatory approach you’d love to see Southeast Asia adopt in the next 5 years?

Dr. Ravago expresses a strong desire to see the successful rollout of the regional AI framework currently being developed, followed by meaningful national-level contextualisation. For her, a common regional framework—adapted to each country’s context—would offer much-needed structure and alignment for responsible AI adoption in education.

Haani emphasises the importance of iterative, adaptable policymaking that can evolve alongside rapid technological change. From a regulatory standpoint, she hopes to see far more cross-sector convenings—bringing together ministries, regulators, industry, and child-protection stakeholders—to tackle AI governance collectively rather than in silos. She is particularly interested in the potential for a cross-sector minimum standards framework, especially around data privacy and child protection, noting that issues such as safeguarding, privacy, and platform accountability cut across multiple domains and cannot be addressed by education authorities alone.

Wrap-up question: If AI were a student in a classroom, would it be the rule-breaker, the overachiever, or the class clown, and why?

Dr. Ravago characterises AI as a ‘challenger’: not a rule-breaker, but a presence that pushes teachers to think differently. For her, AI resembles a student who prompts questioning, reflection, and deeper engagement, offering constructive challenge rather than disruption.

Haani offers an almost opposite view, seeing AI not as a challenger but as an overachiever shaped by someone else’s worldview—agreeable, moldable, and unlikely to question underlying ideals. This is why she emphasises the need for a human in the loop: teachers must push learners to think beyond what AI presents, apply knowledge critically, and avoid being confined by AI’s built-in assumptions.

Closing message: Is there anything else, perhaps a final message that you’d like to leave our viewers with?

Dr. Ravago encourages viewers—especially parents, teachers, and students—to keep an open mind about AI. She stresses that while AI brings significant potential benefits, it also carries risks, and the key is to approach it with balance rather than fear or dismissal. Technology, she reminds us, exists to help improve our lives, and we should remain curious about how AI can assist rather than resist it outright.

Haani ends with a simple but powerful message: collaboration is essential. She emphasises that meaningful progress in AI and education will only happen when stakeholders work together.


We hope you’ll join us for upcoming episodes as we continue to unpack what works, what needs to evolve, and how we can build a more equitable and effective education system together.

The next episodes, commencing in January 2026, will unpack conversations around Ethics and AI, Contextualising Global Guidelines, and Evidence Generation with regional experts. Stay tuned as we continue the conversation in 2026.


This episode draws on expertise from the following discussants:

  • Dr. Majah-Leah V. Ravago is an economist and educator with a strong background in leadership and policy research. She is the current Centre Director and Chief Executive of INNOTECH (Regional Centre for Educational Innovation and Technology), a regional centre of the Southeast Asian Ministers of Education Organization (SEAMEO). She has held significant leadership roles, including President and CEO of the Development Academy of the Philippines (DAP) and President of the Philippine Economic Society. Dr. Ravago holds a PhD in Economics from the University of Hawai‘i and recently completed an Executive Leadership Programme on AI and Productivity at INSEAD.
  • Haani Mazari is EdTech Hub’s Asia Lead and Digital Personalised Learning Lead. In her current role, Haani oversees strategic engagements across the Middle East, South Asia, and Southeast Asia, and contributes to our global advisory function, supporting decision-makers on artificial intelligence and on delivering education in emergencies.

Statement of disclosure: This blog was developed with support from generative AI. A transcript of the recorded session was generated, and the content was organised into question-based segments, which were then provided to an AI tool to assist in the drafting and structuring of this piece.

Acknowledgements

Thank you to colleagues from SEAMEO INNOTECH, Dr. Majah-Leah Ravago and Jennica Dalisay, and to all those at EdTech Hub who supported the development of this third episode of the EdTech Hub Spotlight Series, including Haani Mazari, Neema Jayasinghe, Sangay Thinley, Jazzlyne Gunawan, Sophie Longley, Jillian Makungu, and Laila Friese.


This publication has been produced by EdTech Hub as part of the ASEAN-UK Supporting the Advancement of Girls’ Education (ASEAN-UK SAGE) programme. ASEAN-UK SAGE is an ASEAN cooperation programme funded by UK International Development from the UK Government. The programme aims to enhance foundational learning opportunities for all by breaking down barriers that hinder the educational achievements of girls and marginalised learners. The programme is in partnership with the Southeast Asian Ministers of Education Organization, the British Council, the Australian Council for Educational Research, and EdTech Hub.

This material has been funded by UK International Development from the UK Government; however, the views expressed do not necessarily reflect the UK Government’s official policies.

Connect with Us

Get a regular round-up of the latest in clear evidence, better decisions, and more learning in EdTech.

EdTech Hub is supported by

The findings, interpretations, and conclusions expressed in the content on this site do not necessarily reflect the views of the UK Government, the Bill & Melinda Gates Foundation, the World Bank, the Executive Directors of the World Bank, or the governments they represent.

EDTECH HUB 2025. Creative Commons Attribution 4.0 International License.