How might we create policies and programmes that enable the use of AI in education while protecting learners and national priorities?

Realising the benefits of AI in education, while managing its risks, calls for collaboration across public, private, and civic actors, supported by strong public oversight and safeguards (Molina et al., 2024; World Bank, 2024). As ‘Big Tech’ companies expand their presence in low- and middle-income country (LMIC) ecosystems (Rahman & Freeman, 2025), it is critical that governments strengthen their capacity to engage in private-sector partnerships on their own terms.
EdTech Hub’s AI Observatory is exploring how ministries of education can align partnerships with national priorities, driving equitable outcomes through purposeful collaboration and collective action.
This week, in Issue No. 25 of #WaypointWednesday, we spotlight benchmarks for AI EdTech, data governance to protect learners, and collaborative regulation through sandboxes.

Early signals
Aligning procurement of private-sector AIEd with benchmarks for education
Ministries need to be able to critically examine the quality, safety, and cost-effectiveness of private-sector AI in education (AIEd) products; however, many lack robust evidence of their impact (Fab AI). In response, we’re seeing benchmarks and evaluations designed specifically for AIEd.
- LMICs – Fab AI’s benchmarks for education: AI benchmark leaderboards designed so that developers have an education-aligned target to measure and improve against, and policymakers can make more informed decisions about which AI systems to partner with (Fab AI).
- Global – EdTech Impact’s assessment framework: EdTech Impact works with schools, analysts, researchers, and experts to assess products on everything from evidence of impact, user experience, and pedagogical design to data responsibility, cybersecurity, and algorithmic fairness (EdTech Impact).
Upgrading data governance to protect learners
To support data-driven, evidence-based strategies while protecting learners, governments must establish strong, transparent, and inclusive frameworks for data governance (Molina et al., 2024; UNICEF Innocenti, 2025).
- Mauritius – EdTech company reporting duties: EdTech companies in Mauritius are required to register as ‘data controllers’, declare their data processing activities, and conduct Data Protection Impact Assessments for high-risk operations involving student data. However, UNICEF Innocenti’s research suggests very few countries have the resources to oversee these duties (UNICEF Innocenti, 2025).
- Republic of Korea – Ethical Principles for Artificial Intelligence in Education: The Korean government set out 10 specific goals and guidelines to support the ethical use of AI in education, including that data should be collected only where this aligns with educational goals, and that personal information and privacy must be protected during data processing (Republic of Korea Ministry of Education, 2022).
Shaping regulation collaboratively through sandboxes
Regulatory sandboxes are formally supervised environments in which governments engage innovators to test tools that challenge existing legal frameworks, in order to understand risks and safeguards and help shape regulation (OECD, 2023).
- Kenya – The Communications Authority’s ICT Regulatory Sandbox: The first admitted participant developed an offline-focused microserver aimed at addressing content distribution challenges in Kenya’s basic education sector. The Datasphere Initiative, which featured the sandbox in a recent report, observed that a key challenge has been limited public awareness and understanding of the sandbox concept (Communications Authority of Kenya; Datasphere Initiative, 2025).
- India – AI Governance Guidelines: India’s national AI governance guidelines, published this month, recommend piloting sector-specific regulatory sandboxes as part of their action plan (Government of India Press Information Bureau, 2025).
Reflections:
- Current regulations on access to children’s data generally allow their data to flow from public education systems to private EdTech companies, but not the reverse. There are typically no requirements for EdTech firms to share the data they collect with public authorities or academics. Consequently, civil society and researchers often cannot monitor EdTech’s influence on education, assess its impact on children’s rights, or fulfil oversight roles in shaping educational directions (UNICEF, 2025).
- While data protection policies may be in place, education is consistently among the sectors most targeted by cyberattacks: systems are often outdated, cybersecurity budgets are limited, and sensitive data makes an attractive target for high ransom demands. A multi-country Sophos study found that 63% of lower education organisations were hit by ransomware in 2024 (Mahendru, 2024). This is a major concern as students’ data are increasingly being tracked.
Swipe for a quick take 👇🏽
We’d love to hear from you! What’s been shaping your thinking on AI? Drop your thoughts (and reading recommendations) in the comments. Explore more from EdTech Hub’s AI Observatory.
EdTech Hub’s AI Observatory is made possible with the support of the UK’s Foreign, Commonwealth and Development Office.
