
Beyond Buzzwords: Evidence Generation – What Does It Take for Evidence to Shape Decisions

12 Mar 2026

Education systems are generating more learning data than ever, but it still does not reliably shape what happens next. This episode asks a practical set of questions: what evidence is most useful for improving foundational learning, who needs to use it to make decisions, and what helps it travel into policy discussions, budget choices, and programme design. It also explores how results can be communicated across different contexts, so they support improvement, rather than being read as a simple ranking of schools, provinces, or countries.

In this episode, we draw on insights from Alejandro Sinon Ibañez (Ali), SEA-PLM Programme Manager and Policy Specialist at the SEAMEO Secretariat, and Jacqueline Cheng (Jackie), Research Fellow at the Australian Council for Educational Research (ACER). The episode has a special focus on SEA-PLM, following the recent launch of the SEA-PLM 2024 data, and draws examples from this regional learning assessment, which provides comparable evidence on foundational learning across Southeast Asia.

What is SEA-PLM and why does it matter?

The Southeast Asia Primary Learning Metric (SEA-PLM) is a regional, large-scale assessment and capacity-building programme. It is designed by, for, and with Southeast Asia. The instruments, tools, and methods reflect the unique contexts of education systems across the region, while sharing a common goal: monitoring student learning outcomes, identifying factors that affect student learning, and strengthening systems to address gaps.

SEA-PLM was created to capture learning progressions and to understand how school environments support skill development. The metric was designed to go beyond assessment and to generate evidence that policymakers, teachers, and stakeholders can use to make informed decisions, for example by explaining why learning outcomes vary. Learn more about SEA-PLM and its 2024 results here: SEA-PLM 2024

Watch the Episode

Read the play-by-play of the conversation below:

Question 1: Education systems generate more data than ever — but it doesn’t always seem to shape policy or practice. Why is it so hard to translate evidence into decisions?

Ali began by observing that ‘education systems are data-rich but decision-poor’, noting that the issue is not a lack of evidence but how evidence is used to inform everyday decision-making by policymakers, teachers, parents, and students. Policymakers juggle political cycles, competing priorities, and budget constraints. So while SEA-PLM provides robust data, uptake depends on intentional translation, including framing evidence around the real policy questions that countries are already asking. SEA-PLM is not just about data collection; it is about using evidence to drive meaningful change in foundational learning.

Jackie agreed that there is sometimes “too much data”, which can overwhelm decision-makers: people struggle to identify key messages or where to start. She noted that translating data into actions or meaningful messages also takes time, which does not always align with political cycles or competing priorities. Another challenge is that decisions are often made centrally, while implementation happens at provincial or district levels, where contexts vary widely. Unless those variations are considered, evidence is unlikely to translate effectively into practice.

Question 2: In your experience, what helps research evidence actually get taken up and used — in policy decisions, budgets, and programme design?

Jackie shared an example from the ASEAN-UK Supporting the Advancement of Girls’ Education (SAGE) programme, noting how it brings together policymakers, researchers, and regional organisations to produce studies that are explicitly designed to inform policy.

She highlighted, in particular, a policy dialogue organised to launch research on out-of-school children, where the findings fed directly into national inclusion strategies. This kind of intentional design helps evidence shape action.

Ali added that evidence production is often treated as the endpoint, rather than being embedded into planning and budgeting so that findings inform policy decisions and programme design.

He explained that roles in the evidence ecosystem are often fragmented: researchers focus on rigour, implementers on delivery, and policymakers on priorities. No one owns ‘translation’. SEA-PLM shows that evidence use works best when responsibility is shared across all levels, from the national system to the classroom. Intentional translation ensures that evidence cascades into real improvements in learning outcomes.

Question 3: What are the most pressing challenges in gathering evidence across such a diverse region? 

Jackie identified the practical difficulties of reaching rural and remote communities as a major challenge. These communities often include ethnic minority populations who speak languages other than the test language, and travel can involve long journeys, river crossings, floods, and landslides.

Language is another serious challenge. When the test language is not a child’s mother tongue, we may not fully capture their capabilities. We want assessments to show what students can do, not what they can’t.

Ali highlighted the challenge of communicating results that may be politically sensitive, especially from a regional perspective, where systems vary greatly in capacity and resources.

He noted that SEA-PLM is not about ranking systems but about strengthening them. Strong national engagement and careful framing are therefore essential, and co-developing reports with countries helps ensure that findings are contextualised and responsibly interpreted.

Question 4: How can learning data be communicated without being perceived or presented as encouraging rankings?

Ali explained that in the case of SEA-PLM, findings are always presented in context. Data without context erases responsibility. When disparities are highlighted—by gender, location, or socioeconomic status—we also explain why these inequities exist and how they relate to broader structural factors. Responsible communication builds trust and supports action-oriented interpretation, helping countries design targeted interventions.

Jackie underlined how contextual questionnaires are crucial. For example, socioeconomic status strongly predicts learning outcomes, but SEA-PLM also identifies academically resilient students from disadvantaged backgrounds. Understanding what helps these students succeed, such as parental support or reduced domestic responsibilities, can inform effective policy responses.

Looking ahead: In an ideal world 5–10 years from now, how would evidence-informed decision-making change education systems and learners’ experiences?

Jackie shared that she would love to see faster responsiveness in using data for decision-making and targeted responses, such as targeted teacher training in areas where students are struggling.

Ali hopes that decision-making would be anticipatory rather than reactive. Foundational learning would receive optimal funding, targeted support would reduce learning gaps, and interventions would be scalable and sustainable. We would see more academically resilient learners and systems focused on impact at scale, particularly for the most disadvantaged.

Final question: If the region could invest in addressing one major evidence gap, what should it be?

Ali pointed to the importance of ensuring that regional evidence is treated as a public good, not a compliance exercise. This could mean investing in transparent, shared data systems that allow countries to learn from one another, recognising that learning is universal even though systems differ.

Jackie suggested investing in three things: 1) what actually works in real classrooms, especially overcrowded and multi-grade ones; 2) longitudinal data on learning trajectories; and 3) practical solutions when resources are limited.

This episode draws on expertise from the following discussants:

  • Alejandro Ibañez serves as the SEA-PLM Programme Manager and Policy Specialist at the SEA-PLM Regional Secretariat within the Southeast Asia Ministers of Education Organization (SEAMEO) Secretariat. He manages the flagship programme, Southeast Asia Primary Learning Metrics (SEA-PLM), the first regional large-scale learning assessment and capacity-building programme designed to monitor and improve student learning outcomes. His primary focus is on leveraging systems assessment to drive improvements in basic education. Through the SEA-PLM surveys, he works closely with SEA countries to assess and strengthen policy and practice both at the system and school levels.
  • Jacqueline Cheng is a Research Fellow at the Australian Council for Educational Research (ACER). She has been the coordinator for the Southeast Asia Primary Learning Metrics (SEA-PLM) since 2017, working closely with SEAMEO, UNICEF, and the Ministries of Education in the seven participating countries. Jacqueline is a data-driven researcher who hopes to achieve more equitable learning opportunities and improved learning outcomes for all.

Episode 1: Beyond Buzzwords — What Do Real Partnerships Look Like?

Episode 2: Beyond Buzzwords — Who Pays? The Future of EdTech Financing

Episode 3: Beyond Buzzwords—Rules of the Game: Governing AI and EdTech in the ASEAN Region

Episode 4: Beyond Buzzwords: What AI and Digital Safety Really Mean for Learners

Statement of disclosure: This blog was developed with support from generative AI. A transcript of the recorded session was generated, and the content was organised into question-based segments, which were then provided to an AI tool to assist in the drafting and structuring of this piece.

Acknowledgements

Thank you to colleagues from the Australian Council for Educational Research (ACER), Jacqueline Cheng, Jeaniene Spink, and Jess Hennessy; the SEA-PLM Regional Secretariat, Alejandro Sinon Ibañez; and all those at EdTech Hub supporting this work, including Neema Jayasinghe, Sangay Thinley, Jazzlyne Gunawan, Sophie Longley, Jillian Makungu, and Laila Friese, for developing this fifth episode of the EdTech Hub Spotlight Series.

This work is part of the portfolio of projects delivered by the ASEAN-UK SAGE programme. The ASEAN-UK SAGE programme is delivered by the British Council and SEAMEO Secretariat, in partnership with EdTech Hub and the Australian Council for Educational Research (ACER). ASEAN-UK SAGE is an ASEAN cooperation programme funded by UK International Development.
