EdTech does not only need proof of impact. It needs proof of trust

When people talk about evidence in EdTech, the conversation is often reduced to one apparently simple question:

Does it work?

It is an important question. But on its own, it is not precise enough. Instead, we need to consider the full complexity of local contexts and stakeholder needs, which requires a series of interconnected questions:

  • For whom does the tool work?

  • In which context?

  • Under which conditions?

  • For which learning or system goals?

  • With what effects on teachers, learners, families, administration and systems?

  • And what kinds of evidence are being used to support the claim?

These questions matter because education technology is not introduced into neutral environments. It enters classrooms, homes, school systems, procurement processes, policy frameworks and public debates. It affects not only learning outcomes, but also teaching practice, data governance, inclusion, trust, workload, institutional capacity and long-term sustainability.

This is why the EdTech sector needs to move beyond a narrow reliance on proof of impact and towards a broader culture of proof of trust.

Why “impact” is necessary, but insufficient

For many years, evidence in EdTech has often been treated as if it were synonymous with impact, and in particular with learning impact. In practice, this usually means measurable learning outcomes, e.g., test scores, attainment gains, progress measures or other indicators that seek to show whether a tool has improved learning.

This kind of evidence matters, and education systems should ask whether technologies support learning. Companies should be able to explain the educational value of what they build. Policymakers and procurement teams should be able to distinguish between claims that are plausible, claims that are demonstrated, and claims that are overstretched.

But learning impact alone cannot answer every question that education systems need to ask.

A product may show positive learning effects in one setting, but be difficult for teachers to integrate into daily practice. A tool may be engaging for learners, but raise concerns about data protection or algorithmic transparency. A platform may scale quickly across markets, but fail to serve learners with different access needs, languages, disabilities or local contexts. A solution may appear efficient from a system perspective, while creating new forms of workload or reducing professional judgement.

In other words, the question is not only whether an EdTech product can demonstrate impact. The question is also whether it can be trusted in the environments where it is being used.

From proof of impact to proof of trust

Proof of trust does not mean lowering standards of evidence. It does not mean replacing rigour with vague claims about good intentions. Nor does it mean that learning outcomes no longer matter. Rather, proof of trust asks for evidence that is credible, context-rich, explainable and socially responsible.

It asks whether evidence helps us understand not only whether a technology appears to be effective, but how it is designed, governed, implemented and scaled. It asks whether the evidence is proportionate to the claim being made. It asks whether the evidence speaks to the decision at hand.

  • For a teacher, this might mean understanding whether a tool fits classroom practice, supports pedagogical goals and works for the learners in front of them.

  • For a policymaker, it might mean understanding whether a product aligns with public values, regulatory requirements, equity commitments and system-level priorities.

  • For a company, it might mean being able to communicate evidence transparently, without overclaiming what early pilots, usage data or customer feedback can demonstrate.

  • For a funder or investor, it might mean distinguishing between market traction and educational value, or between scalability and responsible implementation.

Proof of trust, therefore, shifts the evidence conversation from the single question, “does it work?”, to a more useful set of questions:

  • What does this evidence show?

  • What does it not show?

  • Who was it produced for?

  • What decision does it support?

  • What assumptions shaped how it was generated?

  • How might its meaning change as it moves between stakeholders?

These questions are at the centre of Needs-Based Evidence Mapping, which is a practical framing for understanding what kind of EdTech evidence is needed, by whom, for which decisions, and what that evidence can and cannot reasonably claim.

Evidence is not one thing

One of the reasons evidence conversations in EdTech become difficult is that different stakeholders need to extract very different information from the same piece of evidence.

Educators may be looking for evidence of pedagogical fit, learner engagement, accessibility or workload implications. Companies may need evidence that supports product development, market positioning or procurement readiness. Policymakers may be looking for signals of safety, equity, system fit or public value. Investors may focus on retention, scalability, defensibility and risk.

Problems arise when evidence generated for one purpose is interpreted as if it answers another. A usability pilot may be presented as evidence of learning impact. A compliance document may be treated as proof of ethical practice. Strong user engagement may be interpreted as evidence of educational effectiveness. Rapid adoption may be mistaken for inclusion.

This is not always deliberate. Evidence changes as it travels. Different actors interpret it through different professional logics, incentives and constraints.

Needs-Based Evidence Mapping was developed to make these differences visible. It does not introduce another certification scheme or a single hierarchy of “stronger” and “weaker” evidence. Instead, it offers a way to organise evidence according to purpose.

Towards appropriate questions and a relevant evidence dialogue

The demand for evidence in EdTech is increasing, which is a positive development. But if evidence expectations remain unclear, they can create more confusion than trust.

Companies may feel pressure to make premature impact claims. Schools may be asked to interpret evidence without sufficient support. Policymakers may struggle to compare evidence produced for different purposes. Investors may reward signals that do not necessarily reflect educational value. Intermediaries may be left to translate between actors without shared language or infrastructure.

A more mature evidence ecosystem would not ask every product to prove everything at once. It would ask for evidence that is appropriate to the stage of development, the claim being made, the risks involved and the decision being taken. It would also make space for different types of evidence: research studies, classroom observations, teacher feedback, learner experience, accessibility documentation, safety assessments, implementation data, usage analytics, compliance reports and system-level learning.

The task is to understand what each form of evidence can contribute, and where its limits lie.

Towards more responsible evidence conversations

This shift helps companies communicate more responsibly. It helps educators and school leaders ask more precise questions. It helps policymakers and funders interpret evidence claims without creating inappropriate or premature demands. It helps intermediaries support dialogue across the ecosystem.

Above all, it recognises that evidence in education is not just about proving that something works. It is about understanding whether, how and under what conditions technology can contribute to better, safer, fairer and more sustainable education.

The goal should not simply be more evidence. Rather it should be evidence that is meaningful, interpretable and proportionate to the decisions being made.

Read the full Needs-Based EdTech Evidence Mapping report and follow this series as we unpack what this means for companies, educators, policymakers, funders and ecosystem actors.
