Right now, education systems are being asked to do something fundamentally tough. They are being asked to run a hundred metre dash and a marathon at the same time.
AI is unfolding at sprint speed. New models appear overnight, tools evolve weekly, and students are already questioning what their future jobs will look like in a world where machines can perform many of the tasks they are training for. Teachers are grappling with how to get the most from new tools while policymakers feel pressure to respond quickly. The urgency is palpable, and the instinct is to move fast, set benchmarks, define quality standards, and put guardrails in place before the ground shifts again.
And yet systems change has always been a marathon. Building trust with teachers takes time. Reforming procurement processes takes time. Developing national capacity, reshaping incentives, and redesigning accountability structures… they all take time! These are not startup problems; they're institutional ones. They require stamina rather than speed because they are embedded in culture, governance, politics, and lived realities.
In February we were at the India AI Impact Summit presenting our work from the AI Observatory, a programme supported by FCDO. This was the largest Global South gathering to date on artificial intelligence. What made the Summit so revealing was how often these two logics, speed and endurance, collided. Much of the conversation gravitated toward benchmarking and evaluation, reflecting understandable attempts to stabilise something that is still fundamentally in motion. Education has long relied on a rhythm of “get it right, then scale.” Perfect the model, generate the evidence, and then roll it out at pace. But AI does not wait politely for that cycle to complete. By the time one evaluation concludes, the underlying model may already have changed. A procurement framework designed for stability suddenly finds itself negotiating with tools that update by the month, or even by the day.
The result is a kind of institutional whiplash. We are applying marathon muscle memory to sprint terrain; optimising processes for certainty while operating in conditions defined by uncertainty; and commissioning multi-year impact evaluations while classrooms are already iterating in real time. And in many cases, we are talking about upgrading existing systems (making teachers’ lives a little easier here, shaving off minutes there) when the deeper question is whether those systems need to be transformed altogether.
Running at two speeds is exhausting, not because either speed is wrong, but because they demand different capacities. A sprinter trains for explosiveness, rapid feedback, and short bursts of intensity. A marathon runner trains for pacing, nourishment, and the ability to endure discomfort over long stretches. In the age of AI, education systems need both. They need the courage to test quickly, to run rapid evidence cycles, to accept that not every experiment will work. At the same time, they need the discipline to build long-term capability, to invest in teachers, to rethink procurement, and to grapple seriously with questions of sovereignty and power.
This is where the human dimension becomes impossible to ignore. As Paula Ingabire, Rwanda’s Minister of ICT and Innovation (MINICT), put it, “I want to be at the table where products are made — not where products are delivered.” That sentiment cuts to the heart of the sprint–marathon tension. If teachers remain recipients of finished products, systems may sprint in the short term but they will not endure. If teachers are brought into the design and decision-making process, there are trade-offs (time out of classrooms, imperfect pilots, opportunity costs), but there is also the possibility of building something that lasts. Teachers-in-the-lead is not some romantic slogan; it is a structural necessity if we want systems capable of adapting over time.
There is also something sobering about acknowledging that no one signs up for a marathon because it sounds easy. People run 26 miles because they want to push themselves out of their comfort zone and cross that finish line. The same is true here. If the framing is sovereign AI, or equitable access, or genuinely transformed learning systems, then the pain and effort are part of the bargain. What we cannot do is pretend that this will be comfortable. Sprinting without endurance leads to burnout. Enduring without adaptation leads to irrelevance. The challenge is learning how to do both without collapsing under the strain.
For governments that means bringing different departments together to collaborate on a holistic approach to AI instead of policies and directives that compete or contradict each other (moving slow) while also experimenting in a way that’s “good enough for now and safe enough to try” (moving fast). For funders that means taking the time to build investments and pipelines with other funders instead of competing to fund the same solutions (moving slow) and building long-term partnerships that trust the ventures and programmes they back even when they go through rapid iterations (moving fast). For all of us, it means both pauses to reflect and hypothesise (moving slow) and spaces for trying and learning in real-time and in the real-world (moving fast).
It means recognising that beneath all the talk of models and benchmarks, this is ultimately about people, about who gets to decide, who gets to build, and who gets to run the race. Like I always say, “it’s never the tech, it’s always the people”.