From Principles to Practice
Why the AI Impact Summit Signals a Value-First Shift in Global Governance
Why it matters: After years of AI summits producing safety frameworks and principles, India is hosting one explicitly focused on "impact" – measurable outcomes that justify investment. For organisations struggling to demonstrate AI ROI, this signals a broader shift in how governments and industry will evaluate AI success: not by capability, but by value delivered.
The expensive conversation we keep having
AI summits have become a reliable feature of the international calendar. The UK hosted one in 2023. Korea followed. France had its turn. Each produced communiqués, frameworks, and commitments to safety principles that governments could reference in press releases.
What they have not produced, with any consistency, is a framework for measuring whether AI investments actually deliver value.
India’s upcoming “Impact Summit” represents an explicit break from this pattern. The name itself is a statement: not safety, not innovation, not ethics. Impact. Measurable outcomes that matter to people and organisations.
For UK boards watching AI budgets grow while ROI remains elusive, this shift in global conversation matters more than any safety framework.
The value gap is real
Consider the operating reality for most organisations. AI spending is increasing. Vendor promises are impressive. Pilot projects proliferate. And finance directors keep asking the same uncomfortable question: what are we actually getting for this?
Industry estimates, drawn from benchmark analyses, suggest that more than 75% of AI investments fail to deliver measurable value [2]. That is not a technology problem. It is a value problem.
India’s High Commissioner to the UK, Vikram Doraiswami, framed this directly at TechUK: “It should not only be about spending money on AI. It should be about what you are trying to do and who will it benefit.”
This is a different starting point from most AI governance conversations. Not “how do we ensure AI is safe?” but “how do we ensure AI is worth it?”
“It should not only be about spending money on AI. It should be about what you are trying to do and who will it benefit.” – H.E. Vikram Doraiswami [1]
The space programme lesson
Doraiswami offered an instructive parallel from India’s history: the space programme.
In the 1960s, India investing in space technology was widely mocked. Why does a country with such poverty need satellites? There were images of rocket components transported on bullock carts – an easy visual for those who thought the investment absurd.
India’s answer was practical: what can new technologies do for us in developmental terms?
The space programme was designed for immediate returns. Remote sensing provided information for farming and land-use patterns. As capabilities improved, meteorological inputs reduced the impact of cyclones and tidal surges. Losses from climatic catastrophes are now a fraction of what they were 30 or 40 years ago – directly attributable to space programme investments.
The cost efficiency became legendary. India’s 2023 landing near the lunar south pole reportedly cost $75 million end-to-end. Alfonso Cuarón’s film Gravity cost $100 million. A Hollywood movie, literally more expensive than a moon landing.
Analysis: This is not about celebrating frugality for its own sake. It is about demonstrating that technology investment can be evaluated on outcome-to-cost ratios rather than capability alone. The moon landing was not impressive because it was cheap. It was impressive because it achieved a specific objective at minimal resource expenditure.
What “impact” means in practice
The Impact Summit is organised around seven verticals, according to Doraiswami, each bringing together government, academic, and business partners. The structure deliberately avoids the traditional summit format of segregated discussions – ministers in one room, academics in another, businesses in a third.
Instead: regulators, innovators, and deployers of AI systems in the same conversations, focused on specific use cases.
This is not conceptually revolutionary. But it is structurally different from summits designed to produce principles rather than implementations.
The verticals include “Safe and Trusted AI” – covering standards for safety testing, auditing, governance, and transparency frameworks. But notably, this is one pillar among seven, not the entire agenda. Safety is necessary. It is not sufficient.
The multilingual proof point
If you want to understand what “impact-first” AI looks like in practice, consider India’s approach to language.
India has 22 officially recognised languages. English speakers number around 140 million – only 10% of the population [1]. An AI strategy that works only in English is, by definition, an AI strategy that excludes 90% of citizens.
The response has been to train AI models across languages. According to Doraiswami, 22 Indian languages now have functional AI solutions. The Bhashini platform provides translation of government and legislative documents. Court judgements – previously accessible only in English or Hindi – can now be read across the country in citizens’ preferred languages.
One concrete example: Marathi-language AI farming software reportedly serves 16 million users in a state of approximately 110 million people [3]. Specialised inputs about farming, accessible in the language farmers actually speak.
This is not AI for AI’s sake. It is AI evaluated on whether it reaches people who need it, in forms they can use.
Analysis: For UK organisations, the lesson is not about multilingual deployment (though that matters for diverse populations). It is about the evaluation framework. India is asking: does this AI investment improve outcomes for intended users? That question should precede any conversation about safety, ethics, or capability.
The interoperability imperative
There is a second dimension to the impact agenda: interoperability.
Doraiswami drew an analogy to email protocols. Forty years ago, SMTP created an open, common railroad for electronic communication. Different providers, different interfaces, universal connectivity. Everyone could email everyone else.
Then came WhatsApp, Signal, iMessage. Walled gardens where interoperability declined. Convenient, but fragmented.
The question for AI: are we building SMTP, or are we building walled gardens?
India’s preference, expressed through its digital public infrastructure model, is clear: shared rails where everyone can build applications. Government owns the infrastructure. Private sector innovates on top. Competition happens at the application layer, not the platform layer.
If this sounds abstract, consider the UK mid-market organisation evaluating AI governance tools. The current choice is typically between expensive enterprise platforms (often US-controlled, rarely interoperable) and DIY approaches (cheap but fragmented). Neither resembles infrastructure designed for shared benefit.
The Impact Summit’s agenda includes discussions about “globally subsidised GPU access” – treating compute as shared strategic infrastructure rather than competitive advantage. Whether this is politically feasible is questionable. That it is being discussed at a major summit is notable.
What the UK-India partnership could mean
The joint India-UK AI centre announced during Prime Minister Starmer’s October 2024 visit could operationalise this agenda.
The proposed model: testing sandboxes where governance frameworks can be prototyped and validated before deployment. Not regulation designed in abstraction, but infrastructure tested against real use cases.
For UK mid-market organisations, this could matter significantly. Participating in sandbox testing provides access to governance frameworks developed collaboratively – without the cost of building proprietary compliance infrastructure.
The value proposition is straightforward: governance as shared infrastructure, reducing the compliance burden for organisations that cannot afford enterprise-scale investment.
Analysis: The question is whether UK government will lean into this opportunity or treat the partnership as diplomatic theatre. The difference will be visible in funding, participation structures, and whether sandbox outputs become accessible to organisations beyond those directly involved.
Risks and constraints
The impact agenda is not without significant challenges.
Verification matters. Many statistics cited – like the 16 million Marathi AI users – come from government sources with an incentive to present favourable narratives. Independent verification is essential before treating these as established benchmarks.
Impact can be defined self-servingly. Any government can claim its programmes have impact. The question is whether impact is measured against objectives that matter to citizens, or objectives that make programmes look successful. Rigorous, independent evaluation frameworks are harder to build than principles documents.
Cultural and institutional differences. What works in India’s governance context – strong central coordination, developmental imperatives, different privacy expectations – may not transfer directly to UK contexts. Learning from the model requires adapting it, not copying it.
The Global South framing cuts both ways. India positions itself as speaking for developing nations. But India is also a nuclear power, space power, and aspiring superpower. Its interests do not automatically align with smaller developing countries, and “Global South” solidarity should be evaluated critically.
Energy constraints remain real. Doraiswami acknowledged that renewable energy has limits, particularly around storage. India is pursuing green hydrogen and nuclear expansion to support AI compute growth. Without solving the energy question, AI ambitions – however well-intentioned – face hard physical constraints.
What to do next
For boards and executives:
Before your next AI investment decision, require a clear answer to: “What specific outcome are we trying to achieve, and how will we measure whether we achieved it?”
Benchmark AI investments not against peer spending, but against value delivered per pound invested. India’s space programme model – capability achieved relative to resources expended – is instructive.
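The outcome-to-cost framing above can be reduced to a simple calculation. The sketch below is a minimal illustration with hypothetical initiative names and figures (none of these numbers come from the article): each AI initiative is scored on measured value delivered per pound invested, then ranked, so comparisons rest on outcomes rather than peer spending.

```python
# Hypothetical portfolio for illustration only – names and figures are invented.
# Each initiative records measurable value delivered (GBP) against total cost (GBP).
initiatives = {
    "invoice-processing automation": {"value_delivered": 420_000, "cost": 150_000},
    "customer-service chatbot": {"value_delivered": 90_000, "cost": 200_000},
    "demand-forecasting model": {"value_delivered": 310_000, "cost": 120_000},
}

def value_per_pound(entry: dict) -> float:
    """Outcome-to-cost ratio: pounds of measured value per pound invested."""
    return entry["value_delivered"] / entry["cost"]

# Rank initiatives by value delivered per pound, highest first.
ranked = sorted(initiatives.items(), key=lambda kv: value_per_pound(kv[1]), reverse=True)

for name, entry in ranked:
    ratio = value_per_pound(entry)
    status = "delivering" if ratio >= 1.0 else "below break-even"
    print(f"{name}: £{ratio:.2f} per £1 invested ({status})")
```

The hard part, as the article argues, is not the arithmetic but the numerator: “value delivered” must be a measured outcome agreed before the investment, not a post-hoc rationalisation.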
For technical leaders:
Evaluate your AI governance approach against interoperability criteria. Can your frameworks work with others, or do they create lock-in?
Watch the Impact Summit outcomes for sandbox opportunities and governance infrastructure that could reduce your build-versus-buy burden.
For mid-market organisations:
Track the UK-India AI centre developments. Early participation in testing sandboxes could provide governance frameworks without enterprise-scale investment.
Challenge vendors to demonstrate impact metrics, not capability metrics. The question is not what the AI can do. The question is what it will deliver for your specific context.
Disclaimer: This article represents analysis based on publicly available statements from a TechUK event in January 2025. Statistics cited are attributed to the speaker and require independent verification. This does not constitute legal, financial, or professional advice.
If your organisation needs support building AI governance frameworks that deliver measurable outcomes – not just compliance checkboxes – Arkava helps mid-market enterprises turn AI investment into business value.
Contact: engage@arkava.ai