The Gap: Why Good Intentions Are Not Enough

Ask most world leaders, corporate executives, and policymakers about AI and they will tell you they take it seriously. They will point to ethics committees, responsible AI principles, and frameworks under development. They are not lying; the intention is genuine.

And yet the gap between intention and reality has never been wider.

The expertise problem

Building ethical AI frameworks requires a rare combination of skills: deep technical understanding of how these systems actually work, philosophical grounding in ethics and human rights, legal expertise across multiple jurisdictions, and political experience navigating institutional change. Very few people possess more than one of these skills. Almost no one possesses all four.

The result is that AI ethics — as practiced in most organisations — tends to be either technically naive (producing beautiful principles with no mechanism for implementation) or ethically shallow (producing detailed technical guidelines that ignore the deeper questions about what we actually value and why).

There is no shortage of AI ethics documents in the world. There is an acute shortage of people who can make them real.

The speed problem

Democratic institutions were designed for a different pace of change. A parliament can take years to pass a law. An AI system can be deployed to a billion users in months. By the time legislators understand a technology well enough to regulate it, the next generation of that technology has already arrived.

This is not a reason to abandon democratic governance. It is a reason to invest urgently in building the capacity — technical, ethical, and institutional — that allows democratic societies to govern themselves effectively in a world of rapid technological change.

The coordination problem

AI is a global technology. The data it is trained on crosses borders. The companies that build it operate across jurisdictions. The harms it can cause do not respect national boundaries. And yet the governance frameworks being developed are almost entirely national — and often incompatible with each other.

Without international coordination, the result is a race to the bottom — AI deployed in the least regulated environment and exported everywhere else.

The inclusion problem

The communities most likely to be harmed by unethical AI are often the least represented in the rooms where AI policy is made. People in the Global South. Marginalised communities in wealthy nations. Future generations, who will inherit the systems we build today but have no vote in the decisions we make.

Ethical AI governance that does not centre the voices of those most at risk is not ethical AI governance. It is risk management for the powerful.

This is the gap we are working to close.

It is real. But it can be closed, if we start now.