AI Inequality: Why the Next Great Divergence is Also a Safety Crisis
Introduction
Artificial intelligence is often hailed as a democratiser of knowledge, opportunity and skill. It promises better diagnostics, personalised learning, and productivity gains that could lift millions of people out of poverty. That picture, however, is incomplete. A 2025 report from the United Nations Development Programme, The Next Great Divergence: Why AI May Widen Inequality Between Countries, tells a more complicated story. It cautions that, without deliberate intervention, AI will not narrow inequality between countries but will instead concentrate benefits in already advantaged economies while amplifying risks in more vulnerable ones.
Two Gaps, One Crisis
AI will not arrive on a level playing field: it is being deployed into a world already marked by deep inequality. The report identifies two structural asymmetries that will shape how its effects play out: a capability gap and a vulnerability gap.
Countries with strong connectivity, skills, compute and regulation are positioned to capture a larger share of the AI dividend. Others face heightened risks of job losses, information exclusion, misinformation, and the knock-on effects of rising energy and water demand.
The Asia-Pacific region sits at the centre of this transition, home to more than 55 per cent of the world's population. More than half of global AI users are now in the region, but starting positions vary widely. Countries such as Singapore and South Korea are investing heavily in AI infrastructure, while others are still struggling to provide basic broadband. In some high-income economies, two in three people already use AI tools; in most low-income countries, adoption is lower. These figures matter because they describe not just a technology gap but a structural divide between who shapes AI and who is shaped by it.
When Inequality Becomes a Trust Problem
Trust in any technological system rests on three pillars: transparency, fairness and accountability. AI inequality undermines all three.
When governments with limited technical capability deploy imported AI systems, they often have little insight into how those systems operate, how they were built, or what biases they carry. Citizens cannot genuinely trust decision-making systems that are black boxes, especially when domestic institutions lack the expertise to interrogate them.
Data exclusion undermines fairness as well. AI systems trained on datasets that under-represent rural populations, linguistic minorities, and women will systematically perform worse for those groups. Because women in South Asia are far less likely to own a smartphone, they are under-represented in digital data, and consequently in any AI system trained on it.
Safety Risks Are Not Evenly Distributed
This trust deficit has a direct safety dimension. Countries with weaker information ecosystems are more exposed to AI-generated misinformation that can distort public discourse, sway elections, and incite violence. They also have the least capacity to detect, label, or counter such content.
The same holds for labour markets. Technologies that accelerate marginalisation and destabilise governance also deepen human insecurity, especially for workers in the informal economy with little social protection. The UNDP report notes that female employment is disproportionately exposed to AI-driven disruption, adding a gendered dimension to an already unequal picture.
Infrastructure risks are skewed too. Large AI systems can place heavy energy and water demands on the countries that host data infrastructure without delivering an equivalent economic return. The environmental costs are local while the profits flow elsewhere: AI's risks cascade downwards while its benefits concentrate upwards.
The Governance Gap and Regulatory Arbitrage
Governance may be the most consequential gap of all. Only a handful of states currently have comprehensive AI regulatory frameworks. The result is a patchwork landscape in which safety standards vary dramatically and companies have an incentive to deploy systems in jurisdictions with weaker oversight.
At root this is a capability problem. As Philip Schellekens, UNDP chief economist for Asia and the Pacific, puts it, countries that invest in skills, computing power and sound governance structures will gain; the rest risk being left far behind.
This divergence has consequences beyond national borders. When the same global platforms offer users in different regions vastly different levels of safety and fairness, the idea of shared digital norms becomes untenable. Confidence in AI systems erodes not just locally but globally.
Way Forward
The UNDP report is clear that divergence is not inevitable. Averting it, however, requires treating AI governance as a development problem rather than merely a technology problem.
Governance capacity must be built, not assumed. That means helping countries establish regulatory frameworks and institutional expertise, and facilitating cross-border collaboration on standards. It may also mean treating some elements of AI as public goods, with shared models and open standards that prevent a handful of firms or states from accumulating too much power.
The UNDP frames the challenge simply: in the end, it is the world's people, not machines, who must decide which technologies to prioritise and how best to use them.
Conclusion
AI inequality is often framed as an economic divergence story. But its implications run deeper. It reshapes who is protected, who is visible in data, and who has the power to challenge harmful outcomes. The risk is not just that some countries fall behind economically. It is that the global digital ecosystem fragments into zones of high trust and low trust, high protection and low protection. The choices made now will determine which path prevails. AI can reinforce existing divides or help bridge them.
But that outcome will not be decided by the technology itself. It will be decided by how societies choose to distribute access, power, and responsibility in the systems they build.
References
- https://www.undp.org/sites/g/files/zskgke326/files/2025-12/undp-rbap-the-next-great-divergence_1.pdf
- https://www.undp.org/asia-pacific/press-releases/ai-risks-sparking-new-era-divergence-development-gaps-between-countries-widen-undp-report-finds
- https://www.undp.org/asia-pacific/blog/next-great-divergence-how-ai-could-split-world-again-if-we-dont-intervene
- https://www.aljazeera.com/news/2025/12/2/ai-threatens-to-widen-inequality-among-states-un
- https://www.undp.org/asia-pacific/next-great-divergence
- https://www.eco-business.com/press-releases/ai-risks-spark-new-era-of-divergence-as-development-gaps-widen-undp-report/