International
AI governance initiatives are highly fragmented and dominated by developed countries. AI technology is largely controlled by a few technology giants, which are likely to prioritize profits over societal benefits, and it can be deployed virtually anywhere, extending its influence beyond borders. Governments should therefore act to establish international guidance on AI development that favours the public interest and promotes AI as a public good. Most developing countries have significant stakes in the future of AI but limited influence over the direction it takes, a gap that risks undermining global AI governance. Addressing it requires multi-stakeholder cooperation to make AI accessible and beneficial for everyone and to foster inclusive innovation in tackling global challenges. A comprehensive global framework for AI should incorporate accountability mechanisms for companies, Governments and institutions. In this report, UNCTAD advocates an AI-for-all approach, addressing infrastructure, data and skills, to steer the technology towards shared goals and values.
Key policy takeaways
A framework for industry commitment – Public disclosure
of AI systems can improve transparency and accountability.
One possible model is the environmental, social and
governance (ESG) framework. An AI equivalent could
involve impact assessments throughout the AI life cycle
and detailed explanations by developers of how AI systems
function. Once shared standards have been established,
certification could shift from voluntary to mandatory
reporting, supported by measures to oversee compliance.
Shared digital public infrastructure – A global shared
facility, for example following the CERN model, can provide
equitable access to AI infrastructure. Governments can also
collaborate with the private sector through public–private
partnerships to expedite the development of digital public
infrastructure (DPI) for AI in local innovation ecosystems.
Tailored DPI systems can offer essential resources and
services to support AI adoption and development.
Open innovation – Open innovation models, such as open data
and open source, can democratize knowledge and resources
to foster inclusive AI innovation. The international community
can benefit from coordinating and harmonizing the valuable but
fragmented open-source AI resources worldwide. Connected
and interoperable repositories with common standards can
enhance the global knowledge base and improve access
through trusted hubs that ensure quality and security.
A global hub – An AI-focused centre and network modelled,
for example, on the United Nations Climate Technology
Centre and Network, can function as a global hub for
building AI capacity, facilitating technology transfer and
coordinating technical assistance to developing countries.
South–South collaboration – Strengthening South–
South cooperation in science and technology, through
building regional innovation hubs and expert networks,
can contribute to enhancing the capacity of developing
countries to address common AI challenges. Provisions for
AI technology and services could be included in existing
trade agreements, while regional institutions can assist in
sharing best practices and developing coherent AI policies.
The need for global AI
governance
Many AI-related issues can be addressed at the national level through well-designed
policies. However, as AI encompasses
intangible goods and services that can
be replicated and deployed virtually
anywhere, its influence extends beyond
borders, necessitating international
collaboration. Ensuring AI as a public good
requires a collective multi-stakeholder
effort to make it accessible, equitable
and beneficial for all, driving inclusive
innovation to tackle global challenges.
AI is set to change the technological,
economic and social landscape,
presenting new opportunities and
risks while requiring stronger global
collaboration, including the following:
• Reshaped economic opportunities –
AI shifts innovation and value creation
towards knowledge-intensive sectors,
reshaping economic opportunities and
power relationships in a multipolar world.
It is also transforming traditional sectors
and businesses, leading to greater
servicification across economies. This
can energize economic activities and
open new opportunities, but it can also
displace workers and undermine the
comparative advantage of developing
countries in low-cost labour.
Aligning AI with social objectives
The dominance of
multinational tech giants
Technology leadership by the private
sector is not new. What is new to AI is
the unprecedented level of control and
understanding that private companies have
over the technology, an imbalance that
limits the ability of Governments to steer
AI development in the public interest.
The current AI boom relies on decades
of academic work, such as in machine
learning and natural-language processing,
but most of the latest cutting-edge and
high-profile research is carried out by
private companies and is not published in
peer-reviewed scientific journals. In 2023,
researchers in corporations contributed
only 3.8 per cent of AI-related academic
papers. Most knowledge is being created
behind closed doors, limiting the potential
for learning and idea spillovers (Owens,
2024; Oxfam International, 2024).
The dominance of multinational technology
corporations in AI is pronounced and can
be considered an oligopoly due to their
market power. For example, Alphabet,
Amazon and Microsoft control over
two thirds of the global cloud market
through their computing services and
storage capacities (Lynn et al., 2023).
For the graphics processing units that are
critical for large-scale computation, there is
a virtual monopoly, with Nvidia having a 90
per cent market share in the third quarter
of 2024 (Jon Peddie Research, 2024).
Private companies correspondingly
dominate investment in AI. In 2021, the
industry worldwide spent over $340 billion,
compared with $1.5 billion spent by United
States Government agencies (excluding
the Department of Defense) and $1.1 billion
spent by the European Commission (Owens,
2024; UNCTAD, 2021a). The Government
of China has increased support to AI-related
firms through various State-backed initiatives
that have amounted to $210 billion over the
past decade (Beraja et al., 2024). In general,
private companies have the resources to
attract and retain high-skill employees.
Between 2004 and 2020, the proportion of
graduates from universities in North America
with PhDs in AI-related fields working in
the industry increased from 21 to 70 per
cent (Ahmed et al., 2023). Multinational
technology corporations also draw talent
and resources from domestic firms,
which can hamper knowledge spillovers
within economies (Holm et al., 2020).
The dominance of a few private companies
in AI is creating new security risks. One
programming error can have rapidly diffused effects around the world.
For example, in July 2024, a faulty
update of security software distributed
by CrowdStrike crashed about 8.5 million
Microsoft-operated systems, causing
widespread global disruptions, and
affecting business operations, as well as
public and critical infrastructure (Oldager,
2024; Philstar, 2024; Weston, 2024).
Without external oversight, businesses
are unlikely to prioritize ethics and societal
impacts in their development processes
or address potential issues such as biases
or misinformation, on the grounds that
this might make them less competitive,
with lower returns for investors.
Even AI projects aimed at social impact
may feel the pressures of the profit motive
and capital markets. OpenAI, for example,
was initially founded as a non-profit
organization, but to secure the necessary
capital it later established a for-profit
subsidiary. At the time of writing, to make
the company more attractive to investors,
OpenAI is planning to restructure its core
business into a for-profit benefit corporation
that will no longer be controlled by its
non-profit board (Hu and Cai, 2024).
Under the pressure of substantial profit-related incentives, self-regulation is likely to
be ineffective. Rather than influence from
public policy, control is often in the opposite
direction, with companies putting pressure on
Governments. Many technology companies
have been influencing regulations and public
policies (UNCTAD, 2021b). Moreover, while
they may have an incentive to collaborate
with Governments in large markets, they have
less need to establish mutually beneficial
relationships with smaller countries.
• Dominant companies – AI development
and deployment are led by a handful of
large multinational companies. Private
enterprises are driven by profit motives
for shareholders, but their decisions
can affect the whole of society. Larger
countries can seek to regulate these
companies but smaller countries,
particularly less developed ones, may
lack institutional capacity and economic
strength. They may, therefore, be subject
to decisions made elsewhere unless
consistent international cooperation and
common principles on AI are established.
• Rapid diffusion – New foundation
models and AI applications can be
diffused virtually everywhere in a short
period of time. They can therefore impact
economies and businesses worldwide
before policymakers become aware of
their existence. For example, Facebook
took about 10 months to reach 1 million
users and the platform known at the time
as Twitter, about two years; in contrast,
ChatGPT reached 100 million users in
only two months (Hu, 2023). Such rapid
diffusion requires international coordination
in regulation and monitoring, aiming
for broader societal goals that benefit
the global community (Cihon, 2019).
• Slow regulatory adaptation –
Technological advances often outstrip
the pace at which current regulatory
frameworks can adapt, particularly in
countries with lower levels of development.
This means that hundreds of millions of
people in developing countries cannot
influence the direction of technological
change but are nevertheless exposed
to possible negative consequences.
This includes different types of bias, as
AI technologies trained on skewed or
discriminatory data are likely to ignore
particular social, economic, environmental
and cultural contexts, with the risk of
deepening existing data divides (UNCTAD,
2024a). Regulatory mechanisms that
differ from one country to another may
result in inconsistent or contradictory
impacts across countries, sectors or
parts of society, distributing benefits and costs in an uneven and unfair manner.
• Cross-border flows of data and skills
– AI applications are spread across
digital infrastructures and rely on digital
skills and vast amounts of data that
flow through international hubs. Cross-border flows are growing rapidly in
digital trade, international commerce
and Internet platforms and services.
This digital economy shows increasing
returns to scale, which can trigger a
self-reinforcing dynamic whereby more
data translates into value that in turn
enables the collection of even more data
(UNCTAD, 2024a). Moreover, certain
categories of workers are increasingly
able to participate in the global labour
market either through online freelance and
virtual work or by relocating to countries
with more or better job opportunities.
Such labour flows are typically from
developing to developed countries.
In response to the increasing concerns
about market dominance that can stifle
competition, a number of jurisdictions have
opened antitrust investigations, for example,
Germany, India, Japan, the Republic of
Korea, the United Kingdom, the United
States and the European Union (Chu, 2022;
Gil, 2023; Milmo, 2024; Kim and Kim, 2024;
The Yomiuri Shimbun, 2024; White, 2024).
The importance of a multi-stakeholder approach
If AI governance is to align the incentives of
the private sector with societal development
goals and the public interest, it should take a
multi-stakeholder approach. The technology
needs to be FAIR, namely, findable,
accessible, interoperable and reusable
(GO FAIR, 2016). It also needs to follow the
CARE principles, namely, collective benefit,
authority to control, responsibility and ethics,
prioritizing people and purpose (GIDA, 2020).
International cooperation can use more
accessible open-source technologies not
only as cornerstones of science but also
to accelerate innovation. Open innovation
strengthens international cooperation
in science, technology and innovation
(STI) and favours knowledge diffusion
and the creation of a common pool of
capacities that can allow less endowed
countries to benefit from AI development.
Currently, there are several industry bodies
working on guiding and self-regulating the
responsible development of AI. For example,
the AI Alliance brings together technology
developers, researchers, and industry
leaders to advance safe and responsible
AI rooted in open innovation. The AI
Governance Alliance focuses on integrating
AI technologies responsibly across industries
and advancing technical standards for
safe and advanced AI systems. The
Frontier Model Forum advances AI safety
research and identifies best practices
for AI development and deployment.
These initiatives are important but lack broad
representation. The Frontier Model Forum,
for example, involves only a handful of large
technology corporations. The more inclusive
bodies involve at most a few hundred
entities, mainly from developed countries.
Only large companies have the resources to
participate in different discussions and assert
their perspectives across various forums.
The need to include
consumer views
International AI governance should
incorporate public opinions,
aspirations and concerns.
Figure V.1 shows the results from a multi-country survey on how people feel about AI,
highlighting concerns about personal data
protection and consumer interactions with
AI products and services (Ipsos, 2023).

The survey shows that most respondents
do not believe that companies using AI will
protect their privacy. In Canada, France,
Italy, Japan, Sweden and the United
States, only 3 out of 10 respondents trust
companies to make respectful use of their
data. In addition, most respondents do not
know which types of products and services
make use of AI, exposing them to possible
misuse. Some companies, for example,
created databases by mining social media
websites and the Internet for photographs
without obtaining permission to index
individuals’ faces (Candelon et al., 2022).
In developing a set of internationally agreed
principles for safeguarding consumer
rights, an important reference point is the
United Nations guidelines for consumer
protection (UNCTAD, 2016). The guidelines
can assist countries, particularly those
with weaker institutions, in designing
protection systems responsive to consumer
needs and desires, favouring market
differentiation and international cooperation.
A key concern related to consumer
protection is the GenAI-driven creation of
digital replicas, including deepfakes such
as recreations of musical performances,
impersonations of political and other
public figures and the blending of real and
artificial imagery to form disturbing and
explicit content. These pose risks
to everyone, spreading misinformation
and damaging reputations, and even
undermining elections (United Nations,
Secretary General, 2023). In a recent
report, the United States Copyright Office
identified the risks of digital replicas
and the problems of privacy violation,
unfair competition, consumer protection
and potential fraud. Current legislation
might not be well designed to address
issues related to digital replicas. Legislation should protect all individuals
independent of their fame or commercial
exposure, and tie liability to the making or
distribution of unauthorized digital replicas
(United States, Copyright Office, 2024).
Protecting intellectual
property
The use of AI is also introducing new
uncertainties with regard to the protection
of intellectual property. It is not always
clear how AI-assisted or AI-generated
inventions should be treated under current
intellectual property law (Cuntz et al., 2024).
In general, AI algorithms themselves cannot
be patented unless they take the form of
software and only then in a few jurisdictions
such as the United States. However, due
to the statistical nature of AI, which relies
on probabilistic models, the issue of how
patents for computer software apply in this
case has not yet been settled (WIPO, 2024).
In most jurisdictions, patent protection
can apply only to applications that amount
to new inventions and are connected
to some technological device, such as
control systems for autonomous driving.
Regarding AI-generated inventions, the
Supreme Court of the United Kingdom
ruled in 2021 that AI cannot be named as a
patent inventor because a machine cannot
hold (and transmit) property rights and has
not devised any relevant invention (United
Kingdom, The Supreme Court, 2021).
Similar conclusions have been reached by
the United States Patent and Trademark
Office and the European Patent Office.1
A
notable exception is in South Africa, where
a patent naming an AI system as inventor
was granted in 2021 (IPWatchdog, 2021). Another challenge for intellectual property
policy is how to balance the need to
train AI models with real-world data
while protecting existing copyrights.
3 The following signed the convention in September 2024: Andorra; Georgia; Iceland; Israel; Norway; Republic of
Moldova; San Marino; United Kingdom; United States; and European Union, on behalf of the 27 member States.
In many instances, it is not clear whether
training data fall under current exceptions
to copyright protection. On these and
other issues, it is important to ensure
clarity, coherence and consistency.
AI governance initiatives from
international forums
A fragmented political
process
Recent multilateral forums have
created a variety of initiatives and
frameworks, including the following:
• OECD – In 2019, OECD approved
the Recommendation of the Council
on Artificial Intelligence, setting the
first intergovernmental standards to
foster innovation and trust in AI.
• Group of 20 (G20) – In 2019, the G20
AI principles called for AI stakeholders
to ensure accountability and beneficial
outcomes for people and the planet.
• Global Partnership on AI – In
2023, a ministerial declaration
by the Global Partnership on AI
underscored the need for ethical
considerations to be woven into AI.
• Group of Seven (G7) – In 2023,
the G7 launched the Hiroshima
Process, defining a risk-based code
of conduct for advanced AI systems
but leaving different jurisdictions to
choose their own approaches.
• AI Safety Summit – The Bletchley
Declaration in 2023 called for reinforced
cooperation for risk-based policies.
• AI Seoul Summit – In 2024, the Seoul
Declaration highlighted potential risks
posed by advanced AI and proposed
the creation of an international
network of AI safety institutes.
• Council of Europe – In 2024, the Council
of Europe issued the first international
legally binding treaty in the field of AI,
namely, The Framework Convention on
Artificial Intelligence and Human Rights,
Democracy and the Rule of Law.3
However, none of these initiatives can be
considered comprehensive. Figure V.2
shows that these seven major international
initiatives are largely driven by members
of the G7, whereas 118 countries, mostly
from the Global South, are party to
none (United Nations, AI Advisory Body,
2024). Existing international initiatives
may lack coordination or alignment,
risking gaps and incompatibilities
that could lead to a patchwork of
fragmented regimes worldwide.
Many countries in the Global South
provide essential services and resources
fundamental to the functioning of AI
systems, from content moderation to
rare-earth metals (UNCTAD, 2024b), yet
they have limited representation with
regard to AI governance. Their absence
may prevent governance frameworks
from effectively addressing key challenges
and priorities in developing countries,
such as environmental degradation
from AI-related mining and poor labour
conditions in AI hardware manufacturing
and the AI life cycle (see chapter II), as
well as the socioeconomic impacts of
AI-driven data work in vulnerable areas.

Global AI governance should involve more
inclusive engagement with the Global
South and with marginalized and vulnerable
communities, who have largely been
excluded despite the significant impact
on their lives (United Nations, 2020).
Emerging common
principles
The evolution of the seven major international AI governance initiatives reveals a notable shift in approach from one based on principles to one based on risks (table V.1). This has been accompanied by calls for industry stakeholders to guarantee the development of safe and trustworthy AI systems, paying greater attention to transparency and accountability along the AI life cycle. Box V.1 discusses the shift of approaches to AI regulation, from outlining principles to addressing the risks.
The United Nations contribution
to AI governance
Over the years, the United Nations has
made a significant contribution to the
global discourse on AI governance
(figure V.3). For example, since 2017,
ITU has organized sessions of the AI for
Good Global Summit, a key platform that
identifies AI applications to advance the
Sustainable Development Goals and scale
such applications for global impacts. Other
important United Nations-based platforms
for advancing understanding on science
and technology are the Commission on
Science and Technology for Development
(CSTD) and the Multi-stakeholder Forum on
Science, Technology and Innovation for the
Sustainable Development Goals (STI Forum).
In 2021, member States adopted the first
global standard on AI ethics. The UNESCO
Recommendation on the Ethics of Artificial
Intelligence provides a shared framework of
values, principles and actions for shaping
legislation and policies (UNESCO, 2022).
A key policy area is gender, including to
protect girls and women and ensure that AI
systems do not violate their human rights or
fundamental freedoms; the recommendation
also calls for investment in girls’ and
women’s participation in STEM and ICT
disciplines, to improve their employability
and help ensure equal career development.
The recommendation is accompanied by
a readiness assessment methodology that
helps countries measure their preparedness
for applying AI and an ethical impact
assessment for evaluating the benefits and
risks of AI systems (UNESCO, 2023).
In 2024, the United Nations General Assembly
adopted two resolutions, one on seizing the
opportunities of safe, secure and trustworthy
AI systems for sustainable development
(United Nations General Assembly, 2024a)
and one on enhancing international
cooperation on capacity-building of AI
(United Nations General Assembly, 2024b).

The resolutions serve to help
strengthen international and multi-stakeholder collaboration and support
the effective, equitable and meaningful
participation of developing countries.
In September 2024, United Nations Member
States adopted the Pact for the Future. This
highlights the importance of international
cooperation in harnessing STI while bridging
the growing divide within and between
countries. This was accompanied by a
Global Digital Compact that sets a series of
commitments for enhancing international
AI governance for the benefit of humanity
(United Nations General Assembly, 2024c).4
The development of AI is intrinsically
connected to the collection, processing,
storage and use of digital data. The
CSTD has been requested to establish
a dedicated working group to engage
in a comprehensive and inclusive multi-stakeholder dialogue on data governance
at all levels as relevant for development,
which will report on its progress to the
General Assembly in 2026. The group
will consider equitable and interoperable
data governance arrangements, such as
fundamental principles of data governance
for development, proposals to support
interoperability between national, regional
and international data systems, with
considerations of sharing the benefits
of data and options to facilitate safe,
secure and trusted data flows (United
Nations General Assembly, 2024c).

4 During the intergovernmental process of the Global Digital Compact, several thematic deep-dive consultations
were conducted to discuss priorities and key issues, one of which focused on AI and other emerging
technologies and centred on harmonizing institutional coherence and the importance of aligning digital
transformation strategies, data governance and cybersecurity frameworks.
Following on the recommendations
of the High-Level Advisory Body on
Artificial Intelligence, in the Global Digital
Compact, Member States committed to
the establishment of a multidisciplinary
Independent International Scientific Panel on
AI and a Global Dialogue on AI Governance.
These initiatives aim to promote reliable
scientific AI understanding through
evidence-based impact, risk and opportunity
assessments. By sharing best practices,
they also support interoperability and
compatible approaches to AI governance.
Other United Nations agencies and bodies
have been leveraging AI for the Sustainable
Development Goals, as well as informing
and shaping global AI governance. For
example, UNESCO has developed Guidance
for Generative AI in Education and Research,
UNICEF has developed Policy Guidance
on AI for Children and WHO has developed
Guidance on the Ethics and Governance
of Artificial Intelligence for Health.
In coordinating efforts across various
domains, international law offers a
shared normative foundation that can
support coherent global AI governance
and avoid the proliferation of fragmented
initiatives and institutions.
Ensuring accountability
All players in the AI life cycle should have
well-defined roles, namely, developers
need to ensure the fairness and safety
of their systems and users need to
ensure ethical AI deployment.
All should be accountable, through
frameworks that define responsibilities, foster
transparency and ensure responsible use.
Given the growing influence of technology
giants, companies, particularly those
deploying large-scale AI systems, should
be required to make public disclosures of
their activities. This would help anticipate
and address potential impacts of AI,
increase systemic resilience and enhance
transparency and accountability.
One possible model is the ESG
framework. An AI equivalent could
involve impact assessments across
stakeholders throughout the AI life cycle,
measuring the effects on the environment,
employment, human rights, safety and
inclusivity (figure V.4). Companies can use
international guidelines and standards as
a basis for impact assessments. Carried
out before and after deployment, these
can shed light on how AI systems affect
jobs, wages and working conditions, for
example, and ensure that companies have
mitigation strategies to support workers.5
5 An example is the guidelines for AI and shared prosperity developed by the Partnership on AI that include a
job impact assessment tool, responsible practices and other resources, https://partnershiponai.org/paper/
shared-prosperity/.
Public disclosure measures should also
detail how AI systems work, including
algorithmic decision-making processes;
the collection, use and management
of data; and efforts to ensure fairness
and accountability. Auditing impact
assessments and public reports helps
ensure compliance with established
guidelines, identify potential risks and
certify that AI systems meet standards
for fairness, transparency and safety.
The evolution of ESG reporting provides
valuable lessons for engaging the private
sector in developing AI accountability
mechanisms. A certification system
can attest that a company meets AI-related
ethical and transparency criteria.
Once the standards are well developed
with clear reporting frameworks and
regulations, reporting can become
mandatory to ensure comprehensive,
standardized and transparent disclosures.
At present, many stock exchanges
mandate ESG reporting or require listed
companies to provide explanations if they
are unable to comply; the “comply or
explain” approach. Mandatory reporting for
AI can be supported by similar oversight
measures. For enterprises that fail to comply
with established standards and regulations,
fines may be imposed or restrictions set on
the deployment of particular AI systems.
Public disclosure of AI systems should:
Balance innovation and safety –
Policymakers need to strike a balance
between fostering innovation and ensuring
public safety and trust. Overregulation
may hinder technological progress, while
underregulation could pose significant risks
and make it difficult to hold companies
accountable. It is also important to consider
the regulatory burden on SMEs. Larger
firms may find it easier to meet stringent AI
regulations, since they have the resources
to manage legal risks and deal with complex
regulatory requirements (Kretschmer et
al., 2023). In contrast, SMEs may lack
the skills or resources required to achieve
compliance, potentially diverting funds
from innovation and making them less
competitive. SMEs may therefore need
support, particularly in developing countries,
where AI ecosystems are less developed.
Incorporate flexibility – The requirements
should be flexible and capable of adapting
to rapidly evolving technologies.
Regulations need to be regularly
updated to address emerging ethical
dilemmas and incorporate technological
breakthroughs and unforeseen impacts
that appear with the diffusion of AI.
Involve different stakeholders – Policies
and requirements need to reflect diverse
perspectives, interests and expertise;
it is therefore important to take a multi-stakeholder approach, involving the
private sector, civil society and academia.
Particular attention should be given to
vulnerable populations, who are less likely
to benefit from AI advances but more
likely to experience AI-related harms.
For example, AI can exacerbate existing
gender inequality and amplify biases. It
is also critical to encourage workers to
participate in the design and implementation
of AI systems, guaranteeing that new
AI tools complement their work and are
aligned with their needs and interests.
To ensure fairness and positive outcomes
across societies and jurisdictions,
existing platforms, such as the AI for
Good Global Summit, the CSTD, the
STI Forum and the Global Dialogue on AI
Governance, can serve as venues to
discuss common AI public disclosure
requirements and accountability in AI
governance. These platforms can also help
strengthen data governance cooperation
at all levels and unlock the full potential
of digital and emerging technologies.
International cooperation for
infrastructure, data and skills
Harnessing the benefits of AI inclusively
requires international actions at each of
the three leverage points of infrastructure,
data and skills. International collaboration
can enable countries to develop
consistent approaches and actions, as
well as pool resources and expertise for
directing AI development towards the
benefit of humanity. Such collaboration
is critical in order to avoid fragmentation,
duplication of efforts and the risks of AI
use amplifying inequality across borders.
For effective global collaboration on
infrastructure, data and skills, the following
sections outline three propositions,
namely, digital public infrastructure,
open innovation and capacity-building
and research collaboration.
Developing digital public
infrastructure for AI
To address the increasing demands for
connectivity and computing power, DPI
models can offer an equitable approach to
provide the necessary access and services
to stakeholders of the AI ecosystem.
DPI is a set of shared, secure and
interoperable digital systems and
applications that can be used flexibly
in different activities and sectors. It can
be built on open standards to provide
societies with equitable access to public
and private services (G20, 2023a).
DPI connects people, businesses and
Governments through secure and reliable
online systems, and it is often referred to
as the infrastructure of the digital era.
Building on foundational physical
infrastructure, such as networks, data
centres and storage systems, DPI offers
a shared means to many ends, including
e-government services, digital identity
systems and digital payment systems. There
are many successful experiences across
countries. For example, in Estonia, a DPI
platform facilitated the secure exchange of
data across consumers, energy distributors
and producers, to enhance decision-making
in the energy sector. In India, a DPI approach
led the way for identification provision to
over 1 billion people. In Togo, during the
pandemic, social assistance to about
450,000 people was distributed within one
week through a DPI platform (UNDP, 2023a).
It is estimated that low- and middle-income
countries can achieve the equivalent
of two to three years of growth by
implementing DPI in the financial sector.
In the climate sector, DPI is expected to
bring benefits to carbon offsetting and
trading, accelerating emissions control
efforts by 5–10 years (UNDP, 2023a).
The Secretary-General has selected DPI
as one of the high-impact initiatives that
can accelerate progress on achieving
the Sustainable Development Goals.
Flexible DPI systems can provide developing
countries with resources to support AI
adoption and development. For example,
Governments, alone or with private partners,
can establish high-speed networks for
reliable, fast Internet access, enabling
data transfer and real-time AI applications.
Data centres can ensure secure, efficient
storage and easy access to information,
and support platforms such as cloud
services and government databases for
seamless data exchanges. Interoperable
frameworks can unlock data exchanges
and open data platforms, enhancing the
use of AI models across sectors. Combining
high-speed networks and data centres,
high-performance computing provides
scalable computing power for AI training,
applications and data management. These
modular components can address particular
challenges and needs in developing
countries, offering resources that can enable
collaboration, innovation and responsible
AI deployment at scale (figure V.5).
Despite the potential of DPI for AI,
developing countries face significant
challenges in its design and implementation.
The international community can support
developing countries by providing a
combination of guidelines and principles,
financial resources and technical expertise.
In 2023, for example, the G20 Digital
Economy Ministers reached a consensus
on how to leverage DPI for digital inclusion
and innovation. The framework includes a
list of key components and principles (G20,
2023a), as well as a playbook with practical
guidelines and a design checklist (UNDP,
2023b). In addition, to address the existing
knowledge gaps in practices for designing,
building and deploying population-scale DPI,
the G20 has created a Global Digital Public
Infrastructure Repository.
Other international programmes
and initiatives are emerging,
including the following:
• The United Nations High Impact
Initiative on DPI – Aimed at unlocking
targeted support for DPI in 100
countries by 2030 (ITU, 2023).
• Identification for Development and
Digitizing Government-to-Person
Payments – These World Bank
initiatives aim to help over 60 countries
issue digital identification to 550
million people (World Bank, 2023).
• The Universal Safeguards for DPI
initiative – Launched in 2023 by the
Office of the Secretary-General’s Envoy
on Technology and UNDP, this initiative
is aimed at co-creating a pragmatic
framework designed to mitigate risks,
advance on the Sustainable Development
Goals and foster trust and equity
(Universal DPI Safeguards, 2023).
• The 50-in-5 campaign – Aimed
at helping 50 countries design,
launch and scale components
for open, secure and resilient DPI
within five years (50 in 5, 2024).
• The Global Digital Compact –
The Compact represents the latest
landmark, with countries committed
to increasing investment and funding
towards the development of DPI to
advance solutions for the Sustainable
Development Goals (United Nations
General Assembly, 2024c).
Efforts from the international community can
help scale up and tailor DPIs for AI, providing
developing countries with the foundational
systems needed for digital inclusion and
technological innovation. The international
community could provide developing
countries with financial support or access
to existing DPIs (Gottschalk, 2019).
DPI for AI can rely on two service models
that, compared with traditional infrastructure,
provide greater flexibility, scalability and
global accessibility. The first is infrastructure
as a service, which provides virtualized
computing resources on the cloud on
an as-needed basis, including servers,
storage and networking. The second is
data as a service, which provides data on
demand, through application programming
interfaces, or cloud-based platforms,
enabling users to access, manage and
analyse data sets without owning the
underlying infrastructure. Cloud and data
resources from infrastructure as a service
and data as a service providers can be
leveraged to develop packaged, cloud
deployable and interoperable AI services.
Infrastructure as a service and data as a
service are mainly owned and operated
by private companies on a commercial
basis. However, Governments can
collaborate with these companies to offer
services within the local AI ecosystem.
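The data-as-a-service pattern described above can be sketched in a few lines of code: a consumer receives data on demand as a structured payload and analyses it locally, without owning any storage or server infrastructure. The sketch below is purely illustrative; the dataset name, fields and payload shape are invented, and in practice the JSON would be fetched over HTTPS from a provider's application programming interface.

```python
import json

# Hypothetical payload from a data-as-a-service endpoint. In a real
# deployment this JSON would be returned by a provider's API; all
# names here are illustrative, not a real service.
SAMPLE_RESPONSE = json.dumps({
    "dataset": "rainfall-monthly",
    "page": 1,
    "records": [
        {"region": "North", "month": "2024-01", "mm": 82.4},
        {"region": "South", "month": "2024-01", "mm": 120.1},
    ],
})

def parse_records(payload: str) -> list:
    """Decode a JSON payload and return its list of data records."""
    body = json.loads(payload)
    return body["records"]

def average_rainfall(records: list) -> float:
    """Aggregate the records locally; no storage infrastructure needed."""
    return sum(r["mm"] for r in records) / len(records)

records = parse_records(SAMPLE_RESPONSE)
print(average_rainfall(records))  # prints 101.25
```

The point of the pattern is that the consumer's code handles only the payload format, while provisioning, storage and scaling remain on the provider's side.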
A shared AI infrastructure could be
developed as a distributed public
infrastructure across institutions and
countries in multiple centres using
high-speed networks, with system
interoperability and security protocols.
A key element
for success is the involvement and
openness of various stakeholders, including
Governments, businesses, academia and
civil society, which could use the shared
facility as a virtual space for interaction,
experimentation and co-creation.
Promoting AI through open
innovation
Open innovation provides a way of
managing the innovation process and
enabling collaboration and knowledge-sharing among independent innovators,
companies, institutions and countries.
Compared with the traditional model of
innovation where each company relies on its
own resources, open innovation encourages
firms, public organizations and other actors
to tap into the large pool of innovative
resources available among external actors,
including customers and citizens. Open
innovation can speed up research and
development, lower costs and enhance
the quality or relevance of innovation
outcomes, which is particularly beneficial
for developing countries and SMEs, to
compensate for limited resources and skills.
Open innovation has gained significant
traction in recent years and is widely
recognized as a key driver of technological
opportunities, enabling risk and cost-sharing
and the championing of transparency while
democratizing access to diverse, technically
advanced resources. For example, through
the Global Digital Compact, United Nations
Member States have committed to
developing safe and secure open-source
software, open data, open AI models and
open standards, also referred to as digital
public goods (United Nations General
Assembly, 2024c). Another important effort
is the Manaus package issued under the
Presidency of Brazil by the G20 Research
and Innovation Working Group. This
includes an open innovation strategy to
foster international collaboration on STI,
and puts forward principles, approaches
and tools for inclusive and equitable
open innovation initiatives (G20, 2024).
Concepts and approaches for open
innovation are still evolving, but they
generally involve open data, that is,
making data freely available. This can
facilitate the training and testing of
AI models and foster innovation by
allowing researchers and developers to
experiment with data and create new AI
solutions. Open data can also improve
transparency and facilitate the assessment
of new AI models and applications.
Prominent examples of open data initiatives
include the Human Genome Project, the
COVID-19 Open Research Data Set and
the Human Connectome Project. Most
emerging open data platforms for AI
are from the private sector, such as the
Kaggle data sets, the OpenAI data sets,
the Microsoft Azure open data sets and
the registry of open data on Amazon Web
Services. They vary in their operation,
data management approaches and open
data standards. Common international
definitions and standards for open data
are essential to give both the public and
private sectors access to high quality
and diverse data and make them digital
public goods. Further important aspects
include privacy, security and the prevention
of data misuse and misinterpretation.
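One concrete safeguard implied by these privacy concerns is redacting personal identifiers before records are released as open data. The following is a minimal sketch under assumed field names (the record structure and the list of personal fields are invented for illustration, not drawn from any standard):

```python
# Fields assumed, for illustration, to identify individuals and
# therefore to be excluded from an open-data release.
PERSONAL_FIELDS = {"name", "national_id", "phone"}

def redact(record: dict) -> dict:
    """Return a copy of the record with personal identifiers removed."""
    return {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}

raw = {
    "name": "A. Person",
    "national_id": "X123",
    "district": "Central",
    "benefit": 40,
}
print(redact(raw))  # prints {'district': 'Central', 'benefit': 40}
```

Real open-data pipelines go further, for example aggregating records or adding statistical noise, since removing direct identifiers alone does not always prevent re-identification.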
Another important instrument
is open source, largely diffused
in software development.
This is a model wherein the source code,
design or blueprint of a software package
or a project is made freely available
through public platforms. Well-known
open-source operating systems include
Android and Linux, which power critical
infrastructure and digital devices. By
providing free and open tools, libraries
and frameworks, the use of open source
democratizes knowledge and resources,
enables global collaboration and innovation
and improves transparency and trust.
Since the emergence of GenAI, there has
been a surge in open-source AI and GenAI
projects. These include commercial large
language models, as well as applications
developed by academic institutions and
individual developers (Daigle and GitHub
staff, 2023). The code is communally
maintained on open-source platforms
such as GitHub and others, which offer
diverse use cases and readily accessible
AI models, with community engagement
for discussion and mutual support.
The international community can benefit
from coordinating and harmonizing
the important but fragmented open AI
resources worldwide. Successful open
innovation for AI relies on connected and
interoperable open repositories of global
knowledge, using open data and open
source in a global innovators network with
standardized protocols. Such a repository
can strengthen the global knowledge
base, foster inclusiveness, improve access
through trusted hubs that ensure quality
and security, mitigate potential risks and
accelerate AI-driven innovation (figure V.6).
Strengthening capacity-building and research
collaboration
Both DPI and open innovation provide
accessible resources for businesses,
academia and the general public to engage
in the adoption and development of AI.
However, using these resources requires
technical knowledge and skills, such
as statistical knowledge, programming
skills, familiarity with open-source
platforms and protocols and knowledge
of machine learning algorithms, as well
as an understanding of the domain for
which an application is to be used.
These capacities are often highly
concentrated in technology companies
and developed countries, an imbalance
that the international community should
address through the transfer of knowledge
and technology to developing countries, as
well as assistance for capacity-building.
The CSTD has been advancing international
STI collaboration through knowledge
and experience-sharing, and capacity-building. The Commission can further
strengthen international AI collaboration
by sharing good practices, facilitating
coordination and contributing to enhanced
trust, transparency and inclusivity.
Multi-stakeholder engagement and
knowledge-sharing on AI, through
international dialogues or global networks
of exchange, for example, could build
on existing platforms such as the CSTD,
the STI Forum, the Internet Governance
Forum and the AI for Good Global Summit.
It is also important to have technical
assistance and tailored solutions based
on local needs and the limited absorptive
capacities of many developing countries.
This can help effective transfers of technical
knowledge and reduce the risk of misuse
due to a lack of resources or expertise.
Knowledge and technology transfer
typically focus on particular information,
skills or activities. Capacity-building
is critical in adopting and developing
rapidly evolving frontier technologies,
and encompasses a broad set of
capabilities that enable individuals or
countries to innovate continuously. It can
take place through training workshops
that enable policymakers to develop
STI policies or tailored educational
programmes on AI and data literacy.
Capacity-building can also take place
through AI incubators and research hubs
and R&D partnerships. Special attention
should be given to the adoption and
development of human-complementary
AI technologies. This can be achieved
by allocating dedicated funding to AI
solutions that augment rather than replace
workers, and setting up international
AI research networks or partnerships
that prioritize human-centred AI.
These activities align with the resolution
adopted by the General Assembly on
enhancing international cooperation on
capacity-building of artificial intelligence,
particularly in developing countries, as
well as the Global Digital Compact, which
encourages the development of international
partnerships on AI capacity-building.
To create global hubs for AI capacity-building
or an AI-focused centre and network, a useful
model and reference point is the United
Nations Climate Technology Centre and
Network. This is the implementation arm of
the Technology Mechanism of the United
Nations Framework Convention on Climate
Change, which supports developing countries
through technical assistance and access to
information and knowledge on technologies,
including capacity-building and policy advice,
as well as fosters collaboration among
stakeholders via its network of regional and
sectoral experts. While the CERN model
focuses on shared infrastructure, the Climate
Technology Centre and Network approach
is aimed at providing technical assistance to
developing countries and building capacity
through knowledge and technology transfer.
An AI-focused centre and network could
help developing countries in adopting,
adapting and developing AI. This could
build on existing efforts such as the
International Research Centre on Artificial
Intelligence under UNESCO auspices,
which promotes ethical AI solutions for
the Sustainable Development Goals,
and the Global Partnership on Artificial
Intelligence, which advances the
implementation of human-centric, safe,
secure and trustworthy AI solutions.
Furthermore, collaboration in AI research
and innovation can help scale up
South–South cooperation in science and
technology to address common challenges
(United Nations General Assembly, 2019).
For this purpose, the more technologically
advanced developing countries can
collaborate with other countries, for
example, through regional partnerships,
to create critical mass in AI, favouring
knowledge and technology transfer, and
overcoming the resource constraints that
may hamper the establishment of thriving
AI ecosystems in less-endowed countries.
In recent years, there have been numerous
instances of new South–South cooperation
in the field of AI. The BRICS member
countries, for example, have formed an
AI study group aimed at catalysing AI
innovation. China has expanded cooperation
with Africa in various areas, including AI,
as outlined in the Forum on China-Africa
Cooperation Beijing Action Plan (China,
Ministry of Foreign Affairs, 2024). In 2024,
the launch of the ASEAN Committee on
Science, Technology and Innovation Tracks
on AI aimed to expand regional capacity
development initiatives in AI (ASEAN, 2024).
These initiatives represent promising starting
points for South–South cooperation, and
the Global South can also make use of
other mechanisms for exchanging AI
technologies, data and services. The
Global South can, for example, incorporate
provisions for AI technology and services
in trade agreements and engage regional
institutions such as the African Union
or ASEAN for sharing best practices
and developing coherent AI policies.
In addition, developing countries can
build regional innovation hubs and expert
networks for addressing AI challenges. In
Africa, for instance, the Artificial Intelligence
for Development programme scales AI
innovations through the creation of four
pan-African Innovation Research Networks
and supports policy research by funding
two research-to-policy and think-and-do
tanks in East Africa and a policy network in
West Africa. It also engages African talent
and skills through two multidisciplinary
university labs. Other ways in which
countries in the Global South can work
together are mobility programmes, human
capital development initiatives and joint
research and technical projects in the field
of AI and other frontier technologies.
Countries can cooperate on particular
themes or in sectors in which AI brings
sustainable and scalable change. One of
the most important areas is agriculture, for
which a major resource is the Consultative
Group on International Agricultural Research
(CGIAR), the largest global partnership
focusing on agricultural research for
development. CGIAR can integrate AI as
a tool to create and diffuse new solutions
for climate-smart, innovative and socially
inclusive agriculture, addressing
challenges such as crop disease and
pest detection, yield prediction and
precision irrigation. A thematic approach
of AI partnership can help coordinate
and target efforts in key areas that are
most relevant to the socioeconomic and
developmental needs of the Global South.
Guiding AI for shared prosperity
Technology does not have intrinsic moral
or ethical qualities. Whether its impact
is positive or negative depends on how
humans develop and use it. At first glance,
AI technologies are no different; their use
can enhance various aspects of our lives,
but can also deepen inequalities and further
concentrate economic power (Korinek and
Stiglitz, 2021). Nevertheless, AI is beginning
to challenge the notion of technological
neutrality. This is the first technology in
history capable of making decisions and
generating ideas by recombining existing
knowledge, and which could evolve into
an active agent. As AI grows faster and
more powerful, the potential response
times shorten and the room for error may
become smaller (AI Action Summit, 2025).
History shows that technological
progress brings economic growth but
does not guarantee that the benefits
will be broadly distributed, nor does it
necessarily lead to inclusive and equitable
human development. Driven forward by
new technologies, markets may make
efficient economic decisions in the short
term, but do not assume responsibility
for distributive consequences or
automatically maximize social value.
Technological advances have typically
fostered the rise of technology giants
and favoured the owners of capital at
the expense of labour, leading to greater
concentration of wealth (Acemoglu and
Restrepo, 2019; Korinek et al., 2021). There
is an urgent need to guide AI advances.
Responsible design, conscientious use
and ethical oversight of AI depends on
effective global AI governance, along
with international support for developing
countries through DPI, open innovation
and capacity-building. Equally important
is building a common vision to guide
AI progress towards promoting shared
prosperity and fostering an inclusive
economic future for all of humanity.
UNCTAD, in this report, calls for a shift of
focus from technology to people, putting
humans at the centre of AI development.
AI technologies should complement rather
than displace human workers, and the
transformation of production processes
should bring benefits that are shared fairly
among countries, firms and workers.
Inclusion and equity are central to an AI-for-all approach, supported by policies,
incentives and regulations driven by a
global agenda that promotes international
multi-stakeholder collaboration.