CHAPTER 5 Navigating AI Governance as a Normative Field: Norms, Patterns, and Dynamics
Urs Gasser
Introduction
This article conceptualizes AI governance as an emerging normative field by offering a series of analytical lenses and a set of initial observations aimed at contributing toward a navigation aid for what promises to be a rapidly evolving and complex ecosystem. The main objective of this contribution is to make visible the broad range of approaches, strategies, and instruments available in the governance toolbox as decision-makers in the public and private sectors seek to anticipate, analyze, and address harms and risks associated with the accelerating pace of AI development, deployment, and use while harnessing its potential for humans, society, and the planet at large.
This article is written at a moment in time when a myriad of AI governance initiatives are underway at the national, regional, and global levels, involving a broad range of actors, incentives, and interests. Such efforts range from comprehensive legislative projects like the EU AI Act1 and whole-of-government efforts like the US Executive Order on Safe, Secure, and Trustworthy AI2 and its accompanying implementation initiatives, to voluntary commitments and best practice frameworks. They include local governance interventions at the city level and international initiatives put forward by organizations like the Council of Europe,3 the United Nations,4 or G75 and G20,6 to name just a few examples. Other important components of evolving AI governance arrangements include ethical as well as technical standards, developed again across all levels of governance, ranging from company-level to international-level.
Taken together, AI governance as a “hot field” (borrowing a term coined by sociologist Robert Merton) consists of a heterogeneous set of principles, norms, rules, standards, and decision-making procedures. In governance parlance, it fits within the broader concepts of multilevel, multiactor, and multimodal governance, despite recent trends toward an enhanced role of governments as regulators.7 At least at present and for the foreseeable future, AI governance can be understood as a case of polycentric governance, to invoke a concept developed by Elinor Ostrom,8 with multiple centers of decision-making and overlapping responsibilities, without a single entity that has the ultimate authority for making all collective decisions.
Given the polycentric nature and fluid state of AI governance, this contribution does not aim to describe or evaluate any single effort in greater depth or to arrive at policy recommendations. Rather, it seeks to offer a series of lenses through which contemporary initiatives can be analyzed and contextualized. Such a descriptive approach might inform future normative frameworks by offering a sense of various approaches and instruments available and by highlighting some of the factors shaping their application.
The first section frames AI governance as a normative field and situates it within the broader context of ever-evolving technology as a socially embedded venture shaped by numerous factors and forces at play. The following section, “Approaches to AI Governance,” offers several elements of a possible taxonomy of approaches to AI governance that shape the contours and interactions among a diverse set of principles, norms, rules, standards, and decision-making procedures. It suggests a number of lenses that might be useful when understanding and navigating the range of options available to steer the development, deployment, and use of AI. Embracing the complexity and heterogeneity of AI governance as a normative field, the subsequent section, “Mapping Normative Patterns,” seeks to identify a series of normative patterns within and across different AI governance arrangements, with a focus on recent legal developments. The last sections of the chapter aim to demarcate conceptual zones of convergence, divergence, and possible interoperability across different AI governance arrangements (“Selected Nodes of AI Governance”), and to offer final considerations for AI governance-making as shaping the further evolution of a normative field (“AI Governance for an Uncertain Future”).
AI Governance as a Normative Field
This section frames AI governance as a normative field, starting with a working definition, followed by a brief overview of some of the most salient initiatives and building blocks of AI governance arrangements both nationally and internationally. By briefly contextualizing AI governance in a broader social context, it also offers a reminder that neither the technology nor efforts to govern it have emerged in a vacuum.
Defining AI Governance
Defining the contours of AI governance is not an easy task.9 The definition of what counts as AI has been contested all along and varies across contexts and actors. Despite various efforts, a uniform standard definition has not emerged yet—and even some of the most influential definitions are subject to updates, as the recent definitional amendments to the AI Guidelines of the Organisation for Economic Co-operation and Development (OECD) illustrate.10 The challenge of defining where AI governance starts and where it ends is further exacerbated not only because the term AI is contested but also because the notion of governance is a highly amorphous concept with many meanings across different cultural and application contexts.11 Questions of terminology seem mostly of academic interest at first, and it is striking that language generally has received relatively little attention in contemporary AI conversations. But when entering the regulatory arena, more precise understandings of certain terms matter greatly and have real-world consequences, as the struggles to specify the many newly introduced terms in the EU AI Act illustrate.
This article avoids a sharp definition of the subject it seeks to explore and takes a pragmatic approach. With respect to AI, the updated definition by the OECD serves as the term’s core with a halo around it, reflecting broader definitions used in other norm complexes aimed at steering the development, deployment, and use of AI across a spectrum of open and closed technological and organizational settings. Similarly, a pragmatic understanding of the concept of governance is adopted, embracing a diversity of modalities of norms (from ethical principles to hard law), different levels of governance (from local to global), and a range of actors involved in such efforts (from professional associations to lawmakers).
Taking these elements together, AI governance can be circumscribed as the sum of all coexisting forms of collective regulation of matters associated with machine-based systems, which infer from inputs how to generate outputs that have the potential to influence physical or virtual environments.
Emerging AI Governance Arrangements
Fueled by an accelerating pace of innovation in AI research, development, and deployment, debates about the needs for and modalities of AI governance have intensified in recent years, spanning local to global levels. A broad range of stakeholders has launched various initiatives to set up dedicated guardrails for AI-based technologies, starting with several hundred AI ethics principles initiatives,12 followed by hundreds of legislative and regulatory interventions,13 as well as a plethora of standard-setting and best practice efforts. Among the many initiatives, the following flagship efforts with the potential for international impact serve as reference points in this chapter:
- Canada’s Draft Artificial Intelligence and Data Act14 introduces guardrails to ensure that AI systems deployed in Canada are safe and nondiscriminatory and creates accountability mechanisms for businesses as they develop and use AI-based technologies.
- China’s Interim Generative AI Measures15 seek to encourage and guide the responsible use of generative AI with respect for national security, while subjecting anyone who develops or uses generative AI products to provide services to the public in China to government oversight.
- Brazil’s Draft Artificial Intelligence Act16 seeks to create rules for making AI systems available in Brazil, establish rights for people affected by their operation, provide penalties for violations, and set up a supervising body.
- EU’s AI Act17 is a comprehensive draft law aimed at addressing the risks of AI through a broad range of obligations and requirements to safeguard the health, safety, and fundamental rights of citizens. It seeks to ensure the proper functioning of the EU single market by setting consistent rules for AI systems across the EU.
- US Executive Order on Safe, Secure, and Trustworthy AI18 establishes new whole-of-government standards for government agencies to address safety and security risks associated with the development and use of AI in the social, economic, and national security spheres.
These initiatives provide only a subset of the diverse AI governance arrangements at the national level. The US AI governance landscape, for instance, consists of an amalgam of norms, which includes—in addition to the Executive Order and bills such as the US Algorithmic Accountability Act19—sector-specific initiatives (e.g., in the health and transportation sectors) and legislation at the state and city levels, as well as a broad range of soft law instruments ranging from an AI Bill of Rights20 to voluntary commitments by leading AI companies, numerous ethical principles by private and public sector entities, and technical standards by standard-setting organizations such as the National Institute of Standards and Technology (NIST),21 to name just a few AI governance sources.
Other nation states—including the United Kingdom, India, Japan, Singapore, and Switzerland—have taken a different route so far (note that things remain in flux) by either pursuing a sectoral approach to AI governance or refraining from the use of hard law while promoting the responsible development, deployment, and use of AI through nonbinding governance mechanisms such as guidelines, best practices, and standards.
While some of these efforts at the national and regional level target AI specifically as a distinct set of technologies using different techniques and methods, AI governance has not emerged in isolation. Existing general guardrail regimes, among other factors discussed subsequently, provide the relevant normative context in which more specific interventions now take place.
Contextualizing AI Governance
AI governance, like AI itself, should not be considered in isolation but rather contextualized as part of a social fabric of norms and stabilized expectations, ranging from formalized policies and laws to often more implicit cultural values and attitudes.22 They shape and limit what is possible, feasible, and desirable within a given ecosystem when addressing the broad range of opportunities and challenges associated with AI through means of governance.
Approaches to AI governance arrangements are situated within broader economic, social, environmental, technology, and regulatory policies of countries. Within these general parameters, many nations have enacted national AI strategies, which often also outline the contours of the envisioned AI regime.23 A comparative analysis of AI strategies across twenty-two countries suggests a typology of prescribed governance approaches, resulting in a matrix with strong versus weak state interventions on one axis and stimulation versus enclosure-and-control approaches on the other. Different roles of the state in AI governance can be mapped onto each resulting quadrant, indicating certain levels of activity and the use of preferred governance instruments.24
Preexisting laws are another contextual element, as briefly mentioned. Consider, for instance, how relatively relaxed privacy laws or safe harbor provisions have contributed to an AI innovation-friendly ecosystem in the United States.25 Conversely, other sets of norms have arguably constrained some of the conditions conducive to AI advancement. While the empirical effects of stricter data protection laws in Europe on the development and adoption of AI remain contested, some studies suggest that the General Data Protection Regulation (GDPR) and particularly more stringent enforcement actions shaped important dimensions of the research and innovation ecosystems.26
The relevant context of AI governance is of course not limited to policy and law. Powerful forces that shape the present and future of AI governance originate from the spheres of economic and national security interests—an important nexus that goes beyond the scope of this chapter.27 For context, it suffices to acknowledge that the shape of both general legal norms and specific AI guardrails is heavily influenced by the political economy, understood as the actions taken by different stakeholders with divergent interests and unequal resources and power that characterize a given environment.28 The extensive lobbying efforts by large technology companies, for instance, to push for guardrails that are favorable to their businesses are well-known and have also become apparent in the AI context. Perhaps more than anything, geopolitical dynamics—both in terms of competition and cooperation—frame the broader normative picture in which AI governance activities unfold in each domain and region,29 and have led to what some have described as a “race to AI regulation” on top of the global race for AI.30 The AI policy of the European Union, for instance, was positioned from the outset against the backdrop of global developments,31 and its AI Act has already been analyzed through the prism of the so-called Brussels effect.32
These and several other factors—including culturally anchored values, preferences, and attitudes of people toward innovative technologies33—influence the normative context in which present-day AI governance efforts crystallize. In other words, emerging AI governance norms are not endogenous rules but are socially embedded. The AI policies of Nordic nations, for instance, distinctly rely on core cultural values as organizing principles to steer the development of AI in society.34 These normative dynamics complicate any comparison between different regimes and, above all, limit the possibility of successfully transplanting legal and other AI norms from one context to another.
International Initiatives
The AI governance landscape at the national and regional levels is also shaped by a series of important international developments and initiatives,35 including the influential OECD Principles on Artificial Intelligence,36 which seek to promote AI that is innovative and trustworthy and that respects human rights and democratic values; the UNESCO Recommendation on the Ethics of AI,37 which spans standard-setting, policy advice, and capacity building; the UK-led Bletchley Declaration,38 which concerns international coordination on frontier AI; and the G7 Hiroshima AI Process, which promotes guardrails for advanced AI systems at the global level. Several other efforts could be added, including the formation of a UN High-Level Advisory Body on AI and, more recently, the UN Resolution on Safe, Secure and Trustworthy Artificial Intelligence Systems.39
As this incomplete list already indicates, international efforts also range from relatively high-level aspirational principles to binding instruments. With respect to the latter, the most important initiative is the Council of Europe’s (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law.40 The treaty covers the use of AI systems in both the public and private sectors (with notable exceptions in areas such as national security), offering two compliance pathways for regulating the private sector: direct application of the treaty’s provisions or alternative measures that respect international human rights, democracy, and the rule of law. This arrangement accommodates global legal diversity. The treaty mandates transparency, oversight, risk assessment, and mitigation measures, including identifying AI-generated content and assessing the need for moratoriums or bans on high-risk AI uses.
The treaty ensures AI systems uphold equality, privacy rights, and accountability for adverse impacts, with legal remedies for human rights violations and procedural safeguards. It requires parties to adopt measures to ensure that AI systems do not undermine democratic institutions and establishes a Conference of the Parties for follow-up, and it requires independent oversight to ensure compliance, raise awareness, and foster public debate on AI technology.41
Relevant building blocks of international AI governance that predate some of the most recent global AI initiatives can also be found in the domain of free trade and digital economy agreements. For instance, the Digital Economy Partnership Agreement between Singapore, Chile, and New Zealand, which aims at interoperability among different digital trade regimes, promoted the adoption of ethical AI frameworks and developed mechanisms for cross-border data flows.42 The UK-New Zealand Free Trade Agreement, to take another example, removed certain data localization requirements and established guardrails for international data flows between the two countries.43
Institutionalized initiatives also include regional and bilateral efforts.44 Under the institutional umbrella of the US-EU Trade and Technology Council (TTC), for instance, the United States and the European Union committed to a series of projects to advance trustworthy AI through collaborations in the areas of measurement and evaluation, the design of AI tools to protect privacy, and the economic analysis of AI’s impact on the workforce. An initial contribution is the TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, which commits both sides to work toward a common terminology (a draft EU-US Terminology and Taxonomy for Artificial Intelligence was recently released for consultation) and a common knowledge base of metrics and methodologies, to coordinate their work with international standards bodies, and to track emerging risks and work toward compatible evaluations of AI systems. Progress has also been made in the areas of privacy and AI workforce impact analysis.
Cross-Pollination
Even below the threshold of larger institutionalized international efforts, and despite the previously mentioned idiosyncrasies that point toward nuanced AI governance arrangements across geographies and contexts, the process of developing such arrangements at the local and national levels is currently characterized by a remarkable degree of cross-pollination among policymakers and lawmakers.45 Put differently, not only do geopolitical dynamics shape the normative field of AI governance, but the approaches and instruments that are deployed within the respective spheres of polycentric governance-making are themselves shaped by interactions among relevant stakeholders, elevating the complexity of the norm dynamics at play.
Forums and venues where such processes take place range from informal Zoom calls, conferences, and workshops to engagement in committees and networks, such as the Global Partnership on Artificial Intelligence or the G20 Working Group on Artificial Intelligence, to name just a few examples specific to the domain of AI. Platforms such as Globalpolicy.AI, the Transatlantic Policy Network, and the World Economic Forum, or collaborations between think tanks such as the Brookings Institution and the Center for European Policy Studies, also serve as important spaces for cross-pollination among various stakeholders, including policymakers and lawmakers, in addition to direct lines of communication among them. (Members of the US Congress, for instance, have engaged with one of the rapporteurs of the EU AI Act.) Efforts facilitated by academic institutions, such as the Stanford Institute for Human-Centered AI, also serve as exchange points for decision-makers in the field of AI.
Cross-pollination through knowledge diffusion in the field of AI governance takes place through various other mechanisms with varying degrees of informality and transparency. Examples include structured interactions in the context of standard-setting organizations involved in AI governance—the collaboration between the OECD and NIST to develop a catalog of AI tools and metrics is a case in point—but also lobbying efforts by industry and industry associations that often operate across jurisdictions and promote certain approaches or instruments across different forums of AI governance-making.
Toward Governance Innovation?
Each cycle of technological innovation with the potential to induce structural shifts in a socioeconomic environment when interacting with humans and society typically challenges existing governance structures. While the default response to such challenges is to apply the old structures to the new phenomenon, the disruptions also offer a window of opportunity for innovation within governance systems. Some of these governance innovations are gradual in nature and others more radical; some include novel institutions, and others innovate around processes or rights.46 The internet revolution, for instance, led to several governance innovations across all three domains, with ICANN being an example of an institutional innovation, online dispute resolution systems a process innovation, and the right to be forgotten a rights innovation.47
While traces of innovative governance might be spotted at the level of individual norms within large governance projects such as the EU AI Act, it is the calls for new AI oversight institutions voiced by government representatives, industry leaders, and academics that have recently garnered public attention. The new models proposed for AI governance often find inspiration in other policy domains, including climate, finance, or nuclear energy. A recent review of proposals for new AI institutions clustered them into seven functional categories that transcend traditional government policies. Models range from scientific and political consensus-building to coordinating institutions in the realm of policy and regulation, and from enforcement of standards and restrictions to international joint research and the distribution of benefits and access to AI technology.48
The analysis suggests a wide array of models and experiences that can be leveraged as the quest for global AI governance intensifies. In the current quicksilver environment, one of the most intriguing and consequential questions is arguably how much governance innovation is needed and (politically) possible to unlock the benefits of AI while managing its risks at the global level, and what such an arrangement would look like in practice.49
Approaches to AI Governance
Returning to present-day approaches to AI governance, the complexity and heterogeneity of the evolving AI governance landscape, the contextually embedded nature of the respective normative arrangements, and the speed of development make it difficult to meaningfully engage in a comparative norm-level analysis between and among different initiatives across various levels of governance. What this section offers, instead, is a number of analytical lenses that can be used to help understand and position different governance approaches relative to each other, highlighting the broad range of conceptual and functional pathways available.
Positioning Approaches
With these caveats in mind, one might take a closer look at the diverse AI governance arrangements that together form the normative field. Given the number of initiatives and the fluid state of norm development around the world, it is virtually impossible within the scope and purpose of this article to offer even a representative, let alone a comprehensive overview of current attempts aimed at governing AI. A more modest approach is to position some of the most salient governance initiatives along several spectrums with ideal-type approaches at their respective ends:
- Sectoral versus horizontal approaches: AI-based technologies cover various application contexts. Governance approaches can seek to regulate AI horizontally across its different use cases or to regulate development, deployment, and use in specific sectors, such as health, transportation, justice, or education, to name just a few. The United Kingdom takes a sectoral approach; other country examples include Japan and Switzerland. At least traditionally, the United States has also pursued a sectoral approach, with the recent Executive Order blurring the lines to the extent it pursues a whole-of-government approach. The EU, with its EU AI Act and related efforts like the AI Liability Directive, takes a decidedly horizontal approach to AI governance, supplemented by sector-specific regulations, resulting in a mixed approach. The EU AI Act also interacts with other legislation, including the GDPR and the Digital Services Act (DSA), the latter applying where foundation models are incorporated into very large online platforms and search engines.50
- Soft law versus hard law: Another positioning point is whether a given AI governance approach relies more on soft law or hard law. Soft law instruments include standards, ethics guidelines, checklists, and best practices, to name a few. They play a key role in self-regulatory regimes, but in the field of AI, they also supplement state-driven legislation and regulation. Japan and Singapore currently rely heavily on soft law instruments, which also continue to play a prominent role in the United States, for instance, in the gestalt of the Voluntary Commitments51 from leading companies to manage the risks posed by AI, but also the NIST AI Risk Management Framework,52 among others. Ambitious hard law approaches are currently pursued in the European Union, Canada, and Brazil, but they are also part of the AI governance mix in the United Kingdom (sectoral regulation) and the United States (particularly at the state and local levels), as already mentioned.
The spectrums outlined here interact with each other and partially overlap. As already indicated, AI governance initiatives often combine different approaches and instruments within them. For instance, hard law approaches to AI governance will typically also rely on standard-setting outside the formal lawmaking processes, as subsequently discussed. While not being exclusive and clear-cut, the spectrums might still serve as a rough coordination system to identify the position of different approaches relative to each other.
Cutting across the sectoral versus horizontal and soft versus hard law approaches are two other spectrums that can be helpful when considering the available toolbox of AI governance and comparing different strategic choices made by AI governance bodies:
- Outcomes versus procedural approaches: Outcome-based approaches to AI governance stipulate a desirable outcome such as innovation, economic growth, or safety, to name just a few objectives, and typically keep the means of achieving these objectives flexible. Procedural approaches, in contrast, prescribe instruments that need to be adopted along the way, assuming that they will lead to a desirable outcome. Risk management is a case in point. Risk-based approaches categorize AI applications based on their level of risk to individuals and society and attach tailored requirements to each level (see the schematic sketch following this list). Perhaps the most prominent example in the latter category is the EU AI Act, with its intricate scheme of risk classification and corresponding legal obligations. The Canadian draft legislation also builds upon a risk-based approach, as does the Brazilian Draft AI Law, which was inspired by the EU AI Act. Examples of the former approach include the United Kingdom’s pro-innovation approach to AI governance, which targets the outcomes AI will likely generate in specific applications rather than assigning rules according to risk levels.
- Principles versus rules-based approaches: As the name suggests, principles-based approaches seek to guide the development, deployment, and use of AI by laying down a set of overarching principles for the relevant stakeholders. Prominent examples of such an approach are the OECD AI Principles and the G7 International Guiding Principles on AI. Rules-based approaches, in contrast, typically lay out specific and more detailed rules by which the relevant stakeholders must play. At the country level, China’s Interim Measures in the realm of generative AI are illustrative. Sector-specific requirements, for instance in the area of medical AI or transportation, are also often rules-based, suggesting again that approaches might be mixed and are often not clear-cut.
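To make the mechanics of a procedural, risk-based scheme more concrete, the following minimal sketch models how a tiered classification might attach obligations to categories of AI systems. The four tier labels loosely follow the EU AI Act’s widely discussed risk levels (unacceptable, high, limited, minimal); the obligation labels and all identifiers are illustrative assumptions for exposition, not a rendering of any statute’s actual requirements.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers, loosely modeled on the EU AI Act's four levels."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heavy ex ante and ex post duties
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # no AI-specific duties

# Hypothetical mapping from tier to compliance obligations (labels are illustrative).
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["market prohibition"],
    RiskTier.HIGH: ["conformity assessment", "registration", "post-market monitoring"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        duties = obligations_for(tier)
        print(f"{tier.value}: {', '.join(duties) if duties else 'none'}")
```

The point of the sketch is structural: under a risk-based, procedural approach, the regulatory burden is a function of the assigned category, whereas an outcome-based approach would instead specify the result to be achieved and leave the path toward it open.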
Across these four spectrums (and others could be added), it is important to remember that any categorization of this sort runs the risk of oversimplification and is of limited value given the characteristics of AI governance as a “messy” normative field, as discussed in the earlier section. At the very least, however, the spectrums might illustrate the range of approaches available and serve as a rough navigation aid when contrasting different choices made by various AI governance actors.
Functional Dimensions
Another lens for positioning approaches to AI governance—and within such approaches, individual norms—is functional in nature. Borrowing from analyses of previous cycles of technological innovation and accompanying governance responses in law and regulation, one can distinguish between constraining, enabling, and leveling functions of norms.53
Lawmakers or regulators might draft and enact norms that constrain the development or use of certain types of technologies or functionalities. Frequently used instruments include the following:
- Prohibitions: Legislation or regulation can ban the development or use of certain AI systems or applications. The EU AI Act, for instance, prohibits certain AI use cases that pose unacceptable risks. Its far-reaching restrictions on the use of facial recognition technology are another case in point. Similarly, several US municipalities have restricted or banned the use of facial recognition technology by local agencies. Restrictions on the export of dual-use technologies are another illustration. More generally, the Canadian Artificial Intelligence and Data Act (AIDA) proposes new criminal law provisions to prohibit reckless and malicious uses of AI that cause serious harm to Canadians and their interests.
- Premarket obligations: AI laws and regulations can stipulate requirements that need to be met before an AI product enters the market. The EU AI Act requires developers of high-risk AI systems to perform comprehensive conformity assessments before placing them on the market. In areas such as medical AI or autonomous vehicles, premarket approval is often required, including under US regulations. Post-market monitoring is another instrument in the toolbox, often supplementing premarket regulatory schemes, as in the case of high-risk systems under the EU AI Act.
- Certification and registration: Analytically distinct but closely related to premarket requirements are certification and registration schemes. The important role of (at least) voluntary certifications is mentioned in the Canadian AIDA, for example. High-risk AI systems under the proposed EU AI Act are subject to a strict certification regime. In addition, such systems, as well as foundation models, need to be registered in an EU database.
Enabling norms, in contrast, are designed to permit or even promote the development and use of technology. Such norms—most prominently so-called safe harbors—have played an important role in creating a flourishing digital platform economy, as mentioned before. In the AI realm, laws and regulations can promote the development and use of AI systems in various ways, ranging from compliance exceptions for certain use cases to government investments. Some of the common instruments include the following:
- Funding and subsidies: Enabling AI legislation can establish funding schemes to support the development or adoption of AI-based technologies. Lawmakers across countries and regions, including the United States and Europe, have enacted laws as a foundation to direct investment and subsidies toward industry as well as the public sector. The US Executive Order, for instance, directs federal funding toward a research coordination network to support privacy-preserving technologies, among several other actions.
- Capacity building: AI legislation might stipulate capacity-building measures. Instruments may range from technical assistance programs to setting up resource and innovation centers. The US Executive Order, for instance, provides small developers and entrepreneurs with access to technical assistance and resources. The AI and Data Commissioner envisioned under the Canadian AIDA would also engage in capacity building.
- Sandboxes: Various AI laws encourage or even mandate regulatory sandboxes, which allow businesses and regulators to cooperate in a controlled environment to test innovative products or services and to gain insights into the risks of these innovations and appropriate safeguards. Sandboxes are a key measure in support of innovation under the EU AI Act, for instance. The Brazilian AI Act sets the foundation for testing environments to support the development of innovative AI systems.
Norms that level the playing field include, for instance, general rules prohibiting anti-competitive behavior or deceptive business practices. Specific instruments that seek to address information asymmetries or other imbalances in the AI context include the following:
- Transparency: To bridge information gaps, AI laws and regulations often impose disclosure obligations, which come in many forms and shapes. The EU AI Act, for instance, requires clear and comprehensible information about the capabilities and limitations of high-risk AI systems, and transparent and traceable decision-making processes. The Brazilian AI Act, to take another example, mandates transparency in the use of AI systems in interactions with natural persons, among other requirements.
- Education and training: AI literacy and skill-building programs might also have their anchor in laws and regulations. The US Executive Order, for instance, supports various programs to enhance AI-relevant skills to ensure access to AI opportunities for the workforce in general, and for specialized groups of professionals, such as investigators or prosecutors.
To be sure, this list of instruments is not exhaustive; additional mechanisms currently under consideration cover a broad spectrum of governance techniques, including licensing requirements,54 tax obligations, rulemaking authority, and procurement power, among others. Furthermore, several additional instruments transcend the three categories of constraining, enabling, and leveling norms. For instance, auditing and inspection regimes, oversight mechanisms, or sanctions are frequently used techniques to create accountability and ensure compliance and thus serve a cross-cutting function.
Mixed Approaches
Clearly, the evolving normative field of AI governance is complex and the traditional taxonomies start to blur. Particularly at the country level, AI governance often involves mixed approaches, combining different strategies and instruments and situating these countries somewhere along the spectrum of the ideal-type approaches outlined earlier. Moreover, although the dimensions that mark each spectrum might be analytically distinct, they are also interacting. The EU approach to AI governance is a helpful illustration in this regard: The EU AI Act advances a risk-based approach through hard law but is supplemented by sectoral regulations in areas such as health and transportation as well as soft law instruments such as technical standards and ethical principles. As already indicated, the coordination system is perhaps most helpful to understand the relative positioning between different approaches and to create awareness of the available options, particularly for countries and communities that remain undecided on which approach to pursue.
The same applies to the functional categories. AI governance, like previous tech-induced governance regimes, typically consists of a complex amalgam of norms. Such arrangements typically combine several of the functions briefly described earlier, as a recent empirical study of several hundred proposed (and at times enacted) AI laws and regulations across the Atlantic indicates.55 Nonetheless, certain trends become visible when applying a functional lens. Mapping proposed, rejected, and enacted legislation on AI-based technologies in the United States and Europe over the past seven years, the study reveals that legislative activities on both sides of the Atlantic serve different functions, with (proposed) laws and regulations in the United States tending more strongly toward the enabling zone than their European counterparts.
Effectiveness and Ripple Effects
The mapping of existing approaches to AI governance in general, and the high-level overview of some of the most salient instruments available to lawmakers and regulators in particular, indicate a deep reservoir of normative techniques (both social and technological in nature) that AI governing bodies can tap into when seeking to steer the development, deployment, and use of AI across diverse application areas. While the choices among approaches and instruments are not unconstrained and, as discussed in the subsection “Contextualizing AI Governance,” are shaped by numerous factors that create path dependencies, the respective actors involved often have significant leeway when selecting and mixing the tools to address specific AI governance issues.
AI governance shares characteristics of a wicked policy problem with many interdependencies and contingencies, making it virtually impossible to predict in all nuances the individual and aggregated effects of choosing one governance approach over another, or of selecting certain instruments while not deploying others.56 Experiences from previous cycles of technology innovation offer some high-level insights for the design of “good governance” and clues about possible ramifications of different approaches at a basic level.57 For instance, comparisons between US and European approaches to privacy and data protection, or an analysis of different governance regimes across regions when regulating online intermediaries such as social media platforms, might teach some lessons.58 However, at the more granular level of specific instruments, lawmakers and regulators often operate in the dark, as links between interventions and desirable outcomes (for instance, in terms of effectiveness, efficiency, and flexibility when addressing a given AI issue) remain chronically uncertain when dealing with structural sociotechnological transitions.59
In the normative field of AI governance, as in other domains, a complex web of economic, social, technological, organizational, and also human factors influences the practical outcomes emerging from any given mix of governance approaches and instruments over time. Constraining norms as a subset of AI governance arrangements and the question of pressures and incentives that might affect compliance and enforceability are indicative of some of the complicating dynamics. Research on the effects of different approaches to privacy and (pre-GDPR) data protection in the United States and European countries, for instance, revealed that on-the-ground practices—including overall awareness, leadership buy-in, and professional culture—have been critical factors determining privacy outcomes regardless of the underlying conceptual choices made by lawmakers and regulators.60 Another example is a finding from a recent in-depth examination of China’s hard law approach to AI governance concerning generative AI, suggesting a significant gap between the “law on the books” and “law in action” when it comes to the willingness to enforce the strict rules amid the geopolitical arms race and vis-à-vis domestic economic struggles.61 At a more abstract level, both stories—albeit for different reasons—point toward the importance of communities of practice and implementation capacities, respectively, that in no small part will co-determine the effectiveness of any of the available approaches to AI governance.
The uncertainties and dynamics involved in dealing with emerging science and technology such as AI make it not only challenging to select and combine approaches in ways that best address a given governance issue but also very difficult to anticipate second-order and ripple effects. Governance instruments that promote the use of AI in public administration, for instance, might exacerbate environmental issues or have implications for AI supply chains. AI guardrails that do not stand the test of time might affect public trust not only in the technology but also in the state. For all these reasons, it is vital to incorporate performance benchmarks and evaluation processes, as well as mechanisms of responsible experimentation and systematic learning, into AI governance arrangements, whether based on soft or hard law or on a sectoral or horizontal approach.62
Mapping Normative Patterns
Analyzing contemporary AI governance arrangements as building blocks of emerging governance regimes is a difficult task, as the caveats in the previous sections already indicate. AI governance as a normative field, as mentioned before, is not yet defined by clear boundaries; rather it is a moving target where general background rules and specific norms enacted by a broad range of governance actors interact. Given the polycentric character of current AI governance arrangements, various norm types are involved, with varying degrees of abstraction, levels of legitimacy, and prescriptive power. Finally, AI governance norms emerge within specific institutional, legal, and cultural contexts, but they also interact with each other as discussed earlier, making comparisons among them challenging.
One promising methodological approach to deal with this complexity and heterogeneity is to look for normative patterns instead of comparing individual norms. The approach, which is inspired by the theory of law as normative patterns in a normative field as developed by the late Swedish legal theorist Anna Christensen (who in turn drew inspiration from Douglas Hofstadter’s analysis of AI programs), is based on the empirical observation that different basic normative patterns can be distinguished both within and across a multitude of norms that seek to collectively regulate social matters.63
Applying this idea of normative pattern analysis to the normative field of AI governance, several patterns emerge when looking within and across the EU AI Act and US Executive Order in particular, but also when considering selected other laws such as the Brazilian and Canadian AI bills, as well as soft laws and international governance initiatives.64
Protection of Established Rights
Various legal norms in evolving AI governance arrangements seek to ensure and bolster the protection of established rights of rightsholders vis-à-vis novel risks and potential harms associated with the development and use of AI. Together, this cluster of norms forms one of the key patterns that transcend the heterogeneous set of norms of AI governance. Within the EU AI Act, for instance, the protection of established rights plays an important role and goes to the heart of the raison d’être of the legislation, which is set out to ensure a high level of protection of health, safety, fundamental rights, democracy, the rule of law, and the environment from harmful effects of AI systems. Similarly, several sections of the US Executive Order aim to protect established rights, for instance, when stipulating requirements against unlawful discrimination, protections against fraud and threats to privacy, or measures to ensure the safety, security, and reliability of AI systems. Brazil’s proposed AI legislation, for example, includes a section on the protection of the rights of individuals impacted by AI decision-making and outlines individual and collective rights of action. Rights protection is also a core motif of soft law instruments advanced globally. Consider, for example, Singapore’s Model AI Governance Framework, which includes the protection of the interests of human beings, including their well-being and safety, as primary considerations in the design, development, and deployment of AI. In the international realm, too, various AI governance initiatives include explicit or implicit references to protecting established rights. The G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, with its requirement to respect human rights and protect children and vulnerable groups; the Bletchley Declaration, with its recognition that the protection of human rights, safety, privacy, and data protection needs to be addressed; and the CoE’s Framework Convention, with its rules to align the life cycle of AI systems with international and national legal protections of human rights, stand pars pro toto for many others.
Protection of Established Positions
A second, related pattern that crystallizes across a diverse set of AI governance arrangements at the national and international levels is the protection of established positions, where some of the legal norms aimed at governing AI are crafted in ways that are protective of previously recognized economic, cultural, and social interests and aimed at preserving a given status quo. The EU AI Act includes various normative references along these lines. At the systemic and most fundamental level, measures taken to shield democracy and justice, for instance by limiting certain uses of AI or imposing strict upfront requirements, are examples of the protection of established positions. Regarding the protection of individual interests, the EU AI Act clarifies that it does not alter any of the previous rules, particularly in the realm of data protection, consumer protection, and product safety, which establish important baselines in terms of protected interests and positions. In other cases, it seeks to reaffirm established interests, for instance, with respect to copyright holders’ economic interests in the context of the regulation of foundation models used in generative AI systems, where providers would be required to publicly disclose a sufficiently detailed summary of the copyrighted material used as training data. In the same way, the Brazilian AI Law reinforces previous regulations, especially those related to data and consumer protection; concerning intellectual property, it establishes that the law is without prejudice to the rights of intellectual property owners. The US Executive Order also includes norms aimed at protecting established positions, for instance, in the form of requirements aimed at improving the security, resilience, and incident response related to AI usage in critical infrastructure, or when outlining different measures and programs in support of workers who might face future AI-related job disruptions, including the protection of their economic interests and well-being. Protecting established positions is also a key driver behind international AI governance initiatives. The Bletchley Declaration, to take just one example, with its focus on measures to ensure the safety of AI systems, for instance in the context of frontier AI capabilities, is motivated by the aim of safeguarding protected interests and existing positions of individuals, organizations, and governments.
Market Functional Pattern
The market functional pattern is a dynamic element in AI governance arrangements. Norms at the core of this pattern aim to promote new economic activities, stimulate technological development through market mechanisms, and support new markets and business models. Improving the smooth functioning of the internal market while promoting the uptake of human-centric and trustworthy AI and ensuring high levels of protection is among the overarching objectives of the EU AI Act. It enables AI systems, with notable exceptions, to benefit from the principle of free movement of goods and services. References to market functioning are spread across the proposed law and mentioned in the context of open software and data as enablers of market-based research and innovation. Transparency requirements are also contextualized as minimally invasive measures to avoid unjustifiable restrictions on trade. More generally, the EU AI Act is embedded in a broader digital strategy aimed at enhancing Europe’s competitiveness and promoting innovation in the digital market. On the other side of the Atlantic, the US Executive Order also states as a core principle the promotion of responsible innovation and competition and stresses the importance of a fair, open, and competitive ecosystem and marketplace for AI and related technologies. Its requirements to promote innovation and competition, but also to nurture AI talent and strengthen US leadership internationally, are, to a large extent, part of a market functional pattern. At least traces of the same pattern can also be found in soft law instruments. The Singaporean Model AI Framework,65 for instance, contextualizes its best practices in terms of AI as an enabler of new goods and services and a booster of productivity and competitiveness, which can lead to economic growth and better quality of life. At the international level, the market functional pattern has been less explicit in recent AI governance initiatives, with occasional references to productivity gains and inclusive economic growth, for instance, in the Bletchley Declaration. Market functional rationales also pop up in various AI-related efforts, such as in the realm of data governance aimed at enabling cross-border flows of data, building upon the international order of IP and trade as the normative bedrock of globalized markets.
Fostering Innovation
As already mentioned in the context of the functional dimensions of AI governance, several AI governance arrangements contain dedicated norms to promote research and development of AI-based technologies. Expanding on the original concept of normative pattern analysis pioneered by Anna Christensen, one can conceptualize this complex of norms as a fostering-innovation pattern. It interacts with the market functional pattern and is often framed as the protection of the potential for innovation based on preexisting commitments to free trade and intellectual property. Again, this normative pattern is typically present in national-level AI governance arrangements and in international initiatives and cuts across the hard law and soft law distinction. The provisions of the EU AI Act mention innovation close to thirty times. The Act stipulates various norms aimed at promoting innovation or protecting the potential for innovation, ranging from the possibility of regulatory sandboxes and coordinated standard-setting in the technical realm to AI literacy initiatives, among others. Likewise, the proposed Brazilian legislation also adopts regulatory sandboxes to promote innovation. Norms aimed at promoting AI innovation are also integral to the UK White Paper, which proportionately tailors its regulatory framework to fulfill the goal of innovation promotion,66 and to the US Executive Order, which outlines a broad range of measures to promote innovation and competition through immigration reform, investments in resources, support for research and development, and measures in the realm of IP protection, spanning various governmental agencies and bolstering private-public partnerships. Soft law instruments of AI governance often also include recommendations aimed at promoting innovation, both at the national and international levels. The influential OECD AI Principles, for example, call on governments to consider long-term public investments and encourage private investments in research and development to spur innovation in trustworthy AI, including the creation of open datasets to support the overall environment for responsible AI research. Along similar lines, at the global level, the UNESCO Recommendation on the Ethics of AI calls upon member states to ensure that public funds are dedicated to responsible and inclusive AI research and that governments promote international collaboration to advance innovation.
The patterns proposed in this chapter, inspired by Christensen’s original work, are an attempt at describing some of the core normative elements within and across different AI governance arrangements. Given the complexity and heterogeneity of AI governance arrangements, these different patterns do not make up a hierarchy of norms; rather, they coexist and interact with each other in ways shaped by various contextual factors, including cultural, political, and economic conditions, as alluded to in the previous section when contextualizing AI governance. Although the mode of analysis is descriptive rather than prescriptive, the approach can serve as a foundation to study how patterns manifest themselves over time within and across different societal conditions and application contexts.
Selected Nodes of AI Governance
The previous sections sketched some of the approaches, functions, and patterns of AI governance as a moving normative field, highlighting by example the great variety of pathways available when seeking to regulate the development, deployment, and use of AI. Building on this mapping exercise, this section looks at cross-cutting crystallization points in some of the AI governance arrangements featured in this article and intends to highlight both zones of convergence and divergence in the normative field. Again, several AI governance initiatives at the national and international levels are referenced to illustrate some of the commonalities and differences among them at the conceptual level. Last, the section identifies early traces of an interoperability approach and suggests it as a potential way forward for navigating both zones of convergence and divergence across AI governance arrangements.
Zones of Convergence
While much nuance remains, some trends of convergence can be observed across most of the AI governance arrangements reviewed in this chapter. For the methodological reasons mentioned before, the following commonalities focus on conceptual “nodes” of AI governance rather than on individual norm-level comparisons.
- Prominence of risk-based approaches: While some AI governance actors opt for outcome-based approaches, risk-based approaches to AI governance have gained popularity at both the national and international levels, cutting across sectoral and horizontal as well as soft and hard law instruments. Leading examples at the national level include the EU AI Act, Canadian AI and Data Act, and Brazilian AI bill, which all use some forms of risk and impact assessment to group AI systems into different categories of compliance obligations. The US Executive Order also highlights the importance of a risk-based approach, particularly when managing risks from the federal government’s own use of AI and in the context of implementation measures, for instance, in the gestalt of the NIST AI Risk Management Framework as an influential voluntary standard. Other soft law instruments, such as the Singaporean Model AI Governance Framework, provide guidance to organizations to adopt a risk-based approach when implementing measures. At the international level, risk-based approaches have been promoted by G7 digital and technology ministers and referenced in the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.67 Although these examples suggest conceptual convergence toward risk-based approaches, substantial differences continue to exist at the operational level.68
- Role of regulatory sandboxes: Building on previous experiences using sandboxes as a supervised experimental space to enable responsible testing of emerging technologies and foster bidirectional learning between developers and regulators, AI governance bodies across the globe have started to embrace this technique and are currently applying it to AI. The European Union in the EU AI Act and several European member states, including Spain and Germany, are promoting the use of AI regulatory sandboxes as controlled environments with reduced regulatory burden to keep pace with rapid AI development while gaining experience dealing with it effectively. The Brazilian AI Act also authorizes the operation of an experimental regulatory environment for innovation in AI, and the preparation for a first sandbox is already underway. Singapore, as a last example, recently launched a Generative AI Evaluation Sandbox as an experimental platform for developers to build responsible AI use cases and enable the evaluation of trusted AI products.69
- Importance of standards: Across all the reviewed AI governance arrangements, regardless of their respective positioning, standards play a vital role.70 Even where comprehensive legislation is at the core of AI governance, as in Europe with the EU AI Act, standard-setting is a critical part of the strategy. The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are leading organizations developing standards that could provide developers with a presumption of conformity with the EU AI Act. As already mentioned, NIST in the United States has been actively involved in developing standards for AI, creating a framework that fosters the development of trustworthy and responsible AI systems and covers areas such as bias, explainability, and robustness. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have a joint technical committee focused on AI standardization, and international organizations like the Institute of Electrical and Electronics Engineers (IEEE) have set up various working groups developing standards for ethical considerations in AI, to name just a few initiatives among many.
Zones of Divergence
The complexity and heterogeneity of the AI governance landscape make it unsurprising that many differences exist not only at the level of individual norms—for instance, whether a given AI governance arrangement specifically addresses foundation models, and if so, how—but also at the conceptual level. In addition to the higher-level differences resulting from distinct approaches to AI governance already mentioned earlier in “Approaches to AI Governance,” some of the particularly noteworthy conceptual areas of divergence include the following:
- Scope and definitions: Many important nuances exist regarding the scope of application of the various governance initiatives, as well as with respect to definitions of technical and legal terms. Voluntary industry standards and professional best practices, for instance, have a very different scope and reach than mandatory laws and regulations, whether horizontal by design or sector-specific. The EU AI Act seeks to regulate the full range of AI applications across the private and public sectors, whereas the aforementioned US Algorithmic Accountability Act appears more specific and selective. The US Executive Order, in contrast, takes a broad whole-of-government approach to AI governance. While progress has been made in developing a shared understanding of key terms such as AI itself at the international level—strongly influenced by the important work of the OECD, which recently updated its definition of AI—many other definitions of key concepts remain works in progress or contested, as recent debates about the definitions of general-purpose AI, generative AI, and foundation models in the EU AI Act illustrate. To be sure, differences in terminology are neither a new phenomenon nor unique to AI governance. In light of the heterogeneous norms landscape, however, it remains a significant challenge for the years to come to create appropriate levels of (semantic) interoperability across a thickening web of emerging laws, standards, and best practices that might apply simultaneously given the polycentric nature of AI governance.
- Level of normative commitment: Despite the flourishing of AI governance initiatives in general and recent momentum around the creation of hard law after a phase with a strong emphasis on ethical norms, stark differences remain among such efforts when it comes to the level of the underlying normative commitment. Perhaps most significantly and visibly, the commitment to individual rights varies greatly across AI governance arrangements. This applies not only when comparing AI norms in environments governed by the rule of law with those elsewhere but also when considering the depth of normative guarantees offered by different democratic regimes. For instance, while the US Executive Order without any doubt marks an important step forward for the protection of civil rights and privacy against emerging AI risks, it does not immediately offer the same level of actionable legal protection for individuals as the provisions set forth in the EU AI Act and the General Data Protection Regulation. Similarly, most, if not all, of the AI governance requirements stipulated in soft law instruments such as Singapore’s Model AI Governance Framework, as well as many of the global AI governance initiatives, are important signals and milestones on a longer trajectory but remain relatively weak normative commitments when assessed from the vantage point of advancing individual rights beyond the current baseline of human rights protections.
- Enforcement: The AI governance arrangements reviewed in the context of this chapter (and beyond) vary greatly in terms of enforcement regimes. As a threshold matter, much depends on the specifics of the governance approach itself, for instance, the role of voluntary self-regulation versus government-based regulation through hard law. Consistent with the polycentric character of the AI governance landscape, different norms are typically enforced by different actors, ranging from in-house AI accountability boards and professional associations to traditional law enforcement, and from newly created AI authorities to preexisting specialized agencies tasked with AI norm enforcement in their respective sectors or industries. The shape of an AI enforcement regime is in turn influenced by broader contextual factors, including the preexisting legal order and market structure. For instance, the EU AI Act places primary enforcement responsibility in the hands of the member states, with consultation and coordination mechanisms at the EU level in the form of a European Artificial Intelligence Board chaired by the EU Commission. At least from a structural perspective, the EU approach to enforcement resembles the regime of harmonized data protection law, including strong enforcement tools. Adopting a similar strategy, the Brazilian AI bill establishes a new supervisory authority that will be responsible for monitoring noncompliance with the law, promoting its implementation, and issuing other regulations related to AI. In the United States, the Federal Trade Commission (FTC)—in addition to the agencies now tasked with the implementation of the US Executive Order—is expected to play a particularly important role as an enforcer of consumer protection–oriented AI norms and standards at the federal level (complemented by specialized agencies such as the FDA in health AI), albeit without a mandate as comprehensive as that of its EU counterparts.
Zones of Interoperability
The concept of interoperability offers an alternative analytical view of the diverse landscape of emerging AI governance arrangements—a perspective that transcends the binary division between zones of convergence and divergence. Originally a technical concept, interoperability in the digital realm can be broadly understood as the ability of different systems, applications, or components to work together based on the exchange of useful data and other information.71 Under the header of “legal interop,” the concept has been extended by analogy to conceptualize the working together among distinct legal norms across jurisdictions that regulate the global flow of information.72 A number of instruments are available to enhance legal interoperability, including legal harmonization, mutual recognition, reciprocity, cooperation, and standardization—approaches that can be operationalized through various means, ranging from treaty law to self-regulation.73 Some of these tools might also be relevant when seeking to enhance interop between AI governance arrangements or their components.
- Many of the international initiatives led by state and nonstate actors mentioned earlier in this chapter aim at enhancing the interoperability of norms, rules, standards, and decision-making procedures across different AI governance arrangements. The OECD Principles on Artificial Intelligence and the UNESCO Recommendation on the Ethics of AI, for instance, have informed and often shaped hard law and soft law approaches across various jurisdictions, promoting interoperability at the norm and process levels and beyond. The G7 Hiroshima Process also aims to establish common guiding principles for organizations developing advanced AI systems while acknowledging that “different jurisdictions may take their own unique approaches to implementing these guiding principles in different ways.”74 More recent efforts such as the United Nations Resolution encourage “internationally interoperable identification, classification, evaluation, testing, prevention and mitigation of vulnerabilities and risks” of AI systems.75
- Higher levels of interop, however, might not only come from top-down efforts. Multistakeholder initiatives can also enable the working together among different arrangements and regimes.76 In the field of AI governance, such efforts are still in the early stages, but important work is well underway.77 The Global Partnership on Artificial Intelligence, for instance, has produced various guides on the responsible development, use, and adoption of AI.78 The Partnership on AI, too, has advanced best practices in various areas of AI governance, including synthetic media.79 The AI Governance Alliance, convened by the World Economic Forum, has produced interoperable building blocks to guide the safe development, deployment, and use of generative AI across AI governance arrangements.80 And ETH Zurich, in collaboration with the Swiss government, hosts a multistakeholder Gen AI Redteaming network whose members collaborate on disclosing, replicating, and mitigating safety issues and on developing best practices.81
As discussed, emerging AI governance arrangements introduce and legitimize a variety of innovative approaches, tools, and practices—ranging from human rights and risk assessments to codes of practice—that will need to be further specified and operationalized in different forums and processes. From an interop perspective, this modularization of AI governance opens the possibility of cross-border multistakeholder cooperation, with the promise of enhancing alignment between different AI governance arrangements by enabling the working together among some of their core components even absent more ambitious harmonization at the international regime level.82
AI Governance for an Uncertain Future
This chapter has explored AI governance as a normative field from a predominantly descriptive perspective. Developing detailed prescriptions at the level of concrete norms from such an initial mapping exercise is methodologically problematic, at least at this early stage of AI governance, when little empirical evidence exists about what works under what conditions. The analytical lenses introduced in the preceding sections and the discussion of possible normative patterns within and across AI governance arrangements nonetheless suggest a number of considerations when contemplating additional interventions to regulate the development, deployment, and use of AI. Specifically, the discussion in this chapter offers five key takeaway points.
First, the complexity and heterogeneity of AI governance as an evolving normative field suggest adopting an ecosystem perspective when considering additional initiatives aimed at steering the development, deployment, and use of AI. Metaphorically speaking, the AI governance landscape resembles a tropical garden more than a formal garden with neatly trimmed lawns, arranged flower beds, and precise geometric designs. Without pushing the analogy too far, future AI governance interventions, like tropical gardening, require interaction with the sociotechnological environment, a deep understanding of the cultural, societal, economic, legal, and other relevant contexts, and a sense of how to integrate governance initiatives into the surrounding environment.
Second, the mapping of various AI governance arrangements along a number of interacting spectrums—such as sectoral versus horizontal approaches, soft versus hard law, outcome-based versus risk-based, or principles-based versus rules-based approaches—as well as the different functions of governance norms, principles, standards, and decision-making procedures points toward a broad range of available approaches, strategies, and tools in the AI governance toolkit. Future regulatory initiatives should consider the full range of instruments available and select them based on their fit for purpose when addressing specific AI governance issues. Ultimately, the selection of tools will need to be guided not only by features such as efficacy and efficiency but also by overarching values such as legitimacy, accountability, and fairness.
Third, any future governance initiative needs to be designed and implemented with context in mind. The discussion in the preceding sections has highlighted a number of such contextual factors and alluded to legal path-dependencies, the political economy, and geopolitical dynamics among the forces at play. While AI governance arrangements from other contexts—for instance, from other regions—might serve as sources of inspiration, recent experiences with the General Data Protection Regulation offer a cautionary tale when it comes to legal transplants that ignore the contextual realities in which they are supposed to be adopted. Debates about a possible Brussels effect originating from the EU AI Act need to consider these complexities and limitations, particularly vis-à-vis majority world countries.
| Functions | Instruments and mechanisms | Normative patterns | Main AI governance issues | Examples |
|---|---|---|---|---|
| Constraining | Prohibitions | Protection of established rights; protection of established positions; market functional patterns | Existential risk; democratic erosion; freedom and autonomy | Chapter II EU AI Act (Prohibited AI Practices) |
| | Pre-market obligations | | Performance outcomes, incl. security, safety, privacy, nondiscrimination (bias), fairness | Chapter III Section 2 EU AI Act (Requirements for high-risk AI systems) |
| | Certification, registration | | Responsibility, incl. accountability | Chapter III Section 5 EU AI Act (… conformity assessment, certificates, registration) |
| Enabling | Funding, subsidies | Fostering innovation; protection of established positions | Performance outcomes; sustainability; geopolitical competition | US Executive Order (various provisions) |
| | Capacity building | Fostering innovation; market functional patterns | Performance outcomes; geopolitical competition | US Executive Order (various provisions) |
| | Sandboxes | | Performance outcomes, evidence-based policy | Art. 38 and Art. 39 Brazilian AI Act Draft (measures to encourage innovation) |
| Leveling | Transparency | Market functional patterns; protection of established positions; protection of established rights | Explainability; trustworthiness; accountability | S. 11 Canadian AIDA Draft (publication of description) |
| | Education, training | Fostering innovation; market functional patterns | Labor displacement; job quality; performance outcomes | US Executive Order (various provisions) |
| Cross-cutting | Rulemaking | Protection of established rights; protection of established positions; market functional patterns | Accountability, compliance, enforcement | US Executive Order (various provisions) |
| | Auditing | | | US Executive Order (various provisions) |
| | Oversight | | | S. 33 Canadian AIDA Draft (AI and data commissioner) |
| | Sanctions | | | Chapter XII EU AI Act (incl. penalties) |

Note: The table connects the sections “Approaches to AI Governance” and “Mapping Normative Patterns” and benefits from the concise overview of AI governance issues and interventions by the Working Group on Regulation and Executive Action of the National AI Advisory Committee (NAIAC), “Rationales, Mechanisms, and Challenges to Regulating AI: A Concise Guide and Explanation,” Non-Decisional Statement.
Fourth, the select initiatives touched on in this chapter, featuring a small subset of the AI governance arrangements currently in the making, not only give a sense of the heterogeneity of relevant principles, norms, standards, and decision-making processes but also point toward an enormous degree of complexity at the implementation level. Future AI governance initiatives should not only specify what problem they seek to address in what context and through what means but also invest, in parallel, in capacity building to enable and empower key actors in both the private and public sectors to turn abstract principles and norms into actual practices. Such capacity building requires multistakeholder and increasingly international cooperation and has significant implications for the education and training of civil servants and private sector leaders alike.
Last, the descriptive engagement with selected elements of various AI governance arrangements suggests a series of broader design questions when it comes to guardrail-making amid an increasingly discontinuous future. From the vantage point of guardrail design more generally, AI governance—and not only AI governance—should be human-centric, guiding and supporting individuals to make better decisions in light of socially desirable outcomes that define us as communities and hold us together as societies. Such a perspective not only suggests a critical examination of the suitable principles, norms, standards, and decision-making processes to govern AI but also highlights the importance of appropriate requirements that guide the design of such rules, including principles such as guardrail diversity, variability, plasticity, and self-constraint.83
Notes
Thanks to Martha Minow and Susan Ness as well as the participants of the NAS-Sunnylands-APPC AI Retreat for helpful comments on an earlier draft, and to Noha Lea Halim and Jiawei Zhang for research assistance. Manuscript as of March 24, 2024. Contact: Urs.Gasser@tum.de.
1. European Parliament, Artificial Intelligence Act (March 2024), https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf.
2. The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
3. Council of Europe, Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (September 2, 2024), https://rm.coe.int/1680afae3c.
4. United Nations General Assembly, Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development: Draft Resolution (March 11, 2024), https://digitallibrary.un.org/record/4040897?v=pdf&ln=en.
5. Ministry of Foreign Affairs of Japan, G7 Leaders’ Statement on the Hiroshima AI Process (October 30, 2023), https://www.mofa.go.jp/ecm/ec/page5e_000076.html.
6. Ministry of External Affairs, Government of India, G20 New Delhi Leaders’ Declaration (September 9, 2023), https://www.mea.gov.in/bilateral-documents.htm?dtl/37084/G20_New_Delhi_Leaders_Declaration.
7. See, for example, Urs Gasser and Virgilio A. F. Almeida, “A Layered Model for AI Governance,” IEEE Internet Computing 21, no. 6 (November 20, 2017), https://ieeexplore.ieee.org/document/8114684.
8. See, for example, Elinor Ostrom, Understanding Institutional Diversity (Princeton, NJ: Princeton University Press, 2005).
9. See also Araz Taeihagh, “Governance of Artificial Intelligence,” Policy and Society 40, no. 2 (June 4, 2021), 137–157.
10. OECD, Recommendation of the Council on Artificial Intelligence (adopted May 21, 2019, amended May 2, 2024), https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
11. See, for example, Gunnar Folke Schuppert, The World of Rules: A Somewhat Different Measurement of the World (Frankfurt/Main, Germany: Max-Planck-Institut für Rechtsgeschichte und Rechtstheorie, 2017).
12. See, for example, Anna Jobin, Marcello Ienca, and Effy Vayena, “The Global Landscape of AI Ethics Guidelines,” Nature Machine Intelligence 1 (September 2, 2019), https://doi.org/10.1038/s42256-019-0088-2.
13. See, for example, Nestor Maslej et al., “The Artificial Intelligence Index Report 2023,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University (Stanford, CA: April 2023), https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf.
14. Canada, Parliament, House of Commons, An Act to Enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to Make Consequential and Related Amendments to Other Acts, 1st sess., 44th Parliament, 2021, https://www.parl.ca/legisinfo/en/bill/44-1/c-27.
15. Cyberspace Administration of China (CAC), “Interim Measures for the Management of Generative Artificial Intelligence Services” [in Chinese] (July 10, 2023), https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm.
16. Brazilian Federal Senate, Dispõe sobre o uso da Inteligência Artificial, Bill No. 2338/2023 [in Portuguese] (2023), https://www25.senado.leg.br/web/atividade/materias/-/materia/157233.
17. European Parliament, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance), https://eur-lex.europa.eu/eli/reg/2024/1689/oj.
18. White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. See also Office of Management and Budget, “OMB Releases Implementation Guidance Following President Biden’s Executive Order on Artificial Intelligence,” The White House, November 1, 2023, https://www.whitehouse.gov/omb/briefing-room/2023/11/01/omb-releases-implementation-guidance-following-president-bidens-executive-order-on-artificial-intelligence/.
19. Algorithmic Accountability Act of 2023, H. R. 5628 (2023).
20. Office of Science and Technology Policy, Blueprint for an AI Bill of Rights, The White House (2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
21. National Institute of Standards and Technology, “AI Standards,” August 3, 2021, updated June 5, 2024, https://www.nist.gov/artificial-intelligence/ai-standards.
22. See, for example, Susana Borrás and Jakob Edler, Eds., The Governance of Socio-Technical Systems: Explaining Change (Cheltenham: Edward Elgar, 2014).
23. See, for example, Laura Galindo, Karine Perset, and Francesca Sheeka, “An Overview of National AI Strategies and Policies,” OECD Going Digital Toolkit, Policy Note, 2021, https://goingdigital.oecd.org/data/notes/No14_ToolkitNote_AIStrategies.pdf.
24. Christian Djeffal, Markus B. Siewert, and Stefan Wurster, “Role of the State and Responsibility in Governing Artificial Intelligence: A Comparative Analysis of AI Strategies,” Journal of European Public Policy 29, no. 11 (2022): 1799–1821, https://doi.org/10.1080/13501763.2022.2094987.
25. See, for example, Ajay Agrawal, Joshua Gans, and Avi Goldfarb, “Economic Policy for Artificial Intelligence,” Innovation Policy and the Economy 19, no. 1 (2019), https://doi.org/10.1086/699935.
26. See, for example, Katherine Quezada-Tavarez, Lidia Dutkiewicz, and Noémie Krack, “Voicing Challenges: GDPR and AI Research,” Open Research Europe 2:126 (November 23, 2022), https://open-research-europe.ec.europa.eu/articles/2-126; Nicholas Martin, Christian Matt, Crispin Niebel, and Knut Blind, “How Data Protection Regulation Affects Startup Innovation,” Information Systems Frontiers (2019), https://doi.org/10.1007/s10796-019-09974-2; for a more general discussion, see Panel for the Future of Science and Technology, The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence (Brussels: European Union, 2020), https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf.
27. See, for example, Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W.W. Norton, 2023).
28. See, for example, Maximilian Kasy, “The Political Economy of AI: Towards Democratic Control of the Means of Prediction,” Institute for New Economic Thinking, the Oxford Martin School, April 14, 2023, https://oms-inet.files.svdcdn.com/production/files/handbook_politicalecon_ai.pdf.
29. See, for example, Inga Ulnicane et al., “Governance of Artificial Intelligence: Emerging International Trends and Policy Frames,” in Maurizio Tinnirello, Ed., The Global Politics of Artificial Intelligence (New York: Chapman and Hall/CRC, 2022).
30. Nathalie A. Smuha, “From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory Competition for Artificial Intelligence,” Law, Innovation and Technology 13, no. 1 (March 23, 2021), https://doi.org/10.1080/17579961.2021.1898300.
31. See, for example, Alessandro Annoni et al., “Artificial Intelligence: A European Perspective,” European Union (Luxembourg: Publication Office of the European Union, 2018), doi:10.2760/11251.
32. See Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford: Oxford University Press, 2023).
33. See, for example, Sönke Ehret, “Public Preferences for Governing AI Technology: Comparative Evidence,” Journal of European Public Policy 29, no. 11 (2022): 1779–1798, https://doi.org/10.1080/13501763.2022.2094988.
34. Stephen Cory Robinson, “Trust, Transparency, and Openness: How Inclusion of Cultural Values Shapes Nordic National Public Policy Strategies for Artificial Intelligence (AI),” Technology in Society 63 (2020), https://doi.org/10.1016/j.techsoc.2020.101421.
35. See, for example, Lewin Schmitt, “Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape,” AI and Ethics 2 (August 17, 2022): 303–314.
36. Organisation for Economic Co-operation and Development, “OECD AI Principles Overview,” oecd.ai, n.d., https://oecd.ai/en/ai-principles.
37. UNESCO, Recommendation on the Ethics of Artificial Intelligence, United Nations (2022), https://unesdoc.unesco.org/ark:/48223/pf0000381137.
38. AI Safety Summit, The Bletchley Declaration (November 1, 2023), https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
39. Office of the Secretary-General’s Envoy on Technology, “High-Level Advisory Body on Artificial Intelligence,” United Nations (n.d.).
40. Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, Council of Europe (2024), https://rm.coe.int/1680afae3c.
41. Council of Europe Newsroom, “Council of Europe Adopts First International Treaty on Artificial Intelligence,” Council of Europe, May 17, 2024, https://www.coe.int/en/web/portal/-/council-of-europe-adopts-first-international-treaty-on-artificial-intelligence.
42. “Digital Economic Partnership Agreement (DEPA),” New Zealand Ministry of Foreign Affairs & Trade, n.d., https://www.mfat.govt.nz/en/trade/free-trade-agreements/free-trade-agreements-in-force/digital-economy-partnership-agreement-depa.
43. Department for Business and Trade and the Department for International Trade, “UK-New Zealand FTA: Data Explainer,” gov.uk, February 28, 2022, https://www.gov.uk/government/publications/uk-new-zealand-fta-data-explainer.
44. For a detailed overview, see Cameron F. Kerry et al., “Strengthening International Cooperation on AI,” Brookings/Center for European Policy Studies (October 25, 2021). For a framework, see Pekka Ala-Pietilä and Nathalie A. Smuha, “A Framework for Global Cooperation on Artificial Intelligence and its Governance,” in B. Braunschweig and M. Ghallab, Reflections on AI for Humanity (Preprint) (New York, NY: Springer, 2021).
45. For a general overview of the barriers to cross-cultural cooperation on AI and how to overcome them, see, for example, Seán S. ÓhÉigeartaigh et al., “Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance,” Philosophy & Technology 33 (2020): 571–593.
46. Urs Gasser and Herbert Burkert, “Regulating Technological Innovation: An Information and a Business Law Perspective” in Rechtliche Rahmenbedingungen des Wirtschaftsstandortes Schweiz: Festschrift 25 Jahre juristische Abschlüsse an der Universität St. Gallen (Zürich: Dike, 2007).
47. See, for example, Urs Gasser and John Palfrey, Advanced Introduction to Digital Law (Cheltenham: Edward Elgar, forthcoming 2025).
48. Matthijs Maas and José Jaime Villalobos, “International AI Institutions: A Literature Review of Models, Examples, and Proposals,” Legal Priorities Project (September 2023).
49. See Alondra Nelson, “The Right Way to Regulate AI: Focus on Its Possibilities, Not Its Perils,” Foreign Affairs, January 12, 2024.
50. See, for example, Cornelia Kutterer, “Regulating Foundation Models in the AI Act: From ‘High’ to ‘Systemic’ Risk,” AI Regulation Papers (January 2024).
51. The White House, “FACT Sheet: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” The White House, July 21, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
52. “AI Risk Management Framework,” NIST, 2024, https://www.nist.gov/itl/ai-risk-management-framework.
53. Gasser and Burkert, “Regulating Technological Innovation.”
54. See, for example, Neel Guha et al., “The AI Regulatory Alignment Problem,” HAI Policy & Society and Stanford RegLab (November 2023).
55. Kerstin N. Vokinger, David Schneider, and Urs Gasser, “Mapping Legislative and Regulatory Dynamics of Artificial Intelligence in the US and Europe” (September 2023, manuscript under review).
56. Tim Büthe et al., “Governing AI—Attempting to Herd Cats? Introduction to the Special Issue on the Governance of Artificial Intelligence,” Journal of European Public Policy 29, no. 11 (2022): 1721–1752, https://doi.org/10.1080/13501763.2022.2126515.
57. See, for example, Inga Ulnicane et al., “Good Governance as a Response to Discontents? Déjà Vu, or Lessons for AI from Other Emerging Technologies,” Interdisciplinary Science Reviews 46, no. 1–2 (March 7, 2021): 71–93, https://doi.org/10.1080/03080188.2020.1840220.
58. See, for example, Urs Gasser and Wolfgang Schulz, “Governance of Online Intermediaries: Observations from a Series of National Case Studies,” Berkman Center Research Publication Series No. 2015–5 (February 2015).
59. See, for example, Stefan Kuhlmann, Peter Stegmaier, and Kornelia Konrad, “The Tentative Governance of Emerging Science and Technology—A Conceptual Introduction,” Research Policy 48 (2019), 1091–1097, https://doi.org/10.1016/j.respol.2019.01.006.
60. Kenneth A. Bamberger and Deirdre K. Mulligan, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe (Cambridge, MA: MIT Press, 2015).
61. Angela Huyue Zhang, “The Promise and Perils of China’s Regulation of Artificial Intelligence,” University of Hong Kong Faculty of Law Research Paper No. 2024/02 (February 12, 2024), http://dx.doi.org/10.2139/ssrn.4708676.
62. One such approach is tentative governance; see Stefan Kuhlmann, Peter Stegmaier, and Kornelia Konrad, “The Tentative Governance of Emerging Science and Technology—A Conceptual Introduction,” Research Policy 48 (2019), 1091–1097, https://doi.org/10.1016/j.respol.2019.01.006.
63. See, in particular, Anna Christensen, “Normative Patterns and the Normative Field: A Post-Liberal View on Law,” in Thomas Wilhelmsson and Samuel Hurri (eds.), From Dissonance to Sense: Welfare State Expectations, Privatisation and Private Law (Farnham: Ashgate, 1999).
64. Bill C-27, Artificial Intelligence and Data Act, 1st sess., 44th Parliament, 70–71 Elizabeth II, 2021–2022.
65. Personal Data Protection Commission Singapore, Model Artificial Intelligence Governance Framework, 2nd ed. (2020), https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf.
66. “A Pro-Innovation Approach to AI Regulation,” UK Department for Science, Innovation and Technology, August 3, 2023, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
67. European Commission, Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (October 13, 2023), https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems.
68. See, for example, Alex Engler, “The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment,” Brookings Research, April 25, 2023, https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/.
69. INFOCOMM Media Development Authority, “First of Its Kind Generative AI Evaluation Sandbox for Trusted AI by AI Verify Foundation and IMDA,” imda.gov, October 31, 2023, https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2023/generative-ai-evaluation-sandbox.
70. For a detailed analysis and a discussion of the factors that determine competition or cooperation in standardization efforts, see Nora von Ingersleben-Seip, “Competition and Cooperation in Artificial Intelligence Standard Setting: Explaining Emergent Patterns,” Review of Policy Research 40, no. 5 (September 2023): 781–810.
71. See John Palfrey and Urs Gasser, Interop: The Promise and Perils of Highly Interconnected Systems (New York: Basic Books, 2012).
72. See chapter 10 of Palfrey and Gasser, Interop, and expanding on it, Rolf H. Weber, Legal Interoperability as a Tool for Combatting Fragmentation, Centre for International Governance Innovation (December 2014), https://www.cigionline.org/static/documents/gcig_paper_no4.pdf.
73. See Weber, “Legal Interoperability as a Tool for Combatting Fragmentation,” 9.
74. European Commission, Hiroshima Process International Code of Conduct.
75. United Nations General Assembly, Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development.
76. See also United Nations AI Advisory Body, “Governing AI for Humanity,” Interim Report (December 2023), suggesting ILO’s tripartite structure and the UN Global Compact as possible sources of inspiration (p. 16).
77. For early country examples, see “Multistakeholder AI Development: 10 Building Blocks for Inclusive Policy Design,” UNESCO and i4Policy (2022).
78. The Global Partnership on Artificial Intelligence, “Multistakeholder Expert Group Annual Report,” 2023, https://gpai.ai/projects/.
79. Partnership on AI, “Responsible Practices for Synthetic Media: A Framework for Collective Action,” n.d., https://syntheticmedia.partnershiponai.org/.
80. AI Governance Alliance, Presidio AI Framework: Towards Safe Generative AI Models, World Economic Forum (2024), https://www3.weforum.org/docs/WEF_AI_Governance_Alliance_Briefing_Paper_Series_2024.pdf.
81. ETH AI Center, “Joining Forces to Reveal and Address the Risks of Generative AI,” January 2024, https://ai.ethz.ch/news-and-events/ai-center-news/2024/01/launch-of-a-risk-exploration-and-mitigation-network-for-generative-ai.html.
82. In the context of platform regulation, see Chris Riley and Susan Ness, “Modularity for International Internet Governance,” Lawfare, July 19, 2022.
83. Urs Gasser and Viktor Mayer-Schönberger, Guardrails: Guiding Human Decisions in the Age of AI (Princeton, NJ: Princeton University Press, 2024).