CHAPTER 6 Challenges to Evaluating Emerging Technologies and the Need for a Justice-Led Approach to Shaping Innovation
Alex John London
As it is useful that while mankind are imperfect there should be different opinions, so is it that there should be different experiments of living; that free scope should be given to varieties of character, short of injury to others; and that the worth of different modes of life should be proved practically, when any one thinks fit to try them.
—Mill, On Liberty, Chapter 3
Introduction
Innovation is inherently disruptive.1 It involves developing or discovering new ideas, new practices, new products, or new services (call these “ends” for convenience), or finding new ways to achieve established ends. It is also deeply social. In some cases, new ends compete with established ends for people’s attention or allegiance. In other cases, new ways of achieving the same end compete with old ways of achieving that end. In both cases, the disruptions of innovation can have a profound impact on the rights and well-being of individuals. In some cases, this impact can be positive, as when individuals are better able to more safely or effectively advance ends that are important to them. But the disruptions of innovation can also have negative effects. Not all innovations are successful; some efforts to achieve new ends or to achieve established ends in better ways fail. When unsafe or ineffective technologies circulate, their use can produce direct harms, as when unsafe or ineffective medications subject users to toxic side effects, as well as opportunity costs from not having accessed a safer or more effective alternative. In other cases, successful innovation means that old ends or old ways of achieving established ends are placed at a competitive disadvantage and the people who identify with them, or who built their expertise or life around them, find themselves out of work or displaced in some other way.
A variety of parties whose interests are affected by innovation—individuals, organizations, social institutions, government bodies, policymakers, lawmakers, and leaders in all sectors—would benefit from an ethical framework that would facilitate the assessment of innovative technologies and of the ecosystem of innovation from which they are produced. Such a framework would be valuable for a variety of reasons, but I focus on three in particular. The first involves normative guidance: It would be valuable to the extent that it helps these stakeholders determine when some technological innovation is disruptive but morally permissible, when such disruptions call for some type of social action, and what form such a response should take. The second involves the allocation of responsibility: It would be valuable if it could facilitate the process of identifying which agents or actors are responsible for intervening to eliminate, reduce, or mitigate the ethical concerns associated with a particular innovation. The third involves tracking the health of the innovation ecosystem: It would be valuable for such a framework to facilitate the assessment of the ecosystem of innovation, understood as the division of social labor and the rules, regulations, laws, social structures, and institutions that shape the process of innovation in order to determine when this ecosystem is functioning in a way that is morally and socially justifiable and when it requires redress or improvement.
In the following section, “Distinctive Challenges to the Ethical Assessment of Innovation,” I outline some of the factors that pose a challenge to any such framework. These factors include complexities around understanding or modeling the process of innovation, predicting the effects of innovation, and what the philosopher John Rawls referred to as the fact of reasonable pluralism—the idea that freedom promotes reasonable diversity in moral values and commitments. In “Pragmatic Approaches and the Neglect of Justice,” I discuss a common approach to navigating the fact of reasonable pluralism—namely, relying on a “thin” set of ethical principles that might be used to evaluate individual innovations and the innovation ecosystem. These values include the avoidance of harm or nonmaleficence, the provision of benefit or beneficence, and respect for autonomy, fairness, and justice. In “Toward a Justice-Led Approach to Shaping and Evaluating Innovation,” I argue that these values are often interpreted in a way that places the greatest emphasis on a set of direct or immediate effects of innovation and that marks out the contributions of a limited set of stakeholders. What is left out is a clear recognition of indirect or higher-order effects from innovation, stakeholders who influence these effects, and the way that these effects can influence considerations of justice. In “Conclusion,” I argue that these shortcomings might be mitigated by a framework that adopts a justice-led approach to assessing innovation and the innovation ecosystem.
Distinctive Challenges to the Ethical Assessment of Innovation
The ethical assessment of innovation is complicated by at least three distinctive factors. The first has to do with freedom and decentralization. At the most general level, the decision to employ one’s intellect, time, and resources in the service of discovering new ends or new means to achieve established ends is morally permissible, if not morally meritorious. It is morally permissible because it falls under the broad liberty to pursue a life plan of one’s own. This liberty is itself grounded in two very basic values. The first is respect for individual autonomy: the ability of individuals to decide how they want to live and to make momentous decisions for themselves is valuable because these freedoms allow individuals to express their individuality, because they are central to a person’s status as an agent, and because they capture a person’s interest in exerting fundamental influence, if not control, over how their life goes. Second, the ability of persons to pursue a life plan of their own is fundamental to their well-being—to their ability to lead a life that advances their interests and in which they find satisfaction and fulfillment. Individuals who engage in the process of inquiry, experimentation, and discovery necessary for innovation often do so because such activities are personally rewarding and part of what they regard as a good life.
Beyond being merely permissible, the decision to employ one’s intellect, time, and resources in the service of discovering new ends or new means to achieve established ends is often morally meritorious. The reason is that innovation is rarely a purely personal act. As the philosopher John Stuart Mill noted, the knowledge of how to achieve new ends, or how to more effectively or efficiently achieve established ends, often propagates through society so that the benefits produced through innovation are enjoyed by many people. As a result, the process of innovation is often a socially valuable activity to be encouraged.
Academic freedom can thus be seen as a value that sits at the confluence of these two considerations: It protects the rights of individuals to pursue their interests and reflects the idea that, in the aggregate, the free pursuit of novel ideas is likely to contribute to social progress.2 A legitimate social role for government in an open society, a society in which individuals generally have the liberty to decide how they want to live and how they want to employ their time and energies, is to find ways to manage the risks, costs, and burdens of innovation so that they are fairly distributed and outweighed by the resulting social benefits.
In an open society, the process of innovation is often decentralized. This is not to say that in an open society there will be no efforts to centralize innovation—to facilitate state-sponsored initiatives in science or health—since open societies often do undertake such efforts. It is simply to say that government action will not be the only avenue for innovation and that even when governments are the sponsor of innovation, the process of innovation will often be carried out by entities outside of government. Individuals and associations such as corporations, philanthropies, nonprofits, and other entities can be sources for the discovery of new ends, better means, or for the innovative use of new technologies. Additionally, the process of innovation is not limited to the developers of new technology. Developers may produce a technology with a particular set of goals or uses in mind, but other individuals may use that technology as an occasion for further innovation. For example, the smartphone created a platform for a multitude of developers to create mobile applications, and end users are free to put these devices and their associated software to use in practices that might not have been foreseeable prior to the invention of this platform. If all else is equal, the freedom to experiment and to innovate this way is grounded in the same respect for individual freedom, autonomy, and well-being just discussed.
As a result, the parties involved in innovation can be quite diverse, ranging from individuals, small groups or clubs, to philanthropies, nonprofit organizations, private and public corporations, educational institutions, or entities within local, state, or national government. Some of these parties make decisions as individuals while others make decisions through a complex division of social labor, as when corporations or government bodies make decisions. Some of these parties are also deeply enmeshed in social roles, social structures, or a division of social labor that entails different sets of prior obligations or commitments that guide or constrain their behavior. Likewise, their activities fall into different sectors of social life, from private hobbies to consumer products, individual or public health, employment, banking and finance, criminal justice, security and defense, political participation, the provision of essential social services, and so on. Activities in these different spheres may differ in the ethical issues they raise since they affect different rights and interests of persons or implicate the functioning of social structures with different social functions and expectations.
A second factor complicating the ethical assessment of innovation stems from the degree of uncertainty surrounding this process and the difficulty of predicting how it will unfold and what its outcomes will be. Individuals or groups who set out to create or to discover something new often fail and it can be difficult to predict which of their efforts will succeed. Similarly, some efforts at innovation succeed, but not in ways that were originally intended.3 As a result, innovation is often fortuitous, with efforts to develop something in one area or domain or for one purpose resulting in the ability to achieve some different purpose in a different area or domain. Likewise, it can be difficult to envision how technologies developed to advance one set of goals or purposes might be used in unexpected or innovative ways.
The impacts of innovation are not simply a function of the relationship between a technology and an end user. The emergence of a new technology can alter the way that individuals or groups divide social labor, can shift the nature and function of social roles, and can lead to unforeseen uses that have further impacts on social relationships, opportunity, and the relative costs or ease of performing certain tasks, the relative value of those tasks in a reconfigured environment, and so on. Similarly, the sectors of social life are not static. Innovations in one sector can affect opportunity in others or shift the boundaries between sectors.4 This in turn can blur lines regarding which set of established norms should be used to evaluate, govern, or regulate a new technology and challenge the utility of the way those norms have been articulated and enforced.
As a result, the interests that are potentially affected by innovation can be extremely diverse. They can range from interests that are very specific to an individual, because they are tightly bound up with an idiosyncratic feature of their particular life plan, to interests that are widely shared because they are grounded in a human right. Uncertainty surrounding the process and outcomes of innovation entails that these impacts can also be difficult to foresee.
A third factor complicating the ethical assessment of innovation stems from the complexity of the relevant normative considerations. On a very broad level, open societies are characterized by what the philosopher John Rawls refers to as the fact of reasonable pluralism.5 The basic idea here is that individuals pursue a variety of life plans, often built around a diverse set of “thick” or “substantive” conceptions of the good life. By a conception of the good life, we simply mean a set of goals, values, and ideals that mark out some activities as valuable, worthwhile, or beneficial and others as harmful, ignoble, or lacking in worth.6 As an extremely simplified example, some people are deeply religious and will forsake wealth or popularity in service to their particular faith tradition; others may regard religion as silly superstition. Some people value music and spend long hours practicing an instrument, whereas others value the exploration of wide-open spaces and would find being cooped up in a room doing the same thing over and over the worst possible existence.
Because different individuals care about, and are committed to, different goals, activities, and ideals, their interests will be advanced or set back by different activities and outcomes. This is centrally relevant to innovation since some people may be deeply invested in, committed to, or may identify with activities or technologies that are displaced by innovation. Individuals who identify deeply with their role in the ice industry, the telegraph, the whale-oil industry, steam engines and the like will find these important interests set back by the development and diffusion of refrigeration, telephony, electricity, and internal combustion or electric engines. For others, the development of these new technologies may be an unalloyed benefit as it enables them to advance more of their interests more effectively and efficiently.
The fact of reasonable pluralism adds to complexities already mentioned surrounding the diversity of the parties involved in innovation, the sectors in which innovation can take place, and the extent to which disruption in these sectors affects interests that are peculiar to individuals or widely shared because they are grounded in some kind of basic human, social, ethical, or legal right. For example, in many countries, health care contexts are governed by a different (usually stricter) set of norms, rules, regulations, or laws compared to consumer products or other business contexts. Likewise, innovations that take place within, or have a significant impact on, relationships between doctors and patients, or lawyers and their clients, individuals and the police, may have different implications from innovations that involve producers and consumers of consumer products.
An acceptable framework for assessing the ethics of technological innovations and the innovation ecosystem should be broad enough to recognize the full range of stakeholders whose activities may be relevant to ethical appraisal, capable of recognizing how social structures mediate social interactions and alter the division of labor and responsibility, and capable of differentiating the disruptions of innovation that are morally permissible from those that rise to the level of an injustice and therefore call for social solutions.
Pragmatic Approaches and the Neglect of Justice
Efforts to develop ethical and policy frameworks to evaluate the process of innovation or the impact of novel technologies have been sensitive to the prospect that they must be capable of providing guidance to stakeholders in a diverse society in which there may be reasonable disagreement over a wide range of issues. This has motivated approaches that are pragmatic in the sense that they do not claim to be grounded in a single, “thick” or substantive, “comprehensive theory” of the good, the good life, the good society, or other set of ethical, social, or political ideals. Instead, proponents appeal to constructs that are supposed to be “thin” or “freestanding,” in the sense that they are supposed to have normative force without being tied to and dependent on any single comprehensive theory.
As examples, some have appealed to what they call “common morality,” understood as something like a set of pre-theoretical intuitions or commitments that are widely shared and regarded as so important that they need to be accommodated within (rather than overridden or eliminated by) thicker or more substantive conceptions of the good or the good life.7 A similar idea is that there are certain values that function as “midlevel principles,” in the sense that they group and explain a wide range of judgments about particular cases while being common elements within different substantive comprehensive theories.8 A related concept appeals to what Rawls calls an “overlapping consensus” of reasonable views.9 Here the idea is that there may be multiple competing comprehensive theories and that these theories may differ in the way that they justify various claims, but that they often overlap in their endorsement of particular norms or values and the judgments that flow from them.
Although different approaches frame the elements of these thin frameworks slightly differently, they commonly include the following.10 Nonmaleficence is generally understood as the duty to avoid inflicting harm or imposing burdens on others. Beneficence is the duty to aid, assist, improve, or otherwise benefit others where possible. Respect for autonomy is the duty to respect the interest that other persons who have the capacity to make their own decisions have in being able to make those decisions for themselves. Fairness is the duty to treat like cases alike, to apply the same rules or to follow the same process for all individuals, regardless of features or characteristics that are not directly related to some morally relevant aspect of the case, such as culpability, responsibility, merit, or desert. Finally, justice is widely recognized as an important element in many pragmatic approaches, but its content is often not clear.11 It is often regarded as a form of fairness, since it involves treating like cases alike and applying uniform procedures or rules, without a clear specification of the grounds for differentiating these two concepts. In many contexts of professional ethics, appeals to these thin or freestanding constructs are bolstered by appeals to role-related obligations of professionals. One of the oldest and most well-developed examples is medical ethics, where the asymmetry of knowledge between doctors and patients, the dependency of patients on doctors, and the profound importance of health to human agency and well-being are seen as grounding a special obligation on the part of doctors to avoid harming patients, to do their best to advance patient interests, and to place those interests above potentially competing interests.
Research ethics is the field that developed to regulate and evaluate the development of new drugs, devices, practices or procedures in medicine. In research ethics, the principles of nonmaleficence, beneficence, respect for autonomy, and justice are codified in The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, a report of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979).12 There are conflicting views about whether the same norms from clinical medicine should also regulate the activities of researchers.13 Nevertheless, research ethics stands out as a branch of practical ethics that is tightly connected to a clear set of regulatory requirements and a set of institutions and structures necessary to implement and to some degree even to enforce those requirements.
Although this pragmatic approach has many virtues, the neglect of justice produces significant shortcomings rooted in the extent to which the resulting frameworks are highly parochial.14 In particular, these approaches mark out as salient an incomplete set of actors and an incomplete set of impacts, and they draw on local norms, often grounded in role-related obligations, to resolve conflicts among their values or principles. For present purposes, the main point is not to evaluate the merits of the assessments that these frameworks facilitate but to emphasize the types of consideration that such approaches struggle to formulate and to address.15 To make these concerns concrete, I illustrate how they apply to research ethics and then consider how they generalize to the context of machine learning (ML) and artificial intelligence (AI).
Within research ethics, the dominant focus is on the relationship between two central parties: researchers and study participants. At the center of this focus is the review of individual study protocols by independent, local review committees, referred to in the United States as Institutional Review Boards (IRBs). The purpose of the IRB is to review individual study protocols, where study protocols basically define the terms on which researchers will interact with study participants. These interactions are then assessed according to the set of values described earlier. That is, to address beneficence, researchers are expected to explain the goals of the study, the methods that will be used to achieve those goals, the value of the information that is expected to result from the study (as a proxy for benefits to society), and any benefits that accrue directly to participants from participation. To address nonmaleficence, they must detail the risks to which participants will be exposed, the steps that will be taken to eliminate unnecessary risks, to mitigate any remaining risks, and to show how risks that cannot be eliminated are justified in light of the benefits expected from the research. To address respect for persons, the protocol must also contain an account of the information that will be provided to potential study participants so that they can make a free and informed decision about whether to participate or not to participate. In cases where this kind of informed consent is not possible, the protocol must contain a justification for a waiver of consent and specify the steps that will be taken to secure informed consent from a proxy (in cases where participants themselves lack decisional capacity) or to inform participants that they have been involved in research after the fact (e.g., in cases of research on interventions that are used in emergency circumstances). Finally, to address issues of fairness or justice, the protocol must contain a description of the process that will be used to recruit study participants and why this process is fair in the sense of not overburdening populations that are convenient, vulnerable, or easy to manipulate, while taking steps to include populations that are often underrepresented in research.
The system of requiring IRB approval of research before it can be conducted plays an important role in ensuring that abuses of the past are not repeated and helps to provide confidence on the part of study participants that by volunteering to participate in research, they are not submitting to treatment that is unnecessarily risky, abusive, or substantially different from what is described to them during the process of informed consent.16 Nevertheless, this way of framing the oversight of innovation in biomedicine focuses primarily on direct or first-order effects of the interactions of researchers and study participants. Consider now the broad range of issues that are not marked out as salient by this approach.
First, which research questions are asked and how research funds are allocated has a profound impact on which health needs are or are not the subject of investigation. This in turn has a direct impact on whether or not health systems can respond effectively, efficiently, or equitably to the diverse range of important medical needs that are represented in the populations they serve. The current capacity of health systems is the result of long histories of social inclusion and exclusion, including histories of oppression and racism but also histories of neglect and indifference. It is also the result of decisions about which health needs to regard as priorities, how to divide social labor for addressing these needs between public health, prevention, and medical care, how research funds should be allocated, and what the requirements are for bringing innovative products to market. Call this the problem of aligning the focus of innovation with the capabilities of social institutions.
Second, many of the decisions described in the previous paragraph are often not made by researchers but by governmental agencies, such as the National Institutes of Health (NIH) or the National Science Foundation (NSF), nongovernmental funding agencies, philanthropies, or private ventures, such as biotech startups or pharmaceutical companies. Politicians, government employees, and corporate executives are rarely the focus of ethical discussion in research ethics. Yet their decisions have a profound impact on whether a set of basic social institutions—systems that are responsible for individual and public health—have the knowledge and the means to respond safely, effectively, and equitably to the needs of the populations that depend on them and on whether communities perpetuate or rectify health disparities that arise, at least in part, from histories of exclusion, animus, neglect, or abuse. Call this the problem of full coverage for accountability.
Third, IRBs evaluating individual trials on a case-by-case basis might regard each study as morally permissible while the portfolio composed of those studies is morally problematic.17 For example, the resulting portfolio might be biased to favor the health needs of already advantaged groups, to favor health needs that are traditionally well-studied over health needs that have been neglected, or to advance the pecuniary interests of sponsors without addressing priority health needs of the community. The portfolio as a whole might also expose more participants to worse risks than alternative ways of generating the same information through the application of different study designs. Similarly, the evidence gaps in a portfolio may shift risks and burdens to parties who are already burdened with excessive costs. This problem is partly a consequence of the first two points—the framework in question focuses on an overly narrow set of issues and actors. But it is also a function of the case-by-case approach to evaluation and the absence of guidance for evaluating larger sets of studies, larger strategies of decision-making, and the patterns of outcomes or impacts they will produce over time, including the bandwidth of information that can be achieved by different ways of organizing a study portfolio and the evidence gaps that remain. Call this the problem of portfolio-level ethical issues.
Finally, each of the preceding points reflects a particular aspect of a more general fact—namely, that innovation takes place within a much larger social ecosystem, one aspect of which is a division of labor among multiple parties. One function of this division of labor is to shift or transfer the distribution of rights or responsibilities so that there is not a one-to-one correspondence between the actions of a party, the moral appraisal of the outcome that results from that action, and the responsibility to address that outcome. Researchers design and propose individual protocols. But which protocols are funded is a function of the decisions of funding agencies, which, in turn, are influenced by decisions of their leadership, donors, or politicians. A researcher who proposes a study to evaluate a drug in an adult population is performing an act that is morally permissible, if not morally meritorious. Whether that same intervention is ever studied in children is a function of decisions of a much larger set of stakeholders. But the knowledge gap created by a system that does not promote studies in children, pregnant women, or similar populations can create or perpetuate health disparities with detrimental consequences for the health and well-being of members of these groups. In such cases, although researchers are responsible for the protocols they carry out, responsibility for the ecosystem that shapes the protocols that researchers propose often falls to other parties (e.g., policymakers, funding agencies, drug companies).
More broadly, the advent of new technologies and shifts in their use can cause workers who produced, maintained, or used supplanted technologies to lose their jobs. Developing new technologies is a morally permissible undertaking, as are the general steps it takes to offer a product in a competitive marketplace. But losing a job is a serious setback to a person’s interests, and so constitutes a harm. Nevertheless, it would be unreasonable to regard this consequence as grounds for holding that the development of innovative technology is morally wrong, and developers of new technologies are not commonly held responsible for these harms or for providing redress to the workers displaced by them. Rather, responsibility for mitigating the negative consequences of innovation on employment and for facilitating the ability of workers to transition between jobs without serious hardship usually falls to governments. Call this the problem of distributed responsibility.
Interestingly, discussions surrounding ethical and responsible development and use of AI have been sensitive to the fact that these systems can be developed or deployed in ways that recapitulate prior unfairness or injustice. Primarily, this awareness arises because AI systems are trained on large datasets, and these datasets capture patterns in the underlying data-generating process. In a society with sexist discourse, corpora of text will contain sexist language. In a society with racist histories, the groups marginalized by such attitudes and practices will be underrepresented in databases generated from the provision of medical care or other social services and overrepresented in databases used to police or penalize. Likewise, databases will contain demeaning or racist statements about groups that are subject to social animus and reflect associations between certain traits or characteristics and attitudes of normality versus aberrance, beauty versus ugliness, and competence versus incompetence. Training AI systems on this data can perpetuate these judgments and attitudes.18 Recognizing these relationships and taking steps to effectively manage, mitigate, or eliminate these biases is extremely important.
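To make this mechanism concrete, the following is a minimal, illustrative sketch in Python of how such learned associations can be probed in word embeddings. The toy vectors are fabricated for the example; in practice they would be learned from a large corpus, and the probe is offered in the spirit of association tests from the bias-auditing literature, not as any standard implementation.

```python
import numpy as np

# Toy embedding table standing in for vectors learned from a real corpus.
# The numbers are fabricated for illustration only.
EMB = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.2, 0.9, 0.3]),
    "he":     np.array([1.0, 0.0, 0.2]),
    "she":    np.array([0.0, 1.0, 0.2]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.

    A positive score means the word sits closer to A than to B in the
    embedding space -- the kind of asymmetry that, at scale, encodes
    stereotypes present in the training text.
    """
    sim_a = np.mean([cosine(EMB[word], EMB[a]) for a in attr_a])
    sim_b = np.mean([cosine(EMB[word], EMB[b]) for b in attr_b])
    return sim_a - sim_b

for occupation in ("doctor", "nurse"):
    score = association(occupation, attr_a=["he"], attr_b=["she"])
    print(f"{occupation}: association with 'he' over 'she' = {score:+.3f}")
```

On these toy vectors, “doctor” scores positive and “nurse” negative, mirroring how a model trained on biased text can come to encode occupational stereotypes without any explicit instruction to do so.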
Because these biases can be inherited from training data, the responsibility for managing them is often seen as falling on the shoulders of developers. The problem is also framed in relatively narrow terms of discordance between training data and the ground truth in the relevant population. As a result of these assumptions, the vast majority of the burgeoning literature on fairness in AI focuses on statistical properties of model outputs, such as the relationship between false negatives and false positives, along with a guiding assumption that the relevant considerations of fairness or justice are local—they have to do with the rules that should govern the distribution of specific goods, opportunities, or services.19 Against this background, the central assumption is that developers should ensure that each person receives equal treatment relative to this set of standards for local justice.
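As an illustration of the kind of statistical audit this literature centers on, the sketch below (entirely synthetic data, with a deliberately skewed classifier) compares false positive and false negative rates across two groups; equalizing such error rates across groups is one common formalization of the local notion of fairness just described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic audit data: true labels, model predictions, and a group label.
# Everything here is fabricated for illustration.
n = 10_000
group = rng.choice(["A", "B"], size=n)
y_true = rng.integers(0, 2, size=n)

# A deliberately skewed classifier: its errors are four times more likely
# for members of group B than for members of group A.
flip = rng.random(n) < np.where(group == "B", 0.20, 0.05)
y_pred = np.where(flip, 1 - y_true, y_true)

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate)."""
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return fp / np.sum(y_true == 0), fn / np.sum(y_true == 1)

for g in ("A", "B"):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR = {fpr:.3f}  FNR = {fnr:.3f}")
```

An audit of this kind flags the disparity in error rates between groups A and B; the argument that follows is that passing such a local audit is necessary but not sufficient for justice.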
One problem with this focus on local justice derives from what I called the problem of portfolio-level ethical issues: Each algorithm, evaluated on a case-by-case basis solely for its conformity to considerations of local justice, might be morally acceptable, while the system of such algorithms could be deeply unjust. This is possible because society is not just a collection of interactions that operate independently of one another. It is, rather, a network of interrelated interactions, often mediated by social institutions that affect overlapping aspects of people’s opportunities, capabilities, rights, and interests. As a result, historical injustice in one domain, such as housing,20 finance,21 or policing,22 can have a profound, detrimental impact on the health of oppressed populations, the quality of education available to them, their ability to take advantage of educational opportunities, their career prospects, their ability to vote or hold political office, their freedom to move and associate, their financial prospects, and other important rights and interests. Prior injustice in one aspect of society creates disparities that reduce or impede the opportunities or capabilities of affected individuals or groups. When this is the case, norms of local justice in other parts of society can effectively ensure that disadvantaged populations remain at a disadvantage in transactions or relationships that take place in those domains.
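A toy simulation can make this dynamic vivid. In the sketch below (all numbers fabricated), each institution applies one and the same facially neutral rule to every individual, yet a resource gap inherited from a prior injustice persists and widens as individuals pass through successive domains:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups start with unequal resources because of a prior injustice.
n = 5_000
start = {"advantaged": 1.0, "disadvantaged": 0.6}

def neutral_stage(resources):
    """One institution: everyone's resources grow in proportion to what
    they bring in, plus identical noise -- the same rule for all."""
    return resources * 1.5 + rng.normal(0, 0.05, size=resources.shape)

for label, r0 in start.items():
    r = np.full(n, r0)
    for _ in range(4):  # e.g., education -> employment -> credit -> housing
        r = neutral_stage(r)
    print(f"{label}: mean resources after four 'fair' stages = {r.mean():.2f}")
```

Each stage is unobjectionable by the standards of local justice—no group-specific rule is ever applied—but the sequence as a whole preserves and amplifies the initial disparity.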
As a result, upholding norms of local justice in the operation of important social institutions (such as access to education, opportunities for employment, and so forth) can serve to reinforce unjust disparities and social inequalities that arise from prior histories of unfair treatment. The myopic focus of local justice is poorly suited to the task of recognizing injustice in the operation of larger social structures (the problem of structural injustice) and to framing strategies for enacting justice as rectification—the process of rectifying unjust practices and mitigating their effects on disadvantaged parties with the goal of restoring relationships of equal standing, equal regard, and fair treatment.
The dynamics outlined in this section illustrate important shortcomings in frameworks that focus primarily on developers or firms that develop particular technologies, the impact of particular technologies on users or the targets of the technology, and on issues of fairness that are framed as complying with the norms for local justice.
Toward a Justice-Led Approach to Shaping and Evaluating Innovation
The neglect of justice in practical ethics stems, at least in part, from the perception that every formulation of this value is necessarily tied to and embodies some thick, comprehensive conception of the good, the good life, or the good community and that, therefore, it is incapable of securing the kind of widespread commitment necessary to guide policy in an open, pluralistic society. This concern is not without merit since there certainly are competing and potentially conflicting comprehensive conceptions of justice to which some people are deeply committed. But this prospect should not be a deterrent to identifying elements of justice that can make salient the ways in which innovation and innovations can affect important social institutions, relationships, opportunities, or interests. Making these issues salient means not only drawing attention to them but highlighting reasons why they may need to be addressed and helping to identify which stakeholders might have responsibility for redress. Such a framework need not provide complete solutions to the problems it identifies. But we cannot solve problems we do not formulate, and being able to formulate the ways in which innovation and innovations might raise concerns of justice can facilitate concrete action, even if this must play out within some larger political process.
A justice-led approach would begin by identifying the space within which diverse members of an open society have a claim to equal standing and equal regard. The idea that justice is fundamentally concerned with giving equal treatment to equals, and treating like cases alike, requires a specification of the respect in which individuals are equal and in which they have a claim to like treatment. The fact of diversity entails that individuals in an open society embrace and follow different substantive, first-order conceptions of the good. But amid this diversity, every such individual should also recognize that they share a higher-order interest in having the real freedom to formulate, pursue, and revise a life plan based on some first-order conception of the good. This shared higher-order interest need not be grounded in or tied to any particular conception of the good. It can be grounded solely in the recognition that there is a more general respect in which each person in a diverse, open society is engaged in the same kind of fundamental project (formulating, pursuing, and revising a life plan that embodies some set of ideals and values) and that this project is of deep personal and social importance to each of those individuals.
This shared interest constitutes a compelling ground for claims of equal standing and equal regard. First, it captures a social perspective that is available to, and that has a compelling rational claim on, every individual in a diverse and open society. Different individuals embrace different values, goals, and ideals, but they can see one another as engaged, at a more general level, in a shared project that is of profound importance to each individual. Second, from this higher-order perspective there are no grounds on which to regard any individual, or set of individuals, as in any respect better than, superior to, or more deserving than any other. Individuals who pursue different life plans, and so hold different values, are nevertheless equal in the sense that they each want to be free to formulate, pursue, and revise some first-order life plan. Third, this social perspective is consistent with and can accommodate all first-order life plans that are reasonable in the following minimal respect: They are not predicated on the domination or subordination of some other group or class of persons. This notion of reasonableness is not grounded in a thick conception of reason but follows simply from the recognition that societies are constituted by distinct individuals, that every individual shares an interest in having real freedom to formulate, pursue, and revise a life plan of their own, and that at this more general level there are no grounds for regarding one individual as superior to or as having any right to dominion or priority over any other.
Next, a critical role for, and a criterion for the justification of, social institutions in a diverse, open society is to create and maintain conditions that secure and promote this shared interest. This includes institutions of government and security that affect the distribution of rights, privileges, and prerogatives as well as institutions that influence the distribution of social opportunity and material resources such as the institutions of individual and public health, provisions for a social safety net, and institutions that govern employment and market-based transactions and relationships. Institutions that secure and promote this shared interest can be seen as supporting the ability of these distinct individuals to function as free and equal persons.23
This focus provides normative guidance that can help stakeholders determine when the positive or negative impacts of the ecosystem of innovation or of specific innovative technologies are disruptive but not unjust and when these disruptions rise to the level of an injustice. First, because a key function of basic social institutions is to uphold conditions that respect the moral equality of persons by securing their shared higher-order interest in having the real freedom to formulate, pursue, and revise a reasonable life plan, these social institutions should be called into action when individuals face widespread threats to this shared higher-order interest. The diffusion of innovative technologies can alter social conditions that affect this higher-order interest. In such cases, just social institutions should intervene to promote equal treatment and equal regard.
As an example, successful innovation often creates social circumstances in which some individuals can advance their personal ends more effectively or efficiently than others, and this necessarily creates inequalities. When these advantages or disadvantages are limited to advancing or detracting from a person’s specific first-order life plan, then those advantages or disadvantages qualify as benefits or harms and fall under the rubric of beneficence.24 If this is all that is at stake, then these benefits or harms do not rise to the level of an injustice. The reason is that justice is not concerned with how well individuals are able to achieve the specific first-order life plan they set out for themselves—this is the domain of beneficence. Rather, justice concerns the higher-order interest of individuals in having real freedom to formulate, pursue, and revise some reasonable life plan.
Treating such inequalities as unjust per se would require that we refrain from producing innovative technologies unless they can advance the first-order life plans of all individuals equally. But this is likely an impossible requirement, since the diversity of life plans frequently involves zero-sum relationships over rival goods—goods that cannot be enjoyed by multiple agents simultaneously. These include positional goods (e.g., being the best at something) and other scarce resources. Promoting equality by preventing advances unless those advances benefit everyone to the same degree relative to their distinctive life plan would require that we secure equality by “leveling down,” which is to say, it would make some people worse off, without the prospect of making anyone better off, simply to ensure their equality to others. In other cases, innovation creates inequalities that directly influence the higher-order interests of individuals because the knowledge or the means that are produced cannot benefit all persons equally. For example, advances in cancer research might extend the lives of or restore physical functioning to patients with one type of tumor but not to all cancer patients. Similarly, there are cases where researchers first seek to establish that some technology works in what is regarded to be a comparatively easy test case before trying to extend its use to a wider range of applications. For example, hemophilia might be an excellent model system for gene-based therapeutics if it is believed to represent a comparatively simple application of a new technology. Initial successes in treating this condition would generate inequalities, since gene-based treatments for other conditions would not be available in the same time frame.
Here again, promoting equality by prohibiting research that would save some lives just because we would not know how to save all others would be self-defeating. If all else is equal, stepwise innovation in which developers seek to unlock the benefits of an intervention in one case and then to extend it to others is morally permissible. The problem, however, is that frequently, all else is not equal.
In particular, this is another portfolio-level ethical issue. Problems arise when the portfolio of such decisions winds up tracking, and therefore recapitulating, histories of social exclusion and marginalization—as when research systematically excludes women or members of marginalized groups or when research does not focus on health needs that are distinctive of such groups. Problems also arise when research is undertaken on the assumption that successful efforts will be followed by further research, only to see that research never carried out. This happens in drug development, for example, when new interventions are tested in adults first but subsequent trials in pediatric populations are not carried out.
Similarly, the proliferation of machine learning and artificial intelligence raises concerns about justice because, as we noted earlier, data on which AI systems are built often reflect patterns of social interaction in which specific groups have been subject to unfair treatment. The widespread use of this data creates models that can recapitulate these patterns of social inequality. The point here is that even when individual AI systems are developed to function in ways that are primarily valuable to individuals relative to their individual life plans, concerns of justice arise because social bias is widespread and therefore likely to affect a broad range of datasets used to develop such systems and because the impacts of such problems are connected to histories of exclusion and subordination. The widespread recapitulation or exacerbation of these histories of exclusion and subordination creates an issue of justice because this adversely affects the higher-order interest of affected groups in being treated as free and equal members of society. There is, thus, a strong social interest in eliminating these disparities that applies across the full range of areas where these applications might be deployed. It also generalizes beyond ML and AI. Disparities in technologies that adversely affect individuals from groups that have historically been subject to neglect, animus, exclusion, domination, or subordination threaten to recapitulate or exacerbate relationships that are antithetical to justice. Widespread acceptance of these disparities signals that some individuals have lower standing or status than others—a message that is also antithetical to justice. When social institutions act to reduce these disparities, it advances an important cause of justice—ensuring that all people are treated as free and equal.
Second, the critical role of basic social institutions in securing this higher-order interest of individuals combined with the diversity of needs and of circumstances entails a legitimate social interest in promoting innovation that can increase their effectiveness, efficiency, and equity. This provides normative guidance relevant to the problem of aligning the focus of innovation with the capabilities of social institutions. The importance of the ability of these institutions to function equitably follows from the fact that, relative to this shared higher-order interest, there are no grounds for regarding the needs of any individual or group as somehow superior to or more important than the needs or interests of any other individual or group. As such, these institutions should strive to function with equal efficacy for all members of the population they serve. However, histories of racism, ableism, sexism, and other forms of social animus, marginalization, or exclusion have created unfair social disparities as well as deficiencies in the functioning of these institutions that exacerbate these disparities. There is thus a strong social and moral imperative to rectify these disparities and to promote the development of technologies that better enable important social institutions to promote the real freedom of all individuals and groups.
The social imperative to ensure that these institutions function effectively and efficiently is grounded in scarcity—the shortfall between available resources and the needs of individuals or social groups—and in the critical role that these institutions play in promoting the real freedom and equality of the individuals whose lives they affect. It follows from these first two points that there is a strong social imperative to ensure that the dissemination and incorporation of new technologies does not undermine or detract from the ability of these social systems to function effectively, efficiently, and equitably.
Here again, ML and AI have been used in applications that have negatively affected the capacity of basic social institutions to function. In this case, the resulting disparities affect the functioning of social institutions that have an immediate and direct impact on the ability of individuals to function as free and equal persons. Disparities in algorithms used in policing, sentencing, bail, or parole decisions are unjust because of the strong claim of each individual to equal standing and equal regard in this space. The same is true for disparities from AI systems that make decisions regarding employment, lending, banking, and the provision of social services. Such disparities are unjust even if they are not connected to prior histories of exclusion, indifference, or subjugation because of the important role that the decisions and conduct of these social institutions play in securing this higher-order interest of persons. But such disparities can be, and often are, doubly concerning precisely because they are connected to, and do recapitulate or compound, prior histories of subjugation.
Third, this approach provides a framework for addressing the problem of full coverage for accountability because it situates the activity of innovation in a larger web or network of social relationships among a broader set of individuals and groups, highlighting the role of different stakeholders in shaping the process of innovation and allowing for a more explicit consideration of the appropriate division of social and moral responsibility between these parties. This includes the relationship between developers, funders, regulators, policymakers, users, and various social institutions that are required to serve specific social functions grounded in considerations of justice. All individuals in a society rely on these institutions to support and protect their higher-order interest in being able to formulate, pursue, and revise a life plan; these social institutions are sometimes called into action to support innovation (as when government agencies sponsor and support innovation directly, or when they carry out regulatory or legal functions that shape the incentives of actors in this ecosystem); and these institutions are affected by the process and outcomes of innovation, as when their capacity to perform their functions depends on the capabilities of the technologies they deploy for this purpose.
Responsibility for identifying shortfalls in the capacity of important social institutions to secure and promote this higher-order interest for all community members falls to government leaders and to leaders in the relevant social institutions, in consultation with community members. This includes identifying threats to this higher-order interest—threats from sickness, injury, disease, environmental degradation, and other hazards; threats to access to employment and to social, economic, and political opportunity, including limitations imposed by the built environment; threats from social animus, exclusion, or indifference; and threats from the way that novel technologies might unduly consolidate social or political power. These stakeholders also bear primary responsibility for identifying broad priorities for investing in innovation and development with the goal of reducing or eliminating these shortfalls and addressing these threats. This includes identifying and rectifying social inequalities that undermine the freedom or equality of individuals or groups, including inequalities that stem from prior histories of animus, indifference, neglect, or other forms of domination or marginalization.
As an illustration of how this framework makes more tractable the problem of distributed responsibility, this framework recognizes that individual innovators have broad liberty to pursue the ideas, programs, and projects that interest them. This follows from respect for the freedom of individuals and groups to pursue a reasonable first-order conception of the good and from the difficulty of identifying which avenues of innovation will succeed and how they might be taken up and adopted in innovative ways by others. To align this liberty with considerations of justice, policymakers, regulators, funders, and other leaders have a responsibility to create incentives that encourage individuals to explore avenues for innovation that connect to and address knowledge or capability gaps within these priority areas. This responsibility is not widely recognized, and it is a virtue of the current approach that it would make salient the responsibility of this wider range of actors and facilitate collaborative efforts to advance these important social goals.
Similarly, when it comes to identifying and averting portfolio-level ethical issues, the present framework calls on social and political leaders, in conjunction with community members and the heads of entities that fund research or carry out innovation, to identify when patterns of local decision-making can recapitulate, exacerbate, or create patterns of social exclusion or marginalization, and then to intervene, whether through rules or incentives, to rectify such patterning. This can involve ensuring that the novel technologies that address distinctive needs of marginalized or minoritized groups are equitably funded, ensuring that novel technologies are developed in populations that include such groups, ensuring that novel technologies are extended to use cases that affect such groups, and ensuring that sequential strategies for testing or development are funded and carried to fruition.
These examples represent problems that range beyond the purview of individual researchers. They arise because of the potential for repeated decisions made solely on myopic criteria to recapitulate or exacerbate larger patterns of inequality, and they call for attention from a wider range of stakeholders including policymakers, research funders, and regulators. It is a virtue of the present framework that it can make salient such second-order issues and facilitate the identification of parties in the innovation ecosystem who should bear responsibility for addressing these issues. This, in turn, can help lawmakers, policymakers, civic and corporate leaders, activists, and other community members craft rules, policies, norms, and incentives that discourage activities that threaten to undermine the equal standing of individuals and promote activities that enhance the ability of social institutions to secure and to promote this shared higher-order interest.
Finally, the approach outlined here provides high-level benchmarks that stakeholders might use to assess the relative health of the innovation ecosystem and the range of norms, rules, practices, regulations, and laws that constitute its governance structure. In particular, this ecosystem is healthier if it has a governance structure that addresses the set of problems outlined here. In other words, innovation ecosystems are healthier to the extent that their governance structure identifies the full range of stakeholders with responsibilities in this area to ensure full coverage for accountability for the purpose of protecting the higher-order interests of persons and aligning the focus of innovation with the capabilities of social institutions. Similarly, the various incentives that influence the conduct of these agents should ensure that responsibilities are distributed to relevant parties and then enforced in a coherent manner, so that portfolio-level ethical issues can be identified and addressed.
Conclusion
This chapter outlines a justice-led approach to evaluating the innovation ecosystem and the innovations that it produces. The proposed framework is pragmatic in the sense that it is grounded in moral claims that should have wide purchase on diverse members of an open society without requiring special commitment to some particular conception of the good, the good life, or the good community. It articulates a respect in which members of a diverse, open society can see one another as free and equal, and it recognizes the special role that social institutions play in upholding this conception of freedom and equality. This position is consistent with broad respect for individual and academic freedom while also outlining mechanisms that can be used to ensure that the division of social labor serves to expand the capacity of important social institutions to protect and advance the shared interest of those who depend on them. This breadth of scope creates a framework in which the activities and responsibilities of a broader range of agents can be articulated and evaluated. It is also not limited to the direct or first-order effects of specific agents on others. It can recognize impacts that arise from the cumulative or synergistic interactions of portfolios of decisions.
Clearly, this sketch requires additional work to flesh out key details and improve its relevance to policy. However, the framework outlined here is likely to be particularly sensitive to growing concerns about the impact of AI systems on democratic accountability, public discourse, the integrity of elections, and the role of science and evidence in democratic governance. The reason is that the role of government and the critical social institutions of government, within this framework, is to secure the higher-order interest that all persons share in having the real freedom to formulate, pursue, and revise a life plan of their own. When technologies proliferate in ways that threaten the ability of citizens to hold political leaders accountable, to distinguish truth from fabrication, to ensure the integrity of elections, and to participate in democratic deliberation, these impacts implicate issues of justice. Moreover, these impacts need not be tied to individual developers and their individual technologies. They can arise from the synergistic interactions of a multitude of novel technologies and from the conduct of stakeholders including corporate executives, lawmakers, politicians, and ordinary people who use and abuse technology. This is an important area in which delineating conduct that is disruptive but morally permissible from conduct that is disruptive and morally problematic is particularly pressing. It is unlikely that such distinctions can be fruitfully drawn and defended without appeal to considerations of justice.
Notes
1. I use the term disruptive in a colloquial sense meaning to bring about change. This differs from the more restricted sense in Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (Cambridge, MA: Harvard Business Review Press, 2013). On different uses of this term, see Steven Si and Hui Chen, “A Literature Review of Disruptive Innovation: What It Is, How It Works and Where It Goes,” Journal of Engineering and Technology Management 56 (2020): 101568; and Jeroen Hopster, “What Are Socially Disruptive Technologies?” Technology in Society 67 (2021): 101750.
2. Vannevar Bush, Science, the Endless Frontier (Princeton, NJ: Princeton University Press, 2021).
3. Royston M. Roberts, Serendipity: Accidental Discoveries in Science (Hoboken, NJ: Wiley, 1991).
4. Debra J. H. Mathews, Rachel Fabi, and Anaeze C. Offodile II, “Imagining Governance for Emerging Technologies,” Issues in Science and Technology 38, no. 3 (2022): 40–46.
5. John Rawls, Political Liberalism (New York: Columbia University Press, 1993); John Rawls, Justice as Fairness: A Restatement (Cambridge, MA: Harvard University Press, 2001).
6. Alex John London, For the Common Good: Philosophical Foundations of Research Ethics (Oxford, UK: Oxford University Press, 2021).
7. Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 5th ed. (Oxford, UK: Oxford University Press, 2001); Bernard Gert, Common Morality: Deciding What to Do (Oxford, UK: Oxford University Press, 2004); Bernard Gert, Charles M. Culver, and K. Danner Clouser, Bioethics: A Return to Fundamentals (Oxford, UK: Oxford University Press, 2006).
8. Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 8th ed. (Oxford, UK: Oxford University Press, 2019).
9. Rawls, Political Liberalism; Rawls, Justice as Fairness.
10. Luciano Floridi et al., “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations,” Minds and Machines 28 (November 26, 2018): 689–707; Anna Jobin, Marcello Ienca, and Effy Vayena, “The Global Landscape of AI Ethics Guidelines,” Nature Machine Intelligence 1 (September 2, 2019): 389–399; Onur Bakiner, “What Do Academics Say About Artificial Intelligence Ethics? An Overview of the Scholarship,” AI and Ethics (2022): 1–13.
11. Alex John London, “The Independence of Practical Ethics,” Theoretical Medicine and Bioethics 22 (2001): 87–105.
12. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, Department of Health, Education, and Welfare (April 18, 1979), https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html.
13. Paul B. Miller and Charles Weijer, “Rehabilitating Equipoise,” Kennedy Institute of Ethics Journal 13, no. 2 (June 2003): 93–118; Franklin G. Miller and Howard Brody, “A Critique of Clinical Equipoise: Therapeutic Misconception in the Ethics of Clinical Trials,” Hastings Center Report 33, no. 3 (2003): 19–28; Paul B. Miller and Charles Weijer, “Fiduciary Obligation in Clinical Research,” Journal of Law, Medicine, and Ethics 34, no. 2 (2006): 424–440; Paul B. Miller and Charles Weijer, “Trust Based Obligations of the State and Physician-Researchers to Patient-Subjects,” Journal of Medical Ethics 32, no. 9 (2006): 542–547; Franklin G. Miller and Howard Brody, “Clinical Equipoise and the Incoherence of Research Ethics,” Journal of Medicine and Philosophy 32, no. 2 (2007): 151–165; see also London, For the Common Good.
14. London, For the Common Good.
15. In parallel work we have explored how the value of beneficence can be understood in a narrow sense that relates to the ability of a technology to expand the ability of users to do something they could not do before, and in a meaningful sense, in which a technology expands the ability of the user to function in ways that they value. A. J. London and H. Heidari, “Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures Through AI Systems,” Minds and Machines (forthcoming).
16. A. J. London, “A Non-Paternalistic Model of Research Ethics and Oversight: Assessing the Benefits of Prospective Review,” Journal of Law, Medicine and Ethics 40, no. 4 (2012): 930–944.
17. Alex John London and Jonathan Kimmelman, “Clinical Trial Portfolios: A Critical Oversight in Human Research Ethics, Drug Regulation, and Policy,” Hastings Center Report 49, no. 4 (2019): 31–41.
18. Jesutofunmi A. Omiye et al., “Large Language Models Propagate Race-Based Medicine,” NPJ Digital Medicine 6 (October 20, 2023): 195, https://www.nature.com/articles/s41746-023-00939-z.
19. Jon Elster, Local Justice: How Institutions Allocate Scarce Goods and Necessary Burdens (New York: Russell Sage Foundation, 1992); H. Peyton Young, Equity: In Theory and Practice (Princeton, NJ: Princeton University Press, 1995).
20. Richard Rothstein, The Color of Law (New York: Liveright, 2017).
21. Mehrsa Baradaran, The Color of Money (Cambridge, MA: Harvard University Press, 2017).
22. Michelle Alexander, The New Jim Crow (New York: New Press, 2012).
23. London, For the Common Good.
24. London and Heidari, “Beneficent Intelligence.”