Realizing the Promise and Minimizing the Perils of AI for Science and the Scientific Community

CHAPTER 7 Bringing Power In: Rethinking Equity Solutions for AI

Shobita Parthasarathy and Jared Katzman

There is great hope that artificial intelligence (AI) and machine learning (ML) can benefit society, from real-time translation to more accurate cancer screening. But there are also growing concerns that these technologies are exacerbating social inequity and injustice. In recent years, media reports have revealed the serious negative consequences of biases in AI datasets, including false arrests triggered by facial recognition technology.1 Meanwhile, despite the hope that AI will help criminal court judges reduce bias, its use seems to amplify structural inequalities in the justice system.2 The workers who train algorithms to ameliorate bias receive little pay and labor under extremely stressful conditions.3 At the same time, AI tools meant to benefit marginalized communities are often inaccessible to them.4

In response to these emerging equity and justice concerns, policymakers, academics, and the technical community have proposed solutions. The Blueprint for an AI Bill of Rights developed by the Biden administration recommends identifying statistical biases in datasets; designing systems to be more transparent and explainable in their decision-making; incorporating proactive equity assessments into system design, including input from diverse viewpoints and identities; ensuring accessibility for people with disabilities; conducting predeployment and ongoing disparity testing and mitigation; and establishing clear oversight.5 Scholars have suggested new evaluation capabilities for existing government agencies and even the creation of new regulatory structures.6 In parallel, the technology industry has focused on educating programmers about the impact of social biases on AI software and creating a market for fairness monitoring tools and services.7

These initiatives will surely address some harms. However, most do not address the social inequalities that shape the landscape of technology development, use, and governance, including the concentration of economic and political power in a handful of technology companies and the systematic devaluation of lay contributions and perspectives, especially from those who have been historically marginalized. As a result, the proposed solutions are likely to fall short. To establish a better AI innovation ecosystem and more equitable and just technologies, we must develop solutions that account for these historical inequalities and power imbalances, in addition to addressing current concerns like bias and discrimination in model predictions.

Current Approaches to AI Inequities

Efforts to make AI more equitable rest on the growing realization that many communities, particularly those who have been historically marginalized, have not benefited—and some have been harmed—by technology.8 By equity, we mean “the overarching driver of a process for identifying and ameliorating structural and social conditions that disadvantage individuals and groups by unfairly limiting their freedom, their opportunity, or the conditions needed for well being.”9 In some cases, the problem is simply one of access. Prospective users may not be able to afford a technology, or it may be otherwise unavailable to them.10 In others, the problem is one of design: developers build a technology without understanding the needs and characteristics of a user community, and sometimes even with biased assumptions.11 This can ultimately have deleterious effects on communities that are already disadvantaged, especially because these assumptions and values are hidden in technical specifications. Finally, there are inequities even earlier in the process, in who gets to set priorities and how the data that inform development are gathered and categorized.

AI equity solutions fit into four categories: technical, organizational, legal/policy, and enhancing civic capacity. Technical solutions understand equity in terms of accuracy and focus on reducing disparities in model performance (often referred to as “algorithmic bias”).12 Technologists recognize that when an AI system does not perform equally for all groups of people, it can produce social exclusion, as when a Black smartphone user struggles to unlock their device using facial recognition technology,13 or when they cannot wash their hands in a public bathroom because the sensor does not recognize their skin.14 To address such problems, developers try to improve the data, software, and other technical dimensions of a system’s design. They may refine datasets to better represent diverse social groups, optimize AI algorithms to mitigate social biases, and implement more stringent quality assurance through additional testing. Inequalities become technical errors. For instance, when Joy Buolamwini and colleagues discovered that major facial recognition platforms had difficulties identifying darker-skinned individuals and women,15 many companies responded by collecting photos featuring more dark-skinned and female faces and retraining algorithms to take this additional data into account.16 In some cases, researchers may ask marginalized groups to provide input during the development process but give them no meaningful power to shape priorities or influence results.17 This alienates these communities further.
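To make the notion of a performance disparity concrete, here is a minimal sketch, ours rather than drawn from any audit discussed in this chapter, of the group-wise comparison that underlies such findings: compute a model’s false negative rate separately for each demographic group in a labeled evaluation set and compare.

```python
# A minimal sketch (ours, not from any cited audit) of the group-wise error
# comparison behind "algorithmic bias" findings: compute a model's false
# negative rate separately for each demographic group and compare.
from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) with binary labels."""
    positives = defaultdict(int)  # actual positives seen per group
    misses = defaultdict(int)     # actual positives the model failed to flag, per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Hypothetical evaluation data: (group, ground truth, model prediction)
eval_records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

print(false_negative_rate_by_group(eval_records))
# {'group_a': 0.333..., 'group_b': 0.666...} -> a disparity worth investigating
```

Real audits use richer metrics and intersectional groupings, but the basic move is the same: disaggregate performance by group and compare.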

Such solutions can have only limited impact because the datasets themselves are assembled in a structurally biased context. Algorithms designed to predict crime, for example, use historical data that reflect discriminatory policing practices.18 As a result, they tend to overpredict crime in communities of color that are already over-surveilled. And characterizing data as the solution produces perverse incentives that can exacerbate disproportionate burdens. A contractor working for Google tried to fix inaccuracies in facial recognition technology by paying unhoused Black men in Atlanta a few dollars each to play with a phone.19 The phone took pictures of the men, without their informed consent, to improve the datasets. Finally, even when the AI is technically accurate, it can produce unjust outcomes. Cities have used facial recognition cameras to curb the freedoms of residents in public housing, many of whom are Black, by aggressively surveilling and policing them.20

Organizational solutions, deployed across the development process, view people, practices, and programs as the route to achieving equity. Often used by the tech industry, they include initiatives to make the workforce more diverse and inclusive; “responsible AI” offices that identify ethical principles to guide research and development, train technologists in the ethical and social dimensions of their work, and support humanistic and social scientific research related to AI; and projects that bring user needs explicitly into product design. Microsoft, for example, has issued multiple iterations of its “Responsible AI Standard” to guide technology development across its organization.21 The Responsible Computing Challenge funded by the Mozilla Foundation trains the next generation of technologists to think holistically about technologies, considering them in social and political contexts.22

However, these solutions are often seen as auxiliary to the main project of technology development and are therefore dismissed. Ethics teams inside companies tend to be the first to be fired when the industry contracts economically.23 Even when the industry is stable, these teams lack resources, authority, and ultimately impact.24 Consider the now-famous case of Google firing Timnit Gebru and Margaret Mitchell, who led the company’s ethical AI group. Google tried to suppress a paper they coauthored, which discussed the environmental harms and the racial, gender, and other biases triggered by large language models.25 When the pair refused to withdraw it, the company fired them.26 Similarly, despite evidence that the rise of AI will place enormous strain on electricity and water supplies, disproportionately burdening marginalized communities,27 technologists tend to exclude such factors from their definition of responsible AI and ethical practices.28 After all, considering environmental impacts would raise the question of whether AI can ever be responsible. Ultimately, because the public is completely dependent on how technologists define responsible, ethical, and equitable AI, these terms can become little more than buzzwords.

The success of organizational solutions also depends on institutional culture. Even as they have increased their efforts to diversify their ranks, tech companies have struggled to retain employees of color due to alienating work environments. One Black Facebook recruiter has recounted insensitive comments and stereotyping in discussions about hiring,29 while a Black Google recruiter reported inadequate pay and promotion opportunities for people of color.30 If the dominant communities in an organization are not reflective or open to change, it is impossible for a handful of employees or new initiatives to produce equitable or just outcomes. Organizational culture also matters in AI use: Although the FBI has a training program for law enforcement officials who use facial recognition technologies, only 5 percent have taken it.31

Due to growing concern that tech companies cannot be trusted to police themselves, scholars, civil society groups, and even some technologists have turned to governments for help. Legal and policy solutions include temporary moratoria and bans on specific applications, requirements for companies to disclose AI use in “high-risk” decisions, and new government capabilities to assess the effectiveness of AI products. New York City passed a first-of-its-kind law in 2021 to regulate AI use in hiring practices. It requires companies to work with independent auditors to evaluate, on an annual basis, whether their tools exhibit bias in hiring decisions based on race or gender. Job candidates also have the right to request data collected about them. The European Union’s pending AI Act will establish a regulatory approval process for technologies it deems high risk, including those used for migration, asylum, and border control management, as well as biometric identification.
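For illustration, here is a minimal sketch of the kind of disparate-impact calculation such an audit typically involves; the group names, numbers, and formula below are ours, not the law’s prescribed methodology. It compares each group’s selection rate to that of the most-selected group, the “impact ratio” that auditors commonly examine.

```python
# A minimal sketch (ours, not the law's prescribed methodology) of the kind of
# disparate-impact calculation a hiring-tool bias audit typically involves:
# compare each group's selection rate to that of the most-selected group.

def impact_ratios(outcomes):
    """outcomes: dict mapping group -> (number selected, number of applicants)."""
    rates = {g: selected / applicants for g, (selected, applicants) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes from an automated resume-ranking tool
outcomes = {
    "group_a": (60, 200),  # 30 percent advanced to interview
    "group_b": (30, 200),  # 15 percent advanced to interview
}

print(impact_ratios(outcomes))  # {'group_a': 1.0, 'group_b': 0.5}
```

Ratios well below 1.0 (regulators often reference the long-standing “four-fifths” guideline, which treats ratios under 0.8 as evidence of adverse impact) would flag a disparity for the audit report.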

Such laws represent a significant step toward addressing AI inequities by reducing inaccurate uses, preventing disparate impacts, and protecting civil liberties. However, they are often vague and difficult to enforce. The New York law does not adequately define its auditing requirement.32 As a result, AI companies will have financial incentive to seek lenient bias assessments, and auditors, facing market pressures, will have little leverage to produce more critical and thorough reports.33

In addition, governments often justify moratoria and bans on the basis of perceived technical inaccuracies. Cities have banned facial recognition technology because of its poor performance among marginalized communities.34 But this does not grapple with the civil rights and liberties questions. For example, is it appropriate to allow facial recognition technology in neighborhoods that have suffered for generations due to excessive surveillance?

Last, there are emerging efforts to enhance civic capabilities, empowering the public to participate in discussions and even decisions regarding AI. These include new social movements and new institutions to gauge, engage, and explicitly serve public priorities. The Ford Foundation’s Technology and Society Program tries to encourage a vibrant civil society surrounding digital technology, funding the Center for Democracy and Technology, the Leadership Conference on Civil and Human Rights, the People’s Tech Project, and the Distributed AI Research Institute. The United Kingdom’s Ada Lovelace Institute regularly conducts public dialogues on topics that include the responsible use of location data, trustworthiness of data-driven public health responses, and the use of biometric identification technologies including facial recognition.35 The US National Science Foundation is piloting the National Artificial Intelligence Research Resource to broaden access to AI development resources.36

These efforts have enhanced public discussion and produced important critiques. But the organizations behind them receive very little funding compared to the investment in technology design,37 and as a result they are often spread thin, chasing after individual technologies rather than first imagining the society they want and then considering the role technology should play in it. Many also focus on representing the public as a whole, which means that they may be less adept at identifying issues of specific concern to marginalized communities. Finally, because these efforts are almost always institutionally separate from technology development and policy making, their impact is limited. Consider, for example, the ongoing discussion about the potential for existential risk from AI, initiated by a letter signed by the CEOs of major technology companies in March 2023. This frustrated civic tech leaders, who for years have called attention to the harms and inequities, including algorithmic bias, already produced by AI.38 But in comparison to the worries about existential risk, their concerns have had little impact. In fact, some civic tech leaders signed the existential risk letter so that they could bring some attention to their concerns, not because they worried that AI would kill us all.39

Structural Inequity in Science and Technology

The aforementioned efforts to address equity are serious and well-meaning, but by and large they do not take into account the historical power imbalances that mark the AI ecosystem. As a result, they are likely to have limited impact. In particular, we point to two things: the economic, political, and epistemological influence of technologists and the tech industry; and the systematic discrimination some communities have faced in science and technology for generations. Both, we argue, shape what counts as an equity problem, what counts as a solution, who participates, and how they do so.

Technologists have long had an authoritative role in Western societies. In the early days of the United States, the founders saw the development of new inventions as key to the country’s prosperity.40 This enthusiasm only accelerated in the twentieth century, after the Manhattan Project demonstrated that scientists and engineers could produce technologies for the national interest.41 Western governments began to increase their investments in research and development and to view innovation as the route not only to national security and economic competitiveness but also to social progress.42 As the technology industry began to grow, then, it was naturally the object of great pride and fascination. Microsoft, Google, Apple, Amazon, and Facebook were not only creating products that the public seemed to want; they were also generating significant economic value. Excited by their potential, governments were reluctant to hear concerns or regulate them.43 They have since become so dominant that they are known as “platforms,” controlling multiple markets and the behavior of other companies.

This has created concentrated economic power: in 2023, eight of the ten richest people in the world made their money in tech, and the six big tech companies accounted for nearly all of the S&P 500’s return.44 Ultimately, this produces political power (these companies not only spend significant sums lobbying policymakers but also cultivate a positive public image) and shapes the research ecosystem. They fund far more AI research than governments or philanthropic foundations, so the resulting technologies are likely to reflect their needs and priorities. Even among “ethical tech” researchers, most (58 percent) receive their funding from industry,45 which likely limits the strength of their critique.

Finally, AI researchers, whether in industry or academia, are demographically homogeneous. In the United States, most of the people with an undergraduate degree in computer science are male and either white or Asian.46 Likely as a result, the industry is less diverse than the private sector as a whole.47 The demographic homogeneity also creates an alienating masculine culture in innovation spaces, which then reproduces the problem.48 Employees from disadvantaged communities of color are more likely to hold catering or custodial jobs than technical roles.49 The global landscape echoes these inequalities, with workers in the Global South receiving shockingly low wages to perform high-stress jobs like content moderation and image tagging.50

Structural inequality has even deeper roots in the history of science and technology. Until relatively recently, the power of innovators with formal technical expertise, whose work feeds the marketplace, erased the contributions of indigenous knowledge systems,51 not to mention the experiential knowledge of citizens whose priorities may differ from those of scientists and engineers. A long legacy of mistreating research participants—including the infamous Tuskegee syphilis experiment, the use of Henrietta Lacks’s cells across biomedicine, and the unhoused Black men who did not consent to improving Google’s facial recognition technology—has led marginalized communities to be skeptical of technological innovation even when it is designed to benefit them. Significant portions of the US Black community, for example, have refused the COVID-19 vaccine because they are suspicious of the intense public health attention directed at them. Marginalized communities have also experienced devastating neglect, which can be a matter of life and death. For years, scientists have known that the pulse oximeter, widely used to measure blood oxygen during the COVID-19 pandemic, is less accurate for people with darker skin; as a result, those who need supplemental oxygen may not receive it.52 It was only after 2020, when an anthropologist sounded the alarm in the wake of George Floyd’s murder and physicians confirmed the problem, that health ministries around the world took notice.53 While technical communities and policymakers may treat such problems as minor, isolated errors, marginalized communities see them as examples of structural inequality, justifying their frustration with science and technology.

Ultimately, the concentration of power in the tech industry, combined with structural inequality, makes it very difficult to produce more equitable and just AI. A handful of tech leaders shape the definition of AI problems and promote a simplistic understanding of the relationships between technology and society, including assumptions that technologies usually have beneficial impacts and can easily fix societal ills. These are understandable assumptions for those whose lives have generally improved with technology, but they have serious consequences for others.

Bringing Power into AI Equity Solutions

To ensure that AI ameliorates, or at least does not exacerbate, the structural inequities we have identified, we must reimagine the four types of solutions already described. Technical solutions that account for power would orient scientists and engineers toward the concerns of marginalized people, rather than asking marginalized people to orient themselves toward the priorities of technologists.54 This starts with agenda-setting: research funders or technologists might begin by asking a community about the biggest challenges it faces and then determine development priorities accordingly. The partnership would continue throughout the design process, so that citizens can contribute their expertise and feel some ownership over the project, and so that researchers can establish trust with the community.

In Pittsburgh, for example, a technical team led by computer scientists at Carnegie Mellon University (CMU) worked with community members to build a technology that monitored and visualized local air quality.55 The collaboration began when the researchers attended community meetings and learned of residents’ concerns about air pollution from a nearby factory. Residents had previously struggled to get the attention of local or national officials because they were unable to produce enough quantitative data in a timely fashion. The researchers listened to the residents’ plight, built prototypes, and then altered the technology in response to community input. Eventually, their system brought together heterogeneous data, including crowdsourced smell reports, animated smoke images, fine-grained air quality data, and wind information, which the community then used to trigger government action—EPA administrators agreed to review the factory’s compliance, and later that year, the parent company announced the factory’s closure. This approach, however, required openness and humility from the researchers, recognition of community expertise, a desire to empower marginalized people, and a willingness to subordinate technical priorities to the needs of the neighborhood.56
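The technical core of such a system is comparatively modest. As a rough illustration, and using invented field names rather than the CMU project’s actual data model, aligning crowdsourced smell reports with nearby-in-time sensor readings might look like this:

```python
# A rough illustration (invented field names, not the CMU project's actual data
# model) of the integration step such a system performs: align crowdsourced smell
# reports with the nearest-in-time sensor reading so that residents' observations
# and instrument data appear together in one evidentiary record.
from datetime import datetime, timedelta

smell_reports = [  # hypothetical resident submissions
    {"time": datetime(2024, 5, 1, 6, 40), "rating": 4, "note": "sulfur smell, woke up coughing"},
    {"time": datetime(2024, 5, 1, 14, 5), "rating": 2, "note": "faint industrial odor"},
]

sensor_readings = [  # hypothetical monitor data (PM2.5 in micrograms per cubic meter)
    {"time": datetime(2024, 5, 1, 6, 30), "pm25": 78.0, "wind_dir": "NW"},
    {"time": datetime(2024, 5, 1, 14, 0), "pm25": 22.0, "wind_dir": "SW"},
]

def nearest_reading(report_time, readings, window=timedelta(minutes=30)):
    """Return the sensor reading closest in time to a report, within a tolerance window."""
    candidates = [r for r in readings if abs(r["time"] - report_time) <= window]
    return min(candidates, key=lambda r: abs(r["time"] - report_time)) if candidates else None

for report in smell_reports:
    reading = nearest_reading(report["time"], sensor_readings)
    if reading:
        print(report["time"], "| smell rating", report["rating"],
              "| PM2.5", reading["pm25"], "| wind", reading["wind_dir"])
```

What made the project work was not this plumbing but the collaboration around it: residents’ reports and priorities shaped what the system collected and displayed, and the researchers adapted it accordingly.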

Organizational solutions that alleviate structural inequality require leaders to identify how culture, language, norms, and daily practices can reinforce the power of certain groups and then work to change them. Diversifying an organization without this attention will simply produce more alienation and scandal. Technical organizations must clearly demonstrate that they are open to hearing hard truths about their own privilege, understanding how historically disadvantaged people may be disproportionately harmed by their work, and prioritizing solutions. To achieve this, all tech companies should have teams that focus on the equity dimensions of AI and report directly to the CEO. Such teams would weigh in on major research and development decisions, would be given long-term funding commitments, and would receive whistleblower protections.57 Universities also have an important role to play in training the next generations of scientists and engineers to understand the discrimination and harms perpetrated by their forebears; few people know, for example, that the academic field of statistics—which underlies AI—is rooted in eugenic ideology.58 Today, universities may require STEM students to take a single course on professional ethics.59 Instead, they should integrate attention to the equity, social, and ethical impacts of AI into core technical courses.60 And humanists and social scientists should teach this content to disrupt the conventional privileges afforded technical experts. After all, these scholars offer deep knowledge of how technology works in society. Finally, government agencies and philanthropic foundations that have begun to encourage research into the implications of AI should facilitate equitable multidisciplinary collaborations.

Scholars have envisioned a variety of legal and policy tools that take power imbalances seriously. This includes algorithmic impact assessments (AIAs), which governments could use to assess the risks and benefits of a particular technology before it is deployed.61 Similar to environmental impact assessments required for new development projects and government reviews of new drugs, they would require government officials to answer a standard battery of questions about the impacts of the system’s technical attributes, which would result in a final impact score that would determine its regulation.62 However, focusing on the technical dimensions of the system is insufficient. AIAs must consider the social implications. In its report on the benefits and harms of facial recognition technology in K-12 schools, New York state’s Office of Information Technology Services considered not only accuracy but also the likelihood that the technology would exacerbate bias and harm against already marginalized communities.63 Even if facial recognition technology became more accurate, the Office concluded, it would violate civil rights and liberties. The state legislature banned this use in response to the report.64
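To make the scoring mechanism concrete, here is a deliberately simplified sketch, entirely hypothetical and not modeled on any existing statute, of how an AIA rubric could weight social questions (consent, civil liberties, deployment around marginalized communities) alongside technical ones, so that the resulting score, and with it the level of regulatory scrutiny, reflects both:

```python
# An illustrative sketch (entirely hypothetical, not any jurisdiction's actual
# instrument) of an AIA rubric that scores social questions alongside technical
# ones, with the total determining the required level of regulatory scrutiny.

QUESTIONS = {
    # question id: (weight, text)
    "affects_rights":   (3, "Does the system inform decisions about civil rights or liberties?"),
    "marginalized_use": (3, "Will it be deployed on or around historically marginalized communities?"),
    "error_disparity":  (2, "Do error rates differ meaningfully across demographic groups?"),
    "data_consent":     (2, "Were training data gathered without informed consent?"),
    "no_human_review":  (1, "Are outputs used without meaningful human review?"),
}

def impact_score(answers):
    """answers: dict of question id -> bool; returns (score, required review tier)."""
    score = sum(weight for qid, (weight, _text) in QUESTIONS.items() if answers.get(qid, False))
    if score >= 7:
        tier = "full public review before deployment"
    elif score >= 4:
        tier = "independent audit and ongoing monitoring"
    else:
        tier = "registration and periodic reporting"
    return score, tier

# Hypothetical assessment of a facial recognition proposal for schools
answers = {"affects_rights": True, "marginalized_use": True,
           "error_disparity": True, "data_consent": True, "no_human_review": False}

print(impact_score(answers))  # (10, 'full public review before deployment')
```

The design point is that improving a system’s technical accuracy alone cannot lower its score out of scrutiny while the civil rights and consent questions still apply, which mirrors the New York conclusion described above.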

Others have suggested more deliberative approaches to increase civic capacity. In the case of genomics and biotechnology, Osagie Obasogie advocates for race impact assessments that are collaborative and involve multiple stakeholders.65 Systematically incorporating marginalized communities into algorithmic impact assessments could also help to empower them and ultimately alleviate structural inequalities. The key, however, is to link democratic deliberation to decision-making; otherwise these citizens will feel further exploited and neglected.66 Before launching an initiative to bring people with disabilities more centrally into tech innovation, for example, Borealis Philanthropy and the Ford Foundation turned to an advisory committee made up of people with disabilities who offered a range of expertise.67 Over the course of a year, the committee identified priorities, offered strategies to address harms at the intersection of disability and technology, and nominated and selected the inaugural cohort of grantees. Experts can also help community organizations advocate for more just AI development and use. The University of Michigan’s Science, Technology, and Public Policy Program has established the Community Partnerships Initiative, which responds to the concerns and priorities of organizations in Detroit and southeast Michigan with research and analysis.68 For example, it produced a policy brief on acoustic gunshot detection systems, which enabled We the People Michigan to challenge Detroit’s investment in the technology.69

Distributing Responsibility for AI and Equity

More serious attention to structural inequalities and the power imbalances they produce will require all the participants in the innovation ecosystem—innovators, customers, funders, regulators, and the public—to take on additional responsibilities. AI innovators must abandon the notion that their work is politically neutral and objective and recognize that if they seek societal benefit rather than harm, they must engage a diverse populace throughout the process, even at the priority-setting stage. They must treat these communities with respect, which includes taking their advice, especially when they sound alarms; compensating them for their time; and making decisions transparently.70 Innovators must also understand that social context will shape the impact of the technologies they build, both positively and negatively. In other words, technologies are only solutions if they fit with the culture, conventions, and relationships in a particular place. For their interventions to have the intended benefits, technologists should work with historians, sociologists, and anthropologists who can offer deep understanding of communities and of the relationships between technology and society.

Meanwhile, those who purchase AI must develop the capacity to ask about datasets and algorithms and about the structural inequalities these may hide and perpetuate. In some cases, purchasers may be able to force technologists to change the technology. Even when they cannot, they can advise those who ultimately use the technology about its limitations and about processes that may minimize harm to vulnerable communities.

Funders, whether public, philanthropic, or private, also have an important role to play. They can include marginalized communities on advisory committees that set funding priorities and privilege these communities’ insights, recognizing that they have had virtually no voice in the history of technological innovation thus far.71 Funders will also need to think quite differently about innovation. For AI to achieve important goals such as improving cancer survival rates or mitigating the effects of climate change in vulnerable communities, funders must recognize that the problems are simultaneously social and technical and create research opportunities accordingly. Funders can encourage, or even require, technologists to collaborate with marginalized communities, humanists, and social scientists on individual projects to ensure that those projects redress historical inequities. Private sector funders can provide incentives to technologists who consider equity and justice explicitly in their work; these developments will likely open new markets, which will ultimately benefit investors as well.

Regulators around the world have begun to take some responsibility for AI. In the United States, the Biden administration’s recent Executive Order aims to provide the users of algorithms across multiple sectors, including housing, criminal justice, and benefits programs, with guidance on responsible use.72 The administration is also developing systems to evaluate AI safety, including requiring developers to disclose the results of their “red team” tests. But this is not enough and is likely to focus regulators on the technical dimensions of the systems.73 We suggest that regulators consider more comprehensive impact assessments. These would require not only technical investigation of datasets and algorithms but also examination of the consequences when the technology is deployed in society. In other words, regulators will have to move beyond technical evaluation, which will require them to incorporate new types of expertise and evaluation processes.

As innovators, customers, regulators, and funders take on these new responsibilities, they will also place new burdens on already marginalized communities. Those communities’ inclusion is crucial to achieving equity and justice, but it is also risky. They may be overwhelmed by requests, tokenized, or offered insufficient compensation for their participation. They may also simply be wary of being ignored or abused, given the history of their participation in innovation. They must always have the agency to say no, and the innovation ecosystem must accept this. When they choose to participate, their knowledge must be valued and compensated fairly. This is the only way to build trust and ultimately alleviate structural inequities in AI and innovation more generally.

Notes

  1. 1. ACLU of Michigan, “After Third Wrongful Arrest, ACLU Slams Detroit Police Department for Continuing to Use Faulty Facial Recognition Technology,” American Civil Liberties Union, August 6, 2023, https://www.aclu.org/press-releases/after-third-wrongful-arrest-aclu-slams-detroit-police-department-for-continuing-to-use-faulty-facial-recognition-technology.

  2. 2. Rashida Richardson, Jason M. Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review Online 94 (2019): 15–55.

  3. 3. Bobby Allyn, “In Settlement, Facebook to Pay $52 Million to Content Moderators with PTSD,” NPR, May 12, 2020, sec. Technology, https://www.npr.org/2020/05/12/854998616/in-settlement-facebook-to-pay-52-million-to-content-moderators-with-ptsd; Billy Perrigo, “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” TIME, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/; Miriah Steiger et al., “The Psychological Well-Being of Content Moderators: The Emotional Labor of Commercial Moderation and Avenues for Improving Support,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021 CHI Conference on Human Factors in Computing Systems (New York: Association for Computing Machinery, 2021), 1–14, https://doi.org/10.1145/3411764.3445092.

  4. 4. Todd Feathers, “People with Disabilities Say This AI Tool Is Making the Web Worse for Them,” Vice, March 17, 2021, https://www.vice.com/en/article/m7az74/people-with-disabilities-say-this-ai-tool-is-making-the-web-worse-for-them.

  5. 5. Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights,” OSTP, The White House, 2022, https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

  6. 6. Andrew Tutt, “An FDA for Algorithms,” Administrative Law Review 69, no. 1 (2017): 83–123; Ryan Calo, “The Case for a Federal Robotics Commission,” Brookings, September 15, 2014, https://www.brookings.edu/articles/the-case-for-a-federal-robotics-commission/.

  7. 7. Sanders Kleinfeld, “A New Course to Teach People about Fairness in Machine Learning,” The Keyword, Google (blog), October 18, 2018, https://blog.google/technology/ai/new-course-teach-people-about-fairness-machine-learning/; Brianna Richardson and Juan E. Gilbert, “A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions,” arXiv, December 10, 2021, https://doi.org/10.48550/arXiv.2112.05700.

  8. 8. Shobita Parthasarathy, “Innovating for Equity,” Issues in Science and Technology 38, no. 3 (Spring 2022), https://issues.org/innovating-for-equity-shobita-parthasarathy-forum/.

  9. 9. National Academies of Sciences, Engineering, and Medicine and National Academy of Medicine, Toward Equitable Innovation in Health and Medicine: A Framework (Washington, DC: National Academies Press, 2023), https://doi.org/10.17226/27184.

  10. 10. Tawanna R. Dillahunt and Tiffany C. Veinot, “Getting There: Barriers and Facilitators to Transportation Access in Underserved Communities,” ACM Transactions on Computer-Human Interaction 25, no. 5 (October 11, 2018): 1–39, https://doi.org/10.1145/3233985.

  11. 11. Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (Cambridge, MA: MIT Press, 2020), https://doi.org/10.7551/mitpress/12255.001.0001.

  12. 12. Solon Barocas, Moritz Hardt, and Arvind Narayanan, Fairness and Machine Learning: Limitations and Opportunities (Cambridge, MA: MIT Press, 2023).

  13. 13. Algernon Austin, “My Phone’s Facial Recognition Technology Doesn’t See Me, a Black Man. But It Gets Worse,” USA Today, December 17, 2019, https://www.usatoday.com/story/opinion/voices/2019/12/17/artificial-intelligence-facial-recognition-technology-black-african-american-column/2664575001/.

  14. 14. Max Plenke, “The Reason This ‘Racist Soap Dispenser’ Doesn’t Work on Black Skin,” Mic, September 9, 2015, https://www.mic.com/articles/124899/the-reason-this-racist-soap-dispenser-doesn-t-work-on-black-skin.

  15. 15. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” in Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Conference on Fairness, Accountability and Transparency, PMLR (2018), 77–91, https://proceedings.mlr.press/v81/buolamwini18a.html.

  16. 16. Inioluwa Deborah Raji and Joy Buolamwini, “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products,” in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19 (New York: Association for Computing Machinery, 2019), 429–435, https://doi.org/10.1145/3306618.3314244.

  17. 17. Mona Sloane et al., “Participation Is Not a Design Fix for Machine Learning,” in Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO ’22 (New York: Association for Computing Machinery, 2022), 1–6, https://doi.org/10.1145/3551624.3555285.

  18. 18. Sarah Brayne, Predict and Surveil: Data, Discretion, and the Future of Policing (New York: Oxford University Press, 2020).

  19. 19. Jennifer Elias, “Google Contractor Reportedly Tricked Homeless People into Face Scans,” CNBC, October 3, 2019, sec. Technology, https://www.cnbc.com/2019/10/03/google-contractor-reportedly-tricked-homeless-people-into-face-scans.html.

  20. 20. Maggie Harrison Dupré, “Facial Recognition Used to Evict Single Mother for Taking Night Classes,” Futurism (blog), May 17, 2023, https://futurism.com/facial-recognition-housing-projects.

  21. 21. Microsoft, “Responsible AI Standard, V2,” June 2022, https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf.

  22. 22. Mozilla, “$2.4 Million in Prizes for Schools Teaching Ethics Alongside Computer Science,” Distilled—The Mozilla Blog (blog), April 30, 2019, https://blog.mozilla.org/en/mozilla/2-4-million-in-prizes-for-schools-teaching-ethics-alongside-computer-science/.

  23. 23. Madhumita Murgia and Cristina Criddle, “Big Tech Companies Cut AI Ethics Staff, Raising Safety Concerns,” Financial Times, March 29, 2023, sec. Artificial intelligence, https://www.ft.com/content/26372287-6fb3-457b-9e9c-f722027f36b3.

  24. 24. Katharine Miller, “Ethics Teams in Tech Are Stymied by Lack of Support,” Human-Centered Artificial Intelligence, Stanford University (blog), June 21, 2023, https://hai.stanford.edu/news/ethics-teams-tech-are-stymied-lack-support; Emanuel Moss and Jacob Metcalf, “Ethics Owners: A New Model of Organizational Responsibility in Data-Driven Technology Companies,” Data & Society, September 23, 2020, https://datasociety.net/library/ethics-owners/.

  25. 25. Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21 (New York: Association for Computing Machinery, 2021), 610–623, https://doi.org/10.1145/3442188.3445922.

  26. 26. Tom Simonite, “What Really Happened When Google Ousted Timnit Gebru,” WIRED, June 8, 2021, https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/.

  27. 27. Johanna Okerlund et al., “What’s in the Chatterbox? Large Language Models, Why They Matter, and What We Should Do About Them,” Science, Technology and Public Policy (STPP), Gerald R. Ford School of Public Policy, University of Michigan, April 2022, https://stpp.fordschool.umich.edu/research/research-report/whats-in-the-chatterbox.

  28. 28. Tamara Kneese, “Climate Justice & Labor Rights,” AI Now, August 2, 2023, https://ainowinstitute.org/general/climate-justice-and-labor-rights-part-i-ai-supply-chains-and-workflows.

  29. 29. Elizabeth Dwoskin and Nitasha Tiku, “A Recruiter Joined Facebook to Help It Meet Its Diversity Targets. He Says Its Hiring Practices Hurt People of Color,” Washington Post, April 9, 2021, https://www.washingtonpost.com/technology/2021/04/06/facebook-discrimination-hiring-bias/.

  30. 30. “Google Gives Black Workers Lower-Level Jobs and Pays Them Less, Suit Claims,” The Guardian, March 18, 2022, sec. Technology, https://www.theguardian.com/technology/2022/mar/18/google-black-employees-lawsuit-racial-bias.

  31. 31. Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties (Washington, DC: US Government Accountability Office, September 12, 2023), https://www.gao.gov/products/gao-23-105607.

  32. 32. Ivana Saric, “NYC Law Promises to Regulate AI in Hiring, but Leaves Crucial Gaps,” Axios, July 6, 2023, https://www.axios.com/2023/07/06/new-york-ai-hiring-law.

  33. 33. Ellen P. Goodman and Julia Trehu, “AI Audit Washing and Accountability,” SSRN Scholarly Paper (Rochester, NY: September 22, 2022), https://doi.org/10.2139/ssrn.4227350.

  34. 34. Ally Jarmanning, “Boston Lawmakers Vote to Ban Use of Facial Recognition Technology by the City,” NPR, June 24, 2020, sec. America Reckons with Racial Injustice, https://www.npr.org/sections/live-updates-protests-for-racial-justice/2020/06/24/883107627/boston-lawmakers-vote-to-ban-use-of-facial-recognition-technology-by-the-city.

  35. 35. Aidan Peppin, “Listening to the Public,” Ada Lovelace Institute, August 18, 2023, https://www.adalovelaceinstitute.org/report/listening-to-the-public/; Octavia Reeve, Anna Colom, and Roshni Modhvadia, “What Do the Public Think about AI?,” Ada Lovelace Institute, October 26, 2023, https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/.

  36. 36. “NSF and Partners Kick off the National Artificial Intelligence Research Resource Pilot Program,” US National Science Foundation, November 9, 2023, https://new.nsf.gov/news/nsf-partners-kick-nairr-pilot-program.

  37. 37. Nur Ahmed, Muntasir Wahed, and Neil C. Thompson, “The Growing Influence of Industry in AI Research,” Science 379, no. 6635 (March 3, 2023): 884–886, https://doi.org/10.1126/science.ade2420.

  38. 38. Alex Hanna and Emily M. Bender, “AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype,” Scientific American, August 12, 2023, https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/; Lorena O’Neil, “These Women Tried to Warn Us About AI,” Rolling Stone, August 12, 2023, https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/.

  39. 39. Will Knight, “A Letter Prompted Talk of AI Doomsday. Many Who Signed Weren’t Actually AI Doomers,” Wired, August 17, 2023, https://www.wired.com/story/letter-prompted-talk-of-ai-doomsday-many-who-signed-werent-actually-doomers/.

  40. 40. Mario Biagioli, “Patent Specification and Political Representation: How Patents Became Rights,” in Making and Unmaking Intellectual Property: Creative Production in Legal and Cultural Perspective, ed. Mario Biagioli, Peter Jaszi, and Martha Woodmansee (Chicago: University of Chicago Press, 2011), 25–40, https://press.uchicago.edu/ucp/books/book/chicago/M/bo11103013.html.

  41. 41. Stuart W. Leslie, The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford (New York: Columbia University Press, 1993).

  42. 42. Sebastian M. Pfotenhauer, Joakim Juhl, and Erik Aarden, “Challenging the ‘Deficit Model’ of Innovation: Framing Policy Issues Under the Innovation Imperative,” Research Policy, New Frontiers in Science, Technology and Innovation Research from SPRU’s 50th Anniversary Conference, 48, no. 4 (May 1, 2019): 895–904, https://doi.org/10.1016/j.respol.2018.10.015.

  43. 43. Roger McNamee, Zucked: Waking Up to the Facebook Catastrophe (New York: Penguin Press, 2020).

  44. 44. Chase Peterson-Withorn, “The 25 Richest People in the World 2023,” Forbes, April 4, 2023, https://www.forbes.com/sites/chasewithorn/2023/04/04/the-25-richest-people-in-the-world-2023/; Felix Richter, “Tech Giants Do Heavy Lifting in 2023 Stock Market Rebound,” Statista Daily Data (blog), June 19, 2023, https://www.statista.com/chart/30219/main-contributors-to-s-p-500-gains-in-2023.

  45. 45. Mohamed Abdalla and Moustafa Abdalla, “The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity,” in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’21 (New York: Association for Computing Machinery, 2021), 287–297, https://doi.org/10.1145/3461702.3462563.

  46. 46. Shana Lynch, “2023 State of AI in 14 Charts,” Human-Centered Artificial Intelligence, Stanford University (blog), April 3, 2023, https://hai.stanford.edu/news/2023-state-ai-14-charts.

  47. 47. Susan Laborde, “30+ Diversity in High Tech Statistics [2023 Data],” Tech Report, May 28, 2024, https://techreport.com/statistics/business-workplace/diversity-in-high-tech-statistics/.

  48. 48. J. Lewis, “Barriers to Women’s Involvement in Hackspaces and Makerspaces,” Monograph (University of Sheffield, September 3, 2019), https://eprints.whiterose.ac.uk/144264/.

  49. 49. Jessica Guynn, “Race and Class Divide: Black and Hispanic Service Workers Are Tech’s Growing Underclass,” USA Today, July 10, 2020, sec. Tech, https://www.usatoday.com/story/tech/2020/07/10/black-hispanic-workers-tech-underclass-amazon-apple-facebook-google/13461027/.

  50. 50. Perrigo, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour”; Billy Perrigo, “Inside Facebook’s African Sweatshop,” TIME, February 14, 2022, https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/.

  51. 51. Christopher I. Roos et al., “Native American Fire Management at an Ancient Wildland–Urban Interface in the Southwest United States,” Proceedings of the National Academy of Sciences 118, no. 4 (January 26, 2021): e2018733118, https://doi.org/10.1073/pnas.2018733118; Charles R Menzies and Caroline F Butler, “Returning to Selective Fishing Through Indigenous Fisheries Knowledge: The Example of K’moda, Gitxaala Territory,” American Indian Quarterly 31, no. 3 (2007): 441–464.

  52. 52. Amy Moran-Thomas, “How a Popular Medical Device Encodes Racial Bias,” Boston Review, August 5, 2020, https://www.bostonreview.net/articles/amy-moran-thomas-pulse-oximeter/.

  53. 53. Center for Devices and Radiological Health, “Pulse Oximeter Accuracy and Limitations: FDA Safety Communication,” US Food and Drug Administration, September 15, 2022, https://public4.pagefreezer.com/content/FDA/20-02-2024T15:13/https://www.fda.gov/medical-devices/safety-communications/pulse-oximeter-accuracy-and-limitations-fda-safety-communication.

  54. 54. Abeba Birhane, “Algorithmic Injustice: A Relational Ethics Approach,” Patterns 2, no. 2 (February 12, 2021): 100205, https://doi.org/10.1016/j.patter.2021.100205.

  55. 55. Yen-Chia Hsu et al., “Community-Empowered Air Quality Monitoring System,” arXiv, April 9, 2018, https://doi.org/10.48550/arXiv.1804.03293.

  56. 56. Yen-Chia Hsu et al., “Empowering Local Communities Using Artificial Intelligence,” Patterns 3, no. 3 (March 11, 2022): 100449, https://doi.org/10.1016/j.patter.2022.100449.

  57. 57. Christina J. Colclough and Kate Lappin, “Building Union Power to Rein in the AI Boss,” Stanford Social Innovation Review, September 20, 2023, https://ssir.org/articles/entry/building_union_power_to_rein_in_the_ai_boss.

  58. 58. Ruth Schwartz Cowan, “Francis Galton’s Statistical Ideas: The Influence of Eugenics,” Isis 63, no. 4 (December 1972): 509–528, https://doi.org/10.1086/351000.

  59. 59. Sepehr Vakil, “Ethics, Identity, and Political Vision: Toward a Justice-Centered Approach to Equity in Computer Science Education,” Harvard Educational Review 88, no. 1 (2018): 26–52, https://doi.org/10.17763/1943-5045-88.1.26.

  60. 60. James W. Malazita, “Translating Critical Design: Agonism in Engineering Education,” Design Issues 34, no. 4 (October 1, 2018): 96–109, https://doi.org/10.1162/desi_a_00514; James W. Malazita and Korryn Resetar, “Infrastructures of Abstraction: How Computer Science Education Produces Anti-Political Subjects,” Digital Creativity 30, no. 4 (October 2, 2019): 300–312, https://doi.org/10.1080/14626268.2019.1682616.

  61. 61. Emanuel Moss et al., “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest,” Data & Society, June 29, 2021, https://datasociety.net/library/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest/.

  62. 62. Lara Groves, “Algorithmic Impact Assessment: A Case Study in Healthcare,” Ada Lovelace Institute, February 8, 2022, https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/.

  63. 63. “Use of Biometric Identifying Technology in Schools,” Office of Information Technology Services, New York State, August 2023, https://its.ny.gov/system/files/documents/2023/08/biometrics-report-final-2023.pdf.

  64. 64. Carolyn Thompson, “New York Bans Facial Recognition in Schools,” TIME, September 27, 2023, https://time.com/6318033/new-york-bans-facial-recognition-schools/.

  65. 65. Osagie K. Obasogie, “Toward Race Impact Assessments,” in Beyond Bioethics: Toward a New Biopolitics, ed. Osagie K. Obasogie and Marcy Darnovsky (Oakland, CA: University of California Press, 2018), 461–471.

  66. 66. Mark B. Brown, Science in Democracy: Expertise, Institutions, and Representation (Cambridge, MA: MIT Press, 2009).

  67. 67. “Borealis Philanthropy and Ford Foundation Launch $1 Million Disability x Tech Fund to Advance Leadership of People with Disabilities in Tech Innovation,” Ford Foundation (blog), February 28, 2023, https://www.fordfoundation.org/news-and-stories/news-and-press/news/borealis-philanthropy-and-ford-foundation-launch-1-million-disability-x-tech-fund-to-advance-leadership-of-people-with-disabilities-in-tech-innovation/.

  68. 68. “Expanding Participation in Science and Technology Policy Through Civil Society Partnerships,” Gerald R. Ford School of Public Policy, University of Michigan (blog), November 4, 2021, https://fordschool.umich.edu/news/2021/expanding-participation-science-and-technology-policy-through-civil-society-partnerships.

  69. 69. Jillian Mammino, “Acoustic Gunshot Detection Systems: Community & Policy Considerations” (Gerald R. Ford School of Public Policy, University of Michigan, June 2022).

  70. 70. “Community Engagement Playbook,” Gerald R. Ford School of Public Policy, University of Michigan, forthcoming; “STPP to Explore Best Practice in Community Engagement,” Gerald R. Ford School of Public Policy, University of Michigan (blog), August 1, 2023, https://fordschool.umich.edu/news/2023/stpp-explore-best-practice-community-engagement?theme=ipc.

  71. 71. Shobita Parthasarathy, “Can Innovation Serve the Public Good?,” Boston Review, July 6, 2023, https://www.bostonreview.net/articles/can-innovation-serve-the-public-good/.

  72. 72. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

  73. 73. Sorelle Friedler et al., “AI Red-Teaming Is Not a One-Stop Solution to AI Harms: Recommendations for Using Red-Teaming for AI Accountability,” Data & Society, October 25, 2023, https://datasociety.net/library/ai-red-teaming-is-not-a-one-stop-solution-to-ai-harms-recommendations-for-using-red-teaming-for-ai-accountability/.
