Realizing the Promise and Minimizing the Perils of AI for Science and the Scientific Community: Index

Table of Contents
  1. Title Page
  2. Copyright
  3. Contents
  4. 1. Overview and Context
  5. 2. The Value and Limits of Statements from the Scientific Community: Human Genome Editing as a Case Study
  6. 3. Science in the Context of AI
  7. 4. We’ve Been Here Before: Historical Precedents for Managing Artificial Intelligence
  8. 5. Navigating AI Governance as a Normative Field: Norms, Patterns, and Dynamics
  9. 6. Challenges to Evaluating Emerging Technologies and the Need for a Justice-Led Approach to Shaping Innovation
  10. 7. Bringing Power In: Rethinking Equity Solutions for AI
  11. 8. Scientific Progress in Artificial Intelligence: History, Status, and Futures
  12. 9. Perspectives on AI from Across the Disciplines
  13. 10. Protecting Scientific Integrity in an Age of Generative AI
  14. 11. Safeguarding the Norms and Values of Science in the Age of Generative AI
  15. Appendix 1. List of Retreatants
  16. Appendix 2. Biographies of Framework Authors, Paper Authors, and Editors
  17. Index

INDEX

  • AAAI (Association for the Advancement of Artificial Intelligence), 5, 147, 206
  • AI (artificial intelligence), definition and terminology, 5, 59–60, 85; future trends, 127–128, 138, 177, 188, 208–209; historical, 148, 211; limitations, 2, 50, 71, 168, 188
  • AI and facial recognition, 45, 48–49, 71, 127–129, 131–132, 134, 137
  • AI and society, 112, 119, 123, 204, 210
  • AI Bill of Rights, 8, 61, 127
  • AI governance, 49–50, 57–59, 69, 72–75, 84, 139; challenges and future directions, 60, 63, 67, 88–89; ethical guidelines and principles, 88, 92, 178; global governance frameworks, 63–64, 70–71, 79; norms and patterns in governance, 58, 62, 68, 72, 74, 76–78, 82–84, 88–89, 92, 105–106, 113; social imperatives, 102–104, 114–115, 119–120, 122
  • AI in climate and sustainability, 31, 67, 138, 183–184, 188, 212, 216–217
  • AI in computer science, 21–22, 30, 135, 148, 178, 196, 201–202, 214–216; in computer science engineering, 203–206
  • AI in education, 49, 68, 103, 113, 168, 179, 184, 186, 200
  • AI in engineering, 170, 184
  • AI in healthcare, 180, 182, 183
  • AI in public policy, 3, 72, 76, 81, 84, 92, 109–110, 115
  • AI in science, 179–181, 183, 195–196, 222; astronomy, 29–30; chemistry, 199–201; geology, 211–212; physical sciences, 183, 185; physics, 31, 212–213, 216–218
  • AlphaFold, 1, 24, 179, 196, 203
  • Amazon, 133, 178
  • Annenberg Foundation Trust at Sunnylands, 2–4, 147, 221, 230, 242–243
  • Apple, 133, 176, 232
  • Asilomar Conference on Recombinant DNA: in 1975, 3, 36, 42–43, 243–244; in 2009, 206–207; in 2017, 233–234, 243
  • Belmont Report, 3, 44, 107, 216, 240–242
  • Biotechnology, 36, 41, 43
  • Bletchley Declaration, 64, 78–80, 234
  • Brazil’s Draft Artificial Intelligence Act, 61, 70–73, 77, 79, 81–83, 86
  • Canada’s Draft Artificial Intelligence and Data Act, 60, 70–72, 77, 82
  • ChatGPT. See OpenAI
  • China’s Interim Generative AI Measures, 60–61, 70, 75
  • CoE (Council of Europe), 57, 64, 78
  • CRISPR-Cas9, 3, 15–17. See also Human genome editing
  • DALL-E/DALL-E 2, 152, 184, 187
  • DNN (deep neural networks), 22–25, 27–29, 153–157, 167, 181
  • Equity in AI, 116–118, 123, 128, 133–134, 138–139, 224, 239, 241; biases, 48–49, 112, 117–118, 130–131, 134; digital divide, 187; equity and inclusion frameworks, 129, 134–136; funding, 132–133, 138–139, 178; policy recommendations, 35–36, 40, 64, 87
  • Ethics, 105–109, 110–111, 115; in AI, 108, 117, 121, 136, 224, 239, 241; ethical frameworks and principles, 64, 79, 101; ethical guidelines, 101–102, 105, 122; malevolent uses of AI, 184, 186–187, 210, 212; trustworthy practices, 99
  • EU AI Act, 57, 59, 61, 63, 66–68, 70–74, 77, 80–84, 86, 91, 234–235
  • Facebook, 131–133
  • FBI, 38, 40, 131
  • Frameworks, AI, 65, 105–107, 119, 123, 174, 232
  • GDPR (General Data Protection Regulation), 63, 68, 75, 85, 91
  • Generative AI, 1, 19, 21, 23, 112, 158, 197, 222
  • Google, 129–131, 133–134, 176, 178, 197; DeepMind, 233; Gemini (formerly Bard), 162, 170, 175–176, 178
  • G7 and G20, 57, 64, 66, 70, 78, 83, 87
  • Human genome editing, 4–5, 15–19, 196, 244
  • Human responsibility in AI, 100, 130–131, 138–139, 206, 222–226, 232–233, 236–237
  • Informed consent, 4, 44, 109, 129–130, 240
  • IRB (Institutional Review Board), 3–4, 44, 108–110, 241
  • Justice in AI, 8, 64–65, 78, 101–109, 112–118, 120–123, 127, 139, 222, 231–232, 241–242
  • LLM (large language models), 23, 25, 29–30, 130, 168, 171–172, 175, 183; training, 198–199
  • Machine learning, 22; diffusion modeling, 23, 25, 29, 152, 180; discriminative and generative models, 151–152; foundations and advancements, 150–153; open-source modeling, 177, 179; transformers, 25, 27–28, 30, 152, 158–162, 164–165, 167–168, 179
  • Microsoft, 5, 130, 133, 174, 176, 178, 183, 240, 242
  • Monitoring and oversight, 12, 36, 68, 71, 86, 110, 119, 128, 130, 208, 224–226, 229–230, 235
  • NAS (National Academy of Sciences), 2–4, 147, 195–199, 209, 221, 226, 230–231, 236, 238–239, 243–245
  • NIH (National Institutes of Health), 3, 42–44, 110, 232, 238–239
  • NIST (National Institute of Standards and Technology), 48, 61, 66, 69, 83–84
  • NSF (National Science Foundation), 31, 110, 132, 179, 231
  • OECD (Organisation for Economic Co-operation and Development), 59–61, 64, 66, 70, 81, 85, 87
  • OpenAI, 1, 24–25, 27, 159, 162, 173, 178, 197, 233; GPT-4, 23, 163–171, 175–176, 182, 184–186, 207; GPT-4V, 170–171
  • Policymakers, 2, 8, 35, 39–43, 66, 111–113, 120–121, 127, 133–134, 222–225, 242
  • Rome Call for Ethics, 239–240
  • Scientific community, 2–5, 30, 49, 204; integrity, 1, 4, 12, 19, 36, 42, 50, 221; scientific norms, 4, 13, 19, 36–38, 57, 221, 230–231, 243
  • Singapore’s Model AI Governance Framework, 78, 80, 83, 85
  • Transparency in AI, 2, 9, 64, 68, 73, 79, 127, 138, 222–225, 232–237
  • Turing Test, 21, 28, 148
  • US Executive Order on Safe, Secure, and Trustworthy AI, 57, 61, 68, 72, 77–82, 85–86, 90–91, 139
  • Verification of AI-generated content, 12, 168, 203, 223–224, 234–235
