Title Trolley crash : approaching key metrics for ethical AI practitioners, researchers, and policy makers / edited by Peggy Wu, Michael Salpukas, Hsin-Fu Wu, Shannon Ellsworth.

Publication Info. London, United Kingdom ; San Diego, CA, United States : Academic Press, an imprint of Elsevier, [2024]

Copies
Location     Axe Elsevier ScienceDirect Ebook
Call No.     Electronic Book
OPAC Message ---
Status       Available
Description 1 online resource (xv, 248 pages) : color illustrations
Content Type text txt rdacontent
Media Type computer c rdamedia
Carrier Type online resource cr rdacarrier
Bibliography Includes bibliographical references and index.
Summary "The prolific deployment of Artificial Intelligence (AI) across different fields has introduced novel challenges for AI developers and researchers. AI is permeating decision making for the masses, and its applications range from self-driving automobiles to financial loan approvals. With AI making decisions that have ethical implications, responsibilities are now being pushed to AI designers who may be far-removed from how, where, and when these ethical decisions occur. Trolley Crash: Approaching Key Metrics for Ethical AI Practitioners, Researchers, and Policy Makers provides audiences with a catalogue of perspectives and methodologies from the latest research in ethical computing. This work integrates philosophical and computational approaches into a unified framework for ethical reasoning in the current AI landscape, specifically focusing on approaches for developing metrics. Written for AI researchers, ethicists, computer scientists, software engineers, operations researchers, and autonomous systems designers and developers, Trolley Crash will be a welcome reference for those who wish to better understand metrics for ethical reasoning in autonomous systems and related computational applications." -- Provided by publisher.
Note Description based on online resource; title from digital title page (viewed on February 29, 2024).
Contents Front Cover -- Trolley Crash -- Copyright -- Contents -- Contributors -- Foreword -- Acknowledgments -- 1 Introduction -- 1.1 Ethical AI introduction -- 1.2 Why ethical AI metrics? -- 1.3 Ethical AI metric development -- References -- 2 Terms and references -- 2.1 Definition of terms and references -- 2.2 Discussion -- 2.3 Conclusion -- References -- 3 Boiling the frog: Ethical leniency due to prior exposure to technology -- 3.1 Introduction -- 3.2 Background -- 3.3 Literature review -- 3.3.1 The use of emotion detection in online contexts
3.3.2 The ethical considerations of emotion detection -- 3.3.3 Technology acceptance and habituation -- 3.3.4 Evaluation of technology -- 3.4 Problem -- 3.5 Methods -- 3.5.1 Measures -- 3.6 Data analysis -- 3.6.1 Ethical leniency (H1) -- 3.6.2 Likelihood of adoption (H2) -- 3.6.3 Known usage -- 3.6.4 Behavioral effects -- 3.7 Use cases -- 3.8 Applications -- 3.9 Discussion -- 3.9.1 Ethical evaluation -- 3.9.2 Adoption -- 3.9.3 Publicity of usage -- 3.9.4 Behavior -- 3.10 Conclusions -- 3.11 Outlook and future works -- Notes and acknowledgments -- References
4 Automated ethical reasoners must be interpretation-capable -- 4.1 Introduction: Why addressing open-texturedness matters -- 4.1.1 Contributions -- 4.2 Interpretive reasoning and the MDIA position -- 4.3 Benchmark tasks to achieve interpretation-capable AI -- 4.4 Conclusion -- Acknowledgments -- References -- 5 Towards unifying the descriptive and prescriptive for machine ethics -- 5.1 Machine learning -- A gamble with ethics -- 5.2 Definitions, background, and state of the art -- 5.3 Is machine learning safe? -- 5.4 Moral axioms -- A road to safety -- 5.4.1 Moral axioms for machine ethics
5.4.2 Grounding norms in moral axioms -- 5.5 Testing luck as distinguishing between morality and convention -- 5.5.1 Human judgment of moral vs. conventional transgressions -- 5.5.2 Formalizing the MCT task -- 5.5.2.1 Step 1 -- MCT training -- 5.5.2.2 Step 2 -- MCT testing -- 5.5.2.3 Step 3 -- Evaluating -- 5.6 Discussion -- 5.7 Conclusion -- Acknowledgments -- References -- 6 Competent moral reasoning in robot applications: Inner dialog as a step towards artificial phronesis -- 6.1 Introduction and motivation -- 6.2 Background, definitions, and notations -- 6.2.1 Ethics -- 6.2.2 Morality
6.2.3 AI ethics -- 6.2.4 Machine ethics, machine morality, and moral machines -- 6.2.4.1 Ethical impact agents -- 6.2.4.2 Artificial ethical agent -- 6.2.4.3 Artificial moral agent -- 6.2.5 Machine wisdom -- 6.2.6 Artificial phronesis -- 6.2.7 Robot consciousness -- 6.2.8 Robot's inner speech -- 6.2.9 Trust in AI -- 6.2.10 Trust in robotics -- 6.3 Literature review and state of the art -- 6.4 Problem/system/application definition -- 6.4.1 Artificial phronesis and inner speech -- 6.5 Proposed solution -- 6.5.1 A proposed experiment to test machine ethical competence
Subject Artificial intelligence -- Moral and ethical aspects.
Artificial intelligence -- Ethics.
Intelligence artificielle -- Aspect moral.
Intelligence artificielle -- Morale.
Added Author Wu, Peggy, editor.
Salpukas, Michael, editor.
Wu, Hsin-Fu, editor.
Ellsworth, Shannon, editor.
Other Form: Print version: 0443159912 9780443159916 (OCoLC)1389877479
ISBN 9780443159923 electronic book
0443159920 electronic book
9780443159916 paperback
0443159912 paperback
Standard No. AU@ 000076098195