

In its pragmatic turn, the new discipline of AI ethics came to be dominated by humanity's collective fear of its creatures, as reflected in an extensive and perennially popular literary tradition. Frankenstein's monster in the novel by Mary Shelley rising against its creator, the unorthodox golem in H. Leivick's 1920 play going on a rampage, the rebellious robots of Karel Čapek - these and hundreds of other examples of the genre are the background against which the preoccupation of AI ethics with preventing robots from behaving badly towards people is best understood. In each of these three fictional cases (as well as in many others), the miserable artificial creature - mercilessly exploited, or cornered by a murderous mob, and driven to violence in self-defense - has its author's sympathy. In real life, with very few exceptions, things are different: theorists working on the ethics of AI completely ignore the possibility of robots needing protection from their creators. The present book chapter takes up this, less commonly considered, ethical angle of AI.

Evolutionary accounts of feelings, and in particular of negative affect and of pain, assume that creatures that feel and care about the outcomes of their behavior outperform those that do not in terms of their evolutionary fitness.
