Abstract

The ultimate result of AI medicine may be the birth of "Dr. Super AI." Dr. Super AI would not only supplement medical care but also acquire a degree of autonomy. Its capacity for reasoning would far exceed that of most human doctors, yet it would lack certain intrinsic characteristics of human beings, placing it somewhere between "machine" and "human." This raises an important question: if such an agent makes mistakes, who or what is responsible? This paper argues that this "responsibility gap" is not insurmountable. If the types of mistakes made by Dr. Super AI can be distinguished, the corresponding bearer of responsibility can be identified for each. This is a viable countermeasure, at least until AI robots have developed human emotions and moral awareness.
