Abstract

Since the introduction of artificial intelligence technologies into medical diagnosis, ethical issues have emerged. One of these concerns is the "black box" problem: an AI system can be observed only in terms of its inputs and outputs, with no way to inspect the algorithm's internal workings. This is problematic because patients, physicians, and even designers cannot understand why or how a treatment recommendation is produced by AI technologies. In this paper, I argue that AI technologies should be explainable on the grounds that patients have a right to informed consent.

As artificial intelligence technologies have become increasingly widely applied in medical diagnosis, concern over their "opacity" has grown. This concern arises because the working mechanisms of AI systems remain unclear. When we cannot know their internal workings, is it acceptable to treat patients on the basis of their diagnoses? This paper argues, from the perspective of patients' right to informed consent, that AI systems used in medical diagnosis should be made more transparent in order to avoid possible harm to patients.
