Artificial intelligence (AI) is now used across industries, from medical diagnostics to finance, to streamline tasks and aid decision-making. But instances of AI causing harm are also widespread, from chatbots offering dangerous advice to driverless cars striking pedestrians. What legal consequences should follow when AI injures people? In May 2023, BCLI released the Report on Artificial Intelligence and Civil Liability to answer this question and others concerning civil justice in cases of harm caused by AI.
AI systems are designed to operate with varying degrees of autonomy. They rely on probabilistic inferences drawn from patterns in data, and that data may not be accurate, complete, or representative. How an AI system arrives at a particular decision or other output is not always explainable. These features create challenges for applying tort law to harm caused to people and property by autonomous AI actions, because tort rules were crafted to address harm caused by humans.
Our report explores AI's impact on and challenges for tort law, legal theories of civil liability, standards and good practices for AI development, and recommendations for adapting tort law to the context of AI-related harm. Read the report at the BCLI website.