Addressing Concerns with Artificial Intelligence


June 2020

BC WLF Update - By Catherine Zhou

The federal government has been exploring possible changes to the Personal Information Protection and Electronic Documents Act (“PIPEDA”) aimed at addressing privacy concerns arising from the use of artificial intelligence (“AI”) in the provision of services. As AI becomes more commonplace in business, through services such as website chatbots, it is also increasingly used for decision making. For example, financial institutions may now use AI to decide whether to grant a loan. The federal government has indicated that organizations using automated decision-making can expect their AI systems to retain certain details involved in each decision, as well as the logic used to reach it.

Currently, privacy law in Canada does not expressly address the use of AI, apart from the Directive on Automated Decision-Making, which applies only to the federal government. One of the most recent developments in AI regulation comes from the Office of the Privacy Commissioner of Canada, which has recommended aligning PIPEDA with the rights provided under the EU’s General Data Protection Regulation, including the right not to be subject to decisions made solely by AI and the right to demand human intervention and to contest those decisions.

As the law advances, it is important that lawyers act as a check against problematic uses of AI. Women are highly underrepresented in the AI field, comprising only 22% of AI professionals globally. If the data used by AI reflects biases stemming from these skewed industry demographics, biased decisions with significant implications may become a major problem as AI decision-making becomes more widespread.

Lawyers can help by ensuring their clients use AI ethically and in compliance with any forthcoming changes to PIPEDA.