Generative AI in the Workplace: The Potential, the Pitfalls and the Precautions


In the second of our three-part Technology & Privacy Law in the Workplace series, we highlight the benefits, risks and recommendations to keep in mind when incorporating generative artificial intelligence (“generative AI”) into the workplace.

Generative AI describes algorithms, such as ChatGPT, that can be used to create new content. Although AI-powered assistants have been around for many years (most of us still remember Microsoft’s Clippy), recent breakthroughs in generative AI have begun paving the way to drastically change how businesses operate. However, this has not been without a few sharp learning curves. In recent claims regarding the misuse of generative AI, courts and tribunals have held companies, and individuals, accountable for information provided by their agents, even where those agents are chatbots (automated systems that respond to a person’s prompts and inputs) or other forms of generative AI.

Air Canada is one of the latest parties to feel the sting of an AI mishap, after it lost a small claims case brought by a passenger who had relied on information provided by the airline’s chatbot. After the death of his grandmother, the claimant passenger used the Air Canada chatbot to research flights under the airline’s bereavement fare policy. The chatbot stated that the claimant could apply for bereavement fares retroactively. However, after his retroactive claim was denied by the airline, the claimant learned that the information provided by the chatbot contradicted the airline’s actual policy, which did not allow bereavement refunds to be claimed retroactively. The claimant had taken a screenshot of the chatbot’s message and relied on it at the hearing to enforce the terms the chatbot had set out. Air Canada attempted to deny liability on the basis that the chatbot’s message included a link to the airline’s policy on the issue. However, the tribunal found that Air Canada had failed to explain why the passenger should not trust information provided on its website by its own chatbot. Ultimately, the chatbot’s message was found to amount to negligent misrepresentation, and Air Canada was held liable for a partial refund of the claimant’s airfare. Air Canada’s chatbot is no longer found on its website.

As more and more AI-related cases make their way through the court system, courts across Canada have been quick to issue warnings regarding the use of AI. A BC court recently heard the first Canadian case involving AI-generated court filings, after a lawyer used generative AI to prepare materials that cited cases that did not actually exist. AI chatbots are known to generate realistic-sounding information that is in fact incorrect, a phenomenon referred to as “hallucination”. The hallucinated cases, relied on by the lawyer in a family law matter, were found by the court to be “tantamount to making a false statement to the court”, and the lawyer was ordered to pay costs personally.

However, in technology’s fast-paced and ever-changing landscape, generative AI also provides effective and efficient business solutions and is quickly becoming an invaluable resource for many companies. Whether used to generate content for presentations or articles, to enhance search capabilities, to swiftly summarize emails, cases or transcripts, or even to act as a virtual assistant, the benefits of generative AI appear (almost) infinite. The key to successfully incorporating generative AI is to recognize both the benefits and the limitations of such machine learning.

If you are considering incorporating generative AI in your workplace, you are not alone: it is estimated that generative AI will become a $1.3 trillion market by 2032! To make the most of the vast potential of this technology, while recognizing the risks involved, we recommend starting with a few key considerations.

  1. The first step is to consider the organization’s objective in using generative AI: will it be used only internally, or externally as well? Consider the driving force behind adopting generative AI and which areas of the business will incorporate it first.
  2. Next, you will need to consider the risks associated with each and every use of generative AI. For example, there may be less risk in using generative AI to create a brief internal presentation than in using it to generate a contract for a client. Don’t forget to consider the privacy risks associated with inputting personal or confidential information into generative AI tools.
  3. Once you have considered each use of generative AI and the associated risks, you then need to set parameters for that use. Some examples include restricting the client information that may be entered into generative AI tools, having every output vetted and fact-checked before it is relied on, requiring management approval for every piece of generative AI content, and stamping work products with a disclaimer that they were produced with the use of generative AI.
  4. Once you have considered all the uses, risks and parameters of generative AI in the workplace, create a clear and concise policy setting them out. A clear policy is the first step to ensuring that all employees are aware of their responsibilities and obligations when it comes to using generative AI.

For any questions regarding the use of generative AI in the workplace, please contact any member of Clark Wilson’s Employment & Labour Group or Privacy Law Group listed below:

Employment & Labour Group

Andrea Raso

Catherine Repel

Debbie Preston

John Soden

Pu Zhang

Privacy Group

Scott Lamb

Jeff Holowaychuk

Monica Sharma

Debbie Preston