AI in the workplace can be a legal minefield, warns employment lawyer

Employers are rushing to adopt artificial intelligence tools, but the legal and privacy risks are piling up faster than many realize. Howard Levitt, Senior Partner at Levitt LLP, says companies need to tread carefully before letting algorithms into the workplace.

This is highlighted by a recent report from Infosys, which found that 95% of C-suite and director-level executives had experienced at least one type of problematic incident arising from their use of AI. The most-reported incidents were privacy violations and systemic failures, cited by 33% of respondents.

More than half of respondents (53%) reported the damage incurred from AI was reputational, while nearly half (46%) indicated it was legal, resulting in fines and settlements.

For Levitt, using AI to monitor employee productivity, computer usage and output raises real questions about employee rights.

“The question is whether or not that’s so intrusive as to be a constructive dismissal,” he explains. 

Traditionally, workers could expect some privacy even under in-person supervision. An AI program logging every keystroke or mouse movement could be interpreted as a digital equivalent of a supervisor standing permanently over someone’s shoulder. 

“An employee might argue that that fundamentally violates your basic privacy, and that could lead to an employee resigning and claiming constructive dismissal,” he says.

AI oversight blurs the line between workplace monitoring and personal privacy

Beyond surveillance, AI poses risks when it comes to data handling. Levitt points out that employees occasionally use workplace systems for personal matters, whether sending a quick note home or writing about medical concerns. 

“Employers are reviewing their documents, reviewing everything they’re doing, and thereby gaining access to matters that reveal things that are personal to them,” he explains.

That exposure, particularly if it touches on protected categories under human rights law, could itself be unlawful. And the problems extend to monitoring chat communications.

“That would be something that would violate their privacy rights, and it’s very important that employers ensure their proposed AI use complies with privacy laws and their own privacy policies,” he says.

Levitt urges companies to update their agreements with staff to reflect the new reality. Without those safeguards, sensitive material can easily slip into AI systems where it no longer belongs.

“Employees should have contracts, making clear that employees have to maintain the confidentiality [of] company confidential information, trade secrets and that the confidentiality obligation extends to employees’ use of AI tools, not only during employment, but post-employment,” he explains. 

Real-world examples of AI risks

But even when no confidential information is revealed, AI still creates liability because of accuracy problems, Levitt says. 

“AI hallucinations could result in the company being sued for defamation, for example, for breach of confidentiality or for copyright infringement,” Levitt says. 

He recounts one case in which an AI system mischaracterized a U.S. discrimination lawsuit. The AI-generated report said the company discriminated against all employees over 40, when the case actually involved women over 55 and men over 60.

That misstatement, while seemingly small, could have serious consequences if circulated as fact, Levitt says. 

Another incident saw AI generate workplace gossip about a female employee’s pregnancy. 

“That’s an example of where AI can breach privacy rights inaccurately,” Levitt says. 

The risk would remain even if the information had been accurate – workplace health matters are private, and AI systems do not have the judgment to handle them responsibly, he says.

Human review is the only safeguard against legal fallout

For Levitt, that’s why human review must remain central; a bot cannot assess company culture, legal nuance or reputational impact.

“Nothing should ever be released on AI unless a human being reviews it first, both for accuracy and for legal compliance,” he says. “The law holds the company accountable for every algorithmic misstep, and if you’re going to be operating through bots, those bots could get you sued.”

Litigation also raises overlooked risks. AI tools keep logs, and those records may be open to discovery in lawsuits. Confidential information could be exposed in ways that strengthen an opposing case, Levitt says. 

“In every litigation, the other side has the right to production of all relevant, potentially relevant documents and information. That includes ChatGPT history, search history, and that can be damning to a particular C-level executive,” he says. 

Employers must also be prepared for the possibility of plagiarism claims; executives relying on AI without verifying outputs may pass off copyrighted works as their own, leaving companies vulnerable to lawsuits, Levitt says. 

Recommendations for employers 

Levitt emphasizes the need to review AI tools before implementing them and to ensure policies are up to date. Any release of AI-generated information should be vetted by people, not machines.

“Audit every AI tool before using it; check the bias, check for accuracy, check for privacy compliance,” he says. “Update your contracts and policies to disclose your AI use, which is required in some provinces, and obtain explicit consent where it’s required, and make sure your team understands the liabilities and limitations of AI-generated data.”

Existing privacy laws, he argues, are not keeping pace with these developments. He expects provinces to move quickly.

“They’re not appropriate because AI is so rapidly developing. The law hasn’t caught up with it yet,” Levitt says. “I anticipate that most provinces are going to try and get ahead of it and make much more stringent and rigorous laws respecting privacy rights in the AI context than they have right now.”

That legislative gap won’t last long. In Levitt’s view, the issues are only going to expand. 

“Everything that I’ve talked about so far is going to happen increasingly, as more companies are relying on AI, and more individuals are relying on AI,” he says.

Source – https://www.hcamag.com/ca/specialization/employment-law/ai-in-the-workplace-can-be-a-legal-minefield-warns-employment-lawyer/547034
