The legal risks of ChatGPT

Avoiding the legal pitfalls of ChatGPT requires close reading of the content.

Tom Corfman is a lawyer and senior consultant with Ragan Consulting Group, where he runs RCG’s Build Better Writers program and needs all the God-given intelligence he can find.

Late last year, as the technology investment giant SoftBank Group was piling up billions of dollars in losses, its billionaire chief executive began some soul-searching.

“How many years do I have?” Masayoshi Son, 65, asked himself, as the Tokyo-based company was on its way to losing $7.2 billion in its most recent fiscal year. “Can I finish my career as this is?”

“Actually, I cried very hard,” the company founder said through an interpreter at SoftBank’s annual meeting last month. “I really couldn’t stop my tears.”

Son got his mojo back in an unlikely way: brainstorming with ChatGPT. For days, he pitched tech inventions to the artificial intelligence chatbot, which rejected or challenged them. He then responded.

One middle-of-the-night debate ended with ChatGPT saying, “This is a great idea, very realistic.”

“So GPT praised me,” he said. “It made me very happy.”

Son said he has developed many ideas this way, but maybe he just needed somebody to talk to.

For every bad deal, Son probably has dozens of lawyers offering advice. Many executives jumping into AI are overlooking the risks, according to a survey last year of 500 C-suite executives by law firm Baker & McKenzie.

There are many unsettled legal issues around using ChatGPT, the chatbot developed by OpenAI and backed by Microsoft. Your law department will have plenty to say. We’re not offering legal advice. But professional communicators need to know enough to raise their hands.

Avoiding potential pitfalls requires a skill already possessed by top-flight communicators: close reading of the content. Everyone needs an editor, especially ChatGPT.

Here are six of the many legal issues to consider when using the app:

1. You giveth and ChatGPT giveth away. The app’s “Terms of Use” do not protect the confidentiality of the information you enter into the app, called a prompt. In fact, the FAQs state the opposite.

“Please don’t share any sensitive information in your conversations,” the company says.

Lawyers with Norton Rose Fulbright warn: “Although it is possible to select an option to opt out of use for these purposes, it is not clear whether the input data is still retained.”

Son may be surprised if some of the ideas he tested during his late-night conversations turn up in answers given to other users. That’s what happened when Samsung engineers submitted proprietary programming code to the app. Oops.

A growing number of companies limit their employees’ use of ChatGPT or ban it altogether.

2. Does anybody own the output? A selfie taken by a monkey can’t be copyrighted because it wasn’t created by a human. So who owns content created by monkeying around with ChatGPT?

“The output from machine learning models is not necessarily your own, or, even if unique, may not be protectable as intellectual property,” legal eagles at Orrick say.

This will worry your company’s law department and will certainly bother you when your carefully crafted, AI-generated words turn up on somebody else’s website.

3. That sounds familiar. How much tweaking does it take to make the output your own? Many ChatGPT users say the app is helpful for writing work emails and memos. That’s great until the boss gets many emails that sound the same. Then you’re busted.

OpenAI doesn’t hide the fact that its answers may be the same from one user to the next. It says:

“Output may not be unique across users and the services may generate the same or similar output for OpenAI or a third party. For example, you may provide input to a model such as ‘What color is the sky?’ and receive output such as ‘The sky is blue.’ Other users may also ask similar questions and receive the same response.”

Who would ask ChatGPT what color the sky is? Isn’t that what Siri is for?

4. ChatGPT doesn’t like everyone the same. The app’s tendency toward prejudice based on race, gender and sexual orientation creates a risk that implicit bias will creep into communications.

“Because AI models are built by humans and learn by devouring data created by humans, human bias can be baked into an AI’s design,” lawyers at Wilmer Cutler Pickering Hale and Dorr write.

Companies create enough bias on their own and don’t need help from artificial intelligence.

5. Look who’s talking. OpenAI’s terms of use say, “You may not … represent that output from the services was human-generated when it is not.”

What’s the world coming to when you can’t even plagiarize ChatGPT?

The company’s Publication Policy goes further, requiring disclosure: “The role of AI in formulating the content is clearly disclosed in a way that no reader could possibly miss, and that a typical reader would find sufficiently easy to understand.”

Does futzing with the copy avoid this requirement?

As Dr. Seuss once wrote, “I do not know. Go ask your dad.”

6. Error, error everywhere. ChatGPT’s mistakes are called “hallucinations.” Apparently, the app is trained on vast amounts of text data and psilocybin mushrooms.

Despite the pattern of mistakes, OpenAI’s Terms of Use limit its liability for any damages to $100. Moreover, as lawyers with the Mintz firm point out, “The user may be liable to defend and/or indemnify OpenAI from any claims, losses and expenses (including attorneys’ fees).”

Everyone makes mistakes. The problem is you can’t blame ChatGPT, but it can certainly help others blame you.

