How to use AI responsibly when writing
Honesty and oversight are key.
AI writing programs are exploding in number and popularity. There’s ChatGPT, of course, but also Google’s Bard, Quora’s Poe and others on the horizon.
The emotions these tools inspire in writers are complex and mixed. Some fear they’ll be replaced by robot overlords. Others see potential in a tool that can handle the basic tasks of writing so they can focus on higher-level functions. Still others see the tools as a cynical way to create content-mill SEO bait.
We’re already seeing that last one permeate the media. There have been high-profile failures of AI writing at CNET and Men’s Journal. The problems with AI writing range from a lack of transparency to plagiarism to sharing information that’s just flat-out wrong on important topics like finances and health.
Does this mean we should shun AI because of early problems? That’s surely an overreaction. The failures in the above examples aren’t ultimately due to AI — they’re due to humans.
Here are some things to keep in mind as you look to incorporate AI writing programs into your communications.
Disclose your use of AI
If you use AI to write a substantial portion of a piece, you must be clear about that. Whether it’s to your boss or a client, you need to ensure they are comfortable with you using verbatim readouts from a computer program. Without that honesty, you’re essentially plagiarizing: taking credit for words you did not write.
If the piece will be public-facing, you must be clear with that audience about AI’s role as well. Both CNET and Men’s Journal were hammered for using vague bylines that referred to “editors.” A bot is not an editor. If you used AI to write most of a piece, disclose that at the top in no uncertain terms.
Now, none of this is to say you must over-explain. If you’ve merely used AI as a tool, say, to get an idea or check your grammar, you don’t need to disclose that, any more than you would if you consulted a dictionary or a style guide.
Be honest with yourself about how much the AI contributed to the piece. If you’re using its words instead of your own, disclose.
Don’t trust its research
AI is becoming notorious for being confidently wrong. Its answers leave little room for ambiguity or the possibility of error, even though it very well can be mistaken. Sometimes it will even make up its own facts, as it did when we asked it to write a press release and it fabricated phone numbers and even entire people.
Also remember that much of the data AI is relying on is out of date. ChatGPT says its data is from 2021, which can be a lifetime ago in many industries.
It’s still best to do your own research so that you can critically evaluate sources, check timestamps for up-to-date information and so on. But if you do ask AI to pull facts for you, take the time to verify them yourself, no matter how confident it sounds.
Use it for ideas and structure
You might be wondering at this point what you can trust AI for. There are still so many positive uses for communicators. Here are just a few:
- Writing something you’ve never tackled before? Ask it to show you a template or example.
- Not sure what to write about? Ask it to provide five questions people ask about the topic, or to offer an outline. These can be powerful jumping-off points that save you time staring blankly at your screen and let you get started putting your own spin on the writing.
- Struggling with tone? Paste in text and ask it to rewrite it as inspirational. Or somber. Or in the style of Donald Duck. The sky’s the limit.
We are only just scratching the surface of what AI can do and how it can serve us in the future. But remember: it serves us. Not the other way around. We must still be arbiters of truth, weavers of words and masters of the technology.
How are you incorporating AI writing into your communications practice?
Allison Carter is executive editor of PR Daily. Follow her on Twitter or LinkedIn.