You have to use artificial intelligence!
But be responsible!
That all sounds great. But how exactly are you supposed to do that?
The noise surrounding AI is so loud that it can be hard to follow your own moral compass. Sometimes you need guidelines to help point the way toward ethical, responsible use of these evolving tools in your daily PR practice.
To help on your journey, we’ve rounded up several artificial intelligence ethics guidelines from major PR organizations to help you better understand how to navigate these treacherous waters.
None are a replacement for deep thinking, open communication with leadership and your colleagues, and a commitment to doing the right thing. But all can help you determine how to keep on the right side of AI to deliver the best experience for employees, customers and other stakeholders.
You don’t have to do this alone.
Public Relations Society of America
The Public Relations Society of America recently released its comprehensive AI guidelines. Developed by a PRSA work group, the guide uses the organization’s existing ethics code as a framework for navigating weighty moral issues surrounding artificial intelligence, including examples of proper and improper use to help point the way to smart decisions.
“We have the opportunity to really educate across the board, to other professions and the C-suite about the challenges there and how to prepare for it,” Michelle Egan, PRSA 2023 chair, told PR Daily.
Chartered Institute of Public Relations and Canadian Public Relations Society
These organizations, the former based in the UK and the latter in Canada, have released their “Ethics Guide to Artificial Intelligence in PR.” This guide is helpful for its practical flow chart to assist in working through the complex issues that arise from figuring out how to use AI in a way that best serves the organization and the audience.
While the full flowchart is available in the guide, they also offer a simplified pyramid for thinking through AI concerns:
- Learn about AI data.
- Define the PR and AI pitfalls.
- Identify ethical issues and PR principles.
- Use the decision-making tree.
- Decide ethically based on the above.
PR Council
To develop its “PR Council Guidelines for Generative AI,” the organization worked with industry leaders and legal counsel to “help ensure that the use of generative AI aligns with our members’ core commitment to the highest level of professionalism, decision making, and ethical conduct.”
This document is helpful for its brevity and clarity. It’s straightforward and to-the-point, focusing on practical dos and don’ts, and it leans heavily on words like “always” and “never.” This guide offers helpful big-picture advice, but you may want to lean on the flowcharts and decision matrices above for more niche concerns.
Muck Rack
Simplest of all, Muck Rack offers a straight-to-the-point checklist you can print off and post beside your desk to keep yourself and your team accountable for AI work. This one-pager offers simple reminders to keep in mind whenever you’re working with generative AI, and it works as a quick accountability check before you hit publish.
All of these tools are helpful, but none are likely to comprehensively meet your exact needs. Using these documents as inspiration and a guide, work within your organization to develop your own rules, guidelines and decision-making frameworks to help steer your team toward responsible, efficient and successful AI usage. Get input across departments (IT is a powerful partner here!) and develop deep-dive matrices for working through problems, as well as easily digestible one-sheets to serve as a constant reminder of your ethical obligations when it comes to AI.
These tools are evolving quickly, but with a little pre-planning, you can keep your moral center no matter how quickly they move.
For more tips on leveling up your writing – with and without AI – join us for the Writing & Content Strategy Virtual Conference on Dec. 13! Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.