How communicators win over AI converts with ethics conversations

Groundbreaking tools need guidelines.

Communicators spend plenty of time talking about what AI can do. The harder conversation is what AI means, both practically and ethically. Between worries about job cuts and environmental impact, many employees remain skeptical about whether AI use is a responsible choice.

At Ragan’s AI Horizons Conference, running February 2-4 in Fort Lauderdale, Alex Mahadevan, director of MediaWise and the AI ethics guide at Poynter Institute, will discuss how communicators can win over AI-skeptical employees by rooting conversations in a strong sense of ethics.

“A lot of skeptics feel that their core ethical values run counter to what they see on the AI side,” Mahadevan said. “Many of these tools aren’t transparent or accountable, and they’re often built without much consideration for long-term harm. My goal is to bridge that gap and show people how they can use AI while still holding onto the principles that matter to them.”

He added that the two most important steps to take when engaging with AI-skeptical employees are explaining the ethical principles behind proper AI use and summarizing what a given tool is capable of doing.

“Once people understand the guardrails, they’re more open to the possibilities,” he said. “But if you skip the ethics part, you lose them before you even begin.”

Human involvement in AI can help with adoption

Because Mahadevan works with journalists, accuracy often tops the list of AI worries he addresses. Journalists frequently ask about hallucinations, which occur when AI programs produce false or misleading information.

“I walk people through why hallucinations happen and how better prompting can reduce them,” Mahadevan said. “And I emphasize the importance of keeping a human in the loop so nothing inaccurate reaches your audience.”

He added that no output from an AI program should ever reach an audience without a human reviewing it first.

“That one principle alone eliminates a huge amount of concern,” Mahadevan said.

Mahadevan said that he’s addressed misgivings about AI that go beyond the ramifications of how everyday work gets done — some worries are rooted in the wellbeing of the planet.

“Some of our younger staff don’t want to use AI at all because they see it as environmentally harmful,” he said. “They’ll say, ‘Every time you run a query, it wastes water.’ That environmental concern is very real, and it’s one of the biggest barriers to adoption.”

Mahadevan said that he’s able to address these worries with his staff by reframing AI through a sustainability lens.

“I talk to them about using AI efficiently,” he said. “If you can get what you need with one well-crafted question, that’s more environmentally responsible than spending two days bouncing between tools just to get a PDF into the format you want.”

Mahadevan told Ragan that one of the biggest obstacles to adoption is a lack of understanding of what automated tools can do and how they affect workflow. He said that a simple but effective way to help soften an AI skeptic’s stance on the technology is to sit down and show how it can help them.

“I’ll open something like Google Antigravity and in five seconds, parse a thousand-page PDF into a spreadsheet that would have taken someone an entire week,” Mahadevan said. “When people see a real and ethical use case that saves that much time, their whole attitude changes.”

He added that any conversation or guidelines about AI should be grounded in a company’s values. That grounding opens dialogue and can soften skeptics.

“If fairness, transparency or environmental responsibility matter to your organization, then those commitments must show up in how you use these tools.”

To register for our AI Horizons Conference, click here.

Sean Devlin is an editor at Ragan Communications.

