The AI adoption narrative hinges on governance over prohibition
Comms pros hold the key to shaping how AI adoption processes are discussed across their organizations.
AI adoption inside organizations doesn’t fail because of the tech — it breaks down because of a lack of trust. Communications pros can guide the adoption process by leaning into their core competencies. That means translating the complexities of AI, setting clear guardrails and helping employees feel confident using new tools — not fearful of them.
At Ragan’s AI Horizons Conference, Lisa Low, associate professor of practice, public relations and strategic communication management at Texas Tech University’s College of Media and Communication, led a panel that discussed the ways that communicators can guide the AI adoption process.
She encouraged the audience to view AI adoption not as a mandate from on high, but as a trust-building exercise.
“It’s about leading with curiosity, but also recognizing and respecting that other colleagues might not share our excitement and they might not share our level of trust with the technology yet,” Low said. “To truly be a trusted advisor, you need to lead with that level of understanding.”
Building trust through safe AI use
When employees have a safe, controlled place to experiment with automation, AI adoption is far more likely to succeed. Faith McGrain, innovation, technology and transformation communications manager at NiSource, said that encouraging proper AI use from leadership to the rest of the organization helps root the adoption process in positive exploration and trust.
“We think it’s valuable to put yourself out there and try something new, and if it doesn’t go as planned, at least it was in a safe environment,” McGrain said. “Then you have that experience to build upon. We have our leaders share their experiences all the way up to the CEO. AI use is being discussed in more than one avenue, and that visibility really matters.”
She added that by controlling the AI adoption narrative, comms pros can reframe AI governance: not as control imposed on employees, but as guardrails that grant permission.
“Even when we do have strong guidance, we’re providing pathways for working within that guidance,” McGrain told the audience. “If there’s a business case for doing something differently, here’s what you need to do to receive organizational buy-in.”
Curiosity should be the entry point
Dan Hebert, special AI advisor to the director general of communications at the Canada Revenue Agency, said that at his highly regulated organization, AI experimentation began with curiosity and a desire to show how the tech could be used on the job. But this couldn’t happen without a deep sense of care and clear guidelines to ensure outputs are accurate and usage complies with security requirements.
“At the beginning of my experimentation, I asked AI to write a news release to launch tax season — and it was nearly perfect, except it quoted the start of the U.S. tax season,” he said. “That was a pretty easy fix, but it immediately showed how powerful the tool was.”
That early success also made clear that curiosity alone wasn’t enough to ensure that adoption went smoothly. Hebert said due to the CRA’s tight regulations, any experimentation had to be paired with clear guardrails. This was necessary both to protect sensitive data and to address employee concerns about how AI might change their work. He said not factoring these realities in would have only driven AI use underground, creating more risk in the future.
“We were pretty sure a lot of staff were already using these tools in a kind of shadow IT situation,” Hebert said. “So we decided we needed to get a handle on it, but not by shutting it down. We put structure around it and managed the change for people who were understandably nervous about what this technology could mean.”
Rather than pushing through AI mandates blindly or forcing prohibitive measures on employees with no plan, the panelists emphasized that AI adoption works best when trust is built intentionally and communicators lead the charge.
Sean Devlin is an editor at Ragan Communications.