Exploring the pros and cons of communicating internally with AI avatars

Lessons from Ragan’s AI Certificate Course.


Most comms pros are already using AI daily to help them craft messaging and refine their outputs. But many haven’t fully considered the implications of what will happen to organizational trust when AI is used to replicate human beings on the job.

During Ragan’s AI Certificate Course, Alexandre Sevigny, associate professor at McMaster University, told the audience that this technological development, while exciting, carries a significant risk of eroding trust if not handled properly.

“People are afraid of being misled,” he said. “It’s one thing to see rabbits bouncing on a trampoline and go, ‘Oh, it was fake, not the biggest deal.’ But when people thought they were buying into a product, subscribing to a political cause or investing in something and they realize they were misled by a deepfake, the shattering of trust is remarkable. People don’t just get annoyed — they feel betrayed.”

Sevigny said that AI-created digital twins — especially of leaders — can do more harm than good when used to communicate with employees if they are not created with care and precision.
