What’s now and what’s next in AI

AI is shifting from isolated tools to cross-departmental agent systems, putting new pressure on data readiness, governance and how results will be measured.

This story is brought to you by Ragan's Center for AI Strategy. Learn more by visiting ragan.com/center-for-ai-strategy.

Robbie Caploe is director of strategic initiatives and advisor to the Center for AI Strategy.


In every ballroom and side exchange among the 200 attendees at Ragan’s AI Horizons conference last week, one thing was clear: The conversation has moved on.
No one was debating whether AI belongs in the organization. Instead, leaders were swapping notes on what breaks when it’s put into real workflows. The focus was on data hygiene, agentic systems that cut across departments, legal and governance gaps, and the growing reality that measurement of AI’s effectiveness, efficiency and connection to bottom-line ROI is coming fast.
For comms leaders, the takeaway was unmistakable. AI maturity is no longer about experimentation. It is about being ready to operate at scale. Here’s how Ragan’s Center for AI Strategy advisors Paavana Kumar, Alex Sevigny and Rowan Toffoli broke it down in a roundtable conversation held in the lounge and hosted by the Center at the conference in Fort Lauderdale.
What are the tension points in where we are with AI?

Alex Sevigny, associate professor of communications management, McMaster University: This past year was the year of implementation. People are further along than they were in 2024 and early 2025. They’re ready for agents and asking the right questions. Next year may be deeper measurement and showing ROI. What else I’m seeing is that everyone here is talking about data hygiene and data readiness. They’ve heard: “No data, no AI.” And it’s a heavier lift than expected.

Paavana Kumar, partner, advertising and marketing, Davis+Gilbert LLP: Agents are moving fast. Legal teams may not even know an agentic workflow is being piloted. They come to me for IP, prompts, tool comparisons (Runway vs Firefly), but not always for agent workflow risk. There are major legal issues depending on training and governance.
Rowan Toffoli, generative AI-powered communications manager, Lockheed Martin: Comms orgs will allocate dedicated AI people and resources: implementation, deployment, training. My job was the first comms AI role at Lockheed, and we just added another. Companies doing this will have an advantage because it won't be a burnout stretch assignment.
You’ve mentioned that we need younger communicators to think like engineers, or at least communicate with engineers, so that comms can lead instead of react.
Sevigny: People are moving from “summon an agent” to ecosystems of autonomous agents that talk to each other. That ties to measurement: Clean data in, clean data out.
So yes, that means communicators need to think more like data engineers or be aware of the role data plays in their AI implementation. We’re experimenting with tools and new approaches to unify data sources and reasoning across earned and social timing to show impact in a more comprehensive and mature manner.
So how do comms leaders help build AI maturity?
Sevigny: Start with the objective. Don’t build just because you can. Then do a data audit: Where is everything, what’s clean vs dirty, what plugs in where?
Kumar: Even CRM data isn’t necessarily structured—it depends how it was set up. Cleaning and structuring can be costly.
The next stage in AI seems to be chaining agent workflows across departments so agents aren’t siloed one-offs. That’s where bottom-line impact comes from. Is that something you can address?
Kumar: Yes, I think so. In advertising, you can imagine cross-disciplinary agent workflows for campaigns: creator sourcing, disclosure obligations, contracts, etc. So, yes, that’s creating agents that connect, scale and work across departments.
Sevigny: Departments might each have their own “agent contingent” (HR, finance, etc.) that needs to coexist and talk. If not, then comms certainly could have a role in helping to connect those dots.
What kind of upskilling is needed if comms teams are to be more active in this? And what are some missteps when it comes to upskilling and AI literacy?
Toffoli: I’d say the skills gaps include technical understanding: what AI can do and how it works (predictive statistics at scale). Also critical thinking and media literacy: don’t treat outputs as gospel, and recognize that hallucinations are a high risk, especially in high-assurance industries. So that comes down to everything from prompting to risk management and assurance.
That said, I see the three big mistakes here as being:
  1. Lack of measurement savvy (to prove that skills translate into better outcomes).
  2. A “go figure it out yourself” managerial mindset that can be overwhelming, result in poor AI adoption experiences and eventually cause people to drop off.
  3. Weak monitoring, recordkeeping and governance. For instance, your AI policy exists but isn’t operationalized. Related is that you absolutely do need repeated training and documentation for regulatory defense and performance measurement.
How do you make AI L&D feel positive, rather than scary?
Kumar: Guardrails shouldn’t feel like red tape—they should enable safe speed.
Toffoli: I’d say focus on meeting people where they are, whether they’re fear-avoidant, casual or super-users. Bring everyone along.
I’d suggest we don’t sell “efficiency” as the main metric. Leaders hear, “Do more with fewer people” and that can be a shortsighted trap.
Toffoli: I agree and like to say that efficiency isn’t the metric. Instead, I want to see “better” as the touchstone for communicators. Sell better, get better pickup, drive better pitch acceptance, focus on better content performance, look for better visibility in answer engines and so on.
Kumar: It’s the same in legal. Clients won’t pay juniors to mine statutes. Instead, they want human judgment. Actually, they want better human judgment, and AI should be enabling that. Professionals become more valuable strategically if they can use the tools well. That’s what we all want, not just doing more with less.
Key Takeaways
  • AI has moved from pilots to production. The conversation has shifted from “Should we use AI?” to “What breaks when it’s embedded in real workflows?” Data readiness, governance and measurement are now table stakes.
  • Data hygiene and agent workflows are the next bottleneck. Cross-department AI agents are coming fast, but dirty data and siloed systems will stall impact. Comms leaders need visibility into how data is structured, shared and measured.
  • AI maturity requires resourcing, not side projects. Orgs pulling ahead are dedicating people, training and governance to AI—not treating it as a stretch assignment or one-off experiment.
