AI for communicators: What’s new and what matters

Fresh data on the jobs most at risk, how Google and OpenAI want to scrape your content and why Disney formed an AI task force.

It’s hard to believe that generative AI only exploded into the public consciousness with the broad release of ChatGPT this past November. Since then, it’s upended so many aspects of life — and threatens to change many more.

Now, we’re looking at fresh data on public concerns around AI and the job roles automation is most likely to upend, how Google and OpenAI want to scrape your content for their programs, and why Disney formed an AI task force to ensure its responsible use across the enterprise. Here’s what’s happened just over the past two weeks in the world of AI — brace yourself, it’s a lot.

The impact on communications and specific job functions

What worries communicators and marketers most about AI? This was the core question in a joint study that Ragan conducted with The Conference Board. “Practitioners of both disciplines expressed concern about a variety of open questions that remain around the technology,” wrote PR Daily Executive Editor Allison Carter in her summary of the findings. “Chief among them were lack of accuracy and misinformation, legal uncertainties and data security and privacy.”

Meanwhile, a fresh report from McKinsey looks at what jobs and industries are most likely to be upended by AI. We were surprised to see comms and content jobs didn’t show up on the list, but the research found that food service work, customer service and sales, office support and production work (think manufacturing) are most likely to be impacted by automation.

According to McKinsey:

By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated—a trend accelerated by generative AI. However, we see generative AI enhancing the way STEM, creative, and business and legal professionals work rather than eliminating a significant number of jobs outright. Automation’s biggest effects are likely to hit other job categories.

Does comms fall into the ‘creative’ or ‘business professionals’ categories? Both, if you’re doing it right! These numbers offer another reminder that your ability to get creative with the work you do, and frame comms outcomes around larger business goals, will further demonstrate your value in a changing business ecosystem.

Google and OpenAI want to feed your content into their learning models.

Google’s recent submission to the Australian government’s review of the country’s AI regulatory framework includes the company’s assertion that generative AI programs should be allowed to scrape the entire internet for content. Such a change would greatly upend current copyright law.

The Guardian reports:

The company has called for Australian policymakers to promote “copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems”.

The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google.

When asked how such a system would work, a spokesperson pointed to a recent blog post by Google where the company said it wanted a discussion around creating a community-developed web standard similar to the robots.txt system that allows publishers to opt out of parts of their sites being crawled by search engines.

Meanwhile, ChatGPT creator OpenAI has released a new web crawler, GPTBot, that can scrape your data, though blocking it seems easy enough for most web teams to handle.

Both examples make the case for comms working in close collaboration with digital teams to understand how best to keep proprietary content from being scraped. As The Guardian piece suggests, a paywall may do the trick, though that solution won’t work for every business.

Opt-out protocols are often made intentionally dense or tricky (just ask your marketing colleagues), so it’s best to get this conversation rolling now. You can ensure that happens by forming a cross-functional AI task force at your organization. In our humble opinion, it can and should be led by comms.

Why Disney formed an AI task force

Speaking of task forces, Disney made news this week by announcing the formation of an AI task force that will investigate how automation can be applied across its business units to cut costs, streamline operations and, yes, enhance experiences in its theme parks.

Reuters reports:

Launched earlier this year, before the Hollywood writers’ strike, the group is looking to develop AI applications in-house as well as form partnerships with startups, three sources told Reuters.

As evidence of its interest, Disney has 11 current job openings seeking candidates with expertise in artificial intelligence or machine learning.

The positions touch virtually every corner of the company – from Walt Disney Studios to the company’s theme parks and engineering group, Walt Disney Imagineering, to Disney-branded television and the advertising team, which is looking to build a “next-generation” AI-powered ad system, according to the job ad descriptions.

The task force’s focus on jobs isn’t at all surprising. With the WGA writers’ and SAG-AFTRA actors’ strikes ongoing, AI is at the center of concerns about talented people being replaced with machine learning. One need only look at how Netflix was raked over the coals for posting a high-paying machine learning job to understand how delicate the idea of creating new automation-focused roles in the entertainment industry is right now, and how doing so might be ill-advised from a risk and reputation perspective until agreements are reached with the unions. With visual effects artists at Disney-owned Marvel Studios recently forming a union to ensure better working conditions, expect this to remain a delicate point for some time.

Forming an internal AI task force promises benefits for communicators outside the entertainment industry, too. It’s incumbent on you to nurture communication touchpoints with managers around what efficiency can be gained from automating certain tasks, and where automation might negatively affect both the quality of output and employee morale. This should also drive conversations about upskilling employees to work with AI.

The tendency of leadership to maximize efficiency also raises another existential question: if automation is deployed to streamline productivity, how do the shifting goalposts affect each role’s scope of work? Working in collaboration with managers and HR is the best way to figure this out, and a task force is the perfect cross-functional solution for ensuring those conversations happen.

AI image generation has an inclusivity problem.

Employees are increasingly using AI to punch up their headshots, but the Wall Street Journal reports that this trend is concerning to some who have seen the tools change their skin tone and facial structure.

According to WSJ:

Danielle DeRuiter-Williams, a 38-year-old diversity, equity and inclusion specialist based in New Orleans, used a service called AI SuitUp. She said she was surprised that the AI-generated photos narrowed her nose and lightened her skin.

“It was just more comical than anything,” she said. “And just a demonstration to me of how in the nascent stages a lot of this technology is, especially when it comes to its adaptability to nonwhite people.”

Innate racial bias in AI image tools is just one of the problems this tech brings; another big one is the lack of legal precedent around modifying images owned by another party. While some AI image software providers are offering legal protections for those who use their services, it’s likely smart not to integrate this tech into your workflows just yet. Of course, consulting counsel (and hopefully your task force) will help you stay on top of developments as safeguards emerge and regulations solidify.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

One Response to “AI for communicators: What’s new and what matters”

    Terry says:

    Who decided AI was a good idea? How will AI potentially replace human writers, editors and designers? Can AI duplicate copyrighted content (without the consent of the copyright holders)?
