AI for communicators: What’s new and what’s next

Risks, regulations and new tools abound.

AI continues hurtling forward, bringing with it new promise and new peril. From threats to the world’s elections to hope for new kinds of jobs, let’s see how this technology is impacting the role of communicators this week.

Risks

2024 is likely the biggest election year in the history of the world. Nearly half the planet’s inhabitants will head to the polls this year, a major milestone. But that massive wave of humanity casting ballots comes at the precise moment that AI deepfakes are altering the information landscape, likely forever.

In both India and Indonesia, AI is digitally resurrecting long-dead politicians to weigh in on current elections. A likeness of M Karunanidhi (date of death: 2018), former leader of India’s Dravida Munnetra Kazhagam (DMK) party, delivered an 8-minute speech endorsing current party leaders. Indonesian general, president and strongman Suharto (date of death: 2008) appeared in a social media video touting the benefits of the Golkar party.

Neither video is intended to fool anyone into thinking these men are still alive. Rather, they’re using the cachet and popularity of these deceased leaders to drum up votes in today’s elections. While these deepfakes may not be overtly deceptive, they’re still putting words these men never spoke into their virtual mouths. It’s an unsettling prospect and one that could pay big dividends in elections. There’s no data yet on how successful the strategy might be, but we’ll have it soon, for better or worse.

Major tech companies including Google, Microsoft, Meta, OpenAI, Adobe and TikTok intend to sign an “accord” meant to help identify and label AI deepfakes amid these vital elections, the Washington Post reported. It stops short of banning such content, however, merely committing to more transparency around what’s real and what’s AI.

“The intentional and undisclosed generation and distribution of deceptive AI election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the accord says.

But while the intentions may be good, the technology isn’t there yet. Meta has committed to labeling AI imagery created with any generative tool, not just its own, but it’s still developing the tools to do so. Will transparency catch up in time to act as a safeguard for this year’s many elections?

Indeed, OpenAI CEO Sam Altman admits that it’s not the threat of artificial intelligence spawning killer robots that keeps him up at night; it’s how everyday people might use these tools.

“I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong,” Altman said during a video call at the World Governments Summit.

One example could be technology for tracking employees’ Slack messages. More than 3 million employees at some of the world’s biggest companies are already being observed by Aware AI software, designed to track internal sentiment and preserve chats for legal reasons, Business Insider reported. It can also flag other problematic behaviors, such as bullying or sexual harassment.
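
For a rough feel of what such monitoring involves, here’s a deliberately simplified sketch of sentiment scoring over chat messages. It uses a hypothetical word list; Aware’s actual methodology is not public and certainly relies on far more sophisticated models.

```python
# A deliberately simplified, hypothetical illustration of lexicon-based
# sentiment scoring over chat messages. This is NOT Aware's methodology;
# commercial tools use trained language models, not word lists.
import string

POSITIVE = {"great", "thanks", "love", "excited", "helpful"}
NEGATIVE = {"frustrated", "angry", "unfair", "hate", "worried"}

def score_message(text: str) -> int:
    """Crude sentiment score: positive word count minus negative word count."""
    words = [w.strip(string.punctuation) for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def channel_sentiment(messages: list[str]) -> float:
    """Average message score, as a rough morale signal for a channel."""
    return sum(score_message(m) for m in messages) / max(len(messages), 1)

messages = [
    "Thanks, that demo was great!",
    "I'm frustrated with how this project is being handled.",
    "Love the new roadmap, excited to start.",
]
print(f"Channel sentiment: {channel_sentiment(messages):+.2f}")  # +1.00
```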

The CEO of Aware says its tools aren’t intended to be used for decision-making or disciplinary purposes. Unsurprisingly, this promise is being met with skepticism by privacy experts.

“No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” said Amba Kak, executive director of the AI Now Institute at New York University.

That’s where we are right now: a state of good intentions for using technology that is powerful enough to be dangerous, but not powerful enough to be fully trusted.

Regulation, ethics and government oversight

The push for global AI regulation shows no signs of slowing. Notable developments include a Vatican friar leading an AI commission alongside Bill Gates and Italian Prime Minister Giorgia Meloni to curb the influence of ChatGPT in Italian media, and NVIDIA CEO Jensen Huang calling for each country to cultivate its own sovereign AI strategy and own the data it produces.

“It codifies your culture, your society’s intelligence, your common sense, your history – you own your own data,” Huang told UAE’s Minister of AI Omar Al Olama earlier this week at the World Governments Summit in Dubai.

In the U.S., federal AI regulation took several steps forward last month when the White House followed up on its executive order announced last November with an update on key, coordinated actions being taken at the federal level. Since then, other federal agencies have followed suit, issuing new rules and precedents that promise to directly impact the communications field.

Last week, the Federal Communications Commission (FCC) officially banned AI-generated robocalls to curb concerns about election disinformation and voter fraud. 

According to the New York Times:

“It seems like something from the far-off future, but it is already here,” the F.C.C. chairwoman, Jessica Rosenworcel, said in a statement. “Bad actors are using A.I.-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters.”

Those concerns came to a head late last month, when thousands of New Hampshire voters received an unsolicited robocall featuring a faked voice of President Biden instructing them to abstain from voting in the first primary of the election season. The New Hampshire attorney general’s office announced this week that it had opened a criminal investigation into a Texas-based company it believes is behind the robocall. The caller ID was falsified to make it seem as if the calls were coming from the former New Hampshire chairwoman of the Democratic Party.

This is a vital area for communicators to monitor. Proactively and clearly communicate how to spot scams and how to tell your organization’s real calls and emails from fakes. Don’t wait until you’re being spoofed; communicate now.

Closer to the communicator’s purview is a precedent set in recently published guidance from the U.S. Patent and Trademark Office, which states it will grant its official legal protections only to humans, citing Biden’s aforementioned executive order in claiming that “patents function to incentivize and reward human ingenuity.”

The guidance clarifies that, though inventions made using AI are not “categorically unpatentable,” the AI used to make them cannot be legally classified as the inventor. At least one human must be named as the inventor for any given claim, which opens that claim’s ownership up to potential review if the named human did not create a significant portion of the work.

Organizations that want to copyright or patent work using GenAI would do well to codify their standards and documentation for explaining exactly how much of the work was created by humans. 
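
One hypothetical way to codify that documentation is a provenance record attached to each asset. The sketch below invents its own fields; there is no standard schema, and none of this reflects official USPTO requirements.

```python
# Hypothetical provenance record for a piece of GenAI-assisted work.
# There is no standard format and nothing here reflects official USPTO
# requirements; the fields are one invented example of what an
# organization might log to support a future patent or copyright claim.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProvenanceRecord:
    asset: str                     # what was produced
    human_contributors: list[str]  # named people behind the work
    ai_tools: list[str]            # generative tools used, if any
    human_contribution: str        # what the humans actually created
    created: date = field(default_factory=date.today)

record = ProvenanceRecord(
    asset="Q3 campaign hero image",
    human_contributors=["J. Designer"],
    ai_tools=["(image generator name)"],
    human_contribution=(
        "Wrote the creative brief, selected and composited outputs, "
        "and repainted the final foreground by hand."
    ),
)
print(record)
```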

That may be why the PR Council recently updated its AI guidelines “to include an overview of the current state of AI, common use cases across agencies and guidance on disclosure to clients, employee training and more.”

The Council added that it created a cross-disciplinary team of experts in ethics, corporate reputation, digital, and DE&I to update the guidelines.

The updates state:

  • A continuum has emerged that delineates phases in AI’s evolution within firms and highlights its implications for serving clients, supporting teams and advancing the public interest.
  • While AI use cases, especially among Creative teams, have expanded greatly, the outputs are not final, client-ready work due to copyright and trademark issues and the acknowledgment that human creativity is essential for producing unique, on-strategy outputs.
  • With AI being integrated into many existing tools and platforms, agency professionals should stay informed about new capabilities, challenges and biases. 
  • Establishing clear policies regarding the use of generative AI, including transparency requirements, is an increasing need for agencies and clients. This applies to all vendors, including influencer or creator relationships. 
  • Despite predictions that large language models will eliminate hallucinations within 18 months, proper sourcing and fact-checking remain crucial skills. 
  • Experts continue to advise caution when inputting confidential client information, due to mistrust of promised security and confidentiality measures.  
  • Given the persistent risk of bias, adhering to a checklist to identify and mitigate bias is critical. 

These recommendations function as a hyperlocal safeguard for risk and reputation that communicators can own and operationalize throughout the organization. 

Tools and innovations

AI’s evolution continues to hurtle ahead at lightning speed. We’re even getting rebrands and name changes, as Google’s old-fashioned-sounding Bard becomes the more sci-fi Gemini. The new name comes with a new mobile app to enable AI on the go, along with Gemini Advanced, a $19.99/month service that uses Google’s “Ultra 1.0 model,” which the company says is more adept at complex, creative and collaborative tasks.

MIT researchers are also making progress on an odd issue with chatbots: their tendency to crash if you talk to them for too long. You can read the MIT article for the technical details, but here’s the bottom line for end users: “This could allow a chatbot to conduct long conversations throughout the workday without needing to be continually rebooted, enabling efficient AI assistants for tasks like copywriting, editing, or generating code.”
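
In short, the fix keeps the very start of a conversation in the model’s working memory while sliding a window over everything else. The sketch below is a loose, application-level analogy of that idea applied to chat history; the actual research operates on the model’s internal attention cache, and every name in it is illustrative.

```python
# A loose, application-level analogy of the researchers' "keep the start,
# slide a window over the rest" idea. The actual method manages tokens
# inside the model's attention cache; this sketch just manages chat
# history the same way, so context never grows without bound.
from collections import deque

class SlidingChatHistory:
    def __init__(self, keep_first: int = 4, window: int = 50):
        self.keep_first = keep_first               # opening messages, always kept
        self.first: list[str] = []
        self.recent: deque = deque(maxlen=window)  # sliding window of the rest

    def add(self, message: str) -> None:
        if len(self.first) < self.keep_first:
            self.first.append(message)
        else:
            self.recent.append(message)            # deque evicts the oldest itself

    def context(self) -> list[str]:
        """Messages to hand the model on the next turn."""
        return self.first + list(self.recent)

history = SlidingChatHistory(keep_first=2, window=3)
for i in range(10):
    history.add(f"message {i}")
print(history.context())
# ['message 0', 'message 1', 'message 7', 'message 8', 'message 9']
```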

Microsoft, one of the leading companies in the AI arms race, has shared three major trends it foresees for the year ahead. The list likely aligns with its own release plans, but nonetheless, keep an eye on these developments over the next year:

  • Small language models: The name is a bit misleading: these models still have billions of parameters. But they’re more compact than the more famous large language models, often small enough to run on a mobile phone, and feature a curated data set for specific tasks.
  • Multimodal AI: These models can understand inputs via text, video, images and audio, offering more options for the humans seeking help.
  • AI in science: While many of us in comms use AI to generate text, conduct research or create images, scientists are using it to improve agriculture, fight cancer and save the environment. Microsoft predicts big improvements in this area moving forward. 

AI had a presence at this year’s Super Bowl, though not as pronounced as, say, crypto’s in 2022. Still, Microsoft’s Copilot product got an ad, as did some of Google’s AI features, Adweek reported. AI also featured in ads from non-tech brands like Avocados from Mexico (GuacAImole will help create guac recipes) and as a way to help Etsy shoppers find gifts.

But AI isn’t just being used as a marketing tool; it’s also being used to deliver ads to viewers. “Disney’s Magic Words” is a new spin on metadata. Advertisers on Disney+ or Hulu can tie their advertising not just to specific programs, but to specific scenes, Reuters reported. This will allow brands to tailor their ads to fit the mood or vibe of a precise moment. No more cutting away from an intense, dramatic scene to a silly, high-energy ad. This could help increase positive brand sentiment by more seamlessly factoring emotion into programmatic ad choices.
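
To make the concept concrete, here’s a hypothetical sketch of what scene-level ad matching could look like. Disney hasn’t published how Magic Words works under the hood; every field, tag and brand below is invented for illustration.

```python
# Hypothetical sketch of scene-level ad matching. Disney hasn't published
# its schema; every field, tag and mood label here is invented.
from __future__ import annotations

SCENES = [
    {"show": "Drama X", "scene_id": 12, "mood": "tense"},
    {"show": "Drama X", "scene_id": 13, "mood": "uplifting"},
]

ADS = [
    {"brand": "Brand A", "tone": "calm"},
    {"brand": "Brand B", "tone": "uplifting"},
]

# Which ad tones plausibly fit which scene moods.
COMPATIBLE = {"tense": {"calm"}, "uplifting": {"uplifting"}}

def pick_ad(scene: dict) -> dict | None:
    """Return the first ad whose tone fits the scene's mood, if any."""
    fits = COMPATIBLE.get(scene["mood"], set())
    return next((ad for ad in ADS if ad["tone"] in fits), None)

for scene in SCENES:
    ad = pick_ad(scene)
    print(scene["scene_id"], scene["mood"], "->", ad["brand"] if ad else "house ad")
# 12 tense -> Brand A
# 13 uplifting -> Brand B
```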

AI at work 

The question of whether AI will take away jobs has loomed large since ChatGPT came on the scene in late 2022. While there’s no shortage of studies, facts and figures analyzing this trend, recent reports suggest that the answer depends on where you sit in an organization.

A recent report in the Wall Street Journal points to recent layoffs at companies like Google, Duolingo and UPS as examples where roles were eliminated in favor of productivity automation strategies, and suggests that managers may find themselves particularly vulnerable.

The report reads:

“This wave [of technology] is a potential replacement or an enhancement for lots of critical-thinking, white-collar jobs,” said Andy Challenger, senior vice president of outplacement firm Challenger, Gray & Christmas.

Since last May, companies have attributed more than 4,600 job cuts to AI, particularly in media and tech, according to Challenger’s count. The firm estimates the full tally of AI-related job cuts is likely higher, since many companies haven’t explicitly linked cuts to AI adoption in layoff announcements.

Meanwhile, the number of professionals who now use generative AI in their daily work lives has surged. A majority of more than 15,000 workers in fields ranging from financial services to marketing analytics and professional services said they were using the technology at least once a week in late 2023, a sharp jump from May, according to Oliver Wyman Forum, the research arm of management-consulting group Oliver Wyman, which conducted the survey.

It’s not all doom and gloom, however. “Job postings on LinkedIn that mention either AI or generative AI more than doubled worldwide between July 2021 and July 2023 — and on Upwork, AI job posts increased more than 1,000% in the second quarter of 2023, compared to the same period last year,” reports CNBC. 

Of course, as companies are still in an early, experimental phase of integrating AI into workflows, the jobs centered on that work carry a high level of risk and uncertainty.

That may be why efforts are afoot to educate those who want to work in this emerging field.

Earlier this week, Reuters reported that Google pledged €25 million to help Europeans learn how to work with AI. Google accompanied the announcement by opening applications for social organizations and nonprofits to help reach those who would benefit most from the training. The company also expanded its online AI training courses to include 18 languages and announced “growth academies” that it claims will help companies using AI scale their business.

“Research shows that the benefits of AI could exacerbate existing inequalities — especially in terms of economic security and employment,” Adrian Brown, executive director of the Centre for Public Impact, the nonprofit collaborating with Google on the initiative, told Reuters.

“This new program will help people across Europe develop their knowledge, skills and confidence around AI, ensuring that no one is left behind.”

While it’s unclear what industries or age demographics this initiative will target, one thing’s certain: the next-generation workforce is eager to embrace AI.

A 2024 trends report from Handshake, a career website for college students, found that 64% of tech majors and 45% of non-tech majors graduating in 2024 plan to develop new skills that will allow them to use gen AI in their careers.

“Notably, students who are worried about the impact of generative AI on their careers are even more likely to plan on upskilling to adapt,” the report found.

These numbers suggest there’s no time to waste in folding AI education into your organization’s learning and development offerings. The best way to ease obsolescence concerns among your workforce is to integrate training into their career goals and development plans, standardize that training across all relevant functions and skill sets, then make it a core part of your employer brand.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.
