AI for communicators: What’s new and what’s next

From AI entertainers to big regulatory moves, what you need to know.

AI roundup

We are still deep in the questions phase of AI. Communicators are grappling with deep, existential questions about how we should use AI, how we should respond to unethical AI use and how we can be positive stewards for these powerful technologies.

So far, the answers are elusive. But the only way we’ll get there is by thinking deeply, reading widely and staying up-to-date.

Let’s catch you up on the biggest AI news from the last few weeks and how that applies to communications. 

Tools and uses

Amazon has entered the AI assistant race – with a few notable twists compared to competitors like Microsoft Copilot and Google Bard.

The new Amazon Q is described as a “work companion” by Adam Selipsky, chief executive of Amazon Web Services, in an interview with the New York Times. It can handle tasks like “summarizing strategy documents, filling out internal support tickets and answering questions about company policy,” according to the Times.


The tool was specifically built to handle corporate concerns around privacy and data security raised by other generative AI products. As the Times describes it:

Amazon Q, for example, can have the same security permissions that business customers have already set up for their users. At a company where an employee in marketing may not have access to sensitive financial forecasts, Q can emulate that by not providing that employee with such financial data when asked.

Q can also plug into existing corporate tools like Gmail and Slack. It undercuts the $30 price point of both Google and Microsoft, clocking in at $20 per user per month. 

But technology is already moving far beyond simple virtual assistants. An AI-generated “singer” posted “her” first song on X. It’s … something.

The appearance of “Anna Indiana” (please leave both Hannah Montana and the fine state of Indiana out of this) and the song itself were generated entirely by AI. The effect is uncanny valley in the extreme. But it’s not hard to peer into a not-too-distant future where this technology is refined and companies start creating their own bespoke AI influencers.

Imagine it: a custom spokesperson designed in a lab to appeal to your precise target audience, able to create their own material. This spokesperson will never go rogue and spout conspiracy theories or ask for huge posting fees. But they also won’t be, well, human. They’ll necessarily lack authenticity. Will that matter? 

The entertainment industry is grappling with similar issues as “synthetic performers” – or AI-generated actors – become a more concrete reality in film and television. While the new SAG-AFTRA contract puts some guardrails around the use of these performers, there are still so many questions, as Wired reports. What about AI-generated beings who have the vibes of Denzel Washington but aren’t precisely like him? Or if you train an AI model to mimic Jim Carrey’s physical humor, does that infringe on Carrey?

So many questions. Only time will have the answers. 

Risks

Yet another media outlet has seemingly passed off AI-generated content as if it were written by humans. Futurism found that authors of some articles on Sports Illustrated’s website had no social footprint and that their photographs were created with AI. The articles they “wrote” also contain head-scratching lines no human would write, such as opining on how volleyball “can be a little tricky to get into, especially without an actual ball to practice with.”

Sports Illustrated’s publisher denies that the articles were created with AI, instead insisting an outside vendor wrote the pieces and used dummy profiles to “protect author privacy.” If this all sounds familiar, it’s because Gannett went through an almost identical scandal with the exact same company a month ago, including the same excuses and denials.

These examples underscore the importance of communicating with transparency about AI – and the need to carefully ensure vendors are living up to the same standards as your own organization. The results can be disastrous, especially in industries where the need for trust is high – like, say, media.

But the risks of AI in the hands of bad actors extend far beyond weird reviews for sporting equipment. Deepfakes are proliferating, spreading misinformation about the ongoing war between Israel and Hamas in ways designed to tug on heartstrings and stoke anger.

The AP reports:

In many cases, the fakes seem designed to evoke a strong emotional reaction by including the bodies of babies, children or families. In the bloody first days of the war, supporters of both Israel and Hamas alleged the other side had victimized children and babies; deepfake images of wailing infants offered photographic ‘evidence’ that was quickly held up as proof.

It all serves to further polarize opinion on an issue that’s already deeply polarized: People find the deepfakes that confirm their own already-held beliefs and become even more entrenched. In addition to the risks to people on the ground in the region, it makes communicators’ jobs more difficult as we work to separate truth from fiction and communicate with internal and external audiences whose feelings only grow more extreme.

Generative AI is also changing the game in cybersecurity. Since ChatGPT burst onto the scene last year, there has been an exponential increase in phishing emails. Scammers are able to use generative AI to quickly churn out sophisticated emails that can fool even savvy users, according to CNBC. Be on guard and work with IT to update internal training to handle these new threats.

Law and regulation

The regulatory landscape for AI is being written in real time, notes Nieman Lab founder Joshua Benton in a piece that urges publishers to take a beat before diving headfirst into using large language models (LLMs) to produce automated content.

Benton’s argument focuses specifically on the most recent ruling in comedian and author Sarah Silverman’s suit against Meta over the inclusion of copyrighted sections from her book, “The Bedwetter,” in its LLMs. Despite Meta’s LLM acquiring the text through a pirated copy, Judge Vince Chhabria ruled in the tech giant’s favor and gave Silverman a window to resubmit her claims.

Benton writes:

Chhabria is just one judge, of course, whose rulings will be subject to appeal. And this will hardly be the last lawsuit to arise from AI. But it lines up with another recent ruling, by federal district judge William Orrick, which also rejected the idea of a broad-based liability based on using copyrighted material in training data, saying a more direct copy is required.

If that is the legal bar — an AI must produce outputs identical or near-identical to existing copyrighted work to be infringing — news companies have a very hard road ahead of them.

Cases like this also raise the question: How much more time and how many more resources will be exhausted before federal regulation sets some standard precedents?

While Meta may count the initial ruling as a victory, other big tech players continue to express the need for oversight. In the spirit of Elon Musk and Mark Zuckerberg visiting the Senate in September to voice support for federal regulation, former Google CEO Eric Schmidt said that individual company guardrails around AI won’t be enough.

Schmidt told Axios that he believes the best regulatory solution would involve the formation of a global body, similar to the Intergovernmental Panel on Climate Change (IPCC), that would “feed accurate information to policymakers” so that they understand the urgency and can take action.

Global collaborations are already in the works. This past weekend, the U.S. joined Britain and over a dozen other countries to unveil what one senior U.S. official called “the first detailed international agreement on how to keep artificial intelligence safe from rogue actors,” reports Reuters.

It’s worth noting that, while this 20-page document pushes companies to design secure AI systems, there is nothing binding about it. In that respect, it rings similar to the White House’s executive order on responsible AI use last month – good advice with no tangible enforcement or application mechanism.

But maybe we’re getting ahead of ourselves. The best case for effective federal legislation regulating AI will emerge when a pattern of state-level efforts takes flight.

In the latest example, Michigan Governor Gretchen Whitmer plans to sign legislation aimed at curbing irresponsible or malicious AI use.

ABC News reports:

So far, states including California, Minnesota, Texas and Washington have passed laws regulating deepfakes in political advertising. Similar legislation has been introduced in Illinois, New Jersey and New York, according to the nonprofit advocacy group Public Citizen.

Under Michigan’s legislation, any person, committee or other entity that distributes an advertisement for a candidate would be required to clearly state if it uses generative AI. The disclosure would need to be in the same font size as the majority of the text in print ads, and would need to appear “for at least four seconds in letters that are as large as the majority of any text” in television ads, according to a legislative analysis from the state House Fiscal Agency.

One aspect of this anticipated legislation that could set a federal precedent is its requirement that federal and state-level campaign ads created using AI be labeled as such.

You can take this “start local” approach to heart by getting the comms function involved early in the internal creation of AI rules and guidelines at your organization. Staying abreast of legal rulings, state and federal legislation and global developments will not only empower comms to earn authority as an early adopter of the tech, but also strengthen your relationships with those who are fearful of or hesitant about AI’s potential risks.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments! You can also get much more information about using AI in your writing during our upcoming Writing & Content Strategy Virtual Conference! 

Allison Carter is executive editor of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

