AI for communicators: What’s new and what matters

New risks, regulation news, the future of work and more.

It was the best of times, it was the worst of times for AI.

New tools are being rolled out to make life easier for us all. If things go well, we could even get down to a 3.5-day workweek!

But this week also brings us serious concerns about AI’s role in creating deepfakes, perpetrating colorism and more.

Read on.

Deepfakes, impersonation and skin hue take center stage

Some of the dystopian fears surrounding AI are beginning to come to fruition.

Impersonation and deepfakes are emerging as a critical problem.

Multiple actors have found that their likenesses are being used to endorse products without their consent. From beloved actor Tom Hanks (“Beware!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it.”) to “CBS Mornings” host Gayle King (“I’ve never heard of this product or used it! Please don’t be fooled by these AI videos.”), celebrities are speaking out with alarm, the New York Times reported.

The reputational risks for these entertainers are real, but it’s not hard to imagine more dire deepfakes causing major harm: the CEO of an airline “announcing” a crash, for instance, or a president “threatening” nuclear war. The technology is here, it’s real and it’s frightening.

But criticism of AI being used to mimic entertainers extends beyond deepfakes. Zelda Williams, daughter of the late comedian Robin Williams, strongly condemned attempts by studios and others to recreate her father’s voice using AI. “These recreations are, at their very best, a poor facsimile of greater people, but at their worst, a horrendous Frankensteinian monster, cobbled together from the worst bits of everything this industry is, instead of what it should stand for,” she wrote on Instagram. She also voiced strong support for the actors currently on strike, for whom the use of AI in entertainment is one of the issues at stake.

Outside of entertainment, it’s becoming clear how easy it is for bad actors to evade mandatory watermarks placed on AI-generated images, according to University of Maryland Computer Science Professor Soheil Feizi. Feizi’s research shows not only how easy it is to remove or “wash out” watermarks, but also how simple it is to add fake watermarks to non-AI images to generate false positives.

Many tech giants have looked to watermarks as a way to distinguish AI-generated images from real ones, but it appears that strategy won’t work, sending everyone back to the drawing board.

“We don’t have any reliable watermarking at this point,” Feizi said. “We broke all of them.”

The people who make AI work are also struggling to ensure it is inclusive for people of all races. While it’s more common to test AI for bias in skin tone, The Verge reports that skin hue is often overlooked. In other words, researchers are currently controlling for the lightness and darkness of skin, but not redness and yellowness. 

“East Asians, South Asians, Hispanics, Middle Eastern individuals, and others who might not neatly fit along the light-to-dark spectrum” can be underrepresented because of this, Sony researchers wrote.
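For the technically curious, here is a minimal Python sketch of the distinction. This is our own illustration, not code from the Sony research, and the sample color is hypothetical. In the CIELAB color space, L* captures the light-to-dark “tone” axis that bias tests already measure, while the angle of the (a*, b*) pair captures the red-to-yellow hue direction the researchers say is overlooked.

import math

def srgb_to_lab(r, g, b):
    # Convert an 8-bit sRGB color to CIELAB (D65 white point).
    def lin(c):  # sRGB gamma expansion to linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> XYZ using the standard sRGB/D65 matrix
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):  # XYZ -> Lab nonlinearity
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)  # D65 white
    L = 116 * fy - 16      # L*: lightness, the light-to-dark "tone" axis
    a = 500 * (fx - fy)    # a*: green (-) to red (+)
    b2 = 200 * (fy - fz)   # b*: blue (-) to yellow (+)
    return L, a, b2

# A hypothetical skin-like sRGB sample, chosen purely for illustration
L, a, b2 = srgb_to_lab(198, 134, 102)
hue_angle = math.degrees(math.atan2(b2, a))  # red-to-yellow hue direction
print(f"tone (L*): {L:.1f}, hue angle: {hue_angle:.1f} degrees")

Two faces can share the same L* tone yet sit at very different hue angles, which is how a tone-only bias test can miss underrepresentation along the red-to-yellow axis.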

But it isn’t all gloom and doom in the world of AI. There are positive elements coming, too. 

Recent data and insights on AI and the future of work

Q4 has arrived with mere weeks to go before Ragan’s Future of Communications Conference, and AI news about the future of work is plentiful.

A recent study from Morgan Stanley forecasts that over 40% of the labor force will be impacted by AI in the next three years.

CNBC reports:

Analyst Brian Nowak estimates that the AI technology will have a $4.1 trillion economic effect on the labor force — or affect about 44% of labor — over the next few years by changing input costs, automating tasks and shifting the ways companies obtain, process and analyze information. Today, Morgan Stanley pegs the AI effect at $2.1 trillion, affecting 25% of labor. 

Nowak points to falling “input costs” for companies getting on board, which may help explain why job posts mentioning AI have more than doubled over the past two years, according to LinkedIn’s Global Talent Trends report.

Big investments in automation abound, with Visa earmarking $100 million to invest in generative AI companies “that will impact the future of commerce and payments,” reports TechCrunch.

Meanwhile, IBM announced a partnership with the U.S. Chamber of Commerce Foundation to explore AI’s potential application for better skills-based hiring practices. 

The Chamber created a test case for job seekers, examining whether AI models can help workers identify their skills and then present them in the form of digital credentials.

“If proven possible, then future use cases of AI models could be explored, like matching users to potential employment and education opportunities based on their skill profiles,” explains IBM. 

“They discovered that AI models could in fact take someone’s past experiences—in different data formats—and convert them into digital credentials that could then be validated by the job seeker and shared with potential employers.”

What’s the endgame of all this? In a recent Bloomberg interview, JPMorgan Chase CEO Jamie Dimon offered some utopian ideas for how AI will positively impact the workplace, eventually leading to a 3.5-day workweek. Sounds nice, right? 

Dimon’s comments aren’t far removed from those of other CEOs who believe AI will streamline repetitive tasks and help parse data more efficiently. But that optimism must be tempered with the reality that leaders, and their willingness to approve training and upskilling that helps their workforces operationalize AI applications now, will largely determine which roles are eliminated and which new ones are created.

Bing’s ChatGPT levels up in a big way

One of the major drawbacks of ChatGPT was that it only “knew” things that happened up to September 2021. But now it can search the internet up to the current day to inform its responses, Yahoo Finance reported. The feature is currently available to paid users of GPT-4 and people using ChatGPT’s integration with Bing, now known as Browse with Bing.

Bing also added another helpful feature: You can now use OpenAI’s DALL-E 3 from directly within its ChatGPT integration, making it easier to create generative AI images without the need to open another browser tab. 

All of these changes continue to position Bing as a major player in the generative AI space (even if it’s getting most of its smarts from OpenAI) and open new possibilities for AI use. 

WGA protections may set a precedent for federal regulations

Last week saw the end of the Writers Guild of America’s (WGA) 148-day strike. Among the terms of the agreement were substantial protections against AI encroaching on the writing process.

The WGA regulations say:

  • AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the MBA, meaning that AI-generated material can’t be used to undermine a writer’s credit or separated rights. 
  • A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services. 
  • The company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.
  • The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by the MBA or other law.

While these regulations aren’t federal, they do set an interesting precedent. Over the past few weeks, this column has explored how the U.S. District Court for the District of Columbia ruled that AI-generated images are not subject to copyright, while the U.S. Copyright Office held an open public comment period to determine how it will advise on federal AI regulations going forward.

In a recent visit to Washington, even Elon Musk and Mark Zuckerberg told the Senate that they want federal regulation of AI. The risks and liability of leaving this work to self-regulation are simply too great.

Those risks are underscored by recent court cases, including a recent filing in which authors including Sarah Silverman sued OpenAI for using their works to train its models. Reuters reported on the filing, which alleges that “OpenAI violated U.S. law by copying their works to train an artificial intelligence system that will ‘replace the very writings it copied.’”

Add to that a chorus of state and local governments that are either taking AI for a test run or imposing a temporary ban, and the likelihood of federal regulation seems all the more assured.

Keep watching this column for updates as these stories evolve, or join us for our AI Certificate Course for Communicators and Marketers. Don’t wait: classes start next week!

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

Allison Carter is executive editor of PR Daily. Follow her on Twitter or LinkedIn.
