AI for communicators: What’s new and what’s next

New risks and regulations lead the news.


This week’s update is a tug-of-war between new technological advancements that bring stunning opportunities and regulation that seeks to give shape to this radical new technology and keep bad actors from running amok with its power.

Read on to find out what communicators need to be aware of this week in the chaotic, promising world of AI. 

Risks

As AI grows more sophisticated and powerful, it raises new risks that communicators never had to worry about before. This issue was exemplified by a bizarre case out of Maryland where a high school athletics director used AI to make it sound as though his principal was making racist and antisemitic remarks.

After damaging the principal’s reputation, the athletics director was arrested on a variety of charges. How this case plays out is certain to have legal ramifications, but the sheer ease with which a regular person was able to clone his boss’ voice to make him look bad should give all communicators pause. Be on the lookout for these devious deepfakes, and be prepared to push back. 

 

But artist FKA twigs is taking a unique approach to combating deepfakes: creating her own. In written testimony submitted to the U.S. Senate, she said:

AI cannot replicate the depth of my life journey, yet those who control it hold the power to mimic the likeness of my art, to replicate it and falsely claim my identity and intellectual property. This prospect threatens to rewrite and unravel the fabric of my very existence. We must enact regulation now to safeguard our authenticity and protect against misappropriation of our inalienable rights.

FKA twigs says she intends to use her digital doppelganger to handle her social media presence and fan outreach while she focuses on her music. It’s a novel tactic, and potentially one we’ll see more of in the future.

In other legal news, yet another lawsuit has been filed taking aim at what materials are used to train LLMs. 

Eight newspapers, including the Chicago Tribune and the Denver Post, are suing OpenAI and Microsoft, alleging that millions of their articles were used to train Microsoft Copilot and ChatGPT, the New York Times reported.

Specifically, the suit alleges that the bots served up content available only behind the papers’ paywalls, relieving readers of the need to subscribe to access it. Similarly, a group of visual artists is suing Google, alleging that their artwork was used to train Google’s visual AI models. These cases will take years to resolve, but the outcomes could shape the future of AI.

We’re also beginning to see consumer backlash against AI tools in places where users simply don’t want them. Axios reports that Meta’s aggressive push to incorporate AI into the search bars of Facebook, Instagram and WhatsApp is drawing customer complaints. While Axios pointed out that this is the historical pattern for new feature launches on Meta apps – initial complaints followed by an embrace of the tool – AI fatigue is a trend to watch.

That fatigue could also be playing out around the second global AI summit, co-hosted by Great Britain and South Korea and held largely virtually. Reuters reports that the summit is drawing less interest and lower projected attendance.

Is the hype bubble bursting? 

Regulation

The White House announced a series of key AI regulatory actions, building on President Biden’s executive order from November with a detailed list of interdepartmental commitments and initiatives. 

While the initial executive order lacked concrete timelines and specifics on how its ambitious tasks would be fulfilled, this recent announcement begins by mapping its updates and tying progress to specific timeframes:

Today, federal agencies reported that they completed all of the 180-day actions in the E.O. on schedule, following their recent successes completing each 90-day, 120-day, and 150-day action on time. Agencies also progressed on other work tasked by the E.O. over longer timeframes.

Updates include:

  • Managing risks to safety and security. This effort directed agencies to acknowledge the safety and security risks of AI around infrastructure, biological warfare and software vulnerabilities. It included the development of a framework to prevent AI from being used to engineer bioweapons, documents on generative AI risks that are available for public comment, safety and security guidelines for operators of critical infrastructure, the launch of a safety and security board to advise the secretary of Homeland Security, and a Department of Defense pilot of new AI tools to test for vulnerabilities in government software systems.
  • AI’s energy impact. Dubbed “Harnessing AI for good” in a delicate dance around accusations of “wokeness,” this portion of the update also shared details of how the government plans to advance AI for scientific research and collaborate more with the private sector. These include funding opportunities led by the Department of Energy to support the development of energy-efficient algorithms and hardware. Meetings are on the books with clean energy developers, data center owners and operators, and local regulators to determine how AI infrastructure can scale with clean energy in mind. There’s also an analysis of the risks AI will pose to our nation’s power grid in the works.

The update also featured progress on how the Biden administration is bringing AI talent into the federal government, which we’ll explore in the “AI at work” section below.

Overall, this update doubles as an example of how communicators can marry progress to a timeline to foster strategic, cross-departmental accountability. Those working in the software and energy sectors should also pay close attention to the commitments outlined above, and evaluate whether it makes sense for their organization to get involved in the private sector partnerships.

On the heels of this update, the Department of Commerce’s National Institute of Standards and Technology released four draft publications aimed at improving the safety, security and trustworthiness of AI systems. These include an effort to develop advanced methods for determining which content is produced by humans and which is produced by AI.

“In the six months since President Biden enacted his historic Executive Order on AI, the Commerce Department has been working hard to research and develop the guidance needed to safely harness the potential of AI, while minimizing the risks associated with it,” said U.S. Secretary of Commerce Gina Raimondo. “The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time. With these resources and the previous work on AI from the department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

While this progress on federal regulation shouldn’t be understated, TIME reported OpenSecrets data revealing that 451 groups lobbied the federal government on artificial intelligence in 2023, nearly triple the 158 that did so in 2022.

“And while these companies have publicly been supportive of AI regulation, in closed-door conversations with officials they tend to push for light-touch and voluntary rules, say Congressional staffers and advocates,” writes TIME. 

Whatever the intentions of these lobbyists are, it’ll be interesting to watch how their efforts fit in with the government’s initiatives and commitments. Public affairs leads should be mindful of how their efforts can be framed as a partnership with the government, which is offering ample touchpoints to engage with the private sector, or perceived as a challenge to national security under the guise of “innovation.” 

AI at work

The White House’s 180-day update also includes details about how the government will prepare the workforce to accelerate its AI applications and integrations. This includes a requirement that all government agencies apply newly “developed bedrock principles and practices for employers and developers to build and deploy AI safely and in ways that empower workers.”

In this spirit, the Department of Labor published a guide for federal contractors to answer questions about legal obligations and equal employment opportunities. Whether your organization works with the government or not, this guide is a model to follow for any partner AI guidelines you may be asked to create. 

Other resources include guidance on how AI can violate employment discrimination laws, along with guidance on nondiscriminatory AI use in the housing sector and in administering public benefit programs.

These updates include frameworks for testing AI in the healthcare sector. Healthcare communicators should pay particular attention to a rule “clarifying that nondiscrimination requirements in health programs and activities continue to apply to the use of AI, clinical algorithms, predictive analytics, and other tools. Specifically, the rule applies the nondiscrimination principles under Section 1557 of the Affordable Care Act to the use of patient care decision support tools in clinical care, and it requires those covered by the rule to take steps to identify and mitigate discrimination when they use AI and other forms of decision support tools for care.”

Beyond that, the White House also provided updates on its “AI Talent Surge” program.

“Since President Biden signed the E.O., federal agencies have hired over 150 AI and AI-enabling professionals and, along with the tech talent programs, are on track to hire hundreds by Summer 2024,” the release reads. “Individuals hired thus far are already working on critical AI missions, such as informing efforts to use AI for permitting, advising on AI investments across the federal government, and writing policy for the use of AI in government.”

Meanwhile, in the private sector, Apple’s innovation plans are moving fast, with The Financial Times reporting that the tech giant has poached dozens of Google’s AI experts to work at a secret lab in Zurich.

All of this fast-moving behavior calls for a reminder that sometimes it’s best to slow down, especially as Wired reports that recruiters are overloaded with applications due to the flood of genAI tools making it easier for candidates to send applications en masse – and harder for recruiters to sift through them all.

“To a job seeker and a recruiter, the AI is a little bit of a black box,” says Hilke Schellmann, whose book The Algorithm looks at software that automates résumé screening and human resources. “What exactly are the criteria of why people are suggested to a recruiter? We don’t know.”

As more recruiters go manual, it’s worth considering how your HR and people leaders evaluate candidates, balancing efficiencies in workflow with the human touch that can help identify a qualified candidate the algorithm may not catch. 

Ultimately, the boundaries for responsible AI adoption at work will best be defined by those doing the work, not leadership, argues Verizon Consumer SVP and CEO Sowmyanarayan Sampath in HBR:

In developing applied technologies like AI, leaders must identify opportunities within workflows. In other words, to find a use for a new piece of tech, you need to understand how stuff gets done. Czars rarely figure that out, because they are sitting too far away from the supply line of information where the work happens.

There’s a better way: instead of decisions coming down the chain from above, leaders should let innovation happen on the frontline and support it with a center of excellence that supplies platforms, data engineering, and governance. Instead of hand-picking an expert leader, companies should give teams ownership of the process. Importantly, this structure lets you bring operational expertise to bear in applying technology to your business, responsibly and at scale and speed.

We couldn’t agree more.

Tools

For those who are interested in developing an AI tool but aren’t sure where to begin, Amazon Q might be the answer. The app will allow people to use natural language to build apps, no coding knowledge required. This could be a game-changer that helps democratize AI creation. Prices start at $20 per month.

From an end-user perspective, Yelp says its new Assistant AI tool will use natural language searches to help users find exactly what they’re looking for and then even draft messages to businesses – a move that could save time for both customers and businesses.

ChatGPT is widely rolling out a new feature that allows the chatbot to get to know you more deeply. Dubbed Memory, the ChatGPT Plus feature enables the bot to remember details about past conversations and to learn from your interactions. This could cut down on time spent giving ChatGPT instructions about your life and preferences, but it could also come across as a bit invasive and creepy. ChatGPT does offer the ability to have the AI forget details, but expect more of this customization to come in the future.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
