Over the past year or so, the current lack of regulation and standards around generative AI tools has drawn many comparisons to the Wild West. But an executive order out of the White House last week may set some new guidelines not just for the wider applications of AI in the workplace, but also for how communicators use and approach these tools.
Cues from the top government brass
President Biden’s executive order proposes several actions, including an increased emphasis on protecting the privacy of people who use AI and the proprietary information of the companies they work for, along with steps to enable the responsible development of AI technology going forward. The order also sets out some rules for the government’s own use of AI in practice.
The executive order followed comments by Vice President Harris in London calling out multiple threats that AI poses and the need to tackle them, with an emphasis on how AI can exacerbate inequities depending on the information it is fed.
“These threats are often referred to as the existential threats of A.I. because, of course, they could endanger the very existence of humanity,” Ms. Harris said. “These threats, without question, are profound, and they demand global action. But let us be clear: There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential.”
This all might seem like big news, but the order is broad and aspirational. PR Daily’s Allison Carter nailed it when she said the order stops short of creating anything that’s truly enforceable. But what does this action signal for the comms pros among us?
The comms impact
Though few parts of this executive order directly affect the day-to-day AI operations of a comms department, the White House’s recommendations can still help comms pros form a set of guidelines around AI use if they don’t have one already.
In particular, the section of the order that focuses on the responsible usage of AI sticks out.
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
While the text of the executive order focuses mainly on how lapses in responsible use can cause national security issues, the takeaways can be generalized and carried over to any organization.
There are a few precepts a comms pro should learn from this recent order.
- First, communicators need to state and codify in writing what information is and is not acceptable to run through a generative AI program. A good rule of thumb: if you’re not totally certain a piece of information is safe to share, don’t put it in the prompt.
- Comms departments shouldn’t just create standards and be done with them, but should continue to iterate on them early and often as the technology advances. With tools that are so new and unfamiliar to many in the workforce, overcommunication is key.
The impact of actual regulation
While this executive order is a notable first step, it doesn’t move the needle much toward actionable AI regulation. The bigger signal is that real regulation doesn’t seem to be far down the road, and organizations need to be prepared.
The best thing your company can do before any regulations come down is to train employees now on acceptable use cases, best practices for crafting prompts, how AI fits into their workflows and any other elements of automation that are relevant to your business.
Communicate with your employees about the safe and acceptable uses of AI. Emphasize that when used the right way, AI is a tool, not a magical and amorphous entity that’s gunning for their jobs. Doing so will help maintain a sense of positivity and calm around something that’s still so unknown.
When actual regulation comes down (which will most likely require an act of Congress), the organizations that come out the other side unscathed will be those already prepared with rules and guidelines surrounding AI use. Doing that now takes effort, sure: an intimate knowledge not only of how AI will impact your industry, but of how your employees feel about generative AI and its effects on their roles. But it’s on communicators to be early adopters of the tech, tools and practices that will allow their teams to be more strategic and contribute to larger business goals.
While no one knows for sure what the future holds, it doesn’t seem the AI push is slowing down. It’s best to be ready for what it may throw at us as a collective workforce.
Sean Devlin is an editor at Ragan Communications. In his spare time he enjoys Philly sports, a good pint and ’90s trivia night.