Harnessing AI as a teaching tool: How to balance innovation and integrity

Two Ragan Training professors on how encouraging students to integrate AI into their coursework provides a model for upskilling professionals on AI.

Generative AI has become a classroom reality, whether educators like it or not. While comfort with the tech varies depending on the instructor’s background and digital literacy, teaching younger generations also means teaching digital natives.

For students, tools like ChatGPT offer a quicker way to research their assignments, draft outputs and streamline the editing process. But for instructors and communications leaders, there’s the ongoing challenge of balancing the promise of this tech against its pitfalls: accuracy gaps, over-reliance, and the temptation to cut corners are all very real.

The consequences are real, too: MIT’s Media Lab used electroencephalography, a method used to record the electrical activity of the brain, to discover a 47% drop in brain activity when AI is used for writing. That’s something educators should rightfully be concerned about. But it’s also not going to change the fact that many of their students have already integrated ChatGPT into their learning routine.

At Ragan, we’ve reported extensively on responsible prompting and AI integration for communicators. Those same lessons translate directly into higher education, where the task is not to block or fear these tools, but to teach students how to use them thoughtfully and ethically.

With back-to-school time around the corner, we caught up with two Ragan Training professors to learn how they encourage students to integrate AI into their coursework and how this provides a model for upskilling professionals on AI.

Teaching AI’s value alongside its limits

Cat Colella-Graham, Professor at New York University and creator of Ragan Training’s “Employee Recognition During Times of Change” course, believes that it’s important to have a direct conversation about AI with students upfront.

“The first thing is that you need to proactively address the value of AI with students,” she recommended. “It will be an important tool they will need to succeed in the workplace. That said, it’s important to help them use it for good.”

During this initial exchange, Colella-Graham reminds her students that AI draws from a wide range of sources, some reliable and others less so. “Not every prompt will result in accuracy,” she tells them.

“Reinforce the need to check the veracity of AI, and to not cut and paste as it is not always as reliable as we presume.”

It’s advice that applies well beyond the classroom: comms pros drafting with AI should practice the same discipline.

Perfecting prompting with practice

Kerry O’Grady, Ed.D., senior lecturer of business communications at the Isenberg School of Management at UMass Amherst and creator of Ragan Training’s “Media Training During Times of Change” course, takes a hands-on approach to integrating AI in the classroom.

Her business communications students complete timed, in-class assignments where secondary research is mandatory, in line with her focus on teaching data-driven decision making. The catch: the 35- to 40-minute time window is too short for deep library dives.

“I’m teaching them to use AI to prompt effectively for the research needed, provide feedback on the output, and then check if the sources exist and are credible,” O’Grady explained. “Students use the platform as a shortcut for research ‘pointers’ or ‘suggestions’, but not as fact.”

O’Grady then goes further, asking students to reverse engineer their research. Rather than pulling published articles from AI, they start with scholarly sources, then use AI to summarize relevant passages. This process teaches students that AI speeds the work without replacing critical thinking.

“Students need to learn that AI isn’t a replacement for research, but instead, an asset and partner,” she explained. “If they understand its limitations, they further appreciate the work they need to do in order to ensure their work is a reflection of accuracy and effort.”

Her model reflects a workflow communicators can adapt, too: prompt → critique → confirm. It’s a cycle that treats AI as a collaborator rather than an authority.

Turning actionable lessons into responsible optimism

The common ground between these perspectives and our tips for communicators is that AI only adds value when paired with human judgment.

A few best practices to consider:

  • Normalize using AI as a skill, not a shortcut. As Colella-Graham noted, students and comms pros alike must understand that AI will be part of their workplace toolkit. Avoiding it is less useful than guiding its use.
  • Teach effective prompting with continuous reinforcement. Clear frameworks like the “CTC” method (context, task, constraints) train your team to get better results.
  • Require fact-checking. Just as educators require students to verify AI-suggested sources and go back to the originals, fact-checking should be baked into your workplace governance language.
  • Preserve scholarly rigor. Encourage reverse-engineering from peer-reviewed research, with AI playing a supporting, summarizing role.

By teaching students and professionals to view AI as a partner rather than a replacement, educators and comms leaders can ensure the next generation integrates the tech with a level of responsible optimism.

That means cultivating the discernment, responsibility, and creativity required to keep learning, retain what they learn, and advance their careers.

Check out Ragan Training for our past conference sessions, exclusive comms courses taught by O’Grady and Colella-Graham, and learning modules on responsibly integrating AI into your comms strategies.

Join our new Center for AI Strategy to ensure your team stays AI-optimized with a peer-to-peer exchange and access to our dedicated board of AI advisors.
