Your brand’s 6-factor blueprint for effective AI measurement
From establishing clean baselines to embedding AI signals into existing scorecards, here’s how to quantify what matters: adoption, efficiency gains, quality checks and the business outcomes AI actually enables.
Brian Buchwald is Edelman’s president of global transformation and performance, and an advisor to the Center for AI Strategy.
Center for AI Strategy: What framework do you give clients for building their own AI measurement system? Do you recommend they start with efficiency metrics, performance metrics, adoption tracking or something else?
Brian Buchwald: We start with the business, not the tech. There isn’t a one-size-fits-all model. The roadmap we lay out begins with a few use cases, tight baselines and clear before/after metrics before scaling further (a sketch of that tracking follows the list). We:
• Define the business goal for specific workflows
• Determine the baseline for how each workflow performs today
• Stand up a pilot and track adoption
• Quantify efficiency gains (both time and cost)
• Measure quality and risk
• Tie it to outcomes the business already values, like revenue, retention or campaign impact
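To make the before/after piece concrete, here is a minimal sketch of what tracking one piloted workflow could look like. The workflow, field names and numbers are all illustrative assumptions, not Edelman’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class WorkflowMetrics:
    """Before/after measurements for one piloted workflow (illustrative)."""
    name: str
    baseline_hours: float   # average time per task before AI
    pilot_hours: float      # average time per task with AI assist
    baseline_cost: float    # fully loaded cost per task before AI
    pilot_cost: float       # cost per task with AI assist

    def time_saved_pct(self) -> float:
        return 100 * (self.baseline_hours - self.pilot_hours) / self.baseline_hours

    def cost_saved_pct(self) -> float:
        return 100 * (self.baseline_cost - self.pilot_cost) / self.baseline_cost

# Example: a media-pitch drafting workflow, with made-up numbers.
pitch = WorkflowMetrics("pitch_drafting", baseline_hours=5.0, pilot_hours=3.0,
                        baseline_cost=400.0, pilot_cost=260.0)
print(f"{pitch.name}: {pitch.time_saved_pct():.0f}% faster, "
      f"{pitch.cost_saved_pct():.0f}% cheaper")
# -> pitch_drafting: 40% faster, 35% cheaper
```

The point of the structure is that every pilot carries its own baseline, so gains are computed against it rather than asserted.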
CAIS: What are the essential metrics you tell clients they must track from day one vs. metrics they can layer in later? How do you help them avoid analysis paralysis while ensuring they’re capturing what matters?
Buchwald: Day one is about creating a clean baseline and proving use: Who’s using it, for what and how often?
Next, track efficiency: time to complete, cycle time, cost per task.
Then layer in quality metrics: error rates, rework/edits, brand/safety flags, approval rates.
Finally, measure what AI enables — speed to market, ability to personalize, new deliverables and more strategic work unlocked by efficiency gains, and the end metrics that matter (win rate, earned coverage quality, retention).
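One way to picture that layering is a single scorecard that grows over time: adoption metrics on day one, with efficiency, quality and outcome layers filled in as the pilot matures. The structure and numbers below are a hypothetical sketch, not a prescribed schema.

```python
# A hypothetical phased scorecard: start with the adoption layer on day one,
# then add efficiency, quality and outcome layers as the pilot matures.
scorecard = {
    "adoption":   {"weekly_active_users": 42, "tasks_run": 310,
                   "teams_onboarded": 4},
    "efficiency": {"avg_time_to_complete_hrs": 3.0, "cycle_time_days": 2.5,
                   "cost_per_task": 260.0},
    "quality":    {"error_rate": 0.04, "rework_rate": 0.12,
                   "brand_safety_flags": 1, "approval_rate": 0.91},
    "outcomes":   {"win_rate": 0.27, "retention": 0.93,
                   "new_deliverables_shipped": 6},
}

# On day one only the adoption layer needs to be populated; the later
# layers can start empty and be filled in as baselines and pilots mature.
for layer, metrics in scorecard.items():
    print(layer, "->", ", ".join(f"{k}={v}" for k, v in metrics.items()))
```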
CAIS: How do you help clients connect their AI metrics to business outcomes they already measure? What’s your approach for integrating AI measurement into existing KPI frameworks rather than creating a separate “AI dashboard” no one looks at?
Buchwald: We recommend embedding AI signals into the KPI cadence a business already runs — OKRs (objectives and key results), performance scorecards, finance reviews.
Instead of a separate “AI dashboard,” tag existing KPIs with an “AI-assist” marker — for example, time to pitch reduced by 40% via AI, or client approval rate up 10% with AI drafts. The team still owns the KPI, but AI becomes the explainable driver underneath it. The rule of thumb: If leadership already cares about a number, show exactly how AI moves that number, not just MAUs (monthly active users) or logins in isolation.
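A rough sketch of what that tagging might look like if the KPIs live in a structured review doc; every field name and figure here is invented for illustration.

```python
# Hypothetical: annotate KPIs the business already tracks with an
# "AI-assist" marker and a one-line explanation of how AI moved the number,
# instead of building a separate AI dashboard.
kpis = [
    {"kpi": "time_to_pitch_days", "owner": "media team",
     "baseline": 5.0, "current": 3.0,
     "ai_assist": True, "ai_driver": "AI-drafted first passes cut drafting time"},
    {"kpi": "client_approval_rate", "owner": "account team",
     "baseline": 0.80, "current": 0.90,
     "ai_assist": True, "ai_driver": "AI drafts raised first-round approvals"},
    {"kpi": "earned_coverage_quality", "owner": "comms team",
     "baseline": 3.8, "current": 3.9,
     "ai_assist": False, "ai_driver": None},
]

# In the existing KPI review, AI-assisted rows surface with their driver,
# so the team still owns the number and AI is the explainable lever under it.
for row in kpis:
    tag = f" [AI-assist: {row['ai_driver']}]" if row["ai_assist"] else ""
    print(f"{row['kpi']}: {row['baseline']} -> {row['current']}{tag}")
```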
CAIS: What mistakes do you see clients make when building their own AI metrics?
Buchwald: Counting usage as the ultimate goal. Logins are a means, not an outcome. Other common traps include skipping a baseline, measuring too much (15 dashboards, no decisions), relying on vanity metrics (prompt counts) and ignoring quality/safety.
You also want to avoid judging pilots like scaled programs. The fix is simple: baseline first, define success upfront, measure both speed and quality, and always connect to a business KPI.
CAIS: How do you advise clients on measuring AI ROI when they’re using multiple AI tools across different teams? What’s your guidance for creating a unified view?
Buchwald: Treat “AI strategy” as a lens on corporate strategy with common goals and language. Then you can design solutions and procure tools in service of those broader objectives. This helps ensure a unified view is established from the get-go, rather than a patchwork of bespoke tools for individual needs that are often underutilized and allow only much narrower ROI measures.
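As a loose illustration of that unified view, value and cost from each tool could roll up against shared objectives rather than being scored tool by tool; the tools, objectives and figures below are invented.

```python
# Hypothetical roll-up: each AI tool reports value and cost against the same
# corporate objectives, so ROI is read at the objective level, not per tool.
tools = [
    {"tool": "drafting_assistant", "objective": "faster_campaign_delivery",
     "annual_value": 120_000, "annual_cost": 40_000},
    {"tool": "research_copilot", "objective": "faster_campaign_delivery",
     "annual_value": 60_000, "annual_cost": 25_000},
    {"tool": "personalization_engine", "objective": "client_retention",
     "annual_value": 200_000, "annual_cost": 90_000},
]

by_objective: dict[str, dict[str, float]] = {}
for t in tools:
    agg = by_objective.setdefault(t["objective"], {"value": 0.0, "cost": 0.0})
    agg["value"] += t["annual_value"]
    agg["cost"] += t["annual_cost"]

for obj, agg in by_objective.items():
    roi = (agg["value"] - agg["cost"]) / agg["cost"]
    print(f"{obj}: ROI = {roi:.0%}")
```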

