Measurement masterclass: you asked, we answered

The response to our recent measurement lessons has been overwhelming. It confirmed what we already knew: that communications professionals across Australia and New Zealand are ready to move beyond “clip counts” and start proving real business value.

We had so many questions flooding the chat that we couldn’t get to them all live. But every question raised points to an obstacle that may be preventing you from demonstrating your value, so I wanted to take some time to answer them here.

From fighting the constraints of “no budget” to weaning conservative executives off AVEs/ASRs, here are my answers to the most frequently asked questions!

“We don’t have a budget for research!”

Q. Would you recommend running small “SurveyMonkey”-style surveys to audiences who received a campaign… asking if it was valuable?

Yes, of course.

In the context of our webinar, this is the perfect way to start measuring “reaction” (outcome). You can always take a small step and do a pulse check on a specific audience.

Don’t let the pursuit of “perfect” data stop you from gathering useful data. And remember that finding a lack of engagement (maybe no one responds!) is also very valuable. You can combine this with other data: website traffic, other calls to action, SRM or CRM data, social conversations. All of these help paint a picture of exposure and impact.

We’ve received many questions about measuring sentiment, trust, and reputation without budgeting for extensive market research or brand tracking surveys.

This is the most common obstacle I hear. Not being able to afford a Michelin-starred restaurant doesn’t mean you can’t cook a great meal. You just have to be smarter with the data your organization already owns.

If you can’t afford external surveys, look at proxy metrics that already exist within your organization (the “outcome” metrics we discussed in the webinar):

  • Customer Service Data: Are complaints rising or falling? What is the tone of the feedback captured in your CRM?
  • Sales/Frontline Feedback: Ask your sales or customer service teams: “Are people mentioning story or campaign XX?” Think about how this feedback could be recorded for you.
  • Owned Channel Analysis: You don’t need a survey to find out what people are saying. Manually reviewing comments on your owned channels is a valid form of qualitative sentiment analysis.
  • Retention: Trust is often best measured by behavior. Are your members renewing? Are donors giving again? A high retention rate usually equals high trust.

My executives are stuck in the past

Q. I am part of a very conservative organization that is reluctant to give up KPIs like “distribute X press releases” or ASR (advertising space rates). How can I encourage change among executives who have been in place for more than 10 years?

Use the “and” strategy. If you rip away their safety blanket (ASR/clip counts) overnight, you will only create resistance.

For the next three months, give them exactly what they want: put the volume and ASR figures on the first slide. Right next to them, add one new metric that relates more closely to impact.

Then add a simple story: “Although our ASR dropped slightly, the reach of our messaging around the ‘innovation’ theme jumped 10%, driving record visits to our new product page and an increase in inquiries.”

Over time, they will realize that the ASR number doesn’t tell them anything useful, while the new metrics describe a far more tangible contribution to the organization. Eventually, they’ll stop asking for the old numbers as you keep telling that better story.

Policy measurement and advocacy

Q. Policy/advocacy outcomes are difficult. Any tips for measuring results… other than surveys?

Politics is a long game, and the “result” often takes years. In the meantime, focus on measuring impact.

If you cannot conduct a public opinion poll, look at the language of the decision makers (parliament/government).

  • Hansard/Proceedings Records: Are politicians using your specific phrases or key messages in debates?
  • Submissions: Do other organizations reference your research or arguments in their own submissions?
  • Access: Are you getting meetings with more senior officials than you did last year?

These are all measurable forms of “key message penetration” and “impact” that demonstrate the success of advocacy efforts long before a policy is formulated and enacted. I would also argue that most policies sit within a broader topic or narrative, and it is always useful to understand how those narratives shift in public spaces (whether traditional or social media) and how that can inform policy design or communication. Sometimes the best way to look at impact is not the linear picture that survey data purports to give, but broader shifts in understanding.

Methodology and accuracy

Q. Can you talk about sentiment and how reliably accurate it is?

Sentiment analysis has come a long way, but treating it as a simple “positive/negative/neutral” score often misses the point. It is often used as a stand-in for perception, which is not what standard sentiment scoring actually measures. Sentiment is one input for evaluating the quality of your coverage, but it needs context and nuance to be accurate.

For example, 100 “negative” posts about a simple website glitch are very different from 10 “negative” posts about your CEO’s ethics. The first is operational noise. The other is a strategic risk. A simple overall score may mask this distinction.

When you dig a little deeper, you can think about the following (there’s a rough sketch of this idea after the list):

  • Who is speaking? (random bot vs. key stakeholder)
  • What is the topic? (Product advantage vs. company reputation)
  • What is the intensity? (slight annoyance vs. anger)
  • Who would have seen this? (a key stakeholder forum vs. a low-follower account on X)
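To make that concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the field names, the author categories, and the weights are assumptions, not a standard scoring model or any vendor’s methodology.

```python
# Illustrative sketch only: fields, categories and weights are invented,
# not a standard scoring model.

mentions = [
    {"sentiment": "negative", "topic": "website glitch", "author": "general public",
     "intensity": 1, "audience_reach": 200},
    {"sentiment": "negative", "topic": "CEO ethics", "author": "key stakeholder",
     "intensity": 3, "audience_reach": 50_000},
]

# Weight a mention by who is speaking, how intense it is, and who likely saw it.
AUTHOR_WEIGHT = {"key stakeholder": 3.0, "journalist": 2.0, "general public": 1.0}

def risk_score(mention):
    """Crude 'strategic weight' of a single negative mention."""
    author = AUTHOR_WEIGHT.get(mention["author"], 1.0)
    reach = mention["audience_reach"] / 10_000  # scale raw reach down
    return author * mention["intensity"] * (1 + reach)

for m in (m for m in mentions if m["sentiment"] == "negative"):
    print(f'{m["topic"]:<16} weighted score: {risk_score(m):.1f}')

# A flat count says "two negative mentions"; the weighted view makes it clear
# the CEO-ethics mention matters far more than the website glitch.
```

The exact weights matter far less than the habit of asking who said it, about what, how strongly, and to whom, before you report a single sentiment number.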

One of the easiest trade-ups you can make in measurement and reporting is to be more specific about what “positive” means to you: replace “90% positive coverage” with something like “40% positive message penetration.”
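As a rough illustration only (the numbers and the simplified definition of “message penetration” below are hypothetical, not a prescribed formula): of 200 coverage items, 180 might be positive in tone, but only 80 might also carry one of your key messages.

```python
# Hypothetical numbers and a simplified definition, for illustration only:
# "positive message penetration" here means the share of all coverage that is
# both positive in tone and carries one of your key messages.

total_items = 200
positive_items = 180               # positive in tone
positive_with_key_message = 80     # positive AND echoes a key message

positive_coverage = positive_items / total_items                        # 0.90
positive_message_penetration = positive_with_key_message / total_items  # 0.40

print(f"Positive coverage:            {positive_coverage:.0%}")
print(f"Positive message penetration: {positive_message_penetration:.0%}")
```

The second number is smaller, but it tells your executives something the first never could: how much of that favorable coverage actually carried the story you wanted told.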

Q. With more social content available to the public, have you found engagement to be more or less predictable?

It’s definitely less predictable organically. Social algorithms increasingly penalize “corporate” content and prioritize “human” content.

What worked yesterday may get no engagement today because of an algorithm change. This reinforces why we shouldn’t rely solely on “vanity metrics” (likes/shares) as a measure of success. We need to focus on what happens after the click (“reaction” and “result”).

Q. Does Isentia measure brand awareness among target audiences? I know that tracking brand awareness in the traditional way is a big investment.

I’m glad you asked this question, because “brand awareness” is often used as an umbrella term for three very different things:

  • Awareness (salience): Do people know you exist?
  • Comprehension (clarity): Do people know what you do?
  • Reputation (trust): Do people actually trust you?

Yes, we measure brand awareness, but usually in terms of broad narratives, which is often a more actionable way for communications teams to track their impact. Brand awareness metrics are often used as a proxy for reputation, yet the two are not measured in the same way. If awareness matters to you, there are plenty of foundational measures you can start with outside the traditional survey-based approaches: the reach, share of voice and impact of your messages and campaigns can be a good starting point.


Interested in watching the entire recording? Watch our Webinar here.

Alternatively, contact our team to learn more about meaningful measurement, KPIs, and communicating with the right data set.
