I'm currently embroiled in an increasingly heated discussion with Millward Brown over a recent LINK test. I won't go into full detail, but two aspects of the methodology struck me as baffling (at best) or utterly pointless (at worst).
Firstly, LINK gives your ad an "active engagement" score (which is a key component of the all-important AI score). To measure the engagement of your ad, the respondent is simply asked which one of the following words applies most to it: Interesting, Irritating, Pleasant, Boring, Distinctive, Unpleasant, Soothing, Ordinary, Involving, Disturbing, Gentle, Weak.
While MB go to the trouble of dividing these words into four quadrants along active/passive and positive/negative axes, only the active/passive axis contributes to the "engagement" metric. So having someone respond with "soothing" or "pleasant" (passive, +ve) is just as bad for your engagement score as having them respond with "weak" or "boring" (passive, -ve). Equally, the methodology would have us believe that a respondent who describes an ad as "involving" is just as engaged as one who describes it as "unpleasant".
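To make the complaint concrete, here is a minimal Python sketch of how the scoring appears to work, going only by the description above. The quadrant assignments for words not explicitly placed in the text are my best guesses, and the 0/1 contribution rule is an illustration of the logic, not MB's actual formula.

```python
# Sketch of the LINK "active engagement" word list as I understand it.
# Quadrants not named in the post above are guessed; this is not MB's code.
WORD_QUADRANTS = {
    "Interesting": ("active", "positive"),
    "Distinctive": ("active", "positive"),
    "Involving":   ("active", "positive"),
    "Irritating":  ("active", "negative"),
    "Unpleasant":  ("active", "negative"),
    "Disturbing":  ("active", "negative"),
    "Pleasant":    ("passive", "positive"),
    "Soothing":    ("passive", "positive"),
    "Gentle":      ("passive", "positive"),
    "Boring":      ("passive", "negative"),
    "Ordinary":    ("passive", "negative"),
    "Weak":        ("passive", "negative"),
}

def engagement_contribution(word: str) -> int:
    """Only the active/passive axis counts; valence is ignored entirely."""
    axis, _valence = WORD_QUADRANTS[word]
    return 1 if axis == "active" else 0

# The oddity in two lines: "unpleasant" scores the same as "involving",
# and "soothing" scores the same as "weak".
assert engagement_contribution("Unpleasant") == engagement_contribution("Involving")
assert engagement_contribution("Soothing") == engagement_contribution("Weak")
```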
Now, is it just me, or is this a bit ridiculous as a meaningful measure of engagement?
Elsewhere in the LINK test, they conduct an "interest trace" (ie a worm), which you'd think would be quite a good measure of engagement (ie if the line is largely above zero, people are interested). But no. "Interest" apparently has no bearing on "engagement", and the worm is only used to tell you which scenes you should keep and which you should cut. (Because that's how consumers actually respond to TV advertising, right? With a second-by-second analysis of whether each shot is interesting or not).
Further on in the test, MB measure "Feel Good Factor", based on claimed emotional response to the ad. While this is useful data, it is apparently not a component of "engagement", which is limited to rational response only. Given what we know about the role of emotion in creating engagement, this seems somewhat bizarre.
Even more bizarre is the second score I'd like to discuss: the "Link Persuasion Score". MB tell us that this is a measure of "advertising effectiveness in terms of persuasion".
While this is a truly noble goal, I can't help but wonder at the methodology employed: simply asking respondents whether the ad would affect their use of the product being advertised.
That's it. That's how Millward Brown measure how persuasive your ad is going to be. By asking people who've just watched the ad. Again, this seems to display a ridiculously simplistic and naive view of how advertising works.
Anyway, rant over. Next week, I am sitting down with a senior chap from the MB "Global Solutions" team to discuss all of this and more. I must say I'm curious as to what his responses may be.
If any of my loyal readers have any questions for him, by all means fire away.