PPM tracks exposure to stations, but Arbitron says it has cracked the code on measuring listener engagement. CEO Michael Skarzynski says the company has developed a prototype that "couples" exposure to engagement. The development may be a way to quiet PPM critics: some Black and Hispanic broadcasters believe the drop in ethnic station rankings reflects a measurement of exposure -- not station affinity. Skarzynski told the House Judiciary Committee yesterday that the new engagement tool will be released later this month.
PPM, by the way, is a passive device carried by panel members recruited by Arbitron. It "listens" for and logs encoded audio from stations and web sites, so the device knows what the panel member is hearing. Not necessarily what they are "listening to," but it is still about as close to real audience listening measurement as we have seen yet.
Fly in the ointment: the PPM is showing dramatically lower listening to ethnic stations than the previous methodologies did. Some claim the older forms of measurement--keeping a listening diary, answering random telephone calls--allowed ethnic groups to "vote" for "their" stations rather than accurately report their true listening. There is no "voting" with the PPM. It can only hear what the panel member is hearing.
I cannot imagine what Arbitron has up its sleeve. But here is why YOU should care.
Electronic media has almost always been priced on ratings. So many dollars per rating point (one percent of the population) and the like. But as media becomes more and more diffuse, the value of an audience is not in its numbers but in its makeup, its tendencies, its propensity to do certain things predictably. And in the media's ability to make them do those things for the benefit of advertisers. If my radio station can deliver 3,000 people who will likely eat at a particular restaurant, it has far more value than another station that can deliver 50,000 people, none of whom will ever eat there. Under the old model, the station with 50,000 sets of ears could command the highest rates for its commercials and probably got the bulk of the advertising.
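To make the arithmetic behind that argument concrete, here is a minimal sketch comparing cost per rating point against cost per *expected customer*. The audience sizes (3,000 and 50,000) come from the example above; the spot prices and propensity figures are made-up numbers for illustration only.

```python
# Toy illustration: an audience's value depends on propensity, not just size.
# Spot prices and propensities below are hypothetical; only the audience
# sizes (3,000 vs. 50,000) come from the example in the text.

def expected_customers(audience: int, propensity: float) -> float:
    """Listeners expected to act (e.g., eat at the advertised restaurant)."""
    return audience * propensity

def cost_per_customer(spot_price: float, audience: int, propensity: float) -> float:
    """Advertiser's cost per listener who actually acts on the ad."""
    customers = expected_customers(audience, propensity)
    return float("inf") if customers == 0 else spot_price / customers

# Big station: 50,000 listeners, essentially none of whom will eat there.
big = cost_per_customer(spot_price=500.0, audience=50_000, propensity=0.0)

# Small station: 3,000 listeners, most of whom are likely diners.
small = cost_per_customer(spot_price=200.0, audience=3_000, propensity=0.6)

print(big)    # inf -- no expected customers at any price
print(small)  # about 0.11 dollars per expected customer
```

By the old cost-per-point logic the big station looks 16 times cheaper per ear; by cost per expected customer it is infinitely more expensive. That is the whole point of "predictable propensity" data.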
Now, if we could just deliver data about “predictable propensity,” and the software to make a compelling case, we’d have something. But it has to be as compelling as “lowest cost per point” once was.
How does this affect YOU? Advertisers support programming and media and stations that move widgets down at the store. If media effectiveness is not measured properly, ad money goes to the wrong media or station or programming. And those media, stations, and programming stay around, even though they are not necessarily what you or your peers want to read, watch or listen to. And the "good" ones go away because the "power" of their audiences--if not their numbers--is not getting reported accurately.
It impacts you in many, many more ways, too, but this is a blog, not a doctoral thesis.
But Lord help us all if Arbitron does some statistical mumbo jumbo, like weighting ethnic listeners higher than the rest of us, just to appease those who claim the PPM is somehow being unfair to stations that serve them. Nielsen is facing the same criticism, primarily from Hispanic TV stations, and could be forced into the same sort of audience measurement witchcraft.
If it ain't accurate, it can't be believed. And if it can't be believed, the data have no value. So why bother?