October 14, 2021

Media Currencies: The Past, Present, and Future

Michael Vinson, PhD
Chief Research Officer
Comscore

Introduction: Media Advertising Transactions

Much of the information and entertainment we consume is supported by advertising. Whether it’s television, websites, email newsletters, or social media messaging, chances are it is at least partially underwritten by advertising. And this is one of the great equalizers of information in the 21st century: most content doesn’t require cash payment; rather, the value exchange is exposure to ads in return for the content of interest.

In order for this value exchange to work, there must be an agreed-upon currency for the transaction. A media buyer and seller must be able to reach an agreement concerning the value of a given advertising placement; in particular, they must first agree on how the ad is to be evaluated and priced, usually in a way that is proportional to the size and quality of the audience. For example, TV ad inventory is often priced according to a negotiated CPM, or “cost per thousand”, where the “thousand” refers to thousands of audience members reached – which requires a measurement (a worked example follows the list below). When stripped to its essentials, every media transaction requires a currency based on audience measurement. The minimum requirement the underlying measurement must meet is that both the buyer and the seller agree on it. Beyond that, there are other desirable properties that it should have:

  1. it should be accurate in the sense of being meaningfully related to the thing it purports to measure;
  2. it should be fair in the sense of giving comparable results in comparable contexts;
  3. it should be stable over time and at least somewhat predictable;
  4. it should properly measure all segments of the audience, including traditionally under-represented populations;
  5. and, finally, it should not be plagued by statistical fluctuations or, in the worst case, the "zero cell" problem (in which the quantity being measured is below the minimum measurable threshold of the measurement system)1. It should continue to function reliably even when the audience is sliced into highly targeted segments.
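
To make the CPM mechanics and the fifth property concrete, here is a minimal sketch in Python using purely hypothetical numbers (the $25 CPM, the 20,000-household panel, and the audience shares are illustrative assumptions, not figures from any actual measurement). It shows how a placement price follows directly from the audience estimate, and how quickly relative error grows, and empty cells appear, when a small panel is sliced into narrow segments.

    # A minimal sketch with hypothetical numbers: CPM pricing, and the behavior of
    # a small panel when the audience is sliced into narrow segments.
    import math

    # CPM arithmetic: price = CPM x (audience / 1000); all numbers are hypothetical.
    cpm = 25.0                   # negotiated cost per thousand audience members ($)
    measured_audience = 480_000  # audience size from the currency measurement
    price = cpm * measured_audience / 1000
    print(f"Placement price: ${price:,.2f}")

    # Statistical fluctuations and the "zero cell" problem for a recruited panel.
    panel_homes = 20_000     # hypothetical panel size
    program_share = 0.001    # long-tail program viewed in 0.1% of homes
    demo_fraction = 0.1      # further slice: a target demo that is 10% of that audience

    for label, share in [("program audience", program_share),
                         ("targeted demo slice", program_share * demo_fraction)]:
        expected = panel_homes * share                            # expected panel homes in the cell
        std_dev = math.sqrt(panel_homes * share * (1 - share))    # binomial standard deviation
        rel_error = std_dev / expected                            # roughly 1 / sqrt(expected homes)
        p_zero = (1 - share) ** panel_homes                       # chance the cell is empty
        print(f"{label}: expect {expected:.1f} homes, relative error ~{rel_error:.0%}, "
              f"zero-cell probability {p_zero:.1%}")

Even in this toy example, a modest demographic cut of a long-tail program is expected to land on only a couple of panel homes, with a relative error above 70% and a roughly one-in-seven chance of an empty cell.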

The Past: Two Kinds of Currencies

In the media transaction markets, there are two major historical trends that are now on a collision course: on the one hand, the traditional currency measurement of television based on small, recruited panels; and on the other hand, the technology-driven, detailed, and immediate measurements available in the online world via ad server logs, pixel and other content tagging, return-path data, web analytics tools, etc., in which the role of panels is to provide context and additional metadata to the measurements. It’s not a question of panel or no panel. Rather, it’s about the appropriate use of panels in the measurement, a decision that must take into account the properties of both the medium being measured and the panel itself. In traditional television measurement, the panel is the source of audience size; in modern online measurement, overall audience size comes from the “big data” asset, with the panel relegated to a supporting (but important!) role. As viewers consume more and more video content in cross-platform contexts, changes in the ecosystem are forcing substantial changes to measurement paradigms, especially in the currency context.

Nielsen's poor handling of COVID-related sample replacement issues2 in its national television panel has led to a historically significant change in how Nielsen's methods are perceived in the marketplace. In particular, it has prompted the realization that traditional methods based on recruiting a panel from a probability sample frame are no longer viable. This has actually been the case for many years, but the industry has been slow to recognize it due to institutional inertia and other subjective forces. There are two main reasons the traditional methods are no longer viable as the foundation of currency measurements. First, response rates have fallen drastically over the last few generations, such that today a large majority of people will simply refuse to be part of a panel or even answer a few survey questions on the phone. This is even more true for panels that require a technician to come into the home to set up an array of monitoring equipment. As recently as the middle of the 20th century, a researcher could reasonably expect a response rate in the 80 to 90% range; looking back, it's hard to believe people were once so generous with their time and opinions. Ah, the good old days! That's just not the reality anymore. Some studies have found single-digit response percentages3. When most people who are approached refuse to be involved, you can no longer assume that the small minority who do participate – those generous, wonderful people willing to share their opinions and behavior – are sufficiently representative of the overall population to constitute the basis of a measurement. Put differently, even with the best statistical methodology behind sample frame design, if most people hang up on you, then at best you are measuring the small slice of the population that is unusually willing to be measured. Low response rates leading to poor representation of the overall population: that is the first reason the traditional approach, which puts a recruited panel at the center of the measurement, is no longer viable for currency.

The second reason undermining the traditional panel-centric approach comes from the dramatic fragmentation of the media environment. A small panel may have been sufficient in the days of a handful of available TV channels. But today, with hundreds or thousands of linear, on-demand, and streaming options, a proper measurement requires a far larger panel than anyone is likely to be able to afford. Gone are the days when a single popular primetime show could attract large fractions of the population. Instead, we have scores of live viewing options and thousands of time-shifted and catalog choices, we can watch on multiple screens, from the ones we carry in our pockets to the huge ultra-high-resolution device on the wall, and much of the time we are multitasking and consuming media from several devices simultaneously. Try to describe that by pushing buttons on a meter!
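
To put a rough number on the affordability problem, here is a back-of-the-envelope sketch (a simple binomial approximation with hypothetical targets, not a description of any actual panel's design) of how many panel households would be needed to measure a niche channel with reasonable precision.

    # A back-of-the-envelope sketch, using a binomial approximation and hypothetical
    # targets: how big must a panel be to measure a long-tail channel reliably?
    rating = 0.0005          # a niche channel viewed in 0.05% of TV households
    target_rel_error = 0.10  # desired relative error of the audience estimate

    # For a binomial proportion p measured on n homes, relative error ~ sqrt((1 - p) / (n * p)).
    # Solving for n gives the panel size needed to hit the target.
    required_panel = (1 - rating) / (rating * target_rel_error ** 2)
    expected_viewing_homes = required_panel * rating

    print(f"Required panel size: ~{required_panel:,.0f} households")
    print(f"Expected panel homes viewing the channel: ~{expected_viewing_homes:,.0f}")

Under these assumptions, roughly 200,000 households would be needed just to measure that one channel to 10% relative error, and even then only about 100 panel homes would be doing the measuring; multiply across thousands of networks, dayparts, and demographic breaks, and the cost becomes prohibitive.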

The two problems of drastically declining response rates and vastly fragmented media choices have been getting more acute for decades; the COVID disruption finally forced the industry to critically reevaluate whether "what we have always done" is still fit for purpose. For too long, the old ways have been given a pass by the industry while new, more sophisticated methodologies have been held to a far higher standard. When the candidates are evaluated on a fair basis, the stability requirements of a currency strongly favor a measurement based on large-scale transactional data – with recruited, non-probability panels adding context and other additional insights.

The Present: Using Big Data to Measure Television

Today's historic willingness to challenge the emperor's state of undress4 opens up opportunities for new approaches to television measurement in general, and to currency measurement in particular. Concurrently, a generation of digital media buyers and sellers has learned to use large data assets to transact online ads. That experience can be brought to bear to fairly evaluate "big data" approaches to measurement in the television market as well. Companies like Comscore have demonstrated that return-path data (RPD) collected from digital set-top boxes (STBs) and connected televisions (CTVs) can be used at scale to make accurate, representative, stable measurements of linear TV viewing even in today's fragmented media space. It's clear how RPD deals with the problem of sample or panel size – the data sets involved are huge; Comscore, for example, collects data from over 35 million TV households and can measure far down the long tail of fragmented media content and consumption. But what about the fact that such data sets are not probability samples of all TV households? Doesn't that problem exist here too?

The answer is that it is actually a different problem, and one that is straightforward to address. To be in Comscore's measured set of RPD households, all that is required is to be a subscriber to one of the television services that collects STB data and passes it on to Comscore. Thus, it does not require willingness to allow a company to set up equipment to monitor your TV viewing in your own home; it's just a matter of signing up for a TV service (and choosing not to opt out of measurement, which only a tiny fraction of subscribers actually do). We know a lot about the households that are measured, and we also know a fair amount about the ones that are not: we know how many there are, their geographic distribution, the MVPDs they subscribe to, and what networks are available to them, and we can even infer something about their demographics (by comparing the demographics of our observed households to those of the general market). These observations can be (and are) used to account for the differences between the observed households and the overall TV household universe. Contrast this with the qualification for being in a recruited panel, which is based on subjective criteria like willingness to participate and then to push buttons or fill out diaries. It's hard to know how the panel households differ from the large majority of households that refuse to participate, and it's difficult to measure the refusers because, well, they tend to refuse follow-up studies just as they refused to join the panel in the first place. The biases involved in a non-probability sample can be accounted for when there is sufficient information about the differences between the sample and the population5. For television audience measurement, there is, therefore, an alternative to the no-longer-viable panel-centered approach of the last century, and now is the time for the industry to make the switch.
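
One standard way to use that information is to weight the observed households up to known universe totals within cells (for example, geography by service type). The sketch below is a deliberately simplified illustration of that principle; the cells and counts are hypothetical, and it is not a description of Comscore's actual methodology, which is considerably more involved.

    # A deliberately simplified illustration of weighting a non-probability sample
    # to known universe totals (simple cell weighting). All cells and counts are
    # hypothetical; this is not any company's actual methodology.

    # Known TV-household universe per cell (e.g., region x service type).
    universe = {"east_cable": 9_000_000, "east_satellite": 3_000_000,
                "west_cable": 6_000_000, "west_satellite": 2_000_000}

    # Households actually observed in the big-data asset, per cell.
    observed = {"east_cable": 4_500_000, "east_satellite": 600_000,
                "west_cable": 3_000_000, "west_satellite": 500_000}

    # Each observed household stands in for universe[cell] / observed[cell] homes.
    weights = {cell: universe[cell] / observed[cell] for cell in universe}

    # Project a program audience from the observed viewing homes in each cell.
    viewing_homes = {"east_cable": 90_000, "east_satellite": 18_000,
                     "west_cable": 45_000, "west_satellite": 10_000}
    projected_audience = sum(viewing_homes[c] * weights[c] for c in viewing_homes)

    print("Cell weights:", {c: round(w, 2) for c, w in weights.items()})
    print(f"Projected audience: {projected_audience:,.0f} households")

The key point is that the adjustment relies on observable facts about both the measured and unmeasured households (counts, geography, service type, and so on), rather than on assumptions about the goodwill of the volunteers who agree to be recruited.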

The Future: One Way to Measure Them All

The time is now to bring television measurement into the 21st century6. More and more premium video content is being viewed on non-traditional services such as over-the-top (OTT) platforms, as well as online on computers and mobile devices. A typical consumer doesn't think in terms of linear versus on-demand versus CTV -- they think of content, and the rest is noise. Measurement must reflect that reality, and that means a true cross-platform currency measurement that leverages big data assets.7 Panel measurements, even when based on non-probability samples, are useful for understanding context, but they should only be used where they are fit for the purpose at hand. In particular, no modern measurement should rely solely on a panel when large-scale data assets are available. The audience measurement component of the currency of the future will be a seamless blend of linear and streaming, with proper deduplication and contextualization to inform the media transaction marketplace in a way that is accurate, fair, reliable, and reflective of true consumer behavior.
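
As a toy illustration of what "proper deduplication" means in practice (the household identifiers here are purely hypothetical), note that simply summing platform audiences double-counts households that watched on more than one platform, whereas deduplicated reach counts each household exactly once.

    # A toy illustration of cross-platform deduplication with hypothetical household IDs.
    linear_homes = {"hh001", "hh002", "hh003", "hh004"}      # watched on linear TV
    streaming_homes = {"hh003", "hh004", "hh005"}            # watched via streaming

    naive_total = len(linear_homes) + len(streaming_homes)    # 7: double-counts hh003, hh004
    deduplicated_reach = len(linear_homes | streaming_homes)  # 5: each household counted once

    print(f"Naive sum of platform audiences: {naive_total}")
    print(f"Deduplicated cross-platform reach: {deduplicated_reach}")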

1 See also https://www.comscore.com/Insights/Blog/Relative-Errors-in-Television-Audience-Measurement-The-Future-is-Now

2 https://variety.com/2021/tv/news/nielsen-undercounting-tv-ratings-coronavirus-1234970113/

3 https://amerispeak.norc.org/Documents/Research/NORC_Labs_and_Amerispeak_launch_plans_Nov_2016_DTP_Formatted.pdf

4 For example, the VAB’s call to suspend Nielsen’s MRC accreditation, https://www.thewrap.com/tv-networks-to-media-ratings-council-mrc-suspend-nielsen-accreditation/, and the call for “measurement independence” by NBCU’s Kelly Abcarian, https://www.thewrap.com/nbcuniversal-nielsen-measurement-independence-ratings-viewership-olympics-nbc/.

5 This point has been emphasized in the AAPOR report on non-probability sampling, https://www.aapor.org/AAPOR_Main/media/MainSiteFiles/NPS_TF_Report_Final_7_revised_FNL_6_22_13.pdf

6 See also Bill Livek’s article, https://www.comscore.com/Insights/Blog/Measuring-for-the-Future-of-Media-Today

7 Privacy is also an important issue. See https://www.comscore.com/Insights/Blog/Respecting-Privacy-in-Online-Measurement-Comscores-Vision