- October 23, 2007

The Challenges of Reconciling Panel-Based and Server-Based Unique Visitor Counts

Louise Story’s October 22nd article in the New York Times, entitled “How Many Site Hits? Depends Who’s Counting,” highlights the ongoing challenge of reconciling the unique visitor counts produced by publishers’ web server logs with the independently measured counts from the panel-based audience measurement services of Comscore and NetRatings. Drawing on a wide-ranging set of interviews, the article reflects many of the myths that plague our industry today.

Ms. Story begins her article by contrasting the Comscore and NetRatings panel-based figures with Style.com’s internal server log figures. The comparison, however, turns out to be apples to oranges: the apples are U.S. data, while the oranges are worldwide data. The article later acknowledges that Style.com’s figures are worldwide while the Comscore and NetRatings figures are U.S.-only. In fact, if you look at Comscore’s worldwide figures for September (which we do publish, contrary to the statement that “Conde Nast counts international readers and ComScore and Neilsen (sic)/Netratings do not”), you will see that we reported 1.28 million visitors to Style.com, much closer to their internally reported 1.8 million, a number that is undoubtedly inflated by the impact of cookie deletion (discussed further below).

One of the other key factors causing differences between server logs and panel data is people visiting the same sites from both home and work computers. Since site servers count computers rather than people, they double-count such users, while the panel services do not. As the article explains: “But online publishers say that their [Comscore’s and NetRatings’] systems drastically undercount people who use the Web during work hours, particularly in offices where corporate software makes the wanderings invisible to the tracking systems.” While it is true that Comscore’s work panel cannot be sourced from every company, it is important to point out that Comscore applies different statistical weights and projection factors to different population segments. For every company that prohibits the download of software onto employees’ machines, there are others that allow it. Employees from the latter companies are used to represent the others and the entire work population, much as opinion polls use people who agree to answer a survey to represent the attitudes and opinions of people who hang up on the caller. Further, one of the advantages of the Comscore panel is that we have a segment of panelists who allow us to monitor both their home and work computers, enabling us to understand overlapping usage between the two locations and adjust for it, as sketched below. The ability to accurately filter out overlapping home and work usage is just one example of the advantages of panel data over server logs.
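As a rough illustration of that adjustment, here is a minimal sketch in Python with invented numbers (this is not Comscore’s actual model or data): an overlap rate observed among dual-metered panelists is used to remove people who would otherwise be counted once at home and again at work.

    # Minimal sketch of home/work deduplication using an overlap rate
    # estimated from dual-metered panelists. All numbers are invented
    # for illustration; this is not Comscore's actual model or data.
    home_uniques = 1_000_000   # projected unique visitors, home machines
    work_uniques = 400_000     # projected unique visitors, work machines

    # Suppose the dual-metered segment shows that 25% of the site's
    # work visitors also visit it from home in the same month.
    overlap_rate = 0.25
    overlap = overlap_rate * work_uniques            # 100,000 people

    people = home_uniques + work_uniques - overlap   # what a panel reports
    machines = home_uniques + work_uniques           # what server logs "see"

    print(f"People reached: {people:,.0f}")          # 1,300,000
    print(f"Machines counted: {machines:,}")         # 1,400,000

In this toy example, the server log’s 1.4 million “unique visitors” would overstate a true audience of 1.3 million people, and only a service that observes the same individuals in both locations can make the correction.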

The article also gives scant attention to the single most prominent source of discrepancy between panel-based data and server-based data: cookie deletion. The issue is mentioned only at the end of the piece, summarized as follows: “To make matters more complicated, consumers who delete cookies — small bits of computer code that track their online wanderings — are also over-counted by publishers’ servers, by most accounts.” Comscore published a white paper in June that studied the degree to which cookie deletion inflates server-based data. The results showed that 30% of U.S. Internet users delete their cookies at least once per month, and that this group deletes its cookies, on average, 5 times in the month, leading to an overstatement in server-based counts of unique visitors of as much as 2.5x.
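To see where a factor like 2.5x can come from, consider a back-of-the-envelope calculation (my simplified illustration here, not the white paper’s exact model): each deletion forces the server to set a fresh cookie on the user’s next visit, so someone who deletes 5 times in a month shows up as 6 distinct “unique visitors.”

    # Back-of-the-envelope model of cookie-deletion inflation.
    # Simplifying assumption: every deletion is followed by at least one
    # more visit, so each deletion mints a brand-new cookie.
    deleters_share = 0.30           # users who delete cookies monthly
    deletions_per_deleter = 5       # average deletions in the month

    cookies_per_deleter = 1 + deletions_per_deleter   # appears as 6 visitors
    cookies_per_keeper = 1                            # appears as 1 visitor

    avg_cookies_per_person = (deleters_share * cookies_per_deleter
                              + (1 - deleters_share) * cookies_per_keeper)
    print(f"Overstatement factor: {avg_cookies_per_person:.1f}x")  # 2.5x

Under these simple assumptions, the server counts 2.5 cookies for every actual person, which lines up with the upper bound reported in the white paper.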

This isn’t the first time an article on web metrics has failed to give this key issue the prominence it deserves. An August 2007 article in Fortune, entitled “The Online Numbers Game,” drummed up similar controversy, using Digg.com as an example:

“A number of web entrepreneurs believe the two companies shortchange them. Consider Digg.com, a site that lets users submit and rank news stories. Its own server logs recorded 10.8 million unique U.S. visitors in July. ComScore reported 4.6 million, and Nielsen//NetRatings 4.7 million.”

So let’s examine this example for a moment. Comscore and NetRatings are in general agreement on the size of Digg’s audience, yet both differ dramatically from Digg’s own server logs. Divide Digg’s reported 10.8 million by Comscore’s 4.6 million and you get an overstatement factor of 2.3x. Anyone who has read our cookie deletion white paper would recognize that this is completely consistent with our finding that the overstatement caused by cookie deletion can be as high as 2.5x. Unfortunately, the article never even mentions cookie deletion as a likely source of the discrepancy.
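For the record, here is that arithmetic, using the figures quoted in the Fortune article:

    # The Digg figures quoted in the Fortune article (July, U.S.).
    digg_server_log = 10_800_000    # Digg's internal server logs
    comscore_panel = 4_600_000      # Comscore's panel estimate
    nielsen_panel = 4_700_000       # Nielsen//NetRatings' estimate

    print(f"vs. Comscore: {digg_server_log / comscore_panel:.1f}x")  # 2.3x
    print(f"vs. Nielsen:  {digg_server_log / nielsen_panel:.1f}x")   # 2.3x

Both ratios sit comfortably inside the up-to-2.5x inflation that cookie deletion alone can produce.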

There is no shortage of publishers with a vested interest in claiming our data are inaccurate because, as one publisher noted in the Times article, “Everyone likes bigger numbers.” Comscore’s only vested interest, on the other hand, is in being as accurate as possible. True media accountability cannot exist without a high degree of accuracy, and accurate media measurement is at the core of Comscore’s business.

Ultimately, the disparities often cited between panels and server logs cannot be explained by simple methodological flaws. When numbers diverge by a factor of 2 or 3, there are larger forces at play, with cookie deletion and cookie rejection clearly being the most prominent. If the industry wants to put this issue to rest, there needs to be an acknowledgement on the part of the media world that cookie deletion is one of the root causes of this controversy and that its impact is significant.

Obviously this controversy will live on for a while. However, we are making progress. The IAB is sponsoring an educational conference on Audience Measurement on November 29, where there will be an attempt to explain what the numbers mean and why they can differ. One of these days, as part of this education, perhaps one of these vocal publishers will agree to share its detailed data publicly for closer scrutiny. It would be interesting to see what happens under the spotlight!
