THE TOLL-ROAD CALLED NIELSEN.
The High Cost of Data Makes Us Each a Little Dumber.
By Erwin Ephron
Nielsen-bashing can be a cheap shot, since no one loves Goliath. But even his husky friends admit there are problems. This is about a large one: "Special Analysis."
When I worked for Nielsen in the 1960s, I had the strangest job on record. As director of press relations, my orders were to keep Nielsen out of the press.
This admirable shyness was part of a larger pattern of good intentions. I remember also being surprised by the company's lack of interest in how the buying and selling of television worked. After all, Nielsen ratings were central to the TV marketplace. I learned the distancing was in a sense intentional, part of a policy to fiercely guard Nielsen's research objectivity.
Although things have changed greatly, I think our legacy is not objectivity, but industry-wide ignorance. The problem was, and still is, special analysis. Because Nielsen made working with the database costly, agencies, which knew what to look for, couldn't afford to buy the data. And Nielsen, which didn't have to buy the data, didn't know what to look for.
Once a year Nielsen initiates, as it always has, a series of free studies-of-interest in its TV annual. These analyses quickly become the wisdom of our information-starved business. But when we rely on isolated snapshots to look at viewing, our own vision can get distorted.
We have always assumed a chunk of the population — the light-viewing quintile — is unreachable with TV, because viewing patterns are a fixed characteristic, like gender or ethnicity. But easily accessible SMART data show people drift in and out of viewing quintiles over a year, as events in their lives let them view more or less television. So the reach of a TV campaign is probably higher than we've assumed. This is news only because, at Nielsen special-analysis prices, no one thought to look at quintile data across time in a unified sample.
The US trails the world in TV data access.
Other recent, quite ordinary events show the scope of the access problem. The US, the world's most advanced TV market, was six years behind the UK and Europe in using optimizers, because until the P&G AOR competition no agency was willing to spend the extra $300,000 a year to buy the Nielsen respondent databases.
Only after Turner spent hundreds of thousands of dollars on its "Media at the Millennium" study did a $31 billion industry know that cable can substitute for a portion of prime time in generating reach.
Broadcast, cable and syndication audiences are still reported and sold separately, when everyone agrees the key to keeping TV cost-effective is to get past the "silos" and think of it all as just "television." And so on.
One size fits all.
There is another "syndication" that Nielsen practically invented: selling the identical analysis to many users. This is a slow, consensus-driven, central-processing model, obsolete in a real-time, personal, distributed-processing world.
In 1997 we were still working with reach curves from the 1988 cume study, although TV had changed more in those 10 years than in the 30 years prior.
The latest insight, "Quad Analysis," may tell us something important about patterns of attention in TV viewing. It is a simple cross-tab of program viewers by percent-of-telecast-minutes-viewed and frequency-of-program-viewing. It has been stalled for months by software limitations and a debate over costs and specifications. Quads and cumes are both analyses that should be done at your desk.
Should sellers decide what we learn?
There is another danger in high cost information hinted at by Turner’s "Millennium" initiative. Is it wise to let sellers with deep pockets decide what we learn?
And there is the dark issue of transparency. Without access to the database it's impossible to judge whether unexpected changes in viewing are real. There is room for doubt when only the research company can do the analysis, and major irony when the networks have to pay Nielsen to investigate their own loss of audience.
It's unreasonable that US television should have the poorest reporting system of any major medium. Ask any agency. They know smaller media like magazines and network radio have far better information access than national (or local) TV.
Ask friends in the UK or overseas. They’ll tell you full and free access to the TV respondent database comes with the contract. 
Ask a third-world agency and they’ll tell you the same thing.
It boggles the mind that our most pressing TV research problem is simply allowing smart people access to the data. Nielsen knows this and the long overdue Npower system (the new name for DART) is their response. It will make things better, but the central structural issue remains. The industry needs reasonable flat-rate, full-access pricing, like that advocated by SMART.
Planners, buyers and sellers don't want the data. They want the database and the software to use it. As long as Nielsen restricts data access to create a profit center, we will have the worst effects of an information monopoly — ignorance.
(Subsequent to the publication of this piece, Nielsen lowered the price of database access, and third-party optimizer software like X*Pert provided desktop analysis tools.)
In Germany, a station can look at the opportunity cost of a tune-in spot the next day, with software that calculates the value of new viewers attracted to the program, compared to the commercial value of the air-time used.
- February 22, 1999 -