Researching The Research
August 12, 2008
Beware Of The Risks
If your station plans on doing callout research or auditorium music tests (AMTs) this summer, this is a great time to check up on your methods. Whether you are doing the research in-house or your company has hired an outside research firm to do the work for you, there are several questions you should ask yourself. The first: are you using the right formulas? The key to successful research is a combination of great design and faithful execution, one that gives you actionable, conclusive data.
A lot of first-time programmers, as well as some veterans, fail to realize that the overall health of any research program is critical to obtaining information that reflects a true cross-section of the market. A single flaw in the design, coupled with a few lapses in execution, and you're making decisions with data that belong in the garbage instead of data that can successfully guide your future.
So what are the first steps you should take in examining your callout research or auditorium music tests? Start with the source from which you're obtaining the sample -- the sample frame. If you're doing a form of random digit dialing, are all exchanges in your survey area represented based on the latest Arbitron zip-code information? In some growing markets, if a new exchange comes online, you can sample it within 30 days.
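As a rough illustration of the technique, here is a minimal random-digit-dialing sketch in Python. The area codes and exchanges are hypothetical; a real frame would be built from current exchange data for your survey area and refreshed often enough to catch new exchanges.

```python
import random

# Hypothetical exchanges (area code + prefix) covering the survey area.
# In practice this list comes from current exchange data and is refreshed
# so that new exchanges enter the frame quickly.
EXCHANGES = ["312-555", "312-556", "773-555", "773-601"]

def rdd_number():
    """Draw one random-digit-dial number: a live exchange plus a random
    four-digit suffix, so unlisted households still have a chance of
    selection."""
    exchange = random.choice(EXCHANGES)
    suffix = random.randint(0, 9999)
    return f"{exchange}-{suffix:04d}"

sample = [rdd_number() for _ in range(10)]
print(sample)
```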
Data Dangers
You should allow enough time to do the music test correctly. If not, you'd be better off not doing one at all. Invariably an upcoming rating period or a bad sweep means programmers want quick answers. Shortcuts are taken in hook selection and preparation, recruiting and scheduling. Too often the results end up being flawed. But the real problem is that you won't know the results are wrong until it's too late. Always plan far enough in advance so you don't rush yourself and the research company. As a general rule, allow at least five weeks for a project to go from start to finish. If your demo or screening requirements are particularly stringent, add another week or two.
Then, in the case of a callout research project, you should do a tabulation of responses by zip code. Compare that tab with the zip-code distribution of the market's population to determine whether you're getting representative coverage from your sample. If not, you have either a flaw in your frame or a problem with your cooperation rate.
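A minimal sketch of that comparison, assuming you have a list of respondent zip codes and the population shares for the same zips (both hypothetical here):

```python
from collections import Counter

# Hypothetical respondent zip codes from a completed callout project.
respondent_zips = ["60601"] * 40 + ["60602"] * 35 + ["60603"] * 25

# Hypothetical population shares for the same zips (should sum to 1.0).
population_share = {"60601": 0.30, "60602": 0.35, "60603": 0.35}

counts = Counter(respondent_zips)
total = sum(counts.values())

# Flag zips where the sample drifts away from the population.
print(f"{'zip':>6} {'sample':>8} {'pop':>7} {'diff':>7}")
for zip_code, pop_pct in population_share.items():
    sample_pct = counts.get(zip_code, 0) / total
    diff = sample_pct - pop_pct
    print(f"{zip_code:>6} {sample_pct:8.1%} {pop_pct:7.1%} {diff:+7.1%}")
```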
Random samples that are ever-changing have the advantages of being projectable to the population and self-correcting over time, as long as your execution remains strong.
This is going to sound almost too basic to mention, but it's very important to be sure the hook is the proper part of the song to test. While most hooks are easily identifiable, the choice of which five- to 12-second segment to use becomes more important. Some research companies are not that familiar with Urban music ... and they occasionally miss the hook.
Never schedule more than 300 songs in one session. Attempting to do so is false economy. Yes, you might get feedback on more songs for less money, but the results will be suspect.
Panel Construction & Response Rates
If your sample consists of a panel, the same folks surveyed week after week, how is that panel constructed? If it's a panel of your listeners only, you have effectively created a built-in bias that discriminates against a broader point of view. Here are a couple of things to keep in mind for panel construction: Rotate the participants regularly, and be sure they represent a cross-section of your target audience. If you, or the company doing your callout research, employ participants who only listen to your station, your ego may be satisfied, but you probably won't advance in the ratings based on this research.
For those stations that are doing their own in-house callout research and using panels, I recommend creating three sub-panels. The first should consist of your brand-loyal listeners. Next you need one with people who listen to your station on a secondary basis (your P2s). Then you need one with people who listen within your product genre (format), but not to your station. The weekly or monthly interview with each panel member would target music mixes among brand-loyal and secondary listeners, and competitive advantages among your station's non-listeners.
Panels are projectable to population only when costly controls are used, as will be the case with those markets now being measured with Arbitron's PPM. Panel samples are more often used as inexpensive tracking devices to measure relative changes within a panel.
Once you have identified your sample frame, how do you select your sample? Well, if you're doing random digit dialing, is your method of selecting prefixes or prefixes and block groups prone to error? If you're generating random numbers, check the randomness of your number generator.
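One quick sanity check, sketched below, is a chi-square test on the final digits your generator produces; uniform digits should land well under 16.92, the 5% critical value for nine degrees of freedom. Python's own generator stands in here for whatever your system actually uses.

```python
import random
from collections import Counter

# Generate a batch of four-digit suffixes and tally their last digits.
n = 10_000
last_digits = [random.randint(0, 9999) % 10 for _ in range(n)]
observed = Counter(last_digits)

# Chi-square goodness-of-fit against a uniform digit distribution.
expected = n / 10
chi_sq = sum((observed[d] - expected) ** 2 / expected for d in range(10))

# Critical value for 9 degrees of freedom at the 0.05 level is 16.92.
print(f"chi-square = {chi_sq:.2f} (flag the generator if this exceeds 16.92)")
```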
Next, what are your editing rules? Are they universally accepted, understood and followed? What percentage of your entries have edit/key entry mistakes? Do you weight the data? Has a research professional who understands the potential pitfalls of weighting looked over your procedure? Weighting, particularly for Urban and Hispanic stations, can often inject more error into the sample than it's meant to correct. Has your processing system been audited? Has a programmer other than you or the individual who prepared your system looked over the calculations? All of these questions must be answered regularly and honestly. You would be surprised how prevalent processing problems and calculation mistakes are.
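One way to see the error weighting injects is Kish's effective-sample-size approximation, sketched below with hypothetical weights: the more uneven the weights, the smaller your sample effectively becomes.

```python
# Hypothetical weights: a few underrepresented cells pulled up hard.
weights = [1.0] * 80 + [3.5] * 20

# Kish's approximation: n_eff = (sum of weights)^2 / sum of squared weights.
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)

print(f"nominal n = {len(weights)}, effective n = {n_eff:.0f}")
```

With these numbers, 100 nominal respondents behave like roughly 69, so any weighting scheme should be reviewed with that shrinkage in mind.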
Now let's look at what we call the "hit rate." What percentage of the numbers being dialed are actually households? Depending on the efficiency of your frame, this number should be between 75-90%. If it's any higher, you're probably missing a lot of unlisted households or cell phone-only households. A lower number indicates your sample is inefficient. Then we have to look at the cooperation rate. What percentage of the households where the phone is answered converts to in-tab? While this is somewhat dependent on the length of the interview, you should look for a cooperation rate of 65% or higher.
Finally, we get to the response rate. Response rate is the in-tab divided by the number of qualified units or respondents in the designated sample. The denominator includes a percentage of no-answers, all refusals and all early terminates. The response rate should be at least 50%. A goal to strive for is above 55%.
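Putting the three rates together, here is a minimal sketch with hypothetical call dispositions. The share of no-answers counted as qualified units (0.3 below) is an assumption you would estimate for your own frame.

```python
# Hypothetical call dispositions from one callout project.
dialed           = 2000   # total numbers dialed
households       = 1600   # dials that reached a household
answered         = 1100   # households where someone picked up
in_tab           = 750    # completed, usable interviews
refusals         = 250
early_terminates = 100
no_answers       = 500    # households never reached

# Assumed share of no-answers that are actually qualified units.
EST_QUALIFIED_NO_ANSWER = 0.3

hit_rate = households / dialed                # target: 75-90%
cooperation_rate = in_tab / answered          # target: 65% or higher
response_rate = in_tab / (
    in_tab + refusals + early_terminates
    + EST_QUALIFIED_NO_ANSWER * no_answers    # target: 50% or better
)

print(f"hit rate:         {hit_rate:.0%}")
print(f"cooperation rate: {cooperation_rate:.0%}")
print(f"response rate:    {response_rate:.0%}")
```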
Once the fieldwork is done, understand how to process the results of the AMT. The most common mistake, when it's finally time to analyze the test results, is forgetting what the respondents were doing. They were telling you what they thought of a bunch of jams. They were not necessarily saying the best-testing songs belong on your station. That is a decision you make based on the type of people who participated and what you want the image and sound of your station to be. Maybe the participants dug a song, but only when they're in a particular mood or setting, or listening to a particular station. They might be shocked and upset to hear that song on your station.
It all comes back to the importance of careful initial screening. You must know who is rating the songs to know how to use the results.
Be sure the sample size of special breakouts is sufficient to make the numbers meaningful. If you ask the research company to provide you with a breakout of the reaction of a certain number of 25-54 males who like Urban music and spend at least two hours a day with radio, you might be basing a decision on only a handful of responses. The more specific the breakout, the less stable the data. Use sub-cells only to help confirm a decision based on a larger part of the total sample.
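To put a number on that instability, here is a rough sketch of the margin of error for a 50/50 result at the 95% confidence level, assuming a simple random sample: with only 25 respondents in a breakout, a score can swing almost 20 points either way by chance alone.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    observed in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 100, 25):
    print(f"n = {n:>3}: +/- {margin_of_error(n):.1%}")
```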
Auditorium music testing can be the edge you need to make your station a success. By ignoring the dangers inherent in this form of research, however, you can create more problems than you solve.
I know this editorial may be boring for some of you. To those, we apologize. But serving the others, the ones who are really into research and understand and appreciate the value of the information we've provided, is our mission.
Word!