Meeting Expectations
August 15, 2006
Reading Between the Music Research Lines
Today it's more important than ever for urban programmers and music directors to meet listeners' expectations. They must know what their core audience expects to hear when they hit the station's button, and they must deliver on those expectations consistently. Sharp programmers must be able to do one more thing: read between the lines.
A few years ago programmers picked a stack of songs and hoped they were the right ones. Now, with the advent of music research, the music selection process has become more sophisticated, and smart programmers are better able to determine whether well-testing songs are compatible with the format and have the ability to become favorites. Ours is a never-ending quest to find and format the most favorite songs on the radio and assemble the right play list. We can't effectively do this without delving into the topic of music research. Some of the approaches that helped programmers in the past are still valid, while others are not. Indeed, what we've learned is that some of the techniques most relied on today still have risks attached.
For example, auditorium music tests (AMTs), call-out and perceptuals have all run into problems. Some stations and companies have moved away from traditional callout as the difficulty (and the cost) of recruiting regular folks who could pass the screener and take the surveys increased. Caller ID and no-call lists were the first serious issues, but the dramatic increase in young listeners with cell phones and no landlines meant callout was eclipsed by technology. It became cost prohibitive to get enough P1s with random phone calls, and representative samples became more and more difficult to find.
Researching The Research
As we look at the newest call-out research techniques for rock, Top-40 and country, many of us in urban radio wonder if there isn't a better way. One concern is the quality of listeners' feedback. When urban stations are unable to give a new song enough spins to become familiar and show reasonable passion scores, there is a problem. Some of urban's most successful stations now have syndicated drive-time shows. This means that for eight hours of the day local programming is suspended, and if the syndicated shows don't air certain songs, the spin totals could drop even lower.
Lowered spin totals are not the only problem facing urban radio today. When you interrupt someone's life with a phone call, barrage them with hooks of the songs you want to test, then put them on the spot about how they felt about each jam, that's a problem. Is the approach getting top-quality results? Or will we have to accept just any old answer so the interview can end and the listener can return to dinner or whatever we interrupted? Callout on currents, done more frequently with fewer titles, is much stronger and can be effective provided the weekly spin totals get over 50.
Some companies use the same respondents over and over again. This tactic creates respondents who believe they're special and that their opinions are more valuable than anyone else's. This biases the research. Larger online samples are much better, making it easier to invite the freshest respondents and avoid the problem.
There is a theory that you should only use a respondent once. The reality is that this is often a virtual impossibility; the difficulty is getting a large enough callout database. My position is that as long as you space out the times you ask one person to take the survey, it can work. I personally know of a couple of research companies that allow respondents to participate more than they acknowledge publicly. Their concept is that when you get a live one, you don't want to toss them back in the ocean. On the other hand, studies show the average callout respondent wants to participate only about three times. Personally, I recommend no more than five times a year, maximum, with a rest period of at least 30 days between surveys. Why allow repeats? I have no problem letting the same respondent repeat participation in a callout survey as long as they've been qualified through a proper filter.
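If you manage your own callout database, that participation rule is easy to codify. Below is a minimal sketch in Python, assuming a simple in-house respondent log; the function and thresholds just mirror the limits suggested above (five surveys a year, 30 days of rest) and aren't any vendor's standard.

```python
from datetime import date

# Illustrative limits drawn from the guidelines above - assumptions, not a standard.
MAX_SURVEYS_PER_YEAR = 5
MIN_REST_DAYS = 30

def is_eligible(participation_dates, today):
    """Return True if a qualified respondent may take another callout survey."""
    # Rule 1: no more than five surveys in the trailing 365 days.
    past_year = [d for d in participation_dates if (today - d).days <= 365]
    if len(past_year) >= MAX_SURVEYS_PER_YEAR:
        return False
    # Rule 2: at least 30 days of rest since the most recent survey.
    if participation_dates and (today - max(participation_dates)).days < MIN_REST_DAYS:
        return False
    return True

# Example: a respondent last surveyed three weeks ago is still resting.
history = [date(2006, 5, 1), date(2006, 7, 25)]
print(is_eligible(history, today=date(2006, 8, 15)))  # False - only 21 days of rest
```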
Remember, call-out music research is a survey of both popularity and timing. We want to know how target listeners feel about each of the tracks being tested. So let them participate as long as they want; they'll tell you when they don't want to do it anymore. Some actually look forward to it. The net result is larger sample sizes than you'd get by insisting on some arbitrary rule that a respondent can only participate once or needs to rest for a week or two between surveys.
Stations have to be careful when they try to test their entire library through current callout. If they don't reach the target's passive audience, they can wind up with a distorted view of what the real audience wants to hear from the oldies base.
At-Home Panels
It occurred to researchers and consultants that certain types of listeners didn't want to be hassled with an intrusive call-out interview. So what did some do? They set up at-home panels. There are several ways to accomplish this, but the bottom line is that listeners are recruited (by a brief phone call, by responding to an over-the-air message, or through a mailer sent to your key zip codes) to rate records at their convenience, at home. That way, there are no long, intrusive calls and no having to listen to hook after hook. Instead, these panelists receive cassettes or CDs weekly, containing the cuts the station wants feedback on. They then listen to the songs at the time and place of their choice, when supposedly they're better able to reflect on how each song really hits their ears. Responses can be returned weekly: phoned in, mailed back (a questionnaire can be included with each cassette) or e-mailed to the station. Another way to achieve this is to use the Internet to send and receive the information, but we have found that response rates drop significantly because the respondent's attention is divided and some of our potential listeners still do not have Internet access.
Each listener is typically involved in the music research panel for a month, which allows tracking of their perspectives over several weeks. Usually, in return for their cooperation, listeners are sent premiums by the station - anything from station merchandise to coupons for restaurant trade to cash. One major-market programmer we know even sent cassette players to each respondent, and when the person was through being on the panel, they got to keep the players.
There are some drawbacks. For one thing, it's difficult to keep the sponsoring station or group confidential, which can affect the opinions of your panelists. In research it's usually best to keep the individual's responses as objective as possible by withholding the name of the project sponsor. Also, you can't always be sure who's filling out the questionnaire (which is the same problem Arbitron has with its diaries). When it's completed at home, the targeted person may give it to their brother, sister or friend to fill in, maybe as a lark, and you may wind up with data from a person not in your target. Nevertheless, at-home panels are an interesting alternative to call-out. It's one approach that a few urban stations have used.
How Auditorium Music Tests (AMTs) Really Work
The most widely used form of music research is still the AMT. It's rare for a music research technique to generate strong controversy, but auditorium music tests have, and still do. Why? Well, let's look at the basics and the controversies.
Several researchers, including myself, claim to have modified and adapted this technique for urban radio. The basic process is really simple. Approximately 100 target listeners are recruited, screened and placed in an auditorium or ballroom, then exposed to (ideally) 250-300 hooks they're supposed to rate. The result is a list of songs that are "OK" to play - and, just as importantly, a tally of songs that are burned, unfamiliar or are tune-out factors.
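For those curious how those tallies become a list, here's a rough sketch of the kind of aggregation involved. The cutoffs (40% unfamiliar, 35% burn, a 2.5 average) are placeholder assumptions for illustration only - every research firm sets its own scoring rules.

```python
# Rough sketch of AMT tallying. Each of the ~100 respondents rates each hook
# (say, 1-5) and can flag it as unfamiliar or as burned ("tired of it").
# All cutoffs below are illustrative assumptions, not an industry standard.

def classify_song(ratings, unfamiliar_count, burn_count, panel_size=100):
    avg = sum(ratings) / len(ratings)
    if unfamiliar_count / panel_size > 0.40:
        return "unfamiliar"   # too few listeners know it to trust the score
    if burn_count / panel_size > 0.35:
        return "burned"       # familiar, but worn out
    if avg < 2.5:
        return "tune-out"     # familiar, not burned, actively disliked
    return "OK to play"

# Example: a well-known song with solid scores and modest burn.
print(classify_song(ratings=[4, 5, 3, 4, 4] * 20, unfamiliar_count=8, burn_count=15))
# -> "OK to play"
```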
AMTs are best used in specialized cases. This approach should primarily be used to gauge reactions to gold or recurrents. As such, AMTs have become relatively widely used, primarily by urban AC stations. As opposed to ongoing music research such as callout or at-home panels, AMTs are used sporadically (largely due to cost factors). Most who use this technique look at it as a way to get a "fitness check-up" just before the start of a key sweep. It helps ensure your play list is fine-tuned for the crucial survey. While AMTs have become a heavily relied-upon form of music research, there are concerns.
Many record companies, for example, complain that such tests are becoming too dictatorial - taking the local programmer's judgment out of the picture - and that the resulting play lists are too conservative. Some broadcasters have the same feeling: that stations relying on AMTs become boring clones which then have to spend huge dollars on marketing to stand out from the competition (rather than having a locally created product that sounds musically different). With urban AC stations, certain ballads often test well in 25-49 demos, yet what happens if all stations are airing the same "safe" songs?
Another key factor is cost. Well-done music tests can run between $20,000-$60,000. This covers recruiting (you'll need to recruit at least 200 people in order to have 100 actually show up), renting the site for the evening's test (which usually takes about 90 minutes), cash premiums as incentives for people to come out after work (premiums run between $25-$50 per person, depending on the city/location and the demos involved) and data processing - not to mention the skilled moderator and the creation of the tape with the 300 or so hooks. If a researcher quotes you a figure way below that range, beware! They may be taking shortcuts that can hurt the quality of the data you receive.
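It's worth running the back-of-the-envelope arithmetic on those figures yourself. Here's a hedged sketch: only the recruit counts and the premium range come from the numbers above; the per-recruit, venue and processing costs are placeholders I've assumed for illustration.

```python
# Back-of-the-envelope AMT budget. Actual quotes vary by market, vendor and demo.
recruits = 200                   # invited, assuming roughly half actually show
attendees = 100
premium_per_person = 40          # within the $25-$50 range cited above
recruiting_cost = recruits * 30  # assumed ~$30 per completed recruit
venue_rental = 3000              # assumed ballroom rate for the evening
premiums = attendees * premium_per_person
data_processing = 8000           # assumed: tabulation, moderator, hook tape
total = recruiting_cost + venue_rental + premiums + data_processing
print(f"Estimated cost: ${total:,}")  # -> Estimated cost: $21,000
```

Even with conservative placeholders, the total lands right at the bottom of the quoted $20,000-$60,000 range, which is why a far lower quote should raise eyebrows.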
Given the amount of a typical investment in an AMT, you should expect top quality. As you consider doing such a test, ask prospective researchers whether they can customize the music results to your needs, providing information in a form you can understand (or do you have to fit into their standard software?). Also be vigilant regarding recruiting - the key to any successful research effort. We know a top station that was told by its researchers that recruits would be targeted urban fans - people who shopped in the neighborhood, attended urban concerts, etc. It turned out that even though the station paid extra for this special recruiting, the people were just standard recruits - not urban-targeted folks.
The final word, once the audience research data has been collated, rests with the local program director. Good researchers develop a solid national overview from their tests of what real audiences want to hear. When the research is completed, it's important that the PD and MD listen to the researcher's perspective. But it's their responsibility to create the station's sound by carving the music into categories and rotations. It's easier for programmers to make decisions when they're offered many different perspectives. With all the new competition for the listener's ear, programmers must take an active role in the process of managing and meeting listener expectations.
Word.