-
Meeting Expectations
September 28, 2010
The Dr. intones on "Meeting Expectations."
-
Urban Is Not A Format In a Box
There is an opportunity for an Urban format that is contemporary, but not "audio wallpaper," to really score -- provided it can meet or exceed the expectations of the audience. The format has to be hip and have the production values and sensibilities that the 18-34 female-leaning audience is into and will accept. The key to making this format work is to first understand that ours is not a format in a box.
Your station needs to be positioned as an at-work station with a lifestyle that young adult females can enjoy -- neither background nor strident. When they hit your button, you must deliver on their expectations instantly. There are plenty of listeners who don't want to hear an abundance of classic hits or ballads. This Urban hybrid is a niche format. It's hipper than even the hippest mainstream or Rhythmic Top 40, but totally palatable in an office. So how do we achieve this and meet the required expectations? The answer, of course, is research.
Back in the day, programmers picked a stack of songs and hoped they were the right ones. Now, with the advent of music research, the music selection process has become more sophisticated -- and smart programmers are better able to determine whether well-testing songs are compatible with the format and have the ability to become favorites. Ours is a never-ending quest to find and format the audience's favorite songs and/or assemble the right playlist. We can't do that effectively without delving into the topic of music research. Some of the approaches that helped programmers in the past are still valid, while others are not. What we've learned is that even the most reliable techniques relied on today still carry some risks.
For example, auditorium music tests (AMTs), callout and perceptuals have all run into problems. Some stations and companies have moved away from traditional callout as the difficulty (and the cost) of recruiting regular folks who could pass the screener and take the surveys increased. Caller ID and no-call lists were the first serious issues, but the dramatic increase in young listeners with cell phones and no landlines meant callout was being eclipsed by technology. It became cost-prohibitive to get enough P1s with random phone calls, and finding representative samples has become more and more difficult.
Callout Suffers With Lowered Spin Totals & Recycled Respondents
As we look at the newest callout research techniques, many of us in Urban radio wonder if there isn't a better way. One concern is the quality of listeners' feedback. When Urban stations are unable to give a new song enough spins for it to become familiar enough to show reasonable passion scores, there is a problem. Some of Urban's most successful stations now have at least one syndicated drive-time show. This means that for those hours local programming is suspended, and if the syndicated shows don't air certain songs, the spin totals could drop even lower.
Lowered spin totals are not the only research problem facing Urban radio today. And it's not just Urban stations; all stations are running into problems. When you interrupt someone's life with a phone call, barrage them with hooks of the songs you want to test, and then put them on the spot about how they feel about each jam, that's a problem. Is the approach getting top-quality results? Or will we have to accept just any old answer to get the interview over with, so the listener can return to dinner or whatever we interrupted? Today, callout on currents is done more frequently with fewer titles. Despite this, it can be effective, provided the weekly spin totals get over 50.
Some research companies use the same respondents over and over again. This tactic creates respondents who believe they're special and their opinions are more valuable than anyone else's. This biases the research. Larger online samples can be much better, making it easier to invite the freshest respondents and avoid the problem. But there are risks here to be avoided, which we will review later.
There is a theory that you should only use a respondent once. In reality, that is often a virtual impossibility; the difficulty is building a large enough callout database. I recommend spacing out the times you ask one person to take the survey. I personally know of a couple of research companies that allow respondents to participate more often than they acknowledge publicly. Their thinking is that when you get a live one, you don't want to toss them back in the ocean.
But on the other hand, studies show the average callout respondent wants to participate only about three times. Personally, I recommend no more than five times a year, maximum, with a rest period of at least 30 days between participations. Why allow repeats from respondents? Because it's easier. I have no problem allowing the same respondent to repeat participation in a callout survey as long as they've been qualified through a proper filter.
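For those keeping a database, here is a minimal sketch of how that repeat-participation filter might be expressed in code, assuming you track each respondent's past survey dates; the five-per-year cap and 30-day rest are the rules discussed above, and the function and field names are hypothetical, not any vendor's system.

```python
from datetime import date, timedelta

MAX_SURVEYS_PER_YEAR = 5   # recommended annual cap per respondent
MIN_REST_DAYS = 30         # minimum gap between participations

def eligible_for_callout(past_survey_dates, today=None):
    """Return True if a qualified respondent may take another callout survey."""
    today = today or date.today()
    # Rule 1: no more than five surveys in the trailing 12 months.
    one_year_ago = today - timedelta(days=365)
    recent = [d for d in past_survey_dates if d >= one_year_ago]
    if len(recent) >= MAX_SURVEYS_PER_YEAR:
        return False
    # Rule 2: at least a 30-day rest since the most recent survey.
    if past_survey_dates and (today - max(past_survey_dates)).days < MIN_REST_DAYS:
        return False
    return True
```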
Remember, callout music research is a survey of both popularity and timing. We want to know how target listeners feel about each of the tracks being tested. So let them participate as long as they want. They'll tell you when they don't want to do it anymore. Some actually look forward to doing it. The net result is you'll have larger sample sizes than you would by insisting on some arbitrary rule that a respondent can only participate once or needs to rest for a week or two between surveys.
Stations have to be careful when they try to test their entire library through current callout. If you don't reach the target passive audience, you can wind up with a distorted view of what the real audience wants to hear from the oldies base.
Also, testing lately shows fewer and fewer songs receiving high passion scores. It used to be that a song needed at least 75% total positive scores to be in a power gold category. Many stations now accept songs for power gold down into the 60% range.
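To make that threshold shift concrete, here is a hedged sketch of how a gold title might be bucketed from its total positive score. The 75% historical bar and the looser 60% bar come from the paragraph above; the "secondary gold" label and the function name are hypothetical illustrations, not anyone's actual category scheme.

```python
def gold_category(total_positive_pct, power_cutoff=75.0):
    """Bucket a library title from its total positive callout/AMT score.

    Historically power_cutoff was around 75; many stations now lower it toward 60.
    """
    if total_positive_pct >= power_cutoff:
        return "power gold"
    if total_positive_pct >= 60.0:
        return "secondary gold"
    return "review / rest"

# The same song makes power under today's looser standard,
# but not under the traditional 75% bar.
print(gold_category(68.0, power_cutoff=60.0))  # -> power gold
print(gold_category(68.0, power_cutoff=75.0))  # -> secondary gold
```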
Burn is accepted at a higher level today, primarily because we continue to test the same songs in every research project we conduct. We don't necessarily need to expand the number of titles in our library; we just need to look for new and fresh titles to test.
At-Home Panels
It occurred to researchers and consultants that certain types of listeners didn't want to be hassled with an intrusive callout interview. So what did some do? They set up at-home panels. There are several ways to accomplish this, but the bottom line is that listeners are recruited to rate records at their convenience and at home -- whether through a brief phone call, a response to an over-the-air message, a message on your website, or a mailer sent to your key zip codes. That way, there are no long, intrusive calls with hook after hook. Instead, these panelists receive custom-loaded iPods, MP3s or CDs weekly, containing the cuts the station wants feedback on. They then listen to the songs at the time and place of their choice, when supposedly they're better able to reflect on how each song really hits their ears. Responses can then be returned weekly -- phoned in, mailed back (a questionnaire can be included with each MP3 or CD), or e-mailed to the station.
Each listener is typically involved in the music research panel for a month, which allows for tracking of their perspectives over a several-week period. Usually, in return for their cooperation, listeners are sent premiums by the station -- anything from station merchandise to coupons for restaurant trade to cash. One major-market programmer we know even sent custom-loaded iPods to each respondent, and when the person was through being on the panel, they got to keep the iPod.
There are some drawbacks. For one thing, it's difficult to keep the sponsoring station or group confidential, which can bias the opinions of your panelists. In research, it's usually best to keep the individual's responses as objective as possible by withholding the name of the project sponsor. Also, you can't always be sure who's filling out the questionnaire (the same problem Arbitron has with its meters and diaries). When it's completed at home, the targeted person may give it to their brother, sister or friend to fill in -- maybe as a lark -- and you may wind up with data from a person not in your target. Nevertheless, at-home panels are an interesting alternative to callout, and one approach that a few Urban stations have used.
Improved Online Research
One of the most widely used forms of music research is still the auditorium music test (AMT). The only difference is that the technique has changed: research companies are now attempting to do at least a portion of it online. It's rare for a music research technique to generate strong controversy, but online music tests have and do. Why? Well, let's look at the basics and the controversies.
As stations populated their e-mail databases, many hoped it would lead to a cheap way of obtaining reliable research. Well, it's cheap, but not so reliable. To get more qualified respondents and reliable data online, we need to cast a wider net: broader panel recruitment strategies, such as telephone outreach, and the ability to incentivize panelists with paid premiums -- not a prize a participant "might win," which is a big turn-off.
Most radio station databases were designed around contests and promotions. The problem is that they attract a disproportionate number of extreme fans and "contest pigs." As a result, many potential online respondents in station databases are not in the defined target age groups; some don't even listen to the station. Once you get past the screeners and have a group of respondents that reflects the station's audience, drawn from updated zip-code penetration data, you can go to the next steps.
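Here is a hedged sketch of the kind of screener that step implies, assuming each database member carries an age, a home zip code and some measure of actual listening. Every field name, zip code and cutoff below is an illustrative assumption, not a real screener spec.

```python
TARGET_ZIPS = {"30310", "30311", "30314"}  # hypothetical high-penetration zip codes
TARGET_AGES = range(18, 35)                # the 18-34 female-leaning target

def passes_screener(member):
    """Keep only database members who plausibly reflect the station's audience."""
    if member["age"] not in TARGET_AGES:
        return False
    if member["zip"] not in TARGET_ZIPS:
        return False
    # Screen out people who signed up only for contests and never actually listen.
    if member.get("weekly_listening_hours", 0) < 1:
        return False
    return True

database_members = [
    {"age": 27, "zip": "30310", "weekly_listening_hours": 6},   # keeps
    {"age": 52, "zip": "30310", "weekly_listening_hours": 10},  # out of demo
    {"age": 24, "zip": "60601", "weekly_listening_hours": 0},   # contest-only signup
]
panel = [m for m in database_members if passes_screener(m)]
print(len(panel))  # -> 1
```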
The basic process is really simple. Approximately 100 target listeners are recruited, screened and placed in an auditorium or ballroom, then exposed to (ideally) 150-200 hooks they're supposed to rate. The result is a list of songs that are "OK" to play -- and just as importantly, a tally of songs that are burned, unfamiliar or are tune-out factors.
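A minimal sketch of the kind of tally that comes out of that process is below, assuming each respondent rates each hook and also flags familiarity and burn. The rating scale, thresholds and field names are hypothetical illustrations, not any research vendor's actual format.

```python
from collections import defaultdict

def tally_amt(responses, min_favorable=3.5, burn_threshold=0.30, unfamiliar_threshold=0.25):
    """Aggregate per-hook ratings from an auditorium test.

    responses: list of dicts like
        {"song": "Title", "rating": 1-5, "familiar": bool, "burned": bool}
    Returns the songs that look OK to play and the songs flagged as
    burned, unfamiliar or potential tune-outs.
    """
    by_song = defaultdict(list)
    for r in responses:
        by_song[r["song"]].append(r)

    ok, flagged = [], []
    for song, rows in by_song.items():
        n = len(rows)
        avg = sum(r["rating"] for r in rows) / n
        burn = sum(r["burned"] for r in rows) / n
        unfam = sum(not r["familiar"] for r in rows) / n
        if unfam >= unfamiliar_threshold or burn >= burn_threshold or avg < min_favorable:
            flagged.append((song, round(avg, 2), round(burn, 2), round(unfam, 2)))
        else:
            ok.append(song)
    return ok, flagged
```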
AMTs are best used in specialized cases. This approach should primarily be used to gauge reactions to gold or recurrents, and as such it has become relatively widely used, primarily by Urban AC stations. Unlike ongoing music research such as callout or at-home panels, AMTs are used sporadically (largely due to cost factors). Most who use this technique look at it as a way to get a "fitness check-up" just before the start of a key sweep; it helps ensure your playlist is fine-tuned for the crucial survey.
While AMTs have become a heavily relied-upon form of music research, there are concerns. Many record companies, for example, complain that such tests are becoming too dictatorial -- taking the local programmer's judgment out of the picture -- and that the resulting playlists are too conservative. Some broadcasters have the same feeling: that stations relying on AMTs become boring clones, which then have to spend huge dollars in marketing to stand out from the competition (rather than having a locally created product that sounds musically different). With Urban AC stations, certain ballads often test well in 25-49 demos, but what happens if all stations are airing the same "safe" songs? The "music freaks" simply reset their presets.
Another key factor is cost. Well-done music tests can run at least $20,000. This covers recruiting (you'll need to recruit at least 200 people in order to have 100 actually show up); renting the site for the evening's test (which usually takes about 90 minutes); and cash premiums as incentives for people to come out after work and participate (premiums run between $25-$50 per person, depending on the city/location and the demos involved). Then there is the cost of food and beverages and data processing -- not to mention the skilled moderator and the creator of the tape with the 200 or so hooks. If a researcher quotes you a figure way below that range, beware! They may be taking shortcuts that can hurt the quality of the data you receive.
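As a rough back-of-the-envelope check on that figure, the sketch below simply adds up the line items mentioned above. The recruit counts and premium range come from the paragraph; the per-recruit, venue, catering, processing and moderator dollar amounts are illustrative assumptions, not quoted rates.

```python
# Illustrative AMT budget, assuming the figures discussed above.
attendees          = 100    # listeners you actually need in the room
recruits           = 200    # recruit roughly 2x to cover no-shows
premium_per_person = 40     # $25-$50 cash incentive, mid-range
recruit_cost_each  = 30     # assumed per-recruit screening/phone cost
venue_rental       = 2500   # assumed ballroom rental for the ~90-minute session
food_and_beverage  = 1500   # assumed catering for attendees
data_processing    = 4000   # assumed tabulation and reporting
moderator_and_tape = 5000   # assumed skilled moderator plus hook-tape prep

total = (recruits * recruit_cost_each
         + attendees * premium_per_person
         + venue_rental + food_and_beverage
         + data_processing + moderator_and_tape)
print(f"Estimated AMT cost: ${total:,}")
# -> Estimated AMT cost: $23,000 with these assumptions,
#    in line with the "at least $20,000" estimate above.
```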
Given the size of a typical investment in an AMT, you should expect top quality. As you're considering such a test, ask prospective researchers whether they can customize the music results to your needs, providing information in a form you can understand (or will you have to fit into their standard software?). Also, be vigilant about recruiting -- the key to any successful research effort. We know a top station that was told by its researchers that recruits would be targeted Urban fans -- people who shopped in the neighborhood, attended Urban concerts, etc. It turned out that even though the station paid extra for this special recruiting, the people were just standard recruits, not specially Urban-targeted folks.
The final decision-making word -- once the audience research data has been coordinated -- rests with the local program director. Good researchers have a solid national overview of what the target audience wants to hear from their tests. When the research is completed, it's important that the PD and MD listen to the researcher's perspective. But ultimately it's the PD's and MD's responsibility to create the station's sound by carving the music into different categories and rotations.
It's easier for programmers to make decisions when they're offered many different perspectives. Pay attention to the research-driven audience expectations, even if you disagree; there could be a warning in there. It's kind of like your smoke detector going off at 3 a.m.: you wouldn't jam in your earplugs, turn over and try to go back to sleep while listening to "Wake Up Everybody."
Word.