An Insight into Jon Coleman: Part 2

In Part 2 of An Insight into Jon Coleman, we explore how Coleman Insights has been an innovator in media research worldwide, examine some of the common mistakes and false assumptions programmers make, and look at case studies and the strategies used to build ratings success.

Greg Smith: What are some of the common mistakes or false assumptions that programmers make?

Jon Coleman: You know, most programmers today don’t make a lot of mistakes.  The men and women who have grown up in the era of research and in large companies with multiple holdings have, in general, been trained better than previous generations.  They also have more resources, which allow them to do a better job of selecting the right strategy and tactics.  I am not saying anything negative about older PDs; I am just saying that the resources and support needed to make a good PD are greater, so there are fewer false assumptions or mistakes.

That said, you still see assumptions and mistakes.  We always will.  There are several that I think are most common.

First on my list would be assuming a level of involvement that simply does not exist.  As I mentioned above, listeners are not paying attention.  If you assume they are involved, then you run the risk of having a station that is not clearly defined.  Listeners cannot easily figure out your station’s format or points of differentiation without help…that help comes in the form of on-air and off-air marketing.

Assuming listeners are involved leads to complicated morning show roles and characters, complicated contests, poorly named and marketed features and special shows.  The result is that listeners never fully appreciate the station, its format, its unique values and how the station benefits them.  This shows up in research as a station with poorly developed images and that usually means weak ratings.

Second would be focusing entirely on tactics under the belief that you can manipulate the meter or the audience.  In the U.S. we had largely moved away from an obsession with tactics until PPM.  That has made programmers fuss over stop-set placement, appointment listening and more.  All of this has a role in modern programming, but when it is done at the expense of understanding the core values of the audience and programming to them, it can lead to stations that have lost sight of their true position.  An example might be a station so focused on “milking another quarter hour from the audience” that it drifts musically from its original focus.  I have seen stations that have inadvertently become too broad or weakly imaged, so that they become vulnerable to a more focused competitor.

There are also assumptions some make about music that can get you into trouble.  The biggest mistake here is relying too much on music scores and not filtering the results of music studies through the strategic position of the station.  Stations like this can drift from hard to soft, old to new, pop rock to rhythmic by virtue of one music test.  There is no doubt that stations need to respond to the tastes of the audience, but library testing and new music testing are not the best way to follow music tastes.  These research techniques are very valuable, but a small or poorly constructed sample and no strategic overview can take a station off course.  Examples include “oldies” stations getting too contemporary too fast, or CHR stations over-chasing today’s flavour.

These are just some of the mistakes or assumptions that can be made.  Others involve breakfast shows and how to construct them, based on faulty assumptions about what listeners want.

GS: When you take on a new radio station client, what are the key questions you ask?

JC: Normally, our questions are not that unique; it’s the answers that are most instructive.  Typically, we first find out what the objective data tells the client about the station: ratings, library size, format breadth, etc.  We do this by reviewing every aspect of the station’s ratings, programming, marketing and personality profile.  We do a complete analysis of the station’s and its competitors’ music, assuming it is a music station.  After a conversation of anywhere from an hour to a couple of hours, we are usually able to recommend which type of research will be most valuable to them.  Should we do a perceptual study, which we call a Plan Developer; a music study (FACT360 library study or Integr8 new music research); a mediaEKG moment-to-moment content analysis; or some other type of research?

GS: Take us through some client case studies & the strategy used to build ratings success. What are some of your best outcomes?

JC: This is my favourite thing about research.  Being a part of a winning station launch or format repositioning is awesome.  It’s awesome because right then and there you have new friends forever. You get to high five your clients and feel like a genius right along with them.

Over the years there have been a lot of very successful launches.  I will give you an example from back in the day and another more recently.

Around 1985 (plus or minus a year or two) we were hired by Nationwide Communications to research their newly acquired KZZP in Phoenix, Arizona.  The new program director was Guy Zapoleon.  Guy and I did not know each other before then, but now we are lifelong buddies.

We did some focus groups (now called 20/20 Focus Groups) and discovered that KZZP and KOPA were undifferentiated Top 40 stations.  Both had about a five share and listeners could not tell one from the other.  The music images were the same, no personality really stuck out, neither had a single memorable contest or station feature.  As a result they were in a dead heat.

Based on the research we recommended positioning the station as the leader by offering a level of marketing and excitement that neither station had offered to that point.

The research gave us every indication that KZZP could become the leader by claiming it and by becoming a bigger-than-life station.  One tactic we used was the Phrase That Pays contest.  The phrase was long and hard to remember, but if a listener did remember it when called randomly on the phone, they would win from $1,000 up to $10,000.  It was “KZZP 104.7 FM The Number One Hit Music Station”.  Well, within 90 days KZZP went from being tied at a 5 share to a dominant 10 share.  KOPA soon changed format.  That was 30 years ago and KZZP is still the “Number One Hit Music Station”.

A more recent example is Amp in Los Angeles.  Amp attacked long-time market leader Kiss.  This is still a live market situation, so I can’t share too much, but I can say this: Amp attacked a market-dominant Clear Channel station with very similar music, but the similarity ends there.  Amp is basically the opposite of Kiss.  Kiss is high content, Amp is low content.  Kiss is “hype”, Amp is anti-hype.  Kiss has a big, highly produced morning show; Amp does not.  I think the key to Amp’s continued success was realistic expectations and a vigilant focus on staying true to the original strategy.  Usually, when stations launch with a unique strategy, they feel they can’t stay true to it, so they veer off course and start mimicking the leader.  In that scenario, the leader almost always wins.

GS: Is there one particular station or program that most people thought couldn’t be revived that you’re proud of bringing back to life?

JC: In the 80s, WCBS-FM in New York was an Oldies station that was faltering in the ratings.  The corporate guys wanted to turn it into a CHR station following the Hot Hits formula (there are no Hot Hits stations left in the format 30+ years later).  We did research and found that listeners loved the station except when it played contemporary music.  WCBS called this music “future gold”.  The listeners called it “crap”.  When WCBS focused on the basic Oldies premise of the station and stopped assuming that listeners wanted recurrent music, it exploded.  Now, 30+ years later, it is still the leading station in New York.

GS: Coleman Insights is an industry leader in media research.  What are some of the research techniques you’ve pioneered? 

JC: There are quite a few things we have either pioneered or refined.

For example, when we started, Todd Wallace was already doing callout for radio stations.  Thirty-five years later, we are still doing new music testing, but we do it differently: we go much deeper in our analysis and provide more than a simple song list.

When we started doing callout research, companies were not playing song hooks down a phone line.  They were testing songs by reading the name of the artist and the name of the song.  We very quickly saw the shortcomings of that approach and began actually playing the songs.

In the perceptual research done for much of the 80s, radio format appeal was measured the same way: with verbal descriptions of the format, without music.

We noticed in our research at the time that Rock-based formats almost always did better than Rhythmic-based formats in this type of research.  The reason was that Rock artists were household names, while Rhythmic artists were not.  In fact, some Rhythmic artists’ names were downright off-putting to the white suburban teen or young adult.  With artist names like Public Enemy, Arrested Development and Above The Law, many consumers reacted negatively.  The names were scary, but the songs were not.  So, when we started playing song hooks as a way to measure format appeal, everything changed.  We were much more accurate in predicting format appeal.  And, not surprisingly, Rhythmic music began to blossom as the basis of American radio formats.

We also pioneered the use of “expectation”, or image, in the measurement of song appeal in callout and auditorium library tests.  We first discovered how helpful this would be in our focus groups for American Country music stations.  There was a time in the late 80s when Country stations were experimenting with playing Rock music.  There was a belief, later proven wrong, that Country radio could only maximize its audience potential if it could dip into Rock music that would also appeal to Country listeners.  In our focus groups we played some of these songs to Country listeners, exploring their interest in hearing them on Country radio.  What we found was that some of the songs were well liked, but that listeners were surprised when they heard them in a Country mix.  On further probing we found that, though liked, this was not the type of music they listened to Country radio to hear.  When they said they did not expect it, they meant they did not like it on that kind of station.  Expectation, under the name “Fit”, was born as a part of all of our music research.  The idea of living up to a product image was not new in marketing, but it was new in music research.

Fit has been controversial in some quarters.  It is criticized because some programmers feel that if you are locked in on what listeners currently expect, you cannot be flexible enough to follow an emerging trend.  We agree with that concern, which is why we preach never to become a slave to Fit, especially with brand-new music of a genre that is becoming, or has recently become, very popular with your listeners.

More recently, we have introduced time spent listening (TSL) into our music research, so that heavy listeners have proportionately more impact on scores than lighter listeners.  Now we have two scores for songs: a Cume score and a TSL-weighted score.  A 10-hour listener can have ten votes, while a one-hour listener gets only one vote in the TSL-weighted score.  We identify songs that really drive consumption with our TSL Max score.
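To make the “ten votes versus one vote” arithmetic concrete, here is a minimal sketch of how a Cume score and a TSL-weighted score might be computed.  The 1-to-5 rating scale, the field names and the function names are illustrative assumptions; Coleman Insights’ actual scoring methodology is proprietary and undoubtedly more involved.

```python
# Hypothetical sketch of Cume vs. TSL-weighted song scoring.
# The 1-5 rating scale and all names here are illustrative assumptions,
# not Coleman Insights' actual (proprietary) methodology.

from dataclasses import dataclass


@dataclass
class Response:
    rating: float        # respondent's score for the song (1-5 scale assumed)
    weekly_hours: float  # respondent's time spent listening (TSL) per week


def cume_score(responses: list[Response]) -> float:
    """Unweighted average: every respondent counts once."""
    return sum(r.rating for r in responses) / len(responses)


def tsl_weighted_score(responses: list[Response]) -> float:
    """TSL-weighted average: a 10-hour listener carries ten times
    the weight of a 1-hour listener."""
    total_hours = sum(r.weekly_hours for r in responses)
    return sum(r.rating * r.weekly_hours for r in responses) / total_hours


if __name__ == "__main__":
    panel = [
        Response(rating=5.0, weekly_hours=10.0),  # heavy listener loves the song
        Response(rating=2.0, weekly_hours=1.0),   # light listener dislikes it
    ]
    print(f"Cume score:         {cume_score(panel):.2f}")          # 3.50
    print(f"TSL-weighted score: {tsl_weighted_score(panel):.2f}")  # 4.73
```

A song whose TSL-weighted score runs well ahead of its Cume score is, in Coleman’s terms, the kind that drives consumption, which appears to be the intuition behind the TSL Max score.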

You can read Part 1 here.

In Part 3 of An Insight into Jon Coleman, Greg Smith asks Jon what advice he has for PDs and Content Directors on getting the best out of on-air talent, and what key ingredients programmers should keep in mind for making great radio.
