In this era of Big Data and user profiles from mobile & smart devices, is there really still a place for traditional radio research methods?
This is the first article in a series of best practices for radio research, written by Stephen Ryan, an expert in this field. Today’s research for radio stations is based on a legacy of tried and tested methods. In a time of Silicon Valley-driven technology evolution, where everyone with a connected device is monitored and analyzed down to the tiniest detail, is there really still a place for these traditional methodologies?
“It’s not simply about size; it’s more about complexity”
Much is made these days of the use of Big Data by companies such as Amazon, Facebook and Google, which use complex data and predictive analytics to profile users and personalize experiences. Recent market valuations of these companies illustrate the payback on the investment in developing their sophisticated algorithms. Closer to our industry, music streaming services like Spotify, Deezer and Pandora spend vast amounts of time, money and resources on using Big Data to fine-tune the listener experience and strive for ultimate personalization.
This all raises the question: where does Big Data fit with radio, and is it relevant? More importantly, if these data are so valuable, is it time to say goodbye to some of our traditional research analysis methodologies and focus on Big Data processing instead? I will argue that while programmers should use every available resource to identify listeners’ desires and needs, including Big Data analysis from the likes of Twitter, we should still use legacy research methods to quantify and understand exactly what our listeners are doing – and, more importantly, why they are doing it. Our emphasis here is on music research, but the same applies to other forms of established radio research.
There is an important distinction to make. The availability of detailed data sets (such as through Portable People Meter analysis) is not the same as Big Data analysis in its truest sense. Big Data depends on the ability to analyze complex interactional and transactional behavior in an attempt to discover patterns and trends. It’s not simply about size; it’s more about complexity. The vast majority of these data are unstructured, and their analysis requires advanced computational power and methods that traditional data analysis simply cannot cope with. Legacy data analysis relies on a structured approach; the research methods we have become so dependent on tend to be based on relational database models. Big Data analysis takes wildly unstructured and complex data and attempts to make them structured and understandable through patterns and trends: taking random individual behavior and trying to identify commonalities.
“Think of it like a massive focus group”
So, Big Data analysis is not the same as gaining greater insight from a more granular or detailed data set that is generated through a traditionally structured methodology. For example, PPM methodology provides minute-by-minute listening data – compared to the normal quarter-hour-by-quarter-hour data captured by diary (or yesterday recall) systems. We can say that the Portable People Meter provides greater detail and more data points, but it’s not Big Data in its purest form. All ratings systems tend to operate in a structured manner with a sample (or panel, in the case of PPM) being specifically set by criteria such as age, gender, race, ethnicity, and income. Radio ratings research needs to reflect the listening behavior of the local market’s population through appropriate sample sizes (and weighting where needed). So, in this regard, detailed does not necessarily mean complex.
In its basic construct, radio is a simple business model. We use our creativity and marketing techniques to produce an attractive 1-to-many service. Success is judged by how many people listen, for how long, and how often. Big Data is used for 1-to-1 services via predictive analytics and content personalization. While this is almost the inverse of radio, it ends up with a similar aim: identifying commonalities in individual behavior.
So, is Big Data relevant to radio? Of course it is! Monitoring Shazam enquiry trends for new songs helps to identify those with potential, while Spotify and other streaming trend data can help us to identify the real hits we should be playing. But to find the songs that really matter to your specific audience, we should still ask that audience through structured research. Big Data does not replace legacy research; it flags and helps to point us in the right direction. Think of it like a massive focus group where opportunities and issues can be highlighted, but then need to be tested in a more structured way.
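As a rough illustration of that flagging role – and assuming a hypothetical weekly export of lookup counts per song, since this is not any real Shazam or Spotify API – a simple trend screen might look like this in Python:

```python
import pandas as pd

# Hypothetical weekly trend export (not a real Shazam/Spotify API):
# one row per song per week, columns: song, week, lookups.
trends = pd.read_csv("weekly_lookups.csv")

# Week-over-week growth per song.
trends = trends.sort_values(["song", "week"])
trends["growth"] = trends.groupby("song")["lookups"].pct_change()

# Flag songs whose lookups grew strongly in the latest week. These are
# candidates for structured testing (e.g. callout), not automatic adds.
latest = trends.groupby("song").tail(1)
print(latest[latest["growth"] > 0.5][["song", "lookups", "growth"]])
```

The songs such a screen surfaces are exactly the opportunities to put in front of your own audience through structured research.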
“It tells us what potentially happens when the song plays, but it doesn’t tell us why”
When we talk about monitoring Shazam or Spotify trends, we are of course looking at third-party-generated analysis. Should radio develop its own Big Data sets? Such data and analytics are both resource- and time-intensive. It may be an opportunity for large radio groups with sufficient financial resources, but it’s likely to be well beyond the means of smaller groups and single stations. Also, while our listeners are complex as individuals, how they consume our service is simple. In pure transactional terms, the interaction is far from complex, and it is with complexity that Big Data analytics come to the fore.
Even if resources do allow, how much information do you really need, and what exactly are you going to do with it later? Research should generate actionable results, eliciting views and opinions that either support parts of a strategy or flag issues that need attention. In a Big Data world, we might be able to know not only that person X does most of his radio listening in a car, but also that the car is a blue Volvo, that its average speed is 50 miles/hour, and that the average occupancy of the vehicle is 2. Great, but do you have the time and resources to analyze the value of all this, and if so, what exactly is the benefit? There are plenty of innovative web- and app-based techniques for identifying listener behavior. However, for the most part, these techniques illustrate what happens, but not necessarily why.
As an example, there are a number of analytical tools available that can match Portable People Meter data to the reconciled schedule play-out logs. Cross-referencing allows us to track the audience’s behavior as each song or segment plays across the day. By examining listening each time a certain song plays, we might spot a consistent trend: whenever that song airs, the audience dips. However, much like radio ratings, this tells us what potentially happens when the song plays, but it doesn’t tell us why. And to spot a consistent trend in the first place, a song needs sufficient exposure through a reasonable rotation. If the song turns out to be a turkey, hasn’t the damage already been done?
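To make the cross-referencing idea concrete, here is a minimal sketch assuming two hypothetical files: a minute-by-minute audience estimate (PPM-style) and a reconciled play-out log with one row per song start. The file and column names and the three-minute window are assumptions for illustration only:

```python
import pandas as pd

# Hypothetical inputs:
#   audience_by_minute.csv: columns minute (datetime), audience (estimate)
#   playout_log.csv:        columns start (datetime), song
audience = pd.read_csv("audience_by_minute.csv", parse_dates=["minute"])
playlog = pd.read_csv("playout_log.csv", parse_dates=["start"])

audience = audience.sort_values("minute").set_index("minute")["audience"]

def audience_delta(start, span_minutes=3):
    """Average audience during the first minutes of a play,
    minus the audience in the minute just before it started."""
    before = audience.asof(start - pd.Timedelta(minutes=1))
    during = audience.loc[start : start + pd.Timedelta(minutes=span_minutes)].mean()
    return during - before

playlog["delta"] = playlog["start"].apply(audience_delta)

# Average audience change per song across all of its plays. A consistently
# negative value is a flag, not an explanation: it shows what happens
# when the song airs, never why.
print(playlog.groupby("song")["delta"].agg(["mean", "count"]).sort_values("mean"))
```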
“The nuances that an experienced programmer can spot are simply not there”
While the US has pushed through the adoption of the Portable People Meter, many countries are still reliant on diary and yesterday-recall methods. This includes the UK, where RAJAR has retained the diary methodology while it investigates concerns over methodology and cost. However, stations in non-PPM territories can still get minute-by-minute data using logs from their streaming output and/or a station app for listening. Again, cross-referencing the logs with the reconciled schedule allows us to spot trends, but the issue remains the same: cross-referencing tells us what happens, not why.
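For non-PPM stations, the minute-by-minute audience curve itself can be approximated from streaming session logs. A sketch, again with a hypothetical log format (one row per session with connect and disconnect timestamps):

```python
import pandas as pd

# Hypothetical streaming log: one row per listening session,
# columns: connect (datetime), disconnect (datetime).
sessions = pd.read_csv("stream_sessions.csv", parse_dates=["connect", "disconnect"])

# +1 listener at each connect, -1 at each disconnect, rounded to the minute;
# a running total over the sorted events gives concurrent listeners.
events = pd.concat([
    pd.Series(1, index=sessions["connect"].dt.floor("min")),
    pd.Series(-1, index=sessions["disconnect"].dt.floor("min")),
]).sort_index()

listeners_by_minute = events.groupby(level=0).sum().cumsum()

# Fill the minutes with no connect/disconnect events, then join this curve
# with the reconciled schedule exactly as in the PPM sketch above.
print(listeners_by_minute.resample("min").ffill().head())
```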
Music research, such as callout, allows you to follow the life cycle of a song. If there is an issue, perhaps it’s unfamiliarity or (in the latter part of the cycle) burn. On numerous occasions, I’ve seen a new song with high unfamiliarity – perhaps because it was tested prematurely – combined with a negative score. An experienced program director can see the nuances: if a new song is given further exposure, that negativity often dilutes as the song becomes more familiar. If decisions were based solely on what listeners did when the song was first played, a lot of songs with potential would be ripped from the playlist!
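To make that nuance concrete, here is a toy decision rule; the thresholds are purely illustrative assumptions, not industry standards, but they capture the idea that a negative score only counts against a song once enough listeners actually know it:

```python
def callout_verdict(positive_pct, negative_pct, unfamiliar_pct):
    """Toy reading of one song's callout result. All inputs are
    percentages of respondents; thresholds are illustrative
    assumptions, not industry standards."""
    if unfamiliar_pct > 40:
        # Too few listeners know the song to judge it yet; negativity
        # at this stage often dilutes as familiarity grows.
        return "keep exposing - too early to judge"
    if negative_pct > 30 and positive_pct < 20:
        return "consider dropping"
    return "keep in rotation"

# A prematurely tested song: high unfamiliarity plus a negative score
# reads as "too early to judge", not as a drop signal.
print(callout_verdict(positive_pct=15, negative_pct=35, unfamiliar_pct=55))
```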
We’ve come a long way from listener requests and dedication letters being the only feedback channels. Now we have sophisticated interaction systems available through websites and apps. Listeners can preview songs, vote on songs, and potentially influence the upcoming playlist at the click of a button. However, this voting is usually confined to an absolute choice of ‘like’ or ‘dislike’. Once again, the nuances that an experienced programmer can spot with music research are simply not there.
“Tried and tested radio research methods still remain relevant”
Listener behavior tracking through analysis of Portable People Meter and Internet radio streaming data (or the interactive voting results through the station’s website or app) is a valuable tool for any PD. However, much as with focus groups, the results should only be used as a potential flag for further research, rather than as an end in themselves. Further investigation can be done through callout research or auditorium music testing.
As mobile and smart devices continue to increase in sophistication and speed, there’s a growing array of tools for the modern radio programmer to understand more about their audience. But to truly quantify and qualify listeners’ desires, tried and tested radio research methods still remain relevant. We just need to ensure that our ability to capture and gather sample data continues to evolve with (and remains compatible with) mobile and smart device technology.
This is a guest post by (radio) research expert Stephen Ryan, whom we interviewed about why Radio Programming Should Avoid Any ‘Chalk & Cheese’ and Research Relies On Reliable Data. Stephen can be contacted through his website, www.ryanresearch.com.