Survey Takers Average Two Panel Memberships and Name Names
Who exactly is taking your survey?
It’s an important question beyond the obvious reasons, and odds are your screener isn’t providing all of the answers.
Today’s blog post will be the first in a series previewing some key findings from a new study exploring the characteristics of survey research panelists.
The study was designed and conducted by Kerry Hecht, Director of Research at Ramius. OdinText was enlisted to analyze the text responses to the open-ended questions in the survey.
Today I’ll be sharing an OdinText analysis of results from one simple but important question: Which research companies are you signed up with?
Note: The full findings of this rather elaborate study will be released in June in a special workshop at IIEX North America (Insight Innovation Exchange) in Atlanta, GA. The workshop will be led by Kerry Hecht, Jessica Broome and yours truly. For more information, click here.
About the Data
The dataset we’ve used OdinText to analyze today is a survey of research panel members with just over 1,500 completes.
The sample was sourced in three equal parts from the leading research panel providers Critical Mix and Schlesinger Associates and from the third-party loyalty rewards site Swagbucks.
The study’s author opted to use an open-ended question (“Which research companies are you signed up with?”) instead of a “select all that apply” variation for a couple of reasons, not the least of which is that the latter would have needed to list more than a thousand possible panel choices.
Only those panels that were mentioned by at least five respondents (0.3%) were included in the analysis. As it turned out, respondents identified more than 50 panels by name.
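For readers curious about the mechanics, here is a minimal sketch (in Python, not OdinText itself) of the kind of tally this involves: match each open-ended answer against a list of known panel aliases, count how many respondents name each panel, and drop panels named by fewer than five people. The alias map and column name below are hypothetical toy values.

```python
# Minimal sketch (Python, not OdinText): tally panel mentions in an
# open-ended answer column and keep panels named by >= 5 respondents.
# The alias map and column name below are hypothetical.
from collections import Counter

import pandas as pd

ALIASES = {
    "Swagbucks": ["swagbucks", "swag bucks"],
    "Critical Mix": ["critical mix", "criticalmix"],
    "Schlesinger Associates": ["schlesinger"],
}

def panels_mentioned(answer: str) -> set:
    """Return the set of known panels named in one open-ended answer."""
    text = answer.lower()
    return {panel for panel, phrases in ALIASES.items()
            if any(phrase in text for phrase in phrases)}

def tally(responses: pd.Series, min_mentions: int = 5) -> pd.Series:
    """Count respondents mentioning each panel; drop rarely named panels."""
    counts = Counter(panel
                     for answer in responses.dropna()
                     for panel in panels_mentioned(answer))
    counts = pd.Series(counts).sort_values(ascending=False)
    return counts[counts >= min_mentions]

# Hypothetical usage:
# df = pd.read_csv("panel_survey.csv")
# print(tally(df["companies_signed_up_with"]))
# Average panels named among those naming at least one (2.3 in this study):
# named = df["companies_signed_up_with"].dropna().map(
#     lambda a: len(panels_mentioned(a)))
# print(named[named > 0].mean())
```

In practice the hard part is the alias list; as one of the comments below notes, respondents often know a panel by a different name than the industry does, which is where text analytics software earns its keep.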
How Many Panels Does the Average Panelist Belong To?
The overwhelming majority of respondents (approximately 80%) indicated they belong to only one or two panels. (The average number of panels mentioned among those who could recall specific panel names was 2.3.)
Fewer than 2% told us they were members of 10 or more panels.
Finally, even fewer respondents told us they were members of as many as 20 or more panels; others could not recall the name of a single panel when asked, and some declined to answer the question.
Naming Names…Here’s Who
Caption: To see the data more closely, please click this screenshot for an Excel file.
Figure 1 shows the 50 panel companies most frequently mentioned by respondents in this survey.
It is interesting to note that even though every respondent was signed up with at least one of the three companies from which we sourced the sample, a third of respondents failed to name that company.
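As a quick back-of-the-envelope illustration of how that share is computed (the column names and toy values below are hypothetical, not the study’s actual pipeline), it is simply the fraction of respondents whose mention set omits their own source panel:

```python
# Minimal sketch (hypothetical layout): share of respondents whose
# open-ended mentions omit the very panel that sourced them.
import pandas as pd

df = pd.DataFrame({
    "source_panel": ["Swagbucks", "Critical Mix", "Schlesinger Associates"],
    "mentioned": [{"Swagbucks", "Opinion Outpost"}, set(), {"Critical Mix"}],
})

failed = df.apply(lambda r: r["source_panel"] not in r["mentioned"], axis=1)
print(f"{failed.mean():.0%} did not name their source panel")
```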
Who Else? Average Number of Other Panels Mentioned
Caption: To see the data more closely, please click this screenshot for an Excel file.
As expected (and, again, bearing in mind that the sample comes from just the three firms mentioned earlier), larger panels are more likely than smaller, niche panels to contain respondents who belong to other panels (Figure 2).
Panel Overlap/Correlation
Caption: To see the data more closely, please click this screenshot for an Excel file.
Finally, we correlated mentions of the panels (Figure 3). While there is some overlap everywhere, it looks to be relatively evenly distributed. In the few cases where the correlation is higher, it may be that these panels tend to recruit in the same places online or that there is a relationship between the companies.
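For those wondering how such a matrix is built: with respondents as rows and panels as 0/1 mention indicators, an ordinary Pearson correlation of the columns gives the phi coefficient for each pair of panels. A minimal pandas sketch with toy data (the actual analysis used OdinText output, and these values are not the study’s results):

```python
# Minimal sketch: pairwise correlation of panel mentions. With 0/1
# mention indicators, Pearson correlation is the phi coefficient.
# The data here are toy values, not the study's results.
import pandas as pd

mentions = pd.DataFrame({
    "Swagbucks":       [1, 0, 1, 1, 0],
    "Critical Mix":    [0, 1, 1, 0, 1],
    "Opinion Outpost": [1, 1, 0, 1, 0],
})

print(mentions.corr().round(2))  # panel-by-panel overlap matrix, Figure 3 style
```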
What’s Next?
Again, all of the data provided above are the result of analyzing just a single, short open-ended question using OdinText.
In subsequent posts, we will look into what motivates these panelists to participate in research, as well as what they like and don’t like about the research process. We’ll also look more closely at demographics and psychographics.
You can also look forward to deeper insights from a qualitative leg provided by Kerry Hecht and her team in the workshop at IIEX in June.
Thank you for your readership. As always, I encourage your feedback and look forward to your comments!
@TomHCanderson @OdinText
PS. Just a reminder that OdinText is participating in the IIEX 2016 Insight Innovation Competition!
Voting ends today! Please visit MAKE DATA ACCESSIBLE and VOTE OdinText!
[If you would like to attend IIEX feel free to use our Speaker discount code ODINTEXT]
To learn more about how OdinText can help you understand what really matters to your customers and predict actual behavior, please contact us or request a Free Demo here >
[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]
16 Responses
I’m so disappointed I won’t be able to attend the IIEX Conference. Your report raises the question: how many panels is too many to participate in? Without knowing the details of your study, my sense is that the findings were similar to the qualitative study that Abby Leafe and I reported on a couple of years ago (at national and local chapters of the QRCA).
50 Shades of Respondent Grey: What we learned about cheaters and repeaters (and thought starters about what we might do about it!)
We were appalled to hear stories of participants who game the system (one fellow told us of the 60 market events he participates in a year!). We were also distressed to learn of some ways in which recruiters may have intentionally or unintentionally allowed this abuse to happen. We’ve shared our recommendations with many in the field, and personally I see improvements. I’ve been pleased to see appropriate, “clean” recruits in my studies.
Hope to read your full report.
We came up with the same basic response: we’ll call them serial repeaters. We decided to delve a little deeper, though, and asked them (in our qual portion) why they thought it was OK to stretch the truth on a screener, and it seems we (our industry) bear a certain amount of responsibility for creating long and arduous screening processes. Also, their motivations for participating are wildly varied. I personally think we need to shift from talking about serial participators to discussing what makes a good participant and what we are doing to foster that. Truly, they feel they are as much a part of our industry as we do, just in a different way, and they genuinely CARE.
More to come!
Kerry: Yes, we also heard a range of responses as to why they participate, falling into three major camps: those who are purely financially motivated; those who seemed more intrinsically motivated and felt they were offering sound advice and information to help companies (those are the folks we love to work with!); and a large pool of respondents who like to participate because it’s a social event for them.
I think you are right that we can shift the conversation a bit as to what makes for a good participant.
Do you have this for professional (e.g., physician) panels as well?
I agree with you that this is interesting: “…a third of respondents failed to name [the panel they were sourced from].” What is your interpretation of that, in light of the finding that 80% of people say they only belong to one or two panels?
I think many of them just don’t know. Also, we likely know the panel company by a different name than the participants do. Even in conducting research (as a community manager or moderator), if a participant asks me a question regarding incentives and I push them back to the company they came from, a typical response is “I don’t remember what company it was.”
Isn’t the research a little (or a lot) biased due to the selection of three specific panels to populate the survey?
Good question, but in both rounds of this we got the same results – so it seems like a universal truth.
I, personally, am surprised when people in our industry are surprised by this. It seems like it should be the starting point for our conversation about what really matters in a participant.
This endeavor attempts to take on some of those questions and provide some additional ways of thinking about this. Watch this space (as well as GreenBook blog and IIeX, of course) for more on this matter in the coming weeks.
How can you expect this study to be legitimate when you sourced the names from existing panels? Seems to me it would be much more accurate had you sourced the contacts independently, then asked how many were members of panels. Did you omit the source panels from the findings?
Quick correction/addition: while OdinText picked up “e-Rewards,” it somehow got excluded from the table when we posted the results. It was mentioned by 3.96% of respondents, and the respondents who mentioned e-Rewards said they were members of four panels on average.
I’ll take the liberty of answering a few of the methodological questions above, though Kerry Hecht and Jessica Broome are far more familiar with that aspect than I am. Again, OdinText was brought in mainly for the text analytics of the OE/comment data.
@g No, I think that was unfortunately out of scope for this study, though I would certainly expect expert panelist data (doctors, IT professionals, etc.) to look very different, and it would certainly be worth doing.
@Gregory/@Steve As you can imagine, sampling for a study like this is complicated, to say the least, and I believe in this case these companies volunteered to help. I think it’s great that they were willing to share data publicly like this. Without looking at the data I might have agreed with you, but after looking at it (even though these firms are relatively different from each other) I was surprised to see such consistent overlap in the panels being mentioned. This leads me to believe that if we included other panels, the results would probably be similar.
mTurk was not among the panel companies frequently mentioned by respondents in this survey?
@Justin, no, it was mentioned by only two people, and so was not included in the above analysis.
Melanie Courtright and Kartik Pashupati investigated cross-panel duplication as part of the ARF Foundations of Quality-2 (FoQ2) initiative. Our findings, published in the Journal of Advertising Research, showed that — depending on the detection method used — there was about a 20% duplication of individuals across panels.
http://www.journalofadvertisingresearch.com/content/54/3/263