Revisit Your Objectives
You likely worked hard to define the measurement objectives for your evaluation and wrote your questions to follow them. Now that you have the results, look at how your objectives were met, exceeded, or not met at all. If you met or exceeded your objectives, don't just celebrate your success—dig deeper into all the data you collected to find out why. If the objectives were not met, remember that negative feedback can sometimes provide more insight into how your audience perceives the topic than positive feedback does.
Trends and Outliers—the Odd Couple
Most people immediately jump to the notion that they should focus on the trends in the evaluation results. What is the percentage of top two box responses to the question? How many people selected this answer or that? However, outlier responses can be just as informative: the most extreme responses, whether positive or negative, are usually given because the respondent feels strongly about the answer. The outliers tell a different, oftentimes opposite story than the majority of the responses and thus sometimes expose more of the story.
Granny Smith vs Fuji Apples—Segmenting Your Data
Looking at how you can segment evaluation data gives you greater insight into how different groups or types of respondents answered the same question. Whether or not all groups answered the same way will help you understand whether you should approach those groups differently in the future. Examples of segmentation variables include role, practice setting, number of patients, years in practice, or even the number of programs attended on a similar topic. The key to segmenting anonymous survey results—where you don't have individual identification (ie, attendee's name or a unique ID number)—is to make sure your evaluation includes the questions that will provide that segmentation information. Too often, we hear "can we segment this by X?" after the responses are in, but we haven't asked a question to capture that information.
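To make the segmentation idea concrete, here is a minimal sketch in Python of computing a top-two-box rate per segment. The roles, ratings, and the `top_two_box_by_segment` helper are all hypothetical illustrations, not part of any real evaluation toolset.

```python
# Hypothetical example: top-two-box rate per respondent segment.
# Segments and ratings below are illustrative, not real evaluation data.
from collections import defaultdict

# Each response: (segment, rating on a 1-5 scale)
responses = [
    ("physician", 5), ("physician", 4), ("physician", 3),
    ("nurse", 5), ("nurse", 2),
    ("pharmacist", 4), ("pharmacist", 4),
]

def top_two_box_by_segment(rows, top=(4, 5)):
    """Return {segment: fraction of ratings in the top two boxes}."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [top-box hits, total]
    for segment, rating in rows:
        counts[segment][0] += rating in top
        counts[segment][1] += 1
    return {seg: hits / total for seg, (hits, total) in counts.items()}

rates = top_two_box_by_segment(responses)
print(rates)  # e.g. physicians at 2/3, nurses at 1/2, pharmacists at 2/2
```

Note that this only works because the segment label was captured alongside each response—exactly the point about asking the segmentation question up front.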
Comparing Yourself to Others Can Be Good
If the evaluation included the same questions as a previous evaluation or even similar questions, how did the new evaluation’s responses compare to the previous evaluation? Keeping benchmark data for different types of HCP programming—such as peer-to-peer/speaker programming, advisory boards, etc—will allow you to compare evaluation results from one program or project to tens to hundreds of similar programs or projects.
In the world of HCP programming, survey results are often skewed positive due to social bias, lack of attention when completing the survey, indifference to the topic, etc. Therefore, we should take an even more critical eye to what might otherwise seem like a high score, especially in comparison to benchmarks. For example, if one set of program evaluation results has top two box responses around 87% for some questions, you may think, "Great! 87% of people have a favorable opinion!" However, if the benchmark for the same questions asked in other programs is 96%, you would know that while the 87% top two box response is still positive, it has historically been higher, so you may want to look at that 87% more closely.
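The benchmark comparison above can be sketched in a few lines. The rating distribution below is invented to reproduce the 87% figure from the example; the 96% benchmark is taken from the text.

```python
# Illustrative sketch of a top-two-box calculation compared to a benchmark.
# The ratings are fabricated to match the 87% example in the text.
def top_two_box(ratings, top=(4, 5)):
    """Fraction of ratings falling in the top two boxes of a 1-5 scale."""
    return sum(r in top for r in ratings) / len(ratings)

ratings = [5] * 50 + [4] * 37 + [3] * 9 + [2] * 4  # 87 of 100 in top two boxes
rate = top_two_box(ratings)
gap = rate - 0.96  # vs. the 96% historical benchmark
print(f"{rate:.0%} top two box, {gap:+.0%} vs benchmark")
```

The negative gap is the signal: the score looks strong in isolation but underperforms its own history.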
Join the Geek Squad with Statistical Testing
If you have a large enough response size, you can run statistical tests to see whether there is a significant difference between responses within the survey or in comparison to other evaluation results. Statistical testing can show you just how meaningful the differences are. Keep in mind that statistical testing should be done by someone with experience working with your evaluations or with statistics.

Ultimately, your survey/evaluation results can collect direct feedback, measure your performance, and help to reinforce or redirect your efforts. It is important to dig a little deeper, both in the questions you ask and in the depth of your analysis. Trends, outliers, segmentation, and benchmarking can help you identify additional insights that the high-level results may not show. Partner with your research and analytics team early and often to ensure that your measurements are providing the insights that you and your team need to be successful.
About the author
Sarah applies market research principles and techniques to develop an appropriate research strategy based on the goals and overall strategy of a project. She is experienced in developing market research collection tools, including evaluations, questionnaires, focus groups, and live interviews; and in finding and recruiting healthcare professionals for market research projects.