When we are asked a question, there are two ways in which we give our answer. The first is the explicit answer to the question: “Yes” or “No”. The second is the set of implicit signals we give off unwittingly before we even open our mouths. These responses are driven by our System One and System Two thought processes. Upon hearing a question, our System One process whirs straight into action, perhaps changing our facial expression with a frown or a smile, causing our shoulders to lift, our palms to turn upwards, or making us exhale deeply. Each of these signals gives away the answer we subconsciously want to give. Then our System Two response, patiently waiting for our approval, allows us to provide the answer we consciously want to give, and it might not necessarily match the first.
Qualitative research knows how to deal with this problem, or at least face-to-face qualitative research does: the explicit answer can be tempered by observing the delay, tone of voice and body language the consumer gives off before they tell you their answer. A good moderator knows how to ask the question differently, to tease out more information and to probe for unspoken truths. But with quantitative research the problem is harder to unpick. The majority of standard quant techniques, whether face-to-face or online, record only the explicit response: “Excellent”, “Agree strongly”, “Likely to buy”. This tells us only what the respondent wants us to know.
Does this really matter? For the majority of research, the explicit response will tell us enough; simple questions about attitudes and behaviours do not necessarily require the respondent to duel with their System One response, so we can be reasonably confident that what they tell us is what we need to know. But there are occasions when we need to understand not just what the consumer tells us, but how they tell us. To do this we can use sophisticated tools such as facial coding, or simpler tools such as implicit response testing, and we have found the latter to be effective in a number of situations.
Implicit Response Testing (IRT) is a method that records not only the respondent’s answer to a binary question but also how long they take to answer. The response time indicates how considered the answer is and may reveal insight beyond the final response. It also shows how ingrained a perception is: is it immediate, or does it have to be mined out of the grey matter? Here are a couple of prime examples of when you might choose to use it.
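At its simplest, an IRT item is just a binary answer paired with a response latency. The minimal sketch below is hypothetical illustration, not our production tooling, and the 800 ms cut-off is an illustrative assumption rather than a calibrated threshold:

```python
import time
from dataclasses import dataclass


@dataclass
class IRTResponse:
    """One implicit-response item: the explicit answer plus how long it took."""
    answer: bool        # the binary answer, e.g. "agree" -> True
    latency_ms: float   # time from question shown to answer given


def capture(ask):
    """Time a binary question. `ask` is any callable returning True/False."""
    start = time.perf_counter()
    answer = ask()
    latency_ms = (time.perf_counter() - start) * 1000.0
    return IRTResponse(answer, latency_ms)


def classify(response, threshold_ms=800.0):
    """Label an answer as 'immediate' (suggesting an ingrained perception)
    or 'considered'. The 800 ms threshold is purely illustrative."""
    return "immediate" if response.latency_ms <= threshold_ms else "considered"


# A fast "yes" suggests an ingrained perception; a slow one suggests deliberation.
quick = IRTResponse(answer=True, latency_ms=350.0)
slow = IRTResponse(answer=True, latency_ms=2100.0)
print(classify(quick))  # -> immediate
print(classify(slow))   # -> considered
```

The point is that two identical “yes” answers carry different information once the latency is attached: one is reflexive, the other hard-won.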
Our perceptions of most big brands are as established as the brands themselves. Over time we have been fed both direct and indirect messages that have formed perceptions deeply rooted in our brains. So when a brand decides to take a new direction, or launches a new marketing campaign with a clear message, it takes an extraordinary effort to turn a tide that has long been flowing in one direction. Imagine trying to turn the Thames around when the tide is rushing out! This is where brand trackers can struggle to appease marketers, who are disappointed that their budget-breaking campaign to shift the perception that the brand is old-fashioned hasn’t even twitched the dial on the monthly tracker. It is like placing a propeller in the Thames, against the tide: like it or not, the water will still flow downstream. But IRT tells us more than the final outcome; it can also show us the thought process. It may show that the answer took a little longer to give, suggesting that the ‘propeller’ is slowly starting to loosen the perception. With a follow-up campaign (a second propeller, if you like), the perception slows further, until finally you reverse the tide and the final outcome changes. Brand X isn’t old-fashioned, it’s modern; it just took more than one advertising campaign to effect that change.
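The ‘propeller’ effect described above would show up in tracking data as a gradual lengthening of response times before the explicit answer itself flips. Here is a minimal sketch of how one might check for that drift, using hypothetical data and function names and assuming each wave stores its latencies in milliseconds:

```python
from statistics import mean


def mean_latency_by_wave(waves):
    """Average response latency (ms) for each tracking wave, in order."""
    return [mean(latencies) for latencies in waves]


def perception_loosening(waves):
    """True if mean latency rises wave-on-wave, i.e. answers are getting
    slower and the ingrained perception may be starting to shift."""
    means = mean_latency_by_wave(waves)
    return all(earlier < later for earlier, later in zip(means, means[1:]))


# Hypothetical latencies for "Brand X is old-fashioned: yes" across three waves.
waves = [
    [420.0, 380.0, 450.0],   # pre-campaign: quick, ingrained answers
    [610.0, 700.0, 640.0],   # after campaign one: slower
    [950.0, 1020.0, 880.0],  # after campaign two: slower still
]
print(perception_loosening(waves))  # -> True
```

In this toy example the explicit answer never changes, yet the rising latencies are the early evidence that the campaigns are working.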
A second golden area for IRT is research around subjects where the consumer feels predisposed to answer in a certain way. This might be questions around social attitudes, questions relating to antisocial behaviour, or even topics such as sexual health. In these instances, even behind the anonymity of a quantitative survey, respondents may feel inclined to give the answer they feel they ought to, rather than the one they would like to. An interesting example of using IRT for this purpose is our recent self-funded research into how consumers view different charitable causes. We didn’t just want to know which causes consumers thought were worthy of donation, because we were conscious that many, if not all, charitable causes have merit. So we timed the responses to find out whether some causes require greater consideration than others, particularly given our collaboration with grassroots charity The Worldwide Tribe, who work with refugees. We wanted to know both the implicit and the explicit response to the worthiness of this cause. The full results of this study can be found here.
Is IRT right for you?
Implicit response testing is a flexible and inexpensive approach that can be bolted on to a pre-existing research programme or run as a standalone project. If you are interested in learning more, please give us a call.