Our brains lie. In fact our brains lie so well that we don’t even realise they’re doing it. They’re such expert liars that they can even fool themselves into thinking what they said is true. And that’s no mean feat.
As people interested in the results of user testing, we strive for accuracy and validity in our tests, so how can we avoid the pitfalls that cause the brain to lie? Well, we can start by knowing what the main pitfalls are.
The first pitfall is hindsight bias, also known as the ‘I knew it all along’ effect, or as we often hear in the user research business: “That’s what I expected to see / happen.” It’s caused by a nasty little memory distortion that makes us see things as predictable after the fact. That means that when we allow users to complete a process before they give feedback, they’re very likely to claim it was easy or predictable even if we could see them having difficulty or hesitating.
A by-product of this is another related effect: because something makes sense now, it must have always made sense. So because I have found the solution or end point, this must always have been the most obvious way of doing so. In basic terms, people don’t like appearing stupid, and this is usually a subconscious rather than conscious reaction. It’s something we don’t even realise we do. This is obviously an effect we want to avoid, especially when testing purchase flows or similar linear routes through a site.
Before users enter a flow, ask them what steps they expect it to involve. Then compare that with their reaction after they’ve finished the flow. Sometimes it can be interesting to ask why they think their reaction has changed.
Have you ever asked a user what they expect to see next and then seen them rush directly for the mouse and try to click through before answering? That’s because the brain absolutely loves knowing the outcome to a question, and hates being forced to think about what the outcome might be. Think of all the times you turned to the back of your school math book for the answer to a problem without fully understanding the process.
Put in a slightly more technical manner, outcome bias is the brain’s tendency to use an observed outcome to ascribe reasoning after the fact. Tied closely to this is our brain’s tendency to judge the quality of a decision by its outcome: if a decision has a good outcome it was a good choice; if it has a bad outcome it was a bad choice. Of course, this ignores the fact that we can have no knowledge of the outcome before we make the choice. This bias could be simplified as ‘the ends justify the means’. In testing we want to avoid this bias, as we are trying to test whether a choice is obviously good or right before the choice is made.
Again, the solution to this is to judge expectations before the user makes a choice. Often this involves asking the user to show us what they would click next, but not allowing them to click it. While this can at times be frustrating for the user (because their brain instinctively wants to cheat), it provides more valid results in the long run.
Our memory is a terrible thing. It plays tricks on us all the time: where did I leave the keys, when did that happen, what’s his name? Why, then, do we think we would remember our choices better than actual observable facts? We don’t – in fact, we are absolutely useless at remembering our choices in an objective way. This is choice-supportive bias: research into how our brains work indicates that the way we make and remember choices creates distorted memories.
Chemically speaking, true and false memories are almost indistinguishable to the brain as they are created by the same mechanism that processes and stores information. Generally speaking, it is context which gives us the cue as to whether something is a true or false memory, but this is often lacking in how we remember our choices.
Because this bias comes from the way we remember decisions, we should always avoid asking questions about choices after the fact. Questions like “Why did you do that?” lead to inherently biased answers and therefore have little place in a test unless we account for the natural bias.
Finally we have the Pygmalion effect. I like to compare this to the story of ‘The Little Engine that Could’. Much like the little engine, users in a testing environment have a tendency to push themselves harder and thus outperform their ‘normal’ behaviours. In short: the greater the expectation placed on a person, generally speaking, the better the performance – and users in a testing environment often feel there is a very great expectation placed on them to get the ‘right answer’.
This is very much a self-fulfilling prophecy. If users feel that there is in fact a ‘right answer’ (and this is a test, after all, so this is often the expectation), they will often keep going much longer than they would if they were at home. In the words of Henry Ford: “Whether you think you can or whether you think you can’t – you’re right.”
What this means is that for our tests to be valid we need to:
- Manage expectations very carefully
- Put users at ease from the beginning
- Let users know that it is okay to fail
- Explain to users that they should give up at whatever point they feel they would at home
In this post we’ve examined four very common ways in which our brains can lie to us in a user test. Most of these issues, however, can be resolved by following one simple maxim: judge expectations before the fact, and judge reactions after the fact. If we keep expectation and reaction as discrete as possible in our testing and analysis, we can avoid many of the cognitive biases inherent to the brain and obtain more valid results.