Not everything is UX… is it?
The Government of Canada Digital Standards call for ‘designing with users’ and ‘iterating and improving frequently’. These are fantastic guidelines for delivering great work, but I worry that too often we’re blindly following these standards without truly understanding why we’re doing things this way, or how to do so appropriately.
Don’t get me wrong, the appetite to involve users is fantastic. There have been efforts to define usability and user research as distinct from public opinion research so we can work more closely with the public. But my concern is that we are wielding “we talked with users” as a weapon: not to learn, but to confirm what we were already planning to do anyway. That may make us sound more persuasive, but does it truly ensure we are delivering better products and services?
If you haven’t read Tanya S. (spydergrrl)’s post on UX Theatre, go ahead and do so. I’ll wait.
Tanya’s post is concerned with those projects where we ‘think like our users’ but don’t actually involve them. That’s obviously problematic. But we can involve people and still not be following good, defensible, user-centred design practices.
This can be even more dangerous, as we use what we think we learned from users to justify something that may not be what they truly need. This need not be deliberate; it can come from simply being unaware of our biases and falling prey to them.
A good UXer will be aware of these biases and account for them in how they conduct and synthesize research findings, and design solutions. Someone looking to ‘check a box’ may talk to users only until they uncover an anecdote that supports what they are hoping to promote.
Not every engagement with a user is “user experience research”. Customer service folks speak with users regularly. But would we take a single user feature request and just throw it into our product (Say no, please say no, don’t do this please)?
Our user research friends would want to spend the time to understand the context behind the request. What task were they trying to complete? What was their desired outcome? Are there other ways they have tried to get this done? Then they’d want to compare this single user’s experience with others’. One person’s experience should not a product decision make. Every decision (at the interface or feature level) cascades through the product. How does this user’s request align with our product vision? Are there unintended effects that may arise? When my customer support friends receive a support request, they are looking to find a solution for a market of one: the person who called in. Designers and product managers need to solve for the entire user base, and consider the tradeoffs of every decision.
I’ve been asked in recent weeks to connect with users to run different things by them. Which of these options do they like? How would they use this? This is really hard for me… because it means we haven’t been engaging with them throughout the process. It means we came up with something, and now want to be able to point to evidence that we didn’t make this decision in a bubble.
When I go to a restaurant and order something off the menu, I’m signaling which meal I prefer from the options I’ve been provided (at that moment). When I first started writing this post, I stated that I wouldn’t consider that user research, even though that information can be aggregated across a population and used to inform decisions. My thought process was that measuring what you already offer is a form of optimization. By measuring how many people buy Big Macs vs. Quarter Pounders, you’re never going to arrive at data that will tell you to introduce a plant-based burger.
But I guess that doesn’t mean measuring what you have isn’t user research; it just means it’s insufficient on its own. I gravitate towards user research that enables us to dig into the why, to better understand the rationale behind the decisions, so we can take the next step to anticipate how people will use and find value in what we provide.
And now I’m just thinking about why users ‘hire milkshakes’, and I don’t really want to go down that path (especially since it’s never been confirmed that McDonald’s actually conducted this research)… but suffice it to say that it’s one thing to measure what people do, and quite another to seek to understand why.
What people say, what people do, and what they say they do are entirely different things.
- Margaret Mead
Depending on what type of decision we’re trying to make, we may engage with users in different ways. Usability testing can help us measure which of several options is more usable: which helps people complete tasks more easily and with fewer errors. We are measuring their behaviour, not their opinions. Similar to the Big Mac vs. Quarter Pounder measurement above: we’re examining current behaviour. We are seeking to eliminate barriers so users can get done what they’re trying to do with less friction, not (probably) trying to promote a wholesale change in their behaviour.
I’ll be honest, I hesitate to advocate for usability testing too readily when folks are just starting their ‘user-centric’ journey, because it can be abused. I can design a solution and test whether it’s ‘usable’ without spending the time to know whether the entire product or feature should even be built. Just because I can understand how to perform a task on a platform doesn’t mean I ever will. So I don’t want people to think that we can design something in isolation, pull in users at the last moment to test out our solution, and call that user-centric.
The Government of Canada Digital Standards are a good starting point, but simply having the standards in place isn’t enough. We need to truly engage with users throughout the design process: not to say we’re ‘being user-centric’, nor to point to cherry-picked evidence to justify our decisions, but to do the work we’re all here to do: deliver valuable products and services that improve the lives of those we serve.