Weeknotes #19: Sept 26–30, 2020
These weeknotes are a little late: I was working remotely from my parents’ house this week, and then promptly took a week off for vacation. For those wondering, I got a COVID test before I travelled. :-)
Usability testing is a common “introduction” to user research. Rather than releasing something widely to the public and hoping it goes well, a team may first run usability testing, where a participant similar to the target user is asked to perform a specific task. Their ability to complete the task serves as a measure of how usable the solution is. If participants struggle, the solution should be revisited and improved before a wider release.
Usability testing is valuable: it helps ensure you are releasing something that is usable. However, it ignores a potentially more critical question: is it useful?
We often consider UX part of the delivery or development team: the business decides *what* we should do, and UX addresses *how* we should do it. (I take some issue with this framing, but for now, let’s use it as a starting point.) Unfortunately, this often means we’re limited in the impact we can have on improving the overall user experience.
My team works on a tablet application for inspectors. It enables inspectors to look up reference material and complete inspection reports while in the field.
The application (which we control) needs to interface with different data sources in the backend (which are out of our control).
What inspectors are reviewing is driven by policy and legislation (which are out of our control).
Some of the most time-consuming aspects of their work are related to interfacing and scheduling with stakeholders (which is definitely out of our control).
So we could have the most intuitive, usable application in the world, and our inspectors might still be frustrated and inefficient in getting their job done.
When we decide to do user research, we implicitly or explicitly decide what sort of information we are open to gathering. If we show someone an interface and ask them to perform a task, we are not measuring whether they would ever actually perform that task. We are not measuring whether the feature that enables that task is more or less important than another feature. We have already made a series of decisions, and now we are looking to defend them.
We’ve already answered:
- what could we design/build?
- what should we design/build?
- how should we design/build it?
and now we’re trying to answer:
- did we design/build it right?
But user research can and should come into play well before this stage. Ideally, we are doing our user research well before development teams get involved, to answer the “what should we design/build” question. We should be focusing on understanding the biggest challenges our users face, and then directing our efforts to address those challenges.
Unfortunately, a lot of organizations aren’t set up this way. We are organized around products and services, and decoupling user research from teams that are set up to deliver things is foreign and scary. Where do their insights go? What if they uncover things we don’t want to hear? What if they uncover things we don’t know how to address?
We can choose to ignore what our users need, but eventually we will have to deal with it. In the private sector, there’s the looming threat of competition. In the public sector, we may “only” have to deal with users’ lack of trust, errors, or lack of participation. Either way, we are not providing solutions in the best interest of those we’re meant to serve, so that’s not great, is it?
We have to change how we view user research. User research isn’t a final step that lets development teams confirm their work is satisfactory. User research needs to happen sooner, and it needs to focus on the user, not on our own solutions. Otherwise we risk missing huge opportunities across the user experience simply because our organizational structure is focused on the specific products or solutions we deliver, and not on what the user is trying to get done.