Research, design or test with users. Please, just do something.
Poor unappreciated user research. Poor misunderstood discovery.
These are really important activities for understanding user problems, but sometimes (often? most times?) a team isn’t looking for new problems to solve and doesn’t see the value in them.
They have to replace an aging system. They need to introduce some new functionality to meet external regulations or requirements.
Maybe they’re procuring a SaaS, and so they know they’re limited in what they can do anyway. No need to do a bunch of research into an ‘ideal workflow’ when you know you’re largely limited to what the vendor has made available.
It can be tempting to believe that the vendor has done the research, has figured out what users need and come up with a strong solution to their problems.
But if there is a SaaS product out there that could be used to address your needs, there’s probably more than one. Ok, so those products must differ in some way. So that must mean that there are different factors to consider, and one may be a better fit for you.
How do you know, if you haven’t spent the time to understand what is important? Every decision comes with trade-offs. What are your decision-making criteria?
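One lightweight way to make those trade-offs explicit is a weighted scoring matrix. The sketch below is purely illustrative: the criteria, weights, and scores are hypothetical placeholders, and in practice they should come out of research with the people who will actually use the product.

```python
# Illustrative only: a weighted scoring matrix for comparing SaaS candidates.
# Criteria and weights are hypothetical; derive real ones from user research.
criteria_weights = {
    "fits current workflow": 0.4,
    "configurability": 0.3,
    "accessibility": 0.2,
    "cost": 0.1,
}

# Hypothetical scores on a 1-5 scale for each candidate product.
vendors = {
    "Vendor A": {"fits current workflow": 4, "configurability": 2,
                 "accessibility": 5, "cost": 3},
    "Vendor B": {"fits current workflow": 3, "configurability": 5,
                 "accessibility": 3, "cost": 4},
}

def weighted_score(scores):
    """Sum each criterion score multiplied by its weight."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in vendors.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```

The numbers themselves matter less than the conversation: agreeing on the weights forces the team to say out loud whose needs come first.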
I’ve talked before about how working on search at LexisNexis meant figuring out the trade-off of curated vs exhaustive search results. Librarians had different expectations than paralegals. Whose needs are you prioritizing when making product decisions?
In the case of SaaS, some of those decisions may have been made, or some of them may be available via configuration. Are you clear on which priorities you are focused on?
Even if you aren’t starting from scratch and designing something greenfield, you can still start with research to understand your users’ workflow and needs. What jobs are they trying to get done, and where are they struggling? What needs to change with the new system to call it an improvement?
Usability testing
If you are replacing an existing system, you could start with usability testing. A baseline test on the existing system could involve asking users to perform a typical task. This could be something they perform frequently, or something that is quite critical. Ask the user to perform the task, and observe them doing it. How long does it take? Where do they struggle, or worry about making errors? Are they able to be successful? How difficult do they perceive it to be?
This serves as a baseline — once you have a proposed new system, you can ask the user to perform the same task and measure if there is improvement. Obviously one challenge is familiarity: if a user is very used to working with the old system, it may take them longer to try out something new. But you can gather some useful quantitative and qualitative data through this experience.
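If it helps to see the comparison concretely, the baseline-versus-new measurement can be tabulated in a few lines. Everything below is a hypothetical sketch: the timings and success flags are made up, and real sessions would capture qualitative notes alongside the numbers.

```python
from statistics import median

# Hypothetical observations: time on task (seconds) and whether the
# participant completed the task, for the same task on each system.
baseline = [
    {"time_s": 210, "success": True},
    {"time_s": 340, "success": False},
    {"time_s": 185, "success": True},
]
new_system = [
    {"time_s": 150, "success": True},
    {"time_s": 200, "success": True},
    {"time_s": 175, "success": True},
]

def summarize(sessions):
    """Return the median time on task and the task success rate."""
    times = [s["time_s"] for s in sessions]
    successes = sum(1 for s in sessions if s["success"])
    return {
        "median_time_s": median(times),
        "success_rate": successes / len(sessions),
    }

print("baseline:  ", summarize(baseline))
print("new system:", summarize(new_system))
```

Median time and success rate are just two candidate metrics; perceived difficulty (for example, a post-task rating) is worth recording too, since a faster task that people dread is not much of a win.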
Product demos
If you are procuring a SaaS, you will often get the opportunity to see or work with demos of the software. Consider involving your end users! They will be able to spot things that are confusing, or don’t align with their way of working. You want to ensure that the feedback you get is focused on the behaviours of the system and not aesthetic things like colour or font. Will this system help them do their job quicker, with fewer errors? What is missing that they will need before being able to adopt this product?
Product vision … and configuration?
This isn’t a user involvement activity per se. But it depends on your users. As mentioned earlier, different products on the market can be designed to best suit different types of users. There are a bunch of different ways to segment your user group: by role, by ‘digital acumen’, by device, by risk avoidance, etc. What works for another group of (insert job description here) may not work for yours, depending on the environment as well as the actual people you are dealing with. We are humans, not generic robots programmed to just do a task.
Delivering a successful digital product isn’t just about launching something into the world. It is also about adoption. And adoption isn’t just about people using the thing. It’s also about people not dreading the product, or looking for ways to avoid using it.
The problem with a super configurable SaaS solution is that it CAN do so much, because it may be loose enough to accommodate all sorts of different segments and users.
The risk with product design (probably any design, but this is the space I know best) is that a lack of a decision IS a decision. Turning everything on doesn’t make something easier to use.
From a user perspective, you could say that a SaaS vendor’s target is a buyer, not an end-user. They are building a solution that can be used by many, but it may not be optimized for anyone. That is on the buyer to configure the product to meet their needs.
So who decides how it should be configured? Hint: if you have real input from the actual people using the product, you can be a lot more confident in those decisions. (I won’t say the decisions are easier. The easiest decision is to put on blinders and guess. But if you care about the outcome, it’s way better to make an informed decision.)
User acceptance testing
Generally I’m not a huge fan of user acceptance testing because, at a high level, it seeks to answer the question “does the product meet the requirements?”. This can lead to issues if the requirements were poorly documented or interpreted. Something can be utterly unusable, but “performs as specified”. However, if you aren’t able to engage users throughout the development process, this is at least one opportunity to gather a lot of feedback before launching the product. It is then up to leadership to make the call whether the product should be launched, or needs more work.
Because ultimately, product success shouldn’t be about whether we perfectly met the requirements. It shouldn’t be about whether our requirements were perfectly articulated up front. It should be about whether the resources we just spent on this product are going to result in improved outcomes: faster service or fewer errors.
The British Columbia Digital Code of Practice recommends “Designing with people”. But this isn’t just about designing up front, imagining a perfect experience and then throwing it away in the face of real constraints. It is about involving the people for whom you’re working, understanding what is important for them to be successful, and then ensuring that the solution actually works to enable that.
I am not going to rigidly say that every project must go through a specific type of research or service design. But I implore you to think about opportunities to engage with the people who will be using the thing you’re spending your time, effort, and resources on. It can really make a difference in the quality of what gets out into the world, and the impact you can have on the people you’re trying to serve.