Last week I attended my first ever Nielsen Norman Group training session. Their courses, conferences and certifications are probably the best known in the business, but I'd never attended anything of theirs before. Having a virtual event made it easy for me to sign up for a single course mid-week. No travel, no hotel or transportation costs. And, better coffee than at a conference centre! Win-win-win!
The course I attended was “Storytelling to present UX work”. A lot of my career has been directly applying UX research findings into features, and I am looking to enhance my presentation skills to engage and inspire others. It’s one thing to report back on success rates and time on task, but quite another to tell a memorable story.
I was really impressed with the course, and I could immediately see how to improve my presentations by making sure I was pitching information at the right level for the audience.
The one challenge that I see in applying this to my work at the CRA is that right now we’re doing a lot of scenario-based usability tests. We tell users the scenario and observe how they find answers to the questions we pose. In the course, they talked about how to frame questions and tasks to elicit personal anecdotes or stories; this is a pretty different approach than we take now.
I have done a lot of discovery workshops in the past, and I know the power of open-ended questions and conversations. But I've always seen those earlier in the design phase, whereas the usability tests we're doing now are at a more evaluative stage. We want to assess whether someone can add multiple locations to their rent application, regardless of whether their personal circumstances actually involve multiple locations.
There may be something there; does the nature of the presentation also change depending on the stage? We are not looking to stimulate creativity and problem solving at the point we're reporting back success metrics on a prototyped solution. We're looking to measure success. The story is about our solution and how it addresses user needs, rather than about exploring those needs themselves. Hopefully, earlier in the process there was some discovery where the users' unmet needs were identified and explored, which led us to where we are now; but in the testing I'm doing, we're focused on the metrics of whether our solution is acceptable or not.
All in all, it was fun to take this course and think through things differently. A lot of my work is practically second nature after nearly 20 years in the field, so it's refreshing to be exposed to something a little different and consider how I can incorporate it into my practice.