Designing Connected Services
Today I was fortunate enough to attend the “Designing Connected Services” masterclass with Lou Downe. This course was positioned for practitioners and change leaders looking to create the conditions for transformative change and lead good service design and delivery at scale.
I will admit, I initially thought others may be better suited to attend. I am in Central Agency-land, not the ministries directly delivering services to the public, and have done a fair amount of service design in the past. Silly Andrea — over the course of the day I sheepishly realized that I too am involved in service delivery, with internal users. [as well as being a representative of a constraint that limits the delivery of connected services, but I was definitely aware of that going in]
I was thrilled to be at a table with Kelsey Singbeil and Laura from Service Transformation @ ENV (BC Gov), and others from across their ministry. We had quite a good cross-section of different roles and program areas, and it was fun deconstructing a service with them.
I definitely found myself sprinkling in some “outcome-driven innovation” tidbits, including trying to reduce the # of steps on the map, or framing improvement by “minimize the time to” or “minimize the likelihood of”. Those are really just powerful conventions that I think can help people focus on opportunities.
I will say, though, that I was a bit disappointed that the course didn’t spend more time on “creating the conditions for service design to happen”… though as I write that, I suppose running a course evangelizing service design is building capacity, which is a pretty essential condition for it to happen...
I really liked the idea that there are two approaches: to speak the language of your interest-holders, or to raise their digital literacy so that they can understand your language. If speaking the language of your interest-holders, talk about risk, or money, or outcomes. [that’s my language!]
At the end of the session, I went over to Lou and explained I work in funding.
I mean, it’s not ACTUALLY me. But in my role within the Digital Investment Office, we need to understand what we are investing in.
Currently, that’s often “capital funding to build a tangible capital asset”, which may or may not truly address user needs, reduce risk, increase efficiencies and achieve outcomes.
What’s helpful is that I sat in a day of training, furiously nodding along with everyone at the idea that designing a good service isn’t about knowing exactly what you’re going to build, and then ending funding once that thing is released.
I like to let people know I work in funding, yet I’m on their side. I recognize the current system isn’t ideal, but it is the one in which we work.
Sitting in the room, knowing full well what the current constraints are regarding capital funding, gives me some space to poke at the edges of the current environment and consider how the …service of funding digital services… could be improved.
One of my observations (and I’ve seen this with service design before) is that it can be really easy to lose focus. You have this giant map with lots of opportunities. How do you decide what to focus on?
Lou did advise us not to get too caught up in worrying about the perfect first step, and instead to just think about a good next step. They didn’t say this, but it reminded me of the idea of “progress, not perfection”.
Yet one thing I have found a bit frustrating is that when folks are presented with a sea of options, they may have trouble focusing on one thing at a time… and also, focusing on something they may actually have control over.
So maybe we need to look at funding that way. What are some incremental changes we can make, what are some things we can set in motion to prototype changes, to reduce risk and address assumptions?
In the session, we talked about hypothesis driven design. If we want something to be different, can we be explicit in what we think the impact will be? How can we tell? By being explicit, we can start a conversation about assumptions.
In our table example, I developed a hypothesis that by providing more information upfront about the service journey, users would not “miss a step” along the way. I realized I had assumptions about the steps being linear and sequential. When I tried to set a hypothesis about measuring improvement (“reduce the likelihood of someone doing Y without doing X first”… “we will know this by asking program Y if they are seeing fewer premature inquiries”), I was able to identify a bunch of questions about how the process worked, what data was available, and honestly, a gut check on whether this was even the problem we were trying to solve. This was an incredibly powerful activity, as it required a level of specificity I think we sometimes miss when we dwell in the land of ideation. There are plenty of things you COULD do. Why do you think this one is going to have the impact you claim you’re going for?
Another one of the activities in the session focused on being explicit about how you operated across organizational lines: were you coordinating, sharing, partnering or doing it yourself? I felt like this could have been an entire session in itself!
As we explored our example, I suggested that it may make sense to adopt a different strategy depending on the stage of the user journey and how much deep subject matter expertise was needed. So through early stages of discovery/awareness, it could make sense to coordinate, but then you’d want more control over the experience as it got closer to your core work.
A bit of a tangent, on the plane ride over I was reading Platformland by Richard Pope, so in the back of my mind I was also thinking about the digital infrastructure that could enable this type of service delivery. Where a service owner is responsible for the user experience during their journey to get something specific done, but the bits and bobs that enable the service may not necessarily need to be maintained so locally.
But COORDINATION! MONEY! TIME!
I get it. Teams are trying to optimize for their own service delivery. Especially if you have to request capital to build a thing to achieve a user outcome, it can be hard to justify deviating from that path (especially if you don’t even know what that means yet).
One of the other attendees, Martha Edwards, asked about the GOV.UK service design standard, and whether it was effective in ensuring the design of good services. Lou actually said that they felt it was a bit of a poisoned chalice, as people felt it had been thrust upon them rather than being something they worked on collaboratively. This is definitely something I need to think more on. Last summer I had discovered an old BC Govt service design playbook (circa 2015, perhaps?) that mentioned the OCIO could require service design as a condition for capital funding. So there used to be precedent for that.
Yet Lou’s words gave me pause. I’d like to know a bit more about whether the issue was having any standard at all, or how the standard came about, or how it was enforced. For example, I wouldn’t want this to become a checkbox exercise for people to comply with, if it didn’t actually lead to better outcomes.
Ultimately, I’m left with the lingering question: where to start? Prioritization, amirite?
Is there something to prototype, to test a hypothesis? Do we already have data we can look at?
What are our biggest risks?
Actually, that’s where to start, isn’t it? What is preventing us from shifting how we fund things? (Aside from that tiny detail of policy, of course).
My assumption is that it’s around “will we invest in the right thing? how will we know? what if these expensive resources just fool around and spend millions of dollars on something that doesn’t really make a difference?”
This is where outcomes come in. If we fund USER PROBLEMS TO BE ADDRESSED, not STUFF TO BUILD, and we measure improvement against the stated problems, isn’t that.. success?
This is why I love those “minimize” job statements. Because you know the direction of improvement, AND you have an ideal state. Ideal is “zero”.
Any movement towards zero is an improvement, and at any time, the organization can say “great, this is enough for now, let’s go work on improving something else”.
This is how we shift to agile service delivery. You have a hypothesis on how to move towards zero. Go try to prove it. Sometimes you’re right, and sometimes you’re not. Interest-holders have the option to redirect to more pressing initiatives. Iteration like this means more frequent check-ins, and more opportunities to course-correct.
I know it can be scary not to have the certainty of X years of funding for a capital project. And perhaps this is where my private sector/MBA bias comes in. Maybe having a little bit of urgency around releasing value to users is good. When I was an intrapreneur and needed to keep making pitches back to my execs about whether I’d found market fit and had good retention of users, I never took for granted that I had the luxury of time to figure things out. If I was successful, the project was (likely) going to keep being funded. But if I wasn’t demonstrating value, then it made sense for my organization to invest in something else.
They didn’t owe me anything.
Yet you could argue they owed their customers ongoing improvements. If my product wasn’t hitting the mark, my efforts should be applied somewhere else.
So you see, being user centric can actually mean an organization choosing not to invest in something. There is not enough time and money to do everything. Organizations need to make choices to optimize with their limited resources.
It is our job as service designers to make it easy for our interest holders to support what we believe is best.
And… I’m spent. For now. Apologies for the rambling. Many thoughts… not sure which strings to pull on yet.
“I write to find out what I think” — Stephen King