How do you measure a service?

Simon Penny
7 min read · Oct 18, 2022

The objective of service design is to deliver services which help people do something they need to do, in a way that is easy for them whilst also being sustainable for the service provider. Designing good services therefore requires organisations to think beyond simply meeting a business objective and to focus too on designing services which are desirable to use and result in a good user experience — and therein lies the rub. As a service designer, how do you know that the work you are doing is having a positive impact on the way users experience your service?

Whilst there is no shortage of measurement frameworks out there, getting a true sense of how a service is really performing can still feel quite difficult due to the intangible nature of services and the subjective nature of experience. The problem is that the available metrics often focus on measuring the things that are easiest to measure rather than the things which are most meaningful to users. These metrics, which are often quantitative, only provide half the story — at best resulting in unanswered questions and at worst driving behaviours which make the service experience worse, particularly when measures are incentivised; Goodhart’s law suggests that “when a measure becomes a target, it ceases to be a good measure”. As the old saying goes, what gets measured gets done — but that’s not always a positive thing. Despite this, decision-makers often prefer to seek safety in the numbers, favouring what they perceive to be fact-based measures at the expense of more subjective qualitative insights.

In order to truly understand how a service is performing, it’s important to combine both quantitative and qualitative data. It’s not that one kind of measure is more important than the other, but rather that different measurement frameworks are better at measuring certain things. Two metrics that are commonly used by organisations are the Net Promoter Score (NPS) and the customer satisfaction score (CSAT). Both metrics are relatively straightforward to produce and are based on large datasets of easy-to-gather quantitative user feedback. Users are asked to rate an element of their service experience on a numeric scale — typically 0 to 10 for NPS and 1 to 5 for CSAT. An aggregate calculation is then made and an overall score is produced. Whilst these metrics have their place, they also have limitations. Firstly, whilst NPS and CSAT scores might make it easy for an organisation to measure month-on-month progress or benchmark performance against a competitor, they fall short of being able to explain why a user scored the way they did. Secondly, both metrics are ‘lagging’ indicators.
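As an illustration, here is a minimal sketch (in Python) of how these two scores are conventionally aggregated. The promoter (9 to 10), detractor (0 to 6) and ‘satisfied’ (4 to 5) bands are the standard conventions; the survey responses are made up for the example.

```python
# Minimal sketch of the conventional NPS and CSAT calculations.
# The response data below is illustrative, not real survey data.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(scores):
    """CSAT: % of respondents scoring 4 or 5 on a 1-5 satisfaction scale."""
    satisfied = sum(1 for s in scores if s >= 4)
    return 100 * satisfied / len(scores)

nps_responses = [10, 9, 8, 6, 10, 7, 3, 9]   # "How likely are you to recommend us?" (0-10)
csat_responses = [5, 4, 3, 5, 2, 4, 4, 5]    # "How satisfied were you?" (1-5)

print(f"NPS:  {nps(nps_responses):+.0f}")    # +25
print(f"CSAT: {csat(csat_responses):.0f}%")  # 75%
```

Note how much the aggregation throws away: two services with the same headline score can sit on very different distributions of responses, which is exactly the ‘why’ these metrics cannot explain.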

Lagging indicators are easy to measure but hard to change. They tell you what has happened, but aren’t good at predicting the future. Leading indicators, on the other hand, are dynamic but difficult to measure. If a lagging indicator describes how a service has performed in the past, a leading indicator looks to the future by providing live feedback that can prompt immediate adjustments which improve performance right now. Whilst the latter is more desirable, it’s often the former that wins out. This is due in large part to how we manage services. Focusing on an end-to-end user journey means that the services we design often span several teams and/or departments. Whilst it might be easy to attribute ownership of a touchpoint or a specific part of a service, it’s harder to attribute ownership of the entire service journey. Sometimes, there is no owner. Whilst individual touchpoint owners might adopt a number of lagging indicators to measure their part of the service, if done in isolation, these can have a negative impact on other parts of the service. Whilst to one part of the business the service may appear to be operating well, the user’s experience over their entire journey could be less than desirable. It is therefore important that we measure the outcomes of a service, rather than simply the outputs.

There are broadly three high-level stages to any user journey — pre-engagement, engagement, and post-engagement. Each stage is equally important to the overall experience a user has of the service they are engaging with, but as a service provider, you won’t have the same level of control over them all. Delivering seamless user experiences across each stage of the user journey is tough. Understanding the circumstances which bring a user to your service, as well as those that follow the outcome, is an important part of measuring user experience.

Measuring a service well isn’t impossible, but it does require some careful consideration in order to ensure that what is being measured is both meaningful to users and meets the needs of the service provider.

Here are 6 things I like to consider when thinking about measuring a service:

Identify stakeholders

Services will often span several teams, departments, and sometimes organisations. Each entity will likely have its own leaders who in turn will have their own priorities and objectives, as well as their own way of collecting and storing data. In order to create meaningful measures which cut across these silos it is important to identify each stakeholder and touchpoint owner and understand their motivations and pain points. Often, there will be nobody responsible for the entire end-to-end service. In the short term, this role will sometimes be taken up by the service designer themselves, but this isn’t sustainable in the long term. In order to measure the end-to-end service, getting buy-in from each touchpoint owner is essential. To do this it may be necessary to persuade them of the value of measuring the service in a different way — they will want to be sure that any new measures will also give them what they need to meet their individual objectives. Ultimately, it’s not the role of the service designer to own the holistic service measure(s), so it’s important to engage stakeholders early in the design process and secure their buy-in as early as possible.

Identify existing measures

When meeting with touchpoint owners it’s important to get a good understanding of what measures and data sources already exist. It’s a good idea to plot these on the user journey map. By mapping existing measures alongside both existing quantitative and qualitative data sources, it should be possible to spot the relationship between what’s being measured and what data is available. This in turn will help highlight gaps to be filled and opportunities to be exploited. It’s important to understand what parts of the journey are in and out of your control and acknowledge that it isn’t possible to be in control of everything. Measures can’t predict every eventuality; however, the right leading indicators can help mitigate the risk of those out-of-control factors having too much of a negative impact on the service when they occur.
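A minimal sketch of the idea, assuming a deliberately simple representation of the journey map: the stages, measures and data sources below are illustrative, but even a structure this crude makes the gaps easy to spot.

```python
# Hypothetical journey map: each stage lists its existing measures and data
# sources. Anything left empty is a gap to fill (or an opportunity to exploit).
journey_map = {
    "Pre-engagement":  {"measures": ["Search ranking", "Click-through rate"],
                        "data sources": ["Web analytics"]},
    "Engagement":      {"measures": ["CSAT", "Completion rate"],
                        "data sources": ["Survey tool", "Case management system"]},
    "Post-engagement": {"measures": [],
                        "data sources": ["Complaints inbox"]},
}

for stage, coverage in journey_map.items():
    gaps = [name for name, items in coverage.items() if not items]
    print(f"{stage}: {'gap in ' + ', '.join(gaps) if gaps else 'covered'}")

# Post-engagement has a data source but nothing being measured against it:
# a gap worth raising with the relevant touchpoint owner.
```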

Identify service stages

Spend time identifying and understanding each stage of the service. Map each stage along with its touchpoints and stakeholders (there could be more than one of each). Pre-engagement, engagement, and post-engagement are good places to start, but in many cases, these high-level stages can be broken down further. It’s important to make a judgement call on how far to break down each stage — make a stage too big and it will be difficult to deal with, too small and it won’t give you the information you need. Think about the service from a user’s perspective — what’s important to them? What are the stages a user goes through to meet their overall service objective?

Offboarding is important too

It’s perhaps unsurprising that organisations tend to focus on onboarding new customers and retaining existing ones, especially in competitive sectors. Offboarding customers can often be seen as something which can’t be avoided, treated as collateral damage and prioritised below onboarding their replacements. As designers, we too can be guilty of placing more emphasis on designing for beginnings at the expense of designing for endings. However, in terms of the end-to-end service experience, what happens after the service outcome has been achieved is important. Loyal customers are a valuable asset. It’s important to understand what customers might need following service delivery. The Government Digital Service (GDS) understand this. For example, after you check your MOT status online, you are asked if you would like to receive a text message reminder one month before it’s due to expire. After-service interventions like these are important, so don’t forget to measure how effective they are.

Measure quality, not just performance

If performance is how well a service does what it says it will, quality is how consistently an expected level of performance can be achieved. Lagging indicators are good at highlighting good or bad performance. Leading indicators are good at highlighting quality. Think beyond simply measuring the outputs of a service and search for measures which are dynamic and allow you to not only measure quality retrospectively but also make adjustments to the service on the fly, ensuring that weak signals don’t become a performance downturn. Make sense of big data and use it in ways that enable you to be proactive rather than reactive.
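As a hedged sketch of what acting on a weak signal might look like in practice, the example below watches a live daily metric (journey abandonment is an assumed example, not one named in this article) and flags when a short rolling window drifts above its recent baseline, well before a monthly lagging report would show the downturn.

```python
# Sketch of a simple leading-indicator check: flag a weak signal when the
# last few days of a live metric drift above the preceding baseline.
# The metric, window and tolerance are illustrative assumptions.
from statistics import mean

def weak_signal(daily_values, window=7, tolerance=0.02):
    """True if the mean of the last `window` days exceeds the mean of the
    preceding days by more than `tolerance`."""
    recent, baseline = daily_values[-window:], daily_values[:-window]
    if not baseline:
        return False
    return mean(recent) > mean(baseline) + tolerance

# 30 days of journey abandonment rates: steady around 10%, creeping up lately.
abandonment = [0.10] * 23 + [0.11, 0.12, 0.12, 0.13, 0.13, 0.14, 0.14]

if weak_signal(abandonment):
    print("Abandonment is creeping up: investigate now, before it appears "
          "in next month's lagging report.")
```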

Understand your audience(s)

Different audiences will require different types of information and this in turn will drive different forms of reporting. Dynamic dashboards are useful for service owners and service stage owners — enabling them to make decisions to improve user experience on the fly. In contrast, Red Amber Green reports (RAG status reports) are used to summarise quantitative data and are particularly suited to reporting lagging indicators. RAG status reports may be useful to stakeholders who wish to benchmark and track month-on-month performance, but if used without sufficient qualitative data, they only tell half the story. It’s important to also include qualitative feedback in the form of fact-based stories which help communicate the why behind the what.
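For illustration, here is a minimal sketch of how quantitative metrics might be rolled up into a RAG status for that kind of summary report. The metrics and thresholds are hypothetical; in practice the thresholds would be agreed with the stakeholders who own each measure.

```python
# Hypothetical roll-up of quantitative metrics into Red/Amber/Green statuses.
def rag_status(value, green_at, amber_at, higher_is_better=True):
    """Map a metric value to a RAG status against two agreed thresholds."""
    if not higher_is_better:                 # flip so "bigger is better" logic applies
        value, green_at, amber_at = -value, -green_at, -amber_at
    if value >= green_at:
        return "Green"
    if value >= amber_at:
        return "Amber"
    return "Red"

# metric: (current value, green threshold, amber threshold, higher is better?)
metrics = {
    "CSAT (%)":             (78, 80, 70, True),
    "NPS":                  (35, 30, 0, True),
    "Avg. days to resolve": (12, 5, 10, False),
}

for name, (value, green_at, amber_at, higher) in metrics.items():
    print(f"{name}: {rag_status(value, green_at, amber_at, higher)}")
# CSAT (%): Amber, NPS: Green, Avg. days to resolve: Red
```

A roll-up like this answers “how are we doing?” at a glance, but the fact-based stories mentioned above are still needed to explain why a measure has gone Amber or Red.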

Above all, stay close to the metrics and be ready to flex. Nothing stays constant, so always be prepared for the unexpected.

Originally published at http://simonpenny.wordpress.com on October 18, 2022.
