Gathered here are some recommendations on how to approach both the 2017 TAP survey data and follow-up communications and planning. Many of these suggestions come directly from our colleagues at UC San Diego, who administered the survey for us and bring more than 20 years of experience with similar surveys; others come from UW colleagues.
Congratulate your team, if congratulations are in order. More likely than not, they are: survey results were higher than expected in many areas. If a team was called out for great service, it’s worth taking a moment to celebrate with them and to highlight their work in unit communications.
Consider a nuanced approach. This is not about labeling units “good” or “bad,” but about gathering input on how units can improve. Even units with top scores can find insight into where customers would like to see improvement.
Put the data into context for your staff, and think carefully about how you communicate results to them. This is our first survey, and staff may be apprehensive about how the results will be used. Most units received top marks for their “courteous and professional staff.” Emphasize this – and other positive responses – and then share areas for improvement. Also, be clear that the goal is to inform forward-looking continuous improvement. If results don’t point to clear actions, discuss with your staff how you might gather more detailed feedback via focus groups or follow-up surveys.
Lower marks in “moving in a positive direction” should prompt additional feedback gathering. According to UCSD, lower scores in this category often indicate concerns about upcoming changes, or concerns about changes that respondents would like you to make. In UCSD’s experience, units that scored well in customer satisfaction but had “moving in a positive direction” as a primary opportunity saw their satisfaction drop over time if they didn’t follow up to learn more. Focus groups and follow-up surveys helped uncover reasons for this score.
Explain that a “marginal” score is not as significant as it may appear. Respondents rated satisfaction with each service on a five-point scale. Ratings were averaged to create a mean score for each service. Scores were then grouped and labeled to show where they fit on the curve of all UW results.
In UCSD’s report, a “marginal” score means only that a score is on the line between “low” and “good” on the curve. If your staff are focused on the “marginal” language of the curve compared to the “somewhat satisfied” language of the Likert scale, it may help to share that the UW is a place of great service (hence the top-heavy curve) and that we all aspire to “excellent” scores of 4.30 and higher.
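To make the scoring mechanics concrete, here is a minimal illustrative sketch of how a mean score might be mapped to a label band. Only the five-point scale, the averaging step and the 4.30 “excellent” threshold come from the description above; the function name, the other cutoffs and the label order are hypothetical placeholders, not UCSD’s actual methodology.

```python
# Illustrative sketch only: average five-point ratings for a service and map
# the mean to a label band. The 4.30 "excellent" threshold is cited in the
# report; the other cutoffs below are hypothetical placeholders.

def label_service(ratings):
    """Return (mean score, label) for a list of 1-5 satisfaction ratings."""
    mean_score = sum(ratings) / len(ratings)
    if mean_score >= 4.30:        # "excellent" threshold cited in the report
        label = "excellent"
    elif mean_score >= 4.00:      # hypothetical cutoff
        label = "good"
    elif mean_score >= 3.70:      # hypothetical cutoff: the "marginal" band
        label = "marginal"        # i.e., on the line between "low" and "good"
    else:
        label = "low"
    return mean_score, label

# Example: a service rated mostly "satisfied" (4) with some "somewhat satisfied" (3)
print(label_service([4, 4, 5, 3, 4, 4, 3]))  # -> (3.857..., 'marginal')
```

The point of the sketch is that a “marginal” label reflects where the mean falls on the curve of all UW results, not a verdict that customers are dissatisfied.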
Compliance units can — and do — get rated highly for customer service. UW customers recognize that great service is not about always saying “yes”; it also includes saying “no” quickly, clearly and with a reasonable explanation. Units should not assume that a compliance function will keep their scores low.
Verbatim comments are for insight. They can provide context and detail behind the numerical scores, and it is up to the unit leader, after reading them, to decide how best to use them. Because the comments are anonymous, they can occasionally be hurtful or inflammatory; leaders can choose whether to paraphrase the insights or to share the comments directly with staff. If you do share them, encourage your staff to think about how the comments can help them do things better, and discourage attempts to figure out who said what, which can lead to inaccurate assumptions, hurt feelings or worse. Even off-topic comments may indicate confusion about what your unit does and may point to the need for better communications.
Just because you don’t think someone is your customer doesn’t mean they don’t think of themselves as your customer. You may be tempted to disregard ratings or comments from a group you don’t normally serve. However, consider how they may be a secondary or invisible customer, or, if they truly aren’t using your service, how you might better describe what you offer to avoid confusion.
A low number of responses is not a cause for inaction. If you didn’t get enough responses to result in actionable data, consider follow-up surveys or focus groups. How else does or could your unit get a fuller picture of how your customers feel about your service?
Large units do not need to tackle improvements in every core service simultaneously. In this time of limited resources, leaders need to prioritize improvement efforts, not just within each service, but across the unit. A unit with seven core services, for example, may choose to focus on two or three for action this year, saving the others for the future.