Gathered here are recommendations to assist you and your staff with interpreting 2019 TAP survey data, conducting follow-up communications and planning for service improvements. Many of these suggestions come directly from our colleagues at UC San Diego (UCSD), who administered the survey for us and have over 20 years’ experience administering similar surveys; others come from colleagues at the University of Washington.


The Customer Satisfaction Survey was launched in February 2019 and was sent to all faculty, staff and students at all UW campuses apart from employees of the UW Medical Center. It was the second time the survey was administered. The first was in January 2017.

The survey asked staff, faculty and students to rate their experiences with core campus services. Survey feedback can help guide service improvements and show where units can celebrate teams for service well done. Results from the survey are integral to informing continuous improvement efforts — a key goal of the UW Transforming Administration Program (TAP).

Overall results showed that UW services were rated higher on the 2019 survey than in 2017.

Who we heard from

The 2019 Customer Satisfaction Survey was sent to over 77,500 faculty, staff and students across all UW campuses, more than double the number invited to respond in 2017.

The total number of survey respondents in 2019 was 14,841 (9,747 students, 1,171 faculty and 3,923 staff), a response rate of 19.1%. In 2017, 4,277 (11%) of the invited participants responded. Students accounted for the vast majority of this increase in responses.

Survey respondents provided over 20,000 written comments about UW core services. Colleagues at UCSD identified common themes in verbatim comments and provided light redaction of individually identifying information, where appropriate. Unredacted verbatim comments are available to unit leaders upon request.

Improvements from 2017

The survey tool and services listed were improved in 2019 based on lessons learned and input from the 2017 survey.

  • An improved list of services. More UW units were added to the survey and many units added or refined their services, increasing the total number of core services on the survey from 94 in 2017 to 132 in 2019. While most services appeared on both the 2017 and 2019 surveys, 26 were removed or reorganized and new services were added. The result is a clearer, more comprehensive list.
     
  • Filters to rate only the services used. The survey listed services to rate based on who was responding; students, for example, saw only those services that students use.
     
  • Easier access to the survey. Emails, newsletters and social media included a direct link to the survey, making it easier to promote participation at every level.
     
  • Feedback from everyone. All students, faculty and staff were invited to participate. As a result, the number of respondents in 2019 more than doubled.
     
  • Comparing better data more quickly. Reports include comparisons to 2017 survey results, if available, and to UW-wide results for each question.

Reading your results

When reading your reports, please consider the guidance and recommendations listed below.

Satisfaction Scores

All core services are rated from 1 (Not at all Satisfied) to 5 (Extremely Satisfied) for overall satisfaction and for six additional areas. On page 1, the report provides mean scores for the service (blue arrow) compared to all UW services (purple dot) and similar data for 2017, if applicable.
A detailed breakdown of responses can be found at the top of page 1 and on page 2.

If your unit refined the description of a service from 2017 to 2019, this may affect scores. In most cases, rewording clarified a service for more accurate responses.

Strengths & Opportunities

A summary of strengths and opportunities can be found on page 1 with more details on page 3. A scatterplot shows how each question correlates to “overall satisfaction.” Based on requests after the 2017 survey, a second scatterplot was added (page 4) that correlates each question to “understands my needs and requirements.”

When reading your scatterplots, focus on the right side of the graph to see what matters most to respondents. The four-quadrant scatterplot on page 3 of your report can help you understand the relative importance of each item, which may assist in prioritizing improvement efforts. This “leverage analysis” correlates survey respondents’ scores for each question with their scores for overall satisfaction, producing a “correlation coefficient” between 0.0 and 1.0. The closer the number is to 1.0, the more strongly the question is linked to overall satisfaction. A correlation of 0.70 or higher is considered strong.

  • “Strengths” (top left) are high scores that don’t correlate strongly with overall satisfaction.
  • “Influential strengths” (top right) are high scores that correlate strongly with overall satisfaction.
  • “Secondary opportunities” (bottom left) are low scores that don’t correlate strongly with overall satisfaction.
  • “Primary opportunities” (bottom right) are low scores that correlate strongly with overall satisfaction.

When planning improvement efforts, it helps to focus on the right side of the graph where the correlations to satisfaction are highest. “Primary opportunities” are the areas in which service improvements matter most.
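
For readers who want to see the mechanics behind the scatterplot, below is a minimal sketch of this kind of leverage analysis in Python. It is illustrative only: the ratings are hypothetical, and the use of Pearson correlation and of the overall-satisfaction mean as the high/low score cutoff are assumptions rather than the report’s documented method; only the 0.70 threshold for a strong correlation comes from the guidance above.

    from statistics import mean
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation between two equal-length lists of 1-5 ratings."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def quadrant(question_scores, overall_scores, high_cutoff, strong_corr=0.70):
        """Place one survey question into a quadrant of the leverage scatterplot."""
        score = mean(question_scores)                    # vertical axis: mean rating
        corr = pearson(question_scores, overall_scores)  # horizontal axis: correlation
        if score >= high_cutoff:
            return "influential strength (top right)" if corr >= strong_corr else "strength (top left)"
        return "primary opportunity (bottom right)" if corr >= strong_corr else "secondary opportunity (bottom left)"

    # Hypothetical ratings from six respondents (1 = Not at all Satisfied, 5 = Extremely Satisfied).
    overall    = [5, 4, 4, 3, 5, 2]   # "overall satisfaction"
    timeliness = [5, 4, 5, 3, 5, 2]   # tracks overall satisfaction closely
    courtesy   = [5, 5, 4, 5, 5, 4]   # consistently high, but weakly tied to overall satisfaction

    cutoff = mean(overall)  # assumed split between "high" and "low" mean scores
    print("timeliness:", quadrant(timeliness, overall, cutoff))  # influential strength (top right)
    print("courtesy:  ", quadrant(courtesy, overall, cutoff))    # strength (top left)

With real respondent-level ratings and the report’s own cutoffs, the same logic would reproduce the four quadrant labels described above.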

How big is your ‘n’?

As you read unit reports, look for the number of respondents and how many are faculty, staff or students. The larger the number, the more reliable the results. If too few people responded, your unit might consider ways to follow up with UW partners to gather additional feedback.

Pages 5 and 6 of the report cover Satisfaction by Classification and Division, with ratings by faculty, staff and students, both overall and by division. The number of respondents in parentheses shows how many people from each unit rated the service. The higher the number, the more reliable the data. If the number is small, or if there is no number at all, results are less representative and staff may be tempted to guess which individual(s) provided feedback. Leaders with concerns about low response rates among different groups may decide, as a result, not to share pages 5 and 6 of the report with unit staff.

See how your results compare to UW scores, strengths and opportunities overall: tap.uw.edu/tap-admin-survey

Communicating Results

When communicating results, colleagues at UCSD and UW recommend the following:

  • Consider a nuanced approach. This is not about “good” or “bad” scores, but rather input on focus areas to help services improve. Units in any score range can find insights into strengths and possible areas for improvement.
     
  • Congratulate your team, if congratulations are in order. More likely than not, they are. Many services have higher scores than in the inaugural 2017 survey and 78% of all services rated 4.0 or higher for courteous, professional staff. If a team was rated highly or called out in verbatim comments for great service, you may choose to celebrate their efforts via internal and external communication channels.
     
  • Put the data into context for your staff and think carefully about how you communicate results with them. This is our second campus-wide survey, and staff may be apprehensive about how results will be used. It may help to emphasize areas of strength, high scores and other positive responses before areas for improvement. Also, be clear that the goal is to inform forward-looking continuous improvement. If results don’t point to clear actions, you might discuss with your staff how to gather more detailed feedback via focus groups or follow-up surveys.
     
  • Provide context for “mid-range” scores. Respondents rated satisfaction with each service on a five-point scale, and ratings were averaged to create a mean score for each service. Scores were then grouped and labeled to show where they fit on the curve of all UW results. In the report, a score depicted in red or pink means only that the score falls in the first or second cluster of scores on the curve for the UW overall. If your staff focus on the “below-mean” implication of the color or the curve rather than on the “somewhat satisfied” language of the Likert (1-5) scale, it may help to share that the UW is a place of great service (hence the right-heavy curve). A brief illustrative sketch follows this list.
     
  • Pay attention to ‘unexpected’ customers. You may be tempted to discount ratings or comments from groups that don’t frequently use your service. Instead, consider how they may be secondary or invisible consumers of your service. Or, if they truly aren’t using your service, consider how best to clarify what your service does for these groups to avoid confusion.
     
  • Compliance units can — and do — receive high marks for customer service. Great service is not about always saying “yes.” It also includes saying “no” quickly, clearly and with a reasonable explanation. While providing excellent customer service for compliance-related work can pose unique challenges, units should not assume that their compliance function will keep scores low; many compliance units were rated highly.
     
  • Due to a change in survey demographics, scores for student-facing services listed on the 2017 TAP survey may change. The 2019 TAP Survey went to all students at all campuses, rather than to a representative sample as in 2017. As a result, many more students provided input, and scores for some student-facing services shifted significantly compared to 2017.
     
  • Verbatim comments can provide insight. It is up to unit leaders to decide, after reading verbatim comments, how best to use them. They can provide helpful context and detail for numerical scores. However, the anonymous nature of the comments can occasionally result in hurtful or inflammatory remarks. Leaders may choose to paraphrase the insights from verbatim comments or to share them directly with staff. If you do share them, it may help to encourage staff to focus on how the comments can inform their work. Even off-topic comments may indicate confusion about what the unit does and present an opportunity to improve communications. Trying to figure out who said what can lead to inaccurate assumptions or hurt feelings. For the 2019 Survey, we asked UCSD to categorize the comments into themes, which allows units to learn from the frequency and consistency of feedback without reading each comment individually.
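
As a hypothetical illustration of the “mid-range” scores point above, the short sketch below shows how a mean rating well above the middle of the 1-5 scale can still sit low on the curve of UW-wide results. All numbers are made up, and the clustering and color cutoffs used in the actual report are not specified here.

    from statistics import mean

    # Hypothetical 1-5 ratings for one service.
    service_ratings = [4, 3, 4, 3, 4, 4, 3, 5]
    service_mean = mean(service_ratings)  # 3.75 on the Likert scale

    # Made-up mean scores for all UW services, weighted toward the high end
    # (the "right-heavy curve" described above).
    uw_service_means = [4.5, 4.4, 4.3, 4.2, 4.2, 4.1, 4.0, 4.0, 3.9, 3.8, 3.75, 3.6]

    # Share of UW service means that fall below this service's mean.
    below = sum(m < service_mean for m in uw_service_means)
    percentile = 100 * below / len(uw_service_means)

    print(f"Mean rating: {service_mean:.2f} on the 1-5 scale")
    print(f"Percentile among UW service means: {percentile:.0f}%")
    # A score like this may be shaded red or pink in the report even though
    # respondents' average rating was well above the scale midpoint.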

Planning your next steps

Recommended next steps from UCSD and UW colleagues:

  1. Share results with your staff (high-level or detailed, up to you). It may help to review the advice above about communicating results.
     
  2. Decide what improvement efforts to take on and what further information you may need to gather. If needed, support is available from the Partnership for Organizational Excellence, a new team of organizational development consultants at POD. The Office of Educational Assessment can help with additional surveys and feedback gathering.
     
  3. Communicate to your key partners: “We heard you!” Sharing some high-level results and the actions you plan to take in response to feedback will let those who rated your services know that you were listening.

A few things that may be helpful to consider while planning:

  • Lower marks in “moving in a positive direction” can point to a need for more feedback. According to UCSD, lower scores in this category often indicate concerns about recent changes, upcoming changes, or changes that respondents would like you to make. In UCSD’s experience, units that scored well in customer satisfaction but had “moving in a positive direction” as a primary opportunity saw their satisfaction drop over time if they didn’t follow up to learn more. Focus groups and follow-up surveys helped uncover the reasons for this score.
     
  • Large units do not need to tackle improvements in every core service simultaneously. With limited resources, leaders will likely need to prioritize improvement efforts, not just within each service, but across the unit. A unit with seven core services, for example, may choose to focus on two or three for improvement this year, saving the others for the future.
     
  • A low number of responses is not a cause for inaction. If a service did not receive enough responses on this survey to result in meaningful or actionable data, it may point to a need for follow-up with known service partners. More targeted surveys or focus groups, especially if anonymous or confidential, can help provide specific feedback on what’s working well and what could be improved.