Benchmarking UIs

UX research | UX design

Background
What is Microsoft Advertising?
Microsoft Advertising (ads.microsoft.com) is the website advertisers use to create, launch, and manage their search advertising campaigns. Ads placed through this platform reach nearly half a billion people worldwide.
Problem statement
Microsoft Advertising's website underwent a complete UI refresh towards the end of 2019. With the upcoming holiday season fast approaching, our team needed to ensure that the site's new look and feel was usable and intuitive for our 150k+ advertisers.
Old UI (left) vs. New UI (right)
Project overview
My roles
UX research   →   planned research, recruited participants, executed study, analyzed data
UX design   →   created prototypes, led future work prioritization discussions
UX management   →   assigned work items, prioritized backlog
Research goal
Establish a set of quantitative and qualitative user experience metrics to gauge the usability of Microsoft Advertising’s new UI. 
Research questions
1. Task outcome: Can users successfully complete core/critical success factor tasks in the new UI?
2. Ease of use: How easy/difficult is it to perform these tasks?
3. Verbatim: What qualitative feedback do users have as they perform these tasks?
4. Confidence: Are users confident that the new UI is headed in the "right direction"?
5. Satisfaction: How satisfied are users with the new UI?
6. SUS response: How do users score the new UI's usability? 
  
Stakeholders
•  Design team (~10 designers)
•  PM team (~20 product managers)
Methodology
•  Summative usability test
•  Online, unmoderated
•  ~40 minutes to complete
•  20 participants
•  17 tasks
Analysis
SUS response
What it is   →   the System Usability Scale (SUS) is a “quick and dirty” yet reliable tool for measuring usability. It consists of a 10-item questionnaire with a five-point Likert response scale.
Why I used it   →   this questionnaire is widely used across the industry and has been instrumental on our team for both competitive analysis (comparing our UI's usability with our competitors' UIs) and internal benchmarking (assessing change in usability YoY).
Example SUS score shown
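For readers unfamiliar with the scoring, a SUS response converts to a 0–100 score by a fixed rule: each odd-numbered (positively worded) item contributes (response − 1), each even-numbered (negatively worded) item contributes (5 − response), and the summed contributions are multiplied by 2.5. A minimal sketch, using a hypothetical participant's responses:

```python
def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score."""
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical participant's responses to items 1-10, in order:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```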
Click paths (helpful for identifying problematic pages)
Task analysis
What it is   →   the most critical part of any usability study: a structured breakdown of each participant's performance on every task. The framework I created captured data points on success rate, task ease, interaction telemetry, and more.
Why I used it   →   this standardized method of analyzing performance let me clearly identify and document every usability issue, accessibility gap, and bug that users encountered during each task.
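To illustrate the kind of framework described above, here is a minimal sketch of a per-task record and a roll-up into benchmark metrics. The field names, scales, and values are hypothetical, not the actual schema from the study:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskResult:
    participant: str
    task_id: int
    succeeded: bool     # task outcome: did the participant reach the success criterion?
    ease_rating: int    # post-task ease rating (hypothetical 1-7 scale)
    clicks: int         # interaction telemetry: clicks recorded during the task
    seconds: float      # time on task

def summarize(results, task_id):
    """Roll one task's sessions up into benchmark metrics."""
    rows = [r for r in results if r.task_id == task_id]
    return {
        "success_rate": sum(r.succeeded for r in rows) / len(rows),
        "mean_ease": mean(r.ease_rating for r in rows),
        "mean_clicks": mean(r.clicks for r in rows),
        "mean_seconds": mean(r.seconds for r in rows),
    }

sessions = [
    TaskResult("P01", 3, True, 6, 14, 92.0),
    TaskResult("P02", 3, False, 2, 31, 240.0),
    TaskResult("P03", 3, True, 5, 18, 110.0),
]
print(summarize(sessions, task_id=3))
```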
UI problem matrix
What it is   →   a user-by-problem matrix (i.e. a UI problem matrix) organizes all of a product's known issues and prioritizes them by criticality and/or frequency.
Why I used it   →   this analysis method objectively prioritized all of the usability and accessibility issues discovered in the UI and served as an effective visual summary of the large amount of telemetry gathered during testing. When shared with stakeholders, the matrix gave me a foundation for guiding solution-generation discussions, using each row (i.e. each issue) as a prompt.
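As a sketch of how such a matrix can be assembled from issue logs, the snippet below ranks issues by severity first and by how many participants hit them second. The issue IDs and severity weights are hypothetical, not findings from the study:

```python
from collections import defaultdict

# Hypothetical issue log: (participant, issue_id, severity 1=cosmetic .. 4=blocker)
observations = [
    ("P01", "NAV-overflow", 3),
    ("P02", "NAV-overflow", 3),
    ("P03", "NAV-overflow", 3),
    ("P01", "GRID-focus-trap", 4),
    ("P02", "SAVE-no-feedback", 2),
]

hit_by = defaultdict(set)   # issue_id -> participants who encountered it
severity = {}
for participant, issue, sev in observations:
    hit_by[issue].add(participant)
    severity[issue] = sev

# Sort rows by criticality first, then by frequency across participants
for issue in sorted(hit_by, key=lambda i: (-severity[i], -len(hit_by[i]))):
    print(f"{issue:<18} severity={severity[issue]} users={len(hit_by[issue])}")
```

Each printed row then works as a standalone discussion prompt, mirroring how the matrix was used with stakeholders.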
Conclusion
Findings
•  Uncovered a significant underlying “reflow” design failure apparent on every page across the site
•  Informed redesign of global navigation's information hierarchy
•  Opportunities identified in the research led to new feature ideas for the new fiscal year (e.g. broad match search in the global navigation, drag-and-drop grid functionality, and new customer service entry points)
My impact
•  Established the team’s first official set of UX health metrics, tracked by leadership YoY (the study is now performed annually)
•  Uncovered and addressed 52 usability, accessibility, and responsive design concerns, providing design recommendations and next steps
•  The insights from this study now serve as foundational measures for the longitudinal evaluation of the UI, both release over release (internal benchmarking) and against competitor UIs (external benchmarking)