
Max - Explicit Signals/User Ratings

Designing a user rating system for Max to give viewers greater control over their algorithmic recommendations.

Streaming · UX Design · Max

A clear sign of the increasing sophistication of streaming video users is their strong desire for some control over the algorithm that determines what they see and don't see. Over the years we have consistently seen not only a clear understanding of how recommendation algorithms work, but also deep frustration with being pigeonholed or left at the mercy of a black-box system. For years I had dreamed of building a system that gives users real power in choosing and prioritizing content. Letting them rate the things they liked or disliked was the first step along that very important road.

Max explicit signals and user ratings feature

Role

Product Design Manager/Project Lead: As the design lead for both the Content Detail pages and user actions (add to list, download, etc.), this feature fell largely within my purview, though it also had critical impacts on our algorithms/personalization designers. I was tapped as the point person for the project overall, driving the design strategy for the feature and coordinating efforts across our two teams for the research, ideation, and design (UX, visual, and motion) of both the in-app experience and the downstream effects of users rating content. I also served as the voice of the project, presenting to executive leadership and advocating for the feature in prioritization discussions.

I moved to Scotland just before launch, so I don't have direct insight into how the feature is performing, but I strongly believe that what we delivered was key to solving some major customer pain points: only 58% of users indicated satisfaction with personalization and recommendations on Max; the 'Ability to Hide Content' feature ranked 3rd among customer-requested features (10% of all requests); and among the 20% of users who expressed dissatisfaction with recommendations, 30% wanted the ability to customize their recommendations.

In a Nutshell

Most Fun

The pixel-level refinement work on the thumbs icon design — evaluating variations between 'blobby' and realistic hand depictions to find the perfect balance.

Biggest Challenge

Securing roadmap prioritization amid competing demands. The project required significant advocacy and leadership buy-in during the 'Early Days' phase.

In Retrospect

Would have pushed harder to include ratings in the end-card experience from the start, as it represents a natural moment for user feedback.

Customer Impact

  • Only 58% of users indicated satisfaction with personalization and recommendations on Max, suggesting recommendations were not aligned with their tastes and preferences
  • The 'Ability to Hide Content' feature ranks 3rd among customer-requested CX features, representing 10% of all feature requests
  • Among the 20% of users who expressed dissatisfaction with recommendations, 30% wanted the ability to customize their recommendations
Customer feedback showing desire for content control and rating capabilities

Early Days

The idea of incorporating user ratings certainly wasn't new. We had been discussing it since the HBO NOW days as a way to learn more about customer preferences, and again at the launch of (the original) HBO Max, when I designed the multi-user profile feature and we could finally target an individual instead of all account members. More recently, some of our designers had investigated the different systems used by other services (thumbs up/down, stars, numerical, etc.) and the pros and cons of each. In each case, the idea would gain attention, initial work would begin, and then executive interest would eventually wane as other priorities demanded the design team's attention.

In 2023, though, the stars aligned: I now led design for the bulk of the features involved in delivering a robust explicit-signals experience. I had been at the helm of user actions (My List, downloads, Continue Watching, etc.) for over a year and had recently taken on the detail pages as well. I was also highly motivated to get this project moving, since it was the obvious first step toward my long-held goal of giving users control over their recommendations. I had heard repeatedly since at least 2018, when I first did research for the Profiles feature, that users wanted this level of control, and that they were going to absurd lengths (deleting profiles, creating new profiles, playing content purely to nudge the algorithm, etc.) to influence what they see.

To kick things off, I worked with my product partners to assess whether there was room on our roadmap to focus on explicit signals and whether product leadership had any appetite to tackle it now. There were always many projects vying for our attention, and that moment was no different, but we decided it was worth proposing as one of roughly three large efforts under way at the time. With that decision made, we resurrected some of the older but still quite good work done before we were involved and dug more deeply into existing research to identify any gaps we needed to fill before fully settling on goals and scope.

Forming and Ideation

The first step in getting this feature off the ground was bringing together all of the teams involved. I worked with my DPM to put together a project plan with a work-back schedule and kickoffs, followed by recurring meetings between our team (Core Features design and product) and the algorithm/recommendations product and design leads. These meetings ensured we would deliver a rating mechanism that was consistent and cohesive from the moment of rating, through the next browsing session, all the way to the long-term value of a truly customized recommendation set. I also set up an ideation session where we could work through:

  • Previous work and existing research (earlier efforts, app store reviews, abandoner surveys)
  • What type of rating system (thumbs, stars, numerical) we wanted to use
  • What feedback we wanted to give the user upon rating
  • What effect ratings should have, and how it would differ for positive vs. negative signals
  • What open questions we needed answered before we could start design

The session was a great opportunity to surface ideas and build excitement around the project. It also let us organize next steps so that both my team and the algo team could proceed without worrying about misalignment. We would focus on things like where users could rate, what the specific rating system would be, and the interactions across mobile, web, and CTV.

I embedded the design brief in our workshop FigJam to ensure alignment on our goals.
We started this round by assessing what has been done before and identifying what we liked/didn't.
We then dug into what rating would actually DO for users to improve their Max experience.
We were also very careful to consider the right-sized approach to user education: no surprises and no overload.
We took the opportunity of the workshop to review what the data-design team was proposing for rating's effect on the browse experience.

Early Design

After aligning on purpose and general approach, the two design teams (my team and the algorithm/personalization design team) dug deeper into their specific areas and started designing in earnest. I set up weekly meetings and working sessions for us to regroup and stay coordinated, which gave us quick feedback loops and kept us prepared for biweekly check-ins with product and design leadership.

For our part, we focused on a few fundamental questions about the feature:

  • Where should users be able to rate titles? The Content Details page would be the primary place for rating titles, with secondary options being action drawers throughout the browse experience on web and mobile, and on the end card after a user has completed a title.
  • How much do we need to explain the feature? We opted for minimal explanation — no need for disruptive coachmarks or dialogs or verbose explanatory text — since the feature was generally well understood by users, and a shorter general explanation would allow us to experiment and improve things without the need for re-education.
  • How do we make rating as easy and quick as possible? After a lot of exploration, we landed on drawers and menus with the rating options exposed, enabling single-tap rating wherever users encountered the rating CTA.

During this phase, we did many different explorations around interaction patterns for the ratings, and consulted heavily with design systems as well as accessibility to ensure what we were designing would feel aligned with how the app looked and felt generally and that it would work well for all of our users. We also started explorations into the exact iconography we would be using for the different ratings (love, like, not for me). I don't think I have ever thought so much about the human thumb in my life!

Final Design and Delivery

After much exploration, we landed on a solid approach that fit nicely within our existing design system and used (with some enhancement) existing components to really nice effect. The MVP scope broke down into a handful of key capabilities:

  • Enable Explicit Signaling from CDP (all) and kebab menu (mobile and web) for Entertainment content
  • When a user "Loves" a title, Max generates a "Because You Loved" rail and takes that as a signal into the algo
  • When a user "Likes" a title, Max takes that as a signal into the algo
  • When a user indicates "Not For Me", Max takes that as a signal into the algo and hides the title from all non-factual rails, including the end card; the title does not power a BYW rail (it is still findable in Search and A-Z, and Watch History is retained)
  • Max ingests "Not for Me" as a signal for Marketing & Push Notifications (other signals to come later)
  • Extend Explicit Signals to End Card
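The "Not For Me" behavior above can be sketched in code. This is purely an illustrative model (all names and data structures are hypothetical, not Max's actual implementation): factual surfaces like Search and A-Z always show every title, while algorithmic rails filter out anything the user has marked "Not For Me".

```python
# Hypothetical sketch of the "Not For Me" visibility rule:
# factual surfaces (Search, A-Z) are never filtered; algorithmic
# ("non-factual") rails hide titles the user rated "Not For Me".

FACTUAL_SURFACES = {"search", "a_to_z"}

def visible_titles(surface, titles, ratings):
    """Return the titles to show on a surface, honoring 'Not For Me'."""
    if surface in FACTUAL_SURFACES:
        return list(titles)  # factual surfaces always show everything
    hidden = {t for t, r in ratings.items() if r == "not_for_me"}
    return [t for t in titles if t not in hidden]

ratings = {"title_a": "love", "title_b": "not_for_me"}
rail = ["title_a", "title_b", "title_c"]

visible_titles("home_rail", rail, ratings)  # title_b is hidden
visible_titles("search", rail, ratings)     # all titles remain
```

The same `ratings` map could also feed the positive side: "love" entries seeding a "Because You Loved" rail, with both "love" and "like" passed through as algorithm signals.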

This MVP version of the feature hit all the major points we had hoped it would. It gave users a mechanism to have some control over their experience, and the outcomes from rating matched the expectations we had heard in our initial research.

With the final scope defined, we moved on to the fine details of the experience. We landed on drawers as the primary component for serving up the rating UI. This built on users' existing expectations for how they interact with titles while reusing existing components to limit development scope, which freed us to focus on integrating rating into those systems and fine-tuning the motion and animations to make the experience as lively and interesting as possible. I worked closely with our motion design team to ensure the feedback was as clear as possible. The team created some incredible visuals to capture the spirit of really loving and liking a title, with a nicely understated thumbs-down for "Not For Me." We made slight variations to accommodate the different placements (within a button, at the bottom of a drawer, and so on).

The end result was an intuitive, beautiful experience for users. After much persuasion, we even got the green light to include rating in the end-card experience, a huge win even though it added a step before the user could continue watching. I am really proud of the work done by our team and all of our incredible partners on this project.