Poll Everywhere is a live-polling SaaS company serving 75% of the Fortune 1000.

Here’s how it works: Presenters add live polls to their slide deck and audience members respond on their mobile devices. The chart updates in real time within the presentation.

Our job is to provide value for Presenters, which usually means increased confidence while presenting, higher audience engagement, and better knowledge retention.

Involvement

As Design Team Lead, I facilitated product design work and was responsible for the output and outcome of the design function.

  • Design Team Lead
  • Facilitator
  • Stakeholder POC

Context

Our problem statement

Poll Everywhere relies heavily on the skill and foresight of the Presenter to entice audience participation. This is unreliable.

I framed the problem as job stories to be validated, spanning 3 of our 5 existing Archetypes.

✏️ Creators

When I’m building a presentation;
I want to introduce moments of high-engagement;
So that the audience can better retain key concepts.

👩🏽‍🏫 Presenters

When I seek feedback in a vulnerable state;
I want audience participation to be reliable;
So that I can present with confidence.

📲 Participants

When I choose to engage by responding to polls;
I want a compelling reason to participate;
So that my engagement feels worthwhile.

Our hypothesis

Inside every human being is a desire for mastery and a need to belong. If Presenters can run gamified polls that tap into these drives, they will perceive a lift in audience engagement.

 

Project

🏁 Goals

Virality
More Participants becoming Presenters after experiencing a Competition (this goal doubles as our KPI)

Retention
Increase Presenters’ second-month usage, tracked as monthly active users

⏱️ Metrics

Quantitative

  • Adoption, signaled by a third consecutive re-use

  • Multiple-choice polls within a Competition receive more responses

Qualitative

  • Observed intent to use a Competition in the near future with a real audience

  • Ease-of-use; time-on-task

💎 High-level constraints

Competitions should work only on the web at first, expanding to our native apps once validated.

Creating a competition should use the same components and creation flow as any other poll.

Competition questions should provide the same amount of time to answer, regardless of length.

 

Process

⚗️ 1. Generative research

Our internal ad-hoc User Research team interviewed 15 customers and compiled a list of assumptions and how-might-we’s. This formed the foundation of our design work and refined our job stories.

Key insights (Chili’s, ties, formative assessment)

🍱 2. Design banquet

I facilitated a six-hour Design Banquet based on the double-diamond design framework. The Banquet included stakeholders, designers, subject-matter experts, the product manager, one founder, and developers.

The output from the Design Banquet included detailed sketches of high-value areas, describing the flow for creating, presenting and participating in a competition.

📈 3. Evaluative research

We scheduled 18 remote customer interview sessions across 5 weeks. The goal of these sessions was to evaluate whether our customers could imagine themselves presenting a Competition, thus justifying development. I facilitated each session.

I encouraged one of the two designers on the project to build a clickable InVision prototype that we would manually advance over a video call.

This gave customers a chance to react to a product that appeared to be real.

We discussed our observations after each session and incorporated them into the prototype for the following session.

Zoomed-out view of the InVision prototype

✅ 4. Validation

As soon as Engineers produced a working version of Competitions, we retired the InVision prototype and switched over to the real thing, this time letting the user drive. This allowed us to observe friction, ease-of-use, and time-on-task, as well as adding character to our job stories.

“Competitions got a resounding response, echoing in the room and over the next few days how amazing it was. I had people say, ‘OMG’ and give me high fives.”

As part of our validation model, we asked each customer if they could imagine themselves presenting a Competition.

 

Output

The majority of what’s shown below was crafted by the heavy-hitters on my team, not by me.

Creating and configuring a Competition

Create the Competition

Legacy poll creator UI

Configure the Competition

New configurator UI

Presenting a Competition

Showing the question with timer

Question is active, timer has started

Question locked, responses shown

The final Leaderboard shown on the Presenter’s screen

Participating in a Competition

Basic response flow

Legacy UI with screen name input

Legacy UI with multiple-choice input

Legacy UI with score counter

The final Leaderboard for Participants

 

🎉 Outcome

⚖️ Trade-offs

We debated which Presenter controls were nice-to-have versus essential for the MVP: the countdown timer, click-to-advance, and the filmstrip.

📦 Iterating through bi-weekly releases

Our Product Manager empowered the Engineers to ship bi-weekly releases across 4 months. Each release included improvements based directly on customer feedback, along with many bug fixes.

Phase 1
Internal-only alpha

Phase 2
Limited alpha

Phase 3
Limited/opt-in beta

Phase 4
GA rollout 1/9/90

 

🎁 What I learned

Maintain relationships with test participants
The people who give early feedback on rough prototypes end up being the best advocates and are deeply engaged with the product.

Hire designers with strong soft skills
I made a bad hiring decision for my team, which added risk to the project. I gave too much weight to hard skills and too little to soft skills, particularly communication.
