A 66% boost in data-based decision-making

Removing barriers to company-wide user research

ResearchOps: Training and tooling, organizational impact, and continuous research rituals
Client: Everyday Speech
Role: ResearchOps lead
Date: H1 2024

The numbers that matter

User research was happening, but not at the scale or speed the company needed. Folks depended on me to make research happen, and I didn't have the capacity, so teams struggled to make confident product and business decisions.

Over six months, I led the effort to turn that around:

  • 110 user interviews conducted, up from a dozen in the same timeframe.
  • 66% increase in user-centered and analytics-informed decisions.
  • 21 employees trained in research and data analytics.
  • Key processes automated, standardized, and documented to make research more efficient.

These numbers signaled a shift: more voices in decision-making, more structured research, and better-informed teams.

Figure: Bar chart comparing how often employees relied on customer data to make decisions in September 2022 and June 2024. June 2024 shows a strong increase, with a 33-point rise in the “Always” and “Often” categories combined.

From two teams to company-wide change

At first, the project focused on two teams. The goal? Help them run frequent user research for six months. But momentum built fast. The success of these teams in integrating research made it clear: this needed to scale across the entire company. I've seen this pattern before in change management: starting small, proving value, and expanding only when the results speak for themselves.

The starting point: identifying the biggest barriers

Before making changes, I needed to understand what was broken. I surveyed internal stakeholders, asking them to rank the most disruptive challenges in the user research process.

From there, I prioritized the 20% of problems that would drive 80% of the impact. Every month, I ran retrospectives with stakeholders, adjusting as I learned what worked and what didn’t.

Figure: Survey results showing the most burdensome steps in the user research process, as ranked by 7 participants. The top three tasks were (1) updating meeting invites with Zoom links and inviting interview leads and shadowers, (2) creating the contact list, and (3) writing and sending the first recruitment email. Steps like follow-up emails and booking schedules ranked lowest in burden.

Low-effort, high-reward fixes

Participant management was a time sink, so I automated it

Research teams spent hours securing interviews, distributing them among facilitators, sending reminders, and handling incentives.

To cut down on manual work:

  • I introduced Calendly Round Robins for seamless scheduling.
  • I collaborated with a designer to create email templates for research invitations and thank-yous.
  • The result? Less time wasted on logistics, more time for research.

Sales and customer experience teams feared over-contacting users

One of the trickiest barriers wasn’t operational; it was political. Teams worried that frequent outreach could harm customer relationships or disrupt sales efforts.

The solution: a “do not contact” list, automatically updated with:

  • Users who had been invited to research in the past month.
  • Key accounts tied to active deals or renewal conversations.

This reassured teams while giving researchers a clear, confident path to reaching the right people.
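
Under the hood, the logic was simple. Here's a minimal sketch of the refresh job; the field names and data shapes are hypothetical stand-ins for our actual CRM and scheduling data:

```python
from datetime import datetime, timedelta

# The one-month cooldown comes from the actual process;
# everything else here is an illustrative assumption.
RECENT_INVITE_WINDOW = timedelta(days=30)

def build_do_not_contact(users, invites, key_account_ids, today=None):
    """Return the set of user emails researchers should leave alone.

    users:            dicts with "email" and "account_id" keys (hypothetical)
    invites:          dicts with "email" and "invited_at" datetime keys
    key_account_ids:  account IDs tied to active deals or renewals
    """
    today = today or datetime.now()
    blocked = set()

    # Rule 1: invited to research within the past month.
    for invite in invites:
        if today - invite["invited_at"] <= RECENT_INVITE_WINDOW:
            blocked.add(invite["email"])

    # Rule 2: belongs to a key account with an active deal or renewal.
    for user in users:
        if user["account_id"] in key_account_ids:
            blocked.add(user["email"])

    return blocked
```

Every recruitment list was filtered against this set before outreach went out, so no one had to remember the rules by hand.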

Shadowing interviews created buy-in across teams

Whenever someone led a user research call, I encouraged them to invite stakeholders, especially engineers, to shadow.

  • More exposure to user pain points made teams naturally apply UX logic in their work.
  • It built confidence for people to run their own research.
  • To make shadowing easy, my colleague Aya created a "how to shadow a user research call" guide so research leads didn’t have to explain the process every time.

Smarter recruitment emails reduced mental load

Instead of sending weekly requests for interviews, I batched outreach:

  • One email to 2,000 users secured enough interviews for an entire semester.
  • Calls were only bookable on specific days (e.g., Thursdays), reducing scheduling chaos.
  • This required careful budget planning to control interview slots (a rough sketch of that math follows below), but it saved countless hours.
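
To make that planning concrete, here's the back-of-envelope math; only the 2,000-recipient batch size comes from the actual campaign, and the rates and targets below are assumptions:

```python
import math

recipients = 2000              # actual batch size from the campaign
assumed_booking_rate = 0.05    # hypothetical: ~5% of recipients book a call
interviews_needed = 80         # hypothetical semester target
slots_per_thursday = 5         # hypothetical: calls bookable on Thursdays only

expected_bookings = recipients * assumed_booking_rate
thursdays_needed = math.ceil(interviews_needed / slots_per_thursday)

print(f"Expected bookings: {expected_bookings:.0f}")      # 100
print(f"Thursdays to run them all: {thursdays_needed}")   # 16
```

Capping the available slots per day is what kept the incentive budget predictable: bookings could never outrun what we had planned to pay for.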

Fixing no-shows with personalization

Early in the project, 50% of scheduled interviews resulted in no-shows. The team’s hypothesis? The emails felt too generic, making cancellation feel like no big deal.

We redesigned Calendly confirmation emails to:

  • Use the participant’s first name.
  • Introduce the call facilitator personally.
  • Convey that the facilitator was genuinely looking forward to the conversation.

It worked. No-show rates dropped to 16%.
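
Calendly's customizable notifications did the actual sending; the sketch below just illustrates the merge logic, with hypothetical names and copy:

```python
# Hypothetical copy in the spirit of the redesigned confirmation email.
CONFIRMATION_TEMPLATE = (
    "Hi {first_name},\n\n"
    "I'm {facilitator}, and I'll be leading our call on {date}. "
    "I'm genuinely looking forward to hearing about your experience.\n\n"
    "See you soon,\n{facilitator}"
)

def render_confirmation(first_name: str, facilitator: str, date: str) -> str:
    """Fill the template so the email reads like a note from a real person."""
    return CONFIRMATION_TEMPLATE.format(
        first_name=first_name, facilitator=facilitator, date=date
    )

print(render_confirmation("Dana", "Sam", "Thursday, June 6"))
```

The design choice was the point, not the code: a named human expecting you is much harder to stand up than an anonymous calendar slot.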

High-effort, high-reward fixes

Lack of incentive budget? I got creative.

Budgets were tight, and I couldn't always offer cash incentives. But valuable incentives don't always require money.

  • We brainstormed PDF-based incentives that were meaningful to users.
  • Cross-functional teams helped create resources like an IEP planning bundle for educators: something genuinely useful.
  • The result? Higher participation without additional cost.

Setting a minimum research baseline

A big issue was inconsistency: teams would do research sporadically, then stop. To fix this:

  • I set a goal: 3 customer interactions per sprint.
  • Frequent, lightweight research meant teams focused on learning over perfection.
  • It built research into their routine instead of making it feel like an "extra" task.

Centralizing insights in Dovetail

To make research findings more accessible and actionable:

  • I trained teams on a standardized thematic analysis workflow in Dovetail.
  • I built a repository with a custom taxonomy that teams could search when starting a project (see the sketch after this list).
  • The downside? Documentation wasn’t enforced as strictly as I’d have liked. But at least all data was in one place.
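
To make "searchable" concrete, here's a hypothetical slice of such a taxonomy sketched as data; the actual tags in Dovetail were tailored to Everyday Speech's product areas:

```python
# Hypothetical taxonomy slice; categories and tags are illustrative.
TAXONOMY = {
    "persona": ["educator", "SLP", "administrator", "parent"],
    "journey_stage": ["onboarding", "weekly planning", "renewal"],
    "theme": ["pricing", "content gaps", "usability", "reporting"],
}

def matching_tags(query, taxonomy=TAXONOMY):
    """Return (category, tag) pairs whose tag contains the query string."""
    query = query.lower()
    return [
        (category, tag)
        for category, tags in taxonomy.items()
        for tag in tags
        if query in tag.lower()
    ]

print(matching_tags("plan"))  # [('journey_stage', 'weekly planning')]
```

A shared vocabulary like this is what lets a team starting a new project surface every past insight on, say, onboarding, instead of re-interviewing for it.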

Built to outlast me

This project started as a simple OKR initiative for two teams. It grew into a company-wide shift in how user research is done.

Was everything perfect? No. Some initiatives needed more enforcement, and some trade-offs had to be made. But the core goal of removing barriers to research and making decisions more user-informed was met.

And the best part? It didn’t rely on me orchestrating everything anymore. The processes, automations, and mindsets I built will keep driving impact long after I’m gone.