I See Patterns for a Living. Here’s What I’ve Learned.
When I first started at IBM, I spent a lot of time watching people stare at dashboards.
Not in a creepy way. In a researcher way. I'm a user experience (UX) researcher, and I make my living studying how our users interact with our products and translating what I learn for the team so they can make informed, actionable decisions.
I was conducting a series of studies on monitoring dashboards — usability tests and concept tests — where I watched users try to make sense of what they were looking at in real time. And I kept seeing the same thing over and over: confusion.
The designers had used purples and blues. Similar shades. Nothing to signal that one number was fine and another was on fire. They hadn't set thresholds or any sort of visual hierarchy that told you THIS ISSUE MATTERS RIGHT NOW.
Users would scan the screen and not know where to look. They didn’t know what was critical and what was noise.
Across sessions, participants, and studies, they'd say some version of the same thing. They wanted traffic light colors. Red. Yellow. Green.
When we tested dashboards with those status indicators, they relaxed. They knew what to do. Red meant address this now. Yellow meant I can probably wait on this. Green meant I don’t need to think about this at all.
That clarity — that immediate knowing of what requires your attention — stuck with me. I started noticing it everywhere. It became the seed of what I now call the Three Zone Framework, which I use for a lot more than dashboards. But that’s a different essay.
What it gave me professionally was something more specific: a way of thinking about signal versus noise. What actually needs my attention right now? What can wait? What’s fine and doesn’t need to exist in my brain at all?
Those questions follow me into every research project I take on. Because, as I learned through experience, research itself has the same problem dashboards do. There’s a lot of data. Not all of it matters equally.
And without a way to sort what’s critical from what’s just interesting, you end up with a beautiful study that sits in a folder somewhere and slowly becomes irrelevant.
Here’s the thing nobody tells you early in your research career: doing the study is the easy part. Getting someone to actually do something with it is the real work.
What Makes Research Actually Matter
I used to think my job was to run good studies. It took a few years — and a few playbacks that led absolutely nowhere — to understand that my job is to make sure research lands.
The answer, I’ve found, almost always lives in the questions you ask before you ask any research questions at all: if we learn what we’re setting out to learn, what will the team do about it? Will the research change anything in the product? Will it inform any decisions or product direction?
Most research doesn’t fail at the analysis stage. It fails before the study even starts — when nobody stops to ask whether anyone is actually invested in the outcome of the research.
1. Start with the question. But first, ask if the question is worth asking.
I learned this the hard way. You can conduct an amazing study that no one does anything with. The team nods, thanks you, and then does whatever they were already planning to do. I’ve been there. It’s one of the most discouraging things to happen to a researcher.
So now before I design anything, I ask: will this research actually be used?
That sounds simple and obvious. You might think, of course the research will be helpful. Helpful does not mean it will be used.
For the research to be used, it needs to have some kind of impact in relation to the object of the research. In my corporate environment, that means it needs to shape the future of the product I’m working on in some way.
In order to evaluate this, I have to look at the timing and ask: does this team have a decision coming up where the research would land? I have to understand the stakeholder: are they genuinely curious about what users need, or are they looking for validation?
I also have to be honest with myself about whether the research question is real or whether someone is just checking a box.
Sometimes I’m the one who spots the research question first. I’ll be in a meeting listening to a product manager talk about a direction they’re considering and I’ll hear the uncertainty about it. There’s a research question hiding in there. I’ll pull it out, bring it to them, and see if it resonates. If it does, we’ll set up a planning conversation and discuss.
Research planning conversations are important to me. They help me understand what the team plans to do with the research. I’ll ask them to make a commitment. Not in a formal, bureaucratic way. More like: if we learn X, what will you do with it? I want someone to answer that before we start. It doesn’t have to be a big dramatic promise. But there has to be something on the other end of this. A decision that’ll get made. A product change that’ll happen. A direction that might shift.
It’s better to not do research than to do research that won’t get used.
2. Design for what the team actually needs, not what would make a perfect study.
I care about rigor. I also care about not being precious about it.
The “ideal” study sometimes takes six weeks. Six weeks is a long time when a team is trying to ship. So I’m always balancing: what level of rigor does this question actually require, and how much time do we realistically have?
Sometimes the right answer is a quick round of five interviews. Sometimes it’s a full usability study with quantitative benchmarking. The method serves the question and the moment, not the other way around.
I’ve gotten more comfortable over time with sizing the research correctly. It took me a while. Early on, I think I was more attached to doing research the “right” way. Now I’m more attached to doing research that lands.
When I work with the team, I don’t just show up with a study already designed. I bring the team in early and get them to brain dump a bunch of questions they want to ask our participants. It helps me know what they’re thinking about, and it helps me know what to ask — especially when I’m navigating a new domain. I rely on the expertise of my team to make sure we’re asking the right questions, and I rely on my expertise as a researcher to gather the data.
I believe it’s our research collectively as a team. I never run off and conduct a study without the team being involved every step of the way. They’re never surprised at the end when I come back to them with the findings.
3. Capture more than you think you’ll need
Here’s something I’ve learned: you don’t always know what questions will come up six months from now.
This is why I’m meticulous about how I store and organize research data. I built a research repository in Airtable that tracks everything — studies, sessions, observations, findings, research questions — and links them together so you can follow a thread across multiple studies.
I also built structured notetaking forms so that when my teammates observe sessions with me, their notes actually go somewhere useful instead of living in a doc that nobody can find later.
I have many examples of why this is so important. A recent example: people from another team kept asking me questions about business intelligence. We’d never done a study specifically on BI. But when I went into the repository, we had actually touched on it tangentially across several studies.
The data was there. I could answer the question because I’d built a system that let the research live beyond the original purpose it was designed for.
Research has a longer life than most people think. The infrastructure you build around it determines whether you can access that life or not.
4. Close the loop. And actually close it.
By the time I present findings, my team shouldn’t be surprised by anything I show them.
That’s intentional. I invite stakeholders and teammates to observe research sessions as they happen. I send interim updates — a Slack summary here, a quick email there — so the insights are landing in real time, not all at once at the end. The readout isn’t a reveal. It’s a culmination of what we’ve been building toward together.
I learned this the hard way. Before I started involving the team this consistently, I met more resistance. Not because the research was wrong, but because it felt like something being done to them rather than with them. That distinction matters more than almost anything else in this work.
After synthesis, I present the findings and we work through what’s feasible together. Then we make it real. I work directly with designers to translate research into GitHub tickets that are specific, actionable, and buildable. I work with product managers to shape findings into Aha! epics for the roadmap.
Research that ends in a presentation might get remembered. Research that ends in project management software actually gets built.
And that’s the goal. Not a beautiful deck. A changed product. Real business impact.
Final thoughts
There’s one more thing I’ve learned about research that I didn’t expect and it has nothing to do with methodology.
Research travels. Once it leaves your hands, you can’t always control how it’s used. I’ve seen findings cited out of context. I’ve watched data get summarized in ways that drifted from what it actually showed. I’ve had someone discover an old study without the surrounding context and overstate what it meant.
This is part of why, when I write — here, in newsletters, in the book I’m working on — I tend to stick to my own observations and experience rather than citing studies. I know too well how research can get misrepresented, even by well-meaning people. I’d rather say here’s what I’ve seen and experienced than here’s what a study says and have that study mean something different by the time it reaches you.
It’s made me a more careful researcher and a more careful writer. And it’s reinforced something I keep coming back to: the work isn’t just in gathering good data. It’s in staying close enough to it, long enough, that you can protect what it actually means.
When I think about those early dashboard studies, what strikes me is how patient I had to be to let the pattern emerge. Participants kept saying the same thing across sessions, but it took running enough of them to be sure it was real, not just a few vocal people. That accumulation of evidence — that’s what makes you confident enough to advocate for something.
I take that patience — that willingness to sit with data until it tells you something true — into every project I do. And honestly, into a lot more than that.
I’m Rachel, a UX researcher who has spent years doing research in highly technical domains at companies like IBM. This is one of many places where my professional practice and my broader thinking about intentional, embodied intelligence overlap. If this resonated, you’ll feel at home in my newsletter. I write about technology, research, wellness, and the places where they all run into each other. It’s called Renaissance Rising and you can join below.
