Stop Guessing What They Want
We live in the golden age of feedback.
It has never been easier to ask people what they think. With just a few clicks, any organization can spin up a free survey, blast it out to an email list, and get a colorful pie chart showing exactly what their stakeholders believe.
It feels empowering. It looks like data. It gives leaders the confidence to launch that new program, rebrand their organization, or shift their internal strategy.
But there is a dangerous trap here. Bad research is often worse than no research at all.
When you have no data, you know you are guessing. You proceed with caution. You test the waters. You remain open to the idea that you might be wrong. But when you have flawed data, you have the illusion of validity. You move forward with full confidence, often in the completely wrong direction.
As an organizational research firm, we see this constantly. A client comes to us confused because they surveyed their members or employees and the vast majority said they wanted a specific change. So, the organization made that change, and nobody used it. They invested time and budget into a solution that theoretically had approval but practically had no uptake.
Researching Is a Science
The answer usually lies in the methodology. Asking is a simple conversation. Researching is a science. Without the right methodological framework, your data is likely suffering from invisible biases that skew the results before you even open the spreadsheet.
One of the most common issues is selection bias. When you send out a generic link, you rarely hear from the average user. You hear from the vocal minority who had the time and inclination to click that link. These are often the people who are either very happy or very angry. The vast middle ground, the silent majority who actually make up the bulk of your audience, often goes unrepresented.
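One standard correction for this kind of skew is post-stratification weighting: compare the makeup of your respondents to the makeup of your actual population, then weight each group accordingly. The sketch below uses invented numbers purely for illustration; real work would pull the population shares from membership or HR records.

```python
# Sketch: correcting selection bias with post-stratification weights.
# All figures below are hypothetical, for illustration only.

# Suppose 70% of survey respondents are "highly engaged" members,
# but membership records show only 20% of the population is.
population_share = {"highly_engaged": 0.20, "average": 0.80}
sample_share = {"highly_engaged": 0.70, "average": 0.30}

# Weight each group by (population share / sample share)
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical approval rates for a proposed change, by group
approval = {"highly_engaged": 0.90, "average": 0.40}

# Naive (unweighted) estimate vs. weighted estimate
naive = sum(sample_share[g] * approval[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * approval[g] for g in sample_share)

print(f"Unweighted approval: {naive:.0%}")    # 75% — dominated by the vocal minority
print(f"Weighted approval:   {weighted:.0%}")  # 50% — closer to the population's view
```

The raw tally suggests overwhelming support; the weighted estimate shows a population split down the middle. Same responses, very different strategic implication.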
Another issue is social desirability bias. Your questions might unintentionally pressure respondents to give the polite answer rather than the honest one. If you ask employees if they value a collaborative culture, they will almost certainly say yes because it is the correct thing to say. It does not mean they actually want more meetings or open-concept offices.
Perhaps the most significant challenge is the gap between what people say and what they do. Humans are notoriously bad at predicting their future behavior. If you ask a customer if they would buy a hypothetical product, they will often say yes to be helpful or agreeable. If you ask them to actually pay for it, the number drops precipitously.
To get actionable strategy research, you have to move beyond surface-level questions. You have to design the inquiry to reveal the truth. This is where the distinction between a survey tool and a research consultancy becomes clear. Tools give you percentages. Consultants give you implications.
Consider a classic example of how a slight tweak in research design changes the strategic outcome. Imagine an organization wants to improve its workspace to boost morale.
If you take the amateur approach, you might ask a simple question like, "Is safety important to you in a workspace?"
The result is predictable. Ninety-nine percent of people will say yes. Based on this, the organization might conclude that they need to spend their entire budget on safety upgrades.
However, a professional research approach looks different. We might ask a trade-off question. We could present a scenario where the employee has one hundred dollars to invest in their workspace and ask them to split it between faster technology, better coffee, or safety upgrades.
In this scenario, we often see a completely different result. Participants might allocate sixty dollars to technology, thirty dollars to coffee, and only ten dollars to safety.
The conclusion here is nuanced. Safety is a hygiene factor. It is expected, but it is not a motivator. Investing there will not drive satisfaction or improve morale because people already expect to be safe. Investing in technology, however, removes a daily friction and actually improves their work life.
The first approach validates a bias. The second approach provides a trade-off analysis that actually informs resource allocation.
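This kind of constant-sum question can be tallied in a few lines. The responses below are invented to illustrate the mechanics: each respondent splits the same hundred-dollar budget, and the mean allocation per option reveals relative priority.

```python
# Sketch: aggregating a constant-sum (trade-off) question.
# Each respondent splits $100 across three options; data is hypothetical.
responses = [
    {"technology": 70, "coffee": 20, "safety": 10},
    {"technology": 50, "coffee": 40, "safety": 10},
    {"technology": 60, "coffee": 30, "safety": 10},
]

# Validity check: every allocation must sum to the full budget
assert all(sum(r.values()) == 100 for r in responses)

# Mean allocation per option, ranked highest to lowest
options = responses[0].keys()
mean_alloc = {opt: sum(r[opt] for r in responses) / len(responses)
              for opt in options}

for opt, dollars in sorted(mean_alloc.items(), key=lambda kv: -kv[1]):
    print(f"{opt}: ${dollars:.0f}")
```

With these invented responses, technology averages sixty dollars, coffee thirty, and safety ten: the allocation described above. The point is not the arithmetic but the question design; forcing a trade-off surfaces priorities that a yes-or-no question hides.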
The Cost of Getting It Wrong
Whether we are running opinion polls for a public campaign or conducting focus groups for internal organizational strategy, our job is not just to tally votes. Our job is to design the mechanism that bypasses biases to find out what truly drives behavior.
This is critical because the cost of a wrong decision is rarely just financial. When you launch a strategy based on bad data, you burn social capital. You tell your team or your customers that you are listening, but then you deliver something that does not solve their actual problems. This creates cynicism. The next time you ask for feedback, they will be less likely to give it because they have seen that it does not lead to meaningful change.
If you are making a low-stakes decision, like where to order lunch for the team, a quick poll is fine. But if you are setting organizational strategy, launching a market-facing product, or reviewing policy that affects thousands of people, you cannot afford the illusion of validity.
Innovation and strategy do not require you to be a magician. They just require you to be a researcher. The smartest move is not to guess what your stakeholders want. It is to use the right tools to uncover what they actually need.