Having recently been delighted to add two outstanding, rapidly ramping product managers at CrowdFlower, I was reminded just how differently people with “Product Manager” titles may have gone about their jobs in the past. With such diverse backgrounds, it’s incredibly important to come to a shared understanding of what constitutes “Good Product Management.”
You cannot maximize the value of that diversity while mitigating its risks to your product if you don’t agree on this definition, whether you’re in software or one of the industries software is currently eating. Tools will not save you: the good ones are all infinitely customizable. People skills are great, but they can still mask a poorly managed product, which risks your team skipping happily off the cliff together, with the cold comfort that they probably won’t blame the PM on the way down.
Despite the excellent work of Marty Cagan’s Silicon Valley Product Group, emerging standards around tools like Aha!, Pivotal Tracker, and Jira, and methodologies like Agile user stories and acceptance criteria, there are precious few best practices that apply across product types or organizational cultures. There is no entry-level career path for PMs to compensate for this. The big guys like Google, Facebook, and Yahoo try with fancy rotational programs, but those are really just for hotshot climbers who would normally join Goldman Sachs but got the memo that investment banks add zero value to the world.
Either way, pedigree and even functional background likely matter less than just the raw talents of being smart but humble, intuitive but data-friendly, and willing to put in the work to understand the ecosystem and gain buy-in on tough choices.
So how do you generate predictably good results from Product Management activities given all that? Let sufficiently talented people do what has proven to work for them, but ensure they can always answer five questions:
- What is the goal of this project or feature?
- What hypothesis would have to be true for us to meet that goal?
- Can we prove it works?
- What now?
- Does it matter?
What is the goal?
If you can’t clearly state the goal, team members like designers, engineers, or customer-facing folks can’t be expected to solve problems on their own or reach optimal solutions in their areas of expertise, and the product you build is guaranteed not to be the one you needed to build. As a simple example, this is the difference between “The home page is ugly and needs to be upgraded” and “We want people to stop bouncing off our home page and engage with the rest of our site.” Disagreement over the goal statement is a sign there hasn’t been enough requirements-gathering or discovery; the PM just needs to do more research and customer engagement to get there, and bring the team along for the ride if need be.
What hypothesis would have to be true?
This statement should explain why we have designed and scoped the feature a certain way (and NOT other ways). It describes the test of whether you have given users what they want in service of the goal. When done well (e.g., “Users who see several product photos will abandon the page at a lower rate than others”), this statement also:
- Surfaces assumptions to be questioned, or agreed upon as good risks to take
- Limits the scope of stakeholder input to what will most testably address the goal
- Stops irrelevant pushback/scope creep from tangential ideas
If you are the PM, stating and then proving or disproving a hypothesis achieves two selfish aims as well: it makes you more responsible for any eventual success, and it keeps everyone happy with the process even when a project fails to meet its goal.
Can we prove it works?
This is the single datapoint that captures success, and it can be qualitative or quantitative. Sometimes hard data just isn’t realistically available when you need it, but having no proof point means no way of declaring victory, moving on from failure, or learning what worked. So make sure Product picks the approach that makes the most sense for the feature: draft the dream press release, describe the number that changes, capture the survey outcome, write the quote you would want to see from user testers or social media, whatever. Whatever it is, if the datapoint is SMART (Specific, Measurable, Achievable, Relevant, Time-bound), it’s good enough.
What now?
What will you do with this datapoint once you get it? Just flip to the other campaign? Do research on users or other metrics to understand the true impact? Yank the product? The range of responses tells you how much of the project budget should go into specifically tracking the performance data; if existing data will work as a close-enough proxy, that’s more time you can spend elsewhere. Most importantly, answering this question upfront helps manage expectations (like how much time might be needed to monitor and improve after the initial launch) and keeps nerves calm with ready contingency plans for when things go wrong.
Does it matter?
Why do something that you can’t tie back to one of a very few key business metrics in a clear and logical (even if indirect) way? Only a handful of metrics can matter: incremental orders, new customers, profit. Maybe staff retention, internal rate of return, or a social-good outcome in unique cases, but never revenue without cost, or pageviews without lifetime value. Comparing and connecting these things, even when they’re differently denominated, is how you know the investment level is appropriate and how tradeoffs can be captured. For instance, lower funnel conversion is almost always OK if there’s more total profit at the bottom of that funnel.
These questions have obviously broader utility than just software product management. But when you’re trying to build something useful that didn’t exist before, in an industry only about one generation old, addressing fundamental uncertainty with basic questions is crucial to avoid getting trapped in an echo chamber. Whether that echo comes from Silicon Valley or merely whoever is loudest on your team, answering these five questions demonstrably cuts through the noise and offers a path to delivering value as effectively as possible.