Reach & Impact prioritization
Decision
To prioritize work, we will focus on Reach and Impact towards a specific outcome, mediated by the company goals.
- Clearly state which outcome the team is working toward.
- Define how to measure progress toward this outcome: the overarching metric by which to compare features.
- Seek input from other teams/stakeholders on the set of features that should be considered.
- Write mini-business cases to compare the most promising opportunities. Mini-business cases should include:
  - Reach - how broad will the impact of this feature be?
    - Customer facing: how many customers, how much revenue, how important are the customers, how many new prospects would it attract?
    - Internal: how many campaigns are affected, how many recipients, what percentage of campaigns, invoices, customers, etc.
  - Impact - using the defined metric, how big is the impact of this feature?
  - Confidence & Risks (if relevant)
    - How confident are we that the feature will not have unintended consequences and that the impact will be positive?
    - How can we monitor risk? What metrics should we pay attention to in order to determine whether bad outcomes are happening?
    - How can we mitigate risk? What changes to onboarding or user education are needed?
  - Alternative solutions
- Prioritize features based on a combination of reach and impact.
- Consider risks & confidence level - discuss with the team & stakeholders to identify features with high risk or low confidence that should be postponed or eliminated.
- If there's a clear winner in terms of impact and reach, we're done. :dancer:
- If there's no clear winner, estimate effort with the team and start with the lowest-effort feature.
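As an illustrative sketch (not part of the decision itself), the comparison and tie-breaking steps above could look like the following. The feature names, numbers, and the `tie_margin` threshold are all hypothetical:

```python
# Sketch of the prioritization step: score each candidate feature by
# Reach x Impact toward the team's chosen outcome metric, then fall
# back to lowest effort only when the top scores are close.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float    # e.g. share of campaigns/customers affected (0-1)
    impact: float   # estimated movement of the outcome metric
    effort: float   # per the decision above, only used as a tie-breaker

    @property
    def score(self) -> float:
        return self.reach * self.impact

def prioritize(features, tie_margin=0.1):
    """Sort by Reach x Impact; if the top two scores are within
    `tie_margin` (relative), prefer the lower-effort feature."""
    ranked = sorted(features, key=lambda f: f.score, reverse=True)
    if len(ranked) > 1 and ranked[0].score > 0:
        top, second = ranked[0], ranked[1]
        if (top.score - second.score) / top.score < tie_margin and second.effort < top.effort:
            ranked[0], ranked[1] = second, top
    return ranked

# Hypothetical candidates:
candidates = [
    Feature("bulk invoice export", reach=0.8, impact=2.0, effort=3),
    Feature("campaign templates", reach=0.5, impact=5.0, effort=8),
]
print([f.name for f in prioritize(candidates)])
# -> ['campaign templates', 'bulk invoice export']
```

The tie-break mirrors the last step of the process: effort only enters the picture when impact and reach don't produce a clear winner.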
Problems
The prioritization process isn't unified across teams. This leads to:
- Difficulty aligning each team's objective with the company strategy.
- Difficulty prioritizing features or improvements that could be handled by multiple teams (impact multiple outcomes).
- Issues with communication/collaboration: it's not clear to the rest of the company when/how to contribute or give feedback on the prioritization.
- Teams cannot learn from one another and implement improvements to their processes.
The business value of features or improvements isn't always clear in the current prioritization process.
- Stakeholders don't feel confident that each team is working on high-priority tasks (this could be developers, other teams, management, etc.).
- We want to make data-driven decisions in every team. Without quantitative data, it's difficult to make an argument for or against a given feature. :slight_smile:
Context
Currently, we don't have a unified framework for prioritizing product work at optilyz - the Deliver, Create, Collect, and Price teams all use different prioritization strategies.
- We spend lots of time prioritizing work and aren't consistent across teams.
- We don't have high confidence when it comes to Reach and Impact.
- We don't always write business cases to compare opportunities to one another.
In some teams, we've been taking effort into account when prioritizing. In some instances, this has led us to prioritize 'lower effort' features over higher-impact work.
Options
There are many frameworks for prioritizing work, and we've experimented with a few different methods.
- RICE (Reach, Impact, Confidence, and Effort)
- Measuring the 'total impact' of a feature - impact toward multiple different outcomes
- Focusing on Reach and Impact toward a specific outcome, mediated by the company goals (the process described above)
- Cost of Delay
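For reference when comparing options, the RICE framework (popularized by Intercom) combines its four factors into a single score. The feature numbers below are hypothetical:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE formula: (Reach x Impact x Confidence) / Effort.
    Reach: people/events per period; Impact: per-person effect;
    Confidence: 0-1; Effort: person-months."""
    return reach * impact * confidence / effort

# Hypothetical feature reaching 200 customers per quarter:
print(rice_score(reach=200, impact=2, confidence=0.8, effort=4))  # -> 80.0
```

Note that effort sits in the denominator, which is exactly why RICE is ruled out in the reasoning below: an underestimated effort inflates the score of low-value work.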
Reasoning
Level of detail
The process we describe is intentionally flexible and light on detail. Different teams work toward different outcomes with different (internal or external) stakeholders, so a single metric for calculating Reach or Impact will not work in every case - it must be adapted to fit the team goal, company strategy, and primary stakeholders.
Why not incorporate development effort estimates?
In recent experience, we've underestimated some projects by months (visual check) or years (new client dashboard), causing us to delay higher-impact work. We would therefore argue against using development effort as a primary metric in our prioritization framework. This led us to rule out the RICE framework (where effort estimates are a major factor) and not look further into Cost of Delay, since we assumed accurate time estimates (at least in relative terms) would also play a role in calculating the cost of delay.
Why not prioritize based on total impact?
The company strategy should act as a guiding light in the prioritization process: each team's work should be aligned with the company strategy towards a specific outcome that contributes to that strategy. Thus it doesn't make sense to calculate total impact toward many different outcomes.
Another factor: total impact is incredibly difficult to define and measure, and without a clear scale on which to compare features to one another, it doesn't help us prioritize.
Consequences
How do we teach this change?
Inside the team: Product managers and development teams will need to learn to write and discuss business cases and metrics for quantifying impact.
Company-wide: should we present slides to the company now, or take a few months to experiment with and refine the process and only then present it? External communication is quite important, as one of our goals is to enable other teams to contribute ideas and feedback and to give them transparency into our process.
What could go wrong?
- Maybe the process is too long/complex or slows us down.
- Other stakeholders could object to excluding effort estimates from the prioritization process.
- The process as described could be too vague or difficult to implement.
- Not all features can be measured directly in a quantitative way - we may use proxy metrics that are not accurate enough, leading us to incorrect prioritization.
What do we do if something goes wrong?
In the initial learning phase, we plan to check in frequently to get feedback on impact and reach metrics.
We plan to hold retros each quarter to discuss how things went in the teams and refine the process as needed.
What is still unclear?
- How exactly to incorporate feedback/contributions from the retention and acquisition teams
- How to calculate impact: what kinds of metrics are reliable and useful?