ICE Scoring – What does impact mean?
February 18, 2019
When we talk about ICE scoring, we get a lot of questions about each of the factors in the model and what, specifically, each part should include. The ICE scoring model multiplies Impact x Confidence x Ease, with each factor rated on a 1 through 10 scale, so the highest and most important feature would get a score of 1,000 and a feature that should never be worked on would get a 1. It is a powerful way for product managers to size up their backlog quickly, but the model is just a model, and it requires some discussion within the product team before adopting it.
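The arithmetic above can be sketched in a few lines of Python. The feature names and ratings below are made up for illustration, and the helper name `ice_score` is our own; the only thing taken from the model itself is the 1 through 10 scale and the Impact x Confidence x Ease product:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Return the ICE score: Impact x Confidence x Ease, each rated 1-10."""
    for factor in (impact, confidence, ease):
        if not 1 <= factor <= 10:
            raise ValueError("each ICE factor must be between 1 and 10")
    return impact * confidence * ease

# A hypothetical backlog: (feature, impact, confidence, ease).
backlog = [
    ("Self-serve signup", 9, 8, 6),
    ("Dark mode", 3, 9, 7),
    ("Billing revamp", 8, 5, 2),
]

# Rank the backlog from highest to lowest ICE score.
ranked = sorted(backlog, key=lambda f: ice_score(*f[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{name}: {ice_score(i, c, e)}")
```

With these made-up numbers, "Self-serve signup" (9 x 8 x 6 = 432) lands at the top of the list, which is exactly the quick backlog-sizing the model is meant to give you.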
The first part of the ICE scoring model is the Impact component of your feature. Impact can mean many different things to different product teams. Factors such as the size of your business (SMB vs. Enterprise vs. Startup), the drivers of your business, and the key metrics or objectives you are tracking as a business all play a role in defining the impact of a product feature and how you should score it.
If you are considering adopting the ICE scoring model, take time with the concept of impact and lay out, as a team, a breakdown of how you will rate it on the 1 through 10 scale. Here is an example of how one company might break down impact:
ICE Scoring Impact: Definition – will the feature result in increased revenue?
- 1: No impact to revenue
- 2 – 5: Some/minimal impact to revenue
- 6 – 8: Will make an impact to revenue
- 9 – 10: Significant increase to revenue
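A rubric like the one above can be captured as a simple lookup so that every score maps back to the team's agreed-upon meaning. This is only a sketch: the function name `describe_impact` is hypothetical, and the bands come straight from the example breakdown above:

```python
def describe_impact(rating: int) -> str:
    """Map a 1-10 impact rating to the team's agreed revenue definition."""
    if not 1 <= rating <= 10:
        raise ValueError("impact is rated on a 1 through 10 scale")
    if rating == 1:
        return "No impact to revenue"
    if rating <= 5:
        return "Some/minimal impact to revenue"
    if rating <= 8:
        return "Will make an impact to revenue"
    return "Significant increase to revenue"
```

Writing the rubric down this explicitly (in code or simply in a shared document) is what keeps two people from reading the same "7" two different ways.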
Again, this assumes that your product roadmap is set around revenue as the goal you are trying to accomplish, and that a feature's impact is therefore rated on its ability to help you get there. If the current driver of the business is not revenue, choose a different factor or definition to rate the impact variable against.
It is important to get buy-in from your team on the definition of impact and the scoring scale, so that when a feature is scored there is background and supporting data to explain the score it received. The last thing you want is a backlog where every item is scored 1,000 or every item is scored 1. If you are seeing this, chances are you are not on the same page about the usage of the variables, or you need to get more creative with your backlog!