As if it is not difficult enough to plan and estimate a single feature, sometimes we are required (or imagine that we are required) to plan and estimate a release or a version that includes many features, dependencies, system tests, and integration between different modules/components (such as hardware and software).

 

The agile movement embraced the fact that it is not worthwhile to invest a lot of time and effort in effort estimation, and developed (or rather, learned from other domains) several techniques for estimating features (or user stories, as we sometimes call them).
Techniques such as planning poker, bucket sorting, or even (god forbid) #NoEstimates.

Once we have relative estimates, we can measure progress using velocity and the like. In case you don't know: velocity = the total effort completed in a specific iteration. So we have some ways of estimating a single feature, and we know how to measure progress, but many people ask me: how can I plan a release filled with a hundred features or more? The easy answer is to estimate each feature planned for the version, divide the total effort by the average velocity, and see how many iterations are required to complete the version content.
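
As a quick illustration, here is that arithmetic in Python. The estimates and velocity numbers are invented for the example, not taken from any real project:

```python
import math

# Hypothetical inputs: per-feature estimates and recent iteration velocities.
feature_estimates = [5, 8, 3, 13, 8, 5, 2, 8]  # story points per feature
past_velocities = [21, 18, 24]                  # points completed per iteration

total_effort = sum(feature_estimates)
average_velocity = sum(past_velocities) / len(past_velocities)

# Round up: a partially filled iteration still has to be scheduled.
iterations = math.ceil(total_effort / average_velocity)
print(f"{total_effort} points at ~{average_velocity:.1f} points/iteration "
      f"-> {iterations} iterations")
```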

While this is simple, it is also very time consuming and wasteful. In order to do that, we need to estimate each and every item in the version, which will probably take a lot of time (time that could be spent developing more valuable software) and goes against the basic ideas of maximizing value, prioritizing, and limiting work in progress. Additionally, there is a good chance that at least some of the features will never be developed, so there is no reason to invest time in estimating them.

I prefer a different approach.

It is a very simple approach that combines relative estimation with historical data:

  1. Prepare the features - Review the features in the version and break them down into reasonably sized items. This should be a doable task for experienced product owners, and it is something you would do anyway if you wanted to estimate the entire version.
  2. Calculate the average estimate of all features that are already estimated (complete or not) – you will normally get something like 8.43 or 13.4 – this is good.
  3. Assign the average estimate to all unestimated stories and continue as you would if you had actually estimated them.
  4. As the backlog is refined and more stories are estimated, make the necessary corrections to the data (see the sketch below).
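
Here is a minimal sketch of steps 2–4 in Python. The backlog, story names, and point values are all invented for illustration; in practice your tracking tool holds the real data.

```python
from statistics import mean

# Illustrative backlog: None marks a story the teams have not estimated yet.
backlog = {
    "login feature": 8,
    "report export": 13,
    "audit trail": 5,
    "search filters": None,
    "bulk import": None,
}

# Step 2: average everything that is already estimated (complete or not).
estimated = [pts for pts in backlog.values() if pts is not None]
placeholder = round(mean(estimated), 2)  # e.g. 8.67 -- a usefully "odd" value

# Step 3: assign the placeholder to every unestimated story.
filled = {name: (pts if pts is not None else placeholder)
          for name, pts in backlog.items()}

# Step 4 amounts to rerunning this whenever real estimates replace placeholders.
total_effort = sum(filled.values())
print(f"placeholder = {placeholder}, projected version total = {total_effort}")
```

Feed that projected total into the same velocity division as before; as the teams estimate more stories, the placeholder average shifts and the projection corrects itself.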

Why is this better?

It is simple.
It requires very little effort.
It is based on empirical historical data, and it is self-correcting.

It might even lead you to see that you can stop estimating altogether. It might not.

Another cool “feature” of this approach: whatever tool you use to manage your features, when you input the “fake” estimate, chances are it will be a distinctive value (5.43) that makes it very easy to see whether a feature's effort estimate is based on our “fake” average or was actually estimated by the teams.
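
If you want to make that check explicit, a filter over the (hypothetical) backlog data is enough:

```python
# Stories whose estimate still equals the placeholder were never estimated
# by the teams; the values here are made up to match the earlier sketch.
PLACEHOLDER = 8.67
backlog = {"login feature": 8, "search filters": 8.67, "bulk import": 8.67}

still_fake = [name for name, pts in backlog.items() if pts == PLACEHOLDER]
print(still_fake)  # -> ['search filters', 'bulk import']
```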

 

Would love to know what you think.

*Want to know more about managing complex projects? Check out Johanna's Agile Program Management workshop.