Project Estimation Methods

by Josh

Project estimation can be a headache for new project managers. There are lots of opinions out there and not much that weighs the various methods against each other; rather, you find people who feel strongly about one approach or another. To give a quick overview of some ideas and then call for your input, I recorded this video. Please leave a comment and share your thoughts on this topic!


drpauldgiammalvo July 14, 2010 at 1:08 pm

Hi Josh,
I don't know… We bid on hard-money contracts all the time in a highly competitive market with single-digit profit margins, and we use three-point estimates in preparing our bids. Most of our projects are in the $500,000 to $1.5 million USD range, so maybe that makes a difference, but I don't think so.

One of the reasons I think our system works is that our data is based on Activity Based Costing, meaning we have considerable granularity and level of detail. So even though we roll up the costs to, say, level 3 or 4 of the WBS for bidding purposes, we are comfortable that the three-point estimate distributions are not outrageously broad. (Very rarely does the variance exceed a range of +/-3 sigma, and it is usually close to +/-2 sigma.) But we also have many years of data, which also helps narrow the bands.

Bottom line: I like using three-point estimates and recommend others consider them, provided the granularity of detail is high and there are sufficient data points to keep the variance at not more than +/-3 sigma.
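[For readers new to the technique, a three-point estimate is commonly reduced to a mean and standard deviation with the beta-PERT approximation. A minimal sketch with invented numbers, not Dr. PDG's actual bid data:]

```python
def three_point(optimistic, most_likely, pessimistic):
    """Reduce a three-point estimate to (mean, sigma) via the beta-PERT approximation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    sigma = (pessimistic - optimistic) / 6  # the O..P range spans roughly 6 sigma
    return mean, sigma

# Hypothetical WBS element, costs in USD
mean, sigma = three_point(40_000, 55_000, 90_000)
print(f"{mean:,.0f} +/- {sigma:,.0f}")
```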

Dr. PDG, Jakarta


galleman July 14, 2010 at 1:28 pm


Maybe I'm misunderstanding your post. Are you talking about capturing samples, or the variance in the values of those samples and the confidence of the samples making up the estimate?
When we capture samples for estimates of cost or duration, we need the variance of those samples to be within 1 STD of the mean to have any confidence in the numbers.

+/-3 sigma (3 standard deviations) contains 99.7% of all possible values of the underlying sample population. As for "keep the variance at not more than +/-3 sigma" — you can't have much more than 3 sigma. OK, 6 sigma gets you eight 9's of the population: 0.999999998027.

Why would you have "hard dollar" estimates with variances this wide? That would be unheard of in any domain I work in. We need cost and schedule confidence intervals at the 80% confidence level, with the variance being no more than -10% and +25% from the mean. That means all samples are within one STD, the skew is right-leaning, and the 2nd- and 3rd-order cumulants are essentially flat.

3-sigma distributions are essentially wild-ass guesses; they are NOT narrow bands.

As well, more data may or may not "narrow the band" (have smaller variances); the narrowing depends on the underlying statistics of the processes generating the variance. More samples simply mean higher confidence in the variance. The variance comes from the sample space.
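[The coverage figures quoted here follow directly from the normal error function, P(|X − μ| ≤ kσ) = erf(k/√2). A quick check:]

```python
import math

def coverage(k):
    """Fraction of a normal population within +/- k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3, 6):
    print(f"+/-{k} sigma covers {coverage(k):.12f} of the population")
```

The k = 3 and k = 6 lines reproduce the 99.7% and 0.999999998027 figures in the comment.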


derekhuether July 14, 2010 at 4:17 pm

I wanted to take a moment to add my two cents. Though I certainly believe estimating should be more science than art, I look at estimates from a different perspective. As a disclosure, I'm not the one doing the estimating, therefore I'm not going to say I agree or disagree with any one of the above-mentioned techniques.

What I would like to add, from my perspective, is the need for expert judgment. If you are an expert in a given estimating technique and it gives you the results you and your customer(s) need, does that not validate it as an acceptable estimating choice?

If the estimating technique does not produce the desired results, wouldn't it fail the metaphorical sniff test?

Just a thought.

Best Regards,
Derek Huether


galleman July 14, 2010 at 9:33 pm


The estimating techniques are likely domain sensitive. But there is also a fundamental set of principles in estimating for each domain. In the software domain, these are found in the works of Galorath, Putnam, and others; in the aerospace and defense world, Steven Book and The Aerospace Corporation; in heavy construction there is AACE cost estimating guidance, just as there is cost estimating (and some schedule) guidance at the GAO level.

In some domains, expert judgement is allowed or maybe even preferred; in other domains, estimating is model based and experts must come after models. Much of the work in software estimation shows that expert judgement is seriously flawed when compared to models.

I've loaded the SW estimating background on for Josh, you're welcome to it, if you'll accept the LinkedIn invitation I just sent.


Josh Nankivel July 15, 2010 at 12:45 am

Glen directed me to this paper by Jorge Aranda which I am going through now. Thanks!


Josh Nankivel July 15, 2010 at 1:58 am

Glen, I read the Aranda thesis. Was that the paper you mentioned that drove you away from 3-point estimates? Perhaps I missed something, but the problems explored with anchoring of estimates seem (to me) unrelated to the issue of 3-point estimation. Did you mean that the likely estimate tends to anchor the optimistic and pessimistic estimates?

It would seem to me that the biggest concern would be to ensure the likely estimate does not get anchored by an external expectation.


Glen July 15, 2010 at 2:12 am

It's a combination of papers, all based on the Tversky and Kahneman materials, then Edmund Conrow's book and his mentoring of our programmatic risk effort on the Crew Exploration (now Orion), and then finally the attendance at an American Petroleum Institute meeting here in Denver, where “estimating” was a track.

Then the rest of the Tversky and Kahneman work on biases in upper and lower estimates from some Aerospace Corporation materials (not publicly available), and finally a revisit with Ed on Bayesian networks and their connections with forecasting compared to "expert opinion."

The core connection is that when we have 3-point estimates – that is, asking for the Most Likely, Optimistic, and Pessimistic values – we are "anchoring" the estimates based on 1) the order of the questions; 2) the biases of the estimators from past experience; and 3) the disconnects with the underlying statistics of the process that drives the variability of the activities.

This background has led NAVAIR and some sections of NASA (manned spaceflight) to rely on stochastic models rather than capturing upper, ML, lower values from engineers.

Continue with the reading in the BOX.NET file to see how models are more reliable than experts. It was a journey, but now that I've arrived there is no going back.


Travis July 15, 2010 at 2:43 am

There are many ways to arrive at an estimate. Be it a 3-point estimate or a PIDOOMA estimate, the important thing to remember is to capture the basis of estimate. Does anybody have any examples or formats for archiving BOEs? In an IBR, the panel wants to see that your estimate is sound and substantiated. I am interested in seeing some good examples that show the WBS structure, activities, and the estimates in a defensible format.

The proof is in the pudding. Any suggested sites or examples would be appreciated.



Duncan July 15, 2010 at 3:27 am

Josh — clarifications …

1. The article that I posted here is specifically and explicitly limited in scope to developing effort estimates for individuals.

2. I have NEVER said that the people doing the work should NOT prepare the estimate. What I have said is that an estimate is generally required before we have any idea of who will be doing the work. I have also noted that the people doing the work often lack experience with or knowledge of how to develop a good estimate. The traditional orthodoxy that the person doing the work is best qualified to develop an accurate estimate is generally wrong.

3. Glen's approach seems more geared to control than to planning and management since there is no consideration of the potential for underruns.

4. I share Paul's success with 3-point estimating. I have a number of clients who are using the approach and who have gone from consistent overruns to consistent on-budget performance.



Josh Nankivel July 15, 2010 at 12:09 pm

Thanks Duncan. I didn't mean to misrepresent you in # 2. I see now that what you are saying is that the most experienced/knowledgeable expert available should be participating in estimation; not necessarily the individuals who will be doing the work (but in some cases, the two are the same).


galleman July 15, 2010 at 12:20 pm

The approach we use is in place from the proposal through project close out. It is not geared to control; it is the source for our Basis of Estimate, the continuous calibration of the ordinal range values, and the continuous statistical forecast of EAC.

The 3 point approach suggested by Paul and yourself is certainly “usable” as long as you are aware of the unfavorable outcomes, which can be large in many cases where “uncalibrated” statistical distributions are present.

And what was the source of those consistent overruns prior to applying the method you describe?

And what further improvement might have been made in the confidence levels using calibrated ordinal ranges? There is of course no way to tell, since the anecdotal descriptions, while reflecting success, have no underlying assessment in the way the Tversky and Navy Research Labs materials do.

But 3-point estimates are an improvement over single-point estimates.


galleman July 15, 2010 at 5:16 pm


We use Pro-Pricer for BOEs. Similar tools are around.


galleman July 16, 2010 at 2:04 am


Just noticed the comment on under-runs. The technique we use considers under-runs in the same way as over-runs.


galleman July 16, 2010 at 12:14 pm

Sorry for dribbling out responses.

What domain does the example live in – where the estimate is prepared before "we have any idea of who will be doing the work"?

In your example, who prepares the estimate, and do they know "who" will possibly be doing the work?

In our government domain, the Control Account Manager (CAM) is accountable for the Basis of Estimate on execution. During proposal development, the estimate is prepared in a variety of ways, but the most successful is to derive it from the Integrated Master Schedule, which is developed by the CAM as well.


Duncan July 17, 2010 at 12:44 pm

Glen — in my experience, estimates are often prepared at different points in the project and for different reasons. For example, one of my clients is an IT organization, and they prepare what they call a “budgetary estimate” with two main purposes. First is as an input to the business case: based on expected revenue and/or savings, does it make sense to do this project? Second is as an input to project portfolio management: do they have the resources to support this project in the year ahead?

When this estimate is prepared, they have absolutely zero idea who will eventually be assigned to the project. The business analyst who prepares the estimate is often assigned to the project, but not always. The project manager, systems analyst, programmers, and other project staff all remain far in the future. They usually break the project down into 6-8 chunks at this point, use 3 point estimates for each chunk, then use the method of moments to compute an expected value for the project as well as a standard deviation. They realize that the math is an approximation, but it serves the above two purposes.
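[The chunk-level roll-up via the method of moments can be sketched as follows. The chunk values are invented, not the client's figures, and the budget line assumes a normal approximation of the total (with z ≈ 0.8416 for the 80th percentile):]

```python
import math

def pert_moments(optimistic, most_likely, pessimistic):
    """Beta-PERT mean and sigma for one chunk."""
    return (optimistic + 4 * most_likely + pessimistic) / 6, (pessimistic - optimistic) / 6

# Hypothetical 6-chunk breakdown, effort in person-days
chunks = [(10, 15, 30), (20, 25, 45), (5, 8, 15),
          (30, 40, 80), (12, 14, 20), (8, 10, 18)]

means, sigmas = zip(*(pert_moments(*c) for c in chunks))
expected = sum(means)
sigma = math.sqrt(sum(s * s for s in sigmas))  # valid only if chunks are independent
print(f"expected {expected:.1f} pd, sigma {sigma:.1f} pd")
print(f"80th-percentile budget: {expected + 0.8416 * sigma:.1f} pd")
```

The variance-summing step is where the independence assumption (debated later in this thread) enters.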

Once the project is scheduled, a PM is assigned, usually along with a team to develop requirements. The team that is working on the requirements may or may not estimate their own work based on their level of knowledge and expertise. They will usually at least help with the preparation of 3-point estimates for their assignments. Sometimes the PM has to submit a budget before assignments are final. In this case, the PM develops 3-point estimates, usually with a wider margin of error, although some PMs prefer to estimate for an “average” resource and then budget a contingency to address skill or knowledge shortfalls.

The requirements team is generally expected to submit an estimate for the design and development phases. Again, they use 3-point estimates with either a broader range or a contingency allowance. They often get assistance from someone in development, but that person may or may not be the person who will actually do the work.

Once a full breakdown is available, the staff members assigned are typically asked to estimate both effort and schedule. Effort estimates are 3-point estimates and are checked against the estimate used to prepare the budget. Significant variances are discussed and corrective action taken where needed. Sometimes the corrective action involves increasing the budget for that activity, sometimes it involves redistributing the work, sometimes it just involves clarifying the scope and work expected.

Schedule “estimates” are actually time-boxed commitments based on the team member's availability: many of their technical staff are assigned to multiple projects.

Did I answer your question?

William R. Duncan, Project Management Partners
Director of Certification for asapm
Primary author of the first (1996) version of “A Guide to the Project Management Body of Knowledge”


Duncan July 17, 2010 at 12:45 pm

Okay. I was just going by the table on your website that only showed overruns.


galleman July 17, 2010 at 2:11 pm


I apologize for not listening to the video.
Here's a fundamental problem with the example in the GAO Cost book. I've talked to two of the authors and they're working to correct it.

The example of adding the probability distributions to produce the "normal" distribution – the tend-to-the-mean process. This ONLY works if all the distributions to the left of the = sign are INDEPENDENT of each other and there are a large number of them. You can calculate how many independent distributions you need to "tend to the mean" from the types, correlations, and other attributes of those distributions.
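[The tend-to-the-mean behavior described here is the Central Limit Theorem, and it is easy to demonstrate by simulation. A toy sketch (not the GAO book's example) summing 30 independent, right-skewed activity distributions:]

```python
import random
import statistics

random.seed(42)
N, TRIALS = 30, 20_000  # 30 independent, right-skewed activity distributions

# Each trial sums one draw from each of the N distributions. With
# independent draws the totals pile up into a near-normal distribution;
# perfectly correlated draws would stay as skewed as the individual ones.
totals = [sum(random.triangular(5, 25, 8) for _ in range(N)) for _ in range(TRIALS)]

print(f"total: mean {statistics.mean(totals):.1f}, stdev {statistics.stdev(totals):.1f}")
```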

But that figure should be seen as “notional” and not representative of real underlying statistical processes.


galleman July 17, 2010 at 2:23 pm


The method you describe is common usage.

Maybe accounting for the impacts of Anchoring and Adjustment would improve the outcome of your approach.

In some areas of DoD – NAVAIR and some NASA centers – they have learned, along with the Oil & Gas reserve forecasting folks, that that approach is seriously flawed.

The work on Anchoring and Adjustment is now applied in many areas outside project management. Possibly it is time to look beyond it for better – higher-confidence – approaches to making estimates in the presence of uncertainty.


Duncan July 26, 2010 at 7:17 pm

I don't think there is any anchoring going on since the PM asks for an estimate without offering any form of guidance. And there is an extensive feedback loop, so people learn if they are consistently high or low.

But most of all, the process works.


Duncan July 26, 2010 at 7:22 pm

Glen — yes and no.

Absolutely “yes” to the need for independent distributions for the method of moments to work. “No” to the concerns for large numbers. Any basic statistics text will tell you that 25 is almost always large enough, so unless you have fewer than 25 activities on your project, you should be okay.

As to the underlying statistical processes … the probability density functions themselves are notional. Statistical rigor with hypothetical distributions that can never be verified or validated is kinda like debating whether 1/3 is 0.33 or 0.333333333333.


galleman July 26, 2010 at 7:56 pm

That basic statistics class will also tell you that 25 works ONLY if the samples are independently distributed. I know of no project or program we work where the work package durations are independent. I would like to hear if you have different experience with the coupling and correlation between work packages on any non-trivial project. Our programs typically run 500 to 2,500 active work packages on any single rolling wave. The projects you work may be different, so I'd be interested in those as well.

Probability density functions are NOT notional; you've painted a red herring here. An 80% confidence of completing on or before a date is just fine.

Your conjecture that the distributions cannot be validated completely ignores the past performance databases mandated for DoD, DOE, and large construction estimating shops. AACE, the NASA CEH (…/263676main_2008-NASA-Cost-Hand…), the GAO, and SCEA tell you how to set one up, use it for forecasting, and maintain it for future improvements in cost management, unplanned EAC increases, and cost credibility during proposal activities.


galleman July 27, 2010 at 4:39 pm

I reread your post again. It's not the number of tasks that needs to be larger than 25; it's the distributions that need to be independent.

This reminds me of Deming's quote “In God we trust, all others bring data.”

Could you provide a reference (book and page) from which you would derive a number like 25? That number (whatever it is) isn't 25 unless you've specified the Z-test sample size, with confidence and error.
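[The dependence of the required sample size on stated confidence and tolerable error can be made concrete with the standard formula n = (z·σ/E)². The numbers below are illustrative, showing that 25 only falls out for one particular choice of z, σ, and E:]

```python
import math

def sample_size(z, sigma, error):
    """Minimum n so the sample mean is within +/-error at the confidence implied by z."""
    return math.ceil((z * sigma / error) ** 2)

# 95% confidence (z = 1.96), sigma = 10 units, tolerable error of +/-4 units
print(sample_size(1.96, 10, 4))  # 25 under these choices
# Halve the tolerable error and the required n roughly quadruples
print(sample_size(1.96, 10, 2))  # 97
```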


Mohamed Ahmed Zaki April 18, 2012 at 11:32 am

Other techniques for project estimation include top-down estimates, i.e. analogous estimates, which are based on referring to a similar project's sizing as the base estimate for the project being estimated. Another technique adopted during project planning is bottom-up estimation, i.e. using the WBS to decompose to 4 levels, then rolling up estimates from group activities to work-package level to control account, adding contingency reserves with a buffer of 20-50% as a risk factor on the cost baseline to get the project cost estimate, and then adding a management reserve in the range of 5-10% of the project cost budget.

At times top executives may ask for a rough estimate for a project, where a project manager needs to respond with a rough estimate in a short time. The only techniques that can be used are either to depend on expert judgment gained from previous projects, taking into consideration a risk buffer of 50 to 100%, or to capture quick rough estimates from the project technical team and set a buffer on them.
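[The bottom-up roll-up with contingency and management reserve described above can be sketched as follows. The work-package amounts are invented, and the 20% and 10% factors are single points picked from the ranges the comment mentions:]

```python
# Hypothetical work-package estimates rolled up from a WBS, in USD
work_packages = [50_000, 80_000, 30_000]

cost_baseline = sum(work_packages)
contingency = 0.20 * cost_baseline            # 20-50% risk buffer; 20% chosen here
project_estimate = cost_baseline + contingency
management_reserve = 0.10 * project_estimate  # 5-10% range; 10% chosen here
budget = project_estimate + management_reserve
print(f"baseline {cost_baseline:,}, budget {budget:,.0f}")
```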

