Fools and liars

My ears are burning (metaphorically). I’ve just read Bent Flyvbjerg’s paper on quality control and due diligence in project management. His theme is the inaccuracy of forecast project costs and benefits: specifically, the tendency to underestimate costs and overestimate benefits, attributed to the ‘planning fallacy’, a creation of our old friends Tversky, Kahneman and co. The source of the fallacy is the ‘inside view’, focussing on the constituents of the planned action. Its cure is the ‘outside view’, looking at how similar actions have panned out in the past.

On the basis that detailed risk analysis creates the (fallacious) inside view and inspecting the entrails of past projects provides the (remedial) outside view, Flyvbjerg’s research provides an interesting critique of my life’s work. What can I learn? This is an important question, given that Flyvbjerg ends with some unusually harsh words for forecasters, calling them fools and liars and inviting their clients to fire them, ask for their money back and even sue them. What’s more, professional associations should disbar them. It ought to concentrate risk consultants’ minds a bit. For the first time I’m glad I fork out on that hitherto useless professional indemnity (PI) policy.

It’s worth first separating costs and benefits. Much of Flyvbjerg’s discussion focusses on benefits. His case study of a new railway chronicles a horribly overestimated revenue forecast by a consultant who could not demonstrate a track record of accurate forecasting. The investors had the sense to withdraw, unlike those in other such ventures who lost their shirts. In spite of his later bluster, Flyvbjerg rather wetly refuses to name the consultant.

Now revenue forecasting is legendarily uncertain. Any investor who bets his farm on one is a turkey in Taleb-speak. The great man makes a late appearance in the paper with his trenchant views on the ethics of forecasting. Flyvbjerg recommends him as a suitable teacher for forecasting fools; I’d be more interested in hearing Taleb’s views on pseudo-academic (‘scholarly’) rehearsals of the intensely practical. Certainly investment turkeys are better off going straight to Taleb’s advice on how to be antifragile (review coming shortly) than trying to sue slippery consultants. Sticking your pension fund into an infrastructure project strikes me as a very bad idea, though one we’re set to hear more of.

Cost forecasting is different in the extent to which it lies within the control of the sponsor and project manager. Furthermore, you can just about sketch out a picture in which the baseline forecast assumes everything goes OK, and risks then happen which mess it up. So you can see that, give or take a bit of polite value engineering, there is some form of reasonably defined benchmark which could form the starting point for an inside view. This is less well-defined, I think, for revenue estimates.
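To make that picture concrete, here is a minimal sketch (my own illustration in Python, with invented numbers, not anything from Flyvbjerg’s paper): an everything-goes-OK baseline plus a set of risk events which may or may not fire.

```python
import random

def simulate_costs(baseline, risks, n_trials=10_000, seed=1):
    """Outturn cost = baseline plus whichever risk events fire.

    `risks` is a list of (probability, impact) pairs -- invented
    numbers for illustration, not data from any real project.
    """
    rng = random.Random(seed)
    outturns = []
    for _ in range(n_trials):
        cost = baseline
        for p, impact in risks:
            if rng.random() < p:   # does this risk happen on this trial?
                cost += impact
        outturns.append(cost)
    return sorted(outturns)

# Hypothetical project: baseline of 100, three independent risk events.
costs = simulate_costs(100.0, [(0.3, 10.0), (0.2, 25.0), (0.1, 40.0)])
print(f"P50 = {costs[5_000]:.0f}, P90 = {costs[9_000]:.0f}")
```

The point is only that the benchmark is well defined: strip out the risk terms and you are left with the everything-goes-well baseline.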

So the first question is whether an estimate which consists of a fully risked baseline constitutes an inside view from the Flyvbjerg viewpoint. I would argue that no cost estimate which did not recognise risk could be regarded as acceptable. My answer: on the one hand ‘yes’ and on the other hand ‘no’.

On the one hand, you have to recognise that risk analysis is a fertile stamping ground for fools and liars. You can compile a giant risk register, forget about dependence, throw in a load of made-up numbers (after all, all probabilities are subjective, as I am the first to remind all who will listen) and dress the result up in a load of pseudo-scientific claptrap. And most analysts who do this will be under pressure to get the right answer: not to include risks which reflect badly on the organisation and, specifically, not to end up with too broad a range. Realistically broad ranges obviously hobble decision making (at least with decision processes aimed at optimisation).
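Forgetting dependence is worth dwelling on, because it is exactly what flatters the range. Here is a small experiment of my own (illustrative numbers again): ten look-alike risks aggregated once as independent and once driven by a single shared factor, an extreme stand-in for the dependence a lazy risk register ignores.

```python
import random

def one_trial(baseline, risks, shared_factor, rng):
    # If shared_factor is True, one uniform draw drives every risk, so
    # they tend to fire together; otherwise each risk fires independently.
    u_common = rng.random()
    cost = baseline
    for p, impact in risks:
        u = u_common if shared_factor else rng.random()
        if u < p:
            cost += impact
    return cost

rng = random.Random(42)
risks = [(0.25, 8.0)] * 10   # ten identical, made-up risks
for label, dep in (("independent", False), ("fully dependent", True)):
    sims = sorted(one_trial(100.0, risks, dep, rng) for _ in range(20_000))
    print(f"{label:>15s}: P10 = {sims[2_000]:.0f}, P90 = {sims[18_000]:.0f}")
```

The independent run gives a comfortingly narrow band; the dependent run exposes the fat tail the register was hiding. Same risks, same probabilities, very different picture.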

This description fits well with the idea of an inside view created by the project sponsors, one which does not take objective account of all the available data.

On the other hand, risk analysis gives you the opportunity to go beyond the inevitably rather coarse data you have at the project level – data which raises the obvious questions about its relevance to your specific project. You can make suitable adjustments, eliminate irrelevant risks and identify new ones, and use a broader range of information, which may be fairly objectively based on previous experience or may be a more subjective amalgam of expert opinion (if you can buy into the idea that this can be useful).

So we can harbour the aspiration that risk analysis will take us to some kind of outside view greatly superior to that which you can get from a few vaguely relevant previous projects.  But how?  This is a good question, and I don’t think the answer has been comprehensively written down anywhere.

For example, the UK Government has for 10 years now been insisting on an approach to project appraisal (ie looking at costs and benefits) in which the baseline cost of the project is inflated by some factor which is supposed to reflect an outside view. The factor, which for unenlightening historical reasons is called optimism bias, was derived from a rather old and limited sample of projects. Not a bad idea, given that the Government also recommended that the uplift be replaced, or rather refined, by a more thorough consideration of the project-specific risks and uncertainties at the earliest opportunity. That recommendation got forgotten, with the result that projects blundered on with their coarse optimism bias uplifts, potentially right through to execution. Arguably one of the reasons the update was forgotten is that it is actually quite hard to prescribe how to do it, other than to just say ‘replace the uplift with a risk assessment’.
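The mechanics are trivial, which is rather the point. A sketch (the 40% uplift is invented for illustration, not a figure from the published guidance):

```python
# Optimism bias in miniature: inflate the everything-goes-well baseline
# by a prescribed factor. Figures invented for illustration.
baseline_cost = 100.0         # everything-goes-well estimate
optimism_bias_uplift = 0.40   # prescribed uplift for this project type

appraisal_cost = baseline_cost * (1 + optimism_bias_uplift)
print(f"Appraisal cost with uplift: {appraisal_cost:.0f}")   # 140

# The forgotten step: as a project-specific risk analysis matures, the
# crude uplift was supposed to give way to a properly risked estimate.
risked_estimate = 127.0       # hypothetical output of that analysis
print(f"Risked estimate replacing it: {risked_estimate:.0f}")
```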

Coming back to Flyvbjerg, his remedy is a bit of quality control and due diligence by asking some fairly obvious questions and using the outside view as a benchmark.  This does not sound terribly original, but the extent of the incompetence he asserts in his case study certainly gives pause for thought.

It also leads to a rather immediate issue. The obvious criticism of optimism bias is that it doesn’t take any account of the way the baseline has been constructed. If risk has already been comprehensively recognised, you should not need to apply the same prescribed uplift as where the baseline represents a nice, cosy, optimistic inside view. You could maybe justify applying a fixed uplift to a properly defined baseline cost estimate which represents everything going well, as I described earlier. And as I also said, this is more problematic with revenue estimates.

But without quite saying it, I think Flyvbjerg is recommending that you apply the average ratio of outturn to forecast in your relevant sample to your specific project, whether this is cost or revenue or some other feature. Certainly this is what he does in the case study. And this is a really bad idea, not to say incoherent, if you do not recognise what has gone into the relevant forecasts. This is where a proper analysis of risks and data is essential.
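Without putting words in his mouth, the arithmetic I read into the case study looks something like this (reference class entirely invented):

```python
# Reference class sketch: scale a forecast by the average ratio of
# outturn to forecast across comparable past projects. All figures
# invented for illustration.
reference_class = [
    (95.0, 120.0),    # (forecast, outturn) pairs
    (200.0, 310.0),
    (50.0, 80.0),
    (130.0, 150.0),
]

ratios = [outturn / forecast for forecast, outturn in reference_class]
uplift = sum(ratios) / len(ratios)

my_forecast = 100.0
print(f"Average outturn/forecast ratio = {uplift:.2f}")
print(f"Adjusted forecast = {my_forecast * uplift:.1f}")

# The incoherence: this only makes sense if my_forecast was built the
# same way as the reference forecasts were. Apply it to an already
# fully risked estimate and you double-count the risk.
```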

What about shooting the forecaster?  Well, why not, if you can prove they were lying?  Of course they may have been lying under orders (the defence known as commercial coercion!) in which case it’s not right for the forecaster alone to take the bullet.

Flyvbjerg notes an interesting case (the Clem Jones tunnel in Brisbane) in which a forecaster is being sued by some unfortunate pensioners.  I think the basis of the case is that they were lying: supposedly the forecaster, AECOM, had provided a forecast for someone else with half the revenues.

Fools are more difficult. There may be clear negligence, but even in cases where the forecasts are well documented there are likely to be unsupported ‘judgements’ which could not easily (or at all) be avoided. I think we have some way to go before we could set up a set of professional standards for risk and forecasting which is both sensible and auditable. It worries me that there are unrealistic expectations: you are forever meeting people who think that with a bit more work you can get an ‘accurate’ risk estimate. That’s just an infantile chimera.

So actually the main responsibility lies elsewhere.  (I would say this, of course.)  Any risk analysis, any forecast should list its main assumptions.  I shall get around another time to doing a post on how risk analyses should be regarded as painting risk by numbers: if you assume this the numbers are this; but if you assume something else they are that.  It is up to the consumers of the analysis to review the assumptions and understand their import.  It drives me nuts when they ignore the assumptions and just take the answers as the scientific truth.  They are the real fools who should be held to account when things go wrong.  Did you read that assumption?  Did you think it was right?  Well then!  Bang!!
