Secretary of Defense Donald Rumsfeld said in a news briefing on February 12, 2002: “There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”
For this feat, Mr. Rumsfeld was widely ridiculed, and in 2003 he received the Plain English Campaign’s ‘Foot in Mouth’ award. I take the statement to be essentially sound, however, though expressed in a slightly awkward and imprecise modelling language, and without full clarity of thought. The real problem with Mr. Rumsfeld, and what he will be remembered for, was his inability to instil the right decision-making processes in the various organizations he led, and the disastrous decision-making blunders he made in matters involving Iraq and Afghanistan.
There is a strong interrelationship between thinking and decision making, and later in this article I will explain the relevance of the above statement and Mr. Rumsfeld’s decision-making processes to today’s topic, which is quantitative models for big bet industrial decision making. On a personal note, I decided some weeks ago to extend my niche advisory firm with a quant practice for big bet industrial decision making. The decision resulted from some Excel-based financial projections for 2015-2020. However, like many other such decisions in the annals of business, the real analysis starts after the decision, when one has to create a credible plan for converting a good idea into serious money, and a strong narrative for communicating the value that my company could generate for my clients. This blog post is part of that analysis.
Let us start with the concept of quantitative methods. In the following, I will use the term to mean quantitative models for decision making that go beyond simple spreadsheet-based models. Typical examples are models based on decision trees, linear programming, system dynamics, probabilistic models, game theory, and real option theory. Of course, almost anything can be implemented in Excel, especially when extended with VBA, but the point is that for a number of reasons these models go significantly beyond what is typically found in the spreadsheets used for decision making in business.
In principle, it should be easy to identify typical applications (e.g., M&A, new product introductions, entry into new territories), typical benefit areas (rationality in the decision process, more accurate valuations, better uncertainty management, and organizational learning), and thus the business case for quantitative methods. The fact is, however, that quantitative methods are not generally used in big bet industrial decision making (with some exceptions, such as pricing complex commodity contracts and deciding whether or not to drill oil wells).
In the following, I will argue that this is due to five myths about quantitative methods in big bet industrial decision making:
- Myth 1: Quantitative methods do not create value beyond DCF-based decision making.
- Myth 2: One cannot quantify the unquantifiable.
- Myth 3: Quantitative methods are easy, and most MBAs use them on a regular basis.
- Myth 4: Quantitative methods are too complex for practical use.
- Myth 5: Quantitative methods = quantitative models = statistics = big data (and we do not have big data).
Regarding Myth 1 (that quantitative methods do not create value beyond DCF-based decision making): The underlying assumption in DCF-based decision making is to calculate a deterministic after-tax cash flow (probably prudency adjusted) on the premise that all decisions are taken before project start, transform it into an NPV using a risk-adjusted discount rate, and accept all project proposals with NPV > 0. This is fine in theory but incorrect in practice, and here is why: i) Technically inaccurate: use of WACC instead of a project-specific cost of capital, no value assigned to optionality or flexibility (e.g., the option to abandon, exit, or expand; always positive and often significantly so), and no explicit modelling of uncertainty. ii) Conceptually incomplete: no modelling of staff morale, market dynamics, or competitive behaviours (after all, strategy without conflict or competition is not strategy, but planning). iii) Organizationally less relevant: not conducive to epistemic learning, and not easy to share with or communicate to others. In contrast, there are established and well-founded quantitative methods for dealing with (i)-(iii).
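To make the optionality point concrete, here is a small sketch (in Python, with all figures invented for illustration): a five-year project with an uncertain annual cash flow, valued with and without the option to abandon after year 1 for a salvage value. The deterministic DCF view corresponds to ignoring the option; with these assumed numbers, recognizing the option flips the decision from reject to accept.

```python
import random

def pv(cash, rate, t):
    """Present value of a single cash flow received in year t."""
    return cash / (1 + rate) ** t

def project_npv(abandon_allowed, trials=50_000, seed=7):
    """Monte Carlo NPV of a toy project; all parameters are illustrative."""
    investment, salvage, rate, years = 100.0, 70.0, 0.10, 5
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        annual = rng.uniform(10.0, 40.0)           # revealed after year 1
        value = -investment + pv(annual, rate, 1)
        continuation = sum(pv(annual, rate, t) for t in range(2, years + 1))
        if abandon_allowed and pv(salvage, rate, 1) > continuation:
            value += pv(salvage, rate, 1)           # exit with salvage value
        else:
            value += continuation                   # keep operating
        total += value
    return total / trials

npv_static = project_npv(abandon_allowed=False)
npv_flex = project_npv(abandon_allowed=True)
print(f"NPV without option:   {npv_static:6.2f}")  # negative: reject
print(f"NPV with option:      {npv_flex:6.2f}")    # positive: accept
print(f"Value of flexibility: {npv_flex - npv_static:6.2f}")
```

Note that the flexibility value is always non-negative: the option is only exercised when doing so beats continuing.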
Regarding Myth 2 (that one cannot quantify the unquantifiable): This is of course true by definition, but in practice most qualitative or textual models can be mapped into a quantitative framework, and most qualitative variables and relationships have quantitative counterparts. Often, by doing so, we end up with a sharper qualitative model or a more precisely defined variable or relationship. And uncertainty is an area where quantitative models excel, in the form of probability theory and statistics. One may indeed wonder why McKinsey, the strategy consulting firm, recently republished an article from 2000 about strategic decision making under uncertainty (see http://www.mckinsey.com/insights/managing_in_uncertainty/strategy_under_uncertainty) without any reference to a quantitative approach to managing strategic uncertainty.
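A minimal sketch of such a mapping: a qualitative judgement (“low” / “medium” / “high” churn risk) is translated into a triangular probability distribution over annual churn rates, which then supports a quantitative statement about expected customer retention. The labels, ranges, and customer figures below are all invented; in practice the ranges would be elicited from experts.

```python
import random

# Qualitative risk labels mapped to (min, most likely, max) annual churn rates.
CHURN_BELIEFS = {
    "low":    (0.02, 0.05, 0.10),
    "medium": (0.05, 0.12, 0.20),
    "high":   (0.15, 0.25, 0.40),
}

def expected_customers(label, start=1000, years=3, trials=20_000, seed=3):
    """Expected customer count after `years`, given a qualitative risk label."""
    lo, mode, hi = CHURN_BELIEFS[label]
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        churn = rng.triangular(lo, hi, mode)   # sample an annual churn rate
        total += start * (1 - churn) ** years  # survivors after `years`
    return total / trials

for label in CHURN_BELIEFS:
    print(label, round(expected_customers(label)))
```

The side effect the myth discussion mentions shows up immediately: writing down the ranges forces a much sharper definition of what “high churn risk” actually means.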
Regarding Myth 3 (that quantitative methods are easy and most MBAs use them on a regular basis): Yes, most MBAs come out of business school with a good basic understanding of quantitative methods, say in game theory, decision analysis, and real options theory. My experience, however, is that after graduation they rapidly revert to traditional spreadsheet-based DCF analysis when confronted with three business realities. Reality 1: The real world is complex and does not look like the 3-10 node decision tree in the standard decision analysis textbook. Reality 2: Most organizations have standard decision-making processes, and spreadsheet-based DCF models tend to be the lingua franca in investment appraisals and similar processes. Reality 3: A full-fledged decision tree analysis most probably requires specialized modelling software.
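Reality 3 is less of a barrier than it may appear. Here is a sketch of a full expected-value rollback of a small decision tree in plain Python, no specialized software required. The tree (launch a product now, run a pilot first, or do nothing) and all payoffs and probabilities are invented for illustration.

```python
def rollback(node):
    """Return (expected value, best choice) for a decision tree node."""
    kind = node["type"]
    if kind == "payoff":
        return node["value"], None
    if kind == "chance":
        # Probability-weighted average over the chance branches.
        ev = sum(p * rollback(child)[0] for p, child in node["branches"])
        return ev, None
    # Decision node: pick the option with the highest expected value.
    evs = {name: rollback(child)[0] for name, child in node["options"].items()}
    best = max(evs, key=evs.get)
    return evs[best], best

tree = {
    "type": "decision",
    "options": {
        "launch now": {"type": "chance", "branches": [
            (0.4, {"type": "payoff", "value": 120}),
            (0.6, {"type": "payoff", "value": -50}),
        ]},
        "pilot first": {"type": "chance", "branches": [
            (0.5, {"type": "payoff", "value": 80}),
            (0.5, {"type": "payoff", "value": -10}),
        ]},
        "do nothing": {"type": "payoff", "value": 0},
    },
}

ev, choice = rollback(tree)
print(f"Best option: {choice} (expected value {ev})")  # pilot first, EV 35.0
```

The recursion handles trees of any depth, so Reality 1 (real trees are bigger than textbook ones) is a matter of data entry, not tooling.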
Regarding Myth 4 (that quantitative methods are too complex for practical use): The universal law in modelling is Occam’s razor: among competing hypotheses that predict equally well, the one with the fewest assumptions should be selected (source: Wikipedia). Contrary to popular perception, and even though say system dynamics or decision analysis has a certain learning threshold, models based on quantitative methods tend to be significantly less complex than, say, spreadsheets or qualitative models (in text) for a given level of predictive power.
Regarding Myth 5 (that quantitative methods are somehow big data): I fully concur with the case for big data, but big data is not quantitative methods as we understand them here. In fact, for most big bet industrial decision making, the amount of data available and necessary to make a decision is relatively modest, and understanding structural relationships is more important than discerning patterns in massive amounts of multi-dimensional data. On a side note, I have previously explored this issue in another blog post, see https://crispideas.wordpress.com/2015/04/19/why-big-decisions-are-about-small-data-and-why-big-data-is-mostly-about-monetization-or-many-small-decisions/.
Having read my attempt to debunk some myths that hinder the adoption of quantitative methods in big bet industrial decision making, the interested reader may wonder: What can we do to make effective use of quantitative methods in our organization? I have four pieces of generic advice:
- Use quantitative techniques to drive qualitative insights, and qualitative thinking to drive quantitative modelling. Essentially: Use quantitative methods to drive depth of thought (and well beyond your simplistic DCF models).
- Develop a standard framework in your company for modelling optionality, uncertainty / risk, feedback loops, time delays, and competitive behaviours; it may lead to significant adjustments to your NPV calculations.
- Don’t trust your CFO when it comes to these matters; through their formal training, CFOs tend to see strategy as planning, uncertainty as something to be managed through the discount rate / cost of capital, and optionality and feedback loops as something to be discussed qualitatively, with no financial value attached.
- Recruit quant talent, grow your quant capabilities, and invest in proper tools (you will not be able to do this in Excel).
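To illustrate the feedback loops and time delays mentioned in the second piece of advice, here is a minimal sketch of a system-dynamics-style model: a reinforcing word-of-mouth adoption loop (adopters recruit new adopters, saturating at the market size), simulated by simple stepwise updating. All parameters are invented for illustration.

```python
def simulate_adoption(market=10_000, adopters0=100,
                      contact_rate=0.8, months=24):
    """Reinforcing loop: adopters drive new adoption, limited by the
    remaining market. Returns the adopter count month by month."""
    adopters = adopters0
    path = [adopters]
    for _ in range(months):
        remaining = (market - adopters) / market     # balancing limit
        new = contact_rate * adopters * remaining    # word-of-mouth flow
        adopters += new
        path.append(adopters)
    return path

path = simulate_adoption()
print(f"Adopters after 12 months: {path[12]:,.0f}")
print(f"Adopters after 24 months: {path[24]:,.0f}")
```

Even this ten-line model exhibits the S-shaped growth that a static spreadsheet projection of constant percentage growth misses entirely; adding a time delay or a competitor's response is a few lines more.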
The first piece of advice may indeed be a good starting point for any use of quantitative methods. I have just finished compiling a causal loop diagram / influence diagram of a set of marketing activities, including the market’s response to them, for a client in the software space, with no explicit ambition beyond getting a better understanding of that client’s marketing strategy (including strategic levers, sensitivities, interrelationships, and uncertainties).
Going back to Mr. Rumsfeld’s quote in the introduction, I am not sure what modelling approach could possibly have saved him (modal logic and probability theory would be the obvious candidates). However, it is fair to say that quantitative methods do offer benefits in transforming unknown unknowns into known unknowns, known unknowns into more precisely known unknowns, and generally awkward thinking into clear and precise thinking. With more accurate models and sharper thinking come better decisions, and perhaps Mr. Rumsfeld could then have avoided his decision-making blunders.
Have a good day!