Friday, December 9, 2016

Metric madness: McNamara and the military

Previously, I posted about a historical precedent for managing by metrics: economic planning in Stalinist Russia.

I recently learnt of a capitalist analogue, starting with the Ford Motor Company in the USA.
I found the following account illuminating and loved the (tragic) quotes from Colin Powell about the Vietnam war.
Robert McNamara was the brightest of a group of ten military analysts who worked together in Air Force Statistical Control during World War II and who were hired en masse by Henry Ford II in 1946. They became a strategic planning unit within Ford, initially dubbed the Quiz Kids because of their seemingly endless questions and youth, but eventually renamed the Whiz Kids, thanks in no small part to the efforts of McNamara. 
There were ‘four McNamara steps to changing the thinking of any organisation’: state an objective, work out how to get there, apply costings, and systematically monitor progress against the plan. In the 1960s, appointed by J.F. Kennedy as Secretary of Defense after just a week as Chair of Ford, McNamara created another Strategic Planning Unit in the Department of Defense, also called the Whiz Kids, with a similar ethos of formal analysis. McNamara spelled out his approach to defence strategy: ‘We first determine what our foreign policy is to be, formulate a military strategy to carry out that policy, then build the military forces to conduct that strategy.’ 
Obsessed with the ‘formal and the analytical’ to select and order data, McNamara and his team famously developed a statistical strategy for winning the Vietnam War. ‘In essence, McNamara had taken the management concepts from his experiences at the Ford Motor Company, where he worked in a variety of positions for 15 years, eventually becoming president in 1960, and applied them to his management of the Department of Defense.’ 
But the gap between the ideal and the reality was stark. Colin Powell describes his experience on the ground in Vietnam in his autobiography: 
Secretary McNamara...made a visit to South Vietnam. Every quantitative measurement, he concluded, after forty-eight hours there, shows that we are winning the war. Measure it and it has meaning. Measure it and it is real. Yet, nothing I had witnessed . . . indicated we were beating the Viet Cong. Beating them? Most of the time we could not even find them. 
McNamara’s slide-rule commandos had devised precise indices to measure the immeasurable. This conspiracy of illusion would reach full flower in the years ahead, as we added to the secure-hamlet nonsense, the search-and-sweep nonsense, the body-count nonsense, all of which we knew was nonsense, even as we did it. 
McNamara then used the same principles to transform the World Bank’s systems and operations. Sonja Amadae, a historian of rational choice theory, suggests that, ‘over time . . . the objective, cost-benefit strategy of policy formation would become the universal status quo in development economics—a position it still holds today.’ Towards the end of his life, McNamara himself started to acknowledge that, ‘Amid all the objective-setting and evaluating, the careful counting and the cost-benefit analysis, stood ordinary human beings [who] behaved unpredictably.’
Ben Ramalingam, Aid on the Edge of Chaos, Oxford University Press, 2013, pp. 45-46.
Aside: I am working on posting a review of the book soon.

Given all this dubious history, why are people trying to manage science by metrics?


  1. What did McNamara do after the war? Did he end up in a University VC chair?

    Funny, I read this just hours before your post, which is what made me ask the question:

  2. Quite stunning indeed.

    My 2 cents on your last "why" question: people are lazy.
    Comparing numbers does not require effort and bears the semblance of objectivity.
    And the worrying trend of distrusting scientists makes people oppose letting science be judged by scientists. Once such peer evaluation is ruled out, and without (scientific) understanding of the contents, nothing but number crunching remains.

    So, in the very long term, I think working on having society trust scientists again may be a (not the only) root cause.

    As a side note: I think the "people" could arguably be replaced by "funding agencies" - I know that's where people work, but my view is that if funding agencies change (stop looking so much at numbers), universities will change too because it will be clear that quality trumps quantity.

    1. Thanks for the comment.

      I agree that laziness is a factor. Lack of trust is also a factor, and not just of scientists but of everyone receiving public money, from artists to politicians to welfare recipients to public school teachers to companies receiving "bail outs". In some cases this lack of trust may be justified and greater public accountability is required. On the other hand, the focus seems to end up on people at the "bottom of the pile" [welfare recipients and Ph.D. student travel funds] rather than those at the "top" [university VCs and bank CEOs].

      However, overall I think the problem is political: the rise of the neoliberal managerial class. Their power, income, and social standing requires the use of metrics to justify their decision making.

  3. Since replying, I see that McNamara did not ascend to a University Vice-Chancellorship, Deanship, etc. but instead went on to head the World Bank. I'd be interested in hearing your opinion on his tenure there.

    1. McNamara had as much success at the World Bank as he did in the Vietnam War because he had the same metric and management mentality. Unfortunately, his approach set the model for poverty alleviation programs. In hindsight, many of these did more harm than good. Many actually enriched US companies rather than helping the poor in the Majority World. The goals and methods of these programs were set by wealthy Westerners rather than by consulting locals. Moving beyond these World Bank failures is what the "Aid on the Edge of Chaos" book is about.

      McNamara's example shows that management by metrics may work well for a Ford car assembly plant but fail in vastly different contexts: a jungle war in South East Asia, or alleviating the poverty of subsistence farmers in rural India.
      Thus, we should be wary about applying it to scientific research.

  4. You know, Ross, I think that a post on the works of Taylor and Gilbreth would be very appropriate for this blog. Would you consider it?

    1. Seth, thanks for the suggestion. But, I don't think I am qualified. I had to Google their names to find out who they were.

  5. I agree, PCS, but I also think that asking funding agencies to do this is a little like asking fire ants to retreat from Queensland because it is more liveable for us without them.

    1. Except that it's not the explicit role of fire ants to serve the public (in [finding a way to and then] funding the best science).

      I agree if your remark is based on how feasible it may be.
      Although for important issues it may be good to be idealistic...

  6. The h-index, the citation rating, impact factors and the aspiring researcher
    Til Wykes, Sonya Lipczynska & Martin Guha, Journal of Mental Health, 2013, 22(6), 467-473. Extract below from the above paper:

    "Take the case of Ike Antkare who was outed as a fake in 2010 in a paper purporting to be written by himself (Labbé, 2010). At that time, he had 102 publications and an h-index of 94, which at that time was less than Freud (h-index of 183), but better than Einstein in the 36th position with an h-index of 84. Using one of the other Google metrics, the hm-index, Ike Antkare was in the sixth position, outclassing all scientists in his field (computer science). This was carried out using a computer programme that generated fake papers, which each referenced all the other papers generated. All that had to happen to be included on Google Scholar was to refer in the paper to a paper already referenced in Google Scholar. Once on Google Scholar, of course, there were then references to Ike in all the other metrics generating systems"

    Einstein has a different number in Google Scholar. Here are the numbers from Google Scholar:
    There are 876 researchers listed with a higher h-index than Einstein.
    Albert Einstein, Institute for Advanced Study, Princeton: h-index = 109, citations = 97,303. Freud tops the list in Google Scholar with an h-index of 266.

    Yes, why manage science by metrics?

  7. There's a particularly good discussion of the ideology and problems with planning via metrics in James Scott's 'Seeing Like A State', along with numerous examples of how not to do centralised metric-based planning.