Two former officials, in the administrations of Barack Obama (Peter Orszag) and George W. Bush (John Bridgeland), claim that “Based on our rough calculations, less than $1 out of every $100 of government spending is backed by even the most basic evidence that the money is being spent wisely.” (“Can Government Play Moneyball?”) So on what basis are decisions to spend money being made?
One of the questions I frequently get asked concerns evidence. What is the evidence that the approaches outlined here work and are effective?
All evidence purports to tell a story. The aim of this story is to convince. Here’s an example from The Daily Telegraph, “Coalition misses 70 election pledges.” Clearly the purpose of the headline and associated story is to convince readers that the coalition is failing.
Evidence is almost always contested, or rather it should be.
In more innocent times we relied on experts to contest the details and then to provide us with evidence. The battle raged far above us, as experts and specialists duelled to out-do each other. Times, however, have changed dramatically. (See “Revisiting Post-Normal Science In Post-Normal Times & Identifying Cranks”)
We live in an era where expert evidence can be, and frequently is, challenged by non-experts. This is because we now see more clearly how decisions affect us, and because we are increasingly able to do something about it.
Non-experts have a wealth of data at their fingertips, often more time and occasionally more drive to dig through the data. The Guardian runs a “data journalism” website. Challenges to expert positions therefore abound and the battle for evidence is taking place both on the ground and in the air above us.
As complex challenges grow in scope and scale, the stakes get higher. Proposals for action are fraught with consequences. Politicians, the media and the public demand accountability. Choices become divisive. In this instance, evidence increasingly becomes tactical, an instrument in wider strategic battles. Evidence loses, in some sense, its integrity.
In 1974 the Nobel Prize-winning physicist Richard Feynman wrote a short essay disparaging what he called “cargo cult science,” a science that merely goes through the mechanical motions of science. He commented,
“But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school–we never say explicitly what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.”
This sort of scientific integrity is increasingly difficult to maintain when science and scientists are called upon to shore up political decisions. Climate science offers one example: climate scientists have admitted to making arguments simply to prove a political point. (See “The Climate Fix: What Scientists and Politicians Won’t Tell You About Global Warming”)
One consequence of this loss of integrity is that evidence must almost necessarily be read critically. Those who provide evidence can either help or hinder this task. A lack of transparency hinders the work of critical evaluation. Opening up the sources of an evidence base, providing the raw data and the rationale helps the task of critical evaluation.
We can try to meet a standard for transparency in two ways. First, by providing in good faith the evidence and links to the evidence base we are drawing from, which ranges from published books and articles through to the learning histories, proceedings and our own notes from the labs we are running. Second, by making accessible the underlying logic, the basis and the sources of the ideas presented here. This hopefully allows readers to draw their own conclusions from first principles, should they wish to.
In The Social Labs Revolution I make two core claims:
1. Business-As-Usual (BAU) responses to complex social challenges are not simply ineffective strategies but are guaranteed to fail in time.
2. The strategic approach outlined in the book, generalized as a theory of systemic action, is more effective at addressing complex social challenges than BAU responses.
These two core claims prompt two corresponding questions for the reader, which need to be considered systemically: first, are BAU responses guaranteed to fail over time? Second, do the approaches outlined here have a greater-than-zero chance of succeeding?
Evaluating effectiveness always involves a comparison. For example, I often hear the phrase “…well, it’s better than doing nothing.” To which I would point out that in the situations discussed here, “doing nothing” is never really an option on the table. Rather, there are always competing courses of action and competing choices. These are what we should be evaluating.
In other words, the case for social labs should not be made without an accompanying critique of what the BAU response to the same challenge would look like, what it would cost and what results it is likely to deliver.
All too often, innovative strategies are asked to meet a standard of evidence that BAU strategies cannot hope to achieve. (The quote from Orszag & Bridgeland is an example). And all too often we fall for this “evidence trap”.
One of the challenges in making the case against BAU responses is the lack of work done in studying the evidence base for dominant approaches. The article by Orszag and Bridgeland quoted at the start of this blog post is a rare case where the evidence base for BAU responses is honestly considered.
To some extent we’re all taught to rely on rationality, to construct a rational argument for the course of action we’re advocating. Unfortunately, the corollary of this training is that we fail to perceive that “evidence” is increasingly used to rationalise decisions made by those in power. Bent Flyvbjerg, in his study of “democracy in practice,” makes the case that “In open confrontation, rationality yields to power.” We need to remember this when making the rational case for action, and we need to remember it when presented with an argument.