A reluctant review of “Lab Matters: Challenging the practice of social innovation laboratories” by Marlieke Kieboom, Kennisland
“This paper first gives a diverse overview of different lab practice. Second, it identifies and discusses four common omissions, namely that labs are falling prey to solutionism, tend to overlook the power of politics, overemphasize scaling of solutions, and underestimate the messy nature of human beings. The paper concludes with ten practical suggestions for social labs to move forward.”
A Weary Confession
“Discourses are practices which systematically form the objects of which they speak.” – Foucault
I started reading this paper with a lot of excitement. I was looking forward to being stretched by a well-thought-out series of challenges, to genuinely learning something new. Who can argue with the notion that we need to look critically at our own practices? That energy drained out of me like air from a burst balloon. And it wasn’t remotely because I was challenged.
The intention behind this paper cannot really be faulted. The substance, the crux of the argument, is largely hot air – a lot of puffing around a set of badly constructed straw-man arguments. Not entirely but mostly. I was immediately reminded of the old adage “the road to hell is paved with good intentions.” The simple summary would be that this paper is a triumph of intention over substance.
Good intentions are not enough.
For those who don’t want to wade through the details, here’s a summary of the five core problems with this paper:
1. The paper rests on an evidence-base that is very weak (or non-existent).
2. The critiques presented are inconsistent, internally contradictory and generally confused.
3. The author is blissfully unaware that, in some cases, decades of work have been done on the critiques she raises.
4. The author has constructed a straw man argument, which she spends pages and pages knocking down.
5. The author provides no account of her own experience or practice, leaving us guessing as to the ground on which her criticism rests.
Where’s the evidence?
This paper makes a set of accusations against a relatively young field, accompanied by an almost endless series of grandiose claims and little or nothing that passes as evidence. Making such accusations without grounding them in anything that even vaguely passes for evidence, data, phenomena, or disciplined observation masks a very basic failure to understand the nature of good feedback. (Oh, and I’m restraining myself from typing this in caps. Surely any undergraduate knows that URLs do not constitute evidence? Can I say it again? URLs do not constitute evidence.)
The preamble to the paper states that “‘Lab Matters’ stems from insights, experiences, and findings gathered at the event ‘Lab2: a lab about labs’ (organized by Kennisland and Hivos) in which 40 practitioners from 20 social change labs gathered in Amsterdam to learn from each other and exchange ideas.” It isn’t clear how a 2-day (undocumented?) conversation yields anything more than anecdotal evidence. I’m sure many things were discussed and much of it was interesting. But really, does that constitute the basis for a 44-page paper castigating an entire field?
A typical example of this acute lack of evidence, this one used to back up the claim that lab practices are apolitical, is as follows: “This was also observed by Lab2 participant Anna Lochard (27e Region) after closing the event: ‘I tried to bring the question of the political visions of our organizations up, but it is obviously a concern that is both really French and linked to our action in the public sector. I would love the other labs to realize that they are acting also politically…’”
Ermm what? Hold up. So you ran a workshop and someone said something that confirmed what you believe? That’s evidence? I think not.
I’ve never met Anna Lochard. Who is to say what Anna meant when she made her comment? Citing her in this manner seems a little decontextualised and a little unfair. Maybe she has spent the last decade secretly studying all the labs (really?) that everyone in the lab-field (what’s that anyways?) has worked on, but that’s news to me.
This selective, anecdotal and depressing confirmation bias is what passes for evidence throughout the entire paper.
Critiquing diverse monocultures thinly
In addition to confirmation bias, the paper falls under the weight of its own inconsistencies.
The paper constantly refers to the “varied” and “diverse” lab landscape. Instead of actually describing the landscape of these “varied” practices, the author provides us with a single paragraph listing lots of “labs” and then proceeds to describe, in a numbered list, a single, abstract, de-contextualized set of principles and practices.
If lab practice is indeed as varied as the paper claims – and as varied indeed as I believe it is – then where all of a sudden did this flimsy mono-cultural description of a “lab” come from? What are these diverse practices? Does the 2-day “lab” the author ran meet this description? Why does the author list a random collection of “labs”? What’s so special about this list? Is it because they all have the word “lab” in their name? What does the author mean by a “lab”? Is it a brand? Once again, it’s impossible to tell, because there’s nothing that could be mistaken for an evidence base.
I’m guessing that the author means to describe a practice or rather a set of practices. Instead of doing so, providing, for example, some “thick” descriptions of the lab-practice being criticized, we get a very, very “thin” set of abstractions.
Welcome to the party
Don’t get me wrong. There is probably some truth to the very grave accusations being made here. Unfortunately it’s impossible to tell from the evidence presented in the paper – because there is none. Instead I have to rely on what I know and am aware of. We’ll return to the question of how grave the accusations made actually are.
If I stick simply to my own practice, to what I’ve written, what I’ve been saying publicly, and the conversations I’ve been involved in, then I would say that many of the issues raised by the author are ongoing conversations.
In fact many of these issues are covered in my 2014 book “The Social Labs Revolution: A New Approach to Solving Our Most Complex Challenges.” The author cites my book a few times. Bizarrely, this is only in instances where she disagrees with me. In instances where I share the author’s concerns, for example about scaling, she ignores what I have to say. In instances where I spend pages discussing power, and how we struggled with power in the various labs I’ve worked on, she ignores these discussions too. I guess acknowledging these parts of my book would mean also acknowledging that the field is not as blind as the author claims it is.
For sure some of the concerns raised by the author are newer than others (the critique of scaling, for example), but by and large, to argue that the use of a label such as “poor” is problematic (i.e. “one needs to realize that in making a distinction between saving or helping ‘poor or vulnerable’ victimized people…”) is hardly news. That one is about five decades old. Welcome to the party.
What’s more, because the author says nothing much about what a “lab” is or what constitutes the field of “lab practice”, many different fields are collapsed and conflated into one.
For example, international development as a field has been fiercely studied and critiqued for decades. A number of the “labs” mentioned (simply by name) are little more than international development efforts branded as “labs.” To lump these efforts into the “landscape” of labs is needlessly confusing. It raises the question of why a website (or any workshop) labeled a “lab” isn’t also being included as part of the field. It’s also ironic that the author ran a 2-day workshop labelled “a lab about labs” – if that’s a lab, then there are probably many hundreds, if not thousands, of events that constitute “labs.” None of these, of course, are referred to. Once again, an ugly confirmation bias is lurking behind these choices.
I reluctantly conclude that the author isn’t very familiar with the struggles shaping this field. Nor, it seems, is the author aware of the broader discourses from which the critiques made in this paper emanate.
In turn all of this makes confronting the avalanche of claims made in the paper a deeply tedious exercise.
In the interests of everyone’s sanity I will therefore limit my comments to the four core accusations this paper casts against the nascent field of social labs.
The charge of solutionism
The word “solutionism” came into play fairly recently, with the publication in 2013 of Evgeny Morozov’s book “To Save Everything, Click Here: The Folly of Technological Solutionism.” The author cites Morozov, saying: “In his view solutions are incremental improvements in systems that fail us, to ‘end up being OK with doing things just a little bit better.’”
Unfortunately, the author hasn’t really bothered to understand Morozov. More importantly, she also hasn’t understood the problem with Morozov’s critique – which is that it suffers from precisely the same flaw that Lab Matters falls prey to: the construction of a straw-man critique. Just type “Morozov straw man” into Google.
If you want to critique “solutionism” in contemporary culture, and extend that critique to lab-practice, then I would suggest Jacques Ellul’s “The Technological Society” as a more serious contender. Ellul’s exhaustive study traces the consequences of a society organised around, and obsessed with, “technique.”
And “lab-practice” is vulnerable to this criticism, as is the wider field of social innovation. There is a widespread obsession with both tools and technique. I have pointed out several times that this is akin to being obsessed solely with kitchen knives and frying, while professing an interest in cooking.
And if you want to critique technocratic thinking and the idea of incrementalism, then perhaps examine the dominant culture of planning and optimisation. (As I do in Chapter 2 of my book.)
Finally, the logic the author constructs as an example of “solutionism” is so confused that I can barely decipher what’s going on. Here’s a sample:
“Root causes are (unfairly) technically formulated: i.e. malfunctioning institutions and policies, a lack of cooperation, poverty.”
“This is reflected in the way we combat root causes: one can create ‘to-do lists’ to create systemic action: constitute a diverse team, design an iterative process, and actively create systemic spaces (Hassan 2014: 109). But solutions are most evident in the production of labs: more affordable toilets, better mobile applications, and new tools to monitor election violence. And the list is growing.”
Whoa, slow down. Let’s first recover from the horror that “the list is growing.”
So “malfunctioning institutions” or “malfunctioning policy” or “a lack of cooperation” or “poverty” are examples of root causes that are “(unfairly) technically formulated”? And evidence for this is a single statement “the way we combat root causes: one can create ‘to-do lists’…”?
So let me get this straight. “Constitute a diverse team,” “design an iterative process” or “actively create systemic spaces” are examples of “solutionism” because they are…part of a to-do list? The mind boggles. Seriously?
Perhaps creating a to-do list with the item “send Mum a Mother’s Day gift” or “write a paper critiquing lab practice” could also be considered as examples of “solutionism” if they are part of a longer to-do list but who the heck knows?
This section is hopelessly confused.
The author has taken a list that I provide in my book. The list is not directly about “how to tackle root causes” but about how to take systemic action, which is a broader challenge.
To this list are then appended examples like “more affordable toilets” and “better mobile applications” that come not from any of the labs I describe in my book, nor from any labs I’ve ever worked on, but from some other set of initiatives labelled “labs” that do none of the things I specify as necessary for addressing root causes. At best this conflation is confused; at worst it’s deliberately misleading.
To top it all off, here’s the coup de grâce: “In practise (sic), systemic change is a daunting exercise. Let us provide another example, our food system.” And then comes a paragraph describing what systemic change might mean in the food system. At this point I barely have the energy to mutter “but there’s a whole chapter on food in my book…and we worked on food systems for a decade and we did all the things in practice that you describe in abstract…oh whatever.”
The charge of being apolitical
I think the point of the complicated metaphor of an imaginary house in this section is “people are not equal.” Yes, indeed.
The charge of being “apolitical” is simply a long, badly rehashed version of the argument made by Robert Chambers in “Whose Reality Counts? Putting the First Last” (1997) and made countless times since. It culminates in the question “Does this process not just consolidate existing power divisions?” Yes, it totally would…if none of us had ever heard of Robert Chambers or had spent less than a minute thinking about power differentials and how to deal with them.
The opportunity lost at this point is acute. A more informed discussion of how power could be handled in practice (maybe the imaginary room will help?) would have been an invaluable service to the wider community. Instead we have yet another badly constructed straw-man argument, designed mostly to rationalise a single, rather obvious point.
Then comes a second coup de grâce, once again delivered a few decades too late:
“We cannot deny the political economy of labs: the vast majority is dependent on the same kind of funding structures as other organised efforts for change, such as donor grants and government subsidies. This factor makes it particularly difficult, if not impossible, to propose radically different ideas, especially to the party who funds it. For example Reos’ sustainable food lab is sponsored by Kelloggs, while Mindlab is funded by the Danish government. This will make the offered outcomes largely dependent on the structures that are accepted by donors, and thus has the potential to limit the ability for labs to seek discontinuous change.”
Of course this is a serious point, right? Sure. And critique is easy. At this point, instead of a poor lecture on political economy, I would have appreciated some reflexivity: what has the author done to tackle this problem in the labs she has been involved in? Where did her funding come from? I would be eager to learn what sources of “clean money” the author has discovered. Any hints would be appreciated. But alas, none are forthcoming.
The charge of being obsessed with scaling
This is probably the only section of the paper that I would rate above a failing grade. We are sorely in need of critical thinking about scale and scaling. The entire field of social enterprise is indeed obsessed with the idea of scaling and unfortunately this obsession has bled over into labs-practice.
The author takes a stab at criticising this obsession. The main point is…well, that we should think critically about scaling. Once again though, the paper lapses into tedious generalisations, such as this one: “After all, most labs tend to be operative in a supportive context of a stable, economically rich, rule-bound state with relative predictability in institutional behaviors and accountabilities.”
What? Most labs are? If anything, isn’t the point of a lab to be a stable platform? But “relative predictability”? Where? I don’t think so.
There’s a short discussion on the alternatives to scaling in my book, in a section called “The Scale-Free Laboratory.” In it I write:
“Interestingly, scale is one of the issues that most preoccupies actors working in the social realm. The usual assumption is that we start small and then grow big. Common questions, particularly in donor communities, include “How will your initiative scale up?” and “What is your scaling strategy?” These concerns are, however, largely irrelevant.
Just as a game of football can be played almost anywhere with very little equipment or can be played with professional teams in vast stadiums, social labs can be run at any scale. This could range from a school or an organization to a community, a city, a country, a region, or the world.”
The charge of not understanding the messiness of human nature
“…we would like to challenge the image of humans as happy-go-clappy-post-it-sticking enthusiasts.”
This critique, while having merit, seems to be a rather random point about social innovation culture in general. It does not specifically address lab-practice, but lumps it into a fuzzy, indistinct picture that seems to be drawn primarily from marketing brochures.
In many ways, this charge seems to be the laziest of all the charges made against Lab practice.
My colleague Adam Kahane has authored three books over the course of a decade of practice, and I’ve written one. All four of these books are fairly candid and have been praised for portraying our practices in their splendid, messy glory. In many of the labs I’ve worked on, we have tried hard to capture the messiness of practice as we go along, publishing, for example, various learning histories and documents – none of which the author seems to be aware of.
In our practice we pay serious attention to both individual and group dynamics. One of my early mentors, Myrna Lewis, practices something called Deep Democracy (DD). Originating in process-oriented psychology, DD is a way of working with decision-making and conflict designed to include minority positions in group decisions. Given that DD is inspired by the very messy work of Arnold Mindell, levelling the charge of not understanding the messiness of human nature at a DD practitioner is laughable.
Similarly, many other colleagues make use of other embodied practices, such as trauma stewardship or somatic coaching. Again, the author seems to be unaware of the existence of such practices.
So yes, let’s not reduce human messiness to 2-dimensional caricatures. But if so, then let’s start with this paper, which sets up 2-dimensional sacred cows simply for the pleasure of slaughtering them – something I’d expect from a bad undergraduate paper.
Three Friendly Suggestions for Future Critique
I’m guessing that this paper intended to take an inductive approach to the field of lab-practice. It attempts to give the impression that the charges levelled against the field stem from observation. Certainly no hypothesis is set out, and no methodology for verifying a hypothesis is stated.
Here are three friendly suggestions for taking an inductive approach to critique of this field:
1. There is no “we”
In the labs space there is no “we.” The field is very young, immature and fragmented. Sweeping generalisations do not apply…and so speaking of some “we” makes very little sense. Be specific.
2. Root the case in an evidence base
Being critical is fine. Being critical without an evidence base is not. The evidence base can be personal experience; it does not have to be a longitudinal study. But anecdotal evidence from a 2-day conference is not sufficient to critique an entire field.
3. Practice reflexivity
Some degree of reflection on one’s own experiences, one’s background and history, even as a short preamble, is essential to locating the critique. Not providing this information means the reader is constantly left guessing as to where the critique is coming from. This lack of reflexivity coupled with the lack of an evidence base is unforgivable.