A Bioengineer called Guru Madhavan gave a talk at the RSA this week pointing out that it makes sense to apply the tools of rational disciplines, like engineering, to the field of public policy. He gave some examples of (what others would call) user-centred design, systematic and multi-criteria decision-making.
I got this stuff with my mother’s milk (slight exaggeration) in that my early academic training was in Operational Research, which does all of that sort of thing, and has done for decades. And to be frank, it struggles to have an impact.
Hearing someone else present my own case back to me, combined with three decades of actual experience of attempting to make stuff happen, helped me to crystallise why it struggles. It’s not that a rational evidence-based approach to policy setting is bad, it’s just that there’s a MUCH better alternative.
It’s called “ideology”.
Making decisions based on ideology is:
- Quick (I know immediately what the right answer is)
- Cheap (no expensive analysis or delay)
- And… it comes with a narrative that is already accepted by many (and if I’m in a position to actually make the decision, it’s likely that a workable majority of people currently agree with that narrative, so – job done).
These are enormously desirable characteristics for a decision-making method to have.
So if my ideology tells me that the best way of dealing with criminals is to imprison the reprobates for longer, or that worklessness is best resolved by punishing non-compliance rather than rewarding compliance, or that there is no place for the market in public service provision, I can make even the most profound public policy decisions very quickly and easily.
Another useful criterion for a decision-making process, of course, is that it actually works in practice! In a wonderful book, The Blunders of Our Governments, Anthony King and Ivor Crewe measure the success of various government initiatives in terms of whether they achieve the policy objectives that were set out for them initially (i.e. whether or not you agree with the need for that policy outcome, did it succeed?). Many failed. It is argued, persuasively, by many, that bringing a rigorous process of analysis to bear would increase the likelihood of success. But such processes are:
- Slow (I have to think about it, gather data, wait to see what happens)
- Expensive (cost of the above, plus the significant political cost of not being seen to “do something”)
- Prone to coming up with results that are really hard to understand, but very easy to misunderstand, all over the front page of the Daily Mail (or, frankly, The Guardian).
Seeing someone argue passionately for the rational approach on the grounds that “it is the right thing to do” (which I agree with) nonetheless helped me to realise where the problem is. We aren’t going to win on the “but it WORKS” turf, because ideology is so much better as a basis for decision-making in some other very important ways.
So the agenda for me now is to spot and encourage ways to make the rational approach quicker, cheaper and better narrated. How can we use our skills of design to do this, for example? How can we engineer faster, cheaper, well-articulated analyses?
Quicker and cheaper, for example, may eventually come from an increased prevalence of open data, or indeed more data that is actually machine readable rather than locked up in chemicals on smooshed trees. It will also come from having done some of the thinking and analysis in advance. The better narrative may come from the data geeks remorselessly forcing themselves to tell the story, or working with people who can help them to do that. It may come from greater analytical skills in the folk who own and support the narrative, such as our most senior public servants. New technologies will help tell the story – in some of the analysis I have seen, the fact that the data story can be told with maps really helps. But it’s not just about tech: the analysts have to be willing to engage with the wider narrative directly, as Mark Henderson points out (with a very compelling narrative) in The Geek Manifesto.
Maybe all of this is obvious, but I think that those of us who have an affinity for an evidence-based approach, grounded in what actually works, may have to stop arguing that our method is best, and face up to the fact that on some really important criteria… it sucks.