The Behavioural Insights Team (BIT) has just published a report – “Behavioural Government: Using behavioural science to improve how governments make decisions” which is a fascinating summary of how new ways of thinking can help improve policy-making. Sir Jeremy Heywood, in the Foreword, says it is “essential reading for anyone who cares about how well governments serve their people”.
The report describes a three-stage model of policy-making – noticing, deliberating and executing – to explore how biases can occur and how they might be addressed.
For each of these steps, there are potential biases that can cause policy development and deployment to go awry, sometimes resulting in failure or unanticipated consequences. Here are some of the examples of biases discussed:
- Framing – politicians are more likely to choose a risky policy when it is framed in terms of losses/costs rather than gains/benefits
- Attention allocation means salient “current” issues can crowd out important or urgent ones, resulting in over-reactions and a grasping at quick (but useless) solutions
- Confirmation bias means people look for evidence that supports their existing views and are less able to analyse contrary evidence critically
- Group reinforcement occurs when people conform to the group’s majority view even when they think it is incorrect. Group decision-making can even result in more extreme positions being adopted, as members reinforce rather than challenge one another.
- The illusion of similarity means that policy-makers think more people share their opinions than is actually the case, and they also overestimate how much people will support the policy.
- Inter-group opposition occurs when members of one group reject the arguments of another group, often believing the other group is biased or dishonest.
- Optimism bias is the tendency to overestimate ability, the quality of plans and the likelihood of success. We see this all the time in Business Cases, which are sometimes little more than works of fantasy. It also results in risk-taking based on false assumptions.
- The illusion of control is the tendency to overestimate how much control you have over future events, particularly when a policy may actually result in unanticipated consequences in complex systems.
The report goes on to suggest a series of strategies for dealing with these potential biases but, given the existence of the biases, it seems doubtful that these strategies will even be noticed, deliberated upon or executed by policy-makers!
It says that awareness of biases in policy-making needs to be raised as a starting point before the strategies can even be considered. Finally, it says that governments will need help to develop structural changes to reduce the impact of biases.
It’s going to be a slow journey, I think, but there is definitely a role for behavioural science and behavioural scientists to nudge policy-makers in the right direction.