Back in the grim, dark economic days of 2010, we began a new project at the ESRI. It was called ‘Renewal’. The aim was to produce a series of studies and one-day conferences to show how evidence could be used to tackle policy issues in post-crisis Ireland.
The studies were spread across the policy spectrum: health, education, regulation, public infrastructure, labour market activation and more. Some were broad cross-departmental policy issues, such as how best to execute a large fiscal adjustment or how to improve users’ experiences of public services. Others were more specific, such as whether to introduce a loan-to-value limit for residential mortgages or whether pay-for-performance would improve health services.
The project was greatly assisted by the willing engagement of a good number of senior public servants, both from departments and state agencies. The emphasis on the implications of the evidence for policy produced much discussion and debate, not only about the relationship between evidence and policy but also about the relationship between researchers and policy-makers. ‘Using Evidence to Inform Policy’ (Gill & Macmillan) explores what we learned, using the Renewal studies as examples.
The lessons were not only for policy-makers but for researchers too. Each group has its own perspective, and each can learn from the other's, about what evidence can and cannot do.
First, consider what evidence can do. Too often this is narrowly conceived as the evaluation of policy outcomes. It's true that evidence can be used to evaluate policies with measurable indicators of success or failure, but it can do much more than this. In 'Using Evidence to Inform Policy', we argue that evidence can be applied at various places in what we call the "policy landscape", which consists of processes that first identify policy challenges, then agree policy goals, consider options for reaching those goals and, finally, determine the specific policy. Evidence can be helpful in all parts of this policy landscape.
For instance, evidence can sometimes identify challenges that previously were not recognised in stated policy objectives. Examples in the book include a study on the perceived quality of public services, which reveals that service quality is perceived to be poorer by the economically vulnerable, with variation across different services. This raises the question of how policy might improve the interfaces between users from lower socio-economic groups and those services they find to be of poorer standard.
The general point is that this particular policy challenge arises only as a result of research evidence, which identifies and quantifies the problem. This allows policy-makers to gauge its extent and decide whether improving certain services for those in lower socio-economic groups should be a policy goal.
Research evidence can also contribute to the next part of the policy landscape where once the goal of policy is agreed, there is a need to formulate and assess policy options for obtaining it. Most straightforwardly, this entails the establishment and measurement of key facts. How many people will be affected? Who are they?
More subtly, research evidence can be vital in this part of the policy landscape because it can improve the understanding of those tasked with formulating policy. In addition to providing facts, evidence provides insight into causes.
Most policy areas deal with a complex system of relationships between citizens, and between citizens and organisations, and between various organisations. To change outcomes, policies must have a causal impact on the relevant system, supporting a positive causal effect or blocking a negative one. Evidence that offers insight into what causes what is often suggestive as to which policy option is most likely to succeed.
An important aspect of the role of evidence is its ability to generate surprises. An example concerns the quality of consumers’ financial decision-making. Consumers routinely get into difficulty through excessive borrowing or by leaving themselves vulnerable to asset price movements.
An obvious policy solution is to try to improve financial capability through education. Surely more financially literate consumers will make better decisions? Yet, surprisingly, the evidence suggests not. The causal link between financial education and decision-making is weak. Other policy options need to be considered.
So evidence is about more than policy evaluation. It can identify policy challenges and improve the policy-maker's understanding by illuminating relevant systems and processes. Part of the Public Service Reform Plan seeks to improve how evidence is employed to evaluate policy, including via the identification of measurable policy outcomes. But perhaps we should also seek systematic improvement in how evidence is gathered and used to identify policy challenges and formulate options.
The Renewal project generated lessons for researchers too about what evidence cannot do. The same piece of research can generate multiple views regarding its implications for policy. This is to be expected but is not always recognised.
Even if the evidence points clearly towards an advantageous policy change, it cannot usually tell a policy-maker what priority to give that change against competing priorities. Nor can research evidence determine what is fair. Most policy changes benefit some people more than others and people differ in what distribution of benefits they regard as fair. Matters of priorities and values are unavoidable in policy-making, whatever the evidence says.
So too are matters of risk and uncertainty. Almost all policy changes involve a degree of risk and uncertainty. Evidence often suggests that a beneficial outcome is likely but rarely that it is inevitable. Success often depends on how those tasked with implementing the policy respond to its introduction – a factor that policy-makers can often judge more accurately than researchers. Even where risk can be assessed, how much of it should policy-makers bear? Tolerance of risk varies between individuals, and evidence cannot provide a right answer.
Over and above risk assessment, there will always remain true uncertainty. What are the chances that a policy change results in an unintended consequence that no one foresaw? Evidence cannot tell you how much you know of what there is to be known.
Policy cannot be deduced from evidence alone, because most policy decisions also involve priorities, values and assessments of risk and uncertainty. Note that these factors do not even include political and financial constraints. Researchers perhaps too often assume that, where policy is at odds with evidence, politics must be to blame. Yet the inference from evidence to policy is far from straightforward even when there is no contentious politics involved.
In summary, there is the potential for evidence to make a much greater contribution to policy in Ireland but evidence takes you only so far down the road to a good policy. How, then, might we set about using evidence better?
In the book, we describe how several countries are trying to adapt their systems to improve the relationship between researchers and policy-makers, making it more systematic and engaged. It is obviously important in such systems to preserve the independence of researchers who must always be willing to entertain, test and report on hypotheses that may not be popular or convenient.
Much could be done in Ireland, where presently the use of evidence for policy is patchy. A more systematic engagement would help researchers and policy-makers to understand each other better and, together, to generate, consider and ultimately benefit from sound evidence.
Pete Lunn is a Senior Research Officer with the Economic and Social Research Institute. A former BBC journalist, he joined the ESRI in 2006 and his primary research interest is economic decision-making.