Last Tuesday (18 October 2016) the Expert Group for Aid Analysis and Evaluation (EBA, www.eba.se) organised a day to discuss three evaluations of development cooperation in Africa. One of them looked at the impact of 50 years of development cooperation between Sweden and Tanzania. It raised several critical issues, and there is much to learn from a careful analysis of its conclusions. However, immediately after the report was presented, a number of websites and some daily papers took the conclusions out of context and generalised them to all aid to Tanzania, and elsewhere. The debate got skewed, and much intellectual effort came to be spent on a ‘quasi-debate’ that was irrelevant and did not address the issues in the evaluation reports. It was obvious that many of those commenting had neither read nor understood the evaluation. My pessimistic question is whether it is futile to bring evidence to the debate on development cooperation. It seems impossible to prevent a critical analysis from being hijacked by people whose agenda is to put an end to development cooperation rather than to improve it. So, how can an intellectual climate for critical reflection be built and maintained?
Yet another discussion at the EES conference was about complexity. It was said that perhaps there are those who have a vested interest in portraying things (policies, projects, programmes, evaluations) as more complex than they actually are. Complexity draws money and is an argument for increased funding – rightly or not. I see the point, but on the other hand almost everything can be used as a self-serving argument – obviously complexity too. Special methods – read RCTs – are an obvious case in point. Unfortunately, all skills, including conceptual skills, compete on a market, and almost everything can be construed as – and often is – self-promoting behaviour and rent-seeking in one way or another. Not only complexity.
Standards in evaluation have been discussed many times, and quality and standards also featured at the 2016 EES conference. There are several standards around, but the question is: who evaluates the standards? Several discussants pointed out that current standards often become straitjackets that hinder innovation, cost-effectiveness and other qualities. It was suggested that they have no empirical base; that is, there is no research actually linking them to quality in evaluations. Furthermore, many standards focus on process issues rather than on the substance of what is in evaluations. Perhaps there is a need to revisit current frameworks of quality standards, such as the Program Evaluation Standards and the OECD/DAC standards.