Introduction: The need for evaluation
The State Aid Modernisation initiative launched by the European Commission in May 2012 aims primarily to channel State aid towards remedying genuine market failure. Subsequent policy documents have all been based on this underlying aim: to ensure that State aid is truly needed and effective in addressing market failure. The need for and the effectiveness of State aid are assessed ex ante, i.e. before any aid is granted. Last week, however, the Commission published a paper outlining the rationale and possible methods for evaluating the ex post effects of State aid. If the Commission’s proposal is adopted, it will be the first time that the EU’s system of State aid control makes explicit use of such an ex post instrument.
Under the present State aid regime, Member States are not obliged to carry out any ex post evaluation of the impact of State aid they grant. As the Commission paper explains, “the current State aid set-up focuses little on the actual, measured impact of aid schemes. Rather, schemes are approved ex ante on the basis of pre-defined criteria on the assumption that their overall balance will be positive, without a proper evaluation of their impact on the markets and over time.” [p. 2] “Evaluation is typically carried out rarely and on an ad hoc basis.” [p. 3]
The European Court of Auditors in a Special Report published in December 2011 criticised the Commission for not analysing whether State aid indeed achieves its policy targets. Naturally, the Commission cannot evaluate the effects of all State aid it approves. This is a task for the granting authorities.
Surprisingly, some Member States have been very reluctant to carry out ex post evaluations. They complain that it is often difficult, if not impossible, to measure the effects of State aid and that, at any rate, evaluation would make the implementation of State aid measures more bureaucratic. This is a strange view. First, if granting authorities do not know how State aid affects their economies, why do they grant it in the first place? Second, since they use public money, they should have an informed view of how well that money is spent. Third, ex ante and ex post evaluations are not an unusual practice; they already take place in the context of the Structural Funds. This raises the question of why Member States can evaluate Structural Funds but cannot do the same for State aid.
The Commission has received political support for its Modernisation initiative from the Council, which on 13 November 2012 agreed, among other things, that
“4. … instruments allowing for a better prioritisation and greater simplification should go hand in hand with effective evaluation and control of compliance with the State aid rules at the national and European level, while remaining proportionate and preserving the institutional competences of the Commission and the Member States.”
The purpose of evaluation is different from that of ex ante assessment, which mostly ensures compliance with the relevant rules. Only a minority of State aid measures are subject to a full assessment of their likely effects under the so-called “Balancing Test”. Rather, the purpose of evaluation, as opposed to assessment, is, in the words of the Commission, to “provide analysis on the effectiveness and efficiency of an aid measure and suggest improvements and lessons to be learnt.” [pp. 2-3]
The Balancing Test is applied to awards of aid above certain thresholds and to measures falling outside any of the present guidelines. According to the statistics in the Commission paper, 87% of the amount of aid is granted in the context of schemes and GBER-based measures [which are outside the scrutiny of the Commission]. Therefore, any ex ante analysis of the possible effects of aid is limited to only a small minority of all aid.
The Commission argues that its monitoring exercise of 2011-2012 “identified deficiencies in the implementation of a significant number [of] aid schemes”. [p. 4] It suggests that evaluation could help to prevent “some” of these deficiencies. “Firstly, it enables to assess the overall impact of … aid schemes on the market … and whether the objectives of the aid measures have been achieved. Secondly, it can help where appropriate to improve the design of the scheme, introduce corrective measures, calibrate interventions to maximise effectiveness and efficiency. Thirdly, the introduction of State aid evaluation makes ex ante assessment less necessary opening thus the way for an enlargement of the set of measures that can be exempted from notification or subject to a lighter scrutiny.” [p. 4]
The purpose of the Commission paper is to garner the views of Member States and collect information on national appraisal procedures and methods. Although any new rules will not be adopted for some time, the question that arises is whether the ideas which are presented in the Commission paper have merit or not. The short answer is yes. But they can be improved.
The process is important
Logically, ex post evaluation on its own cannot help improve ex ante assessment. For this to happen, an operational “loop” has to be built into the system at both the Commission level and the Member State level. The results of ex post evaluation first have to be collected and then fed back into ex ante appraisal. This appears simple, but in fact it is a complex process.
The complexity stems from the following factors. First, any granting authority has a natural interest in declaring its measures successful. Credible evaluation requires at least some degree of independence between those who implement a measure and those who evaluate it. As the Commission paper correctly notes, “proper evaluation should be objective, rigorous, impartial and transparent”. [p. 8] It goes on to state that “evaluation shall be carried out by a national independent body”. [p. 10] This will indeed add credibility to evaluations.
Second, the results of evaluations have to be collected, compared and then transmitted to all authorities that grant State aid, or kept on record so that they can be accessed by other authorities that may want to grant aid in the future. This comparison and dissemination of information serves two purposes. First, there is a lot to be learned from both good and bad practices across public authorities. Second, comparison confirms that what appears to be good is not just the result of good luck or of the specific circumstances of a particular measure, but the outcome of an intrinsically effective practice. The same applies to what appears to be an unsuccessful practice. For this kind of learning to be useful across the EU, the comparison and dissemination of results have to take place at both Member State level and EU level. This means that evaluation results have to be collected and processed by Member States, then transmitted to the Commission, which should send them, possibly with its own comments and analysis, to other Member States for further distribution to their State aid granting authorities. Learning will not happen spontaneously. It will need considerable organisational effort.
The Commission paper suggests publication of evaluation reports on the Commission’s website. This is a good step towards more transparency, but more can be done. The Commission should also add its own appraisal of the quality, method and outcomes of evaluations. It should have a more proactive role in identifying good and bad practices.
Objectives of evaluation
No policy is 100% successful. This is because no policy-maker can predict with 100% certainty the future behaviour of market participants and possible market outcomes. Every policy has an element of uncertainty. This is even more so in the case of State aid where aid recipients have strong incentives to exaggerate their needs and the usefulness of the aid they receive. Effective policy implementation requires some form of ex post appraisal of actual results and subsequent adjustment of policy instruments.
Therefore, the Commission correctly states in its paper that State aid evaluation should have the following three aims:
- “to verify that the assumptions underlying the approval of the scheme on the basis of an ex ante assessment are still valid;
- to assess whether the scheme is effective in achieving the direct objective for which it was introduced;
- to cater for unforeseeable negative effects, in particular the potential aggregated effect of a large scheme.” [pp. 6-7]
However, a major weakness of evaluation methods is that policies often have multiple deliverables and multiple indicators of whether the desired outcomes are in fact achieved. This creates two problems. First, the various policy deliverables can conflict. Second, if the responsible public authority can choose from an extensive menu of indicators, it will choose those which are favourable. In the end, all policies are declared successful on the grounds that some indicators appear to be positive. It is telling that the Commission paper gives several examples of how the effectiveness of State aid can be measured. A wide choice of policy indicators will only weaken ex post evaluation.
To a large extent the problem noted above can be avoided by the fact that evaluations ought to be carried out by independent entities. However, if these entities do not have access to the raw data or do not collect data themselves, they may be either prevented from accessing correct data or may be given only favourable data. It will be important that rules on collection and access to data are laid down beforehand. In addition, the policy outcomes that are to be evaluated will have to be defined ex ante in quantifiable or verifiable indicators to prevent disputes afterwards as to what is being measured and evaluated.
What to evaluate and measure?
On the issue of what is to be measured and evaluated, the Commission paper outlines six relevant questions:
- “What is the market failure to be addressed?
- How is the State aid scheme expected to address this market failure?
- What are the beneficial effects to be expected? When are those effects likely to materialize?
- What are the distortive effects to be expected? When are those effects likely to materialize?
- Are these effects expected to differ to a large extent between beneficiaries?
- Is the design appropriate to ensure incentive effect, targeted intervention and to control potential distortions?” [pp. 8-9]
These questions largely resemble those embedded in the Balancing Test and the detailed assessment of State aid. They are valid questions, and the application of the Balancing Test so far has demonstrated beyond doubt that, when Member States have to provide the evidence to answer convincingly the questions it poses, their State aid measures improve significantly. Given that the Balancing Test has proven its worth, the question that arises is whether evaluation of State aid should simply replicate the Balancing Test or go beyond it.
The answer is that evaluation should go beyond the Balancing Test in one trivial and two important ways. The trivial difference is that the Balancing Test is an ex ante appraisal of expected effects, while evaluation is performed ex post on actual effects. Evaluation should therefore seek to capture both intended and unintended effects. The important differences are, first, that the Balancing Test is in practice applied to individual awards of aid. By contrast, evaluation of State aid should be applied to whole schemes, irrespective of whether the aid recipients are SMEs and regardless of whether the individual aid amounts are large or small.
The second important difference is that a properly conducted evaluation should fill a gap in the methodology of the existing Balancing Test. It is rare that Member States actually carry out market studies with the aim of quantifying the expected positive and negative effects of the State aid they intend to grant. Most market studies seek to establish the existence of market failure and the necessity and proportionality of the aid that is to be granted. With respect to the possible negative effects of aid, most market studies restrict themselves to an analysis of market structure, the position of competitors and the likely channels through which negative effects may be transmitted. In fact, I am not aware of any case where a positive Commission decision on the application of the Balancing Test has been reached on the grounds that positive effects were measured and shown to exceed quantified negative effects by a certain margin.
A proper evaluation should quantify, to the extent possible, all effects of State aid so that, even when the positive effects outweigh the negative, a clearer understanding can be obtained of how much value is added by State intervention or how large the margin is between the positive and the negative impact of State aid. For example, if the positive effects are 200 and the negative effects 90, State aid should be granted. If the positive effects are only 100, then, given the distortions caused by the intervention itself, the State aid should be reconsidered or re-designed.
Ex post evaluation of State aid is both necessary and desirable in an era of austerity in public spending. Some Member States may not welcome the Commission paper, but they should realise that it is not contrary to their interests. Therefore, the success of a new tool focusing on evaluation of State aid will very much depend on whether it can generate savings without preventing Member States from redressing market failure.
That success will be conditional on both the methodology of and the procedure for carrying out evaluations. It will also be conditional on the ability of the Commission to disseminate lessons learned in different Member States.
European Commission, DG Competition, Evaluation in the Field of State Aid: Issues Paper, Brussels, 12 April 2013.
European Court of Auditors, Special Report 15/2011, Do the Commission’s Procedures Ensure Effective Management of State Aid Control?, Luxembourg, 15 December 2011.
Council of the European Union, Results of the 3198th Council Meeting, 13 November 2012.