
ToRs interim evaluation PSUP II_9.10.pdf

Original file name: ToRs interim evaluation PSUP II_9.10.pdf
Author: Gary QUINCE

This PDF 1.7 document was generated by PDF CoDe 2013.4461 (c) 2002-2013 European Commission and uploaded on 13/10/2014 at 14:14 from IP address 194.79.x.x. The file's download page has been viewed 1657 times.
File size: 115 KB (23 pages).
Privacy: public file


Directorate-General for Development and Cooperation — EuropeAid

Regional Programmes Sub-Saharan Africa and ACP wide

FWC BENEF 2013 – EuropeAid /132633/C/SER/multi
Lot n°7 – Governance and Home Affairs
REQUEST N° 2014/ 350918
Mid-term Evaluation of the Participatory Slum Upgrading Programme II (PSUP II)

Table of contents

1. BACKGROUND
   1.1. Overall context of the PSUP
   1.2. Objectives and results of the PSUP
   1.3. Key implementation arrangements
2. DESCRIPTION OF THE ASSIGNMENT
   2.1. Evaluation objectives and issues to be studied
        2.1.1. Objectives and coverage
        2.1.2. Issues to be studied and Evaluation questions
        2.1.3. Approach and Methodology
   2.2. The evaluation process
        2.2.1. Desk phase – inception
        2.2.2. Desk phase – finalisation
        2.2.3. Field phase
        2.2.4. Synthesis phase
        2.2.5. Quality of the Final Evaluation Report
3. EXPERTISE REQUIRED
4. LOCATION AND DURATION
5. REPORTING
6. ADMINISTRATIVE INFORMATION
ANNEX I: KEY DOCUMENTS FOR THE EVALUATION
ANNEX II: LAYOUT AND STRUCTURE OF THE FINAL REPORT
ANNEX III: METHODOLOGICAL OBSERVATIONS
ANNEX IV: QUALITY ASSESSMENT GRID
ANNEX V: THE STANDARD DAC FORMAT FOR EVALUATION REPORT SUMMARIES





1. BACKGROUND

1.1. Overall context of the PSUP

The Participatory Slum Upgrading Programme II (PSUP II), financed by the 10th EDF, builds on
the previous 9th EDF PSUP I, which ended in December 2011. The origin of the Participatory
Slum Upgrading Programme (PSUP) goes back to the World Urban Fora held in Nairobi in 2002,
Barcelona in 2004 and Vancouver in 2006. Following the first two fora, the EC and UN-HABITAT
organised a Joint Regional Workshop on urban challenges in Nairobi in 2005. The workshop
recommended reinforcing cooperation between UN-HABITAT and ACP countries. In
parallel, UN-HABITAT decided in 2004 to develop a concrete programme in 15
ACP countries by implementing Rapid Urban Sector Profile Studies (RUSPS), a key element of
the agreed approach and actions, using funds generated from non-EDF sources.
The PSUP I was launched in February 2008 for a period of 48 months in 30 ACP countries and
implemented in two phases:
– Phase I (18 countries1): Urban profiling (city-wide and national cross-sector assessment
studies identifying most urgent urban development needs)
– Phase II (12 countries2): Action Planning (development of recommendations for urban
policies, city-wide slum upgrading strategies and project documents for slum upgrading
interventions including a resource mobilisation strategy).
An extension of the programme was approved by the Commission in December 2011 under the
10th EDF Intra-ACP envelope, and PSUP II started on 01/01/2012 (EUR 10 million).
Following the recommendations of the mid-term evaluation, this extension includes a so-called
phase III (pilot projects).
A final evaluation of PSUP I, launched in April 2012, highlighted the following: a poorly
designed intervention logic that did not clearly link activities to outputs and measurable outcomes;
insufficient capitalisation on international best practices; a lack of adjustment of the Rapid Urban
Profile Studies to the objectives of the EU Contractual Agreement; changes in the substance of
outputs without adjustment of the time schedule; considerable delays and a significant loss of
substance in the two implementation phases; and a failure to take timely corrective measures in
response to early warnings.
The main recommendations in view of PSUP II were the following:
– The principles and values of the PSUP should be partially re-engineered and more strongly
affirmed during its implementation;
– The rapid urban profiling approach should be refocused on slums and slum upgrading;
– The programme should be scaled down and centred on achievable outcomes.


1 11 African countries (Burundi, Cape Verde, Republic of Congo, Côte d'Ivoire, Gambia, Madagascar,
Malawi, Mali, Mauritius, Nigeria and Uganda), 4 Caribbean countries (Haiti, Jamaica, Antigua &
Barbuda and Trinidad & Tobago) and 3 Pacific countries (Fiji, Papua New Guinea and Solomon Islands).

2 Burkina Faso, Cameroon, DR Congo, Malawi, Ethiopia, Ghana, Kenya, Mozambique, Niger, Senegal,
Tanzania and Zambia.


1.2. Objectives and results of the PSUP

The overall goal of the programme is to improve the living conditions of the urban poor and to
contribute to MDG 7 (ensure environmental sustainability), target 7C (halve, by 2015, the
proportion of people without sustainable access to safe drinking water and basic sanitation) and
target 7D (achieve, by 2020, a significant improvement in the lives of at least 100 million
slum dwellers).
The project purpose is to strengthen the capacities of local, national and regional decision-makers,
institutions and key urban stakeholders in slum improvement and prevention by enabling them to
identify appropriate responses to the increased urbanisation of poverty and to efficiently implement
pro-poor urban policies, city-wide slum upgrading strategies and slum upgrading demonstration
projects, following an approach in three phases.

– Phase I (8 countries): city-wide and national cross-sector assessment studies
identifying the most urgent urban development needs;
– Phase II (18 countries): recommendations for urban policies, city-wide slum
upgrading strategies and project documents for slum upgrading interventions,
including a resource mobilisation strategy, to be implemented in 18 ACP
countries, of which 6 are additionally adapting and applying the
International Guidelines for Decentralisation and Basic Services for All;
– Phase III (9 countries): implementation of slum upgrading projects and strategies
for replication and up-scaling.
Table: Relationship and continuity between phases I, II and III of the PSUP

                                            Phase I            Phase II          Phase III
                                            Urban profiling    Action Planning   Pilot projects
PSUP activities undertaken before 2008
(non-EDF resources)                         12 new countries   –                 –
9th EDF PSUP 1                              18 new countries   12 countries      –
10th EDF PSUP 2                             9 new countries    18 countries      Up to 9 countries

Expected results of the programme are as follows:
For Phase I: ACP countries' national, city and community representatives and planning
authorities are enabled to assess urban development needs and urban poverty in ACP countries.
This includes 1) the acknowledgement of the participatory urban profiling approach as a tool for
urban planning by the relevant bodies in charge of urban development, and 2) the involvement
and empowerment of Non-Governmental Organisations (NGOs) and Community-Based
Organisations (CBOs) representing slum dwellers in assessing and following the burning issues
in their cities.
For Phase II: ACP countries' national, city and community representatives as well as planning
authorities are empowered to address slum dwellers' needs for better living conditions in their
cities with adequate planning tools and realistic resource mobilisation strategies – addressing and
including the slum population directly in designing slum upgrading programmes.

For Phase III: ACP countries' national, city and community representatives and planning
authorities are empowered to implement innovative slum upgrading projects through 1) increased
funding from various sources, 2) improved living conditions in selected slum areas through
demonstration projects and more comprehensive programmes and strategies, 3) prevention of
increased urbanisation of poverty and slum growth through reviewed policies and slum upgrading
strategies, and 4) opportunities for up-scaling and replication based on demonstration projects
assessed and implemented.

1.3. Key implementation arrangements

The programme is implemented in joint management with UN-Habitat. To this effect, a
contribution agreement (10th EDF) was signed:
– EDF contribution: €9.8M
– Total amount: €12.1M
– Duration: 4 years (01/2012 – 12/2015)
– Number of countries: 8 countries in phase I, 18 countries in phase II, 8 countries in phase III


2. DESCRIPTION OF THE ASSIGNMENT

2.1. Evaluation objectives and issues to be studied

2.1.1. Objectives and coverage
The Mid-Term Evaluation will provide the decision-makers in the ACP Secretariat, the European
Commission and the wider public with sufficient information to:
• Make an overall independent assessment of the performance of the programme,
paying particular attention to the impact of the programme actions against its objectives;
• Identify key lessons and propose practical recommendations for follow-up actions.
The present evaluation should cover the current implementation period of the 10th EDF
contribution but should also take into account the findings of the final evaluation of
PSUP I under the 9th EDF and assess the level of implementation of its recommendations
during PSUP II. In particular, the study should evaluate how country profiles (Phase I) have been
integrated into action plans (Phase II) and consequently into pilot projects (Phase III).
A sample of 12 countries has been selected for specific detailed case studies as part of this
evaluation, for which a specific and detailed field assessment will have to be carried out. The
evaluation team should, however, develop specific tools (such as questionnaires) to evaluate the
remaining countries covered under the PSUP, as the entire sample for this evaluation includes
all 34 countries covered by the programme.

2.1.2. Issues to be studied and Evaluation questions
The consultants shall verify, analyse and assess in detail the issues outlined in Annex II "Layout
and structure of the Final Report". The list of issues is not intended to be exhaustive. The issues
refer to the five evaluation criteria endorsed by the OECD-DAC (relevance, effectiveness,
efficiency, sustainability and impact) and to the EC-specific evaluation criteria (EC added value
and coherence). It should be noted that, while always taking these standard evaluation criteria
into account, this evaluation will be organised around a set of specific evaluation questions
(a maximum of 10). In such an approach, the criteria will be translated into specific questions,
and each question may address one or more of the criteria in its intent.

The consultants are also requested to verify, analyse and assess the integration and impact of
cross-cutting issues in the project. The consultants are required to use their professional judgement
and experience to review all relevant factors and to bring these to the attention of the ACP
Secretariat and the European Commission.
The evaluation questions will be identified in the first instance by the evaluation team during the
desk phase. The questions should include in their coverage the following main areas:
– Quality of project design: the appropriateness of project objectives to the problems, needs and
priorities of the intended target groups and beneficiaries that the project is supposed to address,
and to the physical and policy environment within which it operates. This should include an
assessment of the quality of project preparation and design, i.e. the logic and completeness of
the project planning process and the internal logic and coherence of the project design.
– Achievement of main objectives and effectiveness of the programme: the Consultant shall
identify all recorded results and impacts, including any unintended ones, and compare these to
those intended.
– Efficiency of the implementation to date: the extent to which funding, human resources,
regulatory and/or administrative resources contributed to, or hindered, the achievement of the
objectives and results. Special attention should be paid to the organisational set-up of the
programme and to the efficiency of relationships between the many stakeholders involved in
the implementation of the programme.
– Sustainability of the effects: an analysis of the extent to which the results and impact are
being, or are likely to be, maintained over time.
– Key cross-cutting issues: for example gender, environment, human rights, HIV/AIDS,
institutional capacity building, etc. Verification should be undertaken, on the one hand, of the
extent to which account has been taken of these priorities in the programme design and, on the
other hand, of the extent to which these issues have been reflected in the implementation
modalities and in the effects of the intervention.
– Co-ordination, complementarity and coherence with EU Member States, other donors and
with EU policies. In particular, coherence and complementarity with initiatives carried out at
the national and/or regional level by the EU and/or other donors should be assessed.

2.1.3. Approach and Methodology
For methodological guidance, refer to EuropeAid's evaluation methodology website, where guidance
is available both for evaluation managers (Commission staff) and for evaluation teams (consultants).
The evaluation is managed by DEVCO/E3 and will be steered by a Reference Group composed of
the concerned units at EuropeAid and the ACP Secretariat. The task manager responsible for the
project at EuropeAid will perform the role of evaluation manager.
The reference group members' main functions are:
– To aggregate and summarise the views of the Commission services and to act as an
interface between the consultants and the services, thereby supplementing bilateral contacts.
– To ensure that the evaluation team has access to, and has consulted, all relevant
information sources and documents related to the programme.
– To validate the evaluation questions.

– To discuss and comment on notes and reports delivered by the evaluation team.
Comments by individual group members are compiled into a single document by the
evaluation manager and subsequently transmitted to the evaluation team.
– To assist in feedback of the findings, conclusions, lessons and recommendations from
the evaluation.

2.2. The evaluation process

Once the external evaluation team has been contractually engaged, the evaluation process will be
carried out through three phases: a Desk Phase, a Field Phase and a Synthesis Phase, as described
below.

2.2.1. Desk phase – inception
In the inception stage of the Desk Phase, the relevant programming documents should be
reviewed, as well as documents shaping the wider strategy/policy framework. The evaluation
team will then analyse the logical framework as set up at the beginning of the programme cycle.
On the basis of the information collected, the evaluation team should:
– Describe the development co-operation context.
– Interview the programme management, EC services and key partners, including the
ACP Secretariat.
– Interview (with both experts) the UN-Habitat management team in Nairobi, Kenya
and in Brussels.
– Comment on the logical framework.
– Propose a set of evaluation questions3 and prepare explanatory comments for each,
justifying their relevance.
– Identify provisional indicators for each evaluation question and their means of
verification, and describe the analysis strategy.
– Propose the work plan for the finalisation of the first phase.
During the inception stage an inception report shall be prepared (see section 5 – reporting
requirements).

2.2.2. Desk phase – finalisation
In the finalisation stage of the Desk Phase, the evaluation team should carry out the following
tasks:
– Review the relevant available documents.
– Present an indicative methodology for the overall assessment of the programme.

3 An indicative list of evaluation questions covering the DAC evaluation criteria is presented in Annex II.
The appropriate evaluation questions should be elaborated on the basis of this set of issues.


– Present each evaluation question, stating the information already gathered and its
limitations, provide a first partial answer to the question, identify the issues still to be
covered and describe a full method to answer the question.
– Identify and present the list of tools to be applied in the Field Phase, and propose
appropriate methods of analysis of the information and data collected, indicating
any limitations of those methods;
– Present the finalised quantitative and qualitative indicators;
– Present the first elements of responses to the evaluation questions and the first
hypotheses to be tested with the stakeholders and in the field;
– List all preparatory steps already taken for the Field Phase.
– Propose a work plan for the Field Phase, including an indicative list of people to
be interviewed, surveys to be undertaken, dates of visit, itinerary, and the names of the
team members in charge. This plan has to be applied flexibly enough to accommodate
any last-minute difficulties in the field. If any significant deviation from the agreed
work plan or schedule is perceived as creating a risk for the quality of the evaluation,
it should be immediately discussed with the evaluation manager.

At the end of the desk phase a desk report shall be prepared (see section 5 – reporting
requirements). A meeting will be held with the reference group to present the desk report.

2.2.3. Field phase
Following acceptance of the desk phase report, the Evaluators shall undertake the field mission to
twelve ACP countries. Each country should be visited by one evaluator. (except the first visit to
UN-Habitat management team in Nairobi during the inception phase). The fieldwork shall be
undertaken on the basis set out in the inception and desk phase reports and approved by the
reference group.
The Field Phase will include field visits in the following countries:
– Phase I (2): Lesotho and Rwanda
– Phase II (5): Papua New Guinea, Fiji, Solomon Islands, Haiti and Jamaica
– Phase III (5): Cameroon, Ghana, Niger, Burkina Faso and Kenya
The field mission to Kenya should be conducted as the last field visit among the above-listed
countries. This will allow for follow-up discussions on the preliminary findings of the field visits
with the UN-Habitat management team at Headquarters.
The evaluation team should:
– Ensure adequate contact and consultation with, and involvement of, the different
stakeholders.
– Use the most reliable and appropriate sources of information and harmonise data
from different sources to allow ready interpretation.
– Summarise its field work at the end of the field phase, discuss the reliability and
coverage of data collection, and present its preliminary findings in a debriefing
meeting with the European Commission.

If during the course of the fieldwork any significant deviations from the agreed methodology
and/or schedule are deemed necessary, the Evaluators must obtain the approval of the
Evaluation Manager before applying them.

2.2.4. Synthesis phase
This phase is mainly devoted to the preparation of the draft final report. The consultants will
make sure that:
– Their assessments are objective and balanced, affirmations accurate and verifiable, and
recommendations realistic.
– When drafting the report, they will acknowledge clearly where changes in the desired
direction are known to be already taking place, in order to avoid misleading readers and
causing unnecessary irritation or offence.
If the evaluation manager considers the draft report of sufficient quality, he/she will circulate it
for comments to the reference group members.
On the basis of comments collected by the evaluation manager, the evaluation team has to amend
and revise the draft report. Comments requesting methodological quality improvements should be
taken into account, except where there is a demonstrated impossibility, in which case full
justification should be provided by the evaluation team. Comments on the substance of the report
may be either accepted or rejected. In the latter instance, the evaluation team is to motivate and
explain the reasons in writing.

2.2.5. Quality of the Final Evaluation Report
The quality of the final report will be assessed by the evaluation manager using a quality
assessment grid (see annexe IV). The explanation on how to fill this grid is available on the
following link:


3. EXPERTISE REQUIRED

The study will require the number of working days outlined below, to be divided equally between
one senior expert and one junior expert.
Experts who were involved in the previous evaluation of the PSUP are excluded from the
assignment in order to avoid potential conflicts of interest.
The composition of the team of experts should be balanced to enable complete coverage of the
different aspects of project evaluation (evaluation methods and techniques) as set out in these
terms of reference.

Both experts should have the following profile as minimum requirements:
– In-depth knowledge of project evaluation methods and techniques;
– Very good knowledge of the principles and working methods of project cycle management;
– Experience in developing countries (direct experience in ACP countries will be an asset);
– Excellent report-writing skills;
– Full working knowledge of English.
At least one expert should have a working knowledge of French.
In addition, the experts should cover the following fields of expertise:
• Expert Category I/Team Leader
– Minimum 12 years' experience directly related to urban development/local governance
donor-funded support programmes. Familiarity with slum/informal urban contexts is an
asset. Experience in multi-country programmes is an asset.
– Experience as team leader in a minimum of three project evaluations in the field of urban
development and/or local governance.
• Expert Category II
– Minimum 6 years' experience directly related to urban development/local governance
donor-funded support programmes. Familiarity with slum/informal urban contexts is an asset.
– Experience as an evaluator (member of an evaluation team) in a minimum of one mid-term
or final evaluation of a donor-funded support programme. Experience as a team member in
evaluations carried out in the field of urban development and/or local governance will be
considered a strong asset.


4. LOCATION AND DURATION

The assignment should start in November 2014 and should be carried out over a period of
maximum 6 months. Please note that the 6 months correspond to the total duration of the
assignment, which includes several periods for the provision of comments on the draft reports.
The effective period of performance is outlined below.
The place of implementation for the desk phase (inception phase) and the synthesis phase will be
the place of residence of the Evaluators, with missions to Brussels (if the experts are based outside
of Brussels).
Regarding the field phase, the following 12 countries (approximately 35% of all 34 PSUP
countries) will be visited: Lesotho, Rwanda, Papua New Guinea, Fiji, Solomon Islands, Haiti,
Jamaica, Cameroon, Ghana, Niger, Burkina Faso and Kenya.
The table below presents an indicative allocation of working-days per phase and activities, as
well as corresponding staff inputs allocated to each activity.




Desk Phase (home-based):
– Initial team meeting/interviews; draft evaluation questions and indicators; review of
existing documentation
– Interviews with UN-Habitat Nairobi
– Drafting inception report
– Submission of draft inception report
– Comments on the draft inception report (reference group)
– Submission of final inception report
– Development of tools and indicators
– Drafting desk report
– Submission of draft desk report
– Comments on draft desk report (reference group)
– Meeting with reference group
– Adjusting draft report
– Submission of final desk report

Field Phase (ACP countries):
– Field mission
– Debriefing meeting

Synthesis Phase (home-based):
– Drafting first draft final report
– Submission of draft final report
– Comments on the draft final report (reference group)
– Adjusting draft report
– Submission of final report
– Approval of final report

The 60 working days foreseen for field missions correspond to a maximum of 5 working days
(including transfer) per country (12 countries in total).
The Consultant is free to propose any alternative implementation schedule during the first
mission (inception report), including a reallocation of missions and reports to fit a revised work
plan, which must be approved by the European Commission.
DEVCO and/or the ACP Secretariat may accompany the Evaluation team on some field missions.


5. REPORTING

The reports must meet quality standards. Each report has to be illustrated, as appropriate, with
maps, graphs and tables; a map of the project's area(s) of intervention is required (to be attached
as an annex).
The consultant will submit the following reports in English:

– Inception report of maximum 12 pages. In the report, the consultant shall describe the
first findings of the study (on the basis of the issues listed in section 3.1), the foreseen
degree of difficulty in collecting data, and other encountered and/or foreseen difficulties,
in addition to the programme of work and staff mobilisation.
The inception report shall be submitted to the evaluation manager, who will provide
written comments within 5 working days of the submission of the draft report.
Comments shall be integrated into the report within 2 working days.


– Desk report of maximum 25 pages (main text, excluding annexes). The report shall
address the issues mentioned in section 3.2.

The draft desk report shall be submitted to the evaluation manager, who will provide
written comments within 5 working days of the submission of the draft report.
Comments shall be integrated into the report within 2 working days.

– Final report (of maximum 40 pages) using the structure set out in Annex II and taking
due account of the comments received from the reference group members.
The first draft final report shall be submitted to the evaluation manager, who will provide
written comments within 2 weeks. If the evaluation manager considers the report of
sufficient quality, he will circulate it for comments to the reference group, which will
convene to discuss it in the presence of the evaluators if necessary.
The second draft final report, amended on the basis of the comments expressed by the
reference group, shall be submitted within 1 week of receipt of the comments.

All reports will be submitted to the evaluation manager in DEVCO/E3 who will formally approve
the reports.
The inception report, desk report and draft final report shall be distributed in electronic format
only. The final report shall be distributed both electronically and in hard copy. A CD-ROM
with all documents has to be added to each printed report. 5 hard copies (in both English and
French) of the final report shall be sent to the European Commission. The final report shall be
translated into French once approved by the evaluation manager.
The consultant will include as an annex the DAC Format for Evaluation Report Summaries (see
Annex V). The report is to be disseminated under the full responsibility of the Commission.


6. ADMINISTRATIVE INFORMATION

The language of the specific contract is English.
The contract is a global price contract.
Saturday and Sunday can be included as working days if requested by the consultant.
A management team member will be required for the briefing in Brussels.
As stated in the Global ToR of the Framework Contract, the Contractor will make available
appropriate management and backstopping mechanisms, quality-control systems, secretariat and
any other support staff it considers necessary to implement the Contract. The support team will
provide all necessary logistical support both prior to and during the assignment, to allow the
experts to concentrate on their primary responsibilities.
Regarding the specific assignment, secretariat/office rental costs both at Headquarters and
during field missions – which may include rental, communications (fax, phone, mail, internet,
courier, etc.), report production and secretarial services, both in the Contractor's home office and
during field missions – are considered an overhead included within the fee rates of the experts.
The experts shall be fully equipped with portable computers, the necessary software and a
portable printer, including the paper necessary for printing reports and other documentation.
Only the following costs can be included in the consultant's financial offer as reimbursable costs:
travels, per diems and translation costs of the final report.



ANNEX I: KEY DOCUMENTS FOR THE EVALUATION

To be provided by the European Commission:
• Commission Decision;
• Contribution Agreements with UN-Habitat (for the 9th EDF and 10th EDF periods);
• Final evaluation report of PSUP I;
• Reports submitted by UN-Habitat;
• Outputs submitted by UN-Habitat;
• Relevant minutes/reports of monitoring meetings.

Note: The evaluation team has to identify and obtain any other document worth analysing,
through its interviews with people who are or have been involved in the design, management and
supervision of the project. Resource persons to collect information and data are to be sought in
the EC services, implementing body and / or public service in the partner countries.


ANNEX II: LAYOUT AND STRUCTURE OF THE FINAL REPORT

The final report should not be longer than 40 pages. Additional information on the overall
context, the programme or aspects of methodology and analysis will be confined to annexes.
The cover page of the report shall carry the following text:
''This evaluation is supported and guided by the European Commission and presented by [name of
consulting firm]. The report does not necessarily reflect the views and opinions of the European
Commission.''
The main sections of the evaluation report are as follows:
A tightly-drafted, to-the-point and free-standing Executive Summary is an essential component.
It should be short, no more than five pages. It will focus mainly on the key purpose or issues of
the evaluation, outline the main analytical points, and clearly indicate the main conclusions,
lessons learned and specific recommendations. Cross-references have to be made to the
corresponding page or paragraph numbers in the main text that follows.
A description of the programme and the evaluation, providing the reader with sufficient
methodological explanations to gauge the credibility of the conclusions and to acknowledge
limitations or weaknesses, where relevant.
A chapter presenting the evaluation questions and conclusive answers, together with evidence
and reasoning.
The report should be organised around the responses to the evaluation questions, which
systematically cover the DAC evaluation criteria: relevance, effectiveness, efficiency, impact
and sustainability, plus the coherence and added value specific to the Commission.
In such an approach, the criteria will be translated into specific questions. These questions are
intended to give a more precise and accessible form to the evaluation criteria and to articulate the
key issues of concern to stakeholders, thus optimising the focus and utility of the evaluation.
This annex proposes an indicative list of issues which deserve to be studied in a
project/programme evaluation. The evaluation should focus on a limited number of precise
issues/questions. It should ensure that there is a balance of evaluation criteria.
3.1 Problems and needs (Relevance)
The extent to which the objectives of the development intervention (project/programme) are
consistent with beneficiaries' requirements, country needs, global priorities and partners' and EC's
policies.
The analysis of relevance will focus on the following questions in relation to the design of the
project:


the extent to which the project has been consistent with, and supportive of, the policy and
programme framework within which the project is placed, in particular the EC's Country
Strategy Paper and National Indicative Programme, and the partner government's
development policy and sector policies;

the quality of the analyses of lessons learnt from past experience, and of sustainability

the project's coherence with current/on going initiatives;

the quality of the problem analysis and the project's intervention logic and logical
framework matrix, appropriateness of the objectively verifiable indicators of achievement;
the extent to which stated objectives correctly address the identified problems and social
needs, clarity and internal consistency of the stated objectives;

the extent to which the nature of the problems originally identified has changed;

the extent to which objectives have been updated in order to adapt to changes in the context;

the degree of flexibility and adaptability to facilitate rapid responses to changes in circumstances;

the quality of the identification of key stakeholders and target groups (including gender
analysis and analysis of vulnerable groups) and of institutional capacity issues;

the stakeholder participation in the design and in the management/implementation of the
project, the level of local ownership, absorption and implementation capacity;

the quality of the analysis of strategic options, of the justification of the recommended
implementation strategy, and of management and coordination arrangements;

the realism in the choice and quantity of inputs (financial, human and administrative
resources);

the analysis of assumptions and risks;

the appropriateness of the recommended monitoring and evaluation arrangements.

3.2 Achievement of purpose (Effectiveness)
The effectiveness criterion concerns how far the project's results were attained, and the project's
specific objective(s) achieved, or are expected to be achieved.
The analysis of Effectiveness will therefore focus on such issues as:

whether the planned benefits have been delivered and received, as perceived by all key
stakeholders (including women and men and specific vulnerable groups);

whether intended beneficiaries participated in the intervention

in institutional reform projects, whether behavioural patterns have changed in the
beneficiary organisations or groups at various levels; and how far the changed institutional
arrangements and characteristics have produced the planned improvements (e.g. in
communications, productivity, or the ability to generate actions which lead to economic and
social development);

if the assumptions and risk assessments at results level turned out to be inadequate or
invalid, or unforeseen external factors intervened, how flexibly management has adapted to
ensure that the results would still achieve the purpose; and how well has it been supported
in this by key stakeholders including Government, Commission (HQ and locally), etc.;

whether the balance of responsibilities between the various stakeholders was appropriate,
which accompanying measures have been taken by the partner authorities;

how unintended results have affected the benefits received, positively or negatively, and
whether they could have been foreseen and managed;

whether any shortcomings were due to a failure to take account of cross-cutting or overarching issues such as gender, environment and poverty during implementation;

3.3 Sound management and value for money (Efficiency)
The efficiency criterion concerns how well the various activities transformed the available resources
into the intended results (sometimes referred to as outputs), in terms of quantity, quality and
timeliness. Comparison should be made against what was planned.
The assessment of Efficiency will therefore focus on such issues as:

the quality of day-to-day management, for example in:
a. operational work planning and implementation (input delivery, activity management
and delivery of outputs), and management of the budget (including cost control and
whether an inadequate budget was a factor);
b. management of personnel, information, property, etc.;
c. whether management of risk has been adequate, i.e. whether flexibility has been
demonstrated in response to changes in circumstances;
d. relations/coordination with local authorities, institutions, beneficiaries, other donors;
e. the quality of information management and reporting, and the extent to which key
stakeholders have been kept adequately informed of project activities (including
beneficiaries/target groups);

respect for deadlines;

The extent to which the costs of the project have been justified by the benefits, whether or not
expressed in monetary terms, in comparison with similar projects or known alternative
approaches, taking account of contextual differences and eliminating market distortions.

Partner country contributions from local institutions and government (e.g. offices, experts,
reports, tax exemption, as set out in the LogFrame resource schedule), target beneficiaries
and other local parties: have they been provided as planned?

Commission HQ/Delegation inputs (e.g. procurement, training, contracting, either direct or
via consultants/bureaux): have they been provided as planned?

Technical assistance: how well did it help to provide appropriate solutions and develop
local capacities to define and produce results?

Quality of monitoring: its existence (or not), accuracy and flexibility, and the use made of
it; adequacy of baseline information;

Did any unplanned outputs arise from the activities so far?

3.4 Achievement of wider effects (Impact)
The term impact denotes the relationship between the project’s specific and overall objectives.

At Impact level the final or ex-post evaluation will make an analysis of the following aspects:

The extent to which the objectives of the project have been achieved as intended, in particular
the project's planned overall objective.

whether the effects of the project:
a) have been facilitated/constrained by external factors
b) have produced any unintended or unexpected impacts, and if so how have these
affected the overall impact.
c) have been facilitated/constrained by project/programme management, by coordination arrangements, by the participation of relevant stakeholders
d) have contributed to economic and social development
e) have contributed to poverty reduction
f) have made a difference in terms of cross-cutting issues like gender equality,
environment, good governance, conflict prevention etc.
g) were spread between economic growth, salaries and wages, foreign exchange, and

3.5 Likely continuation of achieved results (Sustainability)
The sustainability criterion relates to whether the positive outcomes of the project and the flow of
benefits are likely to continue after external funding or non-funding support interventions
(such as policy dialogue and coordination) come to an end.
The final evaluation will make an assessment of the prospects for the sustainability of benefits on
the basis of the following issues:

the ownership of objectives and achievements, e.g. how far all stakeholders were consulted
on the objectives from the outset, and whether they agreed with them and continue to
remain in agreement;

policy support and the responsibility of the beneficiary institutions, e.g. how far donor
policy and national policy correspond, and the potential effects of any policy changes;
how far the relevant national, sectoral and budgetary policies and priorities are affecting the
project positively or adversely; and the level of support from governmental, public,
business and civil society organizations.

institutional capacity, e.g. of the Government (e.g. through policy and budgetary support)
and counterpart institutions; the extent to which the project is embedded in local
institutional structures; if it involved creating a new institution, how far good relations with
existing institutions have been established; whether the institution appears likely to be
capable of continuing the flow of benefits after the project ends (is it well-led, with
adequate and trained staff, sufficient budget and equipment?); whether counterparts have
been properly prepared for taking over, technically, financially and managerially;

the adequacy of the project budget for its purpose, in particular its phasing-out prospects;

socio-cultural factors, e.g. whether the project is in tune with local perceptions of needs and
of ways of producing and sharing benefits; whether it respects local power structures,
status systems and beliefs, and if it sought to change any of those, how well-accepted are
the changes both by the target group and by others; how well it is based on an analysis of
such factors, including target group/ beneficiary participation in design and implementation;
and the quality of relations between the external project staff and local communities.

financial sustainability, e.g. whether the products or services being provided are affordable
for the intended beneficiaries and are likely to remain so after funding ends; whether
enough funds are available to cover all costs (including recurrent costs), and will continue
to be after funding ends; and economic sustainability, i.e. how well the benefits
(returns) compare to those of similar undertakings once market distortions are eliminated.

technical (technology) issues, e.g. whether (i) the technology, knowledge, process or
service introduced or provided fits in with existing needs, culture, traditions, skills or
knowledge; (ii) alternative technologies are being considered, where possible; and (iii) the
degree to which the beneficiaries have been able to adapt to and maintain the technology
acquired without further assistance.

wherever relevant, whether cross-cutting issues such as gender equity, environmental impact
and good governance were appropriately accounted for and managed from the outset of the
project.

3.6 Mutual reinforcement (coherence)
The extent to which the activities undertaken allow the European Commission to achieve its
development policy objectives without internal contradiction and without contradiction with other
Community policies, and the extent to which they complement the partner country's policies and
other donors' interventions.
Considering other related activities undertaken by Government or other donors, at the same level
or at a higher level:

the likelihood that results and impacts will mutually reinforce one another

the likelihood that results and impacts will duplicate or conflict with one another

Connection to higher level policies (coherence)
Extent to which the programme (its objectives, targeted beneficiaries, timing, etc.):

is likely to contribute to / contradict other EC policies

is in line with evolving strategies of the EC and its partners

3.7 EC value added
Connection to the interventions of Member States. Extent to which the programme (its
objectives, targeted beneficiaries, timing, etc.):

is complementary to the intervention of EU Member States in the region/country/area

is co-ordinated with the intervention of EU Member States in the region/country/area

is creating actual synergy (or duplication) with the intervention of EU Member States

involves concerted efforts by EU Member States and the EC to optimise synergies and
avoid duplication.


The consultants will make an assessment of the project’s strategy and activities in the field of
visibility, information and communication, the results obtained and the impact achieved with these
actions in both the beneficiary country and the European Union countries.
A chapter synthesising all answers to evaluation questions into an overall assessment of the
programme. The detailed structure of the overall assessment should be refined during the
evaluation process. The relevant chapter has to articulate all the findings, conclusions and lessons
in a way that reflects their importance and facilitates reading. The structure should not follow
the evaluation questions, the logical framework or the seven evaluation criteria.
6.1 Conclusions
This chapter introduces the conclusions relative to each question. The conclusions should be
organised in clusters in the chapter in order to provide an overview of the assessed subject.
Note: The chapter should not follow the order of the questions or that of the evaluation
criteria (effectiveness, efficiency, coherence, etc.)
It should feature references to the findings (responses to the evaluation questions) or to annexes
showing how the conclusions derive from data, interpretations, analysis and judgement.
The report should include a self-assessment of the methodological limits that may restrain the
range or use of certain conclusions.
The conclusion chapter features not only the successes observed but also the issues requiring
further thought on modifications or a different course of action.
The evaluation team presents its conclusions in a balanced way, without systematically favouring
the negative or the positive conclusions.
A paragraph or sub-chapter should pick up the 3 or 4 major conclusions, organised in order of
importance while avoiding repetition. This practice makes it easier to communicate the
evaluation messages that are addressed to the Commission.
If possible, the evaluation report identifies one or more transferable lessons, which are
highlighted in the executive summary and presented in appropriate seminars or meetings so that
they can be capitalised on and transferred.

6.2 Recommendations
They are intended to improve the PSUP in the framework of the cycle under way, and to prepare
the design of a new intervention for the next cycle.
Note: The recommendations must be related to the conclusions without replicating them. A
recommendation derives directly from one or more conclusions.

The ultimate value of an evaluation depends on the quality and credibility of the
recommendations offered. Recommendations should therefore be as realistic, operational and
pragmatic as possible; that is, they should take careful account of the circumstances currently
prevailing in the context of the project, and of the resources available to implement them both
locally and in the Commission.
They could concern policy, organisational and operational aspects for both the national
implementing partners and for the Commission; the pre-conditions that might be attached to
decisions on the financing of similar projects; and general issues arising from the evaluation in
relation to, for example, policies, technologies, instruments, institutional development, and
regional, country or sectoral strategies.
Recommendations must be clustered and prioritised, carefully targeted to the appropriate
audiences at all levels, especially within the Commission structure (the programme task manager
and the evaluation manager will often be able to advise here).
The report should include the following annexes:

The Terms of Reference of the evaluation

The names of the evaluators and their companies (CVs should be shown, but summarised
and limited to one page per person)

Detailed evaluation method including: options taken, difficulties encountered and
limitations. Detail of tools and analyses.

Logical Framework matrices (original and improved/updated)

Map of project area, if relevant

List of persons/organisations consulted

Literature and documentation consulted

Other technical annexes (e.g. statistical analyses, tables of contents and figures)

A one-page DAC summary, following the format in Annex V.



The evaluation team should refer to the PSUP's logical framework.
It is suggested that the evaluation team carry out [here refer to the main tools that are envisaged
for data collection, if any (the length of this section may range from very short to rather long,
depending on whether or not the issues have been a subject of preliminary reflection), for
instance:]

a rapid appraisal through a field visit and a series of interviews
a questionnaire survey involving a sample of beneficiaries
a series of focus groups involving beneficiaries and non-beneficiaries
a series of case studies

The proposal in response to these terms of reference should identify any language and/or cultural
gap and explain how it will be bridged.
The programme is to be judged more from the angle of the beneficiaries’ perceptions of benefits
received than from the managers’ perspective of outputs delivered or results achieved.
Consequently, interviews and surveys should focus on outsiders (beneficiaries and other affected
groups beyond beneficiaries) as much as insiders (managers, partners, field level operators). The
proposal in response to these terms of reference, as well as further documents delivered by the
evaluation team, should clearly state the proportion of insiders and outsiders among interviews
and surveys.
A key methodological issue is whether observed or reported change can be partially or entirely
attributed to the programme, or how far the programme has contributed to such change. The
evaluation team should identify attribution / contribution problems where relevant and carry out
its analyses accordingly.
It must be clear for all evaluation team members that the evaluation is neither an opinion poll nor
an opportunity to express one’s preconceptions. This means that all conclusions are to be based
on facts and evidence through clear chains of reasoning and transparent value judgements. Each
value judgement is to be made explicit as regards:

the aspect of the project/programme being judged (its design, an implementation
procedure, a given management practice, etc.)
the evaluation criterion used (relevance, effectiveness, efficiency, sustainability,
impact, coherence, EC value added)

The evaluation report should not systematically be biased towards positive or negative
conclusions. Criticisms are welcome if they are expressed in a constructive way. The evaluation
team clearly acknowledges where changes in the desired direction are already taking place, in
order to avoid misleading readers and causing unnecessary offence.


*This grid is annexed to the ToRs for the information of the consultants.
The quality of the final report will be assessed by the evaluation manager using the following
quality assessment grid where the rates have the following meaning:
1 = unacceptable = criteria mostly not fulfilled or totally absent
2 = weak = criteria partially fulfilled
3 = good = criteria mostly fulfilled
4 = very good = criteria entirely fulfilled
5 = excellent = criteria entirely fulfilled in a clear and original way

Concerning the criteria and sub-criteria below, the evaluation
report is rated:
1. Meeting needs:
a) Does the report precisely describe what is evaluated, including
the intervention logic in the form of a logical framework?
b) Does the report clearly cover the requested period of time, as
well as the target groups and socio-geographical areas linked to
the programme?
c) Has the evolution of the programme been taken into account in
the evaluation process?
d) Does the evaluation deal with and respond to all ToR requests?
If not, are justifications given?
2. Appropriate design
a) Does the report explain how the evaluation design takes stock
of the rationale of the programme, cause-effect relationships,
impacts, policy context, stakeholders' interests, etc.?
b) Is the evaluation method clearly and adequately described in
enough detail?
c) Are there well-defined indicators selected in order to provide
evidence about the project / programme and its context?
d) Does the report point out the limitations, risks and potential
biases associated with the evaluation method?
3. Reliable data
a) Is the data collection approach explained and is it coherent with
the overall evaluation design?
b) Are the sources of information clearly identified in the report?
c) Are the data collection tools (samples, focus groups, etc.)
applied in accordance with standards?
d) Have the collected data been cross-checked?
e) Have data collection limitations and biases been explained and discussed?
4. Sound analysis
a) Is the analysis based on the collected data?
b) Is the analysis clearly focused on the most relevant cause/effect
assumptions underlying the intervention logic?
c) Is the context adequately taken into account in the analysis?
d) Are inputs from the most important stakeholders used in a
balanced way?
e) Are the limitations of the analysis identified, discussed and
presented in the report, as well as the contradictions with available
knowledge, if there are any?
5. Credible findings
a) Are the findings derived from the data and analyses?
b) Is the generalisability of findings discussed?
c) Are interpretations and extrapolations justified and supported
by sound arguments?
6. Valid conclusions
a) Are the conclusions coherent and logically linked to the findings?
b) Does the report reach overall conclusions on each of the five
DAC criteria?
c) Are conclusions free of personal or partisan considerations?
7.Useful recommendations
a) Are recommendations coherent with conclusions?
b) Are recommendations operational, realistic and sufficiently
explicit to provide guidance for taking action?
c) Do the recommendations cater for the different target
stakeholders of the evaluation?
d) Where necessary, have the recommendations been clustered and prioritised?
8.Clear report
a) Does the report include a relevant and concise executive summary?
b) Is the report well structured and adapted to its various audiences?
c) Are specialised concepts clearly defined and not used more than
necessary? Is there a list of acronyms?
d) Is the length of the various chapters and annexes well balanced?
Considering the 8 previous criteria, what is the overall quality
of the report?
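The grid does not prescribe how the eight criterion ratings combine into the overall quality judgement, which remains with the evaluation manager. Purely as an illustration of the scoring scale, the tabulation could be sketched as follows (the criterion names are taken from the grid above; the unweighted mean is an assumption, not part of the ToR):

```python
# Illustrative sketch only: tabulate the 1-5 ratings of the eight quality
# criteria from the assessment grid. The unweighted mean is an assumed
# aggregation; the ToR leaves the overall judgement to the evaluation manager.

CRITERIA = [
    "Meeting needs", "Appropriate design", "Reliable data", "Sound analysis",
    "Credible findings", "Valid conclusions", "Useful recommendations",
    "Clear report",
]
LABELS = {1: "unacceptable", 2: "weak", 3: "good", 4: "very good", 5: "excellent"}


def summarise(ratings):
    """Pair each criterion with its rating and label; return rows and a mean."""
    if len(ratings) != len(CRITERIA) or any(r not in LABELS for r in ratings):
        raise ValueError("expected eight ratings between 1 and 5")
    rows = [(name, r, LABELS[r]) for name, r in zip(CRITERIA, ratings)]
    return rows, sum(ratings) / len(ratings)


rows, mean = summarise([4, 3, 3, 4, 4, 3, 4, 4])
```

In this hypothetical example the mean of the eight ratings is 3.625, i.e. between "good" and "very good"; any real weighting of the criteria would be for the evaluation manager to decide.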






Evaluation Title (and Reference)
(central, 4 lines maximum)

Subject of the Evaluation
(5 lines max. on the project, organisation, or issue/theme being evaluated)

Evaluation Description
Purpose (3 lines max)
Methodology (3 lines max)

Main Findings
Clearly distinguishing possible successes/obstacles and the like where possible (25 lines max)

Feedback (5 lines max)
Donor: European Commission
DAC sector:
Subject of evaluation:
Efficiency, effectiveness and impact.
Programme and budget line concerned:
Type of evaluation: ( ) ex ante / ( x ) intermediate / ( ) ex post
Timing: Start date: / Completion date:
Date of report:
Language:
Author(s):
N° vol./pages:
Contact person:
Cost: Euro
Steering group: Yes/No
