Consultant - Independent evaluation of the project “Applied research in ecology and social sciences for sustainable management of Central Africa’s forest ecosystems” (RESSAC, 2021–2026)
Posted 2026-05-06
The evaluation will cover the full programme period (Nov 2021–Nov 2026) and
all four expected results. It will include both programme-level performance and
a purposive sample of research consortia as case studies to examine pathways
from research outputs to uptake and outcome-level change.
The geographic scope will be the COMIFAC/CEEAC region and other countries
covered by RESSAC-funded research activities. The evaluation team will propose
a feasible sampling plan during inception, balancing country coverage with depth.
Cross-cutting dimensions that the evaluation must address include:
- Integration of natural science (ecology) and social science: interdisciplinarity in research design, field implementation, analysis, and translation into usable recommendations.
- Uptake pathways, mechanisms and evidence of use in policy processes, operational decision-making and practice by key actor groups.
- Individual and institutional capacities (scientific writing, project formulation, research supervision), including post-doctoral and Master-level support, and enabling administrative/financial capacities. The evaluation will also analyze the contribution of post-doctoral fellows to knowledge production, scientific animation, visibility, and the initiative’s success.
- Involvement of IPLC and other stakeholders in research and dissemination; gender responsiveness where relevant to the research portfolio.
Indicative key evaluation questions (organized by evaluation criteria)
The evaluation will be guided by criteria commonly used for research programme
evaluations, including relevance, scientific quality, efficiency,
effectiveness, impact (with an emphasis on outcome-level influence), and
sustainability. The following is a reduced list of indicative questions (to be
finalized during inception) that maintains balanced coverage of themes and key
evaluation priorities.
Relevance and coherence
To what extent did RESSAC address priority problems and evidence needs for
sustainable management of Central Africa’s forest ecosystems, as identified by
key decision-makers and practitioners?
To what extent is the programme logic (research – capacity – ICF – uptake)
coherent and plausible, and which assumptions/conditions proved decisive (or
fragile)?
Scientific quality, interdisciplinarity and knowledge production
What is the quality, rigor and credibility of the research produced
(biophysical and social sciences), and how is quality ensured at consortium and
programme levels?
To what extent did RESSAC effectively promote and operationalize
interdisciplinary approaches (integrated questions, methods, syntheses,
articulation across scales)?
Effectiveness, results and uptake/use
To what extent were the expected results achieved, and what explains variations
across consortia and countries?
To what extent did funded research produce useful deliverables (publications,
data, methods, tools, policy briefs), and are these products accessible and fit
for use?
What evidence exists of appropriation and use of RESSAC outputs by target
groups, and what observable outcome-level changes result (decisions, practices,
strategies, institutional processes)?
To what extent was the ICF strategy effective in moving beyond publications
toward dissemination, training and uptake (portal, briefs, events, etc.)?
Capacities, post-docs and unexpected outcomes
To what extent did the programme strengthen capacities of Central African
institutions and researchers (including post-docs and Master-level), and what
was the contribution of postdoctoral fellows to the visibility and success of
the initiative (scientific production, mentoring/scientific animation,
partnerships)?
What unexpected outcomes (positive or negative) emerged (partnerships, policy
windows, spillovers, reputation), and why?
Governance, efficiency and implementation learning (including MTE)
1. Governance & management: Did governance and management arrangements
(programme and consortia) enable timely, high-quality implementation, as
well as effective partner involvement in knowledge co-production and use
of results?
2. Bottlenecks & MTE: What were the main bottlenecks
(mobility/visa, administrative capacities, transfers, reporting), how were
they managed, and to what extent were lessons/recommendations from the
mid-term evaluation taken up?
Impact, sustainability and forward-looking perspectives (RESSAC 2)
1. Credible influence: What credible contribution of RESSAC-supported
research to observed policy/practice influence can be established
(including early signals and pathways still unfolding)?
2. Sustainability & future options: How likely are results and capacities
to be sustained beyond the project, and what design options and strategic
choices should guide a potential “RESSAC 2” (with what supporting
evidence)?
Methodology and evaluation approach
The evaluation will use a theory-based, mixed-methods approach suited to
research programmes where outcomes may occur through multiple contribution
pathways and with time lags. The team is expected to triangulate evidence
across sources and stakeholder perspectives and to be explicit about the
limits of attribution and contribution claims.
Overall design
- A programme-level assessment of programme results, governance and enabling systems.
- Theory-based methods to assess outcome-level change and uptake pathways (e.g., outcome harvesting and/or contribution analysis) for selected cases.
- Case studies of a purposive sample of consortia to examine relevance, interdisciplinarity, quality, dissemination and uptake.
Sampling strategy (to be finalized in inception)
The evaluation team will propose a sampling strategy that is feasible and
defensible, balancing breadth and depth. At minimum, the sample should:
- Cover a mix of country contexts and disciplinary profiles (ecology-heavy, social science-heavy, and explicitly integrated consortia).
- Include consortia at different stages (completed in 2024/2025 and those finalizing in 2026) to assess both early outcomes and emerging pathways.
- Include cases with stronger evidence of uptake (e.g., engagement in national policy processes) as well as cases with weaker uptake, to understand enabling and constraining factors.
- Cover IPLC-related themes and gender-relevant research where applicable.
Analysis and synthesis
- Reconstruction of the theory of change / results pathway model during inception, including key assumptions and uptake pathways.
- Qualitative analysis (coding and thematic synthesis) of interview and document data.
- Descriptive quantitative analysis of portfolio indicators (e.g., outputs, trainings, dissemination metrics) and survey results.
- Systematic triangulation to identify patterns, explanations and actionable recommendations.
Limitations and mitigation
The team will document limitations (e.g., time lags in policy influence,
incomplete monitoring data, access constraints) and propose mitigation
strategies (triangulation, careful case selection, explicit contribution
claims).
Data availability and collection
The evaluation will draw on programme documentation and existing monitoring
information, complemented by primary data collection with key stakeholders.
Data sources and methods (indicative)
- Document review: programme documents, annual reports, logframe and monitoring data, consortium final reports, publications and knowledge products.
- Key informant interviews (remote and in-person): CIFOR-ICRAF team, EU stakeholders, research partners, post-docs and students, and intended users (field actors, authorities, CSOs, etc.).
- Surveys: structured surveys of consortium leads/post-docs and/or selected user groups to document uptake, capacity changes and perceptions of usefulness.
- Publications review: mapping of publications and products (including basic bibliometrics where relevant), and assessment of quality against defined criteria (relevance, rigor, credibility, accessibility).
- Uptake assessment: structured review of uptake evidence (citations, minutes, participation in trainings, adoption decisions) and contribution analysis/process tracing for case studies.
Data management and ethics
The evaluation team will apply informed consent procedures, ensure
confidentiality of interviewees, and comply with applicable safeguarding and
data protection requirements.
A set of key documents will be made available to the evaluation team. The team
may request additional materials, including consortium final reports,
consolidated monitoring data, and evidence of uptake.