Consultant - Independent evaluation of the project “Applied research in ecology and social sciences for sustainable management of Central Africa’s forest ecosystems” (RESSAC, 2021–2026)

Posted 2026-05-06
Remote, USA · Full-time · Immediate start

The evaluation will cover the full programme period (Nov 2021–Nov 2026) and all four expected results. It will include both programme-level performance and a purposive sample of research consortia as case studies to examine pathways from research outputs to uptake and outcome-level change.

Geographic scope will be the COMIFAC/CEEAC region and other countries covered by RESSAC-funded research activities. The evaluation team will propose a feasible sampling plan during inception, balancing country coverage with depth.

Cross-cutting dimensions that the evaluation must address include:

  • Integration of biophysical
    (ecology) and social science: interdisciplinarity in research design, field
    implementation, analysis, and translation into usable recommendations.

  • Research and knowledge uptake:
    pathways, mechanisms and evidence of use in policy processes, operational
    decision-making and practice by key actor groups.

  • Capacity strengthening: individual
    and institutional capacities (scientific writing, project formulation, research
    supervision), including post-doctoral and Master-level support, and enabling
    administrative/financial capacities. The evaluation will also analyze the
    contribution of post-doctoral fellows to knowledge production, scientific
    animation, visibility, and the initiative’s success.

  • Equity and inclusion: engagement
    of Indigenous Peoples and Local Communities (IPLC) and other stakeholders in
    research and dissemination; gender responsiveness where relevant to the
    research portfolio.

Indicative key evaluation questions (organized by evaluation criteria)

The evaluation will be guided by criteria commonly used for research programme evaluations, including relevance, scientific quality, efficiency, effectiveness, impact (with an emphasis on outcome-level influence), and sustainability. The questions below are a reduced, indicative list, to be finalized during inception, that maintains balanced coverage of themes and key evaluation priorities.

Relevance and coherence

  1. Relevance:
    To what extent did RESSAC address priority problems and evidence needs for
    sustainable management of Central Africa’s forest ecosystems, as identified by
    key decision-makers and practitioners?

  2. Coherence of the programme logic:
    To what extent is the programme logic (research – capacity – ICF – uptake)
    coherent and plausible, and which assumptions/conditions proved decisive (or
    fragile)?

Scientific quality, interdisciplinarity and knowledge production

  1. Scientific quality:
    What is the quality, rigor and credibility of the research produced
    (biophysical and social sciences), and how is quality ensured at consortium and
    programme levels?

  2. Interdisciplinarity:
    To what extent did RESSAC effectively promote and operationalize
    interdisciplinary approaches (integrated questions, methods, syntheses,
    articulation across scales)?

Effectiveness, results and uptake/use

  1. Achievement of expected results:
    To what extent were the expected results achieved, and what explains variations
    across consortia and countries?

  2. Outputs and usefulness:
    To what extent did funded research produce useful deliverables (publications,
    data, methods, tools, policy briefs), and are these products accessible and fit
    for use?

  3. Uptake and outcome-level change:
    What evidence exists of appropriation and use of RESSAC outputs by target
    groups, and what observable outcome-level changes result (decisions, practices,
    strategies, institutional processes)?

  4. ICF / “last mile”:
    To what extent was the ICF strategy effective in moving beyond publications
    toward dissemination, training and uptake (portal, briefs, events, etc.)?

Capacities, post-docs and unexpected outcomes

  1. Capacities and post-docs:
    To what extent did the programme strengthen capacities of Central African
    institutions and researchers (including post-docs and Master-level), and what
    was the contribution of postdoctoral fellows to the visibility and success of
    the initiative (scientific production, mentoring/scientific animation,
    partnerships)?

  2. Unexpected outcomes:
    What unexpected outcomes (positive or negative) emerged (partnerships, policy
    windows, spillovers, reputation), and why?

Governance, efficiency and implementation learning (including the mid-term evaluation, MTE)

1. Governance & management: To what extent did governance
and management arrangements (programme and consortia) enable timely,
high-quality implementation, as well as effective partner involvement in
knowledge co-production and use of results?

2. Bottlenecks & MTE: What were the main bottlenecks
(mobility/visas, administrative capacities, transfers, reporting), how were
they managed, and to what extent were lessons and recommendations from the
mid-term evaluation taken up?

Impact, sustainability and forward-looking perspectives (RESSAC 2)


1. Credible influence: To what extent can a credible contribution of
RESSAC-supported research to observed changes in policy and practice be
established (including early signals and pathways still unfolding)?

2. Sustainability & future options: How likely are results and capacities
to be sustained beyond the project, and what design options and strategic
choices should guide a potential “RESSAC 2” (with what supporting evidence)?

Methodology and evaluation approach

The evaluation will use a theory-based, mixed-methods approach suited to research programmes in which outcomes may emerge through multiple contribution pathways and with significant time lags. The team is expected to triangulate evidence across sources and stakeholder perspectives and to be explicit about the limits of attribution and contribution claims.

Overall design

  • Portfolio-level assessment of
    programme results, governance and enabling systems.

  • Contribution-focused assessment of
    outcome-level change and uptake pathways (e.g., outcome harvesting and/or
    contribution analysis) for selected cases.

  • Comparative case studies of a
    purposive sample of consortia to examine relevance, interdisciplinarity,
    quality, dissemination and uptake.

Sampling strategy (to be finalized during inception)

The evaluation team will propose a sampling strategy that is feasible and defensible, balancing breadth and depth. At minimum, the sample should:

  • Cover a mix of thematic clusters
    and disciplinary profiles (ecology-heavy, social science-heavy, and explicitly
    integrated consortia).

  • Include consortia at different
    stages (completed in 2024/2025 and those finalizing in 2026) to assess both
    early outcomes and emerging pathways.

  • Include cases with early signals
    of uptake (e.g., engagement in national policy processes) as well as cases with
    weaker uptake, to understand enabling and constraining factors.

  • Ensure representation of
    IPLC-related themes and gender-relevant research where applicable.

Analysis and synthesis

  • Develop a refined theory of change
    / results pathway model during inception, including key assumptions and uptake
    pathways.

  • Qualitative analysis (coding and
    thematic synthesis) of interview and document data.

  • Quantitative descriptive analysis
    of portfolio indicators (e.g., outputs, trainings, dissemination metrics) and
    survey results.

  • Cross-case comparison and
    triangulation to identify patterns, explanations and actionable
    recommendations.

Limitations and mitigation

  • The evaluation must transparently
    document limitations (e.g., time lags in policy influence, incomplete
    monitoring data, access constraints) and propose mitigation strategies
    (triangulation, careful case selection, explicit contribution claims).

Data availability and collection

The evaluation will draw on programme documentation and existing monitoring information, complemented by primary data collection with key stakeholders.

Data sources and methods (indicative)

  • Document review: project design
    documents, annual reports, logframe and monitoring data, consortium final
    reports, publications and knowledge products.

  • Key informant interviews (remote
    and in-person): CIFOR-ICRAF team, EU stakeholders, research partners, post-docs
    and students, and intended users (field actors, authorities, CSOs, etc.).

  • Surveys (if relevant): short,
    structured surveys of consortium leads/post-docs and/or selected user groups to
    document uptake, capacity changes and perceptions of usefulness.

  • Research outputs and quality
    review: mapping of publications and products (including basic bibliometrics
    where relevant), assessment of quality against defined criteria (relevance,
    rigor, credibility, accessibility).

  • Policy and practice tracing:
    structured review of uptake evidence (citations, minutes, participation in
    trainings, adoption decisions) and contribution analysis/process tracing for
    case studies.

Data management and ethics

The evaluation team will apply informed consent procedures, ensure confidentiality of interviewees, and comply with applicable safeguarding and data protection requirements.

A set of key documents will be made available to the evaluation team. The team may request additional materials, including consortium final reports, consolidated monitoring data, and evidence of uptake.


