Final Internal Performance Review Toolkit

March 17, 2023 • UPDATED September 24, 2024

Download the introduction ▸
Download the long-duration-program toolkit (14.9 MB ZIP) ▸
Download the short-duration-program toolkit (14.6 MB ZIP) ▸

Chrome users: If Chrome does not download the ZIP file as a ZIP file, right-click the link, select “Save link as,” and approve the download.

 

About the Toolkit  

Mercy Corps’ Monitoring, Evaluation, and Learning (MEL) minimum standards state that all programs must conduct an Internal Performance Review (IPR) at the end of the program; this is the “Final” Internal Performance Review (FIPR).  

The FIPR is a program evaluation methodology that draws deeply on the collective experience and knowledge of the team members who design and implement the program’s interventions. It was also designed as an evaluation capacity strengthening tool for MEL and non-MEL program staff. For this reason, it is conducted internally by the program team and undertaken for internal use at the program and organizational levels.

Because the FIPR does not collect primary data from participants, should a more traditional final (program/performance) evaluation be required for the program, one need only add a few elements to the FIPR’s SOW to fulfill those additional evaluation requirements. Completing an FIPR before an external (final) evaluation helps ensure a smoother, faster, and more complete external evaluation, so none of the time and labor spent conducting the FIPR is ever wasted.

What is new in version 2 of the toolkit?

Version 1 of the FIPR toolkit was released in December 2022 and Version 2 in August 2024. V2 improves and simplifies nearly all of the V1 tools and adds several new ones, including facilitator guidance for running Small Group Discussions (SGDs), extensive hints and tips that make the FIPR templates easier to use, and an expanded scoring system that captures more granularity in the levels of performance assessed. There are now also two variants of the toolkit: one for long-duration programs (an original implementation period of more than two years) and one for short-duration programs (two years or less). All revisions and additions are based on feedback from users of the V1 toolkit.

Contents  

Tool #1: FIPR Generalized Events  

This tool gives the user a general understanding of the events involved in completing an FIPR and the sequencing of those events. These events are now more explicit and comprehensive than those in V1. They are also organized into four phases, instead of three, with a suggested time period during which each phase should be completed. Furthermore, the tool can now be edited by the user to construct a monitorable FIPR work plan.

Tool #2: FIPR Scope of Work Template

The SOW prepares the program team to conduct the FIPR. It should be completed collaboratively by the program leads, MEL staff, and key technical staff. The SOW template helps the team: (a) describe the criteria used for program inclusion; (b) operationally define program interventions and when/where they were implemented; (c) document the program’s indicators, targets, and contextual data sources; (d) document the assumptions on which the interventions were designed; (e) document any major shifts in the strategy and/or interventions offered; (f) define a schedule for completing the FIPR and its constituent tasks; and (g) provide an inventory of deliverables. It also provides the fixed objectives of the FIPR, formulated so that the learning question each objective evokes is clear to all parties.

Tool #3: FIPR Inception Report Template  

This document details an easy-to-use, step-by-step process for writing an FIPR inception report, covering the purpose of the inception period, the quality and completeness of program documents, the assumptions behind the program interventions, sustainability, GEDSI, SADD, and community accountability. The template reduces the level of effort (LOE) required by referencing the fixed sections of the SOW on the work plan, objectives, and learning questions. Furthermore, unless the program opts for a more complex methodology, the fixed ‘Core Methods’ section can be used as standard, ‘cookie cutter’ text for all FIPRs.

Tool #4: FIPR Report Template

This template provides a structure that ensures consistency across FIPRs so they can be compared over time. It also clarifies what readers should expect to find and how the content should be organized to meet the needs of various stakeholders, and it guides the FIPR lead on the approach to analysis and on how to summarize and present the findings.

It is organized in nine sections: (1) Executive summary; (2) Introduction (a description of the program that can be taken from the SOW and inception report); (3) Progress assessment (implementation against work plans); (4) Performance assessment (achievements against targets, considering context, assumptions, and other data and evidence); (5) Unintended outcomes; (6) Scalability and replicability; (7) Sustainability; (8) Value for money (optional in an FIPR); and (9) Lessons learned.

Tool #5: Actuals vs Targets (templates and guide)

The updated Actuals vs Targets template includes several improvements that make it easier to assess program performance. In particular, it now has five ratings instead of three (‘Extremely Below’, ‘Below’, ‘Met’, ‘Above’, and ‘Extremely Above’), which capture a wider range of program performance. It also automatically calculates these ratings and generates visually appealing, exportable summary tables, making the overall process of assessing actuals against targets far easier, quicker, and more accurate. In addition, the document responds dynamically to individual program characteristics, with options for different program lengths, reporting periods, results-framework terminology, and number formats. A new guide provides in-depth instructions and explanations of how to use and understand the template.
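For illustration only, the sketch below (in Python) shows one way a five-band rating could be derived from an indicator’s actual and target values. The cut-off thresholds are hypothetical assumptions, not the ones built into the template, which performs this calculation automatically.

```python
# Illustrative sketch only: one possible way to band actuals against targets.
# The thresholds are assumed for illustration; the FIPR template's own
# cut-offs may differ.

def rate_performance(actual: float, target: float) -> str:
    """Return a five-band performance rating for an indicator."""
    if target == 0:
        raise ValueError("Target must be non-zero to compute a ratio.")
    ratio = actual / target  # 1.0 means the target was met exactly
    if ratio < 0.50:
        return "Extremely Below"
    elif ratio < 0.90:
        return "Below"
    elif ratio <= 1.10:
        return "Met"
    elif ratio <= 1.50:
        return "Above"
    else:
        return "Extremely Above"

# Example: 1,200 participants reached against a target of 1,000
print(rate_performance(1200, 1000))  # "Above" under these assumed thresholds
```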

Tool #6: Intervention-specific SGD (template and facilitator’s guide)

These documents provide guidance on how to facilitate an SGD on a specific program intervention, covering obstacles, enabling factors, potential improvements, sources of evidence, unintended outcomes, and sustainability. There are also hints and tips on managing group dynamics, phrasing prompts and follow-up questions, and probing group members’ responses.

Tool #7: Inventory of Deliverables (template and sample)

These tools allow users to document all program deliverables, whether expected (i.e., stipulated in the approved work plan) or not. They help users categorize deliverables by outcome/purpose and type (such as data sets, assessments, evaluations, and so on), and provide space to specify each deliverable’s language, delivery date, location, and user(s), and to add any comments. There is also guidance on how to define deliverables, which are sometimes confused with outputs.

Tool #8: Inventory of Events & Shocks (template and sample)

The Inventory of Events and Shocks provides a single repository for users to document all the external developments that occurred during program implementation. It helps users think through each event/shock and how it affected contextual factors (such as food availability and household income), which in turn may have affected program performance. This tool therefore provides a methodical way to assess how factors outside of Mercy Corps’ control may have impacted program success, which then feeds into the overall assessment of program performance.

Tool #9: Folder Structure and Filing Guide

This document provides a standardized structure to organize documents relevant to the FIPR process. It enables all users, whether they were directly involved in the program or not, to easily navigate the program’s documents and conduct searches for specific files.  

Tool #10: Prioritizing Interventions (guide, template and sample)

The Prioritizing Interventions tools help users facilitate and document a structured discussion that ranks individual interventions, or groups of interventions, by their relative importance to the success of the program. In doing so, the discussion prompts group members to think through the mechanisms by which interventions achieved impact and whether they did so independently or relied on other interventions as prerequisites. The results of the exercise provide useful insight for comprehensively understanding program performance and for drawing lessons and best practices for future programming.

Updates  

The Version 2 tools will be updated from time to time to improve their clarity and ease of use.

Contact  

If you have questions about the toolkit, please write to Thomas Scialfa, Ala’a Issa, Tom Clark, or Meri Ghorkhmazyan.