When I was asked a while back to be a guest writer for this blog, it’s not surprising that my ideas kept drifting toward Evaluability Assessments. Anyone who has worked with me knows that I’m a planner and I like to be prepared. (Dare I say over-prepared.) Full disclosure: I’m a lister, I create outlines and frameworks, and I love roadmaps. I like to have guardrails and structure to help me focus on what is most important. I thrive when I can set a goal in my sights and devise alternative paths to get to my destination. There is no better feeling than reaping the benefits of a well-thought-out plan!
And in the most general sense, an evaluability assessment is an input to developing a well-thought-out (evaluation) plan.
An evaluability assessment (EA) is a research concept that sounds simple, but there can be a lot more to it when you look under the hood.
EA is a systematic process for assessing the extent to which a program or activity can be evaluated in a “reliable and credible fashion.”[1] The concept arose in the late 1970s as a means to improve the value and utilization of evaluations conducted for the federal government and has since gained traction in mainstream evaluation. EA is widely used in education, healthcare, social services, and other industries that rely on evaluation results for investment and policy decision-making. In the energy efficiency industry, however, EA is rarely explicitly conducted when planning program evaluations. (More on this later.)
The earliest paper I could find that advocates for EA for energy efficiency programs presents four conditions for program evaluability: “a program is able to be evaluated (evaluable) when: (1) program goals and priority information needs are well defined; (2) program goals are plausible; (3) relevant performance data can be obtained at a reasonable cost; and (4) intended users of the evaluation results have agreed on how they will use the information. If these points are not met by the current program design, the program should have a timeline for working towards program redesign to meet these points.”[2]
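For illustration only, here is a minimal sketch of how an evaluator might capture these four conditions as a simple pass/fail checklist. The Python names are my own shorthand, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class EvaluabilityChecklist:
    """The four conditions from Vine (2008), as a simple pass/fail checklist."""
    goals_defined: bool    # (1) goals and priority information needs are well defined
    goals_plausible: bool  # (2) program goals are plausible
    data_obtainable: bool  # (3) performance data can be obtained at a reasonable cost
    use_agreed: bool       # (4) intended users have agreed on how results will be used

    def is_evaluable(self) -> bool:
        # A program is evaluable only when all four conditions hold;
        # otherwise, the paper suggests a timeline for program redesign.
        return all([self.goals_defined, self.goals_plausible,
                    self.data_obtainable, self.use_agreed])

# Example: plausible, well-documented goals, but no agreement on how
# the results will be used.
checklist = EvaluabilityChecklist(True, True, True, False)
print(checklist.is_evaluable())  # False -> work toward redesign before evaluating
```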
The scope of an EA to determine if the above conditions are met can vary depending on the program, its context, and the evaluation need, but it typically covers one or more of the following areas.
A meaningful evaluation requires a well-defined program design; thus, an important EA task is to determine whether the program design is well documented and whether the theory of change is plausible. Without this clarity, the future evaluation might be misguided and seek to answer the wrong questions. Types of questions the EA should answer include:
Identifying the stakeholders and the intended use of the pending evaluation is a critical EA task, ensuring the study’s findings and recommendations will meet stakeholder needs. This focus area determines whether an evaluation framework has been established and the extent to which the evaluation will provide value. As an evaluator who strives to ensure program evaluations are used and useful, this is my favorite EA focus area! Some questions the EA will answer for this area are:
Confirming how the evaluation results will be used and who will benefit from the information is a critically important goal of EAs, but it has been somewhat challenging in this industry, particularly outside the regulatory compliance realm, where evaluation is meant to support program improvements and performance. Summative impact evaluations are required by regulatory authorities but are often completed too long after the implementation period to be meaningful to implementers. Process evaluations and formative evaluation approaches that incorporate stakeholder input and feed interim findings to program implementers on a regular cadence will increase the utilization of evaluation (but also its cost). Regardless, figuring this out early in the program lifecycle and developing an evaluation roadmap will set expectations for all stakeholders.
An important objective of the EA is to determine if reliable and valid data are available for the chosen evaluation methodology. Readers are probably most familiar with this area because EAs for energy efficiency programs typically focus on data availability, assuming the evaluation methodology is established and agreed upon. There is nothing worse than needing to change an evaluation strategy midstream because the data are unavailable or too costly to collect; I bet most evaluators can provide examples, ranging from unavailable baseline data to missing participant or nonparticipant contact information to a lack of documentation needed for freeridership analysis. Questions the EA should seek to answer are:
As previously mentioned, the energy efficiency program industry has not widely adopted EA, but that’s not to say that this important preparatory work is not being done. It’s probably more common than we think, though the work is not explicitly labeled as an evaluability assessment, and it may or may not end up in the public domain.
EA blurs with evaluation design and planning, and it is probably most often addressed “behind the scenes” by evaluation administrators responsible for planning evaluations and integrating evaluation results across multiple studies, typically through the development of the evaluation RFP or work plan. Ideally, however, EA and the corresponding evaluation planning should be undertaken early in the program lifecycle, when performance indicators are agreed upon and contracts are established, and before implementers build their data collection and program tracking systems. (Easier said than done!)
In some jurisdictions, some of the EA areas described above are required to approve program funding applications and to ensure implementers comply with regulatory reporting requirements. For example, per CPUC requirements, the California investor-owned utilities require a program theory and logic model to be submitted with third-party program proposals.
Finally, I suspect a key reason for the lack of EA use for energy efficiency programs is simply that many program models in the market today have been around for a while, and the industry is familiar with evaluation data needs and uses such that an EA would not be needed or justified. (Think about traditional rebate/incentive programs.)
The need for and benefits of EA for energy efficiency programs have increased as market interventions have evolved and become more complex. The emergence of more complex, data-intensive, unique, and/or longer-term intervention strategies, such as NMEC (normalized metered energy consumption), SEM (strategic energy management), behavioral programs, and market transformation initiatives, has necessitated more intentional and thorough evaluability assessments.
These are the types of programs for which early engagement and collaboration among evaluators, implementers, regulators, and other stakeholders can be particularly meaningful in increasing evaluation utilization.
Evaluations of market transformation initiatives, for example, require many different types of data to be collected over a long period of time (many years!) to document and quantify market changes. The performance indicators used to estimate market changes (referred to as “market progress indicators”) and their data requirements need to be established and agreed to by stakeholders during the design of the initiative, to ensure the data required to track market progress are collected before launch, throughout delivery, and after the intervention ceases.
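As a purely hypothetical illustration (the indicators, data sources, and cadences below are invented, not drawn from any actual initiative), agreeing on market progress indicators at design time amounts to agreeing on what must be tracked, from where, and how often:

```python
# Hypothetical market progress indicators (MPIs) agreed to at initiative design.
# All names, sources, and cadences are invented for illustration.
market_progress_indicators = [
    {
        "indicator": "high-efficiency share of distributor sales",
        "baseline_year": 2024,
        "data_source": "distributor sales data",
        "cadence": "annual",
    },
    {
        "indicator": "contractors trained and certified",
        "baseline_year": 2024,
        "data_source": "training program records",
        "cadence": "quarterly",
    },
]

# A list like this, settled before launch, tells every stakeholder what data
# must be collected before, during, and after the intervention.
for mpi in market_progress_indicators:
    print(f"{mpi['indicator']}: {mpi['data_source']} ({mpi['cadence']})")
```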
The Site-Specific NMEC Evaluability Study conducted for the California Public Utilities Commission was one of the most extensive EAs that I found. NMEC is a relatively new compliance pathway for “high opportunity” energy efficiency projects in California and one that will likely gain traction in the coming years. Thus, the evaluability of NMEC is critically important. Among many other things, this EA study characterized the population of NMEC projects, created a framework to identify which sites are ready to be evaluated, and recommended data tracking and documentation practices for future projects to improve accuracy and ensure evaluability.
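For readers unfamiliar with NMEC, the core measurement idea can be sketched in a few lines: fit a weather-based baseline model to pre-installation metered data, then compare the model’s counterfactual prediction to actual metered use after the project. This is a deliberately simplified sketch with made-up numbers, not the CPUC’s prescribed modeling methodology:

```python
import numpy as np

# Hypothetical daily baseline data: heating degree days (HDD) and metered kWh.
baseline_hdd = np.array([10.0, 12.0, 8.0, 15.0, 11.0])
baseline_kwh = np.array([520.0, 560.0, 480.0, 620.0, 540.0])

# Fit a simple linear baseline model: kWh = intercept + slope * HDD.
X = np.column_stack([np.ones_like(baseline_hdd), baseline_hdd])
coef, *_ = np.linalg.lstsq(X, baseline_kwh, rcond=None)

# Performance-period weather and actual metered use after the retrofit.
perf_hdd = np.array([9.0, 14.0, 12.0])
perf_kwh = np.array([420.0, 510.0, 470.0])

# Counterfactual: what the baseline model predicts use would have been,
# normalized to the weather actually experienced in the performance period.
predicted_kwh = coef[0] + coef[1] * perf_hdd
savings_kwh = (predicted_kwh - perf_kwh).sum()
print(f"Estimated savings: {savings_kwh:.0f} kWh")  # 260 kWh in this toy example
```

Because the savings estimate depends entirely on metered data and a defensible baseline model, evaluability hinges on whether those data and model inputs were tracked from the start, which is exactly what the CPUC study’s recommendations address.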
A few of the most interesting program models for which I found publicly available EAs involve collaboration among utilities, local municipalities, non-government organizations, and supply-chain market actors. These market interventions have gained traction in the past decade or so with increased investments in programs with longer-term horizons, such as market transformation initiatives, workforce development and training, and collaboratives that facilitate the development of municipal climate action plans (CAPs).[3]
Such collaborative and market-engagement program models create interesting evaluation challenges (how to estimate freeridership, account for spillover, and avoid double-counting of savings), and thus EAs can be highly valuable.[4]
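To see why freeridership and spillover matter so much, consider the standard net-to-gross arithmetic, shown here with illustrative numbers only:

```python
# Illustrative net-to-gross (NTG) arithmetic; all values are made up.
gross_savings_mwh = 1000.0   # savings claimed in the program tracking system
freeridership = 0.30         # share of savings that would have occurred anyway
spillover = 0.05             # additional induced savings not claimed by the program

# A common convention: NTG = 1 - freeridership + spillover.
ntg = 1.0 - freeridership + spillover
net_savings_mwh = gross_savings_mwh * ntg
print(f"NTG = {ntg:.2f}; net savings = {net_savings_mwh:.0f} MWh")
# NTG = 0.75; net savings = 750 MWh. If two collaborating programs both claim
# the same project's savings, the double-counted portion must also be netted out.
```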
[1] GAO-21-404SP, Program Evaluation: Key Terms and Concepts
[2] Strategies and Policies for Improving Energy Efficiency Programs: Closing the Loop Between Evaluation and Implementation (Vine, 2008)
[3] See, for example: Evaluability Assessment of the Statewide Energy Efficiency Collaborative and PG&E’s Green Communities Climate Action Planning Programs (2014), and Assessment of Local Government Partnerships.
[4] Local governments and non-government agencies are disparate in their record-keeping practices, exacerbating evaluation challenges that must be overcome to sustain investment in these types of programs.