Embedded Research & Evaluation – The Process, The Story Continues


In my last post, I began describing our Embedded Research & Evaluation (ER&E) process by presenting the first three steps in our step-by-step guide. The story continues.

Our ER&E Process: A Step-By-Step Guide, continued

A dedicated ER&E effort focuses on a specific program challenge or question. The ER&E effort allows for experimentation both with the attributes being studied and with the evaluation approaches themselves. Learnings from ER&E inform the program in real time, enabling much quicker changes to program design and corrective action when issues are identified, and allowing areas of success to be identified and replicated sooner. Future evaluation can then be adapted based on what is learned, and what isn't, as the ER&E effort progresses.

Steps 1 through 3 described identifying an ER&E project opportunity, setting up the project team, and aligning the ER&E efforts with the program theory. Steps 4 and 5 describe ‘Defining Researchable Issues to be Explored’ and ‘Determining Methods for Tackling Researchable Issues’.

Step 4. Defining Researchable Issues to be Explored. To focus the ER&E effort, the ER&E project team defines specific researchable issues or questions: what the program staff wants to learn from the effort. A good starting point is to ask questions. For example, suppose a midstream HVAC program designed to transform the market[1],[2] is not hitting its energy savings targets. There are many possible reasons for poor program performance. Example questions include:

  • Are program partners engaged with the program?
  • Do they understand the program rules and delivery mechanisms?
  • Are there barriers to program partner participation?
  • Is the incentive structure effective?

Step 5. Determining Methods for Tackling Researchable Issues. Once the issue and researchable questions are defined, we can decide how best to explore the questions. Methods may include:

  • In-depth interviews, focus groups, and/or surveys with program staff, program partners, participants, and non-participants
  • In-field observation of implementation tasks
  • Review of recorded outreach calls
  • Deep data dive and analysis

These methods are no different from those used in traditional evaluation and research. The difference lies in the timing, in the involvement of program staff and partners in the effort, and in the openness to adjusting our research focus and methods as we learn from the ER&E project. For our midstream HVAC program example, a good first area of focus is in-depth interviews with program partners – distributors and contractors – to explore barriers to engagement and their understanding of program rules. This first step can point to where to explore next, or may yield key findings and recommendations that can be acted upon immediately.

In the Next Issue

In the next couple of posts, I will cover the remaining steps in the ER&E process. Next up are 'defining evaluation metrics to be assessed' and 'determining how metrics will be defined, calculated, and tracked'.


[1] A midstream program provides incentives for retailers, distributors, and/or contractors and installers to stock and sell more energy efficient products. The ENERGY STAR guide How to Use Midstream Incentives to Promote ENERGY STAR® Certified Consumer Electronics is a good reference for reading more about how midstream program incentives can work. https://www.energystar.gov/ia/partners/downloads/CE_Guide.pdf

[2] ACEEE defines market transformation as follows: "The term market transformation is the strategic process of intervening in a market to create lasting change in market behavior by removing identified barriers or exploiting opportunities to accelerate the adoption of all cost-effective energy efficiency as a matter of standard practice." https://aceee.org/portal/market-transformation

Teresa Lutz

Earlier in my career, I worked for a utility supporting the design and delivery of energy conservation programs through evaluation and research. At that time, I did not love the evaluation process or the evaluation community. The value of evaluation was a tough sell to my coworkers, and I agreed the evaluation process and results could be better. We wanted more timely feedback, recommendations we could implement, and insight beyond what we already knew. As a consultant, I hold those experiences close. I avoid doing 'evaluation for evaluation's sake'. I am fixated on figuring out the Big WHY of what we do: what works and what doesn't. It is through knowing this that we can improve and prosper in this industry.