Implementation Research, continued

 

In my last post, we began to explore the idea of implementation research. Like much of program evaluation and research, implementation research is intended to improve program uptake, operations, delivery, and participants’ experience. It also highlights the effects of the program, including changes made to the program, in conjunction with program delivery and adjustments. In this way, the research directly affects the program and its outcomes, which is itself an objective of implementation research.

Implementation research, a type of embedded research and evaluation, provides real-time and near-time data and information to help determine whether and when program adjustments are required. It also allows for experimentation with programmatic design components. For example, we can use implementation research to test the effects of adjusting incentive amounts or changing how incentives are delivered. We can experiment with different marketing messages. We can test changes in program eligibility or in program support services for customers or partners.
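To make the incentive-amount example concrete, here is a minimal sketch of how one such experiment might be analyzed. All of the numbers, group names, and the function below are hypothetical illustrations, not part of any actual program; the sketch assumes we observed per-customer energy savings under two incentive levels and simply asks whether the difference in mean savings is larger than chance, using a permutation test.

```python
import random
import statistics

def permutation_test(a, b, n_iter=5000, seed=42):
    """Two-sample permutation test on the difference in mean savings.

    Returns the observed difference (mean(a) - mean(b)) and a two-sided
    p-value estimated by shuffling group labels n_iter times.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        # Count shuffles at least as extreme as the observed difference.
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_iter

# Illustrative, made-up kWh savings per customer under two incentive designs.
higher_incentive = [412, 388, 455, 430, 401, 468, 395, 440]
standard_incentive = [390, 360, 405, 378, 392, 370, 398, 365]

diff, p = permutation_test(higher_incentive, standard_incentive)
print(f"Mean difference: {diff:.1f} kWh, p = {p:.3f}")
```

In practice, an embedded research team would run this kind of comparison on real program data as the change rolls out, sharing the result with the research stakeholder group rather than waiting for an end-of-cycle evaluation.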

The BMJ article “Implementation research: what it is and how to do it” provides an excellent discussion of the principles of implementation research. An important factor to keep in mind is that implementation research is especially focused on the users of the research; let’s call these folks the research stakeholder group (RSG). For energy reduction or energy management programs, this certainly includes program staff: designers, managers, and implementers. It also includes program partners, such as the contractors who work directly with customers and the distributors who sell to those contractors. Program partners can be data and information aggregators or software providers that offer grid management services. Program partners might even include regulatory staff, as well as the customers the program serves.

Involving RSG members in the research needs assessment, the research design, research project check-ins, and, of course, the vetting and interpretation of the results distinguishes implementation research from a third-party independent effort. This involvement can also help avoid the unintended consequences that sometimes occur when decisions and changes are made in a silo, that is, without the benefit of input from each program stakeholder.

RSG involvement also ensures that each stakeholder benefits from the implementation research, which can lead to higher levels of stakeholder buy-in along with a richer understanding of the results. This acceptance and understanding leads to research outcomes that resonate across the RSG, and this resonance is the building block for a deeper shared commitment to the next phase of research.

In the Next Issue

Because implementation research has me buzzing, my next several issues will explore this topic more. Let’s buzz together.

About This Blog

We are on the brink of an evaluation renaissance. Smart grids, smart meters, smart buildings, and smart data are prominent themes in the industry lexicon. Smarter evaluation and research must follow. To explore this evaluation renaissance, I am looking both inside and outside the evaluation community in a search for fresh ideas, new methods, and novel twists on old methods. I am looking to others for their thoughts and experiences for advancing the evaluation and research practice.

So, please…stay tuned, engage, and always, always question. Let’s get smarter together.


Teresa Lutz

Earlier in my career, I worked for a utility supporting the design and delivery of energy conservation programs through evaluation and research. At that time, I did not love the evaluation process or the evaluation community. The value of evaluation was a tough sell to my coworkers, and I agreed the evaluation process and results could be better. We wanted more timely feedback, recommendations we could implement, and insight beyond what we already knew. As a consultant, I hold those experiences close. I avoid doing ‘evaluation for evaluation’s sake’. I am fixated on figuring out the Big WHY of what we do, what works and what doesn’t. It is through knowing this that we can improve and prosper in this industry.
