Implementation Research

 

In my last several posts, I described our Embedded Research & Evaluation (ER&E) process. In my last post, I said I would write about the importance of adapting evaluation as we learn from the evaluation efforts. But before diving into that, I want to explore ‘Implementation Research’. "Implementation science [research] is the study of methods to improve the uptake, implementation, and translation of research findings into routine and common practices….”[1] Hmmm…how does this differ from other, non-implementation research?

When expanding my knowledge and searching for new ways of thinking, I like to explore resources outside of our energy conservation program research and evaluation community. An article published in BMJ, “Implementation research: what it is and how to do it,” is a good example. “Implementation research can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation, including how to introduce potential solutions into a health system or how to promote their large scale use and sustainability. The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”[2],[3]

Aha! Implementation research is a step beyond a pilot program. Implementation research takes place after a pilot test; it is integrated into the program delivery processes, just as in embedded research and evaluation. Implementation research is not conducted in a controlled environment, or as part of a pilot. It is conducted alongside and in conjunction with program operations. Thus, the research will directly affect the program and program outcomes. This is, in large part, the point of implementation research!

It is this dynamic aspect of implementation research that most excites me - the ability to directly influence the outcomes of programs in real time. And to test the impacts and effects of that influence in real time is exhilarating.[4]

In the Next Issue

In my next issue, I will delve more into implementation research. My curiosity is piqued…how about yours?

About This Blog

We are on the brink of an evaluation renaissance. Smart grids, smart meters, smart buildings, and smart data are prominent themes in the industry lexicon. Smarter evaluation and research must follow. To explore this evaluation renaissance, I am looking both inside and outside the evaluation community in a search for fresh ideas, new methods, and novel twists on old methods. I am looking to others for their thoughts and experiences for advancing the evaluation and research practice.

So, please…stay tuned, engage, and always, always question. Let’s get smarter together.

 

[1] Padian, N. S., Holmes, C. B., McCoy, S. I., Lyerla, R., Bouey, P. D., & Goosby, E. P. (2011). “Implementation science for the US president's emergency plan for AIDS relief (PEPFAR).” Journal of Acquired Immune Deficiency Syndromes 56(3): 199-203.

[2] Peters, D. H., Adam, T., Alonge, O., Agyepong, I.A., & Tran, N. (2013). Implementation research: what it is and how to do it. BMJ 2013;347:f6753. https://doi.org/10.1136/bmj.f6753

[3] Hint - substitute ‘a health system’ with ‘an energy efficiency program’ or ‘demand response program’ or even ‘an electrification initiative’ and it is easy to see applicability in our work.

[4] Please note that I am not advocating for the exclusive use of implementation research. There will always remain a need for other research designs.

Teresa Lutz

Earlier in my career, I worked for a utility supporting the design and delivery of energy conservation programs through evaluation and research. At that time, I did not love the evaluation process or the evaluation community. The value of evaluation was a tough sell to my coworkers, and I agreed the evaluation process and results could be better. We wanted more timely feedback, recommendations we could implement, and insight beyond what we already knew. As a consultant, I hold those experiences close. I avoid doing ‘evaluation for evaluation’s sake’. I am fixated on figuring out the Big WHY of what we do, what works and what doesn’t. It is through knowing this that we can improve and prosper in this industry.