The Big WHY of Evaluation


Hello! I’m excited to share with you some thoughts on advancing our evaluation and research practice to better understand how we might change the way we deliver and use energy. We are on the brink of an evaluation renaissance. Smart grids, smart meters, smart buildings, and smart data are prominent themes in the industry lexicon. Smarter evaluation and research must follow.

To explore this evaluation renaissance, I am looking both inside and outside the evaluation community in a search for fresh ideas, new methods, and novel twists on old methods. I am looking to others for their thoughts and experiences for advancing the evaluation and research practice. So, please…stay tuned, engage, and always feel free to question. Let’s get smarter together.

Embedded Research and Evaluation In Practice

I have the good fortune to work with colleagues who embrace the pursuit of continuous improvement – at work and at play. This shared thirst for excellence creates a disciplined mindset that welcomes embedded research and evaluation in our program design and work practices.

Embedded research and evaluation establishes a framework to test program design components, new applications, and delivery mechanisms. It tracks performance and highlights issues as they occur. And because the research and evaluation is embedded within design and delivery practices, it provides fast feedback and facilitates continuous improvement. Want to improve a personal best at the gym? Evaluate your form. Want to improve audit conversion rates at the office? Evaluate your form…and other program aspects as they function.

The tight partnership between program staff and researchers is a meaningful difference between ‘embedded research and evaluation’ and ‘standard retrospective evaluation’. While standard retrospective evaluation allows for a collaborative approach, it also keeps a wide third-party berth between evaluators and program staff. And with good reason: that distance preserves the evaluation’s independence and assures outside stakeholders that it is conducted objectively.

Embedded research and evaluation, by contrast, creates a completely open and sharing environment, and a true team approach to understanding program operations and customer experiences. Program staff view the researcher/evaluator as an ally in helping them achieve their targets; the researcher/evaluator sees program staff as the subject matter experts for their program. Together, they combine the insider view and the outsider view, creating a uniquely joined, inclusive perspective for identifying what research is useful, how evaluation can create meaningful learning, and how (and when) changes to the research and evaluation approach are warranted.

In the next Big WHY blog, I’ll share an embedded evaluation and research project example, what we are learning, and how we are adapting. You’ll be seeing more from me soon!


Teresa Lutz

Earlier in my career, I worked for a utility supporting the design and delivery of energy conservation programs through evaluation and research. At that time, I did not love the evaluation process or the evaluation community. The value of evaluation was a tough sell to my coworkers, and I agreed that the evaluation process and results could be better. We wanted more timely feedback, recommendations we could implement, and insight beyond what we already knew. As a consultant, I hold those experiences close. I avoid doing ‘evaluation for evaluation’s sake’. I am fixated on figuring out the Big WHY of what we do: what works and what doesn’t. It is through knowing this that we can improve and prosper in this industry.
