Data Gaps
Welcome to the reboot of the Big Why of Evaluation. In these posts, I hope to explore new ideas and different perspectives in research and evaluation in the energy sector. We will tackle both large and small issues and learn together along the way. If you have any questions or ideas, please reach out.
This month, I am thinking a lot about data gaps. I recently read Invisible Women: Data Bias in a World Designed for Men. It was an eye-opening book describing how gender data gaps (the notable lack of information collected and analyzed about women, rooted in the assumption that men, usually white men, are representative of all humans, without accounting for the differences between them and the other 50% of the population) have led to all sorts of inefficiencies and issues that often build on each other. A major theme is that by not including different perspectives in solving problems, our society often fails to develop the best solutions. This is not because of any malevolence but because we are often blind to things beyond our own experiences. In other words, because of data gaps.
At a basic level, our job as evaluators is to collect and use data to answer questions and solve problems. So, naturally, this book made me pause and consider the gaps in the data I use, the biases they may cause, and how I can overcome them. (It should be said that it also made me think a lot about the issues caused by gender data gaps, but that is not the focus of this post, and the author does a much better job than I could of describing that problem.)
Many data gaps in our work are easy to see and, at least in part, to mitigate. We can compare a sample to the population to check whether it is representative, then create quotas and weights to minimize the issue. We may not have perfect information about a building’s operation or equipment, but we can use assumptions based on the data we do have and on similar cases. We will never have complete information or the resources to analyze it perfectly. The key is to know what information we have, what information we do not have, and to be as transparent as possible about the methodology so that we (or others) can build on the results in future research.
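To make the sample-versus-population idea concrete, here is a minimal sketch of post-stratification weighting in Python. The group names, population shares, and sample counts are hypothetical; real work would draw shares from census or program-population data and would likely use a dedicated survey package.

```python
# Minimal sketch of post-stratification weighting (illustrative only;
# the groups, shares, and counts below are hypothetical).

# Population shares for some demographic category, e.g., from census data.
population_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

# Respondent counts by group in a hypothetical survey sample of 200.
sample_counts = {"group_a": 120, "group_b": 50, "group_c": 30}

sample_size = sum(sample_counts.values())

# Weight = population share / sample share. Groups under-represented in
# the sample get weights > 1; over-represented groups get weights < 1.
weights = {
    group: population_shares[group] / (count / sample_size)
    for group, count in sample_counts.items()
}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# group_a: weight = 0.83  (over-represented, down-weighted)
# group_b: weight = 1.20  (under-represented, up-weighted)
# group_c: weight = 1.33  (under-represented, up-weighted)
```

The same comparison that produces the weights also tells you where your gap is: any group whose weight drifts far from 1 is a group your data collection is missing.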
This book made me think beyond these common data gaps to less measurable issues, such as how we develop recommendations. A key reason for evaluations is to help programs, initiatives, and similar efforts understand how to improve. After collecting and analyzing data through any number of means, evaluators present results and make recommendations. But data gaps may limit the value of these recommendations. For example:
- The program administrator or implementer may not be able to make the recommended change for regulatory, budgetary, or practical reasons. At Michaels, we implement utility programs as well as evaluate them and have come across all types of barriers. We use this knowledge to try to make recommendations that are practical, actionable, and ground-truthed. Of course, one can make recommendations without direct experience, but experience helps. Regardless, it is crucial to consider the perspective of those to whom you are making the recommendation.
- Evaluators often frame recommendations about why utility customers are not participating in a program around the assumption that the customer needs to be targeted and changed (e.g., through more education about energy efficiency or the benefits of the equipment), rather than fully understanding what the customer needs and designing the program around that. The data gap here is an understanding of what the customer really needs and their barriers to participation. This gap can be filled through research so that program theory and logic models are based in reality instead of what we think happens.