Why Evaluation Must Start with Learning: A Dialogue between Micah Carr and Meg Long

June 16, 2023

At Blue Meridian, we use a rigorous selection process and a performance-based investing approach. Evidence of effectiveness is an essential part of our upfront vetting, as is the understanding that the field is evolving to consider racial equity, technological innovation, and current context. Meg Long, Managing Director, Learning, Evaluation, and Measurement, shares insights from her career as an evaluator and her work guiding evidence-building and evaluation at Blue Meridian, and reflects on the importance of learning in this work.

Micah: You have a long history as an evaluator and most recently led Equal Measure. How did you enter this space, and what insight from that experience has stayed with you?


Meg: When I entered the evaluation space, I had no idea that this would become my profession. To me, it was a series of opportunities to ask questions and learn about what is working under different conditions for different individuals. The idea of exploring and learning engaged me. Of course, the professional space has grown tremendously over the last 20 years, evolving into a set of methodologies, ways of engagement, ways of understanding the world, conceptual frameworks, and more.

Ultimately for me, as the CEO of an evaluation and strategy firm, I found inextricable links between evaluation, organizational leadership, and social sector change. Charged with helping clients understand what learning and evaluation look like and how to translate evidence into actionable insights and strategy, I saw an opportunity for my firm to focus more on our own learning as an organization. What we learned through our process had implications for my firm’s organizational strategy, functions, systems, talent, and more, which I sought to apply through my leadership position and which still influences my approach today. That was a threshold moment: I realized that every organization could benefit from going through a process to understand what is working, for whom, and how it is advancing in service of its mission.


Micah: You mentioned the opportunity to learn as particularly engaging for you and indicated how what is essentially a learning process can lead to strategic changes and, ultimately, impact. Blue Meridian flipped from the commonly used “MEL” to “LEM” (learning, evaluation, and measurement) around the time you joined the organization. Why does that distinction matter to you? What does it mean to start with learning and how does that influence evaluation and measurement?


Meg: Blue Meridian’s community – our investees, our investors, and our staff – is unified by our drive to achieve the greatest possible impact with the resources, people, and knowledge that we bring to bear. I believe this vision of impact is only possible if Blue Meridian approaches the practice of evaluation and measurement with the goal of learning at the forefront of our minds. Learning must come first! The context of our work – social, political, economic – is continuously evolving, which, for me, means we should be learning continuously. Specifically, as a performance-based investor, we need to continuously learn and refine our approach so that our investments are more impactful. Learning can also give us a clearer understanding of the kinds of skillsets, capacities, and other supports that we need to provide for our investees.


Micah: As a communications person, I fully respect that language and naming conventions evolve and particularly appreciate how you have created a way to remind us daily of the significant role learning plays in this work. You also bring up Blue Meridian as a performance-based investor, which is a critical part of our work as we aim to fund strategies with “evidence of effectiveness.” But as we approach evaluation and measurement with a learning mindset, how should we and other funders be thinking about the resulting evidence?


Meg: When the field of philanthropic evaluation really exploded two to three decades ago, evaluation was synonymous with accountability. It meant measuring what was happening, largely with a compliance mindset – in other words, are you doing what you say you intend to do? For sure, this question was and remains important. But as our understanding of social efforts became more sophisticated and more complex, as we began to advance systems change efforts, and as we began to invest in multifaceted strategies to change the status quo, accountability alone was insufficient. Assessing the extent to which something happened, “yes or no,” was not enough. As part of that, the skillset for evaluation and evaluators grew from very specific methodological and content expertise to also needing to understand facilitation processes, constituent and community engagement processes, and a variety of both Eurocentric and non-Eurocentric conceptual frameworks.

In the United States we have defaulted to considering evidence through White and Eurocentric conceptual frames. This has resulted in a partial – and insufficient – picture and understanding of impact and performance. So, there is an active conversation right now in the evaluation field around what constitutes evidence, what constitutes expertise, and what constitutes verifiable measures for which populations, especially as we consider structural racism and oppression (see the work of the Equitable Evaluation Initiative, the Center for Evaluation Innovation, We All Count, and several of the American Evaluation Association’s Topical Interest Groups). There is a push to ask of evidence, “for whom?” The field has grown and is growing to encompass learning processes and understanding in other dimensions.


Micah: This is really interesting and brings an analogy to mind. To me, it parallels personal finance in some ways. For example, when you do your taxes, you owe something to the government. That’s compliance and accountability – it is not long-range, impact-oriented thinking. But nowadays there is much more focus on financial planning and using your money to serve you. If you want to retire at a certain age, what do you have to do now to be able to reach that goal? If you want to take a vacation in a year, what do you have to do now to be able to reach that goal? That is more encompassing. It starts with understanding your goals, which leads advisors to ask, “How can I best help you plan?”


Meg: Absolutely! And it’s not that we, in the evaluation space, have abandoned compliance and accountability. We’ve just stretched the variety of ways that we need to engage with evaluation. I think often about polarities and tensions. It’s not about resolving the tensions – for me or at Blue Meridian – but rather keeping them in balance. For example, at Blue Meridian, we want to invest in organizations and strategies that we believe have the highest likelihood of having a measurable impact on the issues that they and we care about. The act of identifying those investees is partially a compliance activity. Our investees have to meet a certain threshold of evidence that tells us not just that they have a solution, but that their solution caused the change that we’re looking for. At the same time, we are innovative and want to consider historic, systemic injustices in philanthropic funding. We want to find the approaches that are ahead of the curve. Some of them may not have a fully baked evidence base, by virtue of how cutting edge their approach is or because of gaps in the flow of capital. That’s why learning, evaluation, and measurement – resulting in “Evidence of Effectiveness” – is only one of our seven investing criteria. Additionally, our investment strategy includes continuously helping investees build their evidence base. We balance the tensions between compliance and innovation, and between finding the best evidence base and helping build that evidence. Social sector organizations are never finished with their evidence-building.


Micah: I want to go back to what you said a little bit ago about how the sector is pushing itself to consider non-Eurocentric views. This comes up in so many areas, such as in journalism with the question of “Who gets to own the record?” How do you manage that?


Meg: For me, the answer is not finite; it is active. This is something that I grapple with, that the sector is grappling with, and that – as I see it – Blue Meridian is also grappling with. I think we have to consider, on the one hand, traditional evidence-building methodologies, perspectives, and conceptual frames and, on the other, we have to understand and continue to interrogate the root causes and historical nuances that informed them. When we have a deeply researched causal impact study on a set of interventions, we must ask, “For whom were these interventions effective? Who helped craft the measures of success? Is that really a measure of success for the constituents that we intend to serve and support and who are at the center of our work?”

We also have to understand emerging trends in the sector that are going to affect evidence-building, evaluation, and learning, like artificial intelligence. There are ways to assess effectiveness that are different and more creative, that leverage technology, that are less biased, and that are more inclusive. I see this as a constant consideration. We must ask: 1) What are the traditional approaches? 2) What are the emerging trends in evidence that are influencing this? and 3) What are the contextual shifts that might influence our assessment? Holding that three-part lens is part of the value we hope to bring to our philanthropic investors and investees.


Micah: I appreciate that and see this as another way of balancing tensions. As you mentioned earlier, traditional evidence may appear the most concrete or understandable, yet the various nuances we know played a role in gathering that evidence can make it biased or less equitable. How does the balancing of tensions show up in Blue Meridian’s work?


Meg: The challenges and obstacles faced by youth, families, and adults trapped in poverty are complex and interdependent. Every day we are learning more about what it will take to fundamentally transform mobility in the US. At Blue Meridian, we look at traditional, empirically driven approaches to start to get a sense of what is working, what isn’t working as well, and what the gaps might be. And we interrogate ourselves – what might we be missing because of the limitations of these approaches? What are the latest trends that support – or contradict – this analysis? How do we weigh these? What more do we – our investees, our investors, and our own team – need to understand and know to assess impact? We’ve seen over the years that what might have appeared as ‘objective’ evidence was missing major nuance and understanding, particularly as it pertains to race and how it influences data and evaluation.


Micah: Good point. At Blue Meridian, each of us – as staff members and colleagues – is on a racial equity journey. As a white woman, was there a moment in your career that really changed how you view your field or made you see how systemic biases were baked into evaluation systems?


Meg: My racial equity journey, like evidence-building, is continuous. But there are a few moments that really stand out to me. In the late ’90s and early 2000s, when the sector considered racial equity – and we actually didn’t use that term back then – we talked about data disaggregation. That was the way to see how different racial and ethnic groups were experiencing outcomes differently. That was the first moment: understanding how impacts differed when disaggregated by gender, ethnicity, and race. And we saw such extreme and appalling differences in outcomes, whether in healthcare, education, or mortgage lending. The second came from participating in Native and Indigenous peoples’ learning sessions. In many of those cultures, storytelling and narrative are the way to convey perspectives and pass down history. I had the opportunity to witness anthropology-based methods of both collecting data and then processing it together. There was a ton of texture that participants in this learning community offered that would have been completely invisible to somebody not from the community, not of the same racial and ethnic group. The third is a more recent trend being led by the Equitable Evaluation Initiative. Their effort has reframed evaluation: not as a sterile, third-party way to describe racial equity or inequity, but as an inclusive and sometimes messy undertaking in service of advancing greater equity. I love that frame.


Micah: I love that, too! I understand some people see that as being in conflict with traditional quantitative evaluation, but I know at Blue Meridian we see this as an enhancement. So, let’s switch gears. In your current role at Blue Meridian, you work directly with some investee leaders on the challenges present in their learning, evaluation, and measurement, including those who already may have a large base of evidence. What is a common issue where you see organizations struggle?


Meg: For investees, building the capacity to support the continuum of evaluation and learning practices closely parallels funders’ need to build the different capacities and skillsets that move us from compliance alone to more broad-based evaluation and learning. This means people, resources, and time. It means infrastructure and even partnerships, including with third-party evaluation firms and researchers. That can be a challenge when you are also trying to do the work and create impact.


Micah: It’s another tension, in this instance between needing to do the work and needing to understand what is working most effectively. What, then, do you wish social sector leaders could have to best create learning, evaluation, and measurement tools and use them effectively?


Meg: I wish we could build a data infrastructure that gives social sector leaders access to actionable data in the moment. There’s a lot of data out there, but we don’t necessarily have systems built to support the real-time information needs each leader has. We need systems that collect data and systems that analyze it quickly. As the saying goes, we are data rich but systems poor.


Micah: And, on a related note, what is one thing you wish funders – like Blue Meridian and others – understood about learning, evaluation, and measurement?


Meg: I wish funders would embrace the multifaceted nature of evidence and evidence-building, with regard to both rigor and causality. I wish funders – individually and collectively – recognized that a randomized controlled trial is not always the most insightful design or the most appropriate evaluation approach. There are a variety of methods and evaluation approaches that can help establish causality and that may be better suited to the intervention. We cannot throw our desire to establish causality out the door, but we can find better-fitting methods to help understand what works. Intent is not impact – just because we intend something does not mean we’re going to achieve it – and that is not what we are asking funders to accept. I understand that we cannot expect philanthropists to blindly trust that everything intended is going to happen. However, by widening our view of what constitutes high-quality evidence and evidence-building practices, we might be able to appreciate and bring into the fold much more innovative, inclusive, and diverse initiatives. In turn, this helps all of us gain deeper insight into what creates impact. We absolutely won’t get there if we continue to hold a very narrow perspective on how we define evidence. At Blue Meridian, getting to this place is an ongoing journey, and we see it as fundamental to ensuring we are doing all we can to support our investees and investors in reaching their ambitions and visions for impact at scale.