Future-Proofing National Evaluation Systems through Capacity Building and Knowledge Sharing

GEI Powered by Evidence podcast episode 7: Future-Proofing National Evaluation Systems through Capacity Building and Knowledge Sharing

What does it mean to future-proof national evaluation systems, and what does it entail? Join Patrizia Cocca, Communications and Knowledge Management Lead at GEI, as she reflects on this question with Jos Vaessen, evaluation expert and former GEI adviser. In this conversation, they break down the essential components of effective and sustainable evaluation systems, drilling down on the importance of strengthening evaluation capacities, creating global spaces for knowledge exchange, and fostering a culture of continuous learning.

TRANSCRIPT:

Dugan Fraser:
[00:00:00] Welcome to Powered by Evidence, a podcast by the Global Evaluation Initiative.  We're a global partnership of organizations that help governments and public institutions create robust, country-owned monitoring and evaluation systems. In this podcast, we invite experts from our international network and other special guests to explore new ideas and revisit challenges that are still unsolved. We examine failures, reflect on successes, and discuss the way forward. Thank you for sharing your time with us. 

Hi, everybody. I'm Dugan Fraser. I'm the Program Manager of the Global Evaluation Initiative, or the GEI. Welcome to our podcast, Powered by Evidence. Today, I turn my hosting duties over to Patrizia Cocca. She's the communications and knowledge management lead at the GEI, so you know you're definitely in good hands. 

And with her is Jos Vaessen. He's one of the most brilliant minds you'll meet in the evaluation world. One thing you should know about Jos is that he was involved in guiding and shaping the strategic development of GEI. Consider him one of the brains behind GEI's evolution. And we're really happy to have him here with us, together with Patrizia. 

They'll talk about something that sits at the very heart of what GEI does: building strong, sustainable, and future-proof national evaluation systems. But what exactly does it mean to future-proof national evaluation systems? Let's get the conversation rolling. Patrizia?

Patrizia Cocca:
[00:01:31] Thank you. I just wanted to give a little bit of explanation of why we chose the title, Future-Proofing M&E Systems. It's about trying to make sure that national evaluation systems can meet the demands that will arise in the future. And as you know, the Global Evaluation Initiative has a focus on strengthening M&E systems at the country level. So, today with Jos, I'd like to start the conversation. Jos has been working on creating the GEI's work program on capacity development and training. And so, I really would like to start with something broad and basic: can you explain to us what M&E systems and capacity development mean to you?

Jos Vaessen:
[00:02:24] Excellent. Thank you, Patrizia. It's a pleasure to be here and have this conversation with you. So, the concept of M&E capacity development is really a complex one, I would say. There are various ways to unpack and conceptualize it. But basically, it's about how we strengthen the supply of and demand for evaluative evidence in particular policy settings. And one way to unpack this more specifically is to think about capacity development in terms of a three-pronged approach.

So, first of all, we are thinking about strengthening individual capacities. This is basically about enhancing the skills and competencies of M&E stakeholders, which include M&E specialists, evaluators, managers, decision-makers, politicians, and so on: skills and competencies that have to do with enhancing the production and use of evidence in policy settings. So, that's one prong.

The second prong is to strengthen organizational capacity. Basically, this is about clarifying and strengthening the roles and responsibilities of different institutional actors in an organizational system. It's about how to connect, for example, data systems to monitoring functions, to evaluation functions, to reporting, to different decision-making platforms, and so on. More specifically, it could also mean strengthening particular entities within an M&E system. For example, if we are talking about an evaluation function, we would have to think about how to clarify its mandate. How do we think about evaluation modalities, practices, and methods? How is the evaluation function resourced, how is it staffed, and how is it connected to different institutional processes that relate to learning, accountability, and decision-making? So, that's the second prong.

And then, the third prong is really about the enabling environment in which monitoring and evaluation takes place. And this covers a broad array of things. First of all, it would include the legislative basis for doing M&E. So, what is the legal basis for the production of M&E evidence and for connecting it to different processes of learning and accountability in organizational settings? It's also about policies: how exactly is this organized and stipulated? But it is also about the broader idea of awareness and understanding among a very diverse group of stakeholders. What is the role of M&E in public policies, for example? Different types of stakeholders will have different levels of understanding, and they also need different levels of knowledge in order for M&E to function.

And finally, it is also about a whole array of actors that need to, in some way, be connected to M&E practices. For example, there's the role of academic institutions, which ideally would train M&E specialists or evaluators. There's the role of voluntary organizations for professional evaluation, which provide a framework for professional evaluators to come together and talk about principles and practices of evaluation. It's also about, for example, the role of civil society institutions and how they use evaluative evidence or monitoring data to enter into a dialogue with government. So really, there's a very broad array of elements that need to be strengthened in order for an M&E system, especially at the country level, to work. As you can imagine, M&E capacity development is really an ambitious undertaking and requires a systemic approach.

Patrizia Cocca:
[00:06:36] Yes, very complex. So, we have seen what the system requires. Now, coming down to the individuals: what do evaluators need to be able to do? What are the competencies they need to do their job?

Jos Vaessen:
[00:06:56] Yeah, that's a very good question. So, first of all, as you know, within the GEI we try to support the individual capacities of many stakeholders, so that's quite a broad group that includes the groups I mentioned: decision-makers, managers, operations staff, monitoring specialists, evaluators, and so on. But evaluators and evaluation occupy a very special role in this M&E system. To simplify: evaluation, or evaluative thinking, the idea of asking critical questions about what works and why, and under what circumstances, regarding policy interventions, is really an under-invested area in many organizational contexts. What you see in reality is that there is actually a lot of data available in a governmental system, for example, but there are limitations in the capacity to analyze these data, and even more so in the capacity to analyze data within the framework of critical questions around the merit and worth of policy interventions. This is what evaluation does, and that's why it's so important. So, we need more, and well-trained, evaluators.

So, what do these evaluators need to know? Arguably, quite a lot. Let me go through the list, and stop me if I go on too long. First of all, evaluators need to be very good communicators, and they need to have very good interpersonal skills. They often operate in very politically sensitive environments, and there are a lot of strategic interests involved around policy interventions. So, managing that social context is very important for evaluators.

Secondly, evaluators need to have evaluation expertise. They need to know the foundations of evaluative thinking. They need to be able to ask and formulate evaluation questions and connect those to methods and processes. They need specific methodological expertise: how to collect data and how to analyze data. This is especially important given that we are now in the age of the data revolution: there are new kinds of data, new ways of collecting data, new ways of managing data, and new ways of analyzing data. So, it's a very important area of expertise. They also need a certain sense of integrity and an understanding of the ethical implications of being a good evaluator, which means being able to speak truth to power. You are constantly under pressure from different actors trying to influence you, and within that environment you have to be candid, to be as unbiased as possible, which is not easy, and to present your findings as impartially as possible to different audiences.

They also need management expertise. They need to know how to manage an evaluation project, which is basically a research project that goes from A to Z: from formulation to design to implementation to reporting to communication, et cetera. They need to know all of that as well.

You need institutional expertise: you need to understand how organizations work, what their interventions are, and what their instruments are. You need substantive expertise: if you're evaluating a program in transport, agriculture, education, or health, you need to understand the sector. And you need contextual expertise: if you're doing an evaluation in Yemen or in Chad or in Peru, you need to understand the context.

So, there's really a lot that an evaluator needs to know, which brings us to the core point: no single evaluator would meet all these requirements. So, it is not about one evaluator having all these competencies, but about having a team within an evaluation that covers these different competencies. It's also about specialization and team composition, and different evaluators may have different knowledge needs and may need to build capacities in different areas of work. Of course, that goes even more so for M&E stakeholders more broadly. But within all that diversity, there is common ground: there is really a common language around what M&E is and how it works, and all of us need to speak that common language so that M&E can work and fulfill its role of supporting learning and accountability.

Patrizia Cocca:
[00:11:49] We need to have supermen or superwomen doing evaluation. So, how do they learn? How do M&E stakeholders learn? And can you explain a little bit how the GEI is now working towards this goal and supporting them?

Jos Vaessen:
[00:12:07] Yeah, that's an excellent question. So, the Global Evaluation Initiative really works on these three prongs, right? Individual capacity development, organizational capacity development, and trying to strengthen the enabling environment for M&E. And across the GEI's different areas of work, learning takes place.

So, first of all, the GEI engages in what you could call country-level engagements with key actors in M&E systems. And usually, given the complexity and vastness of a national M&E system, we have to focus; we cannot cover a whole national system. Think of India, with all its different states, ministries, specialized agencies, and all these actors: you cannot cover that vastness. So, what the GEI does is usually work with the apex institution in the country, which is usually a central M&E unit within a ministry of planning, a ministry of finance, or the presidency, and which plays a central role in the governance of the national M&E system. And we try to support M&E champions within those entities. That usually starts with engaging with them and doing some kind of diagnostic of the current state of affairs of monitoring and evaluation within the system. On the basis of that, we develop a plan. And on the basis of that, we provide specific technical assistance, advisory services, or training. So that would be one area of action.

The second area of action is specifically around training and professional development. And the GEI is a really interesting network because it has training offers at different levels. There's a global training program called IPDET. There are regional training offers, TAQYEEM and PIFED, Arabic- and French-language programs. But there are also many country-specific and institution-specific trainings embedded in these country-level engagements. And then there are other types of professional development activities that we employ. Now, if we talk about learning, training and professional development are of course really intentional when it comes to learning and individual capacity development. But no less important is the learning that takes place in these country-level engagements. There's a lot of learning by doing. And there's also a lot of learning from technical assistance and advisory work, especially about the systems aspects of monitoring and evaluation.

Now, a third area of engagement is what we could call knowledge generation and knowledge sharing. The GEI organizes knowledge engagements, and we share knowledge resources. This covers broadly two categories. First of all, how to go about M&E: the processes, methods, and approaches of doing M&E, guidance, reflections, et cetera. And secondly, how to enhance M&E systems: how to strengthen organizational structures, how to clarify processes, and how to help build a culture of evidence-informed decision-making, where the supply and use of evidence become stronger and stronger to support accountability and learning around policy. So, that's the third area. But, of course, we are here together, and you know this third area quite well because you, in fact, work in the GEI on these knowledge-sharing activities. So, I think this is a good moment for me to stop talking and give the floor to you.

So, I just talked a little bit about these knowledge-sharing activities, and I know that this is your main responsibility, your main area of work. So, maybe you can tell us a little bit more about how the GEI actually brings knowledge about monitoring and evaluation, and about M&E capacity development, to different audiences.

Patrizia Cocca:
[00:16:31] Yeah, thank you. And thank you for the overview. I think it's important to understand that, as in any knowledge management program, the GEI uses different tools and activities. We have a variety of them, and I will not go through all of them. But I would like to present the three that, in my opinion, are the most effective, also because they each target different kinds of audiences in different ways.

So, the first one I would like to talk about is the National Evaluation Capacities Conference, which we organize on a biennial basis with UNDP IEO. It's an in-person event that gathers government representatives working on M&E in their countries. There is plenty of evidence that learning from peers and from real-life experience is one of the most effective ways to learn; no amount of lectures or studies can replace it. And when peers working in different countries come together, they really learn from each other. Even if they come from different countries and different cultural backgrounds, the constraints and the difficulties are often similar, and therefore solutions may work in different places as well. So, the NEC Conference has always been very successful. The GEI has been helping governments to participate, sponsoring some of the countries we work with. The last edition was a little special because, for the first time, participants subscribed to the so-called Turin Agenda, which sets out goals for what should happen in the next couple of years to advance the M&E agenda. And so, we decided to do a virtual check-in every six months, which basically keeps the conversation going until the next event takes place.

But then, of course, there are other ways in which we engage. And there is the Global Evaluation Week. The Global Evaluation Week is a festival, something really large; I would dare to say it is the largest knowledge-sharing event on evaluation that takes place. In the span of a week, we saw over 300 events organized all over the globe, with something like 20,000 participants gathering and sharing knowledge and experience. So, this really targets M&E practitioners in the broadest sense you can imagine. And the idea is to create a movement, to strengthen the network and the connections among practitioners around the world, and, of course, to share knowledge and learn from each other.

And then I would like to go over the repository of knowledge that we are curating, where we also post some of the original pieces that the GEI produces. This is betterevaluation.org. It is a platform that has existed for years; it is not something the GEI created from scratch. But when we received this platform, we decided to expand and enhance it, and we've been working on it a lot. Better Evaluation is mostly focused on evaluation methods and processes, and it has for years been a tool for evaluators to refresh their knowledge or learn new skills. But now we are expanding it to also include M&E system resources, because we want both government and non-government people to be able to access these resources. This is what we are working on. A lot to do, I think.

Jos Vaessen:
[00:20:53] Excellent. That's really a great overview. And I think it also testifies to the fact that, because the GEI brings together all these different organizations, we are able to offer some of these activities, which really can take knowledge sharing to scale. Of course, there's a lot to learn and a lot to improve there. But this is important work.

The last thing that you mentioned was the Better Evaluation platform. And we have been working on this quite a lot. So, what is it exactly that the Better Evaluation platform offers in terms of resources? And how can we make it even better?

Patrizia Cocca:
[00:21:29] Yeah, I think I already covered the resources a little bit, but this year we launched new ones. Because we spoke so much about training and capacity development, I think it's really pertinent to mention that you led the creation of a directory of academic evaluation programs. That is a great resource for anyone who wants to start a career in evaluation or upskill professionally. We posted it online and launched it this year, together with a global directory of trainings offered by the GEI and its partners; these are recurring trainings in which people can participate. So, I think this is an excellent step forward, providing additional information and tools to people interested in evaluation and M&E systems in general. As I mentioned, because the GEI is really about strengthening M&E systems at the country level, we are now focusing on, and hopefully will soon launch, another section: a compendium of M&E system resources that will take this a little further. We also have some specific audiences in mind, so this year I will also focus on developing a dedicated portal for young and emerging evaluators. I think this is great because it will be a section in which content is curated with this specific audience in mind. Another section will cover how to work on M&E systems and evaluations in fragile contexts. So, I think this is a good step ahead. There's still much more that can be done, but for the moment, this year, we will be focusing on this.

Jos Vaessen:
[00:23:42] Excellent. This is very useful, and I hope people will want to check out Better Evaluation. Let's do a little blue-sky thinking. Suppose we suddenly had a lot more resources: a big partner really interested in beefing up our knowledge repositories on monitoring and evaluation, and on M&E capacity development. What are some of the things we could do to make the platform even better?

Patrizia Cocca:
[00:24:11] Yeah, that's the dream question, right? Moving forward, there is one thing that, for me as a KM practitioner, is very important. So far, lessons learned are mostly captured once the program or project is over: you look back and make a summary of what stands out in your opinion, good and bad. We have seen lots of lessons, and unfortunately many of them have become very familiar, because they sound very similar from one program to the next. And although there has been progress and uptake of these lessons, sometimes it makes me wonder: are we really doing the right thing? So, I think knowledge management and knowledge sharing in the future, at least in the future of the GEI, will need to be more intentional in the way we create knowledge. Just as in an evaluation you have questions in mind and try to answer them as you go, I think KM has to do the same: it has to start with a knowledge need in mind that you want to fulfill. Then, by the end of the project or program, you can look back and ask: what did we learn that we can do differently in the next phase and that can inform our future? And then, of course, if we've got a lot of money, we can really have fun. I think AI has made giant steps and could become really handy in sifting through the thousands of documents, publications, and knowledge pieces we have, which are often overwhelming for somebody starting to search for information. So possibly that would be the next step.

Jos Vaessen:
[00:25:58] So, basically, we need an improved ChatGPT connected to M&E knowledge repositories.
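
[A rough sketch of what "connecting" a model to a knowledge repository could mean in practice: a retrieval step that, given a question, finds the most relevant documents in the repository and would hand them to a language model as context (retrieval-augmented generation). This is only an illustration of the idea mentioned in the conversation, not any actual GEI system; the document names and texts below are hypothetical placeholders.]

```python
# Illustrative retrieval over a knowledge repository: rank documents by
# TF-IDF similarity to a question. The documents dict is a hypothetical
# stand-in for a real repository such as betterevaluation.org.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "nec_turin_agenda.txt": "Goals agreed by participants to advance national M&E agendas.",
    "ipdet_overview.txt": "A global training program for evaluators and M&E specialists.",
    "evaluation_ethics_guide.txt": "Integrity, candor, and speaking truth to power in evaluation.",
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the names of the repository documents most similar to the question."""
    names = list(documents)
    vectorizer = TfidfVectorizer()
    # Vectorize the documents and the question in the same TF-IDF space.
    matrix = vectorizer.fit_transform([documents[n] for n in names] + [question])
    doc_vectors, question_vector = matrix[:-1], matrix[-1]
    # Rank documents by cosine similarity to the question.
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]

# The retrieved documents would then be passed to an LLM together with the question.
print(retrieve("How should evaluators handle ethics and pressure from stakeholders?"))
```
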

Patrizia Cocca:
[00:26:04] That would be awesome.

Jos Vaessen:
[00:26:05] And then we can just take a holiday. No, there's a lot of work to do yet. This has been very interesting.

Dugan Fraser:
[00:26:13] Thanks for listening to Powered by Evidence. I hope you enjoyed the discussion. This is our pilot season, so we'd love to hear what you think. Please join the conversation. You can find us on Twitter and LinkedIn or leave a comment on the podcast page on the GEI website at globalevaluationinitiative.org. And don't forget to subscribe wherever you listen to your podcasts.