Hello and welcome to one of a series of podcasts exploring key issues and areas of interest in impact evaluation today. We hope you enjoy the podcast, and please don't forget to Tweet your thoughts at #impactframeworks. Thank you for listening.
I'm Imelda Bates, I'm a Professor in Clinical Tropical Haematology at the Liverpool School of Tropical Medicine. But the reason that I'm contributing to this podcast is because I also head up a research team at a centre called the Centre for Capacity Research. So we generate evidence and do research around research systems in lower and middle income countries, with a particular focus on Africa. I've been doing capacity strengthening in Africa for 35 years or so. I used to live in West Africa, and most of my research is based in Africa, although I am physically sitting in Liverpool.
Hello, I'm David Phipps, I am the assistant vice president of research strategy and impact at York University in Toronto in Canada. And I'm also the network director for a network we call Research Impact Canada, which is 21 universities all collaborating on methods and processes used to help our researchers and students and their partners work together to maximise the impacts of their research.
And finally we have Katharine Wright, who is the assistant director at the Nuffield Council on Bioethics, and was responsible for the council's latest project looking at how research can be conducted ethically in global health emergencies, which made a number of recommendations to funders in particular as to how they could do more to support ethical research. She is also on the advisory board of the Rich project, which brings together researchers in Kenya, South Africa, Thailand, and the UK to look at critical gaps in ethics guidance for responsible research with women, children, and families in low income settings.
Hello and welcome to one of the four podcasts discussing various aspects of impact frameworks and cultural change. My name is Saumu Lwembe and I'm a senior programme manager at the National Institute for Health Research. Today we will be discussing issues surrounding the impact and impact assessment of global health research. I'm joined by three discussants who have a wide range of expertise in impact frameworks, knowledge mobilisation and the ethics of impact assessment. Their contributions will undoubtedly help shed light on some of the issues that the NIHR and other global health research funders might need to consider when it comes to impact assessment in global health research.
So Imelda, perhaps I could start with you. Please tell us about your work on strengthening research capacity and laboratory systems in low and middle income countries; what does that involve in particular?
Well we actually do work on strengthening systems, not just for laboratories but also research systems, and we use a similar approach for both, in that we work very closely with partners on the ground and we have a focus on the institutions. So capacity systems are generally thought of in three levels. One is the individual level, and in research systems that would be the researchers and the research teams. Then we think about the institutional level, which is the organisations and institutions where researchers work. And then we have the societal level, which is a sort of broad level that includes the network of researchers and research users. So we always think about those three levels to start with. But our research focuses on the institutions, because they're in the middle of those three levels, so they're influential up and down the system. And we realised a long time ago that the evidence to support how you actually do research capacity strengthening was very weak. And although a lot of funders are putting lots of investment into building research capacity, the evidence to guide the different models, to see which ones were most effective, and to measure the impact of those models and where the investments were going, was very poor and virtually non-existent. So for the last 10 years plus, we have been trying to generate better evidence about how you can build capacity to do better research, and essentially to set up research systems and strengthen them. So that's the broad way that we approach things, and then we do practical projects where we actually work with institutions: we talk to them about what their goal is for their institution in terms of capacity, where they want to be over the next five or 10 years, and then we use a standard approach, a five step approach that we always use when we do these capacity strengthening programmes.
And after we've defined the goal, we then carry out an assessment against a benchmark, and then we find the gaps together with the partners on the ground, and then we develop a joint action plan and move into a sort of normal project cycle where we implement the action plans, develop indicators to see how they're progressing, and then advance and change the action plans as the capacity is slowly built over time. So that's the theory; in practice there are lots and lots of challenges, which I'm sure we will come on to discussing later. But every different programme that we work with enables us to learn more, to generate new evidence and to publish, because we use the same tools and the same approach no matter what capacity assessments are needed. So whether it's, for example, a laboratory that's testing insecticides and now wants to become an accredited, certified laboratory, or a school of public health that might want to set up an excellent programme for their early career researchers, we can use the same approach and the same tools across all those different contexts, which is really helpful because it enables us to learn much more quickly than we could if we were just dipping in and out of various projects with different tools and methods.
You did mention the five step approach to capacity strengthening programmes; do you mind shedding more light on that?
On paper they sound very simple. So the first thing, which we found was often almost completely neglected, was to sit with the partners on the ground and work out the goal of their capacity strengthening project. Where did they want to go? What did they want to be? What was their vision for their new laboratory or for their new school of public health? So we sit down with them and work out with them exactly what it is, and we write it down. What does that new capacity look like? What will they be able to do in five years that they can't do now? And then we try and map out a pathway to achieve that. It's a bit like a theory of change, or a logic model. And that actually takes quite a lot of time, because people often haven't thought in depth about what they want and how they're going to get there. And then when we've got that vision mapped out and documented, we look around in the literature and through our own resources, which are now very extensive, to see if we've got a framework or a benchmark that we can use to go in and do an assessment with those partners on the ground. And it's really important to have a benchmark, because none of us know everything about such a complex situation. So we have to trawl through the literature, talk to experts, look at published frameworks and standards, and then bring all that together to design a benchmark that we can use when we go and do an assessment. For laboratories that's much easier, because there are very good standards: ISO standards, certification standards. So for laboratories it's much easier to have a benchmark. For research systems, because we've done so many now, I don't know, 30 or 40 institutional assessments of research capacity, we've developed a sort of master benchmark which we can pick and choose from when we go and do assessments on the ground. So the first step is to set out the goal.
The second step is then to go and do this assessment using a benchmark, and once we've done the assessment we can identify gaps. So the next step is to turn those gaps into an action plan to fill the capacity gaps. And interestingly, no matter what the setting or project is, we've found that about two thirds of the gaps can be filled using no resources at all, or very few resources. So for example it might require new policies to be drawn up, or student handbooks to be produced, or PhD seminars to be instituted. So many of those gaps can be addressed without external funds. And that's really helpful, because it leaves about a third of the activities needing some sort of external support, so it helps the institution to focus, for example, their grant applications, or their applications to government for additional funding, on very targeted activities that they have identified. Along with the action plans, we work with the partners on the ground to identify indicators that we can measure as time goes on, to see how those gaps are being filled. That's also quite a time consuming step in this five step process, because often the indicators are difficult to measure; they might well be qualitative rather than quantitative, and so sometimes we need to invest in particular skills, people with special skills, in order to be able to collect good quality data against the indicators. And then the final step, the fifth step, is to review the action plans, see what's been achieved and what gaps have been filled, and revise the plan so that we bring in new gaps to address. So it then becomes a rolling cycle of monitoring, tracking and introducing new activities, and essentially that can go on for many years, or it could be short and sharp, in which case it's obviously much more difficult to measure outcomes and impact. But at least we know that our partners are on a trajectory to achieving the capacity that they set out in the first place, in their goal.
David, as an expert practitioner in impact assessment, how do you think it will be possible to assess the impact of work like Imelda's? What are the kinds of things that we should be mindful of?
Well thank you very much for this invitation. Thank you, Imelda, for presenting the project that you have ongoing. I would be delighted to work with Imelda in the future on collecting the evidence of impact. First off, because she said some very important things: identify partners, engage with them, understand the gaps, put in action plans, identify indicators. Those are very important. And then she talked about the time, possibly over years, that it would take to be able to collect the evidence of impact. So I think Imelda's project is well set up, and she's conceptualised a lot of the elements that are required to be able to collect the evidence of impact. You will note that I have not said impact assessment. That is a very assessment driven paradigm, with the UK's Research Excellence Framework driving those concepts. I don't like to talk about assessment, I like to talk about collecting and communicating the evidence of impact. Because it is less about you're good and you're bad, that's an assessment, and it's more about how do we know if change has happened? And one of the things that we see in assessments is that it's often the university or the researcher that's being assessed. But it's not the researcher who's making the impact: it's not the researcher that's making products, industry does that; the researcher's not developing policy, government does that; and the researcher isn't delivering social services, those are done by community organisations. So we need to go to those people where the impact is being felt or by whom the impact is being made, because those are the people from whom we collect the evidence of impact. Impact happens when the outcomes of research are peer reviewed and published; that's great, and those have a lot of impacts. But that's not what we're talking about here.
What we're talking about is when research is taken up by an organisation that can do something about it. If they take it up, they're going to evaluate it, and if they say it's any good, they're going to implement that research evidence into a policy or a product or a service. And ultimately it's that policy, product or service that has an impact on the end users. So when the gaps are identified in Imelda's research, when action plans are developed from those gaps and taken up by the partner university, and those action plans are implemented, then Imelda, over time, can see if the indicators are telling her that they're working. I also like to say, when we're thinking about collecting the evidence of impact, that we use mixed methods, so it's qualitative and quantitative measures that we're looking at, and Imelda's indicators are likely both quantitative and qualitative. So if we think of a research office, and I happen to run a research office in a university, then we want to get the stories of successful practices and the stories of successes, but we also want to look at the annual reports. We want to look at the research income over time. We want to look at how many partner organisations have been engaged. So we want to have both qualitative and quantitative indicators. But what Imelda illustrates very well with her five steps is that she thinks about this from the beginning. She doesn't start a project unless she's got a good conceptualisation of how to plan for that impact. And it's that planning for the impact that ultimately is going to engage partners and, within the partners, the individual stakeholders to be interviewed for qualitative data, and the documents to be collected for quantitative data. That is the evidence of the impact that is experienced by the partner organisations.
I think arguably we could say that impact assessment of UK research is well established, or well understood, or actually let's say impact assessment of research within the global north is well established, well understood and arguably not always done very well, but done within a known context. Is impact assessment within a global health context well established, understood, or done very well, for work like Imelda's, do you think?
I'm going to guess here, because I certainly haven't done a systematic review of lower and middle income country impact schemes. Let me first comment that I think the REF is well understood; I don't think that the impact of research is necessarily well understood, and the REF is only one example of ways to go about that. I think the more we get funding programmes, global health research programmes, that are focused on the benefits to the lower and middle income country end users, and the more we get researchers like Imelda who understand these processes, the more we will see good structures to collect and communicate the evidence of impact. But impact assessment in industrialised nations, as exemplified by the REF, is I think only one way of conceptualising impact assessment.
Okay, so Katharine, building on your expertise and years of experience in research ethics, what do you think are the ethical issues around this particular area of work?
I'd like to do a little bit like David did, and start by pushing back a little bit on the language, and in particular on how we think about ethics. Because I think for a lot of listeners the word ethics is synonymous with ethical review, or possibly getting through ethics, or the hurdle of ethics, or other perhaps less complimentary descriptions. I'd like to suggest that really research ethics needs to be seen much more holistically, as being about the values that underpin the whole enterprise of research from start to finish. And if we think about ethics in that much broader way, that's got real implications for who we think of as being responsible for research ethics. So we think about research ethics committees as having this particular role, but if we think about ethics in a much broader, value based way, then that also brings in the role of funders and research institutions and publishers and so forth. And it's been really interesting, I think it's a real privilege being the third speaker, hearing some of the examples that Imelda and David gave, to see that (unclear 18:40) out of the descriptions they gave. So I'd suggest a key ethical issue that underpins research being carried out in a low income country, even in partnership with high income researchers, or funded by a funder from a high income country, or both, is thinking about whose voices are being heard all the way through. So starting with the question of what research is actually taking place, who prioritised it, whose interests and perspectives are held to matter, and then thinking about the impact and what's really making a difference. And it's really great to hear Imelda describing that process of sitting down and setting goals together.
Because in some of the work we've done at the Council, looking at how research takes place in emergencies for example, we hear accounts of how people want to have ethical partnerships, but in practice, because of the way funders have very short timescales, or because of other pressures, effectively the high income partner ends up setting many of the parameters, only bringing low income researchers in later. And I think that sense of how you sit down together to do that kind of process, and make sure it is a genuinely shared endeavour, is a fundamentally ethical question about this kind of research. I was really struck when I was preparing for this podcast, looking at the NIHR global health principles, by how those emphasise both equitable partnerships and community engagement. I thought that was really important to see set out there, and I think the next level challenge is thinking about how the procedures any funder uses can either facilitate or perhaps act against those kinds of principles. And then moving specifically onto this idea of impact: I think the focus on impact is again an ethical imperative, because we shouldn't be doing research if we're not aiming to make a difference. We're asking our research participants to give us time and energy, and we need to actually make sure that it does make a difference, and think about who it's making a difference for, and the extent, again, to which those voices are coming into play. And then there was something I was delighted to hear David talking about, in terms of thinking about who does what afterwards. The researchers create evidence, but what happens after that, who's going to take it on? And I'd suggest another ethical consideration of research, from the beginning, is thinking about the buy in from governments and research institutions to fund, in a sustainable way, the beneficial practices identified through the research.
Again, I suppose I'm struck by Imelda's point that actually two thirds of what's found in her projects can be implemented without extra funding, and that's a very positive thing to hear. But still thinking about that other third: is the buy in there from the start, so that we can be convinced it's going to have that kind of impact, or at least have a good hope of it? And then, coming back to ethical review, because I think it is important to recognise that's an important part of the process. I think at best it can improve research by highlighting issues that haven't been thought about. But I think in practice, if the ethical review process itself is not well managed or is not well funded, and that will often be the case, it can seem disproportionate and unhelpful, and that can give ethics a bad name in a way that is really, I think, very unhelpful all round. So I think a focus on how those processes can be made more responsive, both by ethics committees and by researchers themselves taking this broader view of ethics, could be immensely helpful.
So I was quite struck by your point that ethics should be everyone's business and include all voices, and I quite agree with that, rightly so. But my question is, in the context of global emergencies like the Covid-19 pandemic that we are all experiencing, how does one reconcile the tensions between a rapid impact assessment and the need to meaningfully involve all voices, especially those of the communities and patient groups, which we know takes time and resources?
Absolutely, I mean that's the absolute classic challenge, I think, of research in emergencies, and actually, ironically, the Nuffield Council published a report on this three days before Covid was declared to be a public health emergency of international concern. And we drew heavily on the experiences of Ebola in West Africa as part of our work. I think the bottom line is that one has to keep these values there, but one has to be pragmatic about how they're implemented. So on the community engagement side, part of our project was talking to those who were involved in community engagement in Liberia during the Ebola outbreak. And the person we spoke to there kept talking about the need to learn and adapt: you can't actually start with a shiny new community engagement set up before you start, because there isn't time. But you have to build the research in a way that there is space to listen, and then to change in response to that input. And it was highlighted that one of the most dangerous things is to start asking people for input and then say, sorry, it's too late, we can't change that, because that's highly disrespectful of the kind of input you've had. But I think what was striking in the Liberian context is quite how the community engagement there actually turned round what was a very distrustful approach to research into actually very successful vaccine research. So community engagement isn't just a nice to have in an emergency, it's actually essential.
David, thinking of your work on impact literacy, firstly can I ask, what does impact literacy really mean?
Certainly, impact literacy is a concept that I developed with Julie Bayley from the University of Lincoln. Impact literacy is a state of knowing. So I want you to imagine a Venn diagram. One of the bubbles is how: do we understand how to create impacts from research? The other bubble is what: do we understand what those impacts are and the methods to collect the evidence of impact? And the third bubble of the Venn diagram is who: who are the people that are doing this work, and what are the skills that we need? So if we understand how to create impact, if we understand what impact is occurring, and if we understand who the people are and the skills that we need, the intersection of those three we call impact literacy.
So are there any gaps or areas of improvement in knowledge mobilisation in a global health research context, that we need to pay particular attention to?
I think we've covered a lot of it with Imelda's presentation, so I'll just reflect on some of those points. The first question on collecting the evidence of impact, and I've already mentioned it, is: who's doing this work? Who is collecting the evidence of the impact, and, reflecting on what Katharine has said, which are the voices that need to be heard throughout? So stakeholder engagement: engaging our lower and middle income country partners and the stakeholders in those jurisdictions just as a consultation group is not a very authentic way of engaging. So it means incorporating those voices as co-applicants, engaging them through the research, and then having someone embedded in those systems to be able to understand where the evidence is and how to interview some of those subjects authentically and in the right voice. So the question is, who is doing that work? Another question that I have in this space: it's one thing to get impact arising from a particular funded piece of research, and that's very important, but that's not often the only potential. If we discover through a piece of research an intervention in one city in a lower middle income country, well, whose job is it to scale that through other cities, so that the impacts of research can be scaled and felt in other jurisdictions, throughout that country or in other countries? Whose job is that? That's a piece that is often not thought through very carefully. And then I would say we've got a couple of gaps. One gap, and I see this in a lot of funding, is that if we draw a pathway to impact, we tend to fund activities in the boxes. What we don't fund is the arrows between those boxes. And the arrows, how you move from one stage to another in your impact pathway, that's the knowledge mobilisation. It's helping mobilise that knowledge down the pathway, so we have to think about funding those arrows as well as funding the boxes.
I think also, and Katharine talked a little bit about this, is there funding for those pieces in Imelda's programme that can't be implemented with little or no funding? So what does follow-on funding look like, and is it really funding to the researcher, who is often not in the lower middle income country, or are we actually looking at funding the implementation of that evidence so that it can have an impact? That would be funds going to the partner organisation. And then finally, to take away what Katharine said, making sure we know whose voices need to be heard through the process.
Imelda, I know you've mentioned in your previous conversation some of the challenges that you face, but I want to explore that a bit more. So what would you say has been the main challenge for you in research capacity strengthening in global health, and in particular, are there any tensions between funder practices vis a vis the reality of capacity strengthening work on the ground?
So our main end users, apart from the organisations that we're working with in the lower middle income countries, are research funders, and the reason they're very important end users for us is because we're generating learning all the time about how to fund research capacity strengthening programmes: what models work well, and particularly how these different models have an influence on the research culture within institutions, which is becoming an increasingly common focus now. So one of our main challenges has really been to encourage the funders to use the evidence that there is. Although it's not very robust, and there's not a lot of evidence about how to build research capacity, there is some, and it is increasing rapidly. And we really would like to encourage funders to use that evidence to guide how they design programmes and how they invest in them. A huge part of that is how they plan for impact from the start, exactly as David has mentioned, and, as Katharine's also mentioned, bringing in all stakeholders. So I think funders are very influential in how these programmes can be done better, but they're not really using that influence; they're not picking up the evidence as fast as I think they could, and they're not incorporating it into their calls. And also they're not really providing the sort of standard of guidance and resources that would help researchers and research teams to build research capacity and to support others to do that. And when I talk about research teams, I really do mean that research is generated by more than just researchers. There are people who are absolutely critical to generating research, such as programme managers, grant accountants and laboratory technicians; they're all a key part of the research system but they're really overlooked and neglected in terms of training and courses and opportunities.
So again it builds on what Katharine and David were both saying about having to take this holistic view, and trying to encourage funders to be the sort of guiding light in setting very good standards for doing research capacity strengthening programmes.
So that's one of my main challenges, I think: encouraging funders to use this evidence to build better programmes. And one of the other key challenges is that there are very few people actually generating evidence about how to build research systems and how to strengthen them. And those researchers are fragmented across the globe; we have a network but it's a bit tenuous, and they publish in all sorts of different journals because this work crosses sectors: we can learn from education, we can learn from the natural sciences. So I think the fragmentation of the research is also problematic, and trying to bring together a community of researchers and practitioners who are interested in generating evidence around these pathways to impact for research would be really, really helpful.
Katharine, you did mention earlier an observation, and that is something that we've observed as well and are trying to address: the tendency for high income partners to set priorities and then bring LMIC partners in later. I want to explore that a bit, especially as the issue of shifting the centre of gravity and addressing power inequalities in global health research is coming up quite a lot in many conversations. What are your views on this, especially from an ethics angle?
Both these points, about the responsibility of funders and how we, long term, shift the centre of gravity as it were, not only for leading the research but even for funding the research, to the place where the research is taking place: I think these are at the heart of what it is to conduct research ethically in these circumstances. So I'd really like to echo some of the comments Imelda was making; it's a real pleasure to hear both the points she's making and the description of the research she's done. In terms of thinking about how funders test themselves, I think about the way the day to day cogs of their practices actually make it possible, or make it difficult, for these partnerships to be genuinely equitable. I think you can start off with a great intention that a project will be equal, and there'll be a call for a partnership and so on. But then, because of the practicalities of timescales, and because on the whole in high income countries you're much more likely to have the time to be able to turn round applications quickly and so on, it can, with the best will in the world, end up being led primarily from the richer country institutions. So it's really thinking about all the things that appear not to be ethical practices at all, the day to day nuts and bolts of funding practice, but which actually can make a real difference to how ethically the research is conducted. And we can apply that to things like peer review processes for applications as well: who's involved in those, how much time they have to turn them round. I also wanted to pick up one of the things that David was saying about looking to future sustainability. I think again that picks up this idea of shifting the centre of gravity.
Imelda's pointed out that about two thirds of what's identified in one of the assessments or capacity strengthening exercises she does can be changed locally without the need for extra funding, but then you have to apply for funding for the remaining third. If that carries on being programme funding, if that's external funding, that's not going to make this long term shift. I think it's about getting buy in from governments, but also about longer term thinking about how institutions in low income countries can apply directly for funding without having to rely on partnerships and so on. And then ultimately looking at, for example, the African Academy of Sciences, which I know has partnerships with Wellcome whereby some of Wellcome's funding is actually directed through the African Academy of Sciences. I think it's looking longer term at those kinds of partnership models amongst funders too that will gradually make this kind of change.
How best can we support an impact culture in global health research?
I think really it's just to reiterate what has been said before. One is to plan from the start for impact. And that doesn't just mean in some sort of nebulous, non specific way; it means actually sitting down and thrashing out how you're going to get from A to Z: what is the goal, where are you now, and what are the steps that are going to need to be taken to get you to the end where there is impact? To do that involves a large number of players, so you have to bring in, holistically, everybody who is impacted by that pathway or can contribute to it: not just the people generating research but all of those impacted by it, those funding it, those using it. So I think my two messages would be to plan meticulously from the start, accepting it may not work out like that, and be prepared to be flexible; but also to bring everybody on board and keep them on board, and listen to them and act on their input all the way through.
I'd just like to introduce a new concept around power, and we've talked about it from all of our perspectives. How do we authentically engage the voices of lower middle income countries in the actual decisions of who gets funding? There is an active community engagement and inclusion programme at NIHR, and what I would encourage is this: rather than just adding community members to an otherwise purely scientific peer review table, how do we actually create structures where those community reviewers have decision making power, not just informing a decision made by the majority of others at the table? How do we create structures that actually give power to those community and patient and public engagement voices? So that's the piece that I think is not on researchers, but on the funders and the partners, to be able to authentically think through.
I would very much, I think, reflect and add to what both Imelda and David have said there. If we think about funders as having ethical responsibilities for the conduct of research, then recognising those power differentials and finding ways to account for them, in the way that David describes, is probably the first task of ethical research. I think Imelda described very nicely the starting points for a particular project, or a particular scheme. If we think in terms of accounting for those power differentials and really taking action to start minimising them, that's the role of funders; it's something that they could be doing and it would be great to see them developing it.
So very many thanks for everything today, David, Katharine, and Imelda. I think it's been a very enjoyable discussion. Don't forget the other podcasts and content available for download, and please keep the conversation going on LinkedIn or Twitter with the #impactframeworks.
Thank you for listening to this podcast, it’s one of four in a series exploring different aspects of impact culture. Please return to the website to discover the others. Don’t forget to Tweet us your comments and questions to #impactframeworks. Once again thank you for listening.