Quantitative Methods for Study Design and Data Analysis in Implementation Science Research
July 14, 2023
Speaker: Donna Spiegelman, ScD
Monday, June 27th, 2022, at 2:00 pm EST
Presented as part of the seminar series at the Institute for Advanced Medical Research & Training (IAMRAT), College of Medicine, University of Ibadan.
Information
- ID: 10134
Transcript
- 00:00<v Presenter>Okay, there we go. Okay.</v>
- 00:03All right.
- 00:04And this is where I wanted to start talking right here.
- 00:05So, to put...
- 00:08To put implementation science in sort of the context
- 00:12of the whole kind of public health scientific
- 00:15research pipeline, we think about efficacy trials,
- 00:21effectiveness, pragmatic and cost-effectiveness trials,
- 00:25implementation studies and dissemination studies.
- 00:29So things don't always work this way,
- 00:33but this is the idealized sort of research pipeline.
- 00:37And in efficacy trials,
- 00:40what happens is they're usually kind of phase
- 00:43three individually randomized clinical trials
- 00:46of investigational drugs and devices.
- 00:50And they're usually done in relatively high-budget
- 00:55research settings with lots of exclusion
- 00:58criteria and academic researchers and so forth.
- 01:02And they establish the biological efficacy
- 01:05of a particular drug or device.
- 01:08Should that be found efficacious,
- 01:11then we might move on to
- 01:13what's now called an effectiveness trial,
- 01:16and an often somewhat synonymous term
- 01:19is a pragmatic trial, and sometimes cost-effectiveness
- 01:23is also studied at the same time,
- 01:26and in effectiveness trials we might take that same
- 01:29drug and device, but now we're kind of interested
- 01:32in how well it works at the community level.
- 01:35So oftentimes effectiveness trials and pragmatic
- 01:38trials are cluster-randomized, say, by providers,
- 01:43provider practices, clinics or facilities, villages,
- 01:49neighborhoods and so forth, and on the exclusion criteria,
- 01:54it's encouraged that they be as minimal as possible
- 01:58so as to exclude as few people as possible who might be eligible
- 02:00for this treatment should it be shown to be
- 02:03effective and cost-effective.
- 02:05And they tend to be larger and maybe run
- 02:08for a longer amount of time.
- 02:10And then cost may be taken into account as well.
- 02:14Then should a particular intervention,
- 02:16now I've moved from the word drug or device
- 02:19to intervention because oftentimes a drug or device may
- 02:24be embedded within a much more complex program
- 02:27at the effectiveness stage,
- 02:30where we'd be looking not just at sort
- 02:33of the biological impact or health impact,
- 02:35but also at how well it can be delivered.
- 02:40In this classic pipeline,
- 02:41should a program or intervention be shown to be
- 02:46effective and cost-effective, then we
- 02:48might move on to an implementation study.
- 02:51And there we might be taking the program that was
- 02:54the multi-level program
- 02:56that may have been shown to be effective
- 02:59and cost-effective at this second level of research
- 03:03and be adapting it contextually, tweaking, adapting,
- 03:07modifying the program for new contexts such as
- 03:12from one country to another, from urban to rural, from,
- 03:16say, in the United States,
- 03:17from the North to the South and so forth.
- 03:20And then also experimenting potentially
- 03:24with cost-effective ways of implementing it to kind
- 03:28of streamline the delivery.
- 03:30Also at the implementation phase,
- 03:32we'd be looking at scale up and scale out,
- 03:35and these things could be done without,
- 03:39with primary endpoints not even being health outcomes
- 03:42at this point.
- 03:43They might purely be things such as adoption,
- 03:46reach and so forth.
- 03:48And then finally in the last stage, dissemination,
- 03:52that's all again about scale up,
- 03:54scale up meaning making the intervention more available
- 03:58in the particular context that was studied,
- 04:02to everybody within that context and everybody
- 04:04like those who were in that context,
- 04:07and scale out meaning extending it to everybody in other places.
- 04:12And again, there could be further
- 04:16adaptation needed at that point.
- 04:23Okay, so what is implementation science?
- 04:26A number of definitions have been posed.
- 04:29And maybe the one at the bottom is the simplest
- 04:32and maybe the one that I prefer best,
- 04:36implementation science is about determining what works
- 04:39in real-life, full-scale settings.
- 04:43There's also, say, the blue-boxed definition: a systematic,
- 04:47scientific approach to ask and answer questions
- 04:50about how to get what works
- 04:52to people who need it with greater speed, fidelity,
- 04:55efficiency, quality and relevant coverage.
- 05:00And then the middle definition I think is the one
- 05:03that's used by the NIH in the dissemination
- 05:07and implementation science study section
- 05:09that's recently been closed down
- 05:13and re-issued with some greater specializations.
- 05:20There, dissemination
- 05:22and implementation science
- 05:24was defined as the scientific study
- 05:27of programs and interventions which promote
- 05:29the systematic uptake of clinical research findings,
- 05:33so here it's hearkening to the pipeline I was
- 05:36just discussing, and other evidence-based approaches
- 05:39into routine clinical practice and public health policy,
- 05:43hence improving the quality, effectiveness,
- 05:45reliability, safety, appropriateness, equity,
- 05:48efficiency of healthcare.
- 05:51So hopefully that gives you some sense
- 05:53of what we're talking about here.
- 05:55It's not that there's a single uniform definition
- 05:59that's kind of universally agreed on by everybody,
- 06:03but it's definitely getting at not so much showing
- 06:07that interventions, programs and so forth
- 06:12are effective because that's already been done
- 06:15in these pragmatic and effectiveness trials,
- 06:19but at getting them to the largest populations possible
- 06:23in an efficient way in making sure
- 06:26that quality is maintained.
- 06:28So very practical, but also very challenging.
- 06:32So another piece of this in implementation science,
- 06:35since we're studying evidence-based interventions
- 06:39is that for implementation science studies,
- 06:41we call it the three Rs: Rigorous, Rapid and Relevant.
- 06:46So rigorous has to do with,
- 06:48even though we're studying very practical things,
- 06:51like implementation science is in some ways, you know,
- 06:55the outgrowth of what had previously been called
- 06:58program evaluation,
- 07:01we still wanna use state-of-the-art methods.
- 07:03The studies use, you know,
- 07:05formal power calculations for cluster-randomized designs.
- 07:11They can take into account multiple outcomes
- 07:15and the methodologies can use causal inference methods
- 07:19and all sorts of multi-level analysis methods and so forth.
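To make the power-calculation point concrete, here is a minimal sketch for a two-arm cluster-randomized trial comparing two proportions; the effect sizes, cluster size, and ICC below are illustrative assumptions, not values from the talk:

```python
# A minimal sketch, not from the talk: sample size for a two-arm
# cluster-randomized trial comparing proportions, inflated by the
# design effect 1 + (m - 1) * ICC. All inputs below are assumptions.
from scipy.stats import norm

def n_per_arm_crt(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Individuals needed per arm, accounting for clustering."""
    z_a = norm.ppf(1 - alpha / 2)          # two-sided significance
    z_b = norm.ppf(power)
    # Standard two-proportion formula under individual randomization
    n_ind = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return n_ind * (1 + (m - 1) * icc)     # inflate by the design effect

# Hypothetical inputs: 30% vs 42% uptake, 100 people per cluster, ICC = 0.05
n = n_per_arm_crt(p1=0.30, p2=0.42, m=100, icc=0.05)
print(f"~{n:.0f} individuals per arm, i.e. ~{n / 100:.0f} clusters per arm")
```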
- 07:25There's no drop off in the rigor
- 07:28in implementation science, necessarily.
- 07:31And in fact it's very challenging to be rigorous
- 07:33in these kinds of settings where the data
- 07:36are imperfect. So when we get onto rapid,
- 07:41we also need to get these answers very quickly
- 07:43'cause we're talking about urgent public health
- 07:45questions and we want to have our implementation
- 07:49science work be informative to policy development
- 07:54and formulation and promulgation, not coming afterwards.
- 07:59So in order to be rapid,
- 08:01we wanna make use of existing data,
- 08:04electronic health records, other sorts of records
- 08:08and move things along quite rapidly
- 08:12even though we're trying to maintain the rigor
- 08:15just as we would in a phase III randomized clinical trial
- 08:19in an academic setting.
- 08:21And then relevant, we wanna be answering
- 08:24the most important public health questions of the day.
- 08:27And those would be decided by public health leaders,
- 08:32policy directors and ministry of health officials
- 08:39at the different country,
- 08:42district and even community levels as well
- 08:44as the community itself.
- 08:46So it's different than in the case of, say,
- 08:48academia where somebody is a research oncologist
- 08:51and they're working on breast cancer and trying
- 08:54to figure out some new treatments to cure and prolong
- 08:58the life and quality of life of people with breast cancer.
- 09:01Implementation scientists wouldn't necessarily be
- 09:04choosing the topical area of interest.
- 09:07They would let the public health community
- 09:09make those choices.
- 09:11And then where we might come in is, okay,
- 09:13this is an important policy question,
- 09:16how are we gonna study this and get you some answers,
- 09:19rigorously and rapidly.
- 09:24So given all of what I've said,
- 09:29it might be evident that implementation science
- 09:32is somewhat different from epidemiology,
- 09:35clinical research and so forth.
- 09:38And at the study design level we have
- 09:41these sorts of considerations.
- 09:43So the first one is that implementation science
- 09:47is guided by implementation science theories,
- 09:49models and frameworks.
- 09:51What I mean by that is, there are social science
- 09:55theories of behavioral change such as CFIR,
- 10:00the consolidated framework for implementation
- 10:03research or RE-AIM.
- 10:06There's a number of them.
- 10:08And they guide the work in the sense
- 10:12that they help determine where we are in the pipeline
- 10:19of identifying barriers to full uptake at high quality
- 10:25of a particular intervention and
- 10:30what has been facilitating this so far
- 10:33in this particular context.
- 10:35And then figuring out how to expand it,
- 10:38how to adapt it in a new setting and so forth.
- 10:41And many of these things involve behavioral change
- 10:44and other sorts of human factors that are
- 10:48not typically the objects of study of clinical researchers,
- 10:53epidemiologists and biostatisticians.
- 10:57So implementation science brings in some new team members,
- 11:01namely social scientists, who might be psychologists,
- 11:07social workers, medical anthropologists,
- 11:13and then also economists because we still tend
- 11:16to be looking at cost from the sustainability
- 11:19point of view.
- 11:21So I think I've already mentioned
- 11:22that implementation science tends to intrinsically
- 11:25be multilevel, because in terms of developing
- 11:29and sustaining successful interventions
- 11:32to address important public health problems,
- 11:35we need to engage often the healthcare system policymakers,
- 11:39organizational leaders, healthcare providers,
- 11:43clients and their families and social networks.
- 11:49And social networks, I'd like to just say a word about,
- 11:52it's a little throw in here,
- 11:54but actually it's an area of research for my group
- 11:58and maybe other people who are participating
- 12:00in this discussion that
- 12:04on the provider and client level, at least,
- 12:10it's quite possible and it's starting
- 12:12to become increasingly documented
- 12:16that not all providers and not all
- 12:20clients necessarily need to receive an intervention directly
- 12:24in order for it to spread
- 12:27throughout a health system
- 12:29or throughout a community because people have
- 12:31social relationships and can influence one another
- 12:35in terms of the adoption of new practices
- 12:38at the provider level or the uptake of new
- 12:41interventions at the client and family, neighborhood,
- 12:45workforce and so forth level.
- 12:48So we're interested in leveraging these networks
- 12:51to perhaps make certain types of public
- 12:55health interventions be more cost-effective and have
- 12:59wider reach and sustainability.
- 13:03Another piece is that implementation science studies
- 13:07tend to be dynamic. Many of you,
- 13:10if you've worked in HIV, will know very well, say,
- 13:13the HIV treatment cascade,
- 13:16the TB treatment cascade and so forth.
- 13:19And then to, say, prevent
- 13:22HIV or to ensure the highest quality of life of people
- 13:26who are HIV positive,
- 13:28there are all these different steps along the way
- 13:31that involve different types of treatments, interventions,
- 13:35actors at different levels.
- 13:37And one of the things we do in implementation science
- 13:40is we might map those cascades and think,
- 13:43figure out where the weak points are and then figure
- 13:46out what interventions can we bring in to strengthen
- 13:50the success of the entire cascade by targeting
- 13:57its weakest points.
- 13:59So the timing of delivery of intervention components
- 14:02along the cascade
- 14:04can be as important as the delivery itself.
- 14:08And then as I mentioned, just as in effectiveness trials,
- 14:12you know, I can't say for sure there would never be
- 14:14an implementation study that
- 14:18was individually randomized.
- 14:20But in general the implication for design
- 14:23of these sorts of studies is that they tend to be
- 14:26group level assignments to study intervention components
- 14:30and they could be at the district, hospital,
- 14:33facility, practice, provider or community levels
- 14:36and even clients themselves can be group level.
- 14:40If you think that every client is a member
- 14:42of a social network and that by including them
- 14:46we're actually indirectly including their entire
- 14:49social network as well.
- 14:58Okay. I'm now trying to go on to the next slide.
- 15:00It's not behaving. Let me try again.
- 15:05Oh, there we go. So...
- 15:12So there's many study design options
- 15:14in implementation science and they depend on a wide
- 15:18range of factors.
- 15:19The factors are listed here what the research question is,
- 15:23the type of clinical or public health intervention,
- 15:26the type of implementation strategy, feasibility,
- 15:29cost in personnel, the setting, who are the stakeholders,
- 15:34what are the logistics, the target population,
- 15:37timeline, ethical issues come up and they in fact can
- 15:41be very different than those that we're used
- 15:43to in randomized clinical trials of investigated
- 15:47drugs and devices.
- 15:49And that's an area of active development
- 15:51that I'm quite interested in and there might be
- 15:53other people here who are interested in being involved
- 15:56in this as well.
- 15:57And then... And then funding opportunities.
- 16:04So... There we go.
- 16:08Okay, so... Sorry, I got this.
- 16:11Okay, there we go.
- 16:12So we have a number of study design options
- 16:16in implementation science and the rest of this talk
- 16:20is actually gonna be focusing on this aspect
- 16:23of things. In the first part I kind of set the stage
- 16:25by talking about some of the key issues
- 16:29in implementation science and then that of course
- 16:31informs what the study design options are.
- 16:36And Ike, I'm not monitoring the chat and I do welcome
- 16:39questions and comments.
- 16:41So if any are coming up, Ike, it would be great
- 16:44if you could throw them in because I'm just,
- 16:48I'm not seeing them at the same time I'm seeing my slides.
- 16:51So we talked about experimental study design,
- 16:55so those are usually,
- 16:57are there any questions or comments so far?
- 17:02<v Speaker>You go on,</v>
- 17:03we'll ask the questions at the end of the lecture;
- 17:05let everybody write down the questions, and when we open
- 17:11for question and answer, we can ask.
- 17:14Thank you. <v ->Okay, sure. That's great.</v>
- 17:15We can do it that way as well.
- 17:18So experimental, that's kind of also synonym
- 17:21for a randomized design.
- 17:23So randomization, as many of you probably know,
- 17:28is considered the highest form of, the highest type,
- 17:35the strongest form of study design.
- 17:38It allows causal inference
- 17:42in the simplest ways, with the simplest types of designs,
- 17:46and when we randomize by randomly assigning
- 17:49the intervention to one group versus another,
- 17:55we on average control for all sorts of confounding,
- 17:59ensuring balance between the two groups,
- 18:02so that imbalances,
- 18:07under the null, would not lead to incorrect inferences,
- 18:12and on average will give us valid estimates,
- 18:16it will control for various sorts of selection bias
- 18:20and we don't have any measurement error
- 18:23because we know which individuals or groups we gave
- 18:29the intervention to or not.
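As a purely illustrative aside on this "balance on average" point, the small simulation below draws a baseline covariate, randomizes repeatedly, and shows the between-arm difference centering on zero; everything in it is assumed for illustration:

```python
# Purely illustrative: randomization balances a baseline covariate
# *on average*, though any single trial shows some chance imbalance.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 2000
diffs = []
for _ in range(reps):
    covariate = rng.normal(size=n)        # e.g. baseline risk
    arm = rng.permutation(n) < n // 2     # random 1:1 assignment
    diffs.append(covariate[arm].mean() - covariate[~arm].mean())

diffs = np.array(diffs)
print(f"mean arm difference over {reps} randomizations: {diffs.mean():+.4f}")
print(f"SD of the difference (chance imbalance in one trial): {diffs.std():.4f}")
```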
- 18:31So we have several types
- 18:34of these experimental designs,
- 18:36some of which most people
- 18:38in this symposium would be familiar with: RCTs,
- 18:42randomized clinical trials,
- 18:44which are individually randomized.
- 18:46And then we use the acronym CRT
- 18:48for cluster-randomized trials,
- 18:50also called group-randomized trials,
- 18:52very common in implementation science.
- 18:55And then there's another type of cluster-randomized trial
- 18:59that's become increasingly popular.
- 19:01The stepped wedge design, and I'm gonna be talking
- 19:04in some detail about that. Our group here
- 19:07at Yale, at the Center for Methods
- 19:09in Implementation and Prevention Science,
- 19:13has done quite a bit of work on extending
- 19:17CRTs and stepped wedge designs.
- 19:19There's a very useful design called the MOST design
- 19:23that is becoming increasingly popular
- 19:26in implementation science that I'll talk about.
- 19:29And then there's the LAGO design, learn as you go design,
- 19:32which has been developed by our group that I'll also
- 19:35talk about briefly.
- 19:36And then, in the interest of...
- 19:42Yes. Okay. <v ->He is gone. He is gone.</v>
- 19:47<v Presenter>Okay, sure.</v>
- 19:48In the interest of rapid in implementation science,
- 19:53there are also quasi experimental designs.
- 19:56So these designs take advantage of certain features
- 20:00of the data so that,
- 20:04under certain circumstances that are very well
- 20:07defined and may or may not be valid, but often are,
- 20:10we can get inference that's almost as strong
- 20:13as that in the randomized designs without having
- 20:16to randomize. And randomization is an expensive
- 20:21and slow process and often may not be,
- 20:25I'm increasingly starting to think myself,
- 20:28the very best way to get the answers that we need
- 20:31in public health rigorously and rapidly.
- 20:36So in the group of quasi-experimental designs,
- 20:40we have pre-post designs,
- 20:42difference in difference designs,
- 20:44interrupted time series designs and controlled
- 20:46interrupted time series designs.
- 20:49And I'll talk a little bit about those as well.
- 20:52And then finally
- 20:54there's observational research which in my view has
- 20:57been underappreciated and underutilized
- 21:00in implementation science because there's such
- 21:02a big emphasis on, I think,
- 21:05probably the rigor and wanting to be able to make
- 21:07causal inference without having to make
- 21:11a lot of assumptions.
- 21:12So there's been a strong emphasis on experimental
- 21:15designs and so far very much less use
- 21:20of observational designs such as the classic
- 21:22cohort, cross-sectional and case-control studies
- 21:26that can be embedded in the ongoing practice
- 21:30of implementing public health programs
- 21:34and simultaneously evaluating them using
- 21:37observational data methods, in particular,
- 21:41causal inference methods.
- 21:43And then finally there are some other designs
- 21:46that you may have heard about that have come up
- 21:47in implementation science, hybrid designs
- 21:50and mixed methods designs,
- 21:52which I'll also mention as we go along.
- 21:57So, quite a lot of options, actually, as we can see.
- 22:07So, okay. All right, there we go.
- 22:12So here, this slide refers to these two citations
- 22:17that I have down here at the bottom and it's called
- 22:20the PRECIS,
- 22:23Pragmatic-Explanatory Continuum Indicator Summary.
- 22:27And it's a way of evaluating how pragmatic your trial is.
- 22:31And in some cases in the United States we have
- 22:34this pretty important funding mechanism called
- 22:37PCORI and they explicitly fund pragmatic trials.
- 22:42And if your trial isn't pragmatic enough,
- 22:47it's very unlikely to be accepted
- 22:50for the PCORI funding mechanism.
- 22:53And for rating how pragmatic a trial is,
- 22:57there are these various criteria,
- 22:59I've mentioned some of them already, so eligibility,
- 23:04and these all are on Likert scales,
- 23:06so the idea is, again,
- 23:07the fewer eligibility requirements, the better,
- 23:10recruitment, the more general and open the recruitment is,
- 23:14the better, the setting,
- 23:16the more community-based and population-based the setting,
- 23:20the better, the organization and so forth.
- 23:24So all of these things are used to evaluate
- 23:27how pragmatic a trial is and it's worth doing,
- 23:32if you're designing a study and
- 23:35you're really wanting it to be in the effectiveness,
- 23:40you know, implementation part of the continuum,
- 23:43you can get access to these papers and literally rank
- 23:48your design and experiment with different possible
- 23:52designs and try to get your study to be more
- 23:55and more pragmatic.
- 23:57And I highly recommend it because that's what we need
- 23:59in public health.
- 24:00We need pragmatic trials that will really tell us
- 24:04how well our interventions will work at broad
- 24:07scale in the full population in the community.
- 24:18So, and experimental designs can be pragmatic.
- 24:22That's why I showed the pragmatic slide first.
- 24:25Even a randomized controlled trial, an RCT,
- 24:28can be rated on the precise scale and be
- 24:31made more pragmatic.
- 24:33And then, as I mentioned,
- 24:34cluster-randomized trials and then stepped wedge designs.
- 24:38And here's a paper that was published
- 24:40in the Annual Review of Public Health if you wanted to,
- 24:43I'll read a little bit more of an overview
- 24:46of the study design options
- 24:47for dissemination and implementation science.
- 24:54So now I'm gonna talk about cluster-randomized trials
- 24:56and I'm gonna give an example of a trial
- 24:59that I worked on myself.
- 25:02I went a little too far. I think I did again, sorry.
- 25:07No, no. Okay, there we go.
- 25:08So cluster-randomized trials, this is like a graphic,
- 25:12it gives a graphical view of the difference
- 25:15between cluster-randomized
- 25:17and individually randomized trials.
- 25:19So over here on the left hand side we have
- 25:21an individually randomized trial and on the right hand
- 25:25side we have a cluster-randomized trial.
- 25:28You can see that they can have the same amount of people.
- 25:31So each one of these smiley faces is a participant
- 25:34in the study.
- 25:35But in a cluster-randomized trial, we randomized by groups.
- 25:40So there were four groups here.
- 25:42And then two of them became part of the intervention
- 25:45group and then two of the groups became part
- 25:47of the control group.
- 25:48And then of course every group member within each
- 25:51of those groups became part of the intervention group
- 25:54and so forth for the control group.
- 25:56And so that's the idea.
- 25:58And then when that is done,
- 26:00and oftentimes it has to be done
- 26:03because the intervention is actually applied
- 26:05at the group level.
- 26:06And even when it doesn't have to be done, pragmatically
- 26:10and in terms of rapidity, for example,
- 26:14it might make sense to have a study design
- 26:18that's cluster-randomized anyway, that's the way
- 26:21we would go and that's usually what I see.
- 26:25And the study design calculations
- 26:28and the analysis are somewhat different when we have this.
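A minimal sketch of what this group-level assignment looks like in practice; the clinic names and the 1:1 split are hypothetical:

```python
# A minimal sketch of group-level assignment: clusters, not individuals,
# are randomized, and every member inherits the cluster's arm.
# Clinic names and the 1:1 split are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
clusters = ["clinic_A", "clinic_B", "clinic_C", "clinic_D"]
shuffled = [str(c) for c in rng.permutation(clusters)]
assignment = {c: ("intervention" if i < len(clusters) // 2 else "control")
              for i, c in enumerate(shuffled)}
print(assignment)
# Every participant in clinic_A gets clinic_A's arm, so the analysis
# must account for within-cluster correlation.
```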
- 26:31So now I'm gonna go over a case study that I was
- 26:35involved with when I was
- 26:36at the Harvard School of Public Health that was
- 26:39in collaboration with Management Development for Health
- 26:42in Dar es Salaam, Tanzania, an NGO that's the PEPFAR
- 26:46implementing partner in the greater Dar es Salaam Region.
- 26:50And together we conducted the study called
- 26:52Familia Salama and it was a 2x2 cluster-randomized trial.
- 26:58So 2x2 factorial means,
- 26:59it's another design feature that can be used
- 27:02in any one of these types of studies.
- 27:05It's a very old idea, but it allows us
- 27:10to study multiple interventions
- 27:15or implementation strategy components at the same
- 27:19time in one single study.
- 27:22And there's no kind of statistical limit.
- 27:25You don't lose power.
- 27:28You could do a 2x2x2 or a 2x2x2x2 and so forth.
- 27:32The main limitation to how many things we can study
- 27:36at one time using this factorial approach is actually
- 27:39just logistics and feasibility.
- 27:42It gets very complicated and confusing
- 27:44when you're trying to run a study at scale
- 27:48at a population level and like every group like
- 27:51this one is doing this, this and this,
- 27:53and then the next one is doing some other combination,
- 27:57it can start to become unmanageable.
- 27:59But I also would like to bring it up and encourage
- 28:01people: at least if you are conducting a study
- 28:06and you have, say, you know,
- 28:08a well-funded study, you know,
- 28:11a population-based effectiveness or implementation
- 28:14trial, then for very little or no increase in sample size,
- 28:21you can basically add another factor and study two things,
- 28:26and it's like, why not?
- 28:28So just throwing that out 'cause it's underutilized
- 28:31as a design approach.
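As a sketch of how such a factorial assignment can be generated, here is a hypothetical 2x2 allocation at the cluster level; the ward names and the balanced split are assumptions for illustration (the actual trial described next used deliberately unequal ward counts):

```python
# A hypothetical 2x2 factorial allocation at the cluster level. The
# ward names and the balanced split are assumptions for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(7)
wards = [f"ward_{i:02d}" for i in range(1, 21)]           # 20 wards
cells = list(itertools.product(["CHW", "standard_care"],  # factor 1
                               ["option_A", "option_B"])) # factor 2
allocation = cells * (len(wards) // len(cells))           # one cell per ward
for ward, cell in zip(rng.permutation(wards), allocation):
    print(ward, "->", cell)
# Each factor's main effect is estimated using *all* wards, which is
# why a second factor costs little or no extra sample size.
```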
- 28:36But here we did this, where we compared
- 28:38an enhanced community health
- 28:41worker intervention versus standard of care
- 28:44to increase uptake
- 28:46of the World Health Organization's recommendation
- 28:49that all pregnant women should have at least
- 28:52four ANC visits.
- 28:54And then that was crossed with option A versus option
- 28:58B, which are two approaches to prevention of mother
- 29:02to child transmission of HIV.
- 29:04Now option B has been universally adopted among
- 29:08HIV positive mothers.
- 29:10So we had these two things crossed in this trial.
- 29:17And here is the name of the paper that's actually
- 29:21recently been submitted, actually just publishing
- 29:25on one of the two factors,
- 29:26"The impact of community health worker intervention
- 29:29on uptake of antenatal care:
- 29:31a cluster-randomized, pragmatic trial..."
- 29:33And as you can see, it's a very, very large trial,
- 29:36almost 250,000 women in Dar es Salaam, Tanzania
- 29:40who reported to ANC at least once
- 29:43and were found to be pregnant.
- 29:45And you can see that these kinds of big population
- 29:47based effectiveness and implementation trials are
- 29:51often highly collaborative and they involve a big team,
- 29:55working together in an international partnership,
- 29:58you can see that there are people from different places,
- 30:02both from, say, the host country as well as, say,
- 30:05the technical support partners and so forth.
- 30:09In our case we had people involved
- 30:11from Norway, Germany, the United States
- 30:14and Tanzania involved
- 30:15in this trial and here's all the different
- 30:17institutions that we all came from.
- 30:22And so here's a schematic of the design.
- 30:24And so this sort of intervention was implemented
- 30:30and rolled out at the ward level.
- 30:32So in Dar es Salaam,
- 30:35at that time there were three districts
- 30:37for the whole greater Dar es Salaam region
- 30:40and we included two of the three districts.
- 30:43And as many of you may know,
- 30:44Dar es Salaam is one of the largest cities
- 30:47in Sub-Saharan Africa.
- 30:49And within those two districts there were 60
- 30:53political wards and we randomized them to one of,
- 30:57you could say, four arms,
- 30:58where first the 60 wards were randomly
- 31:00assigned to either the community health
- 31:04worker intervention or not.
- 31:06And 36 were assigned to the community health
- 31:10ward intervention, 24 to standard of care.
- 31:14And then of those 36, 22 went to option B, 14 to option A,
- 31:21and then among and so forth.
- 31:22So you can see how that's divided up.
- 31:24And then you might be wondering, well,
- 31:26it's kind of imbalanced by ward, and why was that?
- 31:30This is a tricky aspect of balanced design
- 31:34in a cluster-randomized trial because the wards
- 31:38did not have the same populations,
- 31:41and the same expected populations of pregnant women
- 31:44who would be delivering during the study period.
- 31:47So we were trading off both kind of having
- 31:50a sufficient number of wards with having
- 31:53a sufficient number of women within the wards.
- 31:56And so you can see that what ended up happening was
- 32:02we expected to have around,
- 32:05in the first intervention we expected to have
- 32:08a certain number of pregnant women in option A
- 32:12versus option B.
- 32:14And what we observed was quite different.
- 32:17And then similarly for the community health
- 32:20worker intervention, we did our best to try to balance
- 32:25the number of women,
- 32:27pregnant women who would be in the community health
- 32:29worker intervention versus standard of care.
- 32:33And then what happened was we saw something very different.
- 32:38And then in addition you might
- 32:40notice something else which some of you who have
- 32:42actually run studies may have also seen that it can
- 32:45be quite challenging to specify what some
- 32:48of these input parameters are before a study is conducted.
- 32:52So one thing that happened was you can see
- 32:56that what we expected
- 33:01and then what we observed was
- 33:03we observed, you know,
- 33:04quite a few more pregnancies during the study period
- 33:09than were expected.
- 33:10We underestimated the fertility rate in this area,
- 33:13based on the population level data we had.
- 33:16And then you'll also see a little later
- 33:18on we also overestimated the HIV transmission rate.
- 33:24Or actually what we really overestimated was
- 33:28the proportion of pregnant women who are gonna
- 33:29be HIV positive.
- 33:31So as many of you know,
- 33:35the various programs that have been implemented to kind
- 33:38of end the AIDS epidemic have been quite successful.
- 33:41We're not completely there,
- 33:43but there were huge improvements that were made.
- 33:46And even during the study period,
- 33:48from the time we designed the study to the time
- 33:50the study was actually conducted,
- 33:53the HIV positivity rate went down substantially.
- 33:57I think we predicted that it would be around 12% based
- 34:01on data existing at the time when we designed
- 34:05and the study and submitted it for funding.
- 34:09And when we actually ran the study I think it had
- 34:12dropped to 6%.
- 34:14So when those kinds of things happen,
- 34:15they can really detract from the power
- 34:18of your study unless other types of adaptive study
- 34:22designs are brought to bear.
- 34:25But luckily, in our case,
- 34:26'cause we hadn't planned in an adaptive study design,
- 34:31we also had a higher pregnancy rate,
- 34:33so we compensated for the lower HIV rate,
- 34:38which of course is a wonderful thing,
- 34:40with a higher pregnancy rate,
- 34:41so we ended up maybe with around the power
- 34:44that we might have hoped to have had at the start
- 34:48with these input parameters being quite different
- 34:50than what happened in reality.
- 34:55So the results here,
- 34:59intervention significantly increased the likelihood
- 35:02of attending four or more ANC visits by around 42%.
- 35:06And the intervention also had a modest beneficial
- 35:10effect on the total number of ANC visits.
- 35:13It increased them by 8%.
- 35:15It wasn't successful in improving the timing
- 35:19of the first visit.
- 35:20'Cause another kind of secondary goal was to hope
- 35:23that women might become aware of their pregnancies
- 35:26earlier on and get an initial ANC visit
- 35:29even in their first trimester.
- 35:32And so what we concluded was that trained community
- 35:35health workers can increase attendance of ANC visits
- 35:38in Dar es Salaam and similar settings.
- 35:41However, additional interventions
- 35:44would be necessary to promote early initiation of ANC.
- 35:49And then the study also demonstrates that routine
- 35:51health systems data can be leveraged
- 35:53for outcome assessment in trials and program evaluation.
- 35:57I neglected to mention that among
- 36:00these 250,000 pregnancies in these 60 wards
- 36:04at that time there really weren't electronic health
- 36:08records in the facilities.
- 36:10So what was there were log books, where
- 36:15intake healthcare providers would be entering certain
- 36:19data elements at the different types of clinics
- 36:22that women were going to.
- 36:23And what we did was at the end of the day we had
- 36:27study staff coming in and we had like a database set
- 36:32up that looked just like the log book and they would
- 36:35come in and they'd enter all the data from the log
- 36:37book for that day.
- 36:39And we did that for all of those pregnancies.
- 36:41Now, ideally, as more and more healthcare systems
- 36:45around the world become reliant on electronic
- 36:48health records, that wouldn't even be necessary.
- 36:52In a study like this, you know, at full-scale,
- 36:54250,000 pregnancies over a period of around two
- 36:58years could be conducted rapidly and rigorously
- 37:02with no additional research data collection,
- 37:06which is kind of the goal.
- 37:10So that's an example of a cluster-randomized trial,
- 37:14and that's sort of an effectiveness implementation trial.
- 37:22And maybe I could say something about that,
- 37:24that the endpoint of, say, women having, say,
- 37:28four or more ANC visits during their pregnancies,
- 37:32there's no health outcome there.
- 37:34It's a pure implementation outcome
- 37:37where there's an evidence-based intervention,
- 37:40the WHO has reviewed the data very carefully,
- 37:43they've made this recommendation and presumably
- 37:46the idea would be by having four or more ANC visits
- 37:51and of course receiving quality care at those visits,
- 37:53that we would be able to increasingly bring down
- 37:57the maternal mortality rate,
- 38:00which has also happened all around the world,
- 38:02as well as the sort of under five perinatal
- 38:05and neonatal mortality rates.
- 38:07But we weren't even measuring those as primary outcomes
- 38:10in the study, that's already been established
- 38:13through effectiveness and efficacy trials.
- 38:16And in implementation science we're all
- 38:18about studying whether we can successfully implement
- 38:22a proven intervention, and in this case, at scale,
- 38:26that's the purpose of the study.
- 38:29So that's actually, it's, you know,
- 38:31when I first started doing this,
- 38:33having a biostatistics and epidemiology background,
- 38:35it's kind of shocking to think that you could propose
- 38:38a big study like this and not even have a health outcome,
- 38:42just be studying an implementation outcome.
- 38:45But you come to realize that's really what we need
- 38:49for the most part.
- 38:50And if it's possible to also look at the health
- 38:52outcome without slowing down the trial,
- 38:55increasing the expenses greatly, it's a great thing to do.
- 38:59And later on I'll talk about hybrid designs.
- 39:01But now we're gonna talk about
- 39:04stepped-wedge cluster-randomized control trials.
- 39:07So stepped wedge designs are popular
- 39:11for a number of reasons.
- 39:13So let me just explain what this schematic diagram means.
- 39:16So what happens in this particular case is the rows
- 39:22are clusters, so we have,
- 39:26or they could be groups of clusters, actually.
- 39:29And here there are five groups of clusters.
- 39:33The columns are time points.
- 39:35So what happens, let's just look at this first row.
- 39:40So this row, this group one, so there could be one village,
- 39:44say, that's in this group or there could be five
- 39:46villages in this group,
- 39:48but they were randomly assigned to pattern one.
- 39:51And what that means is that those villages assigned
- 39:54to pattern one, we take baseline data for everybody,
- 39:59but then at time two, which could be in six months, a year,
- 40:03two years, it's often, in my experience,
- 40:06something like four months, six months,
- 40:09all the clusters randomly assigned to pattern
- 40:13one transition to the intervention.
- 40:17And then for the next 1, 2, 3, 4, 5 periods,
- 40:20they're in the intervention group.
- 40:22Then those clusters assigned to pattern two,
- 40:26they stay on control for two time periods, and then
- 40:32at time step three they go onto the intervention group
- 40:35and so forth.
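The schematic just described can be written down as a design matrix; here is a minimal sketch, with 0 for control and 1 for intervention:

```python
# The stepped-wedge pattern as a matrix: rows are cluster groups,
# columns are time steps, 0 = control, 1 = intervention. Group g
# crosses over at step g + 1, matching the schematic above.
import numpy as np

def stepped_wedge(n_groups=5):
    design = np.zeros((n_groups, n_groups + 1), dtype=int)
    for g in range(n_groups):
        design[g, g + 1:] = 1            # group g switches at step g + 1
    return design

print(stepped_wedge())
# [[0 1 1 1 1 1]
#  [0 0 1 1 1 1]
#  [0 0 0 1 1 1]
#  [0 0 0 0 1 1]
#  [0 0 0 0 0 1]]
```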
- 40:36And so what you notice is we have baseline data on everybody
- 40:44and then by the end of the study,
- 40:47every cluster and every individual
- 40:49within those clusters has been assigned to the intervention.
- 40:53So two things happen.
- 40:56One is that oftentimes in implementation science,
- 41:00we're studying evidence-based interventions.
- 41:02So you could say,
- 41:04how could we withhold having well-trained community
- 41:07health workers from anybody?
- 41:09Well, you know,
- 41:12that's not necessarily an optimal thing to do, but
- 41:16a sort of compromise is that by the end of this study,
- 41:21all women living in all of the wards,
- 41:23if it had been a stepped wedge design, which it wasn't,
- 41:26would have access to this enhanced community health
- 41:29worker intervention, all the facilities would be
- 41:32trained to this new evidence-based intervention.
- 41:35So it addresses some of the ethical issues that might
- 41:41come up in implementation science
- 41:43when we're studying evidence-based interventions.
- 41:48What's in equipoise is not whether the intervention
- 41:51works or not,
- 41:52what's in equipoise is whether this implementation
- 41:56of the evidence-based intervention will work.
- 41:59So it's a little bit of a subtle difference
- 42:03from an ethical point of view and it's also why people
- 42:07are discussing and writing about new ethics
- 42:09for public health.
- 42:11And then the other thing that I wanna point
- 42:15out about stepped wedge designs that make it very,
- 42:17make them very rigorous from a causal inference point
- 42:21of view is that at each time point,
- 42:24which clusters and then which individuals
- 42:27within the clusters are in the intervention group or not,
- 42:30is completely randomly assigned.
- 42:33So when you contrast time two, these four to this one,
- 42:37it's a random contrast.
- 42:39And then time three,
- 42:40these three to these two is a random contrast and so forth.
- 42:44So between clusters at any given time point we have
- 42:49this fully randomized design.
- 42:51And then what we also have is this element of pre-post,
- 42:56because, say, we could,
- 43:00even with just this first row,
- 43:03we could estimate and test the effect
- 43:05of the intervention because we have at time one all
- 43:09of the villages or clusters assigned to time one are
- 43:13not in the intervention group and then we have
- 43:16five periods where they were
- 43:18and the before/after can be compared.
- 43:21And so what we worry
- 43:22about with the before/after design
- 43:24and why it's called quasi-experimental rather
- 43:27than experimental is because there's one other
- 43:30thing that's changing, if it's the same villages
- 43:34clusters and people or comparable people
- 43:39in this one row of all the clusters assigned
- 43:42to this row at this one time,
- 43:44they should be the same with respect to all time
- 43:47and variant confounders.
- 43:49But time is happening here,
- 43:51so there could be no intervention effect,
- 43:54but let's say between time one and time two,
- 43:57something else happened in the background like
- 44:00the government instituted a new program or some new
- 44:03drug became widely available
- 44:06or there is a natural disaster.
- 44:09So then comparing
- 44:11the before/after within the same clusters will be
- 44:16biased by these time varying effects.
- 44:18And so without these contemporaneous clusters happening
- 44:23at the same time, we can't control for those effects.
- 44:27So it's almost like an enhanced pre-post design
- 44:30where we're controlling for time varying effects
- 44:33through randomized contrasts.
- 44:38So it's a very strong design,
- 44:42and here's a good paper to read if you wanna learn
- 44:44about it a little bit more.
- 44:48Now I'm gonna give an example of a study I worked on
- 44:53that was a stepped wedge design.
- 44:56It was called early access, or rather it studied early access
- 45:00to ART in Swaziland, what's now called Eswatini.
- 45:04Its primary funder was
- 45:07the Clinton Health Access Initiative.
- 45:09Other funding sources were brought
- 45:11in, the Dutch Lottery and some other sources.
- 45:15And what we were looking at here was the impact
- 45:18of early initiation of ART versus national standard
- 45:22of care of antiretroviral therapy in Swaziland's
- 45:26public health sector system.
- 45:29And it's called the MaxART study.
- 45:31So what made this study a little bit different
- 45:35than the studies of the other big trials
- 45:37of early initiation or early access
- 45:40to ART that were happening around the same time was
- 45:44that we were looking at the impact not
- 45:47of controlling community HIV incidents,
- 45:51but we were actually looking at the impact of early
- 45:53access to ART on the clients themselves,
- 45:56the HIV positive participants who would under standard
- 46:02of care not be initiated to ART until later on
- 46:07in the development of their disease,
- 46:09when they started to develop different types of symptoms,
- 46:12which I know many people on this call would be familiar
- 46:14to when CD4 is dropped below 250 and then it was
- 46:18changed to 500 and so forth or certain symptoms
- 46:21and AIDS-defining conditions developed,
- 46:23that was the standard for many,
- 46:26many years until early access to ART happened.
- 46:29And it really wasn't known what the impact would be
- 46:34of early access on HIV positive people themselves,
- 46:38both in terms of their health outcomes as well
- 46:41as even more implementation type outcomes such
- 46:45as retention in care and this issue of, say,
- 46:48developing resistance if people are being initiated
- 46:51very early on when they're not showing any signs
- 46:54of illness and so forth.
- 46:55So again, because I think partially both
- 46:59for logistical reasons, which is another reason
- 47:01why we like stepped wedge designs,
- 47:03it was as Swaziland, Eswatini didn't have early access
- 47:08to ART when the study started.
- 47:10And so to train the providers to do this,
- 47:14to get the medications in at the scale and volume
- 47:18that was needed, to have the testing facilities scaled
- 47:21up and so forth,
- 47:23it wasn't possible to do that all and be ready
- 47:27on August 14th when the study started here.
- 47:31So by phasing it in, it made it possible.
- 47:34We randomly, in this case there were 14
- 47:37facilities included, so two in each one
- 47:40of these clusters and we were,
- 47:44randomly rolled them in to early access,
- 47:47giving us time to properly set up each pair
- 47:50of facilities to be able to implement early access
- 47:53in a high quality manner.
- 47:55And then, of course, at the end,
- 47:56all the facilities were in early access.
- 48:00And in fact, what I didn't mention,
- 48:02I mentioned this early on about like this was a high grade,
- 48:06you know, research quality study,
- 48:08there were extra resources put in and so forth,
- 48:11and somewhere in the middle of this,
- 48:13WHO decided everybody should have early access
- 48:17and Eswatini immediately adopted that recommendation
- 48:20and that was the end of the standard of care
- 48:22we were studying.
- 48:23So our power was compromised, not fully, luckily,
- 48:27but that's, this issue that I'm mentioning,
- 48:31that rapid is just such an important aspect
- 48:34in implementation science, I think it's an area of,
- 48:38you know, sort of research.
- 48:39We need good examples as well as possibly
- 48:42new methodologies of moving these studies
- 48:44along so that the policy, when policies are being made,
- 48:49the policymakers would actually have data
- 48:51from studies like this to inform their decision making,
- 48:55which wasn't the case here.
- 48:58So here is a published paper on the protocol
- 49:00if you wanted to learn a little bit more
- 49:02about the design of this study.
- 49:06And then the results.
- 49:08So this study was conducted
- 49:10between 2014 and 2017,
- 49:133,405 participants were enrolled.
- 49:18And the 12 month HIV care retention rates were 80% and 86%.
- 49:25So there was a, you know, it was an improvement,
- 49:276% retention means alive and remaining in care.
- 49:32So it's a comprehensive outcome that both includes sort
- 49:36of the implementation aspect of not losing people
- 49:40for coming in, getting their medications,
- 49:42being checked to make sure their disease isn't
- 49:45advancing and then also their health outcome,
- 49:47making sure they're still alive.
- 49:49So again, 80% to 86%, it's not huge, but it's still,
- 49:54you know, a nice improvement.
- 49:55And then the 12 month combined retention
- 49:58and viral suppression endpoint rates were 44% versus 80%.
- 50:05So that's very, very big.
- 50:06And you know, as we all know,
- 50:08getting people in ART really improves
- 50:10viral suppression rates.
- 50:11So that was shown to be very beneficial
- 50:14and highly significant.
- 50:16So we've considered this to be good news in terms
- 50:21of early access to ART also being strongly beneficial
- 50:25to the clients themselves, not just society as a whole.
- 50:29And at the same time we noticed
- 50:31that there were significant gaps
- 50:34in the healthcare system's ability to provide viral
- 50:37load monitoring with 80% participants in standard
- 50:42of care and 60% in early access each having
- 50:46a missing viral load.
- 50:48So that's an example of a stepped-wedge
- 50:50cluster-randomized design that both was looking at kind
- 50:54of a combined implementation health outcome
- 50:57as its primary outcome.
- 51:03Okay, I think I'm gonna, well, all right,
- 51:06so a little bit about, I wonder, I'm sorry,
- 51:11I just thought maybe I can get rid of this.
- 51:14Oh, okay, now I got rid of all these drawings, sorry,
- 51:17there were all these colored pencil things on here.
- 51:20Stepped wedge designs, when are they useful?
- 51:24When there's evidence in support of the intervention,
- 51:27or resistance to a parallel design where only
- 51:30half receive the treatment.
- 51:32Another aspect of stepped wedge designs,
- 51:34and this is on the ethical side,
- 51:37is that the treatment is often service delivery or policy change.
- 51:40And it's often believed
- 51:42that when what's being studied is a service delivery
- 51:46issue or policy change,
- 51:47we don't need individual informed consent.
- 51:51And then when the intra-cluster correlation is high
- 51:55or the cluster size is high.
- 51:57So I haven't talked about the inter cluster correlation,
- 52:00but that's a very important input parameter
- 52:03when we look at cluster-randomized designs.
- 52:07And what that measures, we call it
- 52:11the ICC, or sometimes you'll see it indicated
- 52:13by the Greek letter rho, is
- 52:17how highly correlated the outcome is,
- 52:21particularly the primary outcome
- 52:22within the clusters compared to between the clusters.
- 52:26So, let's say, if we were, let's say,
- 52:29in the MaxART study in Eswatini, if certain facilities,
- 52:35let's say, had very high mortality rates and then
- 52:38other facilities, you know at baseline had low
- 52:41mortality rates, that would suggest a high ICC.
- 52:44And when you have a high ICC, a lot of variability
- 52:48in the event rate between clusters, you lose power,
- 52:53in a standard cluster-randomized trial,
- 52:56you lose a lot of power.
- 52:57It's dramatic.
- 52:59In fact, the ICC like any other correlation
- 53:02coefficient ranges from zero to one and when it's one
- 53:06that means that the only variation is between facilities,
- 53:10there's a, you know,
- 53:11no variation between individuals within a facility,
- 53:14then your sample, your effective sample size
- 53:17is essentially the number of facilities, say,
- 53:1914 in MaxART, compared to, say,
- 53:233,405, which would be the effective sample size
- 53:27if there was no variation in the event rates
- 53:30between facilities and all the variation was
- 53:33just between clients.
- 53:35So when the ICC is high,
- 53:38you're gonna need a lot of clusters to get power
- 53:42in a cluster-randomized trial.
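A quick illustration of this effective-sample-size point, using the MaxART-scale numbers mentioned above (14 facilities, roughly 3,400 clients); the intermediate ICC values are hypothetical:

```python
# Effective sample size under clustering: n / (1 + (m - 1) * ICC),
# with m the average cluster size. The 14 facilities and 3,405
# clients echo the MaxART numbers; the middle ICC values are assumed.
def effective_n(n_total, n_clusters, icc):
    m = n_total / n_clusters
    return n_total / (1 + (m - 1) * icc)

n_total, n_clusters = 3405, 14
for icc in [0.0, 0.01, 0.05, 1.0]:
    print(f"ICC = {icc:4.2f}: effective n = {effective_n(n_total, n_clusters, icc):7.1f}")
# At ICC = 0 all 3,405 clients count; at ICC = 1 the effective
# sample size collapses to the 14 facilities.
```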
- 53:44Whereas in a stepped wedge design,
- 53:46because of the feature that I showed you,
- 53:48that stepped wedge designs completely exploit
- 53:52the within facility contrast,
- 53:55the pre-post contrast within facilities,
- 53:58you lose very little power when you have a high ICC.
- 54:02So it's a feasible way of running
- 54:05a cluster-randomized trial when there's a lot
- 54:07of heterogeneity between the groups.
- 54:10And then of course because of that it can be
- 54:12more efficient over the parallel design.
- 54:16I'm gonna skip this point about why there might be caution,
- 54:21but one caution is this piece about clusters not
- 54:25being able to follow the randomization schedule.
- 54:28So, you know, we can say okay,
- 54:30you start at time two and you start at time three
- 54:32and so forth.
- 54:33But, you know, we're talking about pragmatic
- 54:35trials embedded often within public health systems,
- 54:40and there's other things that come up, maybe
- 54:43some issues have come up, some people have left,
- 54:46it's just not possible to start at time two,
- 54:48they have to start at time three and that sort of thing.
- 54:51And then once the random patterns start to be violated,
- 54:57then you no longer have the strength
- 55:00of the causal inference from the randomization
- 55:02and it becomes more like an observational study,
- 55:05where the facilities just chose when they were gonna
- 55:08start the intervention.
- 55:12Okay. So how are we doing on time, Ike?
- 55:19<v Speaker>Thank you, Donna, please continue.</v>
- 55:22We would like to just have what you have prepared for us.
- 55:26I'm sure...
- 55:31We still, that's, go on.
- 55:33Please, go on. I'm sure we'll be okay.
- 55:35We're happy to have you. We're enjoying this.
- 55:39<v Presenter>Okay.</v>
- 55:40'Cause I tend to underestimate how quickly I can
- 55:44get through a talk and I like to, you know,
- 55:49enrich it with things that aren't necessarily
- 55:52on the slides and then it goes a lot more slowly.
- 55:55But just let me know if you feel like I need to wrap up,
- 55:58otherwise I'll just keep talking about everything
- 56:01I've prepared to discuss today.
- 56:04So now we're gonna move from
- 56:07experimental studies to quasi-experimental
- 56:11and non-experimental studies.
- 56:18Next. So observational.
- 56:24Okay. Observational study designs.
- 56:27So for those of you who have studied biostatistics
- 56:31and epidemiology, we know about these very well.
- 56:35We're studying and assessing phenomena
- 56:38as they occur naturally.
- 56:39We can look at policy initiatives.
- 56:42It's hard to think about randomizing a policy initiative.
- 56:45We're not manipulating.
- 56:48Cohort studies can be conducted
- 56:51within electronic health records as well
- 56:54as cross-sectional studies.
- 56:55And of course we don't necessarily need electronic
- 56:58health records, but they sure do make it easy to do
- 57:02very quick evaluations of interventions
- 57:07and implementation strategies as they're occurring
- 57:12in the health system in a completely pragmatic manner.
- 57:15And then here's a bunch of papers
- 57:17in the implementation science literature about the use
- 57:21of observational studies.
- 57:29And then there's quasi-experimental study designs.
- 57:32I listed those out earlier, the before/after design.
- 57:37So when we, I just wanted to see
- 57:39what I'm gonna do next here.
- 57:41Oh yeah, okay.
- 57:45The before/after design, that would be,
- 57:47as I illustrated with the stepped wedge design,
- 57:50if we just had one of these rows,
- 57:52we could just compare a facility or group of facilities,
- 57:57what their outcome rates were at baseline compared
- 58:01to what their outcome rates were after a certain lag,
- 58:05after the intervention was delivered.
- 58:08And because we're comparing clusters to themselves,
- 58:15we're controlling for all known and unknown risk factors,
- 58:20which could be potential confounders, that are
- 58:24time-invariant, through this pre-post design.
- 58:28That's why it's called quasi-experimental
- 58:30because there's full control of confounding for
- 58:33time-invariant characteristics in a pre-post design.
- 58:36And you know, the individual level analog
- 58:39of a pre-post design would be, you know, an individual
- 58:43matched-pair study where we use, say, the paired t test
- 58:47to evaluate the results and assess the impact
- 58:50of an individually applied intervention.
- 58:53And then there are cluster analogs
- 58:58to the paired t test
- 59:00when we conduct a pre-post or before after design
- 59:04in a clustered setting.
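A minimal sketch of such a cluster-level paired analysis, with made-up pre and post outcome rates for six clusters:

```python
# A minimal sketch of the cluster-level analog of the paired t test:
# one pre and one post outcome rate per cluster, tested on the
# within-cluster differences. The rates are made up for illustration.
import numpy as np
from scipy.stats import ttest_rel

pre  = np.array([0.42, 0.35, 0.50, 0.38, 0.44, 0.40])  # baseline rates
post = np.array([0.51, 0.41, 0.55, 0.47, 0.49, 0.46])  # post-intervention

t, p = ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):+.3f}, t = {t:.2f}, p = {p:.4f}")
# Comparing each cluster to itself removes time-invariant confounding,
# but not anything else that changed over the same period.
```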
- 59:06And then the controlled before/after design,
- 59:09which you might have also heard called the difference
- 59:11in differences design, is a pre-post design enhanced
- 59:18by having other clusters or groups
- 59:21to which no intervention is applied.
- 59:26And so by subtracting the change in the groups
- 59:29where no intervention was applied from the change
- 59:33in the groups where the intervention was applied,
- 59:35we can subtract out all the time invariant
- 59:38characteristics as well
- 59:40as the time varying characteristics.
- 59:42So that's a very nice design, and again,
- 59:47doesn't require randomization.
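The difference-in-differences arithmetic itself is simple; a sketch with made-up cluster-level rates:

```python
# The difference-in-differences estimator with made-up rates.
treated_pre, treated_post = 0.40, 0.52   # intervention clusters
control_pre, control_post = 0.41, 0.45   # contemporaneous controls

did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"DiD estimate of the intervention effect: {did:+.3f}")
# Subtracting the control group's change removes shared time trends;
# the key assumption is parallel trends absent the intervention.
```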
- 59:50Then there's interrupted time series designs
- 59:53where we look at multiple assessments prior
- 59:56to and following introduction of an intervention
- 59:58and we might be able to more accurately assess
- 01:00:01the outcomes or behaviors than in a single pre-post design.
- 01:00:06And it's kind of like a pre-post
- 01:00:10design where time actually becomes a continuous variable,
- 01:00:14instead of before/after we've got the whole time sequence.
- 01:00:18And I'm gonna illustrate that shortly.
- 01:00:20And then finally there's another quasi-experimental
- 01:00:23design that's been used, again,
- 01:00:25to evaluate public health interventions
- 01:00:29and implementation strategies without having
- 01:00:32to randomize: the regression discontinuity design,
- 01:00:35where individuals and groups can be considered
- 01:00:39as assigned to intervention or control based on some
- 01:00:42a priori score or metric that's kind of independent
- 01:00:46of their outcomes.
- 01:00:47And then, sort of from a causal inference perspective,
- 01:00:50it can be treated as an instrumental variable
- 01:00:54and causal inference can be made.
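A minimal regression-discontinuity sketch on simulated data, estimating the jump in the outcome at an a priori cutoff; everything here is assumed for illustration:

```python
# A minimal regression-discontinuity sketch on simulated data:
# treatment switches on when an a priori score crosses a cutoff,
# and we estimate the jump in the outcome at that cutoff.
import numpy as np

rng = np.random.default_rng(1)
n, cutoff, true_jump = 1000, 50.0, 3.0
score = rng.uniform(0, 100, n)
treated = (score >= cutoff).astype(float)
y = 0.05 * score + true_jump * treated + rng.normal(0, 1, n)

# OLS: outcome ~ intercept + centered score + treatment indicator
X = np.column_stack([np.ones(n), score - cutoff, treated])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated jump at the cutoff: {beta[2]:.2f} (true value {true_jump})")
# A real analysis would restrict to a bandwidth around the cutoff
# and allow different slopes on each side; this is only the idea.
```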
- 01:00:56And for those of you who are health economists
- 01:00:59on this call or have exposure to this,
- 01:01:02these quasi-experimental designs have been put forward,
- 01:01:06I think putting them forward
- 01:01:08has been led by health economists as opposed to, say,
- 01:01:12biostatisticians and clinical researchers.
- 01:01:15It's different from the way that I think stepped wedge designs
- 01:01:18and cluster-randomized trials kind of gained traction,
- 01:01:22where many of the methodologies were worked
- 01:01:24out in the health sciences.
- 01:01:27For these quasi-experimental study designs,
- 01:01:31a lot of the literature arose in health economics
- 01:01:35and now has kind of crossed over into clinical
- 01:01:38research and biostatistics and so forth.
- 01:01:43And then here's an article that I can recommend
- 01:01:46that talks about these quasi-experimental
- 01:01:49designs, you know,
- 01:01:52in a way that's very accessible, I think,
- 01:01:54to a large audience.
- 01:01:57So one way to look at the interrupted time series design
- 01:02:01is through this schematic graphic.
- 01:02:05It's considered one of the strongest
- 01:02:07quasi-experimental designs and it's increasingly
- 01:02:10advocated for use in the evaluation of health
- 01:02:13system quality improvement interventions
- 01:02:16when randomization is impossible;
- 01:02:18it can also be used to evaluate other, like,
- 01:02:21population-level changes in health policies.
- 01:02:25So here what we do is we observe an outcome rate.
- 01:02:30You know, it could be, say, HIV incidence rates,
- 01:02:34it could be, say, suicide rates,
- 01:02:41any sort of health outcome
- 01:02:43or even an implementation outcome: maternal mortality,
- 01:02:47under-five mortality, you know, and so forth.
- 01:02:51You know, ideally, usually that's measured at kind
- 01:02:54of more of a population level.
- 01:02:57And it doesn't have to be measured on everybody;
- 01:02:59like, say, using DSS survey data,
- 01:03:02it can be monitored through sampling techniques
- 01:03:05before the intervention.
- 01:03:07And so we might expect, in this idealized situation,
- 01:03:12we're seeing that before the intervention,
- 01:03:14this outcome rate is stable: it's not getting worse,
- 01:03:16it's not getting better.
- 01:03:18But it doesn't have to be
- 01:03:19a flat kind of slope.
- 01:03:22It could be getting worse or better,
- 01:03:24it could go in either direction.
- 01:03:26And then the idea is that,
- 01:03:30if the intervention didn't happen,
- 01:03:33the rate would just trot along at the same level
- 01:03:35that it had prior to the intervention.
- 01:03:39And then when the intervention happens,
- 01:03:41we might think that the rate drops,
- 01:03:44let's say if this was an adverse health outcome;
- 01:03:47this would be around the time of the intervention,
- 01:03:49and people can also hypothesize a lag.
- 01:03:51So maybe it wouldn't be immediate,
- 01:03:53maybe it would be six months later or a year later
- 01:03:56that you'd see a drop in the rate, if this was something
- 01:03:58to improve health.
- 01:04:00And then, in addition to the drop,
- 01:04:02we might see a change in the slope,
- 01:04:05so that it might continue to improve slowly over time.
- 01:04:09So there could be a trend change.
- 01:04:11It could also happen that there's no drop,
- 01:04:14but that we just see the trend change,
- 01:04:17or there could be a drop and then no further trend change.
- 01:04:21And an interrupted time series design
- 01:04:23at the analysis stage would allow any
- 01:04:26of these possibilities.
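Concretely, the classic analysis is a segmented regression with a level-change and a trend-change term, so all of those possibilities are allowed. Here is a minimal sketch on simulated monthly rates; the numbers are invented for illustration.

```python
# A minimal sketch of the classic interrupted time series (segmented
# regression) model: intercept + pre-intervention trend + change in
# level + change in trend. Monthly rates are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(48)                       # months 0..47, intervention at month 24
post = (months >= 24).astype(float)          # indicator for post-intervention period
time_since = np.where(months >= 24, months - 24, 0.0)

# Simulated outcome: near-flat baseline, a drop at the intervention,
# a further downward trend afterwards, plus random noise.
rate = 50 - 0.05 * months - 6.0 * post - 0.3 * time_since + rng.normal(0, 1.5, 48)

# Design matrix: intercept, time, level change, trend change.
X = np.column_stack([np.ones_like(months, dtype=float), months, post, time_since])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

print(f"baseline trend: {beta[1]:.2f}/month, "
      f"level change: {beta[2]:.2f}, trend change: {beta[3]:.2f}/month")
```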
- 01:04:29And then with the controlled interrupted time
- 01:04:32series design, we would have other groups that we might
- 01:04:35be observing before the intervention.
- 01:04:38They could be at the same level or a different
- 01:04:40level, 'cause what we really care about is around
- 01:04:43the time that the intervention happened:
- 01:04:45if we see any change in them,
- 01:04:46a drop in the level
- 01:04:50or a change in the slope,
- 01:04:52we'd subtract that out, and not attribute it
- 01:04:54to the intervention in the group that had the intervention;
- 01:04:57we'd attribute that part of the drop
- 01:05:00and that part of the change in slope
- 01:05:01to these background time effects.
- 01:05:04And so that's why we like the control group.
- 01:05:06And here's an article, if you want to learn
- 01:05:09more, about interrupted time series in public health.
- 01:05:15So a few examples.
- 01:05:21So here, this was a project that I worked
- 01:05:24on in Mexico, and we were
- 01:05:28thinking about a learning healthcare system
- 01:05:31in Mexico for evaluating
- 01:05:38the performance. In Mexico
- 01:05:42there are something like, I think, 34 states,
- 01:05:46just as the United States has 50 states,
- 01:05:48and these are the acronyms for each
- 01:05:51of the states.
- 01:05:52And then we used the electronic medical records;
- 01:05:58they're trying to use them for chronic disease prevention,
- 01:06:01screening and care.
- 01:06:02So here there were almost 2 million patients included
- 01:06:05who had at least one clinic visit that included
- 01:06:08a chronic disease diagnosis,
- 01:06:11a chronic disease was defined here as hypertension,
- 01:06:14diabetes, dyslipidemia, or obesity,
- 01:06:17among over 12,000 healthcare facilities.
- 01:06:22And then there was an implementation outcome developed
- 01:06:27that indexed the quality of care being used
- 01:06:30for the prevention, screening and treatment
- 01:06:32of diabetes; it was called ICAD.
- 01:06:35And then, through the health records,
- 01:06:38we were able to score each facility every month
- 01:06:43during the study period, between June 2016 and July 2018,
- 01:06:46as to how well they were doing
- 01:06:50on their quality of care for the prevention,
- 01:06:53screening and treatment of diabetes.
- 01:06:55And so what we see here is,
- 01:06:57and I apologize 'cause this work was done
- 01:07:00in Mexico, so the graph is in Spanish, but I think
- 01:07:03what you can see graphically is the point estimate,
- 01:07:07which is these black vertical lines,
- 01:07:10the mean quality-of-care ICAD index for the state,
- 01:07:15and then the 95% confidence intervals
- 01:07:18for how it varied over the study period.
- 01:07:21And so here actually what happened was that
- 01:07:27there were
- 01:07:30two states that actually ended up doing
- 01:07:33significantly
- 01:07:36worse, because their 95% confidence intervals aren't
- 01:07:39touching the null value, which means no change.
- 01:07:43And then there were
- 01:07:45two or three additional states
- 01:07:48that did worse, but not significantly so.
- 01:07:50And then you can see these, you know,
- 01:07:52these are sorted by how well they did on this ICAD score.
- 01:07:57And you can see that there was a huge variation
- 01:08:03in Mexico among these 34 states.
- 01:08:06So then, in addition to seeing, like, who needs help,
- 01:08:10we can also see that, you know,
- 01:08:14oftentimes in the United States we talk a lot
- 01:08:16about disparities, but in many other countries
- 01:08:20there are big disparities as well.
- 01:08:22And here's, you know,
- 01:08:23sort of a graphical illustration of how big
- 01:08:26a disparity might be between some of the wealthier,
- 01:08:29more urban, higher SES states and some
- 01:08:32of the poorer, rural states.
- 01:08:35So this is a starting point in terms of documenting
- 01:08:38what the issues are, and then we might wanna go
- 01:08:40in and figure out the next steps.
- 01:08:43What we would've liked to have gotten to would be,
- 01:08:46let's say, to take the top five highest-
- 01:08:48performing states and understand what's working
- 01:08:51there at the facility and client and system level,
- 01:08:55such that they're able to achieve these very high ICAD scores.
- 01:08:59And then, what are the barriers to those sorts
- 01:09:01of implementation strategies in some
- 01:09:05of these states where there's either no improvement
- 01:09:08or things have actually gotten worse?
- 01:09:11And then, how can we adapt these implementation
- 01:09:14strategies to create a new intervention that might
- 01:09:17improve chronic disease prevention,
- 01:09:20screening and care in some of these states
- 01:09:23for which these disparities exist?
- 01:09:30And then here's another example, a paper
- 01:09:33that I recently published along with others, that gives
- 01:09:36an example of an interrupted time series:
- 01:09:40looking at the causal impact
- 01:09:44of the Affordable Care Act on colorectal cancer
- 01:09:47incidence and mortality.
- 01:09:48Colorectal cancer is
- 01:09:54one of the biggest causes of cancer cases
- 01:10:00and deaths in the United States.
- 01:10:02And with the changing nutrition and epidemiologic transition,
- 01:10:08I think colorectal cancer is understudied
- 01:10:10around the world, but it only stands to reason that,
- 01:10:13just as we've seen the increase in rates of other
- 01:10:16chronic diseases such as diabetes and heart disease,
- 01:10:20breast cancer and so forth,
- 01:10:22we'll soon be seeing increases in the incidence
- 01:10:26and mortality of colorectal cancer.
- 01:10:29And it's known we have an efficacious,
- 01:10:33evidence-based intervention for colorectal cancer.
- 01:10:36It's called colonoscopy, and it involves an examination
- 01:10:39of the colon for polyps and removal of polyps
- 01:10:44before they have an opportunity to develop
- 01:10:47into pre-cancerous and cancerous lesions.
- 01:10:50And it's been found to be at least 50% efficacious
- 01:10:55in randomized trials.
- 01:10:57It's also expensive, in the United States it costs
- 01:11:00at least $3,000 per colonoscopy screening.
- 01:11:04So many Americans were not able
- 01:11:06to afford colonoscopy screening.
- 01:11:09And when President Obama brought
- 01:11:12in the Affordable Care Act, which is also called
- 01:11:14Obamacare, one of the main tenets it brought in,
- 01:11:18which people in the public health community really
- 01:11:20liked, is that it guaranteed funding
- 01:11:24for evidence-based preventive interventions.
- 01:11:28And colonoscopy was among those and maybe among
- 01:11:32the most important.
- 01:11:33So here's a perfect example where we can study
- 01:11:36the impact of the Affordable Care Act on colorectal
- 01:11:40cancer incidence and mortality.
- 01:11:42Well, we know from these trials that if people
- 01:11:45manage to get colonoscopies, their rates, you know,
- 01:11:48at the population level will go down by around 50% or more.
- 01:11:53But can that happen by simply changing the law?
- 01:11:56If you think of the cascade,
- 01:11:58there are so many steps that have to happen
- 01:12:00before people might actually get these colonoscopies
- 01:12:03and get them on the recommended schedule, and then see
- 01:12:06the impact on reduction in colorectal cancer incidence.
- 01:12:11So what we did was we were able to, through,
- 01:12:16we have a very big health system
- 01:12:19in the western part of the United States called
- 01:12:21the Kaiser Permanente system,
- 01:12:23and it's divided into kind of subgroups.
- 01:12:25So I had colleagues
- 01:12:27at Kaiser Permanente in Northern California.
- 01:12:30It's an integrated healthcare delivery system.
- 01:12:33It's a private system with over 4 million members
- 01:12:35who are representative of the regional population.
- 01:12:39And so we used an open cohort
- 01:12:40of Kaiser Permanente of Northern California members
- 01:12:44who were 50 years or older between January 1st, 2000
- 01:12:48and December 31st, 2017.
- 01:12:50So there were over 1 million such individuals
- 01:12:54who were part of the study population at some point
- 01:12:57in time over this period.
- 01:13:00And with around 220 million person-months of follow-up.
- 01:13:04And during that time,
- 01:13:06almost 20,000 colorectal cancer cases occurred,
- 01:13:11and over 2,600 people died of colorectal cancer.
- 01:13:17So that's basically the study population here.
- 01:13:23And then here is our interrupted time series design.
- 01:13:27So it wasn't a controlled time series design,
- 01:13:30but what we see here is colorectal cancer incidence
- 01:13:34on the Y axis,
- 01:13:38so it's how many cases per hundred thousand were
- 01:13:41occurring in the study population.
- 01:13:43And then here's the red line.
- 01:13:45That's when the ACA, the Affordable Care Act was
- 01:13:47rolled into public policy.
- 01:13:49And then here's the after data.
- 01:13:51And what we're seeing here, these very,
- 01:13:55very jagged lines, is the natural variation
- 01:13:58in the monthly rates, which is the kind of thing we see,
- 01:14:01statistical random variation.
- 01:14:02We don't see smooth curves when we look at rates
- 01:14:05on a very fine scale like this,
- 01:14:08they're kind of going up and down.
- 01:14:10And then the red line kind of smooths these curves
- 01:14:14without testing any particular hypothesis.
- 01:14:17But we see that, you know,
- 01:14:19the rate before the Affordable Care Act came in
- 01:14:22was kind of fluctuating up and down a little bit.
- 01:14:27It's not, when I showed you that earlier slide
- 01:14:29of the sort of schematic of an interrupted time
- 01:14:33series design, there was just a straight line
- 01:14:35going through here; it's not quite like that,
- 01:14:37this is real-life data.
- 01:14:39But we do see that after the ACA came in,
- 01:14:41even just, you know, not imposing any structure on the model,
- 01:14:45the colorectal cancer
- 01:14:47incidence went down fairly quickly.
- 01:14:50And you might wonder why.
- 01:14:51Well, they take out these polyps that are
- 01:14:54pre-cancerous and you don't get cancer.
- 01:14:58So it can happen very quickly.
- 01:15:01And then what we did was fit
- 01:15:03the classic interrupted time series model to the data.
- 01:15:07And so what we saw,
- 01:15:08that's this line here, where you can see
- 01:15:10what was happening: colorectal cancer actually,
- 01:15:13in the background, was slowly going up a little bit
- 01:15:16in this part of the country, and probably everywhere
- 01:15:19in the United States.
- 01:15:20But then the ACA came in and, we actually,
- 01:15:23I mean we couldn't believe it, you know,
- 01:15:25we saw this drop, just like in the classic design,
- 01:15:29and this was significant,
- 01:15:31and then we saw this continued, slower decrease in trend.
- 01:15:37So right, it was at this point that everyone
- 01:15:39in Kaiser Permanente was able to get access
- 01:15:42to colonoscopies, and so it lowered the rates right
- 01:15:46away and then the rates continued to decline.
- 01:15:51So that's an example of
- 01:15:54an interrupted time series design to study
- 01:15:58the implementation of an evidence-based intervention
- 01:16:01at the policy level,
- 01:16:02namely through the Affordable Care Act.
- 01:16:07And here are the co-authors,
- 01:16:10and then here's the publication.
- 01:16:16So now I'm gonna talk a little bit more about some
- 01:16:19more innovative designs, because really everything
- 01:16:22I've talked about so far, the stepped wedge design,
- 01:16:25cluster-randomized trial, interrupted time series
- 01:16:28and so forth, those have been around for quite some time.
- 01:16:32But now we can go into some novel designs
- 01:16:35that are just starting
- 01:16:38to be considered in implementation science.
- 01:16:41In particular MOST, SMART and LAGO.
- 01:16:47So the MOST design is one design that's very well
- 01:16:52suited for complex,
- 01:16:55multi-level, multi-component interventions.
- 01:16:59It can be a very hard thing, at the start
- 01:17:02of a study when you've got all
- 01:17:05these different features at different levels,
- 01:17:07to know exactly how to constitute
- 01:17:10your intervention package: both
- 01:17:12what's in it and what's not in it,
- 01:17:14and at what kind of dose or strength
- 01:17:17of implementation each one of these components should be.
- 01:17:22So in the MOST design,
- 01:17:23which was developed and promoted by a researcher
- 01:17:27named Linda Collins,
- 01:17:28and here's two of the key citations
- 01:17:30to this design down here, there are three phases.
- 01:17:34So the first one is preparation,
- 01:17:37and that's where things would be done
- 01:17:42such as developing the conceptual model
- 01:17:44for what the implementation strategy might be,
- 01:17:48identifying sets of candidate components, and
- 01:17:55conducting pilot tests and identifying optimization criteria.
- 01:17:59And so what we mean by this is that
- 01:18:01this might be done largely through qualitative research.
- 01:18:04This is the first time I've mentioned qualitative
- 01:18:07research in this talk.
- 01:18:08It's a very important part of implementation science.
- 01:18:11Because what would happen in a MOST design, for example,
- 01:18:14and even often informally in all of these other
- 01:18:17designs we've talked about, or many of them,
- 01:18:19is that qualitative researchers,
- 01:18:22social scientists, will conduct focus groups
- 01:18:25and individual interviews of stakeholders
- 01:18:33at the different levels,
- 01:18:34clients, providers, health systems leaders,
- 01:18:39social network members, to find out what are
- 01:18:43the facilitators and barriers to them taking advantage
- 01:18:47of, or utilizing and promoting,
- 01:18:52this evidence-based intervention.
- 01:18:54And then also what their views might be
- 01:18:58about different components of an intervention strategy,
- 01:19:02or an implementation strategy that would make
- 01:19:05this evidence-based intervention be adopted,
- 01:19:10be more acceptable, be used with fidelity,
- 01:19:15be sustainable and so forth.
- 01:19:18And so with that kind of information
- 01:19:20at the preparation phase,
- 01:19:24you wouldn't really determine definitively
- 01:19:26what the package would be,
- 01:19:27but you would get some ideas of what should
- 01:19:30and shouldn't be in the package,
- 01:19:32it could be a much larger set than
- 01:19:34what you ultimately will study.
- 01:19:37And then at the optimization phase,
- 01:19:40you would conduct a factorial design
- 01:19:43that would take as many kind of combinations
- 01:19:47of these implementation strategies and components
- 01:19:51as possible and test them for response on some sort
- 01:19:57of very short-term implementation outcome,
- 01:20:01which could be maybe even acceptability,
- 01:20:03appropriateness or feasibility, and there are
- 01:20:07five-item scales that have been developed
- 01:20:09by implementation scientists that can be used in that way.
- 01:20:13And then, based on, say, the responses,
- 01:20:15you can then pare down
- 01:20:18what the implementation strategy,
- 01:20:21what the intervention package, should be, to then roll
- 01:20:25out in either a formal stepped wedge design
- 01:20:30or a cluster-randomized trial.
- 01:20:32So what MOST does is add
- 01:20:34on to these randomized designs that we had talked
- 01:20:37about earlier these two phases: the preparation phase,
- 01:20:42which can often be largely qualitative,
- 01:20:45and then the optimization phase,
- 01:20:48which involves a very short-term, high-level pilot
- 01:20:53factorial design to weed out the less
- 01:20:58promising intervention package components.
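To give a flavor of what the optimization-phase factorial experiment produces, here is a minimal sketch that screens three hypothetical components by their estimated main effects on a short-term outcome; the component names, scores, and decision threshold are all invented for illustration.

```python
# A minimal sketch of the MOST optimization phase: a 2^3 factorial
# experiment on three candidate components, screened on a short-term
# implementation outcome (e.g. mean acceptability score). The scores
# per condition are illustrative, not from any real trial.
components = ["coaching", "reminders", "audit_feedback"]
# Mean short-term outcome observed under each on/off combination
# (keys are tuples of 0/1 flags in the order of `components`).
observed = {
    (0, 0, 0): 2.1, (1, 0, 0): 2.9, (0, 1, 0): 2.2, (0, 0, 1): 2.8,
    (1, 1, 0): 3.0, (1, 0, 1): 3.6, (0, 1, 1): 2.9, (1, 1, 1): 3.7,
}

# Main effect of each component: mean outcome when it is on, minus
# mean outcome when it is off, averaged over the other components.
for i, name in enumerate(components):
    on = [y for combo, y in observed.items() if combo[i] == 1]
    off = [y for combo, y in observed.items() if combo[i] == 0]
    effect = sum(on) / len(on) - sum(off) / len(off)
    keep = "keep" if effect > 0.2 else "drop"  # illustrative threshold
    print(f"{name}: main effect = {effect:+.2f} -> {keep}")
```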
- 01:21:03And there's some examples of using the MOST design
- 01:21:08and it definitely could be used a lot more often.
- 01:21:12And hopefully people can see that this is like, you know,
- 01:21:15sort of a more scientific and rigorous way to use data,
- 01:21:20not just quantitative data, but also qualitative data to,
- 01:21:25you know, sort of rigorously design a complex
- 01:21:28intervention package before it's rolled out.
- 01:21:46And then there's the question of adaptation
- 01:21:49versus fidelity, and then that's gonna come up
- 01:21:53for these next two designs.
- 01:21:55So even after, say, using a MOST structure,
- 01:22:00which would maximize the chances that you would kind
- 01:22:03of get it right at baseline,
- 01:22:05I'm sure everybody here who's actually rolled out any
- 01:22:08kind of complex public health program of any sort
- 01:22:11knows that the realistic scenario is that this program
- 01:22:18is gonna be adapted as we go along,
- 01:22:20providers are gonna learn, the system is gonna learn,
- 01:22:24clients are gonna learn,
- 01:22:25we're gonna learn like what isn't working,
- 01:22:28what we can improve and so forth...
- 01:22:31And so it's just basically impossible usually
- 01:22:34for researchers to say,
- 01:22:36and maybe even unethical for researchers to say, no,
- 01:22:40you know, this is a randomized trial and you must
- 01:22:43stick with this intervention that we set at baseline
- 01:22:47no matter what.
- 01:22:48Obviously that's what we do in a phase III
- 01:22:51individually randomized clinical trial.
- 01:22:53People either get the new drug or the placebo
- 01:22:57and we don't change the new drug after baseline,
- 01:22:59even if people are somehow getting the feeling
- 01:23:02that it's not doing what it's supposed to do,
- 01:23:05all we can do is like early stopping
- 01:23:07for overwhelming evidence of benefit or harm.
- 01:23:11So this very busy slide is taken from this article
- 01:23:16down here and it's a framework for reporting
- 01:23:20adaptations and modifications
- 01:23:22to evidence-based interventions.
- 01:23:24So it's complicated,
- 01:23:27but that's because what these researchers are trying
- 01:23:31to do is think of every possible kind of category
- 01:23:36of adaptation that could take place
- 01:23:38after an intervention is started to help people
- 01:23:42record the adaptations.
- 01:23:46Because those of us who kind of want
- 01:23:49implementation science to be relevant,
- 01:23:51one of the three Rs, realize that we should not only
- 01:23:57allow adaptations, we should embrace
- 01:23:59adaptations, because they're likely only gonna improve
- 01:24:04the success of these evidence-based interventions.
- 01:24:08But the only way to learn in a rigorous way,
- 01:24:12what aspects of adaptation are actually working,
- 01:24:17is to be able to record them.
- 01:24:19And then once they're recorded,
- 01:24:20later on in secondary analysis,
- 01:24:22we can go back and analyze the data because all
- 01:24:26of these adaptations are just like exposure variables
- 01:24:30in a complex epidemiologic study,
- 01:24:33and using causal inference methods to control
- 01:24:35for confounding, we can look at which adaptations
- 01:24:38actually improved outcomes, which made outcomes worse,
- 01:24:42which didn't do anything,
- 01:24:44we can evaluate their cost-effectiveness and so forth,
- 01:24:47but if they're not recorded, we're stuck.
- 01:24:49So that's why I have this very busy slide here.
- 01:24:52It's a very, very important one in terms of ensuring
- 01:24:57that implementation science produces relevant results
- 01:25:02in a rigorous manner.
- 01:25:07Now we can talk about the learn as you go design.
- 01:25:10So that's a design that's very dear to my heart
- 01:25:13because as you can see here to the left hand side,
- 01:25:16I'm one of the people who's developed this design,
- 01:25:19and it's a very new design, we just published it last year.
- 01:25:23We are in the process of using it in an ongoing study.
- 01:25:28Some of you might be part
- 01:25:30of the HLB-SIMPLe consortium that's supported
- 01:25:35by the United States National Heart,
- 01:25:38Lung, and Blood Institute.
- 01:25:39It's a series of,
- 01:25:41I think maybe six or more studies taking place
- 01:25:46in sub-Saharan Africa where what's being looked
- 01:25:50at is different ways of integrating
- 01:25:53hypertension prevention, screening and treatment
- 01:25:56into HIV clinics with the idea that, as we all know,
- 01:26:01AIDS has become
- 01:26:02a chronic disease everywhere in the world.
- 01:26:06And we have aging HIV AIDS patients
- 01:26:10and they're getting chronic diseases just like
- 01:26:12those of us who are HIV negative.
- 01:26:15And the idea, the concept of integration of care, I think,
- 01:26:19is a very important one in global health
- 01:26:23and US domestic health.
- 01:26:24And this consortium is playing a role
- 01:26:27in making this happen.
- 01:26:29So I'm the statistician for one of the projects,
- 01:26:33it's called Police and it's taking place in two
- 01:26:36districts in Uganda where we're integrating
- 01:26:39two types of intervention packages
- 01:26:45into HIV clinics,
- 01:26:47hypertension basic and hypertension plus to try to
- 01:26:51increase hypertension screening and treatment
- 01:26:54and prevention in the clinics there.
- 01:26:57And we're gonna be using this LAGO design. So what is LAGO?
- 01:27:02Well first of all,
- 01:27:03the intervention is a package consisting
- 01:27:05of multiple components.
- 01:27:06We've basically been talking about multiple-
- 01:27:09component interventions throughout this talk.
- 01:27:13And it can include combinations of treatments, a device,
- 01:27:19care organization, multiple stakeholders.
- 01:27:22And, similar to a stepped wedge design, in a LAGO design
- 01:27:26the data are analyzed after each stage.
- 01:27:28And then what makes it like radically different
- 01:27:31in a way from other prior study designs
- 01:27:34is it's actually possible in this design
- 01:27:37to reconfigure the intervention package and not just do
- 01:27:42it sort of in a more ad hoc way,
- 01:27:44as we were talking about
- 01:27:50with the previous slide
- 01:27:51on how to adapt interventions.
- 01:27:56But you do it in a formal way where we have
- 01:27:59a computer algorithm that will take all the data up
- 01:28:02to the current stage, analyze it,
- 01:28:05and then let the data itself recommend
- 01:28:08the optimal combination of the intervention
- 01:28:11for the next stage.
- 01:28:12And optimality would be determined by both trying
- 01:28:16to guarantee that we have adequate statistical power
- 01:28:19to test the overall intervention effect at the end
- 01:28:22of the study.
- 01:28:23And it might be that we're also trying to achieve
- 01:28:26a certain outcome goal.
- 01:28:27So like in the Police study I was just talking about,
- 01:28:30we think that about 20%
- 01:28:35of the HIV-positive adults in the clinics
- 01:28:38who are hypertensive might
- 01:28:44be in hypertension control.
- 01:28:48And then we're hoping to improve that to, say, 40%.
- 01:28:51So the goal of the study is to get to 40%
- 01:28:54through the intervention.
- 01:28:55You might think that's modest,
- 01:28:57but another thing that I've seen is sometimes
- 01:29:00with these kinds of studies,
- 01:29:01people are overly ambitious and they might say,
- 01:29:04we wanna get like 80% or 90% of hypertension control
- 01:29:08by the end of the study.
- 01:29:09But you know, if we're starting from 5%, 10% or 20%,
- 01:29:13to get all the way to, say,
- 01:29:1580%, we're starting to talk about relative risks
- 01:29:20as high as 16, since 80% divided by 5% is a relative risk of 16.
- 01:29:21Those are, you know, huge intervention effects, so maybe we're being too
- 01:29:24hard on ourselves when we try to achieve goals like that,
- 01:29:28even though that's where we might ultimately
- 01:29:30wanna get to.
- 01:29:33So back to the LAGO design, we can, using the data,
- 01:29:37we can recommend the optimal intervention package
- 01:29:40for the next stage.
- 01:29:42We can also use qualitative data and we don't have
- 01:29:45to just use the quantitative data,
- 01:29:47we can reconfigure the intervention package,
- 01:29:50then we roll it out again and then we repeat that up
- 01:29:53to as many times as was preplanned.
- 01:29:58And then we can, at the end of the study, ideally,
- 01:30:01we would have a final outcome assessment,
- 01:30:04we test the null hypothesis that the intervention
- 01:30:06had no effect.
- 01:30:07We could assess the cost-effectiveness
- 01:30:09of the different intervention components.
- 01:30:13We have a model that we can use
- 01:30:18that can predict, for different
- 01:30:21intervention component combinations,
- 01:30:30what level of the outcome we might expect to have,
- 01:30:33and so forth.
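Just to convey the flavor of the algorithmic step, here is a heavily simplified sketch of a LAGO-style stage update: fit an outcome model to the accumulated data, then search the feasible package compositions for the cheapest one predicted to reach the outcome goal. The model form, coefficients, component names, and costs are all invented, and the real method also handles estimation error and the end-of-study power guarantee, which this sketch ignores.

```python
# A heavily simplified sketch of a LAGO-style stage update. The model
# form, costs, and coefficients are assumptions for illustration only.
import math
from itertools import product

goal = 0.40                  # target proportion with hypertension control
costs = {"coaching_visits": 50.0, "sms_reminders": 2.0}  # per-unit cost
# Pretend these coefficients came from a logistic model fit to the
# accumulated stage data: logit(p) = b0 + b1*visits + b2*reminders.
b0, b1, b2 = -1.60, 0.15, 0.05

def predicted_control(visits, reminders):
    """Predicted proportion achieving hypertension control."""
    z = b0 + b1 * visits + b2 * reminders
    return 1.0 / (1.0 + math.exp(-z))

# Search the feasible grid of package compositions for the next stage.
best = None
for visits, reminders in product(range(0, 9), range(0, 13)):
    if predicted_control(visits, reminders) >= goal:
        cost = visits * costs["coaching_visits"] + reminders * costs["sms_reminders"]
        if best is None or cost < best[0]:
            best = (cost, visits, reminders)

cost, visits, reminders = best
print(f"next-stage package: {visits} visits, {reminders} reminders "
      f"(cost {cost:.0f}, predicted control {predicted_control(visits, reminders):.2f})")
```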
- 01:30:34So that's the LAGO design,
- 01:30:36we'll see how it works in Police and hopefully
- 01:30:38some other studies.
- 01:30:39There's some other projects under consideration
- 01:30:42for funding that have also proposed to use LAGO.
- 01:30:46And I'll give an example of LAGO here.
- 01:30:49This is a post-hoc analysis;
- 01:30:51it's an illustrative example that we used
- 01:30:53in our Annals of Statistics paper
- 01:30:57that was published in 2021.
- 01:30:58And by the way, not to kind of toot my horn,
- 01:31:02but just to emphasize the rigor of this design
- 01:31:05because it is, you know,
- 01:31:07very difficult for people to accept that you can
- 01:31:09actually change your intervention after you start
- 01:31:12the study and still have a valid P value.
- 01:31:15You know,
- 01:31:16the mathematics to prove this were quite high level.
- 01:31:19And the journal where this paper was published,
- 01:31:22the Annals of Statistics, is kind of considered one
- 01:31:24of the top and most theoretical journals in statistics.
- 01:31:28So this design is really okay;
- 01:31:33it's okay theoretically, but it does need to kind
- 01:31:36of be fleshed out in terms of being used and working
- 01:31:39out the kinks on a practical level.
- 01:31:41And as we start to use it,
- 01:31:42I'm sure we'll start to learn a lot of things and be
- 01:31:45able to further improve it.
- 01:31:47But anyway, we took the BetterBirth study
- 01:31:51as our example in this Annals of Statistics paper.
- 01:31:54It was a multicenter study that was conducted
- 01:31:57in Uttar Pradesh, India, which is a poor state
- 01:32:00in Northern India.
- 01:32:01And its purpose was to test a multiple-component intervention
- 01:32:07package to improve process
- 01:32:09and health outcomes for mothers and newborns,
- 01:32:12that is, to lower maternal mortality and neonatal mortality
- 01:32:17in a state where the rates were unacceptably high.
- 01:32:21The components involved: launching the intervention;
- 01:32:25coaching visits, that is,
- 01:32:27how many times healthcare providers received
- 01:32:32coaching visits, how often, the frequency, the duration;
- 01:32:35and audit and feedback loops,
- 01:32:37which is a very popular method in implementation
- 01:32:40science where it can be through direct observation
- 01:32:44or through electronic health records,
- 01:32:47providers are audited as to what extent
- 01:32:51they're actually implementing the intervention
- 01:32:53and they're given feedback as to how well they're doing,
- 01:32:57and then oftentimes there can be group discussions
- 01:33:00where people talk about, you know,
- 01:33:02what the barriers were, why they didn't do it more,
- 01:33:05and how they could do it more often and so forth,
- 01:33:07and that's been shown to be a proven way to improve
- 01:33:10the uptake of an evidence-based intervention program
- 01:33:15at the provider and system level.
- 01:33:18And then of course,
- 01:33:19stakeholder engagement is increasingly taken to be
- 01:33:23an important part of successful sustainable
- 01:33:26interventions conducted at high fidelity.
- 01:33:30And so engaging the district and facility leaders
- 01:33:33was another important part of this package.
- 01:33:36And there were three stages of the study.
- 01:33:41In stage one, they piloted this intervention
- 01:33:44in two centers. Then, you know, non-rigorously,
- 01:33:49not using LAGO, they adapted the intervention
- 01:33:52package and piloted it in four more centers.
- 01:33:56And then in stage three they rolled out a full trial
- 01:34:00in 30 centers where the intervention was fixed
- 01:34:03and they couldn't change it any more.
- 01:34:05And in stages one and two,
- 01:34:07they used both quantitative and qualitative data
- 01:34:10to guide the adaptation.
- 01:34:12And so this is again, a very big ambitious trial.
- 01:34:16There were 120 sites in 24 districts
- 01:34:20and it involved almost 160,000 pregnancies.
- 01:34:25And the primary outcome was
- 01:34:30an implementation outcome: use of the WHO
- 01:34:35Safe Childbirth Checklist, which many of you might
- 01:34:39be familiar with. There are 27 different things
- 01:34:42that are supposed to be done at different stages
- 01:34:44when a woman comes in to give birth
- 01:34:47through the different stages of delivery and right
- 01:34:50after and so forth.
- 01:34:52And the WHO recommends that this Safe Childbirth Checklist,
- 01:34:56which is a means of trying to ensure
- 01:34:59that these 27 evidence-based, you know, components be used,
- 01:35:05that it should be used at least 90% of the time.
- 01:35:09So that was the goal of the study.
- 01:35:11Again, they were, I think in this study it was
- 01:35:13at something like 5% where it was happening.
- 01:35:16So extremely ambitious and probably unrealistic.
- 01:35:21And then we could look at different outcomes.
- 01:35:23So one outcome that we looked at, and this is, again,
- 01:35:27an illustrative example of the LAGO design, was
- 01:35:30adherence to oxytocin administration after birth
- 01:35:35or after delivery.
- 01:35:36And then we could also look at, say,
- 01:35:38seven-day mortality of the mother and/or the child.
- 01:35:41And then there were also costs, say,
- 01:35:43the costs per coaching session.
- 01:35:52Okay. So that's an example of the LAGO design.
- 01:35:56And in fact, just to say that this study was published
- 01:36:00in the New England Journal of Medicine:
- 01:36:02the design
- 01:36:06was published here, in Implementation Science, in 2015,
- 01:36:09and then the outcome paper was published
- 01:36:11in the New England Journal of Medicine.
- 01:36:13And in fact,
- 01:36:14the trial was not successful in achieving its goals.
- 01:36:18And probably what happened was,
- 01:36:21you could say that's why we said, well,
- 01:36:23if LAGO could have been used at stage three
- 01:36:25when there were all 30 centers,
- 01:36:27there could have been more feedback figuring
- 01:36:29out what aspects of this were working,
- 01:36:33what aspects of these components were working
- 01:36:36and not working.
- 01:36:38And then maybe some other things need to be brought in.
- 01:36:48Oh, somebody else who's maybe unmuted.
- 01:36:53They failed to increase the use of the safe
- 01:36:58childbirth checklist to 90%, but they did improve it.
- 01:37:03But they also, I think,
- 01:37:04failed to see significant differences in, say,
- 01:37:07the health outcomes of mother and/or child.
- 01:37:09So it's an example of how unfortunate it could be
- 01:37:14to have a very big study like this with a very
- 01:37:18complex intervention,
- 01:37:22attempting to implement WHO standards, and then
- 01:37:27having it hard-coded in like this.
- 01:37:29So there's no way to adapt and improve
- 01:37:31the intervention when it starts to look like
- 01:37:34it's not achieving its goals.
- 01:37:41So now I'm gonna talk
- 01:37:42about effectiveness-implementation-
- 01:37:44<v ->Donna?</v> <v ->Yes.</v>
- 01:37:46<v Speaker>Yeah, it's been a wonderful time.</v>
- 01:37:49I don't know, how many slides do you have left?
- 01:37:52<v Presenter>Yeah, I'm not even sure myself. Let me see.</v>
- 01:37:56<v Speaker>So, because, you know,</v>
- 01:37:58this is Nigeria and it's about...
- 01:38:01<v Presenter>And it's late, right? Yeah, okay.</v>
- 01:38:03Oh, I was almost there actually.
- 01:38:06<v Speaker>Almost there. Okay.</v>
- 01:38:07Oh good, good. Okay. <v ->Yes.</v>
- 01:38:09So, sorry everybody, you know, I'm very excited,
- 01:38:14there's a lot of material to cover and maybe I should
- 01:38:16have weeded it down a little bit more,
- 01:38:18but I really appreciate you all hanging in there with me.
- 01:38:20I can see we've lost very few people there.
- 01:38:23<v Speaker>We're happy with you too.</v>
- 01:38:24And the lecture is quite illuminating.
- 01:38:27<v Presenter>So I'll just quickly say</v>
- 01:38:28that in implementation science,
- 01:38:31I've been mostly talking about the interventions
- 01:38:34from the design point of view,
- 01:38:35but there's also this hybrid design framework
- 01:38:39where we can think of combined outcomes,
- 01:38:45or differently emphasize the health outcome versus
- 01:38:49the implementation outcome.
- 01:38:51So there are three types, type 1, type 2, type 3.
- 01:38:55And the goal here in using these hybrid designs
- 01:38:58is to accelerate transition from effectiveness trials
- 01:39:02to implementation trials.
- 01:39:04And this is a very unique design.
- 01:39:06Here is the reference for it in implementation science.
- 01:39:10And so the type 1, 2, and 3, I'll show
- 01:39:13you on the next slide.
- 01:39:15So in type 1, the focus is the clinical intervention.
- 01:39:19So that would be, say,
- 01:39:23in the examples I've given, actually,
- 01:39:27none were purely a type 1 design.
- 01:39:32'Cause let's say
- 01:39:34with BetterBirth, which we were just talking about,
- 01:39:36the clinical intervention:
- 01:39:39I think they were actually powered for a combined
- 01:39:41endpoint of maternal and neonatal morbidity and mortality,
- 01:39:43and that would make it a type 1 design.
- 01:39:47But then they were also measuring
- 01:39:49the implementation outcome of the extent
- 01:39:51to which the Safe Childbirth Checklist was used.
- 01:39:54That's an implementation
- 01:39:57outcome that wasn't their primary outcome.
- 01:39:59So that makes it a hybrid type 1 design.
- 01:40:04A hybrid type 2 design,
- 01:40:05which a lot of people are very interested in,
- 01:40:07would mean that we jointly power
- 01:40:10the study, both to ensure that we have the power to detect
- 01:40:14a meaningful difference in the clinical outcome,
- 01:40:17but also in the implementation outcome;
- 01:40:20they're co-primary endpoints.
- 01:40:22And then the hybrid type 3 would be
- 01:40:25focusing exclusively on the implementation endpoint.
- 01:40:29But we're still measuring the health outcome
- 01:40:32just to get some idea of whether, in this new context,
- 01:40:35at this greater scale,
- 01:40:37we might see a difference, good or bad,
- 01:40:40in the health endpoint.
- 01:40:42So those hybrid designs I think are very useful
- 01:40:45in implementation science and I'd encourage you all
- 01:40:47to use them.
- 01:40:49So this is my last slide.
- 01:40:52These are a few textbooks on implementation science
- 01:40:55that I encourage people to take a look at, if you can.
- 01:40:58Implementation studies require consideration of context,
- 01:41:02multiple levels, multiple components, timing matters.
- 01:41:06When you're thinking about conducting
- 01:41:08an implementation science study,
- 01:41:10you can identify and rank potential study designs
- 01:41:12and then decide; I've gone through a number
- 01:41:15of the most important ones and discussed some
- 01:41:17of their pros and cons.
- 01:41:19Consider randomization in real-world rollouts
- 01:41:22when possible, to increase rigor.
- 01:41:25But also, as I mentioned, if randomization is not possible,
- 01:41:28there are quasi-experimental and observational
- 01:41:31designs available for which causal inference methods
- 01:41:33can be applied.
- 01:41:35And consider some of these innovative approaches
- 01:41:37if they're relevant to your study.
- 01:41:40So thank you all very much.
- 01:41:41I'm sorry for keeping you up so late tonight.
- 01:41:45It's been a pleasure talking with you all
- 01:41:47and sharing this information.
- 01:41:52<v Speaker>Wow, wow, wow, wow. Donna, you are fantastic.</v>
- 01:41:58We're very, very grateful.
- 01:42:00<v ->Thank you so much, Ike.</v>
- 01:42:02<v Speaker>Everyone...</v>
- 01:42:04So I mean we're all still listening
- 01:42:08to you since you started the lecture,
- 01:42:11that shows that we really appreciate you and actually
- 01:42:16no other person could have delivered this lecture but you,
- 01:42:20the expertise cannot be underestimated.
- 01:42:24We are very, very grateful, and if you look at the chat,
- 01:42:29oh, it's something that is quite encouraging,
- 01:42:32I wish, I've saved the chat, so I will send this to you.
- 01:42:38Wonderful time,
- 01:42:40highly appreciated lecture, stimulating lecture...
- 01:42:45It's still coming. Wow, great.
- 01:42:48Thanks to the lecturer. So you can imagine.
- 01:42:52So we're really very,
- 01:42:54very happy and really I'm sure I'm going to be
- 01:42:59bombarded with requests for you to come.
- 01:43:03<v Presenter>Well, Erica Saracho</v>
- 01:43:04who's assisting me with this.
- 01:43:06Erica, are you still on?
- 01:43:08Because we should capture the chat,
- 01:43:10a number of people are asking for the slides
- 01:43:12and we can get back to everybody with these slides.
- 01:43:16<v ->Good.</v> <v ->Yes, Donna,</v>
- 01:43:19I saved the chats.
- 01:43:20<v ->Thank you so much, Erica.</v> <v ->Thank you very much, Erica.</v>
- 01:43:24And so let us have some questions.
- 01:43:29I'm sure some of us would want to clarify
- 01:43:35whatever gray areas they have.
- 01:43:38So I've asked them to send in their questions
- 01:43:42but I've not seen one.
- 01:43:44What we are just seeing is email addresses,
- 01:43:47send me lectures.
- 01:43:49So our colleagues there,
- 01:43:53could you please send in your questions
- 01:43:58because she's here now,
- 01:43:59she can clarify things and also give you some
- 01:44:04better understanding of the slides
- 01:44:06that you are requesting for.
- 01:44:08Please... <v ->I know,</v>
- 01:44:09but maybe it's so late, Ike, maybe people,
- 01:44:12it's too late to actually take questions now.
- 01:44:14I mean I'm fine but I understand it is quite
- 01:44:17late for people and I understand if people would like
- 01:44:20to just say goodbye at this time.
- 01:44:25<v Speaker>Let me just wait, maybe let's, can you unmute?</v>
- 01:44:30Erica, can you unmute and let's see
- 01:44:33anyone raising up his hand, anyone raising up?
- 01:44:40<v Speaker>I think there are two questions earlier</v>
- 01:44:42if you scroll up.
- 01:44:46<v Speaker>Yeah, there is one I actually, okay,</v>
- 01:44:48there is one here that says how do we balance
- 01:44:53between rigor and relevance in implementation science?
- 01:45:01<v Presenter>Yeah, so that's a great question.</v>
- 01:45:03<v ->Do you have-</v> <v ->Absolutely.</v>
- 01:45:05Yeah, that's a great question.
- 01:45:07And you know,
- 01:45:09my view is there's no right answer to that.
- 01:45:13That, you know, it's really, you have to say it depends,
- 01:45:16but, you know, in my opinion I feel
- 01:45:20that implementation science so far has
- 01:45:23overemphasized rigor over readiness and relevance
- 01:45:29and therefore many of these big studies,
- 01:45:33including these examples that I've given,
- 01:45:35have missed the boat in terms of policy.
- 01:45:39So I might say, if we have to choose,
- 01:45:45and I've also given examples of studies and designs
- 01:45:48that can maybe be used that could still be rigorous
- 01:45:52and rapid, but if we have to choose,
- 01:45:56maybe we need to go over to the other side:
- 01:45:58the rigor, let's say
- 01:46:01especially randomization, needs to be softened up
- 01:46:05a little bit so that
- 01:46:07we can contribute to policy decisions.
- 01:46:13<v Speaker>Okay, thank you very much.</v>
- 01:46:16There is another question.
- 01:46:17What is the difference, if any,
- 01:46:21between implementation science study
- 01:46:24and implementation research?
- 01:46:29<v Presenter>Yeah, I don't think there is a difference</v>
- 01:46:31in my opinion.
- 01:46:34You know, an implementation science study is a type
- 01:46:36of implementation research, but there's a lot
- 01:46:39of terminology floating around.
- 01:46:41So sometimes people say implementation research,
- 01:46:44sometimes they say implementation science,
- 01:46:46in the United States, a lot of people say D&I research,
- 01:46:50dissemination and implementation research
- 01:46:53because they wanna emphasize the dissemination piece more.
- 01:46:56They feel like there's still inadequate uptake
- 01:46:59and scale up and scale out of a lot
- 01:47:01of evidence-based interventions for whom, you know,
- 01:47:06acceptable implementation packages have been developed.
- 01:47:10So these words, these various words are,
- 01:47:12from my point of view, more or less synonymous.
- 01:47:18<v Speaker>Oh, thank you very much.</v>
- 01:47:19Another question is: is there a difference
- 01:47:23between clinical and implementation outcomes?
- 01:47:29<v Presenter>Mm-hm. Yes.</v>
- 01:47:30So I probably should have actually had one
- 01:47:33or two slides about that,
- 01:47:35'cause that's actually an important concept
- 01:47:37in implementation science.
- 01:47:39So I'm sorry I didn't talk more about that,
- 01:47:41but thank you for the question.
- 01:47:43So, and it's related even to the cascade
- 01:47:47on my very first slide, where we go from efficacy research,
- 01:47:53which has clinical endpoints, that's it.
- 01:47:56Effectiveness research
- 01:47:58usually has clinical endpoints, that's it.
- 01:48:02Then we get to implementation research,
- 01:48:05we start to have implementation outcomes
- 01:48:07where we're not actually even looking at the impact
- 01:48:09of the intervention on the health endpoints.
- 01:48:12We're looking at the impact of the intervention
- 01:48:14on how the evidence-based intervention
- 01:48:17is being implemented with the idea that the actual
- 01:48:21public health barrier at this point in time is not
- 01:48:24like discovering a new intervention,
- 01:48:27it's rolling out an existing intervention that's useful,
- 01:48:32and that's where the type 1, 2 and 3 hybrid designs
- 01:48:36come in, where in a type 3 hybrid design you would
- 01:48:39just look at: the uptake
- 01:48:41of the Safe Childbirth Checklist,
- 01:48:45has that increased?
- 01:48:47We're not looking to see, did fewer mothers die?
- 01:48:50Did fewer babies die?
- 01:48:51We know, if these 27 things have been done,
- 01:48:54fewer mothers and fewer babies are gonna die.
- 01:48:57So we just wanna get more providers using these 27 things.
- 01:49:03So that's a pure implementation outcome.
- 01:49:08<v Speaker>Well, thank you very much.</v>
- 01:49:11I think the last one here or there's one about how do
- 01:49:14we calculate the sample size for these designs.
- 01:49:20I wonder how that will be addressed,
- 01:49:24but that's the question. <v ->Okay.</v>
- 01:49:27So that could be another like one or two hour talk
- 01:49:30or even a whole class in itself.
- 01:49:34But I can say a few basic principles is if you know
- 01:49:38how to calculate, say, a study,
- 01:49:41let's say, I'll just say
- 01:49:42for a cluster-randomized trial compared
- 01:49:47to an individually randomized trial,
- 01:49:49you can do the sample size calculation
- 01:49:52for the individually randomized trial.
- 01:49:54And there's even like, you know,
- 01:49:55in most statistics textbooks,
- 01:49:57you know even basic statistics 101 type courses,
- 01:50:01you'll see the formula for power or sample size
- 01:50:05for a test for the difference between two sample means
- 01:50:09or two proportions in two groups.
- 01:50:13You can do that sample size or power calculation
- 01:50:16and then adjust it by what's called the design factor,
- 01:50:22takes clustering into account.
- 01:50:24And the design factor also is a very simple, you know,
- 01:50:28it's like one plus the number of clusters minus one
- 01:50:31times the intraclass correlation coefficient
- 01:50:35and you multiply the sample size by that,
- 01:50:38or I don't remember the exact details actually, I'm sorry,
- 01:50:41I don't wanna give a wrong formula and I don't remember
- 01:50:44it off the top of my head.
- 01:50:45But you can modify, without a computer,
- 01:50:48just using a hand calculation,
- 01:50:50you can modify a sample size calculation
- 01:50:53for an individually randomized trial
- 01:50:55with this design factor that only takes,
- 01:50:58that all it needs to calculate it is the number
- 01:51:02of clusters and the intraclass correlation coefficient.
- 01:51:05And then you get your new sample size or your new
- 01:51:09power for your cluster-randomized trial.
- 01:51:14There are also, in R,
- 01:51:16there are R packages for doing these kinds of calculations
- 01:51:20for stepped wedge designs
- 01:51:21and for cluster-randomized trials.
- 01:51:23In fact we have an R package that calculates sample
- 01:51:28size and power for a whole bunch of different
- 01:51:31variations of step wedge designs with continuous
- 01:51:34outcomes and binary outcomes and repeated measures
- 01:51:39and all sorts of things.
- 01:51:40It's called SWD_PWR, stepped wedge design power
- 01:51:46and it's an R package that's freely available to everybody.
- 01:51:50So that's just a little bit
- 01:51:52about how to do this and what's involved.
- 01:52:00<v Speaker>Yeah, thank you.</v>
- 01:52:01I think I'll just take two more questions.
- 01:52:05There's one that is quite important and I think
- 01:52:12would also help Donna to see how she can help us
- 01:52:17for that, especially those who are interested
- 01:52:19in implementation research.
- 01:52:22The comment says: there is generally poor knowledge
- 01:52:26of implementation research among researchers
- 01:52:33in low- and middle-income countries,
- 01:52:36as evidenced especially by the number
- 01:52:39of publications in Africa.
- 01:52:41Where can one get specific training opportunities
- 01:52:44in implementation science research?
- 01:52:49<v Presenter>Yeah, so that is an extremely important point,</v>
- 01:52:53and I can tell you a few things about this.
- 01:52:56The first one is, I'm pretty sure
- 01:52:59that there's a West African Implementation Science Society.
- 01:53:03Is there anybody on this call who's a part of this society?
- 01:53:08<v Speaker>Yeah, CAWISA, there's CAWISA and there's NISA</v>
- 01:53:11in Nigeria also.
- 01:53:15<v ->Yeah, so I don't know,</v>
- 01:53:16would you like to say something about that and
- 01:53:20what the society is doing,
- 01:53:21at least in the West African context,
- 01:53:23in terms of promoting implementation science,
- 01:53:27supporting new researchers at implementation science
- 01:53:30and so forth?
- 01:53:32<v Audience Member>I know that NISA</v>
- 01:53:34holds an annual conference on implementation science
- 01:53:37in Abuja, I've been, I've attended that before.
- 01:53:41I'm very aware that CAWISA is Central and West Africa
- 01:53:45and they currently are expanding.
- 01:53:48I think they have six countries
- 01:53:52in their court and they're currently expanding.
- 01:53:56I could send to, the details of, you know,
- 01:54:02the two organizations who do training,
- 01:54:04therefore they have NIH grant I think to support...
- 01:54:08Yes, they do have one or two NIH grants
- 01:54:11to support implementation.
- 01:54:13They just received a grant 443 to do it.
- 01:54:17<v Presenter>Wow. Wonderful.</v>
- 01:54:21<v Speaker>Thank you very much.</v>
- 01:54:22I believe that is Professor Bavanela.
- 01:54:26<v ->My name is-</v> <v ->That is professor-</v>
- 01:54:28<v ->Oh. Sorry-</v> <v ->She's my friend.</v>
- 01:54:33<v ->Okay. (laughs)</v>
- 01:54:37Okay. Could you just type the names of the societies?
- 01:54:40<v Presenter>Yeah, can I get the link?</v>
- 01:54:41<v Speaker>Maybe give the others hint on this. Thank you.</v>
- 01:54:47<v ->Okay. So that's one thing-</v> <v ->In the chat.</v>
- 01:54:51Yeah, so, go on.
- 01:54:53<v Presenter>And I know that the World Health Organization</v>
- 01:54:59has an implementation science academy that's focused on,
- 01:55:05there's one that's more focused on infectious disease
- 01:55:08and then there's one that's focused on chronic disease.
- 01:55:11But I don't have the links for either of those.
- 01:55:14I'm not sure if there's anybody on this call
- 01:55:17who's participated in either one and they're,
- 01:55:20I know the chronic disease one, I'm pretty sure,
- 01:55:23is done in the summer and I don't remember
- 01:55:26if before COVID it might have been
- 01:55:29that people had to apply and go to Geneva, but maybe
- 01:55:32now it's done by Zoom and it can be more inclusive.
- 01:55:35I'm not really sure but I'm wondering if there's anyone
- 01:55:38on the call who is involved with either of those trainings
- 01:55:43that are connected to WHO.
- 01:55:52<v Speaker>I guess please,</v>
- 01:55:53let me advise our participants to actually use Google.
- 01:56:00You can type this into Google and get some
- 01:56:03of the resources.
- 01:56:05There are some training programs too.
- 01:56:11I know the NIH also have the implementation
- 01:56:15science training program and we can,
- 01:56:19I mean you can actually apply for it. It is online.
- 01:56:22And then in December- <v ->Didn't you do that, Ike?</v>
- 01:56:25<v Speaker>You join the group. Yeah, that one.</v>
- 01:56:27<v ->Didn't you do that one?</v> <v ->Yes, I did. I did that.</v>
- 01:56:32<v Speaker>Maybe you could say a little bit more-</v>
- 01:56:33<v Speaker>The conference.</v>
- 01:56:35<v Presenter>Yeah, maybe you could say a little bit more</v>
- 01:56:37about what was involved because that was a pretty
- 01:56:39in-depth training I think that you were able to access.
- 01:56:43<v Speaker>Yes, it was actually for about three months</v>
- 01:56:47or so and we had the training online,
- 01:56:52and we had exercises,
- 01:56:57assignments, and we also had facilitators,
- 01:57:05or resource persons, for the different lectures,
- 01:57:09and a lot was given on the theories.
- 01:57:15I mean, they really went in depth so that we were
- 01:57:20well grounded in the theory of implementation science.
- 01:57:24And then the various examples.
- 01:57:28And this was capped by a meeting
- 01:57:33in Washington and there, there was
- 01:57:38a conference and we also had some sessions,
- 01:57:43small group sessions, and, I mean,
- 01:57:48we got to experience
- 01:57:50the different kinds of implementation research
- 01:57:55that have been carried out, and it was quite helpful.
- 01:57:59So I guess with the emails we have, but like you said,
- 01:58:04you can just Google and you can actually access all
- 01:58:09of this, and like Donna said,
- 01:58:12there is also this WHO implementation science
- 01:58:16training that's also free.
- 01:58:19And so we can,
- 01:58:20I mean you can access all of this at some point,
- 01:58:24but you can get back to me if you need further
- 01:58:29information on this.
- 01:58:31And we have also heard about others who can help.
- 01:58:35So if we get that, the names of the society organizations,
- 01:58:40it'll also help us to link up a network amongst ourselves
- 01:58:46on implementation science and implementation research
- 01:58:52across the continent and even beyond.
- 01:58:56So a lot of networking is there for us.
- 01:59:01People keep raising up their hands.
- 01:59:03I have to allow them before we end this.
- 01:59:07I have Dr. William and I have... (indistinct)
- 01:59:12Dr. William, please let it be brief. Thank you.
- 01:59:17Dr. William?
- 01:59:19<v Presenter>I better tell Dr. Spiegelman that too.</v>
- 01:59:22<v Speaker>Pardon me. Donna?</v>
- 01:59:23<v Presenter>I said you better tell Dr. Spiegelman</v>
- 01:59:25let it be brief also. (laughs)
- 01:59:29I'm just joking.
- 01:59:30<v ->Hello?</v> <v ->Okay.</v>
- 01:59:31<v Speaker>Hello. Good evening.</v>
- 01:59:33<v Speaker>Hello Dr. Is that Dr. William? Yeah, thank you.</v>
- 01:59:36<v Speaker>Yes. Good evening, ma'am.</v>
- 01:59:38<v ->Yeah we are hearing you.</v> <v ->Good evening, ma'am.</v>
- 01:59:40<v ->Yes, we're hearing you.</v> <v ->Good evening. Hello.</v>
- 01:59:44<v Speaker>Yeah, the lecture is awesome.</v>
- 01:59:48I learned a lot of new things that were
- 01:59:52being taught, but my question is mainly concerning
- 01:59:58the hybrid research that you mentioned:
- 02:00:04is it the same thing as mixed-methods research?
- 02:00:07And since HIV is a chronic disease now,
- 02:00:14would it be better to do the research that you want
- 02:00:20to do in East Africa, that's in Uganda, also
- 02:00:23in West Africa, to see if there are changes,
- 02:00:28though they are both flat,
- 02:00:30given the different terrains and all that, which also play a part
- 02:00:37in managing the patient.
- 02:00:38So those are the issues I have. So, thank you.
- 02:00:44<v Presenter>Great.</v>
- 02:00:45So I'm glad you asked that question.
- 02:00:47Hybrid designs and mixed methods are not the same thing.
- 02:00:52So hybrid designs are, well, let me first say
- 02:00:57what mixed methods are, which I didn't really talk about.
- 02:00:59That's another thing I could have actually included
- 02:01:02in this talk.
- 02:01:03But mixed methods involve the mixing of qualitative
- 02:01:06and quantitative research along the entire study process.
- 02:01:11And there are different types of mixed methods
- 02:01:15designs depending on what's considered to be
- 02:01:18more important, the qualitative or the quantitative.
- 02:01:21So like you could say, the MOST design,
- 02:01:23which I did talk about, is a mixed methods design,
- 02:01:26because phase one of the MOST design
- 02:01:30at least has a qualitative component.
- 02:01:32We use qualitative data to kind of narrow down
- 02:01:36the intervention package components.
- 02:01:39But then we'd use quantitative data in phase two
- 02:01:42in MOST to further whittle them down to what we're gonna
- 02:01:46use for the intervention we're gonna roll
- 02:01:48out in the full trial.
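As a minimal sketch of what this phase-two "whittling down" might look like quantitatively, assuming a simple two-level factorial screening experiment of the kind often used in MOST: the component names, effect sizes, sample size, and retention threshold below are all hypothetical, not from the talk.

```python
import itertools
import numpy as np

# Hypothetical MOST phase-two screening: a 2^3 factorial experiment
# testing three candidate intervention components (names are made up).
rng = np.random.default_rng(0)
components = ["reminder_calls", "peer_counseling", "transport_voucher"]
true_effects = np.array([0.40, 0.05, 0.25])  # assumed, for simulation only

# Every on/off combination of the three components, replicated per cell.
cells = np.array(list(itertools.product([0, 1], repeat=3)))
X = np.repeat(cells, 50, axis=0)
y = X @ true_effects + rng.normal(0.0, 1.0, len(X))  # simulated outcome

# Main-effects least squares: an intercept plus one coefficient per component.
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# Retain components whose estimated effect clears a pre-specified threshold.
kept = [c for c, b in zip(components, coef[1:]) if b > 0.2]
print("Components retained for the full trial:", kept)
```

The point is only the shape of the workflow: qualitative work in phase one proposes candidate components, and a designed quantitative experiment in phase two decides which ones survive into the full trial.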
- 02:01:50But it's recommended, and even though I have really
- 02:01:54almost no social science training,
- 02:01:57I've come to deeply appreciate and value the role
- 02:02:00of social scientists in implementation science.
- 02:02:03And I would say that we need qualitative research
- 02:02:06along with quantitative along the entire pathway,
- 02:02:10starting from qualitative and quantitative data
- 02:02:13about what is and isn't working about the intervention,
- 02:02:16if it's already in place, or about what people think
- 02:02:20of a new way of adapting the intervention
- 02:02:24to a new situation.
- 02:02:26And then you kind of roll out your trial,
- 02:02:28whatever kind of trial you have.
- 02:02:29And then while the trial's going on,
- 02:02:31it's really important to collect qualitative data
- 02:02:34because if it doesn't work, we wanna know why.
- 02:02:39So like in the BetterBirth study that I mentioned,
- 02:02:42because it wasn't a mixed method study,
- 02:02:44we have no idea why there was this failure to take up
- 02:02:49the Safe Childbirth Checklist.
- 02:02:51Was it that the turnover of staff was too high,
- 02:02:55or that supplies were not in the facilities?
- 02:02:57I mean there's just so many reasons, we have no idea.
- 02:03:00So the qualitative piece while the study is going on
- 02:03:03is really important.
- 02:03:05And then you do your quantitative evaluation
- 02:03:07of your endpoints.
- 02:03:08And then a lot of people advocate, after that,
- 02:03:11further qualitative data collection to find out
- 02:03:15what people thought of the intervention,
- 02:03:16what suggestions they have for improvement,
- 02:03:19what they think the next step might be in terms
- 02:03:21of scale up or scale out.
- 02:03:24And so you'd have qualitative and quantitative trading
- 02:03:27off along the whole continuum.
- 02:03:30And then there are also formal ways of doing
- 02:03:35mixed methods analysis where, when you evaluate outcomes,
- 02:03:39you actually integrate the qualitative
- 02:03:42and quantitative data in formal ways
- 02:03:45that I know exist, though
- 02:03:47I haven't actually had the opportunity to do that yet.
- 02:03:50And so I'm looking forward to learning more
- 02:03:52about how to do that.
- 02:03:54So that's very different, I hope you can now see,
- 02:03:57from the hybrid design, where we have a standard quantitative
- 02:04:01study design like a CRT or a stepped wedge design,
- 02:04:05and the hybrid question is just about
- 02:04:08the primary outcome: is it a health outcome,
- 02:04:11both a health outcome and an implementation outcome,
- 02:04:13or an implementation outcome only?
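To make that distinction concrete, here is a small illustrative sketch; the type 1/2/3 labels come from the standard hybrid effectiveness-implementation design literature (Curran et al.) rather than from the talk itself.

```python
def hybrid_type(health_primary: bool, implementation_primary: bool) -> str:
    """Classify a hybrid effectiveness-implementation design by which
    outcomes are designated primary (standard type 1/2/3 labels)."""
    if health_primary and implementation_primary:
        return "type 2: health and implementation outcomes co-primary"
    if health_primary:
        return "type 1: health outcome primary, implementation secondary"
    if implementation_primary:
        return "type 3: implementation outcome primary, health secondary"
    raise ValueError("a hybrid design needs at least one primary outcome")

# The underlying quantitative design (CRT, stepped wedge, etc.) is unchanged;
# only which outcome is primary changes across the three hybrid types.
print(hybrid_type(health_primary=True, implementation_primary=True))
```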
- 02:04:20<v ->Okay, thank you, Donna.</v>
- 02:04:22(indistinct)
- 02:04:27<v Speaker>Yeah, thank you. Thank you very much.</v>
- 02:04:29<v Speaker>One of the universities in Nigeria.</v>
- 02:04:32<v ->It's all right.</v> <v ->You're welcome.</v>
- 02:04:34<v Speaker>It's all right. Thank you very much.</v>
- 02:04:37Professor Donna, Professor Ajayi and everybody here.
- 02:04:41<v ->Thank you.</v> <v ->My network,</v>
- 02:04:43I was on the road so my network was off and on.
- 02:04:46I first started listening to implementation science,
- 02:04:50I think around 2016 at NIH, and one of the last speakers
- 02:04:55showed a map, and
- 02:04:59it was still new then;
- 02:05:01you would just see a lot of studies in East Africa,
- 02:05:05hardly anything in West Africa.
- 02:05:08You know, I'm just trying to look at the gaps
- 02:05:12that we need to be filling.
- 02:05:14I'm challenging those of us present here and our speaker,
- 02:05:18what really causes that?
- 02:05:21There's not a balance.
- 02:05:25You see a lot of studies on the East African coast
- 02:05:29and very minimal...
- 02:05:31That's one thing I observed.
- 02:05:33Secondly, each time I go for these meetings,
- 02:05:37with all due respect, you are talking about trials,
- 02:05:41you're talking about these implementations,
- 02:05:43hospitals, everything.
- 02:05:45We hardly see those who handle these drugs.
- 02:05:49I mean, I'd love multidisciplinary research,
- 02:05:53that's why I'm here.
- 02:05:54I believe in it.
- 02:05:56But we hardly see them, and it could be the fault of
- 02:05:59those drug handlers themselves, the pharmacists,
- 02:06:02those that handle drugs.
- 02:06:04Because I was in another group with Harvard on malaria
- 02:06:07and it's the same thing.
- 02:06:08They were even shocked.
- 02:06:09They said, you are the only pharmacist we've seen
- 02:06:11in this thing and you're talking about medicines.
- 02:06:13They're not involved, and in the hospitals,
- 02:06:16when I listen to them, they don't even want to mention them.
- 02:06:21So we should really be looking at involving everybody
- 02:06:26in this healthcare sector for the implementation
- 02:06:30to work.
- 02:06:31In terms of community pharmacists, they do a lot in Africa.
- 02:06:35They really need to be brought on board.
- 02:06:38They do a whole lot.
- 02:06:39They're the (indistinct) when they're ill.
- 02:06:43So we should look at those gaps and build the skill sets
- 02:06:48in that area.
- 02:06:49And then finally,
- 02:06:54with all these websites
- 02:06:57and resources, it would be good for those of us here,
- 02:07:00maybe with Professor Ajayi, to really do
- 02:07:05certification courses on these things
- 02:07:08so that we will be well trained.
- 02:07:11It's obvious that you need a lot of statistics.
- 02:07:13Some of us may not be strong there,
- 02:07:15which is why a multidisciplinary approach is very important.
- 02:07:19Well, thank you very much for everything.
- 02:07:24<v ->Thank you.</v> <v ->Oh thank you very much.</v>
- 02:07:27<v ->I think it'll be more interesting just ahead.</v>
- 02:07:30<v Speaker>Okay. Donna, I'm with you.</v>
- 02:07:31<v ->Oh, okay.</v>
- 02:07:32Well, I think these comments are probably best
- 02:07:36discussed by the many participants on this call more
- 02:07:40than me, 'cause they have to do with
- 02:07:45the role of implementation science
- 02:07:47in West Africa and what the questions are.
- 02:07:50The only thing I would say,
- 02:07:52one small observation, is that
- 02:07:56it's an unintended consequence
- 02:08:00of the fact that,
- 02:08:02I think, to some extent, the issue you're bringing up
- 02:08:05arose because HIV rates
- 02:08:09were so much lower in West Africa than in East Africa.
- 02:08:13So there was so much funding being poured
- 02:08:16into East Africa for mitigation
- 02:08:19of the HIV/AIDS epidemic.
- 02:08:22And now, as the epidemic has lessened,
- 02:08:25that work is starting to evolve into some of these other topics.
- 02:08:28Whereas in West Africa there just wasn't as much,
- 02:08:31because, luckily, the AIDS epidemic was
- 02:08:33just so much less severe.
- 02:08:40<v ->Yeah, thank you very much for that response.</v>
- 02:08:42And I think it is a challenge also to researchers
- 02:08:46in this part of the continent.
- 02:08:49<v ->I know.</v> <v ->And with this lecture</v>
- 02:08:51I think we should start
- 02:08:53thinking of the various opportunities that are
- 02:08:58there for us to tap into.
- 02:09:01So thank you very much, (indistinct)
- 02:09:04for that observation and I think
- 02:09:07we should try and bridge the gap and come up with...
- 02:09:12(indistinct)
- 02:09:19<v ->Please.</v>
- 02:09:20So I think we need to really round up.
- 02:09:24I want to mention that Donna actually moved
- 02:09:30straight from another lecture to ours.
- 02:09:32So we are really very,
- 02:09:33very grateful, because we know by now you should be
- 02:09:37resting from the various assignments you had all morning.
- 02:09:44So, to round off, Donna, I've just observed
- 02:09:48that our clothes are of the same color.
- 02:09:54<v ->I know, I noticed it also. It's kind of amazing.</v>
- 02:09:59<v Speaker>Oh, what a coincidence. I'm so happy.</v>
- 02:10:02<v ->I know.</v>
- 02:10:02<v ->And that means I need to come back there soon.</v>
- 02:10:06<v ->Yeah, that'd be great. That would be wonderful.</v>
- 02:10:09<v ->Thank you very much.</v>
- 02:10:11So we have Dr. Kiemi who is going to do the vote of thanks
- 02:10:16on behalf of the Institute for Advanced Medical Research
- 02:10:20and Training.
- 02:10:22So Dr. Kiemi, are you there?
- 02:10:28<v ->Yes, I am.</v> <v ->Okay, please,</v>
- 02:10:31the floor is yours now, thank you.
- 02:10:33<v ->Right, thank you very much,</v>
- 02:10:37Professor Donna Spiegelman, for a very exciting
- 02:10:42and illuminating lecture on implementation science.
- 02:10:46In this vote of thanks,
- 02:10:48I would just like to say that this lecture comes
- 02:10:52right on the heels of an African summit
- 02:10:57that we had just last week.
- 02:11:00And one of the strong takeaways of that summit was
- 02:11:03that we need implementation science to reduce the burden
- 02:11:08of stroke in Africa.
- 02:11:10We discovered that in Africa only 7%
- 02:11:13of hypertensives are controlled.
- 02:11:16And that underscores the need
- 02:11:19for implementation science, you know,
- 02:11:23to improve awareness about hypertension and
- 02:11:27other risk factors, and to improve uptake
- 02:11:31of treatment among hypertensives to enhance control of hypertension,
- 02:11:36and of course to reduce the burden of stroke.
- 02:11:38So it's very germane to the field
- 02:11:41of non-communicable diseases on the continent.
- 02:11:44And I'm sure you wouldn't mind partnering with us
- 02:11:47in the years ahead, you know,
- 02:11:49to undertake implementation science research in reducing
- 02:11:52the burden of stroke in Africa.
- 02:11:55So on that note, I'd like to,
- 02:11:57on behalf of the director
- 02:11:59of the Institute for Advanced Medical Research
- 02:12:01and Training College of Medicine... (indistinct)
- 02:12:05I'd like to say very big thank you for the time
- 02:12:09you have invested in sharing with us these deep
- 02:12:13thoughts from your profound wealth of experience
- 02:12:17and knowledge in the field of implementation science.
- 02:12:21The director, the entire staff, and indeed the whole
- 02:12:25of the College of Medicine, and the wider
- 02:12:27community, including colleagues
- 02:12:30who have joined from other institutions in Nigeria
- 02:12:32and across the continent, are all grateful.
- 02:12:36We trust and hope to build on this foundational
- 02:12:40knowledge you have shared with us to advance the field
- 02:12:43across the continent.
- 02:12:45Thank you very much and God bless.
- 02:12:48<v ->Thank you, everybody.</v>
- 02:12:51Nice- <v ->And let us</v>
- 02:12:52all give an applause.
- 02:12:55<v ->Thank you.</v> <v ->Thank you so much.</v>
- 02:12:57<v Presenter>Thank you. It's been a pleasure.</v>
- 02:13:00Thank you so much. <v ->Thank you.</v>
- 02:13:02<v ->So much, Donna.</v> <v ->Thank you so much.</v>
- 02:13:03<v ->Thank you. Bye bye.</v>
- 02:13:05<v ->Bye.</v> <v ->Thank you.</v>
- 02:13:07<v ->Bye.</v> <v ->Thank you.</v>
- 02:13:10<v ->Bye.</v> <v ->Bye, Donna.</v>
- 02:13:13<v Student>Bye bye, thank you.</v>
- 02:13:16<v ->Thank you, thank you.</v> <v ->Thank you.</v>
- 02:13:21<v Speaker>Thank you, Donna. Thank you again.</v>
- 02:13:24<v Speaker>So much.</v>
- 02:13:28(indistinct)