A century ago, cervical cancer was a leading cause of death in the United States, killing 40,000 women a year. Then a test called the Pap smear made it easy to detect anomalous cells. Today, robust screening programs have greatly reduced cervical cancer deaths in this country.
The story sounds simple: a public-health triumph. But preventing cervical cancer is far from straightforward. It requires a chain of events with links like "recommend test," "call patient with results," "make follow-up appointment," and so on. A single broken link, caused by something as mundane as an unreliable bus schedule or clunky software, can interrupt care.
Many good interventions ultimately fail to benefit public health. That's why Susan Dwight Bliss Professor of Biostatistics Donna Spiegelman, ScD, founded the Yale Center for Methods in Implementation and Prevention Science (CMIPS). Implementation science studies the barriers that keep evidence-based interventions from going mainstream, as well as the factors that speed up that transition.
In fact, Spiegelman noted, only 14% of biomedical research results are ever translated into practice, and that process takes an average of 17 years.
"There are very complex, multi-level reasons for why that happens," she said, “and addressing that shortfall is exactly what implementation science is all about."
When it comes to implementing research results in the real world, a host of factors on several levels can make a difference.
For example, providers may or may not have the time, training, and supplies to offer patients evidence-based care. Patients may or may not follow through on recommended care depending on costs, conflicting obligations, wait times, or cultural beliefs about certain diagnoses. Systems-level factors matter, too. Is there enough political will to fund health systems? Are there funds for large-scale prevention campaigns? Buses so patients can reach their appointments? Policies that let them take time off work?
Tackling one factor, or focusing only on one level, may not be enough for an intervention to take hold. So, Spiegelman explained, implementation scientists develop multi-component strategies that are also cost-effective.
Doing that is surprisingly complex, but the science is underway. CMIPS has made it a priority to develop new statistical methods and equip the field with stronger tools.
For example, suppose you have resources to improve cervical cancer outcomes in Mexico, where the chain is fragile and many women still die from the disease. Which interventions will work best? Should you offer nurses more training? Step up efforts to reach patients? Open more clinics? Early on, it's hard to say.
So CMIPS researchers pioneered a trial technique called the learn-as-you-go, or LAGO, design. This type of trial begins with a bundle of interventions. Then, at specified stages, researchers check which have been most effective. With this knowledge, they adapt the interventions, keeping what works and discarding what doesn't. They repeat this as new trial data arrive, continuing to optimize the bundle.
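To make the idea concrete, here is a minimal sketch, in Python, of the kind of learn-and-adapt loop a LAGO-style trial involves. The two intervention components, the logistic outcome model, the cost figures, and the 80% target are invented for illustration; this is not CMIPS's actual methodology, data, or code.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

# Toy, hypothetical sketch of a learn-as-you-go (LAGO) style loop.
# Component names, outcome model, costs, and targets are all invented.

rng = np.random.default_rng(0)

# Two made-up intervention components: hours of nurse training and
# patient-outreach calls per clinic, with assumed per-unit costs.
COSTS = np.array([50.0, 10.0])
TRUE_BETA0, TRUE_BETA = -1.0, np.array([0.04, 0.08])  # unknown in practice

def simulate_stage(package, n_clinics=6, n_per_clinic=50):
    """Simulate screening outcomes at clinics whose delivered packages
    vary a little around the planned package, as happens in practice."""
    delivered = np.clip(package + rng.normal(0, 2, size=(n_clinics, 2)), 0, None)
    x = np.repeat(delivered, n_per_clinic, axis=0)
    p = 1 / (1 + np.exp(-(TRUE_BETA0 + x @ TRUE_BETA)))
    return x, rng.binomial(1, p)

def next_package(model, target=0.80):
    """Choose the cheapest package the fitted model predicts will reach
    the target screening-completion rate."""
    b0, b = model.intercept_[0], model.coef_[0]
    res = minimize(
        lambda z: COSTS @ z,                       # minimize cost...
        x0=np.array([10.0, 10.0]),
        bounds=[(0, 40), (0, 40)],
        constraints=[{"type": "ineq",              # ...subject to the predicted
                      "fun": lambda z: 1 / (1 + np.exp(-(b0 + b @ z))) - target}],
    )
    return res.x

package = np.array([10.0, 10.0])          # initial best-guess bundle
x_all, y_all = np.empty((0, 2)), np.empty(0)

for stage in range(1, 4):
    x, y = simulate_stage(package)        # run a stage with the current bundle
    x_all, y_all = np.vstack([x_all, x]), np.concatenate([y_all, y])
    model = LogisticRegression(C=1e6).fit(x_all, y_all)  # learn from all data so far
    package = next_package(model)                        # ...and adapt the bundle
    print(f"after stage {stage}: next package = {np.round(package, 1)}")
```

In a real LAGO analysis the re-estimation and update rules are far more careful than this gradient of a toy simulation suggests; the sketch only shows the basic rhythm of learning from accumulated data, then revising the intervention package before the next stage.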
Another CMIPS tool in development involves examining how public-health interventions affect people who are not their intended targets. For example, though a workplace fitness program may be aimed only at employees, a worker who brings home a new exercise habit might draw family members into fitness activities, too.
"The current paradigm for public health research doesn't typically estimate these spillover effects," Spiegelman said. "It only estimates the effect of interventions on those who are directly receiving them. Our emphasis on developing new methods is unique to CMIPS."