
Feature Article

The Adaptive Approach to Clinical Trials Can Benefit Patients Without Sacrificing Credibility
2012;17(7):6.

Adaptive study design for clinical trials may help bolster volunteerism while still yielding scientifically valid results, researchers said.

A new approach to testing medical treatment options could ensure that more patients get the most beneficial treatment while still yielding valuable research results that stand up to scientific scrutiny, according to William J. Meurer, MD, MS, from the University of Michigan, and colleagues.

As explained in a “Viewpoint” article in the June 13 JAMA, the “adaptive” approach tries to overcome a huge chicken-and-egg problem in medical research: Not enough people volunteer for studies of new treatments, partly because researchers can’t promise the studies will help them—but without enough volunteers, researchers can’t study new treatment options.

The investigators said that the adaptive approach makes the most sense in situations where time is of the essence (eg, emergency care) or where the medical stakes are high and there are few good treatment options—as is the case with some forms of cancer. The researchers also noted that there are many situations where adaptive design isn’t feasible or needed.

But for patients who are being asked to participate in research studies, an adaptive design could help tip the balance between saying yes and saying no. It could also help patients who enter trials have a clearer understanding of what the stakes are for them personally—not just for the generation of patients who will come after them.

Learning as They Go
The adaptive approach to clinical trial design centers on how patients are assigned to one of two or more groups in a study. In a nonadaptive trial, everyone who volunteers—from the first patient to the last—is assigned with what amounts to a coin toss, and the groups end up similar in size.

But in an adaptive trial, the trial’s statistical algorithm continually monitors results from earlier volunteers and looks for any sign that one treatment is outperforming another. Without patients or study physicians knowing that the odds have shifted, subsequent patients are randomly assigned to the arm with the better expected outcome at a somewhat higher probability than to the other study arms. In other words, the trial “learns” along the way.
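
The mechanics can be sketched in a few lines of code. The following is a minimal, illustrative Python example of one common form of response-adaptive randomization (Thompson-sampling-style assignment using Beta posteriors for a binary outcome). It is not the specific algorithm used in any of the trials described here, and all arm names, outcome rates, and patient counts are hypothetical.

```python
import random

class Arm:
    """One treatment arm with a Beta(1, 1) prior on its success rate."""
    def __init__(self, name):
        self.name = name
        self.successes = 0  # good outcomes observed so far
        self.failures = 0   # poor outcomes observed so far

    def posterior_draw(self):
        # Sample a plausible success rate from this arm's Beta posterior.
        return random.betavariate(1 + self.successes, 1 + self.failures)

def assign_next_patient(arms):
    # Thompson-sampling-style assignment: the arm whose sampled success
    # rate is highest gets the next patient, so better-performing arms
    # gradually receive more volunteers while assignment stays random.
    return max(arms, key=lambda arm: arm.posterior_draw())

def record_outcome(arm, success):
    if success:
        arm.successes += 1
    else:
        arm.failures += 1

# Hypothetical simulation: arm B is truly better than arm A.
true_rates = {"A": 0.45, "B": 0.55}
arms = [Arm("A"), Arm("B")]
for _ in range(500):
    arm = assign_next_patient(arms)
    record_outcome(arm, random.random() < true_rates[arm.name])

for arm in arms:
    print(arm.name, arm.successes + arm.failures, "patients assigned")
```

In this sketch, the arm with the higher true success rate gradually accrues more of the simulated patients, mirroring the “learning” described above, while each individual assignment remains a random draw.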

“It’s a way of assigning patients at slightly less than random chance, allowing us to do what might be in the best interest of each patient as the trial goes along,” Meurer explained.

By the end of the trial, one of the groups of patients will therefore be larger, which means the statistical analysis of the results will be trickier and the results might be a little less definitive. But if the number of patients in the trial is large, and if the difference between treatments is sizable, the results will still have scientific validity, Meurer said.
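
To see why a large trial can absorb that imbalance, consider the standard error of a difference in proportions under balanced versus unbalanced allocation. The sketch below uses hypothetical numbers (a 1,000-patient trial with a 10-percentage-point effect) and the simple two-proportion formula; it ignores the additional statistical adjustments a real adaptive analysis requires, so it is illustrative only.

```python
import math

def se_diff_in_proportions(p1, n1, p2, n2):
    # Standard error of the difference between two observed proportions.
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical 1,000-patient trial with a 10-percentage-point effect.
p_control, p_treatment = 0.40, 0.50

balanced = se_diff_in_proportions(p_control, 500, p_treatment, 500)
adaptive = se_diff_in_proportions(p_control, 350, p_treatment, 650)

print(f"balanced 500/500 allocation: SE = {balanced:.4f}")
print(f"unbalanced 350/650 allocation: SE = {adaptive:.4f}")
```

Under these assumed numbers, shifting allocation toward the better arm widens the standard error only slightly, which is the trade-off Meurer describes: a modest loss of precision in exchange for more patients receiving the apparently better treatment.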

One example of this methodologic approach is being coordinated by the Neurological Emergencies Treatment Trials network based at the University of Michigan. The study, called SHINE, uses an adaptive method of assigning stroke survivors to a target blood glucose level in the first day after their stroke—with the goal of finding out how much impact glucose control has on how well the patients do overall. The study was designed by researchers at the University of Virginia, Medical College of Georgia, University of Texas Southwestern, and the Neurological Emergencies Treatment Trials Statistical and Data Management Center at the Medical University of South Carolina.

There are other emergency treatment studies with adaptive design being planned at the University of Michigan with collaborators from throughout the country. These include trials investigating therapeutic hypothermia, one after cardiac arrest and another after spinal cord trauma. In addition, the team has developed an adaptive comparative effectiveness trial to evaluate three different medications to stop ongoing seizures in patients who have not had a response to first-line treatment.

More Sophisticated Analysis
Of the adaptive approach, Meurer said, “It takes more preparation for the researchers up front, and more sophisticated statistical analysis as the trial is going on, but in the end more study volunteers will be more likely to get the best option for them, and the results will still be scientifically sound.”

When time is of the essence and patients or their loved ones are asked to make a decision about entering a clinical trial during a health crisis—and the difference between treatment arms could be large—adaptive design can be most powerful, Meurer said. Pharmaceutical companies and medical device manufacturers have been quicker to adopt adaptive designs for their trials; academic centers that conduct large numbers of nonindustry trials have lagged behind.

“Adaptive design gives us the potential to get it right and put more people where the bang for the buck is, but still have the change be invisible to the physicians and staff carrying out the trial,” Meurer said. “If a particular option helps patients about 10% more than other options, but the adaptive design’s impact on the statistical results means that you can only say the effect is somewhere between 9% and 11%, the tradeoff is still worth it.”

Suggested Reading
Meurer WJ, Lewis RJ, Berry DA. Adaptive clinical trials: a partial remedy or the therapeutic misconception? JAMA. 2012;307(22):2377-2378.
