A recent article in the Atlanta Journal-Constitution reported that three of Georgia's largest hospital systems were urging people with mild or moderate COVID-19 symptoms not to come to hospitals to be tested. If "stay away" seems like a common sentiment among hospitals fighting the coronavirus around the country, it's because they simply don't have enough tests, medicine, or healthcare workers to meet the demand created by the virus.
In cases such as this, who gets care? The sickest? The oldest? The youngest? How should policymakers target individuals to maximize intervention effectiveness?
According to Vishal Gupta, USC Marshall Assistant Professor of Data Sciences and Operations, "resource allocation problems" have been around longer than COVID-19, and researchers around the globe have long focused on designing new interventions and treatments to combat serious social problems.
But an intuitive, flexible, and tractable method for allocating those treatments to the candidates who might benefit most, especially in settings where data are limited, has remained elusive, until now.
Gupta and his team, which includes DSO Assistant Professor Song-Hee Kim; Assistant Professor Brian Rongqing Han of the University of Illinois Gies College of Business; and Hyung Paek, MD, of Yale-New Haven Hospital, have designed a new mathematical method for selecting candidates for treatment to maximize the overall benefit.
In their paper, "Maximizing Intervention Effectiveness," forthcoming in Management Science, they introduce a novel robust optimization approach to the resource allocation problem, and they demonstrate that their method can offer significant benefits over common practice, particularly when patient responses to treatment vary, the intervention has not been widely tested, and the potential for harmful side effects is real.
"It's a simple idea," Gupta said. "Your gut is to pick the sickest people. But these sickest patients may be too sick to benefit from treatment. Targeting them is arguably an inefficient use of resources. A prudent decision-maker would ideally target those individuals who benefit the most in order to get the biggest bang for buck from the intervention."
The challenge is identifying the potential benefit for each individual before administering the intervention, particularly in healthcare, where sharing individual, patient-level data is complicated. But using only the evidence typically published in a research study, Gupta and his team were able to show that the best way to ensure strong effectiveness is not only to target sick patients, but also to choose a portfolio of patients that mirrors the demographics of the populations studied in prior research.
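That portfolio idea can be made concrete with a small sketch. The code below is a simplified illustration, not the authors' optimization model: it assumes hypothetical per-patient risk scores and demographic group labels, splits the treatment budget across groups in proportion to the demographic mix reported in a prior study, and then takes the sickest patients within each group.

```python
# Minimal sketch of "target sick patients while mirroring study demographics."
# Inputs (risk scores, group labels, study mix) are hypothetical placeholders.
from collections import defaultdict

def select_portfolio(patients, study_mix, budget):
    """Pick `budget` patients, splitting slots across demographic groups in
    proportion to `study_mix` and taking the sickest within each group.

    patients  -- list of dicts like {"id": 1, "risk": 0.8, "group": "A"}
    study_mix -- dict mapping group -> share of that group in the prior study
    budget    -- total number of treatment slots
    """
    # Group patients and sort each group from sickest to healthiest.
    by_group = defaultdict(list)
    for p in patients:
        by_group[p["group"]].append(p)
    for group in by_group:
        by_group[group].sort(key=lambda p: p["risk"], reverse=True)

    # Allocate slots to each group to mirror the study's demographic mix.
    selected = []
    for group, share in study_mix.items():
        slots = round(budget * share)
        selected.extend(by_group[group][:slots])
    return selected[:budget]

# Toy usage: a 60/40 mix in the prior study and 4 treatment slots.
patients = [
    {"id": 1, "risk": 0.90, "group": "A"},
    {"id": 2, "risk": 0.80, "group": "A"},
    {"id": 3, "risk": 0.70, "group": "A"},
    {"id": 4, "risk": 0.95, "group": "B"},
    {"id": 5, "risk": 0.40, "group": "B"},
]
print([p["id"] for p in select_portfolio(patients, {"A": 0.6, "B": 0.4}, 4)])
```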
Scoring Rules
Gupta's team tested their method on a population of 1,000 Medicaid patients. In a simulation using data from a partner hospital, they selected 200 people to participate in case management, an intervention in which patients are paired with a team of social workers, nurses, and physicians who coordinate their care.
The researchers' goal was to test their robust optimization method against "scoring rules," which are current practice for managing resource allocation problems. Scoring rules help practitioners assign each individual in the candidate population a score, and those with the highest scores are targeted for treatment. Gupta's team wanted to know whether their model would outperform scoring rules in identifying optimal candidates, and if so, why.
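As a baseline, a scoring rule is straightforward to state in code. The sketch below is a generic top-k rule with random placeholder scores, not the specific scores used in the study; it ranks every candidate and treats the 200 highest-scoring patients out of 1,000, mirroring the scale of the simulation.

```python
# Minimal sketch of a scoring rule: score every candidate, treat the top k.
# Scores here are random placeholders, not the scores used in the study.
import random

def scoring_rule(patients, budget):
    """Return the `budget` patients with the highest scores."""
    ranked = sorted(patients, key=lambda p: p["score"], reverse=True)
    return ranked[:budget]

random.seed(0)
candidates = [{"id": i, "score": random.random()} for i in range(1000)]
chosen = scoring_rule(candidates, 200)
print(len(chosen), min(p["score"] for p in chosen))
```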
They found that scoring rules work well when the treatment is benign and/or the patient base is nearly homogeneous. "However," they warn, "scoring rules can perform arbitrarily badly when the treatment is potentially harmful. In addition, as heterogeneity in the sample increases, scoring rules can be worse than not targeting at all."
That finding is particularly disturbing, Gupta said. "Allocating interventions in a bad way might actually be worse than doing nothing to address the problem."
By contrast, Gupta's method for maximizing effectiveness performs nearly as well as scoring rules when heterogeneity is small, and much better than scoring rules as heterogeneity increases. Most importantly, unlike current practice, it is never worse than doing nothing.
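The failure mode behind that warning is easy to see in a toy example. The numbers below are invented for illustration and are not from the paper's simulation: when the highest-scoring patients happen to be the ones a treatment harms, a top-k scoring rule produces a negative total benefit, while not targeting anyone yields zero. The final, "harm-aware" allocation uses oracle knowledge of the effects purely to illustrate the contrast; it is not the authors' robust method.

```python
# Toy illustration (not the paper's simulation) of a scoring rule doing worse
# than no targeting when treatment effects are heterogeneous and can be harmful.
patients = (
    # High-risk group: highest scores, but the treatment harms them on average.
    [{"score": 0.9, "effect": -1.0} for _ in range(100)]
    # Moderate-risk group: lower scores, but the treatment clearly helps.
    + [{"score": 0.5, "effect": +2.0} for _ in range(100)]
)
budget = 100

# Scoring rule: treat the 100 highest-scoring patients (the harmed group).
top_k = sorted(patients, key=lambda p: p["score"], reverse=True)[:budget]
print("scoring rule total benefit:", sum(p["effect"] for p in top_k))   # -100.0

# Not targeting anyone yields zero benefit -- already better here.
print("no targeting total benefit:", 0.0)

# An allocation that avoids the harmed group (shown with oracle knowledge,
# for illustration only) stays non-negative.
safe = [p for p in patients if p["effect"] > 0][:budget]
print("harm-aware allocation benefit:", sum(p["effect"] for p in safe))  # 200.0
```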
Leveraging Published Data Results
In addition to outperforming current practice in most cases, the robust optimization approach is the first to address intervention effectiveness using only published study data.
"This restriction to the published evidence is a key distinguishing feature of our work," Gupta said. "Another is its flexibility. Our model is flexible enough to accommodate a variety of real-world constraints such as allocating resources fairly across demographic groups. And it's simple enough to solve large-scale instances within a few minutes on an ordinary laptop. We want it to be useful."
In particular, the researchers want their work to be useful to policymakers. "Our audience is people who make policy," Gupta said. "More and more, there is interest in data-driven, evidence-based research. People want interventions that solve real problems. We want to help them roll out those interventions in practice in the most effective way."