How Big Data can help you choose better health insurance

By Dylan Walsh

There are plenty of easy consumer choices. Paper clips: easy. Dish sponges: easy. Those products sit at one end of the spectrum. At the other end, impossibly distant, is health insurance.

That’s difficult.

“Tons of evidence suggests that people have a hard time making choices when it comes to health insurance,” says Kate Bundorf, associate professor at Stanford School of Medicine with a courtesy appointment at Stanford Graduate School of Business. The complexity can be overwhelming and, as a result, people often choose suboptimal plans that punish them with higher costs and create inefficient markets. “So we wanted to figure out what types of tools would help people make decisions,” says Bundorf.

With Maria Polyakova of Stanford School of Medicine and Ming Tai-Seale of the University of California, San Diego, she developed a web-based tool whose algorithm matched the medical records of Medicare Part D enrollees with the best prescription drug insurance options. Those who used the algorithm were more likely to switch to a better plan. They also reported more satisfaction with the process of choosing health insurance, even though they ended up spending more time on it.

Making insurance choices easier and better

Study participants were assigned to either a control group or one of two treatments. The control group was directed to existing online Medicare resources for choosing among the 22 prescription drug plans available to them. The treatment groups, meanwhile, received support from the algorithm, which automatically drew information from their medical records and matched it against the available plans. When reviewing their options, both treatment groups could view an online table showing an individualized analysis of likely costs under each plan. In addition, one of the treatment groups was shown an “expert score” for every plan: a number from 0 to 100 that the algorithm produced to rank the plans. The three best options were highlighted at the top of the table.
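The article doesn’t publish the study’s scoring algorithm, but the mechanics it describes (estimate each plan’s likely annual cost from a member’s prescription history, convert that estimate into a 0-to-100 score, and surface the three best options) can be sketched in a few lines of Python. Everything below, from the Plan fields to the linear scoring formula, is a hypothetical illustration rather than the researchers’ implementation; the real tool also pulled its cost estimates from medical records automatically, which the hard-coded numbers here merely stand in for.

```python
# Illustrative sketch only: the study's actual algorithm is not published.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    premium: float        # annual premium, dollars
    est_drug_cost: float  # estimated annual out-of-pocket drug cost,
                          # derived from this member's prescription history

def expert_scores(plans: list[Plan]) -> list[tuple[str, float, int]]:
    """Rank plans by estimated total annual cost and map each to a
    0-100 score (hypothetical linear formula: cheapest plan = 100)."""
    totals = [(p.name, p.premium + p.est_drug_cost) for p in plans]
    lo = min(t for _, t in totals)
    hi = max(t for _, t in totals)
    scored = [
        (name, total,
         round(100 * (hi - total) / (hi - lo)) if hi > lo else 100)
        for name, total in totals
    ]
    # Sort best-first; the study's interface highlighted the top three.
    return sorted(scored, key=lambda s: -s[2])

plans = [
    Plan("Plan A", premium=480, est_drug_cost=1200),
    Plan("Plan B", premium=720, est_drug_cost=600),
    Plan("Plan C", premium=300, est_drug_cost=1900),
]
for name, total, score in expert_scores(plans)[:3]:
    print(f"{name}: est. total ${total:,.0f}/yr, score {score}")
```

Running this prints the hypothetical plans best-first, much as the study’s table put the three highest-scoring plans at the top while still letting enrollees inspect the cost estimate behind each score.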

Both treatments encouraged people to change to more favorable insurance plans, but the treatment that included the “expert” suggestions alongside cost estimates proved more effective. Participants in this treatment opted to switch plans 36% more often than those in the control group. “We found clear evidence that the intervention changed people’s behavior, particularly in the case when we offered expert advice,” says Bundorf.

In the context of the experiment, these changes generated $270,000 in savings for consumers. And while that may seem a small number, it came from a pool of just 316 treatment subjects who had access to the expert recommendation. If the same effects held across the nearly 25 million people enrolled in Medicare Part D, and assuming a participation rate equivalent to the one Bundorf and her colleagues saw in this experiment, savings would be on the order of $680 million. That figure is particularly notable given that the tool itself cost less than $1.8 million to develop.
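The article doesn’t spell out that extrapolation. As a rough back-of-the-envelope check using only the figures above (the published estimate presumably rests on the authors’ more careful model), the per-person saving and the observed participation rate give

$$\frac{\$270{,}000}{316} \approx \$854 \text{ per expert-arm participant}, \qquad 25{,}000{,}000 \times \frac{1{,}185}{30{,}000} \approx 990{,}000 \text{ likely participants},$$

$$990{,}000 \times \$854 \approx \$8.5 \times 10^{8},$$

which lands in the same order of magnitude as the reported $680 million.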

Crossing to the policy world

Though the practical implications are clear, two important considerations moderate the translation of this finding into policy.

First, only a small portion of those eligible to join the study chose to enroll. In the end, 1,185 people took part out of nearly 30,000 who were invited, and those who joined were more tech-savvy than those who didn’t. On top of this, the researchers worry that the people who would benefit most might not have elected to take part.

“The people who chose to interact with the algorithm were sophisticated consumers; they were active shoppers who were seeking out information,” says Polyakova. “This suggests that if we want to improve the choices of people who currently have the worst plans, then simply offering the tool online won’t solve the problem.” A more proactive approach is necessary.

Second, the study’s demographics as a whole are not representative of the broader Medicare population. Bundorf and her colleagues partnered with the Palo Alto Medical Foundation to run the experiment, which means those who took part lived in one of the wealthiest and most technologically attuned parts of the country. Whether the results would generalize is unknown. “It’s conceivable that people in other places, who have lower incomes and less exposure to tools like this, may behave completely differently,” says Polyakova.

An algorithm win (and a warning)

Bundorf and her colleagues were not sure at the outset that this intervention would change behavior. A pile of evidence suggests that simply giving people information doesn’t influence outcomes. But the results point to a clever feature of the study’s design: by including two distinct treatments, the researchers were able to separate the effect of information alone, showing consumers the estimated total cost of each plan, from the effect of expert advice paired with that information.

“And advice does something different than information,” says Polyakova. “When people are exposed to advice, it not only changes their knowledge about a product, but it also changes how they actually value the features of that product.”

This, she notes, has complicated and important implications. We tend to think of software as neutral—Microsoft Excel has no agenda—but this is not always the case with modern algorithms. Companies can, and likely will, deploy advice-giving algorithms strategically, perhaps to promote a certain product or increase revenue, and concealed in this process will be the ways in which these algorithms alter how we value different products.

“If people are responsive to this type of algorithmic advice, then it makes the very near future quite interesting,” says Polyakova. “Lots of policy and regulatory questions about how to protect consumers from non-benign interventions will soon need our attention.”


This article originally appeared on Stanford Business and is republished here with permission.
