How Powerful Interests Use Science to Sway Public Opinion

In the 1960s, the Sugar Industry Subtly Influenced Scientific Consensus Without Ever Committing Fraud


A study released in 2016 detailed how the sugar industry worked to downplay emerging science linking sugar and heart disease. Courtesy of Matt Rourke/Associated Press.

In a 2016 article, researchers at the University of California, San Francisco, documented a surprising link between the sugar industry and research on fat. They showed that during the 1960s, the Sugar Research Foundation, an industry-sponsored organization, had paid a group of doctors at Harvard University to write a literature review that downplayed the risks of sugar in heart disease and emphasized the risks of fat. A media storm followed, with widespread coverage of this “sugar conspiracy.” Marion Nestle, a professor of nutrition at New York University, described the findings as “appalling.” Many quickly concluded that Big Sugar had shifted the course of nutrition science, with serious public health consequences.

But just two years later, historians David Merritt Johns and Gerald M. Oppenheimer argued that the alleged sugar conspiracy was nothing more than science as usual. As they pointed out in an article in Science, the Harvard doctors already thought fat was to blame for heart disease, and had previously published influential research supporting this theory. Meanwhile, the main advocate of a link between sugar and heart disease, John Yudkin, was himself funded by the dairy and egg industries. In a piece for Slate, Johns and Oppenheimer criticized the critics of the sugar industry for their “highly selective and profoundly flawed interpretation of this history.”

Who is right? Which story should we believe? To answer this question, we need a more nuanced understanding of the many ways that industry can—and does—influence science.

The simplistic picture of how this influence works involves fraudulent science for sale: a shady industry representative delivers a briefcase of cash to a scientist, who promptly publishes the desired results. But in researching this topic, we found that representatives of industry are usually far more sophisticated, using subtle techniques that can shift scientific consensus and are much more difficult to detect. In fact, they often get the policy results they want without committing fraud, and often they don't even have to subvert the norms of scientific practice to do so.

One technique they use to pull this off is what we have called selective sharing. Selective sharing relies on the fact that all scientific evidence is probabilistic: Not everyone who smokes gets cancer, and not everyone who gets cancer smokes. This means that some well-run studies, free from industry influence, will yield misleading results. In the case of tobacco, dozens of perfectly good studies have found no link between tobacco products and cancer. Of course, these studies are outliers. Hundreds more have found those risks, and looking at the full body of evidence leaves little room for uncertainty.

But in the hands of tobacco industry propagandists, outlier studies become powerful weapons of misinformation. As Naomi Oreskes and Erik M. Conway report in their 2010 book Merchants of Doubt, in the 1950s a research committee funded by Big Tobacco distributed bimonthly pamphlets selectively reporting on studies that showed no link between tobacco and cancer. These were sent to hundreds of thousands of journalists, doctors, and policymakers. There was no fraud involved in this technique, and the research described in the pamphlets was often not funded by or otherwise linked to the tobacco industry. In this case, Big Tobacco did not need to influence scientists to influence beliefs about what the science showed.

The key to this strategy is to bias the data seen by the public. The data may be real, independently generated, and of very high quality, but if the people who use that data to make decisions see only a misleading fraction of what has been found, their beliefs are more likely to be false.
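
For readers who want to see the arithmetic, the short Python sketch below is our own illustration, not drawn from any real dataset: the exposure rates, sample sizes, and study counts are all invented for the example. It simulates 200 small studies of a genuine risk and shows how sampling noise alone produces a few null results that, circulated on their own, tell the opposite story.

import random

random.seed(0)

# Invented rates for illustration only: a genuine effect, with the
# outcome more common in the exposed group than in the control group.
BASE_RATE, EXPOSED_RATE = 0.05, 0.15
N_STUDIES, N_PER_ARM = 200, 50

def run_study():
    # One small study: the observed risk difference between two arms.
    exposed = sum(random.random() < EXPOSED_RATE for _ in range(N_PER_ARM))
    control = sum(random.random() < BASE_RATE for _ in range(N_PER_ARM))
    return (exposed - control) / N_PER_ARM

results = [run_study() for _ in range(N_STUDIES)]

# The full body of evidence recovers the true effect (about +0.10).
print("mean effect across all studies: %+.3f" % (sum(results) / len(results)))

# Sampling noise guarantees occasional null or reversed outliers.
outliers = [r for r in results if r <= 0]
print("studies finding no link (or a reverse one): %d" % len(outliers))

# Selective sharing: report only the outliers to the public.
if outliers:
    print("mean effect in the shared subset: %+.3f" % (sum(outliers) / len(outliers)))

Every study in this toy world is honestly run; the deception lies entirely in which handful of the 200 results gets mailed out.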


Even more pernicious is when industrial and political groups shape the body of evidence that is produced and published. One subtle way in which industrial groups influence the production of scientific evidence is through what philosophers of science Bennett Holman and Justin Bruner call industrial selection.

Industrial selection takes advantage of the fact that scientific research is highly diverse: even scientists working in the same field will use a variety of research methods and make distinct assumptions that can influence their results. This diversity of practice means that some research groups, by virtue of the methods they have adopted, are more likely to generate industry-friendly results. Industry can then select for funding only those scientists who are already likely to produce favorable results.

Holman and Bruner illustrate this point with a stark example. Heart arrhythmias often precede heart attacks, so in the 1970s researchers began testing drugs intended to suppress arrhythmia, with the goal of reducing deaths from heart attacks. But while some researchers ran studies testing whether the drugs actually reduced mortality, others simply tested whether the drugs reduced arrhythmia. Pharmaceutical companies funded scientists who had adopted the latter method far more heavily than the former, allowing them to run more studies and produce more data. The result was a glut of papers from labs testing only for arrhythmia suppression, and finding that the drugs indeed worked to stop arrhythmia. On the basis of this research, doctors started prescribing anti-arrhythmics.

But other studies showed that the drugs actually increased mortality, even as they prevented arrhythmia. It took years for this fact to become clear, and in the meantime upwards of one hundred thousand patients may have died as a result of anti-arrhythmic drugs.

In this case, again, individual researchers continued producing the research they would have produced otherwise. They just produced more of it, and spread it more widely, because of the extra funding industry provided. Individual scientists were not corrupted, but nonetheless the process was corrupted, leading to thousands of deaths.
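
Industrial selection can be put in miniature the same way. In the Python sketch below, which is our own hedged illustration, the favorability rates and lab counts are invented: a drug reliably suppresses arrhythmia (the surrogate endpoint) but does not cut mortality, and industry money does nothing except buy extra studies from labs whose chosen method already tends to look favorable.

import random

random.seed(1)

# Invented numbers for illustration only: a lab's choice of method
# largely determines its verdict on the drug.
P_FAVORABLE = {"surrogate": 0.9, "mortality": 0.2}

# Half the field tests the surrogate endpoint, half tests mortality.
labs = ["surrogate"] * 10 + ["mortality"] * 10

def literature(extra_for_surrogate):
    # Every lab runs one honest study; industry funding simply lets
    # surrogate-endpoint labs run extra studies of the same kind.
    favorable = unfavorable = 0
    for method in labs:
        n_studies = 1 + (extra_for_surrogate if method == "surrogate" else 0)
        for _ in range(n_studies):
            if random.random() < P_FAVORABLE[method]:
                favorable += 1
            else:
                unfavorable += 1
    return favorable, unfavorable

print("no industry funding:  %2d favorable, %2d unfavorable" % literature(0))
print("industrial selection: %2d favorable, %2d unfavorable" % literature(4))

No lab changes its behavior and no result is falsified; the published record skews simply because one method's output has been multiplied.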

The sugar industry offers another version of the same problem. As we saw, Big Sugar sought out and funded researchers who were already convinced of the dangers of fat, but not of sugar. The Harvard researchers, for their part, claimed that the money made no difference to the research they conducted. But that does not mean industry failed to influence scientific outcomes. Funding researchers who were already convinced that fat was responsible for heart disease is just another way of using industrial selection to shape the evidence available about the dietary causes of heart disease.

This is not a minor point. There is a real danger that if we misunderstand how industry influences scientific research, we will fail to recognize it when it happens. Industrial selection and selective sharing involve neither fraud nor individual corruption. But they can drastically alter the path of scientific discovery, how policymakers respond to it, and the lives of millions of people.

