An Overconfident Public Learns the Limits of Predictive Technology

In Chaotic Times, We Rely Too Much on Big Data to Forecast the Future

David Speer, a holistic health practitioner, poses with a crystal ball and tarot cards; he planned to offer psychic readings at the Group of 20 summit in Pittsburgh, July 1, 2009. Photo by Keith Srakocic/Associated Press.

It’s dark outside and you’re bleary-eyed. You search for your phone and it reads 3:17 a.m. Your mind starts to wander: Why does my boss want to meet with me tomorrow? Did I forget to change the diaper on my baby and will I soon be awoken by crying and a wet bed? Will that fun, flirty date turn into something real?

You then use your smartphone to look through emails about the meeting, or to cyber-stalk your love interest. You’re searching for information that might help you predict the future. Depending on what answers you find, you might roll back to sleep relieved, or you might feel more anxious and migrate to news about sports or stocks or war, and wonder what the future holds for them.

In the morning, this reliance on smartphones as counselors and conduits of information continues: weather maps, GPS advice, predictions of airplane ticket costs, and on and on. All of these apps are fancy front ends running on complicated predictive models that use a set of rules to process past and present information to give us little glimpses into the future.

But as someone who spends a lot of time working on predictive models of everything—from how climate change could affect the ways in which species interact, to why a whale sleeps much less than a mouse and is less likely to get cancer—I’m always curious about the level of confidence placed in these predictions. This is because the flip, less-intuitive side of our increased knowledge is that it can reveal how much we don’t know, point to how large our uncertainty is, and increase our anxiety.

Missing from many predictive models is certainty—or, perhaps more accurately, knowledge about their lack of certainty and how to use that in making decisions. The rare exception is the Weather app, which has uncertainty built into it: When the prediction is rain, it’ll tell you if the chance of rain is 10% or 90%, and that difference likely will affect how you dress for the day or whether you grab your umbrella.

But when your navigation system asks whether you want to follow a much more complicated route to save eight minutes on your morning commute, you’re not told if that estimate is being made with 10% certainty or 90%, depending on which lights you catch, how hard it is to turn left onto a busy street with no light, or whether other drivers divert to the same route. If the predicted gain in time is only put forward with 10% certainty, there might be a real possibility you’ll lose time following this new route. In my experience, the predicted eight-minute savings could actually mean I lose 15 minutes.
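
To make this concrete, here is a rough sketch of what an uncertainty-aware estimate might look like. None of this reflects how any real navigation system works; the delays for lights, left turns, and diverted drivers are numbers I made up to illustrate the idea that “eight minutes saved” is really a distribution of outcomes, not a single value.

    # A rough sketch, not any real navigation API: treat the "eight minutes saved"
    # as a distribution of outcomes, using made-up assumptions about the delays.
    import random

    def simulate_savings(n=10_000):
        savings = []
        for _ in range(n):
            point_estimate = 8.0                    # the app's single number, in minutes
            lights = random.gauss(0, 3)             # which lights you catch (assumed spread)
            left_turn = random.expovariate(1 / 2)   # waiting to turn left across traffic
            diversion = 6 if random.random() < 0.2 else 0  # other drivers divert to the same route
            savings.append(point_estimate - lights - left_turn - diversion)
        return savings

    savings = simulate_savings()
    chance_faster = sum(s > 0 for s in savings) / len(savings)
    median = sorted(savings)[len(savings) // 2]
    print(f"Chance the new route actually saves time: {chance_faster:.0%}")
    print(f"Median outcome: {median:.1f} minutes saved")

Even with invented numbers, the exercise shows what a single point estimate hides: sometimes the detour wins, and sometimes it costs you those 15 minutes.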

Of course, I live in the traffic jungle of Los Angeles, so maybe I’m giving the system too difficult a problem to solve. But in this case, and in countless others, I’d love to have the app give me the information I need in order to know how much confidence to put in its prediction. This isn’t how current consumer interfaces are set up, but maybe they should be. Estimates about uncertainty wouldn’t be perfect either, but they would give me a lot more information before I move across six lanes of traffic toward my exit.

One reason we seek predictive models and better technologies is that they promise to give us more control over the world and an increased feeling of security in uncertain futures. This desire for control and security is hardly a modern one. Our brains are encoded with all the fears and anxieties of generations, dating back to when early humans had to reckon with saber-toothed cats and woolly mammoths. We have always needed to anticipate the world in order to avoid being eaten, drowning, or falling off a cliff.

“Predict or perish” could be an apt maxim for eons of human history and Darwinian behavioral modification. Before the advent of modern science that eventually ushered in apps, GPS, and other digital Nostradamuses, we turned to astrologers, the divinations of shamans, and the formulations of fortune tellers who looked into crystal balls or read palms, chicken bones, or entrails.

In this light, Apple’s ad slogan, “practically magic,” illustrates how technology has replaced our previous sources for security, prediction, and belief. We now have better ways than ever to gauge the likelihood of both natural and man-made phenomena—from climate change to the odds of getting a table for four at 8 p.m. at our favorite restaurant.

Many of these technology-enabled models are now realized through machine learning and artificial intelligence that harness immense computing power and sift through big data to find general trends that predict the future. These approaches have yielded impressive results in some cases by using predictive algorithms to outfox human grandmasters at chess and Go. What most people don’t realize, however, is that many of our current predictive models produce probabilities of outcomes that primarily follow from betting on the continuation of past events. Thus, such algorithms work beautifully for board games, where all the rules are known and the data that need to be collected—moves and overall board positions—are very limited.
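
A toy example makes the point. The sketch below is my own, not any particular machine-learning system: it “predicts” tomorrow simply by extending the average change over the last few observations, which is a crude version of betting on the continuation of past events.

    # Toy sketch (mine, not a real ML system): predict the next value by
    # extending the average change over the last few observations.
    def trend_forecast(history, window=3):
        recent = history[-window:]
        avg_change = (recent[-1] - recent[0]) / (len(recent) - 1)
        return recent[-1] + avg_change

    river_level = [10, 11, 12, 13, 14]   # any quantity that has repeated itself nicely
    print(trend_forecast(river_level))   # 15.0, a fine bet while the past keeps repeating

    # The day a dam fails or unmeasured snowmelt arrives, the forecast is still 15.0,
    # because nothing in this history hints that the rules are about to change.

For chess and Go, the rules never change, so this style of bet can be refined endlessly. For floods, markets, and elections, the rules do change, and that is where the trouble starts.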

However, the chances of predicting whether my town will flood are bleak if I only have local records of previous floods and the heights of dams, but don’t, for instance, have any data on snowpack in the distant mountains that melts and feeds into the river, or patterns of rainfall in areas uphill or upstream. A potential problem with machine learning, big data, and other approaches is that even when we have lots of data, we may not have the right data or the right perspective, especially for systems that we don’t really understand. Big data and machine learning can be incredibly useful, but the point here is that I still need to be smart about which information and data I choose to include in the first place.

Please don’t think I’m suggesting we go back to the Stone Age or even the landline. Modern science gives us much better methods than our primitive forebears had for testing predictions and refining our ideas. But we are still susceptible to wanting more certainty than science and technology can really give us, and we’re very vulnerable to twisting science and technology to fit our own biased, overly confident brains.

In some cases, we are overconfident in our predictive powers, while in other circumstances, we overly denigrate our efforts at prediction. Living in Southern California, I find it easy to predict the weather on most days. Being proud of guessing that it’s going to be sunny with temperatures in the 70s to 80s is like being proud you bet the stock market would continue to go up, up, up in the late ’90s—because both predictions rely on a history of consistent trends with little variation. Large parts of machine learning and its limitations also owe some debt to this line of thinking, because performance is evaluated based on the ability to reproduce “known knowledge.”

There are other ways to try to predict the future that rely more on changing and increasing our understanding of nature or the economy or social interactions. Such an approach also can rely on big data, but it focuses on discovery of the mechanisms and causal relationships that are needed for certain outcomes. (For this sort of work, think of Einstein changing the understanding of how time relates to space, or Darwin changing how we view our origins in relation to other species, or Wegener seeing how the continents move relative to each other and fit together.)

These contrasting approaches raise a key question: Should we judge the certainty of a prediction simply based on how many consecutive times it proves to be accurate? Or should we base our assessment on evaluating different types of data, and determining what conditions need to exist for a particular outcome to happen at all? If a friend flips a quarter five times in a row and I correctly call five consecutive heads, was my prediction based on the history of the previous flips, meaning I simply got lucky (with high uncertainty)? Or did I know it was a two-headed quarter and therefore have no uncertainty at all? How would you know if you didn’t first check the coin?
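
To put numbers on the coin example: suppose that, before any flips, I’d give only a one-in-a-thousand chance that my friend carries a two-headed quarter (that prior is my own invented figure). Bayes’ rule then says that five straight heads nudge that belief upward, but nowhere near certainty.

    # Bayes' rule for the coin example; the 1-in-1,000 prior is an assumption of mine.
    prior_trick = 0.001                  # P(two-headed quarter) before any flips
    prior_fair = 1 - prior_trick         # P(ordinary quarter)

    p_five_heads_given_trick = 1.0 ** 5  # a two-headed coin always shows heads
    p_five_heads_given_fair = 0.5 ** 5   # five heads from a fair coin: 1 in 32

    posterior_trick = (p_five_heads_given_trick * prior_trick) / (
        p_five_heads_given_trick * prior_trick + p_five_heads_given_fair * prior_fair
    )
    print(f"P(two-headed | five heads) = {posterior_trick:.3f}")   # roughly 0.03

In other words, five correct calls still leave a roughly 97 percent chance that I simply got lucky with a fair coin. Checking the coin itself removes that uncertainty in a way no streak of successful predictions can.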

To extend this line of inquiry, let’s recall that prior to last November’s presidential election, the USC/L.A. Times election model correctly predicted a Trump victory, and Nate Silver’s 538 election model revealed high uncertainty in the outcome, suggesting that a Trump victory was not unlikely. These are two of the only predictive models that aligned reasonably well with the outcome of the election. The virtue of Silver’s 538 was that it included an appropriate amount of uncertainty. The virtue of the USC/L.A. Times election model was that, more than other major polls, it included different and arguably more informative data (though less total data), and a different set of assumptions about how to weight the diversity of the population.

What’s the conclusion? No amount of technology, data, or algorithms can overcome a fundamental lack of information that we either can’t get or never thought to ask for. We must embrace imperfection and understand uncertainty in order to better inform our decisions and to help guide us toward better models that require seeing the world in new ways. If we look closely, our predictive models can tell us as much about what we don’t know—and need to find out—as about what we do know. For our globally connected, internet-enabled, big-data-crunching species, Aristotle’s maxim, “The more you know, the more you know you don’t know,” holds as well now as it did in antiquity.

Van Savage is a professor at the UCLA Department of Ecology and Evolutionary Biology, the David Geffen School of Medicine Department of Biomathematics, and the UCLA Institute for Quantitative and Computational Biology. He is also director of the Computational and Systems Biology Inter-Departmental Program and an external professor at the Santa Fe Institute. He lives in Los Angeles with his wife, son, and dog.
Primary Editor: Reed Johnson. Secondary Editor: Lisa Margonelli.
