What Google Can’t Tell You

What Do Depth, Nuance, Context, and Balance Have in Common? You Can’t Find Them Through Search Engines


Back in 2012, we asked scholars of technology and knowledge about the ways search engines were changing how we made decisions, educated ourselves, and lived. Read what they had to say, and register to learn what, if anything, has changed at the upcoming Zócalo/Future Tense program “How Has Computer Code Shaped Humanity?” Join us live on Tuesday, January 31, at 7 p.m. in person in DTLA or online as we discuss the ways the digital world shapes us (and how we, in turn, shape it).

Depth, nuance, and the benefits of not knowing

Engines are the expression of the values of the engineers who make them. With search engines such as Google, those values favor speed over depth and popularity over nuance. Efficiency is the preeminent virtue, and despite the fact that algorithms are never neutral, we are relatively unquestioning in our acceptance of Google’s certainty. What Google returns in its search results is not necessarily the best information available, but what its proprietary algorithms have deemed appropriate. Google capitalizes on our curiosity, parceling out our attention to advertisers and profiting from our need to know something right now. Often we don’t even know what we are missing because Google doesn’t show it to us in the first place.

Search engines have ruined a few simple pleasures, such as the old-fashioned effort to rely on one’s own memory to retrieve a salient fact during a debate. And they have created new challenges, such as the inability ever to escape one’s past, which is suspended forever online in a kind of digital aspic. But what we most miss out on by relying so often on search engines are the very things that efficiently engineered technologies can’t provide: the power of not knowing some things. The extraordinary convenience of search engines makes it increasingly difficult to delay the instant gratification of an immediate answer and instead puzzle over a question and reflect upon it. Google and other search engines are very good at telling us who, what, and where in a fraction of a second. They are terrible at helping us tackle the most important question: Why?

Christine Rosen is senior editor of The New Atlantis: A Journal of Technology & Society.

————————————-

Context and interactivity

In thinking about what search engines miss, let’s consider how people ask questions of each other. People generally ask someone who they think is knowledgeable on the topic. A single question often becomes a series of questions, which allows the asker both to extract the needed information and to judge its value. Search, in its human form, is an interactive process, where one query leads to another related query, and both people involved understand the relationship. Those human interactions have context built into them, whether it is the context of who we are, or what we’ve been doing, or our relationship, or something else.

A search engine can try to mimic how humans query each other, but it is a highly inadequate one. A search engine cannot yet do interactive searches, like the interactive queries we do with each other, in which the information from one query becomes the background or context for the next. Search engines can know our location but cannot determine the entire context of our search requests, nor can they understand nuanced questions. In addition, a search engine does not work as a partner with us in determining the precise answer and understanding its value.
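To make the contrast concrete, here is a minimal sketch of what carrying context from one query to the next could look like. Everything in it, the tiny corpus, the term-overlap scoring, the session class itself, is a hypothetical illustration, not a description of how any real engine works.

```python
# Hypothetical sketch: a "search session" that folds each query into the
# context used to interpret the next one, as two people do in conversation.

class SearchSession:
    def __init__(self, corpus):
        self.corpus = corpus      # list of documents (plain strings)
        self.context = set()      # terms accumulated from earlier queries

    def search(self, query):
        terms = set(query.lower().split())
        # Earlier queries become background for this one.
        effective_terms = terms | self.context
        self.context |= terms
        # Rank documents by how many effective terms they contain.
        return sorted(
            self.corpus,
            key=lambda doc: len(effective_terms & set(doc.lower().split())),
            reverse=True,
        )

corpus = [
    "jazz clubs in new orleans",
    "history of jazz music",
    "new orleans restaurants",
]
session = SearchSession(corpus)
print(session.search("new orleans"))  # establishes context
print(session.search("music"))        # interpreted against the earlier query
```

In the second call, the bare word “music” is ranked against the background of the earlier “new orleans” query, which is the kind of follow-up a human listener handles without effort.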

Is search getting better? Yes. For years, research centers have worked on making search more natural and more human-like. The advances they are making are improving what we have available, but not fast enough. Right now, what we have available to us is still a poor imitation of what we are used to in human form.

Jill Hurst-Wahl is an associate professor of practice in Syracuse University’s School of Information Studies (iSchool) and director of the iSchool’s Library and Information Science Program.

————————————-

The right keyword

It’s an obvious point, but worth remembering: a search engine won’t turn up things that aren’t on the web. And what’s on the web may not include the best resource for your purpose.

For example, older editions and translations of literary texts are often freely available on the web because they are out of copyright, while more recent versions that represent the latest state of our knowledge are not online because they are still under copyright. As a result, libraries remain important repositories of materials, including many of recent origin.

Library catalogs offer a valuable complement to Internet search engines. Internet searching proceeds by keyword. Any word can function as a keyword, offering infinite flexibility in defining a search. Users who can rely on prior knowledge to search for narrowly defined names and terms will tend to get the best results. But a keyword search will miss relevant pages that use terms other than the ones you chose; in English, as in other languages, there are many words for similar things. Early printed indices warned users of this problem, as in this advice from 1586: “If things do not occur under one heading, look for them under a synonym–glory and honor, wealth and riches, guile and cunning and shrewdness, while they agree in reality, differ in terminology so that frequently things will escape the notice of the one studying them.”
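A toy illustration of that synonym problem: the exact-match search below misses a document about “riches” when the query says “wealth,” unless the query is expanded with a synonym list. The word lists and documents are invented for the example, not drawn from any real thesaurus or catalog.

```python
# Toy illustration of the synonym problem in keyword search.
# The synonym table and documents are made up for illustration.

SYNONYMS = {
    "wealth": {"riches"},
    "glory": {"honor"},
    "guile": {"cunning", "shrewdness"},
}

documents = [
    "a treatise on riches and their uses",
    "an essay on honor among soldiers",
]

def keyword_search(query, docs):
    """Return documents containing the query term exactly."""
    return [d for d in docs if query in d.split()]

def expanded_search(query, docs):
    """Also match any listed synonym of the query term."""
    terms = {query} | SYNONYMS.get(query, set())
    return [d for d in docs if terms & set(d.split())]

print(keyword_search("wealth", documents))   # [] -- the exact word never appears
print(expanded_search("wealth", documents))  # finds the treatise on riches
```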

By the late 19th century, a professionalized library science had devised an efficient solution in the form of a controlled vocabulary to be used in cataloging every item succinctly and consistently. Today a trained cataloger will assign appropriate Library of Congress subject headings to each book, and a crucial research tactic involves identifying the subject headings relevant to one’s search topic (e.g., by following up the subject headings assigned to a book one already knows to be of interest). The act of trained judgment involved in library cataloging is not replaced by Internet search engines, but offers a vital supplement to keyword searching and can easily be carried out for free in online library catalogs (e.g., at http://catalog.loc.gov/).
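One way to picture what a controlled vocabulary buys you, as a minimal sketch: variant terms a searcher might type are all mapped to a single canonical heading, so one lookup retrieves everything catalogued under it. The headings, titles, and mappings below are invented for illustration and are not actual Library of Congress data.

```python
# Hypothetical controlled-vocabulary catalog: every book is filed under one
# canonical subject heading, and variant search terms map to that heading.

AUTHORITY = {
    "cars": "Automobiles",
    "autos": "Automobiles",
    "automobiles": "Automobiles",
    "motor vehicles": "Automobiles",
}

CATALOG = {
    "Automobiles": [
        "The Automobile Age",
        "Highway History: Motor Vehicles in America",
    ],
}

def subject_search(term):
    """Map the searcher's term to its canonical heading, then look it up."""
    heading = AUTHORITY.get(term.lower())
    return CATALOG.get(heading, [])

# Whichever variant the searcher types, the same records come back.
print(subject_search("cars"))
print(subject_search("motor vehicles"))
```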

Ann Blair is a Harvard historian and author of Too Much to Know: Managing Scholarly Information Before the Modern Age.

————————————-

The other side of the story

Search engines have come a long way from their early roots. New features make searching more intuitive and natural. For example, search engines now respond to plain-language queries, so users can get direct answers to specific questions, such as “What is the population of the United States?”

Of course, some facts are more controversial than others. Searching for an answer to “How old is the Earth?” yields the predictable answer of 4.54 billion years, an answer that is not without its critics (just Google “young earth creationism”). In general, search engines give answers that reflect the wisdom of the crowd (this is especially true when the first search result is a link to Wikipedia). Unfortunately, the crowd is not always right, and, in general, popularity trumps accuracy on search engines. In almost every case, an inaccurate news report on CNN will get a higher search ranking than an accurate post by a local blogger.

Recently, some search engines have added personalization based on data such as past search history, location, and the user’s social network. This means that the search results for a music lover in New Orleans who searches for “jazz” may differ from those of a basketball fan in Salt Lake City. Personalized search gives users more relevant and accurate results, but critics, such as author and Internet activist Eli Pariser, argue that this creates search bias. The risk, he claims, is that these “filter bubbles” leave users unexposed to points of view other than their own.
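As a rough sketch of how such personalization could work (the result list, scores, profiles, and weighting below are all invented for illustration, not how any real engine ranks results): a global relevance score is blended with how well each result matches the user’s recorded interests, so the same query returns a different ordering for different people.

```python
# Hypothetical personalization: re-rank results by blending a global relevance
# score with a per-user affinity score. All numbers here are made up.

def personalized_rank(results, profile, weight=0.5):
    """results: list of (title, global_score, tags); profile: set of user interests."""
    def score(item):
        title, global_score, tags = item
        affinity = len(profile & tags) / max(len(tags), 1)
        return (1 - weight) * global_score + weight * affinity
    return sorted(results, key=score, reverse=True)

results = [
    ("Jazz clubs in New Orleans", 0.70, {"music", "new orleans"}),
    ("Utah Jazz game recap",      0.75, {"basketball", "salt lake city"}),
]

music_lover = {"music", "new orleans"}
hoops_fan   = {"basketball", "salt lake city"}

print(personalized_rank(results, music_lover)[0][0])  # jazz clubs first
print(personalized_rank(results, hoops_fan)[0][0])    # basketball recap first
```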

Is the problem the technology or the user? Technology shapes human behavior, but human behavior also shapes technology. Many people choose to filter out alternative points of view. Think of liberals who get their news from MSNBC and conservatives who are devoted to Fox News. People who learn how to use the Internet should also learn how to use it responsibly. The common refrain “Don’t believe everything you see on TV” clearly applies to the Internet as well. Ideally, critical thinking would be a key component of digital literacy. Search is only the first step in what should be a three-step process: “Search. Learn. Think.”

Search engines have become a crucial intermediary between people and the information they seek. But with this power comes responsibility. In addition to asking whether search engines should reflect the wisdom of the crowd or the preferences of the user, we can also ask whether they should help people access illegal content, or help governments engage in censorship.

These are thorny questions with no clear answers. Trust me … I just checked Google.

Daniel Castro is a senior analyst with the Information Technology and Innovation Foundation, a non-profit, non-partisan, public policy think tank.