In Part 1, I outlined the basic difference between counting and measuring and gave some examples of how data is not always objectively derived. Now I want to move the conversation to broader issues related to what can be called the “digital mindset.” In doing so, I focus on two key tenets of this mindset or worldview: first, that individual incidents are points of data that can be tallied and extrapolated to predict behavior or even intent; and second, that every piece of data is objectively determined, independent of the differences in human perception or attitude that might vary its interpretation according to context.

The past few decades have seen the spread of a social tendency to “get tough” on crime and other moral issues by identifying a “bad thing,” classifying it, and tallying it. The concept of mandatory sentencing goes back centuries (especially for murder), but in 1994 California introduced an innovation called the Three Strikes Law, and the federal government enacted a similar provision the same year. Under the federal version, a person convicted of a “serious violent felony” must be sentenced to life in prison, regardless of the judge’s view of the case, if he or she has two or more prior convictions in federal or state courts, at least one of which is a “serious violent felony.” The other prior offense may be a “serious drug offense.”

As described by former Attorney General Edwin Meese III in an article in the Federal Sentencing Reporter that year, the purpose of such a habitual offender law is “to protect against violent, repeat offenders who are being released into the community often after serving only a fraction of their sentences for previous crimes.” He took aim at parole boards rather than lenient judges by advocating an additional “sentencing reform” which would require a violent criminal to serve at least 85% of the sentence imposed by the judge.

Around the same time, the meme Zero Tolerance began taking root in school districts around the country, dovetailing with the spread of the new term “superpredator” during the Clinton Era. Zero Tolerance meant that a single incident of misbehavior (one weapon, one joint of marijuana, one beer, one kiss, etc.) could not be forgiven, though most state laws allowed school authorities to use discretion about expelling a student or meting out a lesser punishment.

I still recall a spontaneous kiss on the cheek from my classmate in 2nd or 3rd grade when we were waiting in line to enter the cafeteria. I suppose she thought I looked cute or calm. If she had done this in 1995 instead of 1960, she might have been expelled from school.

Both of these trends “get tough” by viewing bad behavior as an objective event that does not need interpretation by a human being who might temper judgment by weighing mitigating circumstances. If an infraction is committed, punishment must occur. This type of thinking predated the digital revolution, but the digital mindset has nourished it (or inflamed it, depending on your perspective).

Back in 1992, Nicholas Kristof wrote a short article about the Dang’an system of tracking behavior in China. Any good or bad deed is recorded in a file kept at a citizen’s school or workplace, with a copy at the local police station, and these records follow a person for life. Kristof blithely claimed that the system was “losing its effectiveness.”

Times have changed. Early this year, The Atlantic ran a longer article by Anna Mitchell and Larry Diamond about how big data and surveillance cameras have enabled China to expand the Dang’an system: tallying negative behavior such as jaywalking, deriving scores from the behavior of others in a person’s social network, and ultimately using this “information” to speed or slow an individual’s application for services such as a visa to travel abroad. Fortunately, there is no such system in the US beyond school transcripts, medical histories, and police records, and those three are not integrated the way they are in Dang’an.

However, the recent trend of internet shaming has a similar function of tracking and punishing past behavior. An accusation based on a single incident dredged up from long ago can cost a person his job or ruin her reputation as a professional. In a 2013 article for Wired, Laura Hudson wrote: “At its best, social media has given a voice to the disenfranchised, allowing them to bypass the gatekeepers of power and publicize injustices that might otherwise remain invisible. At its worst, it’s a weapon of mass reputation destruction, capable of amplifying slander, bullying, and casual idiocy on a scale never before possible.”

She pointed out that people can be disingenuous about consequences when they use social media to call out misbehavior, because shaming “isn’t simply saying something is ‘not cool’; it’s a request to have someone put in the digital stocks, where a potentially unlimited number of people can throw digital stones at them.” Hudson addressed the ephemeral nature of online interaction by reminding readers that physical distance fosters detachment. She criticized the digital mindset directly by stating that “our friends and followers are just abstract numbers on a social-media profile.”

The sci-fi TV series The Orville satirized this type of judgment-by-tallying-social-media-votes in episode 7 of season 1. A crew member who beams down to a planet as part of an investigating team makes lewd gestures toward a statue. His behavior is caught on camera and posted on the entire planet’s social network, generating huge numbers of “thumbs down” votes. The crew member will soon be executed if the thumbs-down votes continue to outnumber the thumbs-up votes on the planet’s equivalent of YouTube.

At this point, I’d like to step back and ask, “What does it mean to say that we know someone?” Or more specifically, “How do we know that a given incident reveals a person to be dangerous?” Nowadays, it is common to hear college students insist on “safe spaces,” expressing this need in a context that equates discomfort with threat. I dare to suggest that many people would feel discomfort inviting someone of a different race, ethnicity, religion, political ideology, or sexual orientation to dinner in their home. Yet such an uncomfortable social occasion would be an opportunity for growth because an atmosphere of mutual respect leads us to transcend our tribal identities and rediscover our common humanity.

In his play The Rock, published in 1934, T.S. Eliot asked, “Where is the wisdom we have lost in knowledge? / Where is the knowledge we have lost in information?” The field of information technology (IT) later adopted the Data-Information-Knowledge-Wisdom (DIKW) hierarchy as a paradigm. Data is a simple fact, such as “It is 70°F at 9 AM.” Information refers to a more general pattern, such as “Temperature rises as the sun gets higher.” Knowledge includes some analysis beyond simple observation of patterns, for example, “Due to thermal lag, peak temperature occurs around 2 or 3 PM on a sunny day.” Wisdom adds the element of experience or intuition, enabling us to make prudent decisions: “I don’t go for long walks on sunny afternoons without wearing sunscreen.”
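
To make the hierarchy concrete, here is a minimal sketch in Python built around the temperature example above; the function names, readings, and thresholds are my own illustration, not part of any formal DIKW specification.

```python
# A toy illustration of the DIKW hierarchy using the temperature example.
# All names, readings, and thresholds are illustrative assumptions.

# Data: raw observations (hour of day, temperature in F).
readings = [(9, 70), (11, 78), (13, 84), (15, 85), (17, 80)]

# Information: a pattern extracted from the data.
def warming_through_morning(readings):
    """True if temperature rises from morning into early afternoon."""
    morning = [temp for hour, temp in readings if hour <= 13]
    return all(a <= b for a, b in zip(morning, morning[1:]))

# Knowledge: analysis beyond the raw pattern, e.g. the peak lags solar noon.
def peak_hour(readings):
    """Hour with the highest recorded temperature."""
    return max(readings, key=lambda r: r[1])[0]

# Wisdom: a prudent decision informed by experience -- something the data
# alone cannot supply, which is precisely the essay's point.
if warming_through_morning(readings) and peak_hour(readings) >= 14:
    print("Skip the long afternoon walk, or wear sunscreen.")
```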

There is an increasing tendency in society to rely on data as indicative of a large and permanent pattern: one incident of jaywalking means you habitually disobey traffic regulations; therefore, that piece of data should be entered into your Dang’an as evidence of your disobedient, antisocial behavior, inevitably leading to negative consequences for you. This is the core of the digital mindset, which has now gone far beyond a Three Strikes Law aimed at people who already had a record of violent crime.
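
To show how little machinery this mindset requires, here is a deliberately simplistic sketch of a tally-based scoring rule; the incident categories, penalty weights, and threshold are all invented for illustration and do not describe any real system.

```python
# A caricature of tally-based scoring: every incident is a data point, and
# every data point is treated as proof of a permanent trait. The categories,
# weights, and threshold below are invented for illustration only.

PENALTIES = {"jaywalking": 5, "late_utility_bill": 10, "critical_post": 20}

def score(incidents, base=100):
    """Subtract a fixed penalty per recorded incident: no context, no appeal,
    and nothing ever expires -- the file simply accumulates."""
    return base - sum(PENALTIES.get(i, 0) for i in incidents)

def visa_processing(incidents):
    # A single number decides whether an application is sped up or slowed down.
    return "expedited" if score(incidents) >= 100 else "delayed"

print(visa_processing([]))               # expedited
print(visa_processing(["jaywalking"]))   # delayed -- one incident, lasting consequence
```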

Another tendency is to over-emphasize explicit knowledge over tacit knowledge, a distinction drawn more than half a century ago by Michael Polanyi in his book The Tacit Dimension and later developed into the concept of intangible assets by Karl Sveiby. Explicit knowledge can be explained easily in words or diagrams: a recipe for baking bread, say, or a map of the UK that shows where London is located. Tacit knowledge may be conscious, but it is not easily explained in words or images; think of describing how to tie shoelaces.

When you draw a map or give directions to your house, you are converting your tacit knowledge into explicit knowledge that can be used directly (and, one hopes, accurately) by the recipient. In 1995, Ikujiro Nonaka and Hirotaka Takeuchi published The Knowledge-Creating Company, in which they presented their SECI model of conversion: Socialization (tacit to tacit), Externalization (tacit to explicit), Combination (explicit to explicit), and Internalization (explicit to tacit). They recounted a striking example of the externalization of tacit knowledge: the Japanese manufacturer of a household bread-making machine could not work out through analysis why its bread didn’t taste very good. To solve the problem, its technicians observed human bakers at work and thereby glimpsed their tacit knowledge of kneading, which the company applied when redesigning the machine.

More generally, the development of robots depends heavily on the ability of their inventors to convert tacit knowledge into explicit knowledge that can be programmed into a machine. A robot that can fold cardboard into boxes or screw bolts into a car is one thing; developing AI that can play chess is more difficult; programming a robot to carry on a natural conversation is more difficult still. Crowdsourcing is another way that tacit knowledge becomes explicit. Contrary to what many people believe, Google Translate doesn’t actually translate; instead, it sifts the web for equivalences that occur frequently and suggests them as translations of words or phrases.
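
As a rough sketch of the corpus-driven lookup described above, here is a toy phrase table built from paired example sentences. This is a simplification of older statistical, phrase-based translation (the production system has since moved to neural models), and the corpus, names, and function are invented for illustration.

```python
# A toy sketch of "translation by frequent equivalence": count how often
# phrase pairs co-occur in a tiny parallel corpus and suggest the most
# frequent target phrase. The corpus and names are invented for illustration;
# real systems are vastly larger and, today, neural.
from collections import Counter, defaultdict

parallel_corpus = [
    ("good morning", "buenos días"),
    ("good morning", "buenos días"),
    ("good morning", "buen día"),
    ("thank you", "gracias"),
]

phrase_table = defaultdict(Counter)
for source, target in parallel_corpus:
    phrase_table[source][target] += 1

def suggest(phrase):
    """Return the most frequently paired target phrase, if any."""
    candidates = phrase_table.get(phrase)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("good morning"))  # "buenos días" -- the commonest pairing, not an act of understanding
```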

The interplay of tacit knowledge and explicit knowledge occurs in human learning as well as machine learning. In Front Porch Republic a few months ago, Matt Stewart asked: “Does any educator not see that the logic of many trends in education suggests that the whole process would really be much easier if we could just go ahead and automate both the teachers and the students?”

I’m not sure what he meant by automating students, but the juggernaut of using standardized tests to prove that a teacher did her or his job does seem to be pushing us toward automated teachers. After all, if the teacher’s role is simply to impart a set of lesson plans that follow a fixed curriculum, a robot could fulfill the same function. A human teacher ideally will inspire and challenge his or her students, but if the school board doesn’t value those skills, then why have a human teacher in the classroom at all?

The method of “programmed learning” was developed in the 1960s. I was a guinea pig in a small test run at a local college when I was in high school. I sat in front of a screen and read a paragraph of information, then answered a small set of questions. If I got all the answers correct, the next screen in sequence would be displayed. In this way, I was led to more and more complex topics within the subject.

Users of Khan Academy will recognize this method, which nowadays relies at least as much on graphics as on plain text, and offers recorded lectures in various languages, too. My own kids found Khan Academy boring, though, because they didn’t accept the axiom that you must get every answer in a set correct to prove you have learned the material before you are permitted to move on to the next topic. Again, we see the reliance on a single piece of data as definitive: one mistake means you didn’t comprehend the material.
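
The gatekeeping logic described above, where a learner advances only if every answer in the set is correct, fits in a few lines. This sketch is my own, with invented units and questions, and is meant only to show how a single wrong answer blocks progress.

```python
# A minimal sketch of the "programmed learning" gate: the learner advances to
# the next unit only if every answer in the current set is correct.
# The units and questions below are invented for illustration.

units = [
    {"text": "Unit 1: ...", "quiz": [("2 + 2", "4"), ("3 * 3", "9")]},
    {"text": "Unit 2: ...", "quiz": [("10 / 2", "5")]},
]

def passed(quiz, answers):
    """One mistake holds the learner back: no partial credit, no judgment call."""
    return all(answers.get(question) == expected for question, expected in quiz)

def next_unit(current, answers):
    return current + 1 if passed(units[current]["quiz"], answers) else current

print(next_unit(0, {"2 + 2": "4", "3 * 3": "9"}))  # 1 -- all correct, advance
print(next_unit(0, {"2 + 2": "4", "3 * 3": "6"}))  # 0 -- one error, stay put
```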

The digital mindset, combined with business analytics that calculate profit and loss task by task, will often lead humans to behave robotically. Many of us have used a bank’s online messaging to get help. In my experience, a large fraction of the replies are long-winded, padding a simple answer with extraneous information, routine gratitude, or an advertising pitch (or all three) that has been cut and pasted. This tends to happen when customer service people are paid by the number of queries they answer, with some minimum length required per answer. If quantity is the goal, a robot would be cheaper, as Karel Čapek noted when he coined the term “robot” in his 1920 play R.U.R.

Sports are no exception to the 21st-century obsession with explicit knowledge based on past data. Methods of analysis such as sabermetrics in baseball have not only provided grist for announcers but also produced a style of micromanagement that has made games longer and more boring. However, boredom is nothing compared to being mistakenly identified as a criminal. The Welsh police discovered that 92% of the matches flagged by their facial recognition software were false positives when they used it to scan roughly 170,000 people for potential criminals at a football (soccer) match in June 2017. As the headline of the article reporting the story noted, this kind of gaffe tends to occur when AI is rushed to market to beat the competition. One problem is that algorithms may reflect the tacit biases and assumptions of the programmers.
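
A back-of-the-envelope calculation shows how such numbers arise even when the per-face error rates sound modest, simply because genuine suspects are a tiny fraction of a huge crowd. The figures below are illustrative assumptions, not the Welsh force’s actual statistics.

```python
# Illustrative base-rate arithmetic: why most alerts at a large event can be
# false even with seemingly modest error rates. All figures below are
# assumptions for illustration, not the actual Welsh police statistics.

crowd = 170_000             # people scanned at the match
wanted_in_crowd = 50        # assumed number actually on the watch list
true_positive_rate = 0.90   # assumed chance a wanted person triggers an alert
false_positive_rate = 0.01  # assumed chance an innocent person triggers an alert

true_alerts = wanted_in_crowd * true_positive_rate
false_alerts = (crowd - wanted_in_crowd) * false_positive_rate
share_false = false_alerts / (true_alerts + false_alerts)

print(f"alerts: {true_alerts + false_alerts:.0f}, of which {share_false:.0%} are false")
# About 1,700 alerts, roughly 97% of them false: the base rate, not malice, does the damage.
```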

Returning to the matter of education raised by Matt Stewart, it’s one of history’s sad ironies that standardized testing was once a progressive method that opened the door of equal opportunity. In many places, objective tests provided a way of judging talent that was independent of subjective factors such as a student’s grooming, politeness, docility, or pedigree. As explained by Nicholas Lemann in his book The Big Test, the SAT was originally promoted by various universities as a way to find students who came from diverse parts of the US rather than from established prep schools. Yet within a half-century, this innovation, as well as the various aptitude and achievement tests it spawned, devolved into “teaching to the test.”

I’d like to tie together the numerous threads of this short essay by suggesting that the first step beyond the impasse (or up from the abyss) is to restore institutions and practices that encourage us to trust each other as fellow humans. Humane cultures and politics are eroded when we reduce a person to a set of data points and then make decisions based on digital analysis of this data. Restoring human trust will involve not only reviving and nourishing traditions (a goal of the right), but also embracing diversity to broaden viewpoints (a goal of the left). Mutual respect that openly and honestly addresses “What makes us different? What makes us similar?” in ordinary encounters is one approach that might prove useful, even though it will make people uncomfortable sometimes.

In closing, I’ll quote John Taylor Gatto, who resigned from teaching in public schools in 1991 shortly after being named New York State Teacher of the Year. He became a sharp critic of managed education, even of the notion of compulsory education. In The Underground History of American Education, which is available free online via the Wayback Machine internet archive, he wrote:

Spiritually contented people are dangerous for a variety of reasons. They don’t make reliable servants because they won’t jump at every command. They test what is requested against a code of moral principle. Those who are spiritually secure can’t easily be driven to sacrifice family relations. Corporate and financial capitalism are hardly possible on any massive scale once a population finds its spiritual center.

Click here to read Part 3

Recommended reading:

Is the presence of someone’s DNA proof that they were at the site of a crime? Not always. Read about an innocent beggar whose DNA ended up at a murder scene: the perpetrator had given him spare change before the crime, and the beggar’s DNA rubbed off on the perp and was later found at the site.

“Why Most Published Research Findings Are False” by John Ioannidis. This landmark article analyzes the occurrence of false positives and false negatives in scientific studies. The statistics are presented simply, with helpful diagrams.

“Three Felonies a Day” by Harvey Silverglate, a civil liberties lawyer who cautions that the proliferation of laws means that an ordinary person might commit three felonies a day without realizing it.

“The Coddling of the American Mind” by Greg Lukianoff and Jonathan Haidt. This 2015 article explores trends among college students, such as insisting on trigger warnings, rejecting the invitation of speakers who disagree with their beliefs, and otherwise denying free inquiry at a modern university.

Is money the measure of all things? If so, why don’t economists consider the work of a housewife (or househusband) when calculating a country’s economic wealth? In this paper published in Ecological Economics, several economists propose a new index for estimating a country’s welfare, going beyond GDP.
