Paradigms of Math and Non-quantifiable Values


Lexington, VA. We quantify things continuously as we move through the everyday activities of our lives, from checking the number of pennies in our bank account to measuring the ingredients for our latest culinary endeavor. Measuring, ordering (sequencing), adding, and manipulating figures have become so commonplace that we rarely stop to think about it. We think instead about the process itself–how to accomplish a particular arithmetic task. Or we think about how to teach ourselves or our children or our community how to accomplish such tasks. But the metaphysics of math are rarely considered or discussed. Nevertheless, this question of the purpose and place of mathematics, as an idea, has a profound effect on how we see the world around us. Even the question “What is mathematics?” has deep importance to our lives. It shapes how we make decisions, what we decide to do, even what we decide to question and what we decide to take on faith. Our paradigms of mathematics have huge implications for those subjects held dear by Porchers.

In his engaging and approachable book Pi in the Sky, John D. Barrow discusses the shifting paradigms of mathematics over time, among other thought-provoking aspects of math and its history. During the days of antiquity when humanity was exploring the world around us—the physical relationships between things—and starting to build higher and longer structures, geometry was a focus for many of the greatest minds. Euclid created an entire branch of geometry that still bears his name. Pythagoras (or perhaps an anonymous member of an adjacent civilization) impressed the Greeks with the theorem that still bears his name. Throughout the world, mathematics could be drawn, seen, used in construction or navigation, created and recreated.

During the age of reason, math became seen as logic itself. Men like Bertrand Russell were convinced that if they could only find the appropriate symbols and the right set of axioms (assumptions), then they could prove the entirety of human knowledge. With one elucidating stroke of a pen, mankind would have nothing to argue about because reason, in the form of mathematical proof, would preclude disagreement. Once Gödel proved that impossible, a new paradigm was needed.

Barrow argues that our modern paradigm of mathematics is that of the computer age. Math is an algorithm that, with the correct inputs, can calculate anything. If we can define it, we can quantify it, which means we “know” it. Then we can compute from it, which means we “understand” it. Then, through various advanced techniques, we can optimize it, whatever “it” is. Finally, and perhaps most insidiously, as long as the figures compute without generating an error, then the output is “true” and taken as fact.

The proof, as they say, is in the pudding. How many people, faithful servants of their electronic devices and feeding said device and its algorithms all of the data a person can generate, have been optimized? How many of us see advertisements targeted to what we “must” want, or have had the vectors of connection recommend so many “friends” for us to connect with that we feel totally disconnected? If this fail-proof science has the keys to optimization, why do so many data-driven people feel less than whole?

Part of the answer lies in the nature of optimization. To optimize implies a qualitative factor, for to optimize is to make best. But how can we know what is best without knowing what is good? Mathematics cannot be so divorced from philosophy and still expect to serve a whole person. Pardon my digression, however. This is not an article about philosophy but rather about math.  

In all of the cultural critiques and worries we see, there seems to be little questioning of this modern notion of mathematics–that mathematics is an optimizer, a better-maker, an elucidator of truth. But this paradigm falls short.

To better illustrate this, let me divert briefly into a particular field of interest: Decision Analysis. Decision Analysis, or DA, is one of the ways that we try to use mathematics to ensure that we are always making the “best” decision. The example that was my first introduction to the topic years ago proceeded thus: 

Imagine that you want to go eat at a restaurant, and there are three options: McDonald’s, Chili’s, and Morton’s Steakhouse. Which do you choose? However you choose, you probably weigh a number of factors, such as price and quality of food.

To work this into a DA model, let’s say that we can rate each restaurant on each of those attributes, on a scale of 1 to 5. So maybe McDonald’s would score a 5 on price while Morton’s scored a 1, and McDonald’s would score a 1 on food quality while Morton’s scored a 5. Maybe Chili’s was a 3 on both.

The next question becomes how important each attribute is to your decision. If you are very price conscious but don’t care what you eat, you may rate price as a “5 important” and food only a “1 important.” Or if you are about to propose to your significant other, the ratings may be reversed.  

Now, by multiplying each attribute’s score by its importance to you and adding the results together, each restaurant gets a single number that can be compared. The highest number is your best choice.
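As a sketch, the weighted-sum scoring described above can be written in a few lines of Python. The ratings and weights below are the hypothetical ones from the restaurant example (a price-conscious diner who cares little about food quality):

```python
# Weighted-sum decision analysis: each option's total is the sum of
# (attribute score x attribute importance); the highest total "wins."
ratings = {
    "McDonald's": {"price": 5, "quality": 1},
    "Chili's":    {"price": 3, "quality": 3},
    "Morton's":   {"price": 1, "quality": 5},
}

# Importance weights for a price-conscious diner.
weights = {"price": 5, "quality": 1}

def score(option):
    """Sum each attribute's score times its importance weight."""
    return sum(weights[attr] * val for attr, val in ratings[option].items())

totals = {name: score(name) for name in ratings}
best = max(totals, key=totals.get)
# totals -> {"McDonald's": 26, "Chili's": 18, "Morton's": 10}
# best   -> "McDonald's"
```

Note how completely the weights drive the outcome: swap the two weight values (the about-to-propose diner) and Morton’s wins instead, which is exactly the point about whoever sets the numbers deciding what counts.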

This is a pretty straightforward idea, and I use this process to think through comparisons and all manner of difficult decisions, from buying a car to planning a vacation. However, human beings are complex, and people regularly make decisions that are not supported by “the numbers.” Some call this “irrational decision making,” which is its own subset of economic theory.

Yet as most of us can probably recognize, and have recognized if we’ve ever been through a similar exercise, the numbers just don’t quite capture everything that our intuition does. If you are asked, “How much money would it cost me to buy your child?” and you answer, “I wouldn’t sell my child for any amount of money,” you have just found a scenario where two qualitatively different things cannot be put into the same equation and still make sense. Math can’t calculate love.

And here is the problem with math as an optimization paradigm. The human mind can recognize and process so much more information, so much faster, and draw comparisons between so many different ideas, that there is no possible way to ever define, quantify, measure, and analyze every decision that is made. Even if we restrict the endeavor to big decisions, the breadth of attributes one would need to try to identify is staggering. And once you miss one, the entire equation no longer accurately represents the process going through the mind of the decision maker.

As Porchers, we understand this perhaps more viscerally than many, because we also have a host of attributes that we hold dear that are rarely quantified and hence not considered in the optimized decisions of the computer-run world.

A mundane example may have been encountered by many employees: metrics of performance. Whether our workplace is measuring efficiency of production to enhance profit or effectiveness of teaching to seek accreditation, it is trying to reduce reality into a set of quantifiable attributes. There are multiple purposes for this goal that are not inherently bad. It helps reduce ambiguity and encourages everyone to work toward the same goals. It helps reduce bias or favoritism and creates a more transparent process for scrutiny and fairness. It produces numbers that can be fed into an algorithm to help understand what to do next or where to improve. But how many who have encountered such metrics have understood the usefulness of the goal while also feeling deeply that some aspect of the measurement did not line up with their view of reality? How do you quantify the value of a relationship in the decision to hire or fire someone? How do you quantify an individual’s soft attributes over their hard credentials when deciding if they are the best fit for a team? Jeff Tabone wrote an excellent review of The Tyranny of Metrics here that discusses many of the ways this line of thinking has derailed organizations from their mission.

The current pandemic and most of its policies and effects provide a more urgent example of the shortcomings of this mathematical paradigm. For decades businesses, and especially those dependent on supply chains, have become more and more global. The reasoning is that bigger is better. Size creates economies of scale that allow things to be made more cheaply. This in turn allows both greater distribution and greater returns to someone. The numbers make the choice very clear and very factual: global supply chains are optimal because they make more money (for someone).

These policies’ effects on communities have been discussed long before the pandemic, and understood by some. What the pandemic brought to light was the fragility of this type of system. How did we factor that fragility into the decision to globalize? How did we weight the self-sufficiency of a community to produce its own food or medicine or essentials? How did we quantify the risk of having to dump milk or slaughter pigs because of a breakdown in the supply chain? How did that factor into the optimization algorithm? The sad truth is that it probably didn’t.

This paradigm of mathematics held by our modern world has led to a neglect of non-quantifiable inputs as important factors in decision-making. To make matters worse, though, it has simultaneously led to an over-emphasis on quantifiable inputs. The language of the campaign trail makes this plain as pollsters, eager to understand the political landscape for their customers, rigorously dissect human beings into buckets of attributes. Martin Schell makes an observation in this article that in order to count people according to said buckets, the process necessarily ignores other characteristics that are the basis for individualism, uniqueness, and diversity. These are replaced by a false “sameness” out of necessity for comparison. The effect is that whoever writes the algorithm or determines the categories of comparison decides which aspects of a person (or anything) are important and which are not.

The most insidious part of this paradigm is not, however, the quantifiables or optimization. It is the belief that numbers that are arrived at correctly are also True, with a capital T. Such a belief is a category error, confusing Truth with logical validity. The view of mathematics that I’m discussing underpins not only our approach to social networks and advertising and business, but also our approach to science and to everything to which we apply science.

The conversations around everything COVID have illuminated this painfully. I would be surprised if anyone is unaware of some statistic regarding COVID cases per week, or deaths from the disease, or some other metric used to communicate its impacts. Looking at a news article that informs us of 10,000 new cases per week in some area is certainly sobering and important. Some articles and researchers, though, have suggested that these may not be the correct metrics, or that we should at least include other metrics to paint a more complete picture. Perhaps we should consider hospitalizations, or hospitalizations per incidence, or extrapolate likely cases beyond tallies of positive tests. The metrics we focus on depend on how we answer questions such as, “Which is preferable, 10,000 new cases per week with 100 hospitalizations, or 1,000 new cases per week with 900 hospitalizations?”

This exercise could continue down the rabbit hole of details. Which is better, 900 hospitalizations with 1 death or 100 hospitalizations with 99 deaths? What about life-long lingering effects? What about the opportunity cost of people not getting medical treatment they need because hospitals are full? Numbers certainly tell a story and illuminate a truth that is difficult to see or comprehend or communicate otherwise, but the totality of reality at the place of action can rarely be understood completely by looking only at numbers, however useful they may be. Tabone’s review rightly acknowledges both the usefulness and dangers of metrics, cautioning against giving them outsize influence, but stops short of calling into question the way we approach the paradigm itself.

I don’t pretend to have any answers about COVID itself, but the fact that there are so many statistics with so little understanding of the nuance behind them and the meaning of any given metric worries me. I do not mean to question the legitimacy of the data. Within expected error rates I think that the data we are seeing are factual. But what lies beyond what those numbers are communicating? And how do those unquantifiable priorities and shades of understanding affect our decisions and reactions? Different understandings of these qualitatively dissimilar realities–gained by different experiences and observations of the immediate world around us–feed much of the vitriol of the conversation.

With more data and more computing power than we have ever had, and with investment in those spaces continuing unabated, the dominant lens through which our world views mathematics is undeniable. Yet as we careen down this path, we feel a dearth of important and weighty things in our life–community, relationship, connectedness, and alignment on how to approach the greatest tragedies of our day. Perhaps it is time to take a hard look at what we measure and calculate, what we optimize, what we expect from our math, and even in what regard we hold numbers.

3 COMMENTS

  1. Some years ago, I wrote a review for FPR of a new book on history, in which I made exactly the point you’re making here, except that you did it better: demanding that everything be quantifiable leads to any number of tail-wags-dog scenarios. Being in the university world myself, I could add any number of hilarious and not-so-hilarious examples of the dynamics you describe, as regards “accreditation,” whose basic logic is to quantify what is mostly unquantifiable.

    Your point here also puts me in mind of Thomas Kuhn’s thesis about how scientific method universalizes itself by pretending that the only questions worth asking are ones susceptible to being answered with that method.

    Bravo,

    Aaron

  2. “Once Gödel proved that impossible, a new paradigm was needed.

    Barrow argues that our modern paradigm of mathematics is that of the computer age. Math is an algorithm that, with the correct inputs, can calculate anything.”

    I haven’t read Barrow’s book, but in point of fact, shortly after Gödel’s result on “impossibility,” Turing and Church extended the result to computer algorithms.

    Your essay makes provocative points and I concede that “decision science” and “data science” have been misappropriated with results that range from the annoying to the deadly. Still, can we really blame rigorous math for changes to public policy? In my view, we owe these changes to the far-less rigorous theories of economists, social scientists, and public-health wonks, for which we as a society have become the lab rats.

  3. David, thanks for linking to part 1 of my FPR series on Digital Thinking.

    It’s true that math historically was rooted in geometry. This video about the invention of imaginary numbers explains that there was no standardized terminology for algebra, so it was done via diagrams and verbal descriptions:
    https://www.youtube.com/watch?v=cUzklzVXJwo

    OTOH, geometry led to some intriguing insights into how our human brains match our universe. For example, there are only 5 possible Platonic Solids. And if you sit on a beach and slowly pour out sand, gravity will cause the pile to form the shape of a cone. It’s not an opinion, it’s a fact.

    So, the problem we are trying to address is the *abuse* of math by people who create fake quantities based on subjective assessments, such as your example of decision analysis (DA) for choosing a restaurant, which expands on the TripAdvisor type of ratings noted by Jeff Tabone in his review of Muller’s book.

    Quantification has been rampant since the laissez faire of the Reagan era, but it can be traced back to liberal philosopher Jeremy Bentham two centuries ago, who believed “the greatest good for the greatest number” can be reduced to counting, especially money — the digital mindset I noted in part 3 of my FPR series.

    It’s also true that math education begins with calculation (arithmetic) and maintains that focus for 12 years of schooling and beyond. Less numerical aspects of math such as topology are minimized.

    However, “How much money is a human being worth?” is not meaningless. If an accident occurs and someone is killed, the insurance company evaluates future earnings and compensates the family to support its financial well-being. Nobody likes to think in these terms, but if your family is devastated by an emotional loss, you don’t want financial hardship to pile on top of it, do you?

    As you astutely point out, risk analysis became a major feature of debate (and obfuscation) in discussions about the endless pandemic. Few people admitted that the data was rooted in GIGO (garbage in, garbage out): stats on people who died *from* Covid were conflated with those who died *with* Covid (often because they tested positive when admitted to a hospital for some other ailment).

    Further, the tests done in the first half of 2020 (and in other parts of the world through the end of 2021) were often antibody or antigen tests, with their results commingled with PCR test results to yield totals that were presumed consistent, despite the difference in accuracy among test types.

    IMO the best part of your article is the superb challenge to the whole notion of optimization. Perhaps you could have explained alternative factors in a bit more detail than the pointed questions:

    “How do you quantify the value of a relationship in the decision to hire or fire someone? How do you quantify an individual’s soft attributes over their hard credentials when deciding if they are the best fit for a team?”

    Seniority and loyalty are soft factors which some try to quantify in terms of more years on the job or fewer days of absence. Pushing out middle-aged employees whose salaries have risen due to cost-of-living increases seems to be a “clever” form of cost optimization that has affected several of my Indonesian neighbors here in Central Java, far from the financial centers of Tokyo, London, New York.

    One researcher who works to prove the value of intangible factors is Karl-Erik Sveiby, who began his career as an auditor in Sweden and wondered: “Why do companies classify the purchase of equipment as an investment, but the money spent on training employees is classified only as an expense?”
    https://www.sveiby.com/newsitem/Measuring-Intangibles-Suggested-Indicators-with-Cases

    Lots to think about here. I suppose the first step is to question our own patterns of decision analysis.

