To Quantify the Complexity of a Book

by Mark T. Mitchell on October 30, 2013


There’s a proprietary measure called the Lexile that purports to quantify the complexity of books and thereby determine the grade level for which they are best suited. Here’s the description:

Lexiles were developed in the 1980s by Malbert Smith and A. Jackson Stenner, the President and CEO of the MetaMetrics corporation, who decided that education, unlike science, lacked “what philosophers of science call unification of measurement,” and aimed to demonstrate that “common scales, like Fahrenheit and Celsius, could be built for reading.” Their final product is a proprietary algorithm that analyzes sentence length and vocabulary to assign a “Lexile” score from 0 to over 1,600 for the most complex texts. And now the new Common Core State Standards, the U.S. education initiative that aims to standardize school curricula, have adopted Lexiles to determine what books are appropriate for students in each grade level.
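The actual Lexile formula is proprietary, but the description above names its two inputs: sentence length and vocabulary. A minimal sketch of that kind of metric, with arbitrary illustrative weights and mean word length standing in as a crude vocabulary-difficulty proxy (none of this reflects MetaMetrics' real algorithm), might look like:

```python
import re

def toy_complexity_score(text):
    """Toy readability score in the spirit of, but NOT equal to, a Lexile.

    Combines mean sentence length with mean word length as a crude
    vocabulary proxy. The weights are arbitrary constants chosen only
    to land in a Lexile-like 0-1600 range.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    mean_sentence_len = len(words) / len(sentences)   # words per sentence
    mean_word_len = sum(len(w) for w in words) / len(words)
    return 60 * mean_sentence_len + 120 * mean_word_len

simple = "See Spot run. Spot runs fast."
dense = ("Notwithstanding the proprietary character of the algorithm, "
         "quantitative measures inevitably privilege syntactic length "
         "over semantic profundity.")
print(toy_complexity_score(simple) < toy_complexity_score(dense))  # True
```

Even this toy version exhibits the failure mode the article goes on to describe: short, plain sentences score low no matter how much meaning they carry, which is exactly how Vonnegut and Carver end up rated below a children's book.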

Not surprisingly, the outcomes of this project have been less than satisfactory.

But missing from this debate is the question of whether the idea of the Lexile makes sense at all. When Huckleberry Finn isn’t complex enough for our high-school students, I can’t help wondering if we need to change the way we conceptualize literary complexity. I’m an English professor, and I live in Iowa City, a UNESCO World City of Literature, but according to MetaMetrics my bookish hometown might as well go play patty cake. On my way to work I pass the house on Van Buren Street where Kurt Vonnegut began Slaughterhouse-Five—but with a score of only 870, this book is only a fourth-grade read. By these standards Mr. Popper’s Penguins (weighing in at a respectable 910) is deemed more complex.

I also pass St. Mary’s Catholic Church, where Flannery O’Connor sought grace but failed to find the vocabulary needed to push her Collected Stories above the sixth-grade level. I arrive at my office in the same grim building that motivated Raymond Carver to abscond and hold his classes at the nearest bar; his Cathedral scores a puny 590, about the same as Curious George Gets a Medal.

Even the folks defending this method of evaluating books admit it is not perfect. Yet listen to the ultimate hope:

To be fair, both the creators of the Common Core and MetaMetrics admit these standards can’t stand as the final measure of complexity. As the Common Core Standards Initiative officially puts it, “until widely available quantitative tools can better account for factors recognized as making such texts challenging, including multiple levels of meaning and mature themes, preference should likely be given to qualitative measures of text complexity when evaluating narrative fiction intended for students in grade 6 and above.” (emphasis added)

In other words, just give us time. We’ll come up with a more sophisticated means of quantifying works of literature, so that those designing curricula will not even need to read the books included in them. All they will need to do is arrange books according to a Lexile number and all will be well. Judgment is old-fashioned, especially when compared to the certainty of numbers.

Comments

Siarlys Jenkins, October 31, 2013 at 11:27 pm

The idea of assigning books to designated grade levels is ludicrous. Children in any grade level read at markedly different speeds, with different levels and forms of comprehension, and have quite individual tastes, which in turn inspire greater efforts to master some books and not others. When it comes to human beings, unification of measurement is part of the problem, not part of the solution. Can the metrics.

dave walsh, November 2, 2013 at 2:46 pm

I think this is the sort of thing we have to do once we abolish a canon and no longer have ground to stand upon to measure relative quality or value.
