Early-Alerting Early-Alert Systems on College Campuses

Naperville, IL. Listlessness, body odor, weight loss, weight gain, wearing the same clothes, falling asleep in class. In the heightened culture of surveillance on college campuses, professors are encouraged to report such behaviors to so-called behavioral intervention teams, or BITs. “Early-alert them” has become a catch-all response for concerned faculty and staff members noticing nonnormative behaviors on campus. Once early-alerted, a student is in the system, and a simple referral unlocks a series of far-reaching interventions The Chronicle of Higher Education calls a “secret support network.”

The secret is so well-kept, in fact, that such systems have quietly become pervasive, used by over 90 percent of four-year public and private colleges and universities according to a Noel-Levitz survey cited by Hanover Research. And yet studies show that even colleges and universities engaged in early-alerting have good reason to doubt its efficacy. Among respondents to a Gardner Institute for Excellence in Undergraduate Education survey, for example, only 40 percent of institutional practitioners reported “improved retention/graduation rates” resulting from the use of such early-warning systems. While 40 percent moves the needle in a positive direction, one wonders if the gains might be more dramatic if faculty members talked meaningfully with students before referring them to secret support systems without their knowledge or consent. It’s worth noting, too, that the 40 percent figure reflects only perceived effectiveness, as national studies on early-alert models conducted by J. M. Simmons and others find “very little empirical evidence to validate the use of these programs.”

That alert systems may violate student rights is a barometer of just how inured many parents and students have become to institutional invasions of student privacy. On many campuses a faculty member can early-alert a student and beget an institutional intervention upon observing any nonnormative behavior. In theory a student may be early-alerted for talking too much or too loudly, or too little and too meekly, if in the professor’s judgment the behavior merits sounding the alarm. Students can be flagged for overly social behavior or antisocial behavior. Faculty members should, according to one institution’s guidelines, watch for and early-alert behaviors such as wearing the same clothes, something some faculty members do by thrift, necessity, or personal choice.

The trouble with such secret support systems insofar as personal liberties are concerned is that the individuals subject to scrutiny and surveillance are presumed guilty until proven innocent. In the same way that well-meaning individuals in the McCarthy era were encouraged to come forward with better-safe-than-sorry tips, casting a conspicuously wide net, early-alert systems have the potential to turn every campus employee into an informant. Interpersonal trust gives way to bureaucratic machination until the “system” threatens to supersede rather than supplement the humane personal and interpersonal interactions college life once modeled as part of an education in empathy and civility. Put simply, many students do not know they are being watched, how they are being watched, or by whom.

In theory early-alert systems are designed to flag behaviors that put a student at risk for low or failing grades and the potentially damning documentation of those failures in the form of academic probation or dismissal letters. And to the degree that such policies advocate for and protect student rights (in this case protecting them from the unwanted consequences of an academic probation or failing grade on their academic record), they function as intended. However, many early-alert systems empower reporters to make subjective observations that go well beyond academic distress to include behaviors such as “listlessness,” “lack of energy,” and “body odor.”

The trouble with such systems is that they too often conflate symptom with cause while failing to acknowledge the relative nature of normative behavior. Even the most kind-hearted faculty members may mistake symptom for cause when early-alerting a student without their knowledge, and doubly so given power asymmetries. The student who arrives in class on Tuesdays and Thursdays smelling like a locker room may in fact have come directly from a lunchtime workout. The student in the Monday/Wednesday class who wears the same clothes on the same days may be making a fashion statement, may be trying to divest themselves of possessions and live simply, or may be an avowed environmentalist resolved to reduce water usage. Put simply, even the euphemistic “secret support” aspect of the early-alert system fails when measured against the Golden Rule, not to mention the “glass houses” mentors, supervisors, and faculty members sometimes live in.

At universities where advisors are aided in their interventions by sophisticated data analytics software, the ethics of predictive analytics should likewise be flagged as potentially compromising civil liberties. Should a university be able, without the student’s consent, to mine the digital data it accrues on each and every student for signs of distress or at-risk behavior? Similar practices have been implemented at universities around the country. At Northern Virginia Community College (NOVA), George Gabriel, vice president of NOVA’s Office of Institutional Effectiveness and Student Success Initiatives, proposes to help students through still closer scrutiny of their digital lives, telling the Atlantic in 2015 that he would like to use student data to create algorithms that predict which students are at risk or detect early “warning signs.” He would also like the community college to collect “behavioral data,” encompassing “what they [students] are saying on social media.”

To limit the potential for compromising student privacy, well-intentioned colleges and universities must learn to govern their growing use of clandestine surveillance, better informing students of overwatch systems. They might focus unconsented institutional intervention on behaviors that expressly threaten the safety of the individual in question or the campus community at large. In this category we can place individuals showing obvious warning signs of life-threatening distress or crisis, such as expressing suicidal thoughts or threats of violence against self or others. Most concerned educators would agree that such behaviors, especially when seen in young people, warrant urgent institutional intervention, hopefully one that is respectful and rehabilitative rather than punitive or privacy-invading.

However, many of the same colleges and universities that widely and wisely issue alerts in such instances also recommend faculty members early-alert students exhibiting, for example, “frequent or high levels of irritable, unruly, abrasive” behavior, criteria by which some of today’s academics might be condemned based on their behavior in faculty meetings alone. Even behavior lacking “decorum” is said to constitute a reason for sounding the alarm. Of course, determining what constitutes “decorum” or “abrasive” behavior brings with it a baked-in geo-demographic and cultural bias an institution of higher learning should be expected to acknowledge and respect. To the extent that a college or university functions as a community it is, of course, responsible for maintaining safety and for protecting the campus from unlawful threats. But body odor is not a crime; nor is listless behavior, sleeping in class, or conscientious withdrawal. In such cases open-minded conversations between faculty, staff, and students are likely to pay far greater dividends than closed, secret systems lacking express student consent.

The university’s best, most utopian aims must not beget dystopian early-alert policies that infringe on students’ personal liberties while turning campus into a place where everyone is an informant, and deviations from the norm beget Orwellian intervention.

1 COMMENT

  1. Universities are under immense pressure to cover expenses and avoid any threat to normal operation, which is becoming more and more narrowly defined in terms of which social behaviors are acceptable (or safe enough to prevent lawsuits).

    Combined with 21st-century “we all know that” certainty about the value of high tech, the “alert system” is not a surprise. I’m not happy about it, though it’s possible it will indirectly reduce academic bloat by replacing armies of overseers (i.e., diversity enforcers, who need to perceive transgressions in order to prove their salaries are worthwhile) with automation.

    However, part of the problem with predictive technologies is that the algorithms are developed by us humans, and often reflect our biases.

    Here’s an example of one failure that might have been due to an aggressive salesperson hoodwinking a police department about AI that wasn’t so accurate:
    https://www.bbc.com/news/uk-wales-south-west-wales-44007872

    Another problem is the “truncation of doubt”: for example, if x correlates with y 90% of the time, and then y correlates with z 90% of the time, the algorithm might simplify the incompleteness of incoming data by assuming that x correlates with z 90% of the time, even though the mathematics would be 0.9 × 0.9 = 0.81, i.e., 81% (a rough sketch of this compounding follows the comment).

    Increasing the number of factors definitely reduces the probability, even when you start with a 99.9% correlation, as described concisely in terms of high-school math on this page: https://en.wikipedia.org/wiki/Law_of_truly_large_numbers

    Don’t miss the quip by Penn Jillette in the introduction section.
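
A rough Python sketch of the compounding arithmetic described in the comment above (my own illustration, not drawn from the comment or from any cited system; the helper name chained_confidence and the sample figures are assumptions, and the links are treated as independent):

```python
# Illustrative only: how confidence decays when probabilistic links are chained.
# If x predicts y 90% of the time and y predicts z 90% of the time, treating the
# x -> z inference as if it were still 90% overstates the real confidence, which
# is at best 0.9 * 0.9 = 0.81 under the independence assumption.

def chained_confidence(link_probabilities):
    """Multiply per-link probabilities to get the end-to-end confidence."""
    confidence = 1.0
    for p in link_probabilities:
        confidence *= p
    return confidence

print(round(chained_confidence([0.9, 0.9]), 4))     # 0.81   -- two 90% links
print(round(chained_confidence([0.9] * 10), 4))     # 0.3487 -- ten 90% links
print(round(chained_confidence([0.999] * 100), 4))  # 0.9048 -- even 99.9% links erode over 100 steps
```

The point of the sketch is simply that a pipeline of individually plausible correlations can end up far less certain than any single link suggests, which is one more reason to treat algorithmic “at-risk” flags with caution.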
