I’ll be taking the next three weeks off from all things digital. These weekly posts should resume when I return.
“Seventeen Theses on Writing and Place.” Matt Miller articulates some truths about language and place, e.g.: “Places can fail people, but far more frequently the failure goes the other way around. When a place fails to meet our needs, it is often because someone first failed to concern themselves with the needs of the place.”
“Communication Breakdown.” Matthew Crawford points out that much new technology today only adds layers of friction rather than actually solving a problem. As an example, he describes a recent phone call to schedule an x-ray: “To the extent we were able to understand one another, I gathered from our exchange that there had been a breakdown of communication between the imaging center, the medical practice that referred me to them, and my insurance company. Given my full name and birthdate, she insisted that I live in Redondo Beach. That’s what it said on her screen. It took a fair bit of cajoling for me to convince her that I do not now, nor have I ever, lived in Redondo Beach. The whole pretext of IT is to facilitate the exchange of information. But I can only think that the breakdown in cases like this is due to the fact that each of these bureaucracies is so clotted up with IT systems that they collapse under their own weight.”
“Moral Identity Politics.” Nathan Beacom ponders identity politics and recommends a better way of conceiving of them: “In paying too much attention to accidental aspects of identity, we have lost touch with their essential roots. If we can see again that moral identity is closer to the heart of the human person than accidental characteristics, we may begin to understand behaviours that heretofore had been puzzling, and we may find ourselves able to understand, sympathize with, and communicate among those people who, before, had left us bewildered.”
“Mundanity, the Mind, and AI.” Gracy Olmstead reflects on the value of mundane work and what will be lost if we offload this to AI: “what if the mundane is actually what makes us human? What if placed, present habits are the ones that truly shape our hearts and heads? What would happen if we refused to see them as degrading, and instead embraced L’Engle’s levity and life? What if we were to see these things, in fact, as play?”
“What Happens When People Don’t Understand How AI Works.” Tyler Austin Harper warns that we need to be honest about what generative AI is—and isn’t: “To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines.”
“The Megaproject Economy.” I disagree quite vehemently with Marko Jukic’s call for the megaproject of space exploration in this essay, but he presents a fascinating analysis and is right that building society around consumption is a recipe for disaster: “That developing and largely unindustrialized countries like India or Colombia are seeing rapidly falling birth rates is evidence that it is not industrial production which causes fertility collapse, but industrial consumption. They are getting the overflow of the industrialized world, which can produce more smartphones and televisions, more sweets and candies, more appealing online videos, more appliances and apparel, more power plants and electrical poles, than it knows what to do with, just like it produces so much grain that ancient breadbaskets like Egypt today import food from the developed world rather than bother feeding themselves.”
“Marie Gluesenkamp Perez’s Quiet Radicalism.” Evelyn Quartz tries to make sense of the Washington State congresswoman who cites Wendell Berry as a key influence: “Ever since Gluesenkamp Perez’s 2022 upset, national outlets have leaned on familiar labels to characterize her—‘moderate,’ ‘centrist’—as if her win in a Trump district could only be explained by ideological triangulation. But Gluesenkamp Perez isn’t splitting the difference between two poles so much as she is working from a different starting point, centered on repair, ownership, and trade work.” (Recommended by Adam Smith.)
“Honey Bees Learn to Fight Deadly Varroa Mites.” Jordan Charbonneau visits a monastery where some beekeeping monks are working to breed bees that can ward off Varroa mites themselves: “While there are some treatments for Varroa mites, some brave beekeepers—like the monks at Holy Cross—are taking a new approach by abstaining from treatment. By not treating for mites and letting susceptible colonies die off, they hope to breed new, stronger generations of bees that can reduce mite numbers on their own through behaviors like grooming and taking care of each other.”
“The Vaccine Wars.” Brian Volck writes a thoughtful essay on vaccines drawing from his experience as a pediatrician: “Vaccines differ from most other therapeutic substances in one important way. The rare but serious health risks of a medication to lower cholesterol levels or blood pressure, for example, are limited to those who take it. The individual who receives a vaccine may benefit from the immune effects while accepting a small but nontrivial risk, yet the biggest beneficiary is the community. A community with high vaccination rates protects even those who, for whatever reason, are not immune by reducing the likelihood of exposure to infected persons – the phenomenon known as herd immunity.”
“A Ban on State AI Laws Could Smash Big Tech’s Legal Guardrails.” Lauren Feiner reports on a provision in the new budget package that would ban state AI laws for 10 years. She talks with Rep. Ro Khanna about the risks: “Khanna warns that missing the boat on AI regulation could have even higher stakes than other internet policies like net neutrality. ‘It’s not just going to impact the structure of the internet,’ he says. ‘It’s going to impact people’s jobs. It’s going to impact the role algorithms can play in social media. It’s going to impact every part of our lives, and it’s going to allow a few people [who] control AI to profit, without accountability to the public good, to the American public.’”
“We’re Not Ready for the AI Power Surge.” Emmet Penney outlines the new data centers being built in the US and the preexisting problems that they exacerbate: “The problem is that even before the AI surge, America’s energy grid was already slipping into disrepair. Successive reports during the last few years from the North American Electric Reliability Corporation (NERC), a grid watchdog, have warned that more and more of the U.S. is in danger of capacity shortfalls. That may sound like jargon, but what it means is a higher potential for blackouts like the ones we saw in Europe last month.”