Money, the Wild, and the Metaverse

Photo by George W. Ackerman

“Living as Humans in a Machine Age.” Registration is now open for our fall conference in Madison, Wisconsin. Paul Kingsnorth will be the keynote speaker, and we’ll post the full schedule soon. We’re looking forward to a great weekend with friends old and new.

“Money.” The new issue of Plough looks quite good. My copy arrived this week, so I’ve only read part of it so far, but Peter Mommsen’s introduction and Clare Coffey’s incisive critique of how social media commodifies friendships both offer rich food for thought.

“They Built the Digital World. Now They Just Want to Sew and Make Chairs.” Lisa Bonos profiles tech workers who are discovering the satisfactions of making physical things and confronting material limits: “In tech’s boom times, many sought to ‘move fast and break things,’ a motto Mark Zuckerberg popularized at Facebook that blossomed into a growth-at-all-costs ethos that spread throughout Silicon Valley. Now, in an era of layoffs and cost-cutting, workers feel an urge to slow down and make things.”

“The Biggest Problem in AI? Lying Chatbots.” Gerrit De Vynck reports on methods that programmers are using to try to prevent LLMs like ChatGPT from lying so often: “The result [of their training] is bots that act like precocious people-pleasers, making up answers instead of admitting they simply don’t know.”

“A New Rule of Education for the Age of AI.” Richard Gibson thinks about what “Rule of Education” might help us live well with AI. These aren’t the kinds of rules you can just slap on a syllabus: “AI’s foremost challenge for higher education is to think afresh about forming humans.”

“Everything Old is New Again.” Ethan McGuire reviews A.E. Stallings’s new selected poems: “If any contemporary poet deserves a volume of selected poems—and most who release them do not—Stallings does. Her bid for a form of relevant immortality, her potential afterlife, requires testing, and one way she and we her readers can test this during her lifetime is through such a collection.”

“Out of the Wild.” Samuel Matlack confronts the paradoxes latent in the idea of wilderness and wrestles with how we humans should get involved in the lives of our fellow creatures: “But the false ideal of nature untouched by humans obscures these decisions altogether. Rather than putting limits on human mastery, it is an abdication of responsibility to make difficult choices.”

“Kirk and Del Noce: Two Philosophers Connected through History.” Carolina Riva Posse considers the harmonies between Russell Kirk and Augusto Del Noce: “Del Noce and Kirk are both philosophers formed by their study of history. In the midst of the primacy of praxis, they re-propose the permanent things that respond to the longings of the human heart and struggle for a political vision that is not reduced to practical techniques of sociological efficiency.”

“The Hypocrisy of English-Only ‘Decolonization.’” Eric Adler points out an odd double standard that underlies many debates around general education requirements: “Some American educators appear to favor a ‘decolonized’ course of studies that’s conducted entirely in English. And they do not seem to recognize the contradictions inherent in that position.”

“RIP Metaverse.” Ed Zitron wonders what lessons should be learned from the catastrophic failure of the hyped-up Metaverse: “Decentraland, the most well-funded, decentralized, crypto-based Metaverse product (effectively a wonky online world you can ‘walk’ around), only had around 38 daily active users in its ‘$1.3 billion ecosystem.’”

“The Spiritual Battle of Teen Screen Time.” Bonnie Kristian thinks carefully about the scale on which we need to work to help young people foster healthier relationships with smartphones: “No one family can entirely fix the kids and phones problem, but neither can Congress. In each case, the scale of the solution is wrong. And the place we have the best chance of getting the scale right is the local church.”

“Amitai Etzioni, 94, Dies; Envisioned a Society Built on the Common Good.” Robert D. McFadden summarizes the intellectual contributions of an influential Jewish sociologist: “Communitarianism, with its emphasis on community, not the individual, staked out ground between liberal advocates of civil liberties and welfare rights on one hand, and conservative champions of laissez-faire economics and traditional values on the other. It never became a mainstream political movement, but it won significant followings in America and Europe.”

“Rational Magic.” Alan Jacobs annotates a recent essay by Tara Isabella Burton on why technophiles are getting into spiritualism. I really enjoy this format and the opportunity it affords to peer over the shoulder of an experienced reader.

3 COMMENTS

  1. The insane “green” war against cows is spreading from the test-case Netherlands to Ireland, and you can be sure it won’t stop there. You can be sure the EPA will try to force something like this in the US soon enough, and if they can’t do that they’ll find other ways to jam it through.
    https://www.telegraph.co.uk/news/2023/06/02/irelands-mooted-cow-massacre-warning-to-net-zero-britain/
    “The collateral damage of net zero is now getting uncomfortably close to home. First Dutch farmers were threatened with compulsory purchases to satisfy EU emissions targets, fomenting a new revolt in the process. Now it’s Ireland’s turn, where the government is reportedly looking at plans to cull around 200,000 cows to meet its climate targets”

  2. After reading the two articles about AI-content generation systems, I decided to see what ChatGPT would propose as an argument to support mandatory labelling of any content that was generated with the help of AI. What follows is my query and the ChatGPT response. It was an interesting quick experiment.

    Query: “Due to the challenges of AI hallucinations, draft an argument to convince people that all content that was generated with the help of AI must be so labelled.”

    ChatGPT response: Title: Promoting Transparency: The Imperative of Labeling AI-Generated Content

    Introduction:
    In an era of rapid technological advancements, artificial intelligence (AI) has emerged as a powerful tool that can create and manipulate content with astounding precision. However, this rise in AI-generated content has brought forth an array of ethical and societal challenges, particularly concerning the potential for AI-induced hallucinations. To navigate this complex landscape and foster a responsible digital environment, it is essential to label all content that has been generated with the assistance of AI. By implementing a labeling system, we can uphold transparency, protect individuals’ well-being, and empower users to make informed choices regarding the content they consume.

    Ensuring Informed Consumption:
    Labeling AI-generated content allows users to distinguish between human-generated and AI-generated content, empowering them to make informed choices. The rapid growth of AI has made it increasingly difficult to discern between authentic human creations and AI-generated replicas. By requiring labels on AI-generated content, individuals can differentiate between the two, enabling them to critically evaluate the information they encounter and exercise their judgment accordingly. This transparency serves as a bulwark against the inadvertent acceptance or dissemination of potentially misleading or harmful content.

    Safeguarding against AI Hallucinations:
    AI-induced hallucinations pose a significant challenge in the digital landscape. While AI’s capabilities to generate realistic images, videos, and text are impressive, they can also be exploited to deceive or manipulate unsuspecting users. By labeling AI-generated content, we establish a necessary safeguard against the proliferation of deepfakes, forged identities, and fabricated narratives. Such labeling creates awareness, making users more vigilant and discerning when engaging with potentially manipulated content, ultimately protecting them from emotional, psychological, and reputational harm.

    Protecting Privacy and Consent:
    The generation of AI content often involves the use of personal data, including images, voice recordings, and social media profiles. Failing to label AI-generated content infringes upon individuals’ right to privacy and informed consent. Labeling acts as a clear signal to users that their personal data might have been utilized in the creation of AI-generated content. This transparency enables individuals to understand how their data is being employed and empowers them to take appropriate measures to protect their privacy and control the use of their personal information.

    Encouraging Ethical AI Development:
    By mandating the labeling of AI-generated content, we incentivize responsible AI development practices. It creates accountability among developers, researchers, and technology companies, encouraging them to adhere to ethical guidelines and standards. The labeling requirement serves as a catalyst for promoting fairness, transparency, and bias mitigation in AI systems. It reinforces the need for comprehensive data privacy protocols and rigorous algorithmic oversight. By making the AI generation process more transparent, we foster an environment that values the responsible and ethical use of AI technology.

    Conclusion:
    Labeling AI-generated content is an essential step toward fostering transparency, protecting individuals from AI-induced hallucinations, and upholding privacy rights. By providing individuals with the knowledge and agency to make informed decisions, we empower them to navigate the digital landscape responsibly. Furthermore, labeling serves as a powerful incentive for ethical AI development practices, leading to increased fairness and accountability. Let us embrace this forward-thinking approach, ensuring a future where technology augments human potential while upholding the values of transparency, privacy, and informed consent.

