When an autonomous Uber vehicle struck and killed a pedestrian in Arizona, Uber suspended its fleet of self-driving cars and assured everyone that it was “cooperating with authorities.” Such “cooperation” probably just means checking and updating the algorithms and sensors on its cars. This is because we tend to treat potential problems caused by digital technologies as engineering problems that can be solved by better technologies. When we do consider the ethical implications of such technologies, we tend to rely on ethical approaches that lend themselves to algorithmic thinking: if an autonomous car has to choose between hitting an elderly person in a crosswalk or swerving and hitting a mother pushing a stroller, what should it be programmed to do?
Thankfully, Shannon Vallor, a philosophy professor at Santa Clara University, offers a much better approach to technology in her recent book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. She argues that rule-based approaches to technological problems are inadequate and urges a return to virtue ethics. Rather than trying to solve particular ethical conundrums, virtue ethics focuses on forming good humans. Vallor’s defense of virtue ethics is salutary and desperately needed. However, I think her call for global cooperation in cultivating technomoral virtues is unrealistic; while we may be frustrated by the limits of local action, virtue formation remains inherently embodied and local.
Vallor defends virtue ethics for several reasons. One is that the rapid pace of technological change causes what Vallor terms “acute technosocial opacity”: in other words, we can’t accurately predict how adopting new technologies will change our society. Rule-based ethics approaches are particularly unhelpful in such conditions. Another reason she turns to virtue ethics is that ethical reflection should help form good persons. Too often, discussions around new technologies focus on what these technologies will enable us to do and whether these new abilities will have positive or negative effects. By working within the virtue ethics tradition, Vallor rightly attends to the ways that technologies always shape their users. This tradition asks not merely whether a person’s acts are good or bad, but whether a person is “moving toward the accomplishment of a good life; that is, [whether] they are living well.”
Vallor draws on Aristotelian, Buddhist, and Confucian traditions to develop a stripped-down “global technomoral virtue ethic” with four basic planks: “1. A conception of the ‘highest human good’ … 2. A conception of moral virtues as cultivated states of character, manifested by those exemplary persons in one’s community who have come closest to achieving the highest human good. … 3. A conception of the practical path of moral self-cultivation. … 4. A conception of what human beings are generally like.” Working from this framework, Vallor proposes twelve virtues that she argues are particularly necessary for wielding digital technologies well: honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and wisdom.
For the most part, Vallor’s argument is insightful and convincing, but, as FPR readers might expect, I remain quite skeptical that her vision of a global community, a community thick enough to actually form virtuous members, is possible. How would a digitally-connected, global community actually sustain a vision of the good life and form its members in the practices needed to pursue this life? Vallor recognizes this is a serious question, but rather than answering it directly, she argues that we don’t have any other viable hope:
As we determined in our analysis of the internal goods of global technomoral practice, the ability of human actors to adequately and reliably secure such goods in coordinated action with others will depend on our cultivation of the particular technomoral virtues likely to be conducive to such success. Why should we think that such cultivation is practically possible? Consider that the alternatives are to surrender any hope for continued human flourishing, to place all our hope in an extended string of dumb cosmic luck, or to pray for a divine salvation that we can do nothing to earn. I am confident I am not alone in finding these alternatives unappealing.
Vallor may not be alone in finding these alternatives unappealing, but that doesn’t mean we can wish a global community of virtuous technology users into existence. Ironically, given her aside about “divine salvation,” the only global communities I can think of that might be able to form their members in deep ways are religious communities. For one thing, people who believe that they “can do nothing to earn” salvation may in fact exercise more humility and magnanimity than those who think their virtues make them deserving of dignity and worth. More to the point, such communities are deeply embedded in particular places; they may include people from around the globe, but they do so through embodied, local communities.
Such embodied, grounded communities incubate virtue much more reliably than global “coordinated action” (whatever that means) spearheaded by technocratic or governmental leaders. Near the conclusion of her book, Vallor hopes for “a widespread human recommitment to the practice of moral self-cultivation, embodied in a new culture of global technomoral education.” But as other technology theorists have argued, global communities of “technomoral education” may simply not be possible, much less likely. As L. M. Sacasas has recently written, “Meaningful talk about ethics and human flourishing in connection with technology, to say nothing of meaningful action, might only be possible within communities that can sustain both a shared vision of the good life and the practices that embody such a vision. The problem, of course, is that our technologies operate at a scale that eclipses the scope of such communities.” Alastair Roberts makes a similar observation regarding the kinds of community sustained by social media: “unlike the Aristotelian polis, social media isn’t a political community intentionally ordered towards virtue and the common good, but rather is an impersonal technological framework or set of platforms, continually being optimized for ends that are frequently contrary or entirely indifferent to our moral and social well-being.” In other words, rather than looking toward a global technological culture to form virtuous members, maybe we need to turn our gaze closer to home and consider how our participation in embodied communities might be forming—or de-forming—us.
This is why one of the best sections in Vallor’s book is her chapter on robots, particularly its discussion of “carebots.” As Vallor argues, “skillful caring is a formative experience in cultivating the moral self,” so outsourcing care to robots may actually damage our own character. For instance, we are unlikely to cultivate empathy through thin, online interactions with others. Giving to a GoFundMe campaign to help someone or expressing condolences on a Facebook wall can be expressions of empathy, but they don’t require the embodied, uncomfortable self-giving necessary to actually form empathy. Carebots, however, may do away with the need for such care. How might I be damaged if I didn’t have to care for my sick daughter because a robot did it for me (maybe even more capably than I can)? As any sleep-deprived parent knows, this is not an easy question to answer: I don’t want to sit up with my toddler for hours in the middle of the night; I want to use whatever technique or technology will get her back to sleep now.
The very unpleasantness of these vital questions about technology and the good life is why we shouldn’t try to answer them on our own. If I choose what technologies to use based simply on what will be most convenient for me, I will make terrible choices. Vallor recognizes this, and hence argues for a global community that guides such choices, but, again, I’m not sure a global community can meaningfully limit individuals who choose personal convenience over the common good. Smaller, religious communities like the Amish (whom Wendell Berry repeatedly commends) or the Bruderhof have a much better track record of making decisions about what kinds of technology to use. In part, this is because it is at least possible for them to have a genuine, participatory conversation about what technologies will help them achieve their shared vision of the common good.
Even those of us who do not belong to such tight-knit communities can participate in these kinds of conversations in our families, churches, schools, and workplaces. Using technologies well within these embodied communities can cultivate the virtues we need to then use technologies more wisely on larger scales. Vallor makes a related point in her discussion of children raised in healthy cultures. If young people learn to relate well to others in embodied communities, they will be more likely to exercise these virtues in digital communities:
Empirical studies of the effects of new social media on youth lend increasing support to what is called the ‘rich get richer’ thesis: young people who are already well on their way to mature social competence and who have acquired above-average skills of social discernment, empathy, care, self-control, social courage, and confidence tend to thrive and gain important benefits from new social media practices. On the other hand, young people who lack social competence, or who report high levels of social anxiety or loneliness, not only have less positive experiences online but in many cases seem to suffer increased anxiety and loneliness as a result of their new media habits.
Our virtual presence—our self as extended by digital technologies—can all too easily subsume and damage our embodied self. But a self that is profoundly rooted in and shaped by and accountable to local, embodied community is more likely to have the virtues required to interact well in digital contexts. In essence, virtues (or vices) cultivated in local, embodied communities are amplified through digital technologies.
Vallor would likely respond that turning toward local communities as incubators of virtue constitutes a head-in-the-sand denial of our globally interdependent condition. We don’t have the luxury, Vallor thinks, of simply cultivating virtue in particular, embodied communities:
The truly novel challenge of our contemporary technosocial environment is the extent to which 21st century technology networks—the speed, range, and power of which have grown by many orders of magnitude in mere decades—ensure the radical and virtually irreversible interdependence of these morally incommensurable cultures and communities. . . . We face a changed world in which our individual and local fates are largely conditioned by what humans collectively and globally practice. The good life cannot be reliably secured for any of us without the global coordination of practically wise courses of technosocial action that unfold in a stable and intelligently guided manner over the comparatively longterm, in many cases on timescales significantly longer than the average human lifespan.
Vallor is right that twenty-first century communities are inescapably interdependent on the actions of people across the globe. But in many ways, this has been true for at least the last five hundred years (the fate of the Maya civilization was certainly “conditioned” by choices made in Spain). The ability of individuals to “secure” a good life has always been infringed on by the distant decisions of the powerful. And such decisions have almost never unfolded in a stable and intelligently guided manner. If our hope for a good life rests on stable and intelligently guided global coordination, we are doomed.
But a focus on virtues should remind us that a life can be good without being comfortable or stable. If our primary goal is not engineering the right actions but forming good people, we can start in our homes and neighborhoods. We should certainly extend this work into the legislative halls of our states and nations, but the real work of virtue formation isn’t suited to these contexts. Likewise, we should take what actions we can to pressure tech companies to design their products in ways more conducive to virtuous interactions, but, given their financial incentives and built-in constraints, Uber and Facebook are not going to lead the way in developing technomoral citizens. Rather, people seeking to participate virtuously in broader communities through social media and other digital technologies will need to look for ways to subvert often corrupt structures to healthy ends; we will need to “make do” with Certeauian creativity; we will need to beat social media swords into ploughshares and clickbait spears into pruning hooks. Cultivating virtue has never been easy. It remains possible.
“The very unpleasantness of these vital questions about technology and the good life is why we shouldn’t try to answer them on our own. If I choose what technologies to use based simply on what will be most convenient for me, I will make terrible choices.”
So true — this is one of the many ways we let convenience cheat us out of formation. At the same time, we should remember that communities which can provide some ballast against our individual tendencies to choose ease and convenience above care and sacrifice also make these sacrifices lighter by sharing them. My husband and I stayed with a Bruderhof community last fall, and the community’s care for their elderly members overwhelmed me. Not only do they find creative, effective ways to keep their elderly and disabled men and women integrated into community life and work, they also share the often crushing tasks of daily care. Thus, the adult children of an aging woman don’t have to be her only (or even the primary) caregivers. This duty is shared with other members of the community: other adults, teens full of extra energy and cheer, trained nurses, and even small children. I saw similar support surrounding families with young children, disabled members, those with mental illness, and more.
Their example was a reminder that if I am going to call someone into something hard, such as the embodied care of another person, I must also be calling them into something good: a community committed to sharing those burdens.
“Vallor is right that twenty-first century communities are inescapably interdependent on the actions of people across the globe. But in many ways, this has been true for at least the last five hundred years (the fate of the Maya civilization was certainly “conditioned” by choices made in Spain). The ability of individuals to “secure” a good life has always been infringed on by the distant decisions of the powerful. And such decisions have almost never unfolded in a stable and intelligently guided manner. If our hope for a good life rests on stable and intelligently guided global coordination, we are doomed.”
This paragraph touches on the doubt I feel regarding place-based ethics as a resource for addressing some problems.
As you say, global scales are so often the source of exploitation. Still, it’s hard for me to get out from under the shadow of “global coordination” because that shadow is being cast by very global monsters–climate change, several migrant crises, multinationals, etc. I wonder if there’s not something to Vallor’s proposed synthesis of virtue ethics and global technocratic coordination given the nature of those problems.
Let me put that idea in question form, because I’m not certain about it. Doesn’t the scale of certain problems require commensurate or homologous scales of response? A specifically communitarian or place-based virtue ethics is undoubtedly appropriate to many issues — e.g., it seems the best response to the effects on our political discourse wrought by social media. But I don’t know if that kind of place-based ethics works for problems like the ones that seem to concern Vallor, because not every problem has the same “scale.” I’m sympathetic to the idea but still unsure. Instead, might it be possible to imagine a tool kit or repository of ethical solutions attuned to the scale of a given problem — some communitarian, but some requiring communities to coordinate through technocratic and even bureaucratic mechanisms?
That’s a challenging question, Ben. My instinct is to respond by extending the brief reference I make in my conclusion to our need to form virtues on smaller scales and then exercise these virtues on larger scales. Even the “global” problems you mention (climate change and migrant crises) have local causes and manifestations. So any hope I have in “global” coordination (and what exactly this entails needs to be defined) relies first on local formation. But that’s not a complete answer, and Vallor’s book has certainly given me food for further thought.