In November 2022, the Powerball lottery jackpot reached $1.9 billion and, for the first time in my life, I bought a ticket. It started as a joke. A friend was celebrating a birthday, and I’d gone to the gas station to buy him a lottery ticket. I figured, why not? I would enjoy being rich. But as I waited for the magic numbers to be revealed, I ruminated about what I would do with a billion-dollar post-tax windfall. I’d need to convert the money into something truly significant, something that would stand the test of time. I found myself Googling how much it costs to buy a school. I figured I could bankroll at least a few great new schools with all of my money. Maybe I would use my largesse to single-handedly transform educational opportunities in Northern Indiana, or even start a major foundation that would harness the stock market to create schools perpetually. This would not be easy. There is that whole challenge of the teacher shortage. I would have to figure out what curriculum exactly would make these schools so transformative, and how to make sure my fellow Hoosiers would send their children to my transformative schools. I started to worry that converting dollars into units of human enlightenment would prove overwhelming, given how little I know about K–12 education. I’d need to hire consultants to do it right. And that was as far as I got in my planning when I learned that I had not won the Powerball.
In the roaring 2020s, there are plenty of stories about people relentlessly assembling eye-popping fortunes, and about their ambitious plans for how to deploy them. Elon Musk, one of the richest people in the world, is said to sleep on the floor of his Twitter office. His self-reported philanthropic motivations include securing an extra-planetary future for the human race and saving democracy through social media. Musk makes me self-conscious that my moral imagination only got as far as buying schools. He seems willing to throw all his vast resources at these goals…yet always ends up reinvesting in himself.
In one of the more fascinating recent episodes of modern capitalism, one week after I lost the Powerball, thirty-year-old cryptocurrency trader Sam Bankman-Fried lost over $15 billion in personal wealth. While most of the news coverage focused on how a single man amassed, grossly mismanaged, and then abruptly lost such a fortune, a crucial subplot concerned Bankman-Fried’s philosophical interests. Living relatively frugally, he committed over a billion dollars to an ethical project called “longtermism.” The grants he made were Muskian in their tastes and scale of ambition: Bankman-Fried was interested in developing and regulating artificial intelligence, in identifying and training high-talent STEM students, and in forecasting and preventing future pandemics. His largest grants went to optimizing and spreading the longtermist movement. In summer 2022, outlets like the New Yorker and the New York Times ran feature pieces introducing longtermism as the future of ethics.
For those new to the discussion, longtermism is a combination of two significant and perennially controversial assumptions. First, the longtermists assume that measurable impact matters the most in ethics. This assumption is part of an ethical system called utilitarianism. Just as you might try to maximize the return you get from a financial investment, so should you try to maximize the return you get from an ethical investment. The return on investment might be measured in the number of lives saved or improved—or even in the number of good lives created. If you have a choice between spending two dollars on a gag birthday gift for a friend or on purchasing a mosquito net for someone at risk of contracting malaria, utilitarianism says to go for the mosquito net. Crucially, utilitarians say that philanthropists should not be biased by narrower considerations like their personal relationships, histories, or emotional attachments. Ethics is the discipline of getting rid of such biases to focus on what’s most valuable. Sam Bankman-Fried’s parents, both Stanford professors, raised him to be fluent in this kind of optimization paradigm. He framed most of his decisions about what to eat, where to travel, and how to invest in terms of “EVs,” or “units of expected value.”
The second assumption of longtermism is that the time of the impact does not matter in ethics. This assumption is sometimes called temporal neutrality. It does not matter whether lives get better immediately, or pretty soon, or even in humanity’s distant future. Ethical philanthropists should be willing to make calculated now-for-later tradeoffs, and should consider their effect on merely possible future people, not just current actual people. Our emotions cause us to pay much more attention to events that may occur soon, but these emotions are just as biasing as our idiosyncratic personal attachments. A crucial feature of temporal neutrality is that it encourages us to distinguish between the real risks that come with any future-directed tradeoff (which should be weighed in our decision making) and our innate, present-biased fears, which can make us imprudent.
Utilitarianism and temporal neutrality have been debated seriously in philosophy departments for at least 150 years. Prototypes of each assumption even appear in the Platonic dialogues of more than two thousand years ago, where Socrates and his Greek friends try to get to the bottom of how the “art of measurement” figures in a good life. What has changed recently is that a small cadre of extraordinarily wealthy men—and they are nearly all men—has emerged with the resources to practice this ethical system at scale. As he grew his crypto fortune, Bankman-Fried partnered with Silicon Valley philanthropy wonks and philosophers at elite universities to create a charity called the FTX Future Fund. The Future Fund set out to make grants of at least $100 million a year based on four longtermist ideas about value and control. First, humanity’s overriding ethical aim should be to protect future generations by focusing on the distant future as much as the near. Second, now is the “crucial time in human history” for shaping that future. Third, the best means for pursuing ethical goals is to invest in “ambitious people and projects.” And finally, the ethical pioneers must be willing to fail often in order to “succeed at scale.”
In the twentieth century, most major philanthropies set up endowments with oversight boards; their founders, who were usually financiers, took on more of a patronage role. When John D. Rockefeller set up his blockbuster foundation in 1913, he turned over decision making to an independent board almost immediately. “I have not had the hardihood,” he explained, “even to suggest how people, so much more experienced and wise in those things than I, should work out the details even of those plans with which I have had the honor to be associated.” Rockefeller wanted an opportunity to transform money earned through his often brutal capitalist tactics into something with moral significance. What he wanted from his foundation was a kind of reputational immortality: the Red Cross and other institutions would keep doing good in his name long after he died. He assumed that the tactics that helped him build a vast oil empire by age thirty-nine might not be the best methods for expanding the American Red Cross.
In the twenty-first century, there is less and less of a boundary between the financial and philanthropic activities of the world’s richest men. In launching the Future Fund, Bankman-Fried opted for a move-fast, pay-as-you-go model. Some of the Future Fund’s million-plus-dollar grants were applied for in a single day and approved within two weeks. The architects of longtermist philanthropy wanted it to work with a perpetual start-up ethos: a very tight inner circle, big risks, and betting on their own smarts to power through the inevitable obstacles.
The longtermists certainly gained experience failing at scale in November 2022. When Bankman-Fried’s crypto exchanges imploded, all the pledged grants suddenly evaporated. The Future Fund’s advisors, in Millennial fashion, posted their resignation note on Twitter. And as the fraud charges rolled in, the outspoken proponents of longtermism found themselves unwitting participants in a morality play.
Just as we are still reckoning with whether digital tokens will ever be the future of money, we should also be wrestling with the ethical framework guiding the decisions of these wealthy longtermists. Understanding the strange way their movement thinks about time, money, and their power to control both can give us insight into how our new boom-and-bust crypto economy is changing our approach to ethics.
Utilitarianism has always had a complicated relationship with time and money. In the latter half of the 1800s, Cambridge professor Henry Sidgwick was one of the great leaders of the movement. He instructed his protégés that time itself was irrelevant to ethics; utilitarians should just aim to have the biggest possible impact: “Hereafter as such is to be regarded as neither less nor more important than Now.” As a matter of practice, Sidgwick and his fellow Victorian-era utilitarians worked on current problems and used existing institutions. They engaged with Parliament, developed philosophical curricula, experimented with religious movements, and lobbied for better working conditions across the British factory system. They were also remarkably democratic for their time—the most significant impact the early utilitarians had was in convincing their contemporaries that women, the poor, and nonwhite people had interests just as morally significant as those of rich white men.
It isn’t an accident that utilitarian thinking came into its own at the dawn of the Industrial Revolution. Utilitarianism is unimaginable without capitalism—without its confidence in the power of money to move systems and its mechanisms for measuring efficiency. Pre-capitalist ethical systems offer us much more complex and sometimes contradictory advice on the role money plays in a morally good life. Consider Christianity. Jesus tells his followers that it’s harder for a rich person to have eternal life than it is for a camel to fit through the eye of a needle. But he also condemns followers who do not make prudent future investments and grow their resources, and he praises a seemingly crooked financial agent who, expecting to get fired, preemptively renegotiated all the debts owed to his master. Jesus tells a would-be disciple to sell all his earthly possessions to follow him. And then, when Judas complains that Mary of Bethany’s expensive perfume should have been sold to feed the poor, Jesus upbraids him: “the poor you will always have with you.” The only consistent message about money throughout the New Testament seems to be that we should handle it very cautiously, like a kind of moral nitroglycerin.
With the rise of capitalism, money began to feel less magical and more scientific. In the 1700s, the founder of Methodism, John Wesley, encouraged his followers to pursue wealth but with holy intentions for distributing it: “Those who gain all they can and save all they can should also give all they can so that they will grow in grace and lay up a treasure in heaven.” By the 1800s, a feedback loop had developed in Europe between, on the one hand, Protestant ideas about election and sanctification and, on the other, capitalist ideas about earning and investing. Holy people earned more and invested prudently in the Church and their communities, which then grew and endured. Max Weber noticed this social phenomenon and wrote The Protestant Ethic and the Spirit of Capitalism.
The utilitarians simplified this Wesleyan motto, subtracting the heavenly goal. And by the mid-twentieth century, utilitarianism had evolved to tie its moral goals much more closely to personal finance. A good person in capitalism should aim to “earn to give.” For decades, the utilitarian Peter Singer has taught a course at Princeton called Practical Ethics, encouraging Ivy Leaguers to take up high-earning finance jobs and then redirect their individual incomes to very efficient charities. The particular philanthropic targets have changed over time: in the 1970s you needed to donate to people suffering from famines in South Asia and to organizations promoting vegetarianism and nuclear nonproliferation. In the 1990s and early 2000s, the focus was on reducing deaths from malaria and direct financial transfers to the world’s poorest people. Singer and other modern utilitarians take pains to discourage donations to institutions that don’t have efficient, measurable moral production functions. In 2013, he caused a bit of a stir by criticizing patrons of the Make-A-Wish Foundation who granted a gravely ill boy’s superhero role-playing wish in San Francisco. According to Singer, this was sentimental moral inefficiency at its worst. He and other leading twentieth-century utilitarians have strenuously inveighed against donations to art museums, religious groups, and universities. Such institutions, they claim, don’t need your money or attention, and don’t convert dollars to impact the way utilitarian philanthropy does.
In the late twentieth century, most practicing adherents of this form of utilitarianism were former philosophy majors of modest means. Most of the rest of us, meanwhile, continued to let our philanthropic ambitions be guided haphazardly by school, church, and community fundraisers. Our overriding ethical goals stayed focused on being good parents, neighbors, and colleagues. We valued participation over optimization. But at the dawn of the twenty-first century, the utilitarian program surprised everyone by migrating out of philosophy lecture halls and into industry, in particular to new financial-services and technology companies. Whereas Singer hoped to influence more traditional billionaires with more conventional assets (think Bill Gates and Warren Buffett), these new utilitarians focused on tech iconoclasts like Elon Musk and unconventional financial instruments like cryptocurrency. This focus on the new economy also shows up in the advice utilitarians have started to give for how to fulfill our ethical obligations. They’ve moved on from malaria; now the overwhelming focus is on shaping artificial intelligence and empowering a generation of prophetic technologists.
The vision has been laid out in a series of popular books over the past few years, most of them coming out of the University of Oxford, where William MacAskill, one of Singer’s acolytes, is a professor. MacAskill has been one of the most prolific advocates for “effective altruism,” especially in Silicon Valley and on elite college campuses. Unlike the typical Oxford don, he embraced public engagement early in his career. He co-founded a nonprofit called 80,000 Hours to help college students find careers that will empower them to have the greatest measurable moral impact. He has published almost exclusively with trade presses. While he lives on a modest £31,000 a year, he has Elon Musk on speed dial and has become the philosophical spokesman for the new strain of technologically financed utilitarianism. He was the key ethics adviser for the FTX Future Fund.
In his 2022 longtermist manifesto, What We Owe the Future, MacAskill resurrects the old Sidgwickian idea that utilitarians should consider future impact just as much as present impact. What he adds is the idea that we can make our own present moral codes into longstanding, potentially immortal codes, particularly by building them into the generalized artificial-intelligence (AI) systems that we are now creating. MacAskill often borrows a term from investing, “locking in,” to describe how our future will be determined. If you want to have enough money for retirement when you’re sixty-five, you have to lock it in when you are much younger, by committing resources to, say, a good 401(k) plan. If you miss the investment window, perhaps by underestimating the power of compound returns, you are almost certain to regret it. Likewise, MacAskill argues that we are at a point when human civilization is like an emerging adult, but one with extraordinary technological powers to lock in its future prospects. MacAskill believes that as we outsource more and more of our decision making to AI, it will come to have a power that is more far-reaching and durable than our older, squishier moral technologies—namely religious, educational, political, and cultural institutions.
In MacAskill’s vision, the fundamental question for our generation is what kind of immortal code we will allow to take over. He doesn’t really suggest there are competing ethical ideas that might get locked in; rather, the question is whether we will allow the future to evolve in a chaotic and uncontrolled way or whether we will use our powers to guide it toward the best outcome. He hopes visionaries will lead us into this brave new world by engineering AI that aligns with our philosophical goals. And he urges the reader to be afraid of the existential risks of failing to do so. If we focus too much of our energy on marginal improvements to the squishy institutions of church, state, commerce, and charity and neglect the AI revolution, we might inadvertently hasten a humanity-destroying climate, pandemic, or weapons event. He insists: “Few people who will ever live have as much power to positively influence the future as we do.”
The sixteenth-century astronomer Nicolaus Copernicus shocked the scientific establishment by discovering that the Earth was not the center of the universe. In fact, there is no privileged place in the universe on which everything else depends. The Earth is special to us because it is where we happen to live, not because it is the planet around which everything else revolves. This idea, which revolutionized physics, is often called the “mediocrity principle.” It has an analogue in how we think of time and value. The present era might be special to us because it is when our lives are occurring, but no point in time is any more special from the standpoint of the moral universe. This was exactly Sidgwick’s point when he urged that Hereafter and Now are on the same ethical footing.
This sense of time pushes us to befriend other generations rather than to treat them like mere subjects. The meaning of our lives is determined in part by how we respond to the moves of previous generations and process their mistakes. Our descendants will accept some of our plans and reject others. Our values and projects will be part of how they understand the meaning of their lives, but they’ll also add their own chapters to the human story.
New technological developments tempt their developers to deny mediocrity. Archimedes thought his Greek civilization was the fulcrum on which the future of humanity turned. I suspect Robert Oppenheimer had the same thought in the 1940s when he grasped the power of the nuclear bomb. They were both right about the changes they were witnessing but mistaken about their capacity to predict or control the future. It’s hard to lock anything in, and we should be thankful that no previous generation had the kind of control that longtermists like MacAskill envision. Intergenerational ethics is and should be a perpetual negotiation.
Samuel Scheffler, a philosopher at NYU, has written movingly about how wrestling with the temporal mediocrity of our lives motivates us. “The vastness and impersonality of time are every bit as chilling and awe-inspiring as the vastness and impersonality of space, and the need for a refuge—for something that serves the function of a place in time—is, for many people, almost as strong as the need for a place in space.” For Scheffler, we build ourselves homes in time by attaching our lives to squishy traditions and institutions. It starts with small, personal habits. Every Tuesday morning you have a coffee with your coworkers, and however else your job might change, you start to believe that those Tuesday mornings will be there to return to. The two years of pandemic quarantines were soul-crushing in part because they razed so many of these traditions, replacing them with one bizarre, repeating, lonely day.
Successful institutions create homes in time that expand beyond the confines of our lives and are there to greet future inhabitants. Our ancestors attended school, voted, and lit candles in church sanctuaries. We hope that our children and grandchildren will do the same, and we also expect that they will make adjustments to the systems. We do not hope these institutions endure because we think our current methods of education, governance, or worship maximize expected value. They do pretty well. Rather, we want our short lives to be connected to the lives of others, and to be a bit more significant as a result. We want our ethical endeavors to have a slightly bigger parcel of temporal real estate.
In the twenty-first century, trust in institutions is at a low point, and so is our confidence that schools, governments, or religions will offer us such opportunities. Financial abuses have been a major part of the problem. Institutions require a massive amount of work to maintain, and the potential for corruption is ever present. Consider Pope Francis’s recent efforts to untangle decades-long trails of corruption in Vatican finances. The new crypto economy feeds on our current moment of widespread institutional degradation. It advertises itself as a workaround. The promise of crypto is that we can conduct our financial lives without trusting banks or governments, instead putting our trust in immortal cryptographic protocols and the engineers who establish them. The longtermists have tried to tie this vision to a moral mission.
But money has never had quite as much power as utilitarians have wished. And as we expand our temporal concern, we’d do well to remember the unique ethical power of institutions. The future is deeply uncertain, with risks and opportunities we can barely grasp. Future people will no doubt be affected by our current decisions, but they aren’t our dependents, and there is no reason to suppose their lives will be entirely determined by the technology we build for them now. They will have their own moral goals, which they’ll develop in the schools, economies, and political and cultural institutions that we build together and bequeath to them. And future generations are likely to reflect on the era of Bankman-Fried and crypto-philanthropy in the same complex and judgmental way we look back at John Rockefeller and petroleum-driven philanthropy. The power of good intergenerational institutions like schools, governments, religions, and, yes, even conventional philanthropies run by boards and endowments is that they embrace temporal mediocrity. If we care about making a long-lasting, flexible, and democratic impact, the lesson we should learn from the brief, bizarre life of the FTX Future Fund is that all those squishy institutions are still among our best moral technologies.