
How to build a better world: Philosopher outlines long-term view for the planet in his new book

Will MacAskill fans the flames of debate for a brighter long-term future for the planet in his new book.

Finding himself having dinner with Nicola Sturgeon and asked to pitch one policy that would make a difference, William MacAskill told the first minister to prepare for a pandemic.

It was 2017 and the other guests smiled before quickly steering the conversation onto more pressing priorities for the first minister.

Five years on, more than two of them blighted by Covid, it is no surprise that MacAskill’s thoughts are provoking far more than polite indifference today.

The philosopher, whose ideas for a better world are igniting interest around the globe, remains exercised by pandemic readiness and is convinced the next global pandemic will only be accelerated by rapidly expanding, unfettered technology. The coronavirus pandemic, he said, could seem mild in comparison with the horrors that novel engineered pathogens might bring.

“It’s already the case that scientists can take an existing pathogen, like the flu virus, and make it more deadly or more transmissible to increase its destructive power,” explained MacAskill, who became the youngest associate professor of philosophy in the world when he joined the University of Oxford at 28.

“That ability is, firstly, only getting more powerful, so we can do that to a greater and greater degree. And, secondly, it’s getting more democratised, so more and more people are able to do this.

“If we’re not careful, well, imagine a world in 10, 20 years where millions of people around the world could spend just a couple of thousand dollars to design whatever sort of super-virus they want. That’s a scary world and we need to be thinking and taking action with foresight to reduce that risk.”

What’s more, said MacAskill, further risk comes from the “enormously quick, fast progress” in Artificial Intelligence (AI), which could put power and control firmly in the wrong hands.

The 35-year-old said: “There are two categories of risk. One is the classic AI takeover scenarios that Nick Bostrom, a colleague of mine, has sounded the alarm bell for – it’s where we simply lose control of the AI systems themselves.

“And even if we managed to control AI and ensure that AI systems do what the designers want, there are still enormous risks. It could lead to an enormous concentration of power.

“We worry about the scale of inequality within a country or globally and the scale of differences in power, but that could just be a tiny fraction of what the future holds.

“There are good arguments for thinking that AI could enable uneven distribution of power, where all power is concentrated in the hands of a single person or company or country. That power discrepancy could persist indefinitely.”

It’s a worrying thought that he said we need to take seriously.

He explained: “In particular with AI, it could give unprecedented power to dictators. A significant part of why the last 200 years have seen the flourishing of democracy around the world – in a way that was essentially unprecedented in history – is because the current technological landscape means that workers are really useful. We generate a lot of value, we’re able to innovate, build things and make things better.

“If we’re in a world where the workforce is just automated, AI is doing everything, including the army, surveillance and police force, well, then there’s much less of a political force in favour of democracy and away from authoritarianism.

“So it seems quite likely that you end up with a situation where a small group of elites start controlling everything, just as we saw for the first few thousand years of human history since the agricultural revolution.”

MacAskill, who was born and raised in Glasgow, has spent his academic career advocating for effective altruism, or the use of evidence and reason to help other people as much as possible with our time and money.

President of the Centre for Effective Altruism and co-founder of 80,000 Hours – a non-profit organisation that encourages people to pursue careers with the largest positive social impact – he believes we should value all lives equally, making life decisions that align with that philosophy. And he practises what he preaches.

In an extension of Giving What We Can, another of his altruistic organisations, which asks people to donate 10% of their lifetime income to the charities making the biggest difference, MacAskill spends only £26,000 a year on himself after tax and savings, giving the rest to charity.

Now, MacAskill is taking the case for effective altruism one step further, arguing that we should not just consider the wellbeing of people living today, but future generations, too.

William MacAskill.

In his new book, What We Owe The Future: A Million-Year View, the moral philosopher makes the case for long-termism – making the right decisions today to ensure the potentially 80 trillion people yet to live on the planet will inherit a “flourishing and long-lasting society, where everyone’s lives are better than the very best lives today”. Hence, his warnings about future pandemics.

The idea of improving the lives of unknown future people initially, he admits, left him cold, so how did he become a proponent of long-termism?

“Back in 2009, when I set up Giving What We Can, the fundamental premise was that everyone counts equally, morally speaking, no matter where in the world they live,” he said.

“Distance in space is not a morally relevant fact when we’re talking about harms or benefits to someone. So, if you’re thinking about this expanding moral circle or giving concern to people who are otherwise disenfranchised, well, if distance in space doesn’t matter, distance in time shouldn’t either.

“Future generations are completely disenfranchised. They can’t represent themselves, they can’t advocate for their interests, and that was a very natural argument that weighed on me.”

It may seem abstract to make impactful decisions and changes to benefit potential future populations.

However, MacAskill pointed out, mitigating the long-term impact of climate change is just one area we’re already tackling with urgency, and there are many “concrete” things we can do to safeguard the world for the next generation, as well as our own.

He said: “Whether it’s technology to reduce the risk of worst-case pandemics, technical work to make sure that AI systems are safe and secure, or diplomacy to reduce the risk of a nuclear war, these things are very good just from a near-term perspective. Even just in the interest of the present generation we should be doing much more; when we start thinking about future generations, these become an even greater moral issue.”

In the current climate, with war raging in Ukraine and rising support for populist politics in countries around the world, yes, he admits, it can often feel like we’re in an uphill battle to create a morally progressive, fairer society by encouraging everyday people to make sacrifices. However, by looking at the past, we can see the potential for a better future.

“It might well be true that the world sucks in many ways,” he said. “But we can do things about it, we can make the world better and, in many ways, the world is much better than it was 200 years ago.

“Back then the majority of people were in some form of forced labour, now that’s true for much less than 1% of the world’s population. Back then, there wasn’t a feminist movement, women couldn’t vote. Back then, almost everyone was in extreme poverty, now less than 10% of the world is. In significant part that is because of the action of morally motivated people.

“When I first thought of Giving What We Can, the response I got was, ‘Oh that’s insane, no one will sign up to give 10%, that’s far too demanding’. Now we have over 7,000 people and literally over a billion pounds given to very effective causes, and many billions of pounds have been pledged.

“On a broader societal scale, the battle to mitigate climate change will be very long and hard, and it won’t be perfect. We’ll still get two or three degrees of warming. But there’s been enormous successes, and we should celebrate that and then double down.”

On the topic of climate change, MacAskill said policy changes are a great example of why moral progress does not need to be universal.

He added: “There are some solutions to climate change that require everyone to be on board. Let’s say a global tax on carbon, for example. Then there are other solutions that only require one person or one group to be involved.

“For example, over the course of the 2000s, Germany invested enormously in reducing the cost of solar power by effectively underwriting the entire industry. It didn’t make much sense for a northern latitude, fairly cloudy country – but it now means that the cost of solar power is extremely low, and it’s going to get lower in the future, too.

“If a single country invests very heavily in clean tech and it makes the cost of renewable energy cheaper than the cost of fossil fuels, then other people, just on grounds of pure self-interest, will move on.

“I think a lot of the most promising things we can do are, actually, individual groups or single countries taking action that makes the whole world better. Once we’ve used up that pot of opportunities, there’ll be even harder things that do require global cooperation but, at the moment, there are still many things we can do in that category.”

Vegetarian MacAskill gives the recent rise of veganism, and the meat substitutes it has spawned, as another example of how moral changes by some can benefit the wider population – his own Effective Altruism movement advocates for so-called “clean meat” and has invested in companies like Beyond Meat and Impossible Foods.

“If you can get plant-based meat and cultured fat that taste exactly like meat, and you can make it cheaper, healthier than regular meat, you can get people who are not that morally motivated, necessarily, moving on to a better diet,” he explained. “We can do a mix of moral advocacy and technological solutions that are compatible. You can do both sides of the ledger at the same time.”

MacAskill’s thinking has already gained followers around the world and been praised by the likes of Elon Musk and Bill Gates, but he remains disarmingly down to earth and practical.

“My message is squarely directed at, let’s say, five to 10% of the world’s population,” explained MacAskill, whose previous books include Doing Good Better. “Even after my giving, I’m still in the richest few per cent of the world’s population, and [living on] a little more than the median income in the United Kingdom.

“I’m talking to people who may not feel like it, but they are among the richest people in the world. That’s you and me. It’s about recognising our privilege, thinking about what we can do and, in response, using that privilege to help the world.”

He added: “What’s the enemy of effective altruism? It’s apathy. The reason why people aren’t engaged is not particularly because they’re thinking about things in a very different way. It’s not even that people are too egoistic. Instead, people are often just apathetic.

“So if I can encourage people to get off their seats and start making the world better, even if that’s in ways that I kind of disagree with, that’s what I want to see.”


What We Owe The Future: A Million-Year View is published by Oneworld