Blog Prize Digest: April

We’ve had a very exciting first month at the Blog Prize—viral posts, enlightening discourse, and cool new bloggers jumping into the race. We’re planning to announce mini-prizes soon for the best posts on specific subjects, so stay tuned. 

Will MacAskill also announced a new book! It’s called What We Owe the Future.

Some of our favorites from the blog roll

We had two great explanations of longtermism, a central interest of ours: one from Simon Bazelon (@simon_bazelon) (of Secret Congress fame) and the other from Neel Nanda of Anthropic. Both tackle how to introduce longtermism without relying on too many abstract concepts or counterintuitive claims.

The pitches are both exciting and distinct: Simon writes about how we can emotionally relate to the far future through an appeal to the preciousness and fragility of life. Neel uses case studies on AI and bio-risk, which suggest our survival is more precarious than we think.

We’ve been enjoying The Intrinsic Perspective, Erik Hoel’s (@erikphoel) blog, especially his widely circulated post, “Why We Stopped Making Einsteins”:

“I think the most depressing fact about humanity is that during the 2000s most of the world was handed essentially free access to the entirety of knowledge and that didn’t trigger a golden age.” 

In other words: where are all the geniuses? Hoel hypothesizes that we might need a new age of tutoring, and we’re excited about the conversation this post generated. We need more geniuses!

On a similar note, Jeremy Driver (@J_D_89) follows up his now-legendary cheems mindset post, which focused on our social and political horizons, with a post on the personal cheems mindset:

Broadly, personal cheems mindset is the reflexive decision for an individual to choose inaction over action, in particular finding reasons not to do things which have either high expected value, or a huge upside with very little downside risk. 

We believe there’s a huge amount of good that’s not created because people needlessly limit their own ambitions, and Jeremy is one of our favorite writers on reclaiming your agency. He also wrote a post responding to reactions to the article. We like this piece of advice he highlights from Michael Story (@MWStory):

https://twitter.com/MWStory/status/1504049716400910338

Build your own anti-cheems community!

We also had a few favorite philosophical deep dives:

From Good Optics, “Past and Future Trajectory Changes,” on “changes that improve the value of the long-term future through some mechanism other than preventing existential catastrophe.” He writes:

Whether trajectory change or existential risk mitigation is more effective obviously depends on the magnitude of existential risk. More fundamentally, it depends on how smooth or jumpy the curve of increase in the expected value of the future is. To the degree that the future is not completely determined yet, variation in human choices will result in variation in the ultimate amount of realized moral value. Good choices will result in more value than bad choices. Different worldviews imply different functions mapping quality of choices to amount of value. For instance, one might think that there are really only two equilibria in the long-run: extinction and utopia. If this is your view, your function mapping performance to realized value would look something like this:
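The graph itself is in the original post, but the two-equilibria worldview he describes is essentially a step function. As a rough sketch (the symbols q, q*, and V_utopia are our notation, not the author’s):

\[
V(q) =
\begin{cases}
V_{\text{utopia}}, & q \ge q^{*} \\
0 \;(\text{extinction}), & q < q^{*}
\end{cases}
\]

Here q is the quality of humanity’s choices and q* is the threshold separating the two equilibria. On a curve this jumpy, marginal improvements matter only near the threshold, which is how the post connects jumpy value curves to prioritizing existential-risk mitigation over incremental trajectory changes.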

One of many good posts from Sam Atis (@sam_atis) is “Was It All Worth It?”, on thinking through degrowth as a utilitarian:

It seems pretty obvious to me that while GDP growth increases living standards, it also increases the chance of the world ending. If you look through Toby Ord’s list of existential risks to the world (x-risks) seen above, you’ll notice that the most dangerous x-risks almost certainly wouldn’t exist if not for the industrial revolution and economic growth. From nuclear war (1/1000 risk of wiping us out) to engineered pandemics (1/30 risk) to AI Risk (1/10 risk!), we’re basically playing Russian roulette with the future of the world, and most of the bullets are ones we could only put in the chamber thanks to economic growth.

From Joe Carlsmith (@jkcarlsmith), a four-part series, “On Expected Utility”:

Some people think that unless you’re messing up in silly ways, you should be acting “as if” you’re maximizing expected utility [but] expected utility maximization (EUM) can lead to a focus on lower-probability, higher-stakes events — a focus that can be emotionally difficult. For example, faced with a chance to save someone’s life for certain, it directs you to choose a 1% chance of saving 1000 lives instead – even though this choice will probably benefit no one. And EUM says to do this even for one shot, or few shot, choices – for example, choices about your career.
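To make the arithmetic in Joe’s example explicit (our sketch, not his):

\[
\underbrace{1}_{\text{certain save}} \;<\; \underbrace{0.01 \times 1000}_{\text{expected saves from the gamble}} = 10
\]

So the expected-utility maximizer takes the gamble, even though with probability 0.99 it saves no one. The emotional difficulty Joe describes comes from living with that 99%.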

From the judges:

If you are still hungry for more posts:

Cliodynamics infographic of the month