9 Comments

I absolutely love the last questions. You are filled with so much insight & wisdom, but above all, compassion. I really admire that.


An excellent read—thank you for the nuanced and systems-level thought here. Individuals are rarely (never?) formed in isolation; they are shaped and molded by the environments in which they live. Taking a critical look at those environments is necessary.

Dec 1, 2022·edited Dec 1, 2022

I don't get longtermism. Or, at least, the idea that we should be concentrating on solving some issue that may or may not exist in the future. It seems to me that every incremental advance we make now makes the future better, as well as today.


A super refreshing post, given the general bandwagon of articles crucifying (often rightly so…) SBF. It's really nice to instead see a more empathetic take that is willing to shift from finger-pointing toward a more humble inner/collective-facing critique. Thanks for sharing.


Accurate because it's more representative of a multifaceted view of people, rather than seeing people as 'good/evil'.


Very interesting! And this blog really spoke to me. Thank you!


"But it’s also a tech bro’s fantasy; there’s no real evidence, yet, that AI or something like AI is a threat to human well-being. And the overwhelming focus on AI is a demonstration that EA has become deeply anti-empirical."

I feel compelled to push back on this point a bit. I can think of two kinds of evidence that AI is a potentially dangerous technology:

1. One is the already existing effects of the AI algorithms that run Facebook, Google search, YouTube, etc. Namely, the incentive to maximize users' attention time leads to these services prioritizing content that is controversial or incites anger, and I think we've seen in the last several years how that can lead to radicalization of people and even destabilization of American democracy. (It would be fair to point out that along with AI, the profit motives of tech companies were a necessary ingredient in the cauldron.)

2. Second are the potential effects of AI once it achieves human-level general intelligence (in case the alignment problem is not solved). Is it fair to dismiss concerns of this danger as "not real evidence" just because it's an intellectual argument rather than an already existing effect? Before the first nuclear bomb was ever detonated, you would have been wise to listen to physicists' warnings of how dangerous nuclear technology could be, rather than dismiss their ideas as "not real evidence". Similarly for warnings of the dangers of human-caused climate change (though these dangers are in the past and present and not just the future).


So which altruistic groups did he donate his billions to, before the collapse?

I thought I read AI research and such. Where was his concern for animals? Veganism?

Just asking, not accusing...


This was a beautiful article focused on the human being, SBF, who succumbed to the fantasy of Effective Altruism. Wayne has shown us that direct action is the more effective boots-on-the-ground work. EA is a lie built on the idea that you can solve problems at the boardroom level by doing whatever it takes to get rich. The boardroom is too insulated to understand what happens in the field, and it breeds greed and narcissism. Direct action breeds humility, compassion, and a better understanding of the problems we face, because you're actually doing the work, not just donating money.
