"But it’s also a tech bro’s fantasy; there’s no real evidence, yet, that AI or something like AI is a threat to human well-being. And the overwhelming focus on AI is a demonstration that EA has become deeply anti-empirical."
I feel compelled to push back on this point a bit. I can think of two kinds of evidence that AI is a potentially da…
"But it’s also a tech bro’s fantasy; there’s no real evidence, yet, that AI or something like AI is a threat to human well-being. And the overwhelming focus on AI is a demonstration that EA has become deeply anti-empirical."
I feel compelled to push back on this point a bit. I can think of two kinds of evidence that AI is a potentially dangerous technology:
1. First are the already existing effects of the AI algorithms that run Facebook, Google search, YouTube, etc. Namely, the incentive to maximize users' attention time leads these services to prioritize content that is controversial or incites anger, and I think we've seen in the last several years how that can lead to the radicalization of individuals and even the destabilization of American democracy. (It would be fair to point out that, along with AI, the profit motives of tech companies were a necessary ingredient in the cauldron.)
2. Second are the potential effects of AI once it achieves human-level general intelligence (if the alignment problem is not solved by then). Is it fair to dismiss concerns about this danger as "not real evidence" just because they rest on an intellectual argument rather than an already existing effect? Before the first nuclear bomb was ever detonated, you would have been wise to listen to physicists' warnings about how dangerous nuclear technology could be, rather than dismiss their ideas as "not real evidence". The same goes for warnings about the dangers of human-caused climate change (though those dangers are already in the past and present, not just the future).
"But it’s also a tech bro’s fantasy; there’s no real evidence, yet, that AI or something like AI is a threat to human well-being. And the overwhelming focus on AI is a demonstration that EA has become deeply anti-empirical."
I feel compelled to push back on this point a bit. I can think of two kinds of evidence that AI is a potentially dangerous technology:
1. One is the already existing effects of the AI algorithms that run Facebook, Google search, YouTube, etc. Namely, the incentive to maximize users' attention time leads to these services prioritizing content that is controversial or incites anger, and I think we've seen in the last several years how that can lead to radicalization of people and even destabilization of American democracy. (It would be fair to point out that along with AI, the profit motives of tech companies were a necessary ingredient in the cauldron.)
2. Second are the potential effects of AI once it achieves human-level general intelligence (in case the alignment problem is not solved). Is it fair to dismiss concerns of this danger as "not real evidence" just because it's an intellectual argument rather than an already existing effect? Before the first nuclear bomb was ever detonated, you would have been wise to listen to physicists' warnings of how dangerous nuclear technology could be, rather than dismiss their ideas as "not real evidence". Similarly for warnings of the dangers of human-caused climate change (though these dangers are in the past and present and not just the future).