Scott analyzes a new survey of AI researchers, showing diverse opinions on AI timelines and risks, with many acknowledging potential dangers but few prioritizing safety research.
Longer summary
This post discusses a recent survey of AI researchers about their opinions on AI progress and potential risks. The survey, conducted by Grace et al., shows a wide range of predictions about when human-level AI might be achieved, with significant uncertainty among experts. The post highlights that while many AI researchers acknowledge potential risks from poorly aligned AI, few consider it among the most important problems in the field. Scott compares these results to an earlier survey by Müller and Bostrom, noting differences in methodology and results. He concludes that it is encouraging that researchers take AI safety arguments seriously, while pointing out a disconnect between acknowledging risks and prioritizing work on them.