EPISODE · Apr 27, 2026 · 5 MIN
“What holds AI safety together? Co-authorship networks from 200 papers” by Anna Thieser
We (social science PhD students) computed co-authorship networks from a corpus of 200 AI safety papers covering 2015-2025, and we'd like your help checking whether the underlying dataset is right. Co-authorship networks make visible the relative prominence of the entities involved in AI safety research and trace the relationships between them. Although frontier labs produce lots of research, they remain surprisingly insular: universities dominate centrality in our graphs. The network is held together by a small group of multiply affiliated researchers, who often switch between academia and industry mid-career. To us, AI safety looks less like a unified field and more like a trading zone where institutions from different sectors exchange knowledge, financial resources, compute, and legitimacy without encroaching on each other's autonomy.

Of course, these visualizations are only as good as the corpus underlying them, because the shape of the network is sensitive to what's included. Here's what it currently looks like, showing co-authorship at the individual level:

[Figure 1 omitted from this narration; the article includes two details boxes titled "Figure 1: Methods" and "Figure 1: Color legend".]

While academic and for-profit authors occupy distinct clusters, over 95% of nodes are part of the [...]

---

First published: April 24th, 2026

Source: https://www.lesswrong.com/posts/c7D2q7k97QcwCxBrN/what-holds-ai-safety-together-co-authorship-networks-from

Narrated by TYPE III AUDIO.
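For readers who want to sanity-check a corpus like this themselves, here is a minimal sketch of how a co-authorship network of the kind the episode describes might be assembled and scored for centrality, assuming the `networkx` library. The paper list, author names, and affiliations below are invented for illustration and are not from the authors' dataset.

```python
import itertools
import networkx as nx

# Hypothetical mini-corpus: each paper is a list of (author, affiliation) pairs.
# In the real study this would come from the 200-paper corpus.
papers = [
    [("A. One", "University X"), ("B. Two", "Lab Y")],
    [("A. One", "University X"), ("C. Three", "University Z")],
    [("A. One", "University X"), ("B. Two", "Lab Y"), ("C. Three", "University Z")],
]

G = nx.Graph()
for authors in papers:
    # Every pair of authors on the same paper gets an edge;
    # the edge weight counts how many papers they co-authored.
    for (a, aff_a), (b, aff_b) in itertools.combinations(authors, 2):
        G.add_node(a, affiliation=aff_a)
        G.add_node(b, affiliation=aff_b)
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality: fraction of other nodes each author is connected to.
centrality = nx.degree_centrality(G)
for author, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(author, G.nodes[author]["affiliation"], round(score, 3))
```

Aggregating nodes by the `affiliation` attribute (rather than by individual) would give the institution-level view the episode contrasts with this one.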