Research community dynamics behind popular AI benchmarks

Martínez-Plumed, Fernando; Barredo, Pablo; Ó hÉigeartaigh, Seán; Hernández-Orallo, José

NATURE MACHINE INTELLIGENCE
2021
Volume 3, pages 581–589
Abstract
Experimental benchmarks such as ImageNet and Atari games play an important part in advancing artificial intelligence research. An analysis of results and papers linked to 25 popular benchmarks shows that research dynamics beyond conventional co-authorship have developed in this area. The widespread use of experimental benchmarks in AI research has created competition and collaboration dynamics that are still poorly understood. Here we provide an innovative methodology to explore these dynamics and analyse the way different entrants in these challenges, from academia to tech giants, behave and react depending on their own or others' achievements. We perform an analysis of 25 popular benchmarks in AI from Papers With Code, with around 2,000 result entries overall, connected with their underlying research papers. We identify links between researchers and institutions (that is, communities) beyond the standard co-authorship relations, and we explore a series of hypotheses about their behaviour as well as some aggregated results in terms of activity, performance jumps and efficiency. We characterize the dynamics of research communities at different levels of abstraction, including organization, affiliation, trajectories, results and activity. We find that hybrid, multi-institution and persevering communities are more likely to improve state-of-the-art performance, which becomes a watershed for many community members. Although the results cannot be extrapolated beyond our selection of popular machine learning benchmarks, the methodology can be extended to other areas of artificial intelligence or robotics, and combined with bibliometric studies.