March 16, 2026
Analysis of METR Data Projects Uncertain AI Progress Trajectories Amid Safety Concerns
In a post published on LessWrong today, March 16, 2026, researcher Alvin Ånestrand takes up a pivotal question for the AI safety community: will AI progress accelerate or slow down? His analysis centers on METR's time-horizon metric, which measures the length of tasks AI systems can complete autonomously, a key indicator of potential risks from advanced capabilities.
METR (Model Evaluation and Threat Research), an independent AI safety organization, conducts rigorous evaluations to track dangerous capabilities in frontier models, informing alignment strategies and policy. Recent METR results show explosive progress: time horizons for 2024-2025 model releases are improving roughly 10x per year, doubling about every 3.5 months, a sharp acceleration from the prior rate of about 3x annually, which corresponds to a 7-month doubling time.
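As a sanity check on these figures, note that a doubling time of d months implies an annual multiplier of 2^(12/d). The short Python sketch below (our illustration, not from the post or from METR) verifies that both quoted rates are consistent:

```python
def annual_multiplier(doubling_months: float) -> float:
    """Annual growth factor implied by a given doubling time in months."""
    return 2 ** (12 / doubling_months)

print(annual_multiplier(3.5))  # ~10.8x per year (the recent ~10x rate)
print(annual_multiplier(7.0))  # ~3.3x per year (the earlier ~3x rate)
```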
Ånestrand's projection weighs competing dynamics. Arguments for acceleration highlight AI increasingly aiding its own development, potentially compounding gains. Counterarguments point to an impending slowdown as reinforcement learning (RL) compute costs rise and diminishing returns set in, possibly reverting to the slower 7-month doubling time by late 2026 or 2027.
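To make the divergence between these scenarios concrete, the sketch below extrapolates a time horizon under both doubling rates. The one-hour starting horizon and two-year window are our assumptions for illustration, not figures from the post or from METR:

```python
def horizon_after(months: float, start_hours: float, doubling_months: float) -> float:
    """Time horizon after `months` of growth at the given doubling time."""
    return start_hours * 2 ** (months / doubling_months)

# Assumed starting horizon: 1 hour. Compare the two scenarios after 24 months.
for label, d in [("fast (3.5-month doublings)", 3.5), ("slow (7-month doublings)", 7.0)]:
    print(f"{label}: ~{horizon_after(24, 1.0, d):.0f}-hour horizon after two years")
```

Under these assumptions the fast scenario yields a roughly 100-hour horizon after two years, while the slow scenario yields closer to 10 hours, an order-of-magnitude gap that illustrates why the question matters for planning.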
This forecast arrives amid heated debate in AI safety circles, where accurate timelines are essential for resource allocation in alignment research. METR's benchmarks have become a gold standard, underscoring the urgency of scaling safety measures as capabilities surge.
As AI labs race forward, analyses like Ånestrand's, grounded in METR's data, provide vital input for anticipating when models might reach transformative thresholds, urging the field to prepare for both acceleration scenarios and potential plateaus in pursuit of robust alignment solutions.