AI Agents of the Week: Papers You Should Know About
Stay ahead of the curve with LLM Watch
This week, we saw breakthroughs in how AI agents remember and manage long-term knowledge, coordinate complex tasks through orchestration frameworks, and even teach themselves from their own mistakes. The highlights range from a multi-agent memory system that lets an AI genuinely remember across extended conversations, to a robust task scheduler that delegates jobs among specialized sub-agents, to an ambitious demo of more than 30 AIs working together to fully automate scientific discovery. We also explore new methods that let agents self-correct and learn from their failures, and a clever planning approach that handles long tasks with short memory by structuring intermediate results.
Taken together, these advances hint at a future in which AI teammates can carry out complex, long-duration projects (even scientific research!) with minimal human intervention, leveraging better memory, better planning, and the ability to reflect and improve on the fly. Below, we unpack five standout papers from the past week, explaining each one's core innovation, why it matters for autonomous agents, what problem it solves, and what it unlocks for the road ahead.