LLM Watch

The Week in AI Agents

AI Agents of the Week: Papers You Should Know About

Get ahead of the curve with LLM Watch

Nov 16, 2025
∙ Paid

Executive Summary

This week in AI agents:

  1. Researchers introduced a graph-based memory system that lets agents learn high-level strategies from experience, boosting their cross-task generalization. To tackle multi-agent coordination, new frameworks enabled agents to share learned representations and retrieve past trajectories as context, accelerating team adaptation to novel tasks.

  2. Communication efficiency emerged as a theme: one study defined novel metrics to learn leaner communication protocols, cutting redundant messages while improving cooperative success.

  3. Another work proposed a “Subject-DAG” multi-agent planner that decomposes complex problems by domain, assigning a specialized LLM agent to each subject area and routing information among them, significantly outperforming one-size-fits-all approaches.

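The Subject-DAG idea can be sketched in a few lines. The code below is a minimal illustration, not the paper’s implementation: `make_agent`, `run_subject_dag`, and the subject names are all hypothetical stand-ins. The key mechanics are decomposition by subject, one specialized agent per node, and routing each agent’s output to its downstream dependents in dependency order.

```python
from graphlib import TopologicalSorter

# Hypothetical stand-in for a per-subject specialized LLM agent.
def make_agent(subject):
    def agent(question, upstream):
        # A real agent would call an LLM here; we just record the routing.
        context = "; ".join(upstream) if upstream else "no upstream input"
        return f"[{subject}] answer to {question!r} (given: {context})"
    return agent

def run_subject_dag(question, edges):
    """Run subject agents in dependency order, routing outputs downstream.

    `edges` maps each subject to the set of subjects it depends on,
    the predecessor format graphlib.TopologicalSorter expects.
    """
    agents = {subject: make_agent(subject) for subject in edges}
    outputs = {}
    for subject in TopologicalSorter(edges).static_order():
        upstream = [outputs[dep] for dep in edges[subject]]
        outputs[subject] = agents[subject](question, upstream)
    return outputs

# Example DAG: "statistics" feeds "ml", and both feed "report".
edges = {"statistics": set(), "ml": {"statistics"}, "report": {"ml", "statistics"}}
result = run_subject_dag("Is the model overfitting?", edges)
print(result["report"])
```

The upshot of the DAG structure is that a downstream agent (here, `report`) never has to reason about the whole problem at once; it only synthesizes the answers routed from its predecessors.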
Collectively, these papers move us towards agent architectures that are more memory-equipped, specialized, and efficient, laying the groundwork for agents that can reason, plan, and learn in increasingly sophisticated ways. Below, we’ll take a closer look at each paper’s core innovations, why they matter, and what they hint at for the future of autonomous AI agents.


© 2025 Pascal Biese