LLM Watch

State of AI Agents

The Week in AI Agents: Papers You Should Know About

Stay ahead of the curve with LLM Watch

Pascal Biese
May 04, 2025

Another week, another issue! This week, researchers tackled long-standing challenges like an agent’s ability to remember past interactions, plan complex tasks over time, and learn from its own experience – all critical to moving from single-turn chatbots to truly autonomous helpers.

More specifically, we’ll cover the following highlights:

  1. Long-Term Memory Architectures (Mem0): A new memory system for LLM-based agents that persists and organizes knowledge beyond fixed context windows, enabling far more coherent multi-session dialogues while slashing latency and cost. This innovation addresses the memory bottleneck in sustained autonomous interactions. (A minimal code sketch of this pattern follows the list.)

  2. LLM-Driven Robotics & Planning: A household robot agent that uses multiple LLMs to interpret goals, plan steps, and recall past actions for long-horizon tasks like tidying up rooms. By integrating visual scene understanding and memory retrieval, it achieves high task completion accuracy in unstructured home environments.

  3. Self-Improving Agents via Experience: A learning technique where agents automatically generate and reuse their own successful examples to improve at sequential decision-making tasks. Without any human-provided prompts or fine-tuning, this method boosted task success rates (e.g., from 73% to 91% on one benchmark), rivaling more complex specialized approaches. (A sketch of the experience-reuse loop also follows the list.)

  4. Monte Carlo Dynamic Memory–guided LLM Planning (MC-DML): A hybrid planning approach that combines Monte Carlo Tree Search with an LLM for semantic state evaluation and a dynamic memory module that records outcomes across trials. In text-based adventure games, MC-DML systematically simulates action sequences, uses the LLM to predict natural-language consequences, and leverages past successes and failures to bias its search. The result is superior first-attempt performance, exemplifying how structured search and language reasoning together yield sample-efficient, grounded autonomy. (A condensed sketch of the search loop closes out this list.)
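
To make a few of these ideas concrete before the deep dive, here are some minimal, illustrative sketches in Python. First, the long-term memory pattern behind systems like Mem0: rather than replaying an entire conversation history, the agent persists distilled facts outside the context window and retrieves only the relevant ones each turn. The class, file name, and keyword-overlap retrieval below are stand-ins chosen for brevity, not Mem0's actual API; a real system would use embedding search and smarter memory consolidation.

```python
# Illustrative sketch of a persistent agent memory layer (hypothetical names).
import json
from pathlib import Path

class MemoryStore:
    """Persists short natural-language facts across sessions and retrieves
    the most relevant ones for a new query via simple keyword overlap."""

    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

    def add(self, fact: str) -> None:
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts, indent=2))  # survives restarts

    def retrieve(self, query: str, k: int = 3) -> list:
        q = set(query.lower().split())
        ranked = sorted(self.facts, key=lambda f: -len(q & set(f.lower().split())))
        return ranked[:k]

def build_prompt(memory: MemoryStore, user_message: str) -> str:
    """Prepend only the relevant memories instead of the full history,
    keeping the prompt short no matter how long the relationship gets."""
    context = "\n".join(f"- {m}" for m in memory.retrieve(user_message))
    return f"Known facts about the user:\n{context}\n\nUser: {user_message}"

# Usage: facts written in an earlier session are still available here.
mem = MemoryStore()
mem.add("The user is allergic to peanuts.")
mem.add("The user prefers answers as bullet points.")
print(build_prompt(mem, "Suggest a snack I can take on my hike."))
```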
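
Second, the self-improvement loop from highlight 3, reduced to its core: whenever the agent solves a task, the successful trajectory is banked and replayed as a few-shot demonstration for future tasks, with no human-written examples or fine-tuning. Note that call_llm and solved are placeholders to be wired to a real model and a task-specific checker; the structure, not the names, is the point.

```python
# Illustrative sketch of experience reuse via self-generated demonstrations.
from typing import Callable

class ExperienceBank:
    """Stores (task, successful trajectory) pairs generated by the agent itself."""

    def __init__(self):
        self.examples = []  # list of (task, trajectory) tuples

    def add(self, task: str, trajectory: str) -> None:
        self.examples.append((task, trajectory))

    def as_prompt(self, k: int = 3) -> str:
        shots = self.examples[-k:]
        return "\n\n".join(f"Task: {t}\nSolution:\n{s}" for t, s in shots)

def attempt(task: str, bank: ExperienceBank,
            call_llm: Callable[[str], str],
            solved: Callable[[str, str], bool]) -> bool:
    """One episode: condition on the agent's own past successes, try the task,
    and bank the trajectory if it worked so later tasks get a better prompt."""
    prompt = f"{bank.as_prompt()}\n\nTask: {task}\nSolution:"
    trajectory = call_llm(prompt)
    if solved(task, trajectory):
        bank.add(task, trajectory)  # success becomes a future few-shot example
        return True
    return False

# Toy usage with stand-in callables; replace with a real model and checker.
bank = ExperienceBank()
attempt("sort the list [3, 1, 2]", bank,
        call_llm=lambda p: "[1, 2, 3]",
        solved=lambda t, out: out == "[1, 2, 3]")
```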
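
Finally, a condensed sketch in the spirit of MC-DML: tree search provides systematic exploration, an LLM scores how promising a game state looks, and a memory that persists across trials nudges the search toward actions that worked before and away from ones that failed. Here llm_value is a stub for the model call and the cross-trial memory is a deliberately simple bonus/penalty; the actual method is considerably more involved.

```python
# Illustrative sketch: UCB-style search + LLM state evaluation + trial memory.
import math, random
from collections import defaultdict

def llm_value(state: str) -> float:
    """Stand-in for the LLM call that reads the game text and scores progress."""
    return random.random()

class Searcher:
    def __init__(self, actions, exploration=1.4):
        self.N = defaultdict(int)         # visit counts per (state, action)
        self.Q = defaultdict(float)       # running value per (state, action)
        self.memory = defaultdict(float)  # cross-trial bonus/penalty per action
        self.actions = actions
        self.c = exploration

    def select(self, state: str) -> str:
        total = sum(self.N[(state, a)] for a in self.actions) + 1
        def ucb(a):
            n = self.N[(state, a)]
            explore = self.c * math.sqrt(math.log(total) / (n + 1))
            return self.Q[(state, a)] + explore + self.memory[a]
        return max(self.actions, key=ucb)

    def update(self, state: str, action: str, reward: float) -> None:
        key = (state, action)
        self.N[key] += 1
        self.Q[key] += (reward - self.Q[key]) / self.N[key]  # incremental mean

    def end_trial(self, trajectory, success: bool) -> None:
        """After a full attempt, remember which actions helped or hurt."""
        for _, action in trajectory:
            self.memory[action] += 0.1 if success else -0.1

# A handful of simulated trials against a toy "game": select with UCB plus the
# memory bias, score states with the LLM, then fold the outcome back in.
searcher = Searcher(actions=["go north", "take key", "open door"])
for _ in range(20):
    trajectory, state = [], "start"
    for _ in range(5):                          # bounded rollout depth
        action = searcher.select(state)
        next_state = state + " | " + action     # stand-in for the game engine
        reward = llm_value(next_state)          # LLM as semantic state evaluator
        searcher.update(state, action, reward)
        trajectory.append((state, action))
        state = next_state
    searcher.end_trial(trajectory, success=reward > 0.8)
```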

Each of these developments expands what autonomous AI is capable of. We'll take a close look at all of these contributions, explaining the core ideas in intuitive terms, the problems they solve, the new capabilities they enable, and how they fit into the broader trajectory of making AI agents more effective and trustworthy for researchers, developers, and society.

Let’s start the deep dive!

Keep reading with a 7-day free trial

Subscribe to LLM Watch to keep reading this post and get 7 days of free access to the full post archives.
