Very insightful. The specific dataset used to better reveal the higher-level reasoning already inherent in LLMs is very interesting. For clarification, is CoAT employing a structured memory graph - and if so, what do the nodes and edges represent?
Hello - can we say that CoAT is extending the prompt-stage RAG concept to LLM reasoning? Thanks
I wanted to say no at first, but there actually is one fundamental similarity between the two: both give the model information it would "usually" not have access to. If that's what you're thinking of, then yes. Other than that, they are very different concepts.
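To make that difference concrete, here is a minimal sketch of my own (not the CoAT authors' code; the helpers `retrieve`, `associate`, and `generate` are hypothetical stand-ins): prompt-stage RAG injects external context once, before generation, whereas a CoAT-style loop injects associative content at each step, so the extra information evolves with the reasoning itself.

```python
# Illustrative sketch only -- not the CoAT authors' implementation.
# `retrieve`, `associate`, and `generate` are hypothetical stand-ins for
# a retriever, an associative-memory lookup, and an LLM call.

def retrieve(query: str) -> str:
    """Stub retriever: returns external context for a query."""
    return f"<retrieved context for: {query}>"

def associate(partial_reasoning: str) -> str:
    """Stub associative memory: content triggered by the reasoning so far."""
    return f"<association for: {partial_reasoning[-40:]}>"

def generate(prompt: str) -> str:
    """Stub LLM call: returns the next reasoning step for a prompt."""
    return f"<next step given {len(prompt)} chars of context>"

# Prompt-stage RAG: external information is injected once, up front.
def rag_answer(question: str) -> str:
    context = retrieve(question)
    return generate(f"{context}\n\nQuestion: {question}")

# CoAT-style loop: associative content is injected at every reasoning
# step, inside the search, rather than only before generation starts.
def coat_answer(question: str, steps: int = 3) -> str:
    reasoning = question
    for _ in range(steps):
        reasoning += "\n" + associate(reasoning)   # inject mid-reasoning
        reasoning += "\n" + generate(reasoning)    # extend the chain
    return reasoning

if __name__ == "__main__":
    print(rag_answer("What does CoAT add over RAG?"))
    print(coat_answer("What does CoAT add over RAG?"))
```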