> An agent has no such learning ability. At least not out of the box. It will continue making the same errors over and over again. Depending on the training data it might also come up with glorious new interpolations of different errors.
This is honestly the main thing I find annoying about LLMs.
https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/