I don't think the hype around "state of the art" small models like lfm2.5-thinking is warranted. Even if there's a bunch of "benchmarks" they score higher on compared to other models of their "class", they're still too weak to reliably perform tasks. Like all tiny models they're prone to babbling and getting confused.
@mauve I tried to rein them in by tuning lots of ollama hyperparameters and setting the temperature low, but even then I'm usually better off just implementing a fuzzy string-matching algo on top of text files (no AI/neural net etc)
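For what it's worth, here's a minimal sketch of what that non-AI fuzzy-matching baseline could look like, using only Python's stdlib `difflib` (the `notes` folder name and the 0.6 cutoff are just illustrative assumptions, not from the original setup):

```python
# Sketch: fuzzy string matching over plain text files, no neural net.
# The "notes" folder and cutoff value are illustrative assumptions.
import difflib
from pathlib import Path

def fuzzy_search(query: str, folder: str = "notes", cutoff: float = 0.6):
    """Return (filename, line) pairs whose lines roughly match the query."""
    hits = []
    for path in Path(folder).glob("*.txt"):
        lines = path.read_text(encoding="utf-8").splitlines()
        # get_close_matches ranks candidates by SequenceMatcher ratio
        for line in difflib.get_close_matches(query, lines, n=3, cutoff=cutoff):
            hits.append((path.name, line))
    return hits
```

It's naive (loads every file, compares whole lines), but for small local text collections it's fast, deterministic, and tolerates the typos a tiny LLM would choke on.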