Having tested a bunch of #OpenSource #LLM projects, I gotta say that OpenHermes 2.5 is the most helpful out of the ones I can run locally.
I recently wasted a bunch of time getting Phi-2 to do some summarization work, and it just couldn't stay focused for more than a sentence or two.
@mauve I was tinkering with ollama for a bit, but my local hardware just isn't fast enough to make it useful.
@skryking What have you been using to run the models? I find LM Studio really nice for tinkering. https://lmstudio.ai/
I find Q4-quantized models work pretty well on my Steam Deck.
@skryking it has less innate knowledge of facts but it is pretty good at "reasoning". I'm gonna teach it to make function calls and traverse datasets + summarize stuff. 😁
@mauve I really just need to get off my lazy butt and buy a new graphics card so I can do more acceleration.
@mauve do you have any documentation / links of how you teach it to use functions?
@skryking This post by @simon is what exposed me to the idea for the first time: https://til.simonwillison.net/llms/python-react-pattern
I also have a slightly improved prompt here: https://gist.github.com/RangerMauve/19be7dca9ced8e1095ed2e00608ded5e
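The core loop behind that pattern can be sketched roughly like this — a minimal, self-contained sketch (not Simon's actual code or my gist) with a stubbed-out model and a made-up `lookup` tool standing in for a real LLM and real APIs:

```python
import re

def lookup(term):
    """Toy 'tool' the model can call; a real one might hit Wikipedia or a dataset."""
    facts = {"ollama": "A local runner for open-source LLMs."}
    return facts.get(term.lower(), "No entry found.")

# Registry of tools the loop is allowed to dispatch to.
TOOLS = {"lookup": lookup}

def fake_model(prompt):
    """Stand-in for the LLM: first emits a tool call, then a final answer."""
    if "Observation:" not in prompt:
        return "Thought: I should look that up.\nAction: lookup: ollama"
    return "Answer: Ollama is a local runner for open-source LLMs."

def react_loop(question, max_turns=5):
    """Alternate model output and tool results until an Answer appears."""
    prompt = f"Question: {question}"
    for _ in range(max_turns):
        reply = fake_model(prompt)
        if "Answer:" in reply:
            return reply.split("Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+): (.*)", reply)
        if match:
            tool, arg = match.groups()
            observation = TOOLS[tool](arg.strip())
            prompt += f"\n{reply}\nObservation: {observation}"
    return None

print(react_loop("What is ollama?"))
# → Ollama is a local runner for open-source LLMs.
```

The real version just swaps `fake_model` for a call to the local LLM and keeps appending each Observation back into the prompt so the model can chain tool calls.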
I'll likely be publishing any new work as open source on GitHub. :) Probably with Rust.
@skryking Nice. I've been wanting to get into Rust for years but didn't have much of a use case. Now with the candle library from HuggingFace and my latest adventures with LLMs I've had an actual reason to write something in it. :) https://github.com/huggingface/candle/
@mauve yeah I'm still hunting for a use case at the moment. Something non-work-related and interesting enough to keep my old, hard-to-focus brain interested.
@skryking For me it was more that I can finally make this stuff work related and potentially find clients to pay me to mess with it. :P Sadly my hand pain makes computer touching less appealing off the clock.
@mauve Thanks for the suggestion, I just fired it up... that one is definitely faster than Llama 2 in CPU-only mode.