@skryking For me it was more that I can finally make this stuff work-related and potentially find clients to pay me to mess with it. :P Sadly, my hand pain makes computer touching less appealing off the clock.
@skryking Nice. I've been wanting to get into Rust for years but didn't have much of a use case. Now with the candle library from HuggingFace and my latest adventures with LLMs I've had an actual reason to write something in it. :) https://github.com/huggingface/candle/
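If anyone's curious, the "hello world" for candle is tiny. Roughly this, a minimal sketch based on the project's README (assumes the `candle-core` crate in Cargo.toml):

```rust
// Minimal candle sketch: multiply two random tensors on the CPU.
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let device = Device::Cpu;
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;
    let c = a.matmul(&b)?; // 2x3 * 3x4 -> 2x4
    println!("{c}");
    Ok(())
}
```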
@lutindiscret I'm putting together a Matrix one here: https://matrix.to/#/#userless-agents:mauve.moe
@skryking This post by @simon is what exposed me to the idea for the first time: https://til.simonwillison.net/llms/python-react-pattern
I also have a slightly improved prompt here: https://gist.github.com/RangerMauve/19be7dca9ced8e1095ed2e00608ded5e
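The core trick is just scanning the model's output for an `Action: name: input` line, running that tool, and feeding the result back as an `Observation:`. A rough Rust sketch of that dispatch step (the tool names and line format here are illustrative, borrowed from the prompt style, not any library API):

```rust
// Hypothetical ReAct-style dispatch: find an "Action: name: input" line in the
// model's output, run the named tool, and return the text to feed back.
fn run_action(name: &str, input: &str) -> String {
    // Placeholder tools; a real agent would call a search API, calculator, etc.
    match name {
        "calculate" => format!("(pretend we evaluated `{input}`)"),
        "wikipedia" => format!("(pretend we searched Wikipedia for `{input}`)"),
        _ => format!("Unknown action: {name}"),
    }
}

fn observation_for(model_output: &str) -> Option<String> {
    for line in model_output.lines() {
        if let Some(rest) = line.strip_prefix("Action: ") {
            let (name, input) = rest.split_once(": ")?;
            return Some(format!("Observation: {}", run_action(name.trim(), input.trim())));
        }
    }
    None // no action requested; the model's answer is final
}

fn main() {
    let output = "Thought: I should look this up.\nAction: wikipedia: Rust (programming language)";
    if let Some(obs) = observation_for(output) {
        println!("{obs}"); // this gets appended to the next prompt turn
    }
}
```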
I'll likely be publishing any new work as open source on GitHub. :) Probably in Rust.
@jonny I liked Dunkey's coverage of the gameplay :P https://www.youtube.com/watch?v=buRSN13jH3E
@jonny Are you gonna play the pokemon with guns game?
@skryking It has less innate knowledge of facts, but it's pretty good at "reasoning". I'm gonna teach it to make function calls and traverse datasets + summarize stuff. 😁
@skryking Nice. I only do CPU workloads. Try running Phi-2 some time! It's super low in resource usage, particularly the Q4 quantized models.
@laskov Oh yeah, I read their release but haven't used it yet. Was there anything specific they excelled at?
@staltz I've already got the programming socks. :P
@skryking What have you been using to run the models? I find LM Studio really nice for tinkering. https://lmstudio.ai/
I find Q4 quantized models work pretty well on my Steam Deck.
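If you'd rather script against it than use the chat UI, LM Studio can also expose a local OpenAI-compatible server. A rough Rust sketch, assuming the default http://localhost:1234 address and the `reqwest` (with "blocking" and "json" features) and `serde_json` crates:

```rust
// Hypothetical sketch: one chat completion request against LM Studio's local
// OpenAI-compatible server. Assumes the default port and a model already loaded.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = json!({
        // "local-model" is a placeholder; LM Studio answers with whatever is loaded.
        "model": "local-model",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize what a Q4 quantized model is in one sentence."}
        ],
        "temperature": 0.7
    });

    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:1234/v1/chat/completions")
        .json(&body)
        .send()?
        .json()?;

    // Same response shape as the OpenAI API.
    println!("{}", resp["choices"][0]["message"]["content"].as_str().unwrap_or(""));
    Ok(())
}
```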
Hmm, after testing raw Phi-2 within LM Studio instead of the examples provided by HuggingFace candle, I think it's actually pretty decent after all.
Specifically, I got TheBloke's Phi-2 Q4_K_S GGUF working.
Can't get that model running with candle, since it can't seem to load the model weights.
Having tested a bunch of #OpenSource #LLM projects, I gotta say that OpenHermes 2.5 is the most helpful out of the ones I can run locally.
I recently wasted a bunch of time getting Phi-2 to do some summarization work, and it just couldn't stay focused for more than a sentence or two.
Woot, I have finally written enough #rust code to be unable to avoid the lifetime specifications / borrow checker stuff.
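For anyone who hasn't hit it yet, the classic case is returning a reference that could borrow from either argument; the compiler makes you spell out the lifetime relationship (illustrative example, not from my project):

```rust
// Without the explicit lifetime 'a, rustc can't tell whether the returned
// reference borrows from `x` or from `y`, so it refuses to compile.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let a = String::from("borrow");
    let b = String::from("checker");
    println!("{}", longest(&a, &b));
}
```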
@beka_valentine RIP. What software are you using? I generally find OBS does everything I want.
I love this post-mortem from a former #p2p enthusiast... 🧵
"DHTs were not reliable or performant. We were way too optimistic about device discovery and NAT traversal."
He's absolutely right. If you're using a DHT, you're doing it wrong. It might have been the right primitive in 2003, but not today.
Occult Enby that's making local-first software with peer-to-peer protocols, mesh networks, and the web.
Exploring what a local-first cyberspace might look like in my spare time.