Thinking back to the first time I used my head-mounted display as a teleprompter when doing public speaking. 😂
🤖 v1.0.0-beta.15: Local LLMs!
You can now configure local LLM models in peersky://settings/llm
It comes with Qwen2.5-Coder 3B as the default model.
The APIs are currently available to apps such as peersky://p2p/editor/ and peersky://p2p/ai-chat/
Thanks to @agregore and @mauve for the support!
Docs: https://github.com/p2plabsxyz/peersky-browser/blob/main/docs/LLM.md
What’s next?
https://github.com/p2plabsxyz/peersky-browser/issues/97
are you fucking kidding me
Neat paper hypothesizing that the reason we sleep is so that we don't evolve to be active both day and night and dilute our fitness across both niches. Specializing for one half of the cycle makes animals more effective, similar to hibernation and migration in the winter.
So I wrote a blog post on LLM performance. It was focused on SWE-Bench and discussed why performance is topping out.
As part of the post I pulled down gigs of runs from the SWE-Bench S3 bucket and went through several of the harder test cases. I focused on improvements over the last six months, primarily on Opus.
Regrettably, I'm probably not moving forward on that post. Why? Because after going through the data I found that the LLMs are cheating on the tests. And that's a whole different thing.
Occult Enby that's making local-first software with peer to peer protocols, mesh networks, and the web.
Yap with me and send me cool links relating to my interests. 👍