@light I use it as an offline alternative to search engines for generating code snippets, and for using vision models to extract text from camera images. Also messing with local models for generating web apps for non-coders. It runs small local models that the users control instead of apps that depend on expensive cloud services.
"This breakthrough enables each pixel of an OLED display to simultaneously emit different sounds, essentially allowing the display to function as a multichannel speaker array"
https://www.sciencedaily.com/releases/2025/05/250521125055.htm
@tofu It's like if XMPP was JSON
@fleeky More like an octopus with many tentacles that have some specialized talent 🐙
BTW for folks into #localfirst #ai , come join the userless-agents Matrix channel to talk about approaches and use cases!
maybe I should buy a 30-pack of Raspberry Pi Picos and program each one as a keyboard that types one hardcoded letter, and nothing else.
then to type out "hello world" I just have to plug in the USB cables one by one in the right order
Made another lil #agregore app today. This combines the web's SpeechSynthesis API with Agregore's LLM APIs to talk to a spooky LLM with it responding with a voice as well as text.
hyper://816idd9ddxq8asy68sya1y3du3nyipiszcr6tfyq66x47ha3jxuy/speak_ai.html
Make sure to set up Speech Synthesis on your machine if it doesn't work initially. On Linux I had to set up speech-dispatcher and espeak-ng. This should work fully offline, too!
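The gist of the speech side is just the standard web SpeechSynthesis API. Here's a rough sketch of how you might pipe an LLM reply into it; `splitIntoUtterances` is a hypothetical helper (not from the app above) that breaks long replies on sentence boundaries, since some engines cut off very long utterances. The actual LLM call would come from Agregore's LLM APIs, check its docs for the real method names.

```javascript
// Pure helper: split a long reply into speakable chunks on sentence
// boundaries, so no single utterance exceeds maxLen characters.
function splitIntoUtterances(text, maxLen = 200) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [text];
  const chunks = [];
  let current = "";
  for (const s of sentences) {
    if ((current + s).length > maxLen && current) {
      chunks.push(current.trim());
      current = "";
    }
    current += s;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// Browser-only part: queue each chunk on the speech synthesizer.
function speakReply(text) {
  if (typeof speechSynthesis === "undefined") return; // no-op outside a browser
  for (const chunk of splitIntoUtterances(text)) {
    speechSynthesis.speak(new SpeechSynthesisUtterance(chunk));
  }
}
```

Calling `speakReply(replyText)` as each LLM response arrives is enough to get both text and voice output.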
IMO Ollama's streaming API is much nicer, but here's an example repo for streaming #OpenAI inference from a request with zero dependencies in #JavaScript.
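The zero-dependency trick is just `fetch` plus reading the server-sent-events body off the `ReadableStream` yourself. A minimal sketch of the idea (not the repo's actual code; the model name and `onToken` callback are illustrative):

```javascript
// Parse a buffered slice of a text/event-stream body. Returns the JSON
// payloads of every complete `data:` line, plus the trailing partial
// line to carry over into the next read.
function parseSSE(buffer) {
  const lines = buffer.split("\n");
  const rest = lines.pop(); // last element may be an incomplete line
  const events = [];
  for (const line of lines) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") continue; // OpenAI's end-of-stream marker
    events.push(JSON.parse(payload));
  }
  return { events, rest };
}

// Stream a chat completion with plain fetch, no libraries.
async function streamChat(apiKey, prompt, onToken) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model name
      stream: true,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const parsed = parseSSE(buffer);
    buffer = parsed.rest;
    for (const ev of parsed.events) {
      const token = ev.choices?.[0]?.delta?.content;
      if (token) onToken(token);
    }
  }
}
```

The fiddly part is that network chunks don't align with event boundaries, hence carrying the partial line between reads.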
Occult Enby that's making local-first software with peer to peer protocols, mesh networks, and the web.
Yap with me and send me cool links relating to my interests. 👍