I should practice with my synth more to get the ability to perform live. It's kinda like public speaking, which I do all the time, but it feels higher stakes. Worst case rn I just bore some nerds. 😅
Also I can just give them 20 bucks and make their day, unlike with big artists where my contribution is a tiny drop in the bucket.
"This breakthrough enables each pixel of an OLED display to simultaneously emit different sounds, essentially allowing the display to function as a multichannel speaker array"
https://www.sciencedaily.com/releases/2025/05/250521125055.htm
BTW for folks into #localfirst #ai , come join the userless-agents Matrix channel to talk about approaches and use cases!
maybe I should buy a 30-pack of Raspberry Pi Picos and program them each as a keyboard that types one hardcoded letter, and nothing else.
then to type out "hello world" I just have to plug in the USB cables one by one in the right order
Made another lil #agregore app today. It combines the web's SpeechSynthesis API with Agregore's LLM APIs to talk to a spooky LLM that responds with a voice as well as text.
hyper://816idd9ddxq8asy68sya1y3du3nyipiszcr6tfyq66x47ha3jxuy/speak_ai.html
Make sure to set up Speech Synthesis on your machine if it doesn't work initially. On Linux I had to set up speech-dispatcher and espeak-ng. This should work fully offline, too!
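The core trick is tiny, so here's a minimal sketch of it. The sentence-splitting helper is pure JS; the speaking part uses the standard browser SpeechSynthesis API. How you get text out of Agregore's LLM API is up to its docs, so that part is only hinted at in a comment.

```javascript
// Pure helper: break text into sentence-sized chunks so speech can
// start before the whole LLM reply has arrived.
function sentenceChunks(text) {
  return (text.match(/[^.!?]+[.!?]+/g) || []).map((s) => s.trim());
}

// Browser-only: speak one chunk aloud via the web's SpeechSynthesis API.
// (Feed it each sentence of the LLM reply as the text streams in.)
function speakChunk(text) {
  if (typeof speechSynthesis === "undefined") return; // not in a browser
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}
```

Splitting on sentence boundaries is just one design choice; it keeps the voice from lagging a whole paragraph behind the streamed text.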
IMO Ollama's streaming API is much nicer, but here's an example repo for streaming #OpenAI inference from a request with zero dependencies in #JavaScript.
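The zero-dependency version boils down to plain `fetch` plus parsing Server-Sent Events by hand. A rough sketch (endpoint and model name are just placeholders; real SSE events can also split across network chunks, which this simplification ignores):

```javascript
// Pure helper: pull the `data:` payloads out of an SSE text chunk,
// dropping the final [DONE] sentinel OpenAI sends.
function parseSSE(text) {
  return text
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]");
}

// Stream tokens from an OpenAI-style chat endpoint (not invoked here).
async function streamChat(apiKey, prompt, onToken) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model name
      stream: true,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const payload of parseSSE(decoder.decode(value, { stream: true }))) {
      const delta = JSON.parse(payload).choices[0].delta.content;
      if (delta) onToken(delta);
    }
  }
}
```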
Merkle-DAGs are a neat concept. But honestly `last known writer + writer lamport clock` is a pretty good timestamp too.
I guess I'm thinking of documents as a sort of tree like `root/page/section/subsection/paragraph0...n` where each leaf is a value + a clock for last write wins.
Writer set guarded by something like Keyhive, or a cheap append-only set of public keys if you're lazy.
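The leaf merge rule above fits in a few lines. A sketch of last-write-wins with a Lamport clock, tie-broken on writer id so every replica converges on the same value (field names are my own, not from any particular library):

```javascript
// A leaf register: { value, clock, writer }.
// Higher Lamport clock wins; on a tie, the higher writer id wins,
// so merge order never matters.
function mergeLeaf(a, b) {
  if (a.clock !== b.clock) return a.clock > b.clock ? a : b;
  return a.writer > b.writer ? a : b;
}

// Local write: bump the clock past anything this replica has seen.
function setLeaf(leaf, value, writer) {
  return { value, clock: leaf.clock + 1, writer };
}
```

The writer-id tiebreak is what makes this deterministic: without it, two writes with the same clock could resolve differently on different peers.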
Occult Enby making local-first software with peer-to-peer protocols, mesh networks, and the web.
Yap with me and send me cool links relating to my interests. 👍