I should practice with my synth more to get the ability to perform live. It's kinda like public speaking, which I do all the time, but it feels higher stakes. In the worst case rn I just bore some nerds. 😅

Also I can just give them 20 bucks and make their day, unlike big artists where my contribution is a small drop.

I don't usually listen to music with vocals, so I don't follow many groups that focus on it. But I think I'll try following some local bands 'cause I love seeing them play. Seeing some randos do their best on a small stage warms my heart.

"This breakthrough enables each pixel of an OLED display to simultaneously emit different sounds, essentially allowing the display to function as a multichannel speaker array"

sciencedaily.com/releases/2025

High key considering reworking the LLM stuff in Agregore to be a protocol handler since injecting new JS APIs into iframes hasn't been working. 😅

Not sure what to make the URL scheme look like. The path should be Ollama or OpenAI style, but what would I put in the hostname?

Love when my cats get their evening zoomies and sprint across the entirety of my home and propel themselves over the bed. They're like torpedoes which manage to use my stomach as a springboard for an extra boost.

BTW for folks into , come join the userless-agents Matrix channel to talk about approaches and use cases!

matrix.to/#/#userless-agents:m

I forgot my Apple password and now my mac mini is a locked box. 🙃

"reverse cyborg" is an exciting concept. Machines yearning to graft flesh onto their frame.

maybe I should buy a 30 pack of raspi picos and program them each as a keyboard that types one hardcoded letter, and nothing else.

then to type out "hello world" I just have to plug in the USB cables one by one in the right order

Made another lil app today. This combines the web's SpeechSynthesis API with Agregore's LLM APIs to talk to a spooky LLM with it responding with a voice as well as text.

hyper://816idd9ddxq8asy68sya1y3du3nyipiszcr6tfyq66x47ha3jxuy/speak_ai.html

Make sure to set up Speech Synthesis on your machine if it doesn't work initially. On Linux I had to set up speech-dispatcher and espeak-ng. This should work fully offline, too!

IMO Ollama's streaming API is much nicer, but here's an example repo for streaming inference from a request with zero dependencies in .

github.com/RangerMauve/openai-

Using server sent events in an HTTP POST is the sort of evil I wouldn't even do. 🤪

NDJSON would have been nicer IMO
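To show the difference, here's a minimal sketch of parsing both framings (the `token` payloads are made up for illustration, not real API responses): OpenAI-style SSE wraps each message in a `data: ` prefix with blank-line separators and a `[DONE]` sentinel, while Ollama-style NDJSON is just one JSON object per line.

```javascript
// Parse a chunk of Server-Sent Events text into JSON payloads.
// SSE frames each message as "data: {...}\n\n" and the OpenAI-style
// stream ends with a literal "data: [DONE]" sentinel we have to skip.
function parseSSE(chunk) {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]")
    .map((payload) => JSON.parse(payload));
}

// NDJSON is just one JSON object per non-empty line. No prefix,
// no sentinel, and a plain split+parse is the whole parser.
function parseNDJSON(chunk) {
  return chunk
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
}

const sse = 'data: {"token":"Hi"}\n\ndata: {"token":"!"}\n\ndata: [DONE]\n';
const ndjson = '{"token":"Hi"}\n{"token":"!"}\n';

console.log(parseSSE(sse).map((m) => m.token).join("")); // "Hi!"
console.log(parseNDJSON(ndjson).map((m) => m.token).join("")); // "Hi!"
```

Same data either way, but the NDJSON side needs no prefix stripping and no magic end-of-stream string.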

I just uninstalled all my cracked adobe products in disgust at their new pricing structure

Merkle-DAGs are a neat concept. But honestly `last known writer + writer lamport clock` is a pretty good timestamp too.

I guess I'm thinking documents in a sort of tree like `root/page/section/subsection/paragraph0...n` where each leaf is a value + a clock for last write wins.

Writer set guarded by something like keyhive or a cheap append only set of public keys if you're lazy.

arxiv.org/pdf/2004.00107

inkandswitch.com/keyhive/noteb
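A minimal sketch of that last-write-wins idea (names and shapes are my own for illustration, not from keyhive or the paper): each leaf in the path tree carries a `{ value, clock, writer }` triple, higher lamport clock wins, and ties break deterministically on the writer key so every replica converges.

```javascript
// Pick the winning leaf for a last-write-wins register keyed by
// (lamport clock, writer id). Higher clock wins; ties break on the
// writer's public-key string so all replicas agree on the winner.
function newer(a, b) {
  if (a.clock !== b.clock) return a.clock > b.clock ? a : b;
  return a.writer > b.writer ? a : b;
}

// Merge two replicas of a document tree keyed by paths like
// root/page/section/subsection/paragraphN, where each leaf is a
// { value, clock, writer } triple.
function merge(ours, theirs) {
  const out = { ...ours };
  for (const [path, leaf] of Object.entries(theirs)) {
    out[path] = path in out ? newer(out[path], leaf) : leaf;
  }
  return out;
}

const a = { "root/page/paragraph0": { value: "hi", clock: 2, writer: "key-a" } };
const b = { "root/page/paragraph0": { value: "yo", clock: 3, writer: "key-b" } };
console.log(merge(a, b)["root/page/paragraph0"].value); // "yo"
```

Merge is commutative and idempotent here, which is the whole appeal: no DAG traversal, just compare clocks per leaf.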

Mauvestodon

Escape ship from centralized social media run by Mauve.