You know, I reckon it might help reverse Firefox's declining market share, at very little effort, for Mozilla to build a feed reader into Firefox & run an advertising blitz about how much calmer this "new" way of reading the web is!
They'd get some eyerolls, & some might find it suspicious how broadly supported it'd be out of the gate...
But since they're rightly concerned about their declining market share, this'd be a cheap & hugely beneficial thing to try!
Hey folks! We have a new round of tutorials and example apps out as well as updated `hyper://` docs. Read more on our blog: https://agregore.mauve.moe/blog/2023/12/demos-and-tutorials-second-round
@reconbot what sort of progressive lenses?
For my hackathon project I did try to make CFA (Cat Factor Authentication, using your cat's microchip as a second factor) a thing 😆 The project did win a prize, though more for the experimentation than the actual result https://wpengine.com/blog/hackathon-december-2023/
@smitner Yoooo! I use the same keyboard as my daily driver. 🥰💜
More than ever, we need networking protocols which are resilient, privacy preserving, bandwidth conserving, able to run on low-spec hardware, and not quite as preoccupied with being the global network for everyone ever.
We’re delighted to present Willow, a new family of peer-to-peer protocols that cater to just that niche. https://willowprotocol.org is a guide to those protocols, with full specifications, ~50 hand-drawn diagrams, illustrations, and comics, and much more besides.
Our thanks to @NGIZero for supporting this project!
@fleeky it's like the cyberbody for the LLM
@fleeky It's code that sits between user input and LLM output. It detects the AI doing "internal thinking" so that the user doesn't see it, and when the LLM tries to invoke a function, the harness executes the actual code and feeds the result back to the AI. It also hides those steps from the user, so they only see the final result
@fleeky using open source language models and a "harness" which lets the model invoke external functions that pipe results back into its context and resume generation
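The harness loop described in this thread could be sketched roughly like this. Note this is a minimal illustration, not the actual implementation: the `CALL`/`RESULT`/`FINAL` message markers, the `TOOLS` registry, and the `run_harness` helper are all assumptions made up for the example.

```python
import json

# Hypothetical tool registry -- names and argument formats are illustrative.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def run_harness(model, prompt, max_steps=5):
    """Drive the model until it produces a final answer.

    `model` is any callable mapping the transcript so far to the next
    model message. Tool calls are executed by the harness and their
    results are appended to the transcript (the model's context); the
    intermediate steps are never returned to the user -- only the
    final answer is.
    """
    transcript = prompt
    for _ in range(max_steps):
        message = model(transcript)
        transcript += "\n" + message
        if message.startswith("CALL "):
            # The model asked to invoke a function: run the real code
            # ourselves and feed the result back into its context.
            call = json.loads(message[len("CALL "):])
            result = TOOLS[call["name"]](call["args"])
            transcript += f"\nRESULT {result}"
        elif message.startswith("FINAL "):
            # The only thing the user ever sees.
            return message[len("FINAL "):]
    return None
```

With a fake "model" that first requests a tool call and then answers, `run_harness` executes the call, resumes generation, and surfaces only the final reply.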
@simon Sweet yeah. I appreciate you publishing it! I'm also gonna try formatting it using standard ChatML format.
So far I've just been manually performing the action and pasting results, which is a convenient option
@simon Hey! I tried out your ReAct prompt with OpenHermes Mistral and I found that an important step was to get it to verify whether the result was correct before answering, and to guide it to perform a more specific query if not.
I posted the gist with my prompt here: https://gist.github.com/RangerMauve/19be7dca9ced8e1095ed2e00608ded5e
“Again we have deluded ourselves into believing the myth that capitalism grew and prospered out of the Protestant ethic of hard work and sacrifices. Capitalism was built on the exploitation of black slaves and continues to thrive on the exploitation of the poor, both black and white, both here and abroad.”
MLK
—The Three Evils speech, 1967
I'm severely colourblind - my eyes can hardly detect red light at all.
So, working in web development, picking colour schemes is hard.
There are tools around to help you pick accessible colour schemes, but they assume that you can tell by looking that a colour is the one you want, and the only information you need the computer to calculate is the contrast ratio.
I realised I need a tool that will take the name of a colour and find a shade that gives a target contrast ratio.
Here it is: https://colourblind-palette-maker.glitch.me/
It uses the new APCA perceptual contrast algorithm and the Oklab colour space to help me find colours that people with better colour vision will interpret correctly, while ensuring there's good contrast for as many people as possible.
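As a rough illustration of the search idea behind such a tool: given a target contrast, binary-search shades until the computed contrast matches. This sketch is not the actual tool — it uses the simpler WCAG 2 contrast-ratio formula on grey shades rather than APCA and Oklab — but the shape of the search is the same.

```python
def srgb_luminance(rgb):
    """WCAG 2 relative luminance of an sRGB triple with channels in [0, 1]."""
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(rgb1, rgb2):
    """WCAG 2 contrast ratio, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (srgb_luminance(rgb1), srgb_luminance(rgb2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def shade_for_contrast(background, target, steps=40):
    """Binary-search a grey level whose contrast against `background`
    is (approximately) `target`. Assumes a light background, so darker
    greys give higher contrast."""
    lo, hi = 0.0, 1.0  # grey level: 0.0 = black, 1.0 = white
    for _ in range(steps):
        mid = (lo + hi) / 2
        if contrast((mid, mid, mid), background) > target:
            lo = mid  # too much contrast -> lighten toward the background
        else:
            hi = mid  # too little contrast -> darken
    return (lo + hi) / 2
```

For example, `shade_for_contrast((1.0, 1.0, 1.0), 4.5)` finds the grey that hits roughly a 4.5:1 ratio against white. A real version would search in a perceptual space like Oklab and score with APCA, as the tool above does.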
Occult Enby that's making local-first software with peer-to-peer protocols, mesh networks, and the web.
Exploring what a local-first cyberspace might look like in my spare time.