Link via @psilocervine on Cohost: Some investigative journalism into Unity's road to briefly introducing a company-destroying "install fees" policy last month.
The most interesting takeaway here is that the whole disaster really was just IronSource, who merged with Unity last year, puppeting the company into destroying IronSource's competitor AppLovin at all costs. An entire art form is a pawn to be sacrificed in a fight between two adtech companies you've never heard of.
https://mobilegamer.biz/fuck-you-were-not-paying-inside-unitys-runtime-fee-fiasco/
@neauoire Yeah it'd be neat to do compilers or parsers or transformations over top of this stuff. The part about how to do concurrent streams is interesting.
As a professional todo list (with extra steps) maker this is a useful development for me 😎👉👉
@j3rn Siiick. Is your source published for those? Would love to read it.
@j3rn @neauoire Mobile is the worst part in my experience! I'll take a thousand "python module version mismatch" issues over an xcode upgrade 🤣
Neat! What sort of stuff do you even use it for? I've only really looked into it as a curiosity, since my job is mostly shoveling bytes around in weird ways rather than working with data.
@j3rn Jeeze that's so true. The main reason I avoid too many layers is people keep breaking them and forcing me to update/refactor 🤣
I wanna go the other direction and code in Prolog or something zany just to see what life is like in that world.
@technobaboo Omg yes. Also the characters and music 😭💜
@simon Nice, like a file format for the configs so folks could pass them around and track changes in git?
@simon PR: https://github.com/simonw/llm-gpt4all/pull/17
Gonna need to mess with the parameters more another day though. But my gut feeling is we can up the quality of output significantly by turning down the temperature a bit and setting top_p to 1 and top_k to 4, like in the replicate.com demo.
@simon Looked into this, I think the top_p and top_k are the main differences. The default in gpt4all is way more "loose".
Would a PR that sets different defaults be welcome? Or would you prefer to just have the flags exposed like your llama-cpp example?
https://github.com/simonw/llm-gpt4all/blob/main/llm_gpt4all.py#L112
https://docs.gpt4all.io/gpt4all_python.html#the-generate-method-api
I'll try hardcoding some values and running a generation again to see if it's "better" in the meantime.
@pauldaoust Honestly might replace it with the default textbox implementation in Qt/GNOME next. Syntax highlighting is too many cycles!
Occult Enby that's making local-first software with peer to peer protocols, mesh networks, and the web.
Exploring what a local-first cyberspace might look like in my spare time.