Fun observation about local #LLM setups. They're typically trained to give one-shot responses to everything, e.g. generating an entire app in one go, but the smaller models are too stupid to pull this off and need to be coaxed into multi-shot generation. Half my prompt is effectively "Don't write the rest of the code yet, just do this small part for now".
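Roughly what that looks like in practice, as a minimal sketch: wrap each sub-task with the "small part only" instruction before sending it to the model. The function name, stage list, and wording here are my own illustration, not a fixed API; each returned prompt would go to the model (e.g. via Ollama's chat endpoint) one stage at a time.

```python
# Hypothetical helper: scope the model to one small piece of the app per turn.
STAGES = [
    "Write ONLY the HTML skeleton.",
    "Now add ONLY the CSS styling.",
    "Finally, add the JavaScript that wires it together.",
]

def stage_prompt(task: str, stage: str) -> str:
    """Combine the overall task with one narrow stage instruction."""
    return (
        f"{task}\n\n"
        f"{stage}\n"
        "Don't write the rest of the code yet, just do this small part for now."
    )

prompts = [stage_prompt("Make a lil todo-list web app.", s) for s in STAGES]
```

Sending these in sequence (feeding each reply back into the conversation) tends to keep a 7B model on track where a single "build the whole app" prompt falls apart.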
But yeah, I've got qwen2.5-coder:7b generating lil web apps now from basic prompts for #agregore
I'll need to play with it more, but hopefully this could be an easy alternative to the cloud-based AIs with cloud-based "artifacts".
With Agregore you can generate the app entirely offline and publish it to the #dweb and keep a lil library of built apps for reference.