Fun observation about local #LLM setups. They're typically trained to give one-shot responses to everything, e.g. generating an entire app in one go, but the smaller ones are too stupid to do this successfully and need to be coaxed into multi-shot generation. Half my prompt is effectively "Don't write the rest of the code yet, just do this small part for now".
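The pattern could be sketched like this: a minimal driver loop, assuming a local Ollama server's `/api/chat` endpoint (the step prompts and function names here are illustrative, not from the post).

```python
# Sketch of the "just do this small part for now" pattern described above.
# Assumes a local Ollama server at localhost:11434; the prompts are made up.
import json
import urllib.request

STEPS = [
    "Only write the HTML skeleton for the app. No JS or CSS yet.",
    "Now add only the CSS. Don't touch the HTML or write any JS yet.",
    "Finally add the JS. Change nothing else.",
]

def build_messages(history, step_prompt):
    """Append the next small-step instruction to the running chat history."""
    return history + [{"role": "user", "content": step_prompt}]

def chat(messages, model="qwen2.5-coder:7b"):
    """One call to Ollama's /api/chat endpoint (requires the server running)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(
            {"model": model, "messages": messages, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

def run_steps(steps, ask=chat):
    """Drive the model one small part at a time instead of one-shotting."""
    history, outputs = [], []
    for step in steps:
        history = build_messages(history, step)
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
        outputs.append(reply)
    return outputs
```

Each step's output stays in the chat history, so the model builds on its own earlier parts instead of trying (and failing) to hold the whole app in one generation.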
But yeah I've got qwen2.5-coder:7b generating lil web apps now from basic prompts for #agregore
The real test will be to see if my non techie pals could get absolutely any use out of it. :P