
Getting structured output out of an LLM outside of a raw chat interface is like scooping water with a leaky bucket: you keep plugging holes and adding extra buckets to catch the spills. Constrained JSON output helps in a lot of cases, but it makes the text generation less capable.


@mauve With smart enough caching, maybe you could have it call itself recursively? Like, instead of directly predicting JSON, the model calls a function that starts building JSON and then calls back in with separate prompts/context for each key?
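The per-key idea above could look roughly like this: instead of asking the model for one whole JSON object, run a focused prompt per key and assemble the dict in code, so the JSON is always valid by construction. This is only a sketch; `complete` is a hypothetical stand-in for whatever LLM completion call you use (here it just returns canned answers).

```python
import json

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    canned = {
        "title": "My Page",
        "description": "A demo page",
    }
    # Return a canned answer based on which key the prompt mentions.
    for key, value in canned.items():
        if key in prompt:
            return value
    return ""

def build_json(schema: dict[str, str]) -> str:
    """Ask the model for each value with its own prompt, then emit JSON in code."""
    return json.dumps({key: complete(prompt) for key, prompt in schema.items()})

page = build_json({
    "title": "Write a short title for the page.",
    "description": "Write a one-sentence description of the page.",
})
print(page)
```

Because the code, not the model, owns the braces and quoting, a malformed value can only break one key instead of the whole object.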

@freakazoid Yeah, that's kinda what I do: I have the model output some specific text, then I turn it into something machine readable. e.g. this code for generating the html/css/js for a page: github.com/RangerMauve/llm-app
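The "specific text, then make it machine readable" step might be sketched like this: have the model emit labeled blocks and pull each one out with a regex. The bracket-tag format and the `raw` response here are hypothetical illustrations, not the actual markers or output the linked repo uses.

```python
import re

# A hypothetical model response using labeled [lang]...[/lang] blocks.
raw = """Here is the page:
[html]
<h1>Hello</h1>
[/html]
[css]
h1 { color: teal; }
[/css]"""

def extract_blocks(text: str) -> dict[str, str]:
    # Match each [lang] ... [/lang] pair; the backreference \1 ensures the
    # closing tag matches the opening one. DOTALL lets bodies span lines.
    pattern = re.compile(r"\[(\w+)\]\n(.*?)\n\[/\1\]", re.DOTALL)
    return dict(pattern.findall(text))

parts = extract_blocks(raw)
print(parts["html"])
print(parts["css"])
```

Leading prose like "Here is the page:" is simply ignored, which is part of the appeal: the model can ramble and the parser still recovers only the labeled pieces.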
