@mauve what method?
@fleeky using open source language models and a "harness" which lets the model invoke external functions which then pipe results back into its context and resume generation
@fleeky It's code that sits between user input and LLM output. It detects the AI doing "internal thinking" so that the user doesn't see it, and when the LLM tries to invoke a function the harness will execute the actual code for it and feed the result back to the AI. It also hides those steps from the user so they only see the final answer
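@fleeky here's a rough sketch of that loop in Python, if it helps. Everything is hypothetical (the model stub, the `<think>`/`<call>` tags, the tool format) — real harnesses parse whatever format the specific model was trained on, but the shape is the same: intercept the call, run it, feed the result back, and only surface the final text.

```python
import json

def fake_model(context):
    """Stand-in for an open-source LLM. Returns canned outputs:
    first a hidden thought plus a tool call, then a final answer."""
    if not any(m["role"] == "tool" for m in context):
        return '<think>need to add these</think><call>{"name": "add", "args": [2, 3]}</call>'
    return "The result is 5."

# Functions the model is allowed to invoke.
TOOLS = {"add": lambda a, b: a + b}

def run_harness(user_input):
    context = [{"role": "user", "content": user_input}]
    while True:
        output = fake_model(context)
        if "<call>" in output:
            # Extract the tool call; the <think> text and the call
            # itself are never shown to the user.
            payload = output.split("<call>")[1].split("</call>")[0]
            call = json.loads(payload)
            result = TOOLS[call["name"]](*call["args"])
            # Pipe the result back into the model's context and
            # let it resume generation.
            context.append({"role": "tool", "content": str(result)})
        else:
            # No tool call: this is the final answer the user sees.
            return output

print(run_harness("what is 2 + 3?"))
```

The user only ever sees the return value of `run_harness`; the thinking tags and the function execution stay inside the loop.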