Cool, I guess I now have hands-on experience getting LLMs to interact with the internet and system resources, in case folks want to hire me to do stuff like that.

Fully offline and local, using open source models, without high-end hardware or GPUs.


@fleeky using open source language models and a "harness" which lets the model invoke external functions, which then pipe results back into its context so it can resume generation

@fleeky It's code that sits between user input and LLM output. It detects the AI doing "internal thinking" so that the user doesn't see it, and when the LLM tries to invoke a function the harness executes the actual code for it and feeds the result back to the AI. It also hides those steps from the user so they only see the final result
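To sketch what that loop looks like: this is a minimal toy harness, with made-up tag names (`<think>`, `<call>`), a hypothetical `TOOLS` registry, and a stub standing in for the local model — not any particular framework's API.

```python
import re

# Hypothetical tool registry: names the model may invoke, mapped to real code.
TOOLS = {
    "get_time": lambda: "12:00",  # stand-in for an actual system call
}

THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)  # internal reasoning to hide
CALL_RE = re.compile(r"<call>(\w+)</call>")              # function-invocation marker

def fake_model(prompt):
    """Stub standing in for a local open source LLM."""
    if "result:" in prompt:
        return "<think>got the tool output</think>The time is 12:00."
    return "<think>I should check the clock</think><call>get_time</call>"

def harness(user_input, model=fake_model):
    prompt = user_input
    while True:
        output = model(prompt)
        # Strip internal thinking so the user never sees it
        visible = THINK_RE.sub("", output)
        call = CALL_RE.search(visible)
        if not call:
            return visible.strip()  # final answer only
        # Execute the requested function and pipe the result back into context
        result = TOOLS[call.group(1)]()
        prompt = prompt + f"\nresult: {result}"

print(harness("what time is it?"))
```

The user only ever sees the final line; the thinking spans and the tool round-trip stay hidden inside the loop.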
