Cool, I guess I now have hands on experience getting LLMs to interact with the internet and system resources in case folks want to hire me to do stuff like that.
Fully offline and locally with open source models without high end hardware or GPUs.
@mauve so cool!
@mauve what method?
@fleeky using open source language models and a "harness" which lets the model invoke external functions which then pipe results back into its context and resume generation
@mauve sounds neat.. can you elaborate on harness?
@fleeky It's code that sits between user input and LLM output. It detects the AI doing "internal thinking" so that the user doesn't see it, and when the LLM tries to invoke a function the harness will execute the actual code for it and feed the result back to the AI. It also hides those steps from the user so they only see the final result
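The loop described above might look roughly like this. A minimal sketch only: `generate`, `SCRIPT`, and `TOOLS` are made-up stand-ins for a real local model call and real tool registry, not any actual library API.

```python
import json

# Scripted model outputs standing in for real inference: first a tool
# call, then a final answer. A real harness would call a local model
# (e.g. via llama.cpp bindings) instead.
SCRIPT = ['{"tool": "add", "args": {"a": 2, "b": 3}}', "The answer is 5."]

def generate(messages):
    # Stub: real code would run the model on `messages` here.
    return SCRIPT.pop(0)

# Functions the model is allowed to invoke.
TOOLS = {"add": lambda a, b: a + b}

def harness(user_input, max_steps=5):
    """Generate, detect tool calls, run them, feed results back, repeat."""
    messages = [{"role": "user", "content": user_input}]
    output = ""
    for _ in range(max_steps):
        output = generate(messages)
        try:
            call = json.loads(output)  # did the model ask for a tool?
        except json.JSONDecodeError:
            return output              # plain text: the final answer
        if isinstance(call, dict) and call.get("tool") in TOOLS:
            result = TOOLS[call["tool"]](**call["args"])
            # Hidden from the user: append the tool result and resume.
            messages.append({"role": "tool", "content": str(result)})
        else:
            return output
    return output
```

The user only ever sees the string returned at the end; the tool call and its result stay inside the message list.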
@fleeky it's like the cyberbody for the LLM
@mauve famous last words 🤣