Gonna be presenting a demo of teaching a local #llm to search Wikipedia with "Function Calling"
Source for my demo code: https://github.com/RangerMauve/mind-goblin
Video of my talk about making an #OpenSource #LLM perform function calling on my machine.
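For context on what the demo does: the model is told about a "search Wikipedia" tool, emits a structured call, and the host program runs it and feeds the result back. A minimal sketch of that dispatch step, assuming an OpenAI-style tool schema and a stubbed-out search function (names here are hypothetical; the real code is in the repo linked above):

```python
import json

# Tool schema advertised to the model (OpenAI-style convention; hypothetical
# names, not necessarily what the demo repo uses).
SEARCH_TOOL = {
    "name": "search_wikipedia",
    "description": "Search Wikipedia and return matching article titles.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def search_wikipedia(query: str) -> list[str]:
    # Stub standing in for a real Wikipedia API call.
    return [f"Article about {query}"]

TOOLS = {"search_wikipedia": search_wikipedia}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    # Serialize the result so it can be fed back to the model as a tool reply.
    return json.dumps(result)

# A model might emit something like this as its tool call:
print(dispatch('{"name": "search_wikipedia", "arguments": {"query": "Vulkan"}}'))
```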
@fredy_pferdi Oh that's great to know, TY. I'll look into it. Is this going to use Vulkan for the GPU acceleration? I wasn't sure what my options would be since Ollama seems to only support CUDA and Metal
@mauve It also supports AMD ROCm, the equivalent of CUDA