@simon It works! The ggml-replit model is having trouble with this fib function and got confused generating error-handling code. :P Are there any gpt4all things that are good with code completion? Or do they also need special treatment for how to prompt them?
@simon Wow, yeah this works great! The orca mini model especially has a lot of bang for your buck.
Thanks for making this!
@mauve I've honestly not spent enough time with the gpt4all models to have a great feel for how to get the best results out of them yet.
@simon That's fair. For what it's worth I've been having decent luck with orca mini. I'll need to do some postprocessing to fetch just the code blocks out of responses though.
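For anyone curious, here's a minimal sketch of the kind of postprocessing I mean, just a regex over the ```-fenced blocks (the function name is illustrative, not from any library):

```python
import re

def extract_code_blocks(text):
    """Pull the contents of ```-fenced code blocks out of a model response."""
    # Match an opening fence (with an optional language tag), then lazily
    # capture everything up to the closing fence.
    return re.findall(r"```[^\n]*\n(.*?)```", text, re.DOTALL)

response = "Sure, here you go:\n```python\nprint('hi')\n```\nHope that helps!"
print(extract_code_blocks(response))  # ["print('hi')\n"]
```

This won't handle unterminated fences or indented code blocks, but it covers the common case of models wrapping code in triple backticks.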
BTW! Sent in a PR to fix a bug when trying to run a gpt4all model offline. https://github.com/simonw/llm-gpt4all/pull/9
@mauve shipped, thank you!
@mauve theoretically it should, but I haven't tried it myself