I think in the next couple of years OS-shipped #LocalAI will replace much of today's heavy cloud-based #AI. Microsoft, Google, and soon Apple will be shipping devices with local LLMs, and it'll be cheaper for applications to target those APIs than to pay OpenAI or the like. This will also mean we get into a sort of "browser wars" of model functionality gated by hardware vendors.
For example, Chrome is working on shipping web APIs for LLM access. I'm planning to release something similar in @agregore in the next week or two.
https://github.com/explainers-by-googlers/prompt-api/blob/main/chrome-implementation-differences.md
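Roughly what using it looks like, going by the explainer (a sketch only: the API is experimental and the exact names keep shifting between drafts, so feature-detect rather than assume):

```ts
// Sketch of the experimental Prompt API per the explainer above.
// `LanguageModel` is a proposed global from one draft revision and may
// differ from what actually ships; it's not in the standard TS lib defs yet.
declare const LanguageModel: any;

async function summarize(text: string): Promise<string | null> {
  // Feature-detect: bail out if the browser doesn't expose a local model.
  if (typeof LanguageModel === "undefined") return null;
  const session = await LanguageModel.create(); // loads the on-device model
  const result = await session.prompt(`Summarize in one sentence:\n${text}`);
  session.destroy(); // free the model's resources when done
  return result;
}
```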
@mauve i’ve played around a little with local llamafiles
my current takeaway is local is the future, but only when it doesn’t chew through batteries.
that was the alarming thing: "data centers are draining power this hard 24/7"?
@tychi I tried Qwen2:0.5B and it's almost able to do stuff with very little power usage. That's orders of magnitude less energy than a cloud model while still being able to handle some small tasks. I think these things could be put to work on specifically crafted prompts with multi-shot examples for a lot of use cases.
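For a concrete sense of what I mean, here's a sketch of few-shot prompting a tiny model through Ollama's REST API (assumes a local Ollama server on its default port with qwen2:0.5b pulled; the example task is made up):

```ts
// A couple of worked examples in the prompt steer the tiny model toward
// the task, which is what makes 0.5B-class models usable at all.
const prompt = [
  "Extract the city. Input: 'Flight to Paris on Monday' -> Paris",
  "Extract the city. Input: 'Meet me in Osaka next week' -> Osaka",
  "Extract the city. Input: 'The conference is in Nairobi' ->",
].join("\n");

const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  body: JSON.stringify({ model: "qwen2:0.5b", prompt, stream: false }),
});
const { response } = await res.json(); // Ollama returns the completion in `response`
console.log(response.trim()); // ideally "Nairobi"
```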
I don't think cloud AI will fully go away, but I think it'll make less and less sense for consumer-facing use cases as small models become more viable via better training and better hardware acceleration.