
I think in the next couple of years OS-shipped LLMs will replace the use of heavy cloud-based models. Microsoft, Google, and soon Apple will be shipping devices with local LLMs, and it'll be cheaper for applications to target those APIs rather than pay OpenAI or the like. This will also mean that we'll get into a sort of "browser wars" of model functionality gated by hardware vendors.


I don't think cloud AI will fully go away, but I think it'll make less and less sense for consumer-facing use cases as small models become more viable through better training and better hardware acceleration.

For example, Chrome is working on shipping web APIs for LLM access. I'm planning to release something similar in @agregore in the next week or two.

github.com/explainers-by-googl
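
To make the idea concrete, here's a minimal sketch of what a page-facing prompt API could look like. The names (window.ai, createTextSession, prompt) are illustrative assumptions, not necessarily what Chrome's explainer or Agregore will actually ship:

```typescript
// Hypothetical sketch of a browser-provided LLM API; all names here
// (window.ai, createTextSession, prompt) are illustrative assumptions.
declare global {
  interface Window {
    ai?: {
      createTextSession(): Promise<{
        prompt(input: string): Promise<string>;
      }>;
    };
  }
}

async function summarize(text: string): Promise<string> {
  if (!window.ai) throw new Error("No built-in language model available");
  const session = await window.ai.createTextSession();
  // The model runs locally in the browser/OS, so no API key or
  // network round trip is needed.
  return session.prompt(`Summarize in one sentence:\n\n${text}`);
}
```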

@mauve Agree*. The advances in UX / tooling have come a long way already. Tried out Ollama a few days ago after having tinkered with some LLMs maybe a year ago and was shocked at how easily it stood up. Once the tools enable RAG and some other obviously useful capabilities, I don't think I'd ever want to use a cloud AI.

* That said, I don't know that I would personally expect cloud AI to stop dominating consumer market. Word processing and file storage and a dozen other computational categories that are relatively easy to do locally have been progressively swallowed by cloud services because
A) the UX is so good, and
B) they're cheap, and
C) the minimum level of technical understanding is very low

I feel like the vast majority of users in 5-10 years will still value those things.

@mauve Brave already supports custom Ollama endpoints.

Quite cool.

@agregore

@mauve all you need is a local Ollama instance, which is pretty much API-compatible with ChatGPT.

@agregore
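
For illustration, a minimal sketch of pointing a ChatGPT-style client at a local Ollama instance through its OpenAI-compatible chat endpoint (default port 11434). The model name "llama3" is an assumption; substitute whatever model you've pulled:

```typescript
// Minimal sketch: call a local Ollama instance via its
// OpenAI-compatible chat completions endpoint.
// Assumes Ollama is running on the default port and a model
// named "llama3" has been pulled (adjust as needed).
async function chatLocally(userMessage: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: userMessage }],
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  // Response follows the OpenAI chat completion shape.
  return data.choices[0].message.content;
}
```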

@hermeticvm @agregore Ohhh I see. That's for the built-in LLM UI they have. I'm working on JavaScript APIs that give web apps access.

@mauve I see. Let's hope for a good standard. I agree with your take that we'll see more local LLM stuff. Especially for latency and privacy reasons. @agregore

@mauve i’ve played around a little with local llamafiles

my current takeaway is local is the future, but only when it doesn’t chew through batteries.

that was the alarming thing— “data centers are draining power this hard 24/7”?

@tychi I tried Qwen2:0.5B and it's almost able to do useful stuff with very little resource usage. That's orders of magnitude less power consumption while still being able to handle some small tasks. I think these things could be put to work with specifically crafted prompts plus multi-shot examples for a lot of use cases.
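
As a sketch of the multi-shot idea: a tiny model like qwen2:0.5b can handle a narrow task if the prompt carries a few worked examples. This assumes the same local Ollama OpenAI-compatible endpoint as above; the task and examples are made up for illustration:

```typescript
// Sketch: multi-shot prompting a small local model (qwen2:0.5b via
// Ollama) for one narrow task: classifying a to-do item.
// The example pairs below are illustrative assumptions.
async function classifyTodo(item: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2:0.5b",
      messages: [
        { role: "system", content: "Answer with one word: work, home, or errand." },
        // A couple of shots to pin down the expected output format.
        { role: "user", content: "Finish the quarterly report" },
        { role: "assistant", content: "work" },
        { role: "user", content: "Buy milk on the way back" },
        { role: "assistant", content: "errand" },
        { role: "user", content: item },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim();
}
```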
