For folks into #OpenSource #AI, I'd suggest checking out continue.dev and their integrations with ollama and LM Studio.
So far the best models for me have been WizardCoder 7B and OpenHermes 2.5. Make sure to offload execution to another box or limit the CPU threads for inference so you can keep some for your window manager.
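A rough sketch of what that can look like with ollama on a Linux box (core counts and the model tag are just examples; double-check the parameter name against the ollama docs):

```sh
# Pin the inference server to a subset of cores so the desktop keeps the rest
# (plain Linux taskset; adjust the core list for your CPU).
taskset -c 0-5 ollama serve

# Ollama also accepts a per-request thread count via its options field,
# if I'm remembering the parameter name right.
curl http://localhost:11434/api/generate -d '{
  "model": "openhermes",
  "prompt": "Write a haiku about window managers.",
  "options": { "num_thread": 6 }
}'
```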
The App Store is a system of control over both developers and users, and selective limitations in Safari are part of how Apple enforces that system of control. If this wasn't clear to you before, it should be now.
"... we’ve amassed around 60,000 followers across our six trial accounts, and we have had to do very little moderation of replies associated with our content.
"We’ve had really encouraging levels of engagement (i.e. replies, re-posts and likes) on Mastodon.
"For some equivalent posts we’ve seen significantly larger engagement numbers for Mastodon compared to X/Twitter, particularly given the relative sizes of different platforms"
https://www.bbc.co.uk/rd/blog/2024-02-extending-our-mastodon-social-media-trial
@techsinger TBH I wouldn't be surprised if there was a faster option out there. I found that text generation was way slower than in LM Studio, for example. But LM Studio doesn't support multimodal stuff.
@techsinger I will say that out of the box it fully ate my system resources and locked up my music player and browser. :P I may experiment with giving it fewer CPU cores so I can keep some headroom for UI stuff
@techsinger It's not super fast but it's fast enough. I think the main slowdown is loading the model into memory, and from there generation was about one word per second for me. I'm not sure which llava it is specifically, but it's probably a 7B one.
Docs on it are here: https://ollama.com/library/llava
@cblgh From what I understand it just means text+images. You can have it take an image as input and then ask it questions about what's in it. There might be other modes of operation too? I really want to make one that interacts with my shell to load/edit files.
Tried getting a fully local multi-modal model to tell me what it sees in my logo and it's honestly mind-blowing that it can identify anything at all. I used `ollama run llava` on my Steam Deck. Might be a useful tool to integrate with caption generators, or for #blind folks wanting to get a description without needing an online service
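If you'd rather script it than use the interactive prompt, ollama's HTTP API also takes base64-encoded images alongside the prompt. Roughly like this (filename and prompt are placeholders; `-w0` assumes GNU base64):

```sh
# Ask a local llava model to describe an image, roughly what a caption
# generator integration would do. Ollama listens on port 11434 by default.
curl http://localhost:11434/api/generate -d "{
  \"model\": \"llava\",
  \"prompt\": \"Describe this image for someone who can't see it.\",
  \"stream\": false,
  \"images\": [\"$(base64 -w0 logo.png)\"]
}"
```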
@tobiash Yeah! I started inviting coworkers into it for online hangs and have also been replacing my Jitsi calls with spatial ones.
Sadly, self-hosting it requires a bunch of cloud junk, and even then it'd lack content to populate worlds with. 😭
@mauve@staticpub.mauve.moe In case anyone is interested, I've made a PR that adds an initial attempt using the `url` field with `rel=alternate`.
https://github.com/RangerMauve/staticpub.mauve.moe/pull/2/files
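For anyone who doesn't want to click through, the rough shape is `url` as an array of `Link` objects, with the P2P copy marked `rel=alternate` and given the AP media type. The post path here is invented for illustration:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Note",
  "id": "https://staticpub.mauve.moe/posts/hello.jsonld",
  "content": "Hello from a static ActivityPub site!",
  "url": [
    {
      "type": "Link",
      "href": "https://staticpub.mauve.moe/posts/hello.html",
      "mediaType": "text/html"
    },
    {
      "type": "Link",
      "href": "ipns://staticpub.mauve.moe/posts/hello.jsonld",
      "rel": "alternate",
      "mediaType": "application/activity+json"
    }
  ]
}
```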
@nasser I need to book a vacation within the next couple months, that's for sure. :P
@silverpill These are great. I didn't know I could place anything but a string under `url`. Setting it as alternate with the AP mime type seems like a good solution
Hey fedifolks! I was hoping to get some bikeshedding feedback on a new #FEP we're working on at #DistributedPress.
tl;dr we want #ActivityPub objects to link to #P2P URLs for alternate ways to load them. Right now I'm debating between putting them in `alsoKnownAs` or into the `url` field. e.g. `"alsoKnownAs": ["ipns://staticpub.mauve.moe"]` for @mauve@staticpub.mauve.moe
I worry that `alsoKnownAs` is already in use (Mastodon uses it for account moves) and that overloading it could cause trouble. Would a new field name be better? Maybe `alternateURL`?
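For comparison, the `alsoKnownAs` option would look something like this on an actor (the paths are illustrative, and the context entry is roughly how Mastodon declares the term for account moves, which is exactly the overlap I'm worried about):

```json
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    { "alsoKnownAs": { "@id": "as:alsoKnownAs", "@type": "@id" } }
  ],
  "type": "Person",
  "id": "https://staticpub.mauve.moe/about.jsonld",
  "preferredUsername": "mauve",
  "alsoKnownAs": ["ipns://staticpub.mauve.moe"]
}
```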
Occult Enby that's making local-first software with peer-to-peer protocols, mesh networks, and the web.
Exploring what a local-first cyberspace might look like in my spare time.