"AI" is rapidly becoming synonymous with "poor quality crap"

I am enjoying seeing this become more obvious.

The fun part about this is the race to the bottom - see, quality is one of the factors people consider when they evaluate whether something is worth the money they're being asked to spend on it.

The lower the quality, the lower the price they're willing to pay.

And now "AI" generated crap is showing up heavily in lowest-quality garbage and spam.

Even if your brand claims to be "high quality", it's going to be tainted by association with "AI".

This was inevitable, in a way, given the history of how it's been marketed and priced.

Releasing LLMs into the mass market was a -mistake- on several fundamental levels.

First, the less obvious systemic one - without a continuing supply of human-generated inputs to draw from, their ability to generate high-quality, desirable material more or less went away - not even because of model collapse, but because of -economic- collapse.

By pushing so aggressively to replace human writers, the LLM companies killed off their own food supply.

Second, releasing so widely and cheaply meant that, yes, the "tools of creation" were in the hands of everyone!

and a lot of "everyone" are interested in finding ways to automate plausible human interactions in order to scalably remove money from everyone else.

Which means spam.

So now your product is being irretrievably associated with poor-quality garbage games, spam, and the nastier sort of fake information ops that make people angry first at the content and then at having been fooled by it.

Some people are getting a benefit, sure. Coders love this shit because -most code is low-quality boilerplate anyway-

But you don't -need- a premier, top-of-the-line, massive datacenter-driven "companion" AI for what amounts to spicy autocomplete.

The other people benefitting are using it to creep on people by generating lewds, and -those- guys are going to keep paying no matter how high you crank the price.

So the AI companies, instead of marketing this as a coding -assistant- (rather than -replacement-) and making it expensive, got greedy and marketed it as "fire all your employees; everyone uses this for everything now" and priced it for mass adoption.

And now everything is cheap crap, and now you -can't- raise the price, and now you're stuck running very large, expensive datacenters with zero resale value, and your food supply to train the models on is gone, and most of the people actually using your product are only doing so thru psychological manipulation - real fucking healthy way to keep a customer base, huh - or thru mandates from toxic management, so your remaining customer base is slowly burning out and becoming less functionally able -to- use your product.

So anyway I figure, yeah, we're stuck with this toxic sludge, but at least it'll be mostly malware hijacking graphics cards and other small-model shenanigans in a few years.

Depends on how long the "more money than god" corps can keep up this hemorrhaging; I figure google will likely outlast most of the competition, cuz their hardware verticality gives them more staying power due to the economic advantages of being able to have that shit in-house.

tho if zuck mortgages facebook to fund this delusion I will laugh for a week

thing is, there's only so long you can push a bubble like this without a result, and with "genai" the result is genuinely unattainable because - due to the size of the bubble - there is really no way to -actually pay off- the operations costs at this point.

For example, openai says it wants 200M paying customers by 2030.

That is basically "the entire Professional and Business Services sector of the US economy" paying... apparently $25/seat/month.

I'm not entirely sure -paperclips- have that kind of market saturation.

Oh I'm sorry, my mistake, I misread the chart.

it's 22 million, not 220 million, in that sector.

...well, that's concerning; -are- there 220 million professional and business services jobs worldwide?

How's their market penetration looking in China and India?

'cuz that's the thing, ain't gonna be a whole lot of workers who -don't- have desk jobs using this for work, so ...what, are you expecting amazon warehouse packers to go home and pay for chatgpt? vs, what, food?
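(For the curious, here's that back-of-envelope spelled out - a rough sketch using only the figures above; the annualization and the ratio are just my arithmetic, not anything openai has published.)

# rough back-of-envelope on the stated target, using only figures quoted in this thread
seats = 200_000_000                # 200M paying customers by 2030 (their stated goal)
price_per_seat_month = 25          # USD per seat per month, the apparent per-seat figure
annual_revenue = seats * price_per_seat_month * 12
print(f"implied revenue: ${annual_revenue / 1e9:.0f}B/year")        # -> $60B/year

us_pbs_jobs = 22_000_000           # US Professional and Business Services jobs (corrected figure)
print(f"seats needed vs US P&BS jobs: {seats / us_pbs_jobs:.1f}x")  # -> ~9.1x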

Saw a fun thread the other day about lawyers complaining about chatgpt-enabled pro-se litigants.

So now one of your big professional orgs is associating your product with "annoying assholes who make a lot more very tedious work for us" - great job there.

Again, made it too cheap and too easily available. You put too low a value on human expertise, you fuckweasels.

Now that there's absolutely no way to get to that timeline from here -

Yeah, you could -absolutely- have taken over the whole world without anyone being able to credibly stop you, but you fucking got greedy and tried to do it -fast-.

You thought this was a -race-, you utter -numpty bastards-.

And now the product is being seen as cheap and nasty, and there's no possible way to scale demand high enough to make a profit off of it; openai has Fucked Up.

I -am- rather concerned about the blast radius when this implodes, but there's not a whole hell of a lot I can do about that one.

Government subsidies won't help much here, tho if we -are- in the funny balkanization timeline then this could make matters very interesting.

I wonder if the billionaire class has ever really considered what happens when the economy implodes -before- they get their miracle AI bunker butler.

@munin I think the neuralink is the fallback and it's a lot further along

@munin right now they're doing trials in countries with less medical oversight so they can specifically target the language center of the brain, which is spooky. I don't think they demand a long lifespan from those implanted. At least at this stage.

@mauve

yeahhhh thing is, -training- people to use neural interfaces -effectively enough- takes longer than that.


@munin I think right now the goal is to effect changes in mood and thought patterns rather than allowing the recipients to access computers. At least the most sussy parts. This seems more geared toward implanting into "butlers" or "workers", with empowerment being a veil for the advertisements.

@mauve

you know, shock collars are pretty cheap on aliexpress, and you can find openshock-compatible ones for next to nothing; it's not that hard to do behavior modification if you have any idea how operant conditioning works.

@munin That's a really good point.

I'm hoping nothing works out, however they want to entice their "butlers". 😎👉👉
