I've largely dismissed the strain of AI alarmism based on the notion that a computer will become so smart that the danger it poses to humanity is outsmarting us.

There are real dangers in AI. Most of them relate to people using these technologies in improper ways because they have a poor understanding of what these systems really are... and, most important, the exploitation & degradation of the body of human knowledge and creativity represented by publicly available digital information. 1/

But some people still talk about the risk of AI being too smart: tricking us, simply not delivering what we expect, or using any power it is given in ways that result in manipulation more sophisticated than anything we could anticipate.

Do all of the people who have these fears buy in to that silly bootstrapping theory of AI advancement? (This is the idea that once an AGI becomes "smart" enough to redesign itself, it will spiral off to become... well, a god, basically.)

2/

This idea is an excellent and very fun sci-fi plot. (And sci-fi should never be ignored.)

But is there any evidence that such a thing could even happen? I suppose the "evolved" algorithms used to program motors that help robots walk show there is some possibility. But those algorithm generators were not as simple as the press described them. There was a lot more scaffolding.

It wasn't just "we wrote a program to try random motor movements and then it evolved a way to move." 3/

A self-modifying intelligence would need metrics to tell whether it was improving or not.

This would require huge sets of data... and a way to compare each new version against the old one. We can do that, but it's not efficient at all.
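To make that concrete, here's a minimal toy sketch of the loop such a system would need. Everything in it is hypothetical (the `evaluate` and `mutate` functions are made up for illustration): the point is that every candidate "improvement" has to be scored against held-out data before it can be kept, and that scoring is where the cost lives.

```python
import random

# A minimal sketch (hypothetical toy code, not anyone's real system) of the
# evaluate-mutate-keep loop a "self-improving" program would need. Here the
# "model" is just two numbers, and "improving" means lower squared error on
# a fixed held-out dataset: the metric and comparison data described above.

def evaluate(model, held_out):
    """Score a candidate. Without held-out data, 'better' is undefined."""
    a, b = model
    return sum((y - (a * x + b)) ** 2 for x, y in held_out)

def mutate(model):
    """Propose a random tweak: the 'self-modification' step."""
    return [p + random.gauss(0, 0.1) for p in model]

held_out = [(x, 3.0 * x + 1.0) for x in range(100)]  # toy ground truth
model = [0.0, 0.0]
best = evaluate(model, held_out)

for _ in range(10_000):
    candidate = mutate(model)
    score = evaluate(candidate, held_out)  # every step re-scores on the data;
    if score < best:                       # this comparison is the expensive,
        model, best = candidate, score     # data-hungry part of the loop

print(model, best)  # drifts toward [3.0, 1.0], but only because we supplied
                    # both the metric and the data to measure against
```

Even this two-parameter toy burns thousands of full passes over its data to "improve itself"; scale the model up and the evaluation bill is the bottleneck.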

In the '80s I thought a computer doing face recognition was impossible: a photo is just too much data to process.

We've solved this problem in the least exciting way: a kind of brute force. Throw more servers at it. Yes, some of the algorithms are nice... but it's not exactly magic. 4/

The human brain has a stupendous number of cells. It's the most energy-intensive organ in your body, using about 20% of the calories you consume despite being only about 2% of your body mass. This is why so few organisms have complex brains. Intelligence is an excellent strategy, but it is also expensive. Even in nature.

LLMs are less energy efficient than your brain. (Rough numbers below.)

Thinking and organizing information is real work that requires energy.

(we really are getting to basics here) 5/
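Back-of-envelope, using the rough figures from this post (a ~2000 kcal/day diet with ~20% going to the brain; both are the commonly cited approximations), the brain runs on about 20 watts:

```python
# Back-of-envelope brain power budget. The 2000 kcal/day and 20% figures
# are the approximations cited above; the rest is unit conversion.

KCAL_PER_DAY = 2000             # typical adult intake, approximate
BRAIN_SHARE = 0.20              # roughly 20% of calories feed the brain
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 24 * 60 * 60

brain_watts = KCAL_PER_DAY * BRAIN_SHARE * JOULES_PER_KCAL / SECONDS_PER_DAY
print(f"Brain power draw: ~{brain_watts:.0f} W")  # ~19 W, one dim light bulb

# For comparison, a single data-center GPU draws on the order of several
# hundred watts, and serving an LLM typically uses many of them at once.
```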


@futurebird Some companies are already working on harnessing human brain organoids for compute. So there are definitely folks trying to understand how to harness brain structure. medicalxpress.com/news/2024-03

@mauve

It has no mouth and it must scream.

This needs to be internationally illegal immediately.


@mauve @futurebird the article you linked says nothing about this? It talks about organoids of various tissues, and how they found out how to basically create something like a vascular system that works well for them. It talks about how this can be used for medical research, for developing new drugs and such. Nothing about the weird dystopian brain-organoid stuff you're talking about.

@enby_of_the_apocalypse @futurebird Sorry, bad article; it was the first result I found that was vaguely related. I'll see about getting a better link on Monday after I'm back at my pooter.

@mauve @futurebird the article seems written a bit for those “AI” bros, what with quoting Elon Musk as a source, all the talk of “AI” “outliving” humans, and presenting this as the solution to “AI” data centers consuming absurd amounts of resources rather than just not using that inefficient bullshit technology, but anyways…

It seems highly ethically questionable to use these, and to rent them out commercially, when nobody knows whether these things feel, whether they have consciousness, stuff like that. Tbh it kind of reminds me of how the plot of The Matrix was initially supposed to be that the machines use human brains for their processing power, not as an energy source (which was sadly scrapped because someone thought audiences wouldn't understand that plot). Ofc this is a different thing because it doesn't use entire brains, just organoids, but it still feels really creepy to me.

@mauve @futurebird generally I think growing organoids of various tissues in labs is a really good thing, since it can help in medical/pharmacological research and might even replace animal testing (another thing that's ofc abhorrent). But when handling neurons like that, I'd prefer it if researchers considered the ethical problems more, and in situations where they don't know how unethical something might be, maybe not do the thing, or at the very least not at this scale.
