when you start considering 'marketing and advertising' as a kind of psychologically abusive manipulation, and you consider how chatgpt is already specifically structured to "encourage engagement" by means of psychologically abusive manipulation, the use of the latter for dispensing the former becomes perfectly obviously effective.
“Prompt injection” is a misleading label.
What we’re seeing in real LLM systems looks a lot more like malware campaigns than single-shot exploits.
This paper argues LLM attacks are a new malware class, Promptware, and maps them to a familiar 5-stage kill chain:
• Initial access (prompt injection)
• Priv esc (jailbreaks)
• Persistence (memory / RAG poisoning)
• Lateral movement (cross-agent / cross-user spread)
• Actions on objective (exfil, fraud, execution)
If you’ve ever thought: “why does this feel like 90s/2000s malware all over again?", that’s the point.
Security theater around “guardrails” misses the real issue:
models can’t reliably distinguish instructions from data
assume initial access. Design for containment
a while ago i saw a tumblr post comparing some kind of arcane piracy process to the steps for navigating the underworld of greek mythology … does anyone have that handy? its important
FOUND IT: https://clarabeau.tumblr.com/post/748307077456363520
Some relaxing music for some chill evening code. (cw: loud screeching)
This is the level of noise I need to do anything at all today apparently. F. Noize & LekkerFaces - Tripping On Acid by F. Noize & LekkerFaces on #SoundCloud
https://on.soundcloud.com/mq5iaoBLa915OAuTgC
Neat one handed keyboard design
Occult Enby that's making local-first software with peer to peer protocols, mesh networks, and the web.
Yap with me and send me cool links relating to my interests. 👍