“Prompt injection” is a misleading label.
What we’re seeing in real LLM systems looks a lot more like malware campaigns than single-shot exploits.
This paper argues LLM attacks are a new malware class, Promptware, and maps them to a familiar 5-stage kill chain:
• Initial access (prompt injection)
• Privilege escalation (jailbreaks)
• Persistence (memory / RAG poisoning)
• Lateral movement (cross-agent / cross-user spread)
• Actions on objective (data exfiltration, fraud, code execution)
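The five-stage mapping above can be sketched as a tiny data model. This is a hypothetical sketch, not the paper's actual taxonomy code: the stage names follow the thread, and `Incident`, `record`, and `furthest_stage` are illustrative names I'm introducing.

```python
from dataclasses import dataclass, field

# The five Promptware kill-chain stages from the thread, in order.
KILL_CHAIN = [
    "initial_access",         # prompt injection
    "privilege_escalation",   # jailbreaks
    "persistence",            # memory / RAG poisoning
    "lateral_movement",       # cross-agent / cross-user spread
    "actions_on_objective",   # exfiltration, fraud, execution
]

@dataclass
class Incident:
    """Tracks which kill-chain stages an attack has been observed in."""
    techniques: dict = field(default_factory=dict)  # stage -> technique seen

    def record(self, stage: str, technique: str) -> None:
        if stage not in KILL_CHAIN:
            raise ValueError(f"unknown stage: {stage}")
        self.techniques[stage] = technique

    def furthest_stage(self):
        """Deepest stage reached, by kill-chain order (None if empty)."""
        reached = [s for s in KILL_CHAIN if s in self.techniques]
        return reached[-1] if reached else None

# Example: an injected email that then poisons agent memory has already
# progressed two stages down the chain.
incident = Incident()
incident.record("initial_access", "prompt injection via inbound email")
incident.record("persistence", "malicious note written to long-term memory")
print(incident.furthest_stage())  # -> persistence
```

The point of modeling it this way: a single injected prompt is "initial access", not the whole attack, and defenders should track how far down the chain an incident actually got.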
If you’ve ever thought: “why does this feel like 90s/2000s malware all over again?”, that’s the point.
Security theater around “guardrails” misses the real issue:
models can’t reliably distinguish instructions from data.
Assume initial access. Design for containment.
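“Design for containment” can be made concrete with a small sketch: put the policy outside the model, so even a fully injected model can’t reach tools it was never granted. Everything here is illustrative (the allow-list, tool names, and `dispatch_tool_call` are my own hypothetical names, not from the paper):

```python
# Containment lives outside the model: a per-session allow-list of tools.
ALLOWED_TOOLS = {"search", "summarize"}

def dispatch_tool_call(name: str, args: dict) -> str:
    """Execute a model-requested tool call only if policy permits it.

    The model's output is treated as untrusted input. Containment is
    enforced here, in ordinary code, not by asking the model to behave.
    """
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not in session allow-list")
    # Illustrative stub implementations.
    if name == "search":
        return f"results for {args.get('query', '')}"
    return f"summary of {args.get('text', '')}"

# An injected model asks for exfiltration; the dispatcher blocks it
# no matter how convincing the prompt injection was.
try:
    dispatch_tool_call("send_email", {"to": "attacker@example.com"})
except PermissionError as e:
    print("blocked:", e)
```

This is the “assume initial access” stance in miniature: the guardrail isn’t in the prompt, it’s in the dispatcher.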
a while ago i saw a tumblr post comparing some kind of arcane piracy process to the steps for navigating the underworld of greek mythology … does anyone have that handy? it’s important
FOUND IT: https://clarabeau.tumblr.com/post/748307077456363520
Some relaxing music for some chill evening code. (cw: loud screeching)
This is the level of noise I need to do anything at all today apparently. F. Noize & LekkerFaces - Tripping On Acid by F. Noize & LekkerFaces on #SoundCloud
https://on.soundcloud.com/mq5iaoBLa915OAuTgC
Neat one handed keyboard design
Nature-Inspired Computers Are Shockingly Good At Math https://science.slashdot.org/story/26/01/10/0628251/nature-inspired-computers-are-shockingly-good-at-math?utm_source=rss1.0mainlinkanon
Interested in #decentralization and #p2p ? Want to contribute to #OpenSource software?
Check out our "good first issue" project board and see if something sparks your interest.