Surprised I haven't come across any p2p DoS attack tools.

Should be easy as hell to generate thousands of DHT entries that lead to invalid IPs.

@mauve bittorrent has been in operation for a very long time without DDoS issues afaik. so either there's no incentive, or the bittorrent protocol makes this type of attack difficult to pull off. you've piqued my interest though!
@cuaxolotl @mauve it's an incentives thing, no one has seriously attacked it. The Kademlia mainline DHT as implemented doesn't have serious protection against spinning up invalid nodes and shitting up the address space. You can do that trivially (node IDs are literally just random 160-bit integers), but each node keeps multiple lists of other known nodes at exponentially increasing distances from itself in the keyspace. It periodically polls these nodes for liveness and ranks them by how long they've been up. So if you need to find something and you have a nodeid/ip combination that you know out of band has been around for a while, you can query it, and it will forward requests to its neighbor nodes, preferring the longest-lived ones.

Effectively this means that if I shit up the network with a million fake nodes tomorrow, but your bittorrent client's cache still has nodes from yesterday, your searches will be alright. If you come onto the network for the first time tomorrow, though, and you have to pick a random node to start from, your odds of landing on a real node are only as good as the fraction of the network that isn't fake.

Additional problem: just because a node is long-lived doesn't mean it's good. Nodes can't really tell except by trying a node's answers and seeing whether they're valid; your node keeps an internal reputation for its peers, but afaik all attempts to make that reputation shareable with other nodes don't work.
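
To make the "trivially mintable IDs plus exponentially spaced buckets" point concrete, here's a minimal Python sketch — not from any real client, just the gist of mainline-DHT-style routing:

```python
import os

def random_node_id() -> int:
    """Mainline DHT node IDs are just 160 random bits, so an
    attacker can mint as many as they like."""
    return int.from_bytes(os.urandom(20), "big")

def bucket_index(my_id: int, other_id: int) -> int:
    """Kademlia keeps one bucket per power-of-two distance band:
    bucket i holds nodes whose XOR distance from us has its
    highest set bit at position i, which is what 'lists of nodes
    at exponentially increasing distances in the keyspace' means."""
    distance = my_id ^ other_id
    return distance.bit_length() - 1   # 159 = far half of keyspace, 0 = adjacent

me = random_node_id()
sybil = random_node_id()               # a trivially minted fake node
print(bucket_index(me, sybil))         # which routing bucket it would land in
```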

there are some extensions to kademlia that derive the node id from a cryptographic hash of the node's ip/port, so as an attacker you're extremely limited in how many plausible fake nodes you can mint. i don't know if anybody even uses that, and in any case not enough to matter. the problem with it is that if your node changes ip/port, its reputation has to start over because it needs a new node id. but like i said, nobody uses it; it mostly demonstrates how hard the problem is to address.
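
For flavor, a toy sketch of that idea in Python — with the caveat that this is just the gist, not the actual scheme from real proposals like BEP 42 or S/Kademlia, which constrain the ID differently:

```python
import hashlib

def constrained_node_id(ip: str, port: int) -> int:
    """Toy version: derive the 160-bit node ID from the node's
    endpoint, so an attacker can't claim arbitrary positions in
    the keyspace without actually controlling that ip/port.
    (Real proposals like BEP 42 only constrain part of the ID
    and mix in a small nonce; this is just the gist.)"""
    digest = hashlib.sha1(f"{ip}:{port}".encode()).digest()
    return int.from_bytes(digest, "big")

def verify_node_id(claimed_id: int, ip: str, port: int) -> bool:
    # Any peer can recompute the hash and drop mismatching nodes.
    return claimed_id == constrained_node_id(ip, port)

# The downside from the post: move to a new ip/port and the old
# ID no longer verifies, so the node's accrued reputation resets.
old_id = constrained_node_id("203.0.113.5", 6881)
print(verify_node_id(old_id, "203.0.113.5", 6881))   # True
print(verify_node_id(old_id, "198.51.100.7", 6881))  # False: new ID needed
```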

@Moon @cuaxolotl Yeah, I feel like with IPFS being more popular lately we'd at least see people attempting black-hole attacks on ipfs.io or on some popular NFT collections.

@mauve @cuaxolotl I need to do a deep dive on ipfs sometime because I don't know it as well. I know it's pretty similar but that's all.

I can tell you, though, that the use of IPFS for NFTs is in significant part a shell game: tons of NFTs use IPFS links for their data, but I have found that trying to look the data up through any IPFS gateway except the marketplace's or special provider's own very frequently just does not work. That's more of an NFT thing than an IPFS thing, though. The protocol is working fine; the marketplace's gateway is just busted (and they seemingly never get fixed) and no one else is sharing the file, so you end up having to make a regular HTTPS request to their gateway for the file.

wait does ipfs require explicit “seeding” ?

Man for all its faults, freenet takes another W if that’s true…

@ademan @cuaxolotl @mauve yes it does, basically same as bittorrent in that regard.

Man that kinda sucks. The freenet approach is wayyyyyyy more elegant (although it has the extremely unfortunate side effect that you can’t control what your node serves, which can be some very nasty things on freenet).

Basically your node keeps an LRU cache of data that has passed through it. This means that unpopular data gets dropped from the network sooner, data gets replicated automatically where it’s needed in the network, reducing load on the initial “hosts”, and you get deniability about what your node serves.
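
Roughly like this hypothetical sketch in Python (ignoring freenet’s location-based routing and encryption entirely, just the LRU behavior):

```python
from collections import OrderedDict

class NodeCache:
    """Freenet-style store, simplified: every block that passes
    through the node is cached, and when the store is full the
    least recently requested block is evicted. Popular data stays
    replicated near where it's requested; unpopular data quietly
    falls out of the network."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()            # key -> data, coldest first

    def insert(self, key: bytes, data: bytes) -> None:
        self.store[key] = data
        self.store.move_to_end(key)           # newly seen data is hottest
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the coldest block

    def get(self, key: bytes):
        if key not in self.store:
            return None                       # miss: forward the request on
        self.store.move_to_end(key)           # every hit keeps it alive
        return self.store[key]
```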

@ademan @cuaxolotl @mauve @Moon The reference implementation for IPFS has a cache of recently requested blocks; I think it’s set to 2 GB by default, but the garbage collector is really stupid and just starts deleting blocks indiscriminately until it hits the low-water mark. Anything you want to keep around you have to pin.
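
A rough model of that behavior (names here are hypothetical, not the actual go-ipfs implementation), just to show why pinning is the only way to be sure a block survives:

```python
def collect_garbage(blocks: dict, pinned: set, low_water_mark: int) -> dict:
    """Walk the blockstore in no useful order and delete unpinned
    blocks until total size drops to the low-water mark. `blocks`
    maps CID -> size in bytes."""
    total = sum(blocks.values())
    for cid in list(blocks):
        if total <= low_water_mark:
            break
        if cid in pinned:
            continue                      # pinned blocks are never collected
        total -= blocks.pop(cid)          # no LRU, no ranking: "really stupid"
    return blocks
```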

You could probably abuse some of the public gateways to keep your files around by requesting them round robin style before they fall off of all of their caches.
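
Something like this hypothetical keep-warm script — the gateway list and CID are placeholders, and real cache lifetimes vary per gateway:

```python
import itertools
import time
import urllib.request

# Requesting the content through several public gateways in turn
# keeps it resident in their caches without pinning it anywhere.
GATEWAYS = ["https://ipfs.io", "https://dweb.link", "https://cloudflare-ipfs.com"]
CID = "bafy..."  # placeholder: the content you want kept warm

for gateway in itertools.cycle(GATEWAYS):
    try:
        urllib.request.urlopen(f"{gateway}/ipfs/{CID}", timeout=30).read()
    except OSError:
        pass             # a dead or rate-limiting gateway just gets skipped
    time.sleep(3600)     # pick an interval well inside the caches' lifetimes
```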


@Coyote @cuaxolotl @Moon @ademan Generally I run an ipfs-cluster set and pin data there. Gonna be looking into making it work better with mutable datasets and IPNS.

Honestly might be easier to just go with a blockstore and libp2p and skip the garbage collector and pinning stuff entirely.
