@Moon @cuaxolotl Yeah, I feel like with #IPFS being more popular lately we'd at least see people attempting black hole attacks on ipfs.io or on some popular NFT collections.
@Moon @cuaxolotl You might enjoy this comparison article I wrote about #IPFS, #Hypercore #BitTorrent and #SSB.
https://blog.mauve.moe/posts/protocol-comparisons
It doesn't get super into the weeds on the DHT tho
@mauve
This is a wonderfully detailed comparison, very useful for devs trying to make protocol choices for new P2P apps. But I still managed to understand most of it, despite being more of a UX guy with very limited coding experience.
I'd love to see a similarly detailed comparison for chat protocols (IRC, XMPP, Matrix, Jami, Tox).
Man that kinda sucks. The freenet approach is wayyyyyyy more elegant (although it has the extremely unfortunate side effect that you can’t control what your node serves, which can be some very nasty things on freenet).
Basically your node keeps an LRU cache of data that has passed through it. This means that unpopular data gets dropped from the network sooner, data gets replicated automatically where it's needed in the network (reducing load on the initial "host"), and you have deniability about what your node serves.
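To make the eviction behavior concrete, here's a toy sketch of that LRU idea in Python (my own illustration, not Freenet's actual code): data passing through the node is cached, popular data gets refreshed on every hit, and whatever hasn't been touched recently falls off the end.

```python
from collections import OrderedDict

class LRUDataStore:
    """Toy model of a Freenet-style node cache. Data seen most recently
    survives; unpopular data is evicted first. Not the real implementation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> data, least recently used first

    def insert(self, key, data):
        # Data passing through the node is cached unconditionally.
        self.store[key] = data
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

    def fetch(self, key):
        # A hit refreshes the entry, so popular data survives longer.
        if key in self.store:
            self.store.move_to_end(key)
            return self.store[key]
        return None  # a miss would trigger a recursive request to peers

cache = LRUDataStore(capacity=2)
cache.insert("a", b"first")
cache.insert("b", b"second")
cache.fetch("a")             # "a" is now the most recently used entry
cache.insert("c", b"third")  # evicts "b", the least recently used
```

The point of the toy: nobody decides what to evict; the access pattern does, which is exactly why unpopular data silently leaves the network.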
All requests in freenet are recursive: you ask your peers, they ask their peers, and so on. Requests are deniable because you can't tell whether a request originated with your peer or with one of their peers further upstream.
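A minimal sketch of that recursive routing (again my own toy, not Freenet's routing algorithm, which also uses location-based key closeness): each node checks its own cache, then forwards to its peers, and the immediate peer can't distinguish an originator from a forwarder. Caching along the return path is what replicates popular data toward demand.

```python
# Toy recursive lookup over a hand-built peer graph. `peers` maps a node
# to its neighbors; `caches` maps a node to its local key -> data store.
def recursive_fetch(node, key, peers, caches, hops_left=3, visited=None):
    visited = visited or set()
    if node in visited or hops_left < 0:
        return None
    visited.add(node)
    if key in caches.get(node, {}):
        return caches[node][key]
    for peer in peers.get(node, []):
        found = recursive_fetch(peer, key, peers, caches,
                                hops_left - 1, visited)
        if found is not None:
            # Cache on the return path: data replicates toward demand,
            # and the node now has deniability about why it holds it.
            caches.setdefault(node, {})[key] = found
            return found
    return None

peers = {"A": ["B"], "B": ["C"], "C": []}
caches = {"C": {"cafebabe": b"payload"}}
recursive_fetch("A", "cafebabe", peers, caches)  # found via B -> C
```

After the call, both A and B hold a copy, even though only C "hosted" it originally.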
Sybil attacks aren't something explicitly guarded against (except in darknet mode, where you hand-pick your peers), but there are metrics for deciding whether to drop poorly behaving peers. Attacking the LRU really isn't feasible because of the way requests are routed: you'd need enough real content hashes occupying the same region of the address space to fill a specific peer's LRU.
@ademan @cuaxolotl @mauve @Moon The reference implementation for IPFS has a cache of recently requested blocks; I think it's set to 2 GB by default, but the garbage collector is really stupid and just starts deleting unpinned blocks indiscriminately until you hit the low-water mark. Anything you want to keep around, you have to pin.
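A toy sketch of the GC behavior described above (my own illustration of the described policy, not the actual go-ipfs code, and the sizes are made up): once collection triggers, unpinned blocks are removed in no particular order until the repo drops below the low-water mark.

```python
# blocks: dict of CID -> size in bytes; pinned: set of CIDs that must survive.
def collect_garbage(blocks, pinned, low_water):
    total = sum(blocks.values())
    for cid in list(blocks):       # arbitrary order: no LRU, no popularity
        if total <= low_water:
            break                  # stop once below the low-water mark
        if cid in pinned:
            continue               # pinned blocks are never collected
        total -= blocks.pop(cid)
    return blocks

blocks = {"cid1": 600, "cid2": 900, "cid3": 700}
survivors = collect_garbage(blocks, pinned={"cid2"}, low_water=1000)
# only "cid2" survives: it's pinned; the unpinned blocks went first
```

The "stupid" part is visible in the loop: there's no recency or popularity signal at all, just pinned vs. not.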
You could probably abuse some of the public gateways to keep your files around by requesting them round-robin style before they fall out of all their caches.
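That idea could be sketched like this. Everything here is hypothetical: the gateway list is just a few well-known public gateways, the CID is a placeholder, and real cache sizes and eviction timing vary per gateway, so the interval you'd need is guesswork.

```python
import urllib.request
from itertools import cycle

# Placeholder gateway list; swap in whichever public gateways you trust.
GATEWAYS = [
    "https://ipfs.io",
    "https://dweb.link",
    "https://cloudflare-ipfs.com",
]

def gateway_url(gateway, cid):
    # Standard path-gateway form: <gateway>/ipfs/<cid>
    return f"{gateway}/ipfs/{cid}"

def refresh_round_robin(cid, rounds=1):
    # Request the CID from each gateway in turn, keeping it warm in
    # every cache before any of them evicts it.
    for gateway, _ in zip(cycle(GATEWAYS), range(rounds * len(GATEWAYS))):
        try:
            with urllib.request.urlopen(gateway_url(gateway, cid),
                                        timeout=30) as resp:
                resp.read()
        except OSError:
            pass  # a dead or rate-limiting gateway shouldn't stop the loop

if __name__ == "__main__":
    refresh_round_robin("QmExampleCidGoesHere")  # placeholder CID, not real
```

You'd run this on a timer shorter than the gateways' eviction window, whatever that turns out to be in practice.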
@Coyote @cuaxolotl @Moon @ademan Generally I run an ipfs-cluster set and pin data there. Gonna be looking into making it work better with mutable datasets and IPNS.
Honestly might be easier to just go with a blockstore and libp2p and skip the garbage collector and pinning stuff entirely.
I can tell you, though, that the use of IPFS for NFTs is in significant part a shell game. Tons of NFTs use IPFS links for their data, but I've found that attempting to look up that data through any IPFS gateway other than the marketplace's or special provider's own very frequently just doesn't work. That's more of an NFT problem than an IPFS problem: the protocol is working fine, the marketplace's gateway is just busted (and they seemingly never get fixed), and no one else is sharing the file, so you end up having to fetch it from their gateway with a regular HTTPS request.