I feel like I should be using #sqlite in #p2p use cases way more.
The only thing that's unclear, really, is how write throughput would work. Periodic dumps of datasets seem like the best fit there, but that doesn't play as nicely with application UX where people expect stuff to sync on the fly.
This "Wikipedia as a static DB" use case is extremely cool for example.
The #WASM support seems neat. I heard somewhere that there's a way to perform a query across multiple DB backends at once, so it'd be cool to see if that's possible here.
https://sqlite.org/wasm/doc/trunk/api-worker1.md#method-exec
With that in place you could query data from multiple peers together without needing to merge their datasets.
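For context, plain SQLite can already do the "one query over several databases" trick via ATTACH DATABASE. A rough sketch of the idea in Python (file names and the `posts` table are made up for illustration; in the p2p scenario each file would stand in for a dataset fetched from a different peer):

```python
import os
import sqlite3
import tempfile

# Two "peer" datasets as separate SQLite files. ATTACH lets one connection
# query several database files in a single SQL statement, so nobody has to
# merge the underlying datasets first.
tmp = tempfile.mkdtemp()
peer_a = os.path.join(tmp, "peer_a.db")
peer_b = os.path.join(tmp, "peer_b.db")

for path, rows in [(peer_a, [("alice", "hello")]),
                   (peer_b, [("bob", "hi")])]:
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE posts (author TEXT, body TEXT)")
    db.executemany("INSERT INTO posts VALUES (?, ?)", rows)
    db.commit()
    db.close()

# Open one peer's DB, attach the other, and query both in one statement.
con = sqlite3.connect(peer_a)
con.execute("ATTACH DATABASE ? AS peer_b", (peer_b,))
merged = con.execute(
    "SELECT author, body FROM main.posts "
    "UNION ALL "
    "SELECT author, body FROM peer_b.posts "
    "ORDER BY author"
).fetchall()
print(merged)  # [('alice', 'hello'), ('bob', 'hi')]
```

Whether the WASM build's worker1 `exec` API exposes ATTACH the same way is exactly the open question above.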
This is kinda the approach I took with HyperBeeDeeBee in applications where multi-author queries were important.