@holo
Ah awesome, I didn't know about that, so sorry for suggesting this again!
It is possible, using something like `indexeddb-chunk-store` with WebTorrent, to create storage which persists across sessions, allowing previously visited chapters to keep seeding. The optimal use case for this is an SPA where page reloads are minimized, since re-creating the client, re-importing the torrents, and re-establishing the peers are all expensive processes. Again, this would not really work for the less frequently viewed chapters.
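As a rough sketch of what that could look like (the magnet URI here is a hypothetical placeholder; `indexeddb-chunk-store` implements the abstract-chunk-store interface that WebTorrent accepts via its `store` option):

```ts
import WebTorrent from 'webtorrent'
import IdbChunkStore from 'indexeddb-chunk-store'

// One long-lived client for the whole SPA session.
const client = new WebTorrent()

// Hypothetical magnet URI for a chapter; in practice this would come
// from the site's chapter metadata.
const chapterMagnet = 'magnet:?xt=urn:btih:...'

// Passing a custom chunk store persists pieces to IndexedDB, so the
// data survives page reloads and the chapter keeps seeding on the
// next visit without a full re-download.
client.add(chapterMagnet, { store: IdbChunkStore }, (torrent) => {
  torrent.on('done', () => {
    console.log(`${torrent.name} downloaded; continuing to seed`)
  })
})
```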
For dedicated servers, as mentioned, the H@H network has some additional features which are nice and promote running a client. One of the key features I've taken advantage of in the past is the ability to download a copy of a specific gallery to your own H@H client. This doubles as forcing your H@H instance to cache the gallery, making sure the download isn't wasted, while also copying it to a separate location so data hoarders can keep local copies of their favourite galleries.
A re-implementation of H@H or a similar client would be optimal, as from what I can see there is currently only a 0.01% cache miss rate. However, as the server-side code is private, this would require quite a bit of reverse engineering, as well as some feature which, if implemented, would incentivise people to host a client.
The core functionality of an H@H client is quite simple. There is a 110s check-in timer to a set address, which returns all the ranges (each a 125MB collection of files, 1/65535 of the entire site) that your specific client should cache, as determined by the H@H RPC server. At all times your client is expected to be able to serve all of these files, requesting on the fly any individual files within its ranges that it doesn't yet possess. Using ranges makes it far less intensive for the RPC server to allocate these files, and allows it to quickly and easily redirect the request for each image to a client it knows should have that file (based on allocated ranges), while attempting to ensure no single client gets overloaded.
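As a minimal sketch of that loop (the endpoint names, response shape, and cache helpers here are all assumptions for illustration, not the real H@H protocol):

```ts
// All endpoint names and response shapes below are hypothetical.
const RPC_SERVER = 'https://rpc.example.net' // hypothetical RPC address
const CHECK_IN_INTERVAL_MS = 110_000         // the 110s check-in timer

let assignedRanges = new Set<number>()       // ranges this client must serve

// Hypothetical cache, backed by an in-memory map for the sketch;
// a real client would persist this to disk.
const cache = new Map<string, Uint8Array>()
const cacheGet = async (id: string) => cache.get(id)
const cachePut = async (id: string, b: Uint8Array) => { cache.set(id, b) }

async function checkIn(): Promise<void> {
  // The RPC server replies with the full list of ranges (each ~125MB
  // of files, 1/65535 of the site) this client should be caching.
  const res = await fetch(`${RPC_SERVER}/checkin?client=1234`)
  const ranges: number[] = await res.json()
  assignedRanges = new Set(ranges)
}

async function serveFile(rangeId: number, fileId: string): Promise<Uint8Array> {
  if (!assignedRanges.has(rangeId)) throw new Error('not an assigned range')

  const cached = await cacheGet(fileId)
  if (cached) return cached

  // On-the-fly fetch: the client must serve every file in its ranges,
  // so a miss is filled from upstream and cached for next time.
  const res = await fetch(`${RPC_SERVER}/file/${fileId}`)
  const bytes = new Uint8Array(await res.arrayBuffer())
  await cachePut(fileId, bytes)
  return bytes
}

setInterval(checkIn, CHECK_IN_INTERVAL_MS)
```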
The inner workings of how these ranges are calculated, how they are allocated by the RPC server, and how the weighting of each range is calculated are the key challenges which would need to be solved to re-implement the H@H client and RPC server, beyond the need to encourage users to start using the application.
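One plausible (entirely assumed) scheme for the first part, keying each file into one of the 65535 ranges by the leading bytes of its content hash:

```ts
import { createHash } from 'node:crypto'

// Entirely assumed scheme: bucket a file into one of 65535 ranges by
// the first two bytes of its SHA-1 content hash. The real mapping and
// the per-range weighting are unknown without reverse engineering.
function rangeForFile(fileBytes: Uint8Array): number {
  const digest = createHash('sha1').update(fileBytes).digest()
  // Two leading bytes give 0..65535; fold the top value back in to
  // keep exactly 65535 buckets.
  return ((digest[0] << 8) | digest[1]) % 65535
}
```

The allocation side would then be some weighted assignment of those buckets across clients by capacity and bandwidth, which is exactly the part that would need reverse engineering.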
This is something I've been interested in for the 6 years I've run an H@H client, so I'm going to start investigating this a little further and help where I can if other people are interested in re-implementing the H@H client and RPC server.