As the title states, this is a question about the '/chapter' route.
To be clear, that is the route that simply lists chapter metadata, not the one for actually downloading chapters.
I figured I should 'optimize' my userscript a bit and give the API backend a break by not asking for already-seen chapters in my queries.
To be exact, I have a couple of "filters" (which are not entirely mutually exclusive, nor necessarily inclusive) that use the API params to narrow things down, so I can reduce the number of fetches and thus the load put on the db. And since I want to reduce that load further, I would ideally like to be able to tell the second filter's queries to exclude the chapter IDs already seen by the first.
The '/chapter/?ids[]=' param can be used to whitelist the exact chapters you want to see (up to 100), but I found no equivalent 'excludedIds[]=' param analogous to 'excludedGroups[]='. Does it not exist, am I blind when reading the docs, or is such a blacklist undocumented?
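For reference, here is a minimal sketch of what I mean, in userscript-style TypeScript. ids[] is the real documented whitelist param; excludedIds[] is purely the hypothetical blacklist param I'm asking about:

```ts
const API = "https://api.mangadex.org";

// What the docs do offer: whitelist up to 100 exact chapter ids.
async function fetchByIds(ids: string[]) {
  const params = new URLSearchParams();
  for (const id of ids.slice(0, 100)) params.append("ids[]", id);
  params.set("limit", "100");
  const res = await fetch(`${API}/chapter?${params}`);
  return res.json();
}

// What I'm looking for (hypothetical!): the same listing, but with
// already-seen chapter ids blacklisted, analogous to excludedGroups[].
async function fetchExcludingSeen(seenIds: string[], offset = 0) {
  const params = new URLSearchParams();
  for (const id of seenIds) params.append("excludedIds[]", id);
  params.set("limit", "100");
  params.set("offset", String(offset));
  const res = await fetch(`${API}/chapter?${params}`);
  return res.json();
}
```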
Granted, if such a blacklist param had the same limit of 100 as ids[] (which it probably wouldn't, since excludedGroups[] has no such limit), it wouldn't be terribly useful either way, as it's not uncommon for a filter to have seen 2k+ chapters if the user hasn't caught up on chapters in a week or more.
But at least it would reduce the pagination by up to a single query!
Though I am unsure whether that is enough to actually reduce load, or whether the backend would instead experience more load when each of the n paginated queries carries such a list of 100 extra params (paginating 2k results would mean n=20), all just to avoid performing 1 extra query.
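Putting rough numbers on that capped scenario (assuming the usual limit=100 page size):

```ts
// Back-of-envelope for the capped (100-id) blacklist case.
const seenChapters = 2000;                // chapters the filter has already seen
const n = Math.ceil(seenChapters / 100);  // 20 paginated queries either way

const extraParamsParsed = n * 100;        // every page carries the full 100-id blacklist
const queriesSaved = 1;                   // ...and all it buys is skipping ~1 page

console.log({ n, extraParamsParsed, queriesSaved }); // { n: 20, extraParamsParsed: 2000, queriesSaved: 1 }
```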
Overall, I am not wholly convinced it would be all that useful even if uncapped (for the backend, that is; due to the rate limits it would definitely still be useful for the client). Say filter A has n pages of results (n*100 results), and filter B then ends up having a sizeable overlap (let's pick the maximum: 100% of A appears in B) plus m pages of new results.
When excluding, that is n+m pages to paginate in total (filter A plus B-minus-A), where each of B's m pages carries the blacklist of A's n*100 IDs, giving a total of m*n*100 of those params for the backend to handle (n*100 per page). Had they not been excluded, B would instead have had n extra pages to paginate (m+n pages of results for filter B, 2n+m pages across both filters).
So, in conclusion: n more queries in return for not having those "extra" m*n*100 params to handle (at 0% overlap it would be 0 extra queries).
For the client that is a steal (n/5 fewer seconds to retrieve the data at the rate limit of 5 requests per second, and an even larger time-save if we account for fetching one cover per unique manga ID), but for the backend it really depends on the cost of the parse+filter operation versus handling the extra queries. That depends on the implementation, and could be anything from super cheap to horribly costly.
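To make that trade-off concrete with example numbers (n = 20 pages for filter A, m = 5 pages of genuinely new results for B, 100% overlap, limit=100, and the 5 requests/second rate limit assumed above):

```ts
// Worked example of the uncapped-blacklist trade-off described above.
const n = 20; // pages of results for filter A (n*100 = 2000 chapter ids seen)
const m = 5;  // pages of results unique to filter B

// With an uncapped excludedIds[]-style blacklist:
const pagesWithExclusion = n + m;       // 25 queries: A, then only B-minus-A
const blacklistParams = m * n * 100;    // 10000 excluded-id params in total (n*100 per B page)

// Without exclusion, filter B re-pages through all of A's results too:
const pagesWithoutExclusion = n + (n + m); // 45 queries across both filters

// Client-side saving at the 5 req/s rate limit (cover fetches not counted):
const secondsSaved = (pagesWithoutExclusion - pagesWithExclusion) / 5; // n/5 = 4 s

console.log({ pagesWithExclusion, blacklistParams, pagesWithoutExclusion, secondsSaved });
```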