[EN] Easy way to download raws from "Comic Valkyrie"

Joined Nov 24, 2024 · Messages: 50
Open the console (Ctrl+Shift+I) and paste this command first. You should see "JSZip loaded" in the console.

JavaScript:
const script = document.createElement('script');
script.src = 'https://cdnjs.cloudflare.com/ajax/libs/jszip/3.10.1/jszip.min.js';
script.onload = () => console.log('JSZip loaded');
document.head.appendChild(script);
After that, paste the second command. Repeat it until you have all the blobs.
JavaScript:
const zip = new JSZip();
const blobCounts = {};             // per-page counters used for file naming
const processedBlobs = new Set();  // blob URLs already added to the zip

// Fetch a blob: URL, re-encode it as a PNG via a canvas, and add it to the zip.
function downloadBlobAsPng(blobUrl, parentDivId, index) {
    return fetch(blobUrl)
        .then(response => response.blob())
        .then(blob => new Promise(resolve => {
            const url = URL.createObjectURL(blob);
            const image = new Image();
            image.onload = () => {
                const canvas = document.createElement('canvas');
                canvas.width = image.width;
                canvas.height = image.height;
                canvas.getContext('2d').drawImage(image, 0, 0);
                canvas.toBlob(pngBlob => {
                    zip.file(`${parentDivId}-${index}.png`, pngBlob);
                    URL.revokeObjectURL(url);
                    resolve();
                }, 'image/png');
            };
            image.onerror = () => {
                URL.revokeObjectURL(url);
                resolve();   // skip unreadable blobs instead of hanging the batch
            };
            image.src = url; // set src only after the handlers are attached
        }))
        .catch(err => {
            console.error(`Error fetching blob for ${parentDivId}-${index}:`, err);
        });
}

// Collect every blob-backed <img> currently in the DOM and queue it for the zip.
function processImages() {
    const images = Array.from(document.querySelectorAll('img[src^="blob:"]'));
    const promises = images.map((img) => {
        if (processedBlobs.has(img.src)) return Promise.resolve();

        // Name files after the page container, e.g. "content-p5-1.png".
        const parentDiv = img.closest('div[id^="content-p"]');
        const parentDivId = parentDiv ? parentDiv.id : 'unknown';

        blobCounts[parentDivId] = (blobCounts[parentDivId] || 0) + 1;
        processedBlobs.add(img.src);

        return downloadBlobAsPng(img.src, parentDivId, blobCounts[parentDivId]);
    });

    return Promise.all(promises).then(() => {
        console.log(`Processed ${processedBlobs.size} blobs.`);
    });
}

// Zip whatever is loaded right now and trigger the download.
processImages().then(() => {
    zip.generateAsync({ type: 'blob' }).then(content => {
        const a = document.createElement('a');
        const url = URL.createObjectURL(content);
        a.href = url;
        a.download = 'images.zip';
        document.body.appendChild(a);
        a.click();
        a.remove();
        URL.revokeObjectURL(url);
    });
});

// Keep scanning as the reader loads more pages; re-paste the snippet to
// download a fresh zip that includes them.
const observer = new MutationObserver(() => processImages());
observer.observe(document.body, { childList: true, subtree: true });
 
Dex-chan lover · Joined Jan 20, 2018 · Messages: 1,035
Edit:
OK, just tested it once...
  • It doesn't download the entire chapter. In my case it grabbed the first 6 pages out of 66, which means (I guess) I'd have to flip through the whole chapter first. A waste of time.
  • It splits every page. In my case each page came out as 3 parts (1125x536). Merging them produces an 1125x1608 image with obvious distortions at the seam boundaries, while the JSON for the corresponding jumbled JPEG clearly says
    Code:
    "views":[{"width":1125,"height":1600,
    An 8-pixel difference, which means I have to manually overlap the parts. A waste of time.
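The overlap arithmetic can be sketched like this, assuming the 8 excess pixels split evenly across the two seams (3 × 536 = 1608 vs. a target of 1600; the even split is my assumption, so verify against the actual viewer output):

```javascript
// Sketch: compute the y-offset at which to draw each part when stitching
// a page back together, overlapping each seam by an equal amount.
function seamOffsets(partHeight, partCount, targetHeight) {
    const excess = partHeight * partCount - targetHeight;  // 1608 - 1600 = 8
    const overlap = excess / (partCount - 1);              // 4 px per seam
    return Array.from({ length: partCount },
                      (_, i) => i * (partHeight - overlap));
}

console.log(seamOffsets(536, 3, 1600)); // [ 0, 532, 1064 ]
```

With these offsets the last part ends at 1064 + 536 = 1600, matching the height declared in the JSON.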
CBA to test more. Probably more, uhm, let's call them "inconveniences" 😞

I don't mean to be mean, but this doesn't look "easy", especially if you need to download 10+ pages.
 
Joined Nov 24, 2024 · Messages: 50
Scroll, run the second command again, dump the images to a folder, repeat. Nothing else I tried worked, probably because the page unloads blobs once they're 6-8 pages away. Blobs can't be downloaded directly; they need the source page alive to grant access to them. Yes, you have to adjust for the difference. Also, the code doesn't split the images; the images on Comic Valkyrie come like that. They inset them by 33.337, or something like that. The code only downloads the blobs. Blame their way of serving images.

I call this easy because instead of opening blobs in a new tab to download them one by one, it grabs them all (or whatever it can) in one go. I simply run the code again and again and dump each zip into a folder until I have all the images.
 
Dex-chan lover · Joined Jan 20, 2018 · Messages: 1,035
Also it doesn't split images. Images in Comic Valkyrie are like that.
Nope. The blobs created by the viewer may be 1/3 of the full image, but the images returned by the server are complete. Jumbled, but complete.
(attached screenshot: NkyiaSE.jpeg)

Edit: why not use HakuNeko? I'm 100% sure it can rip the binb reader used by comic-valkyrie.
 