How do the aggregators work? If they only scrape a page once, could you just upload a bunch of bogus pages first and then swap in the proper pages once the scrape is done?
I'm surprised nobody has come up with some method to foil scrapers yet, like a script that scrambles the original image files on the server and descrambles them in real time in the reader, for instance...
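Something along those lines could be as simple as a seeded tile shuffle: the server stores only the shuffled image, and a reader that knows the per-page seed rebuilds the original on the fly. Here's a rough Python sketch of the idea; the tile size, seed handling, and file names are all made up for illustration, not any site's actual scheme:

```python
# Minimal sketch of tile-shuffle image obfuscation (illustrative only).
# The server would keep the scrambled file; the reader, given the seed,
# reverses the shuffle before display.
import random
from PIL import Image

TILE = 64  # tile edge in pixels (assumed; a real scheme might vary per image)

def tile_permutation(cols: int, rows: int, seed: int) -> list[int]:
    """Deterministic shuffle of tile indices derived from a per-image seed."""
    order = list(range(cols * rows))
    random.Random(seed).shuffle(order)
    return order

def scramble(src: Image.Image, seed: int) -> Image.Image:
    cols, rows = src.width // TILE, src.height // TILE
    order = tile_permutation(cols, rows, seed)
    out = src.copy()
    for dst_idx, src_idx in enumerate(order):
        # Move the tile at src_idx into the slot at dst_idx.
        sx, sy = (src_idx % cols) * TILE, (src_idx // cols) * TILE
        dx, dy = (dst_idx % cols) * TILE, (dst_idx // cols) * TILE
        out.paste(src.crop((sx, sy, sx + TILE, sy + TILE)), (dx, dy))
    return out

def descramble(scrambled: Image.Image, seed: int) -> Image.Image:
    cols, rows = scrambled.width // TILE, scrambled.height // TILE
    order = tile_permutation(cols, rows, seed)
    out = scrambled.copy()
    for dst_idx, src_idx in enumerate(order):
        # Invert the mapping: the tile now sitting at dst_idx belongs at src_idx.
        dx, dy = (dst_idx % cols) * TILE, (dst_idx // cols) * TILE
        ox, oy = (src_idx % cols) * TILE, (src_idx // cols) * TILE
        out.paste(scrambled.crop((dx, dy, dx + TILE, dy + TILE)), (ox, oy))
    return out

if __name__ == "__main__":
    page = Image.open("page_001.png")   # hypothetical input file
    seed = 123456789                    # would be delivered per page by the server
    scramble(page, seed).save("page_001_scrambled.png")
    descramble(Image.open("page_001_scrambled.png"), seed).save("page_001_restored.png")
```

Of course, anything the reader can descramble a scraper can descramble too once it learns where the seed comes from, so at best this raises the effort bar rather than stopping scraping outright.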