Re: Merged Compression Tests
themabus, you have no idea how those methods really work, do you?
No worries, let's start with ImageDiff which, if you had used it even once, you would know is slower than simple decompression:
unecm "Final Fantasy IX (E)(Disc 1 of 4)[SLES-02965].bin.ecm" ~75 sec
imagediff patching "Final Fantasy IX (E)(Disc 1 of 4)[SLES-02965].bin" into version "(F)(Disc 1 of 4)[SLES-02966]" ~5 min
ImageDiff patches in 3 basic steps:
1. calculate the MD5 (at HDD speed) to validate the integrity of the foreign image
2. create a sector map of the foreign image (also at HDD speed)
3. rebuild the image while comparing ImageDiff's sector map against the sector map of the foreign image (and this step is "slightly" slower, 2-3x, than just reading the whole image)
So it reads the whole original image at least 3 times!
Now compare that to extracting ALL 8 files into their original form (without any patching) in 8 (eight) minutes from a FreeArc archive, plus unecm on the 8 files, which by itself takes longer than the decompression: 8 × 75 s ≈ 10 minutes.
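A rough back-of-the-envelope using only the timings quoted in this thread (75 s per unecm, ~5 min per ImageDiff patch, 8 min for the FreeArc extraction; the "1 base + 7 diffs" layout for the ImageDiff route is my assumption about how the 8-image set would be stored):

```python
# Timings quoted in the thread, in seconds.
UNECM_S = 75
PATCH_S = 5 * 60
FREEARC_EXTRACT_S = 8 * 60
FILES = 8

# FreeArc route: extract all 8 files, then unecm each one.
freearc_total = FREEARC_EXTRACT_S + FILES * UNECM_S

# ImageDiff route (assumed): unecm the one stored base image,
# then patch it into the other 7 versions.
imagediff_total = UNECM_S + (FILES - 1) * PATCH_S

print(freearc_total / 60, imagediff_total / 60)  # prints 18.0 36.25
```

So even with unecm included, the FreeArc route comes out roughly twice as fast under these assumptions.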
FreeArc is ultra fast because of its method:
- first, it decompresses the 380 MB archive into the repetition filter's dictionary + the non-common data (at this step you don't have the 5 GB un-merged set yet, only the ~700 MB dictionary + ~50 MB of data)
- next, the repetition filter works the way you thought ImageDiff should work: it rebuilds the files from the common parts stored in the dictionary, which is loaded into RAM for fast access (this is where the magic of un-merging into 5 GB happens)
- the rebuilt data is stored in cache and then written to the HDD
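The dictionary-based un-merge idea can be sketched like this (a hypothetical block-dedup scheme to show the principle, not FreeArc's actual rep-filter format; the block size and hashing are my assumptions):

```python
import hashlib

BLOCK = 4096  # assumed block granularity for the sketch

def split_common(images):
    """Merge step: factor a set of images into one shared dictionary of
    blocks plus a per-image recipe of block references. Identical blocks
    across images are stored only once."""
    dictionary = {}   # block hash -> block bytes (the "common parts")
    recipes = []
    for img in images:
        recipe = []
        for off in range(0, len(img), BLOCK):
            block = img[off:off + BLOCK]
            key = hashlib.sha1(block).digest()
            dictionary.setdefault(key, block)
            recipe.append(key)
        recipes.append(recipe)
    return dictionary, recipes

def unmerge(dictionary, recipe):
    """Un-merge step: rebuild one full image by following its recipe
    through the in-RAM dictionary - no sector-by-sector comparison
    against a source file, so it runs at near-memory speed."""
    return b"".join(dictionary[key] for key in recipe)
```

This is why the rebuild is so much faster than patching: once the dictionary sits in RAM, reconstructing each image is a straight sequence of lookups and one sequential write, instead of three passes over the source image.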
And as I already wrote at the beginning: "no need for storing by ImageDiff when merging with a repetition filter is much more convenient".