151

(26 replies, posted in General discussion)

oh
that's great - then it would be really nice to have those in .dat

152

(26 replies, posted in General discussion)

to be honest, i don't want to
i could have made .sbi optional altogether, so it would rather be .cue -> .ccd+.bin
but it could get out of control then, as i see it
people might convert images and reupload them without referencing redump.org at all
and those would likely get more popular than ours because they're easier to use
so while they'd still be valid images
a lot of the project's meaning would be lost and redump.org would have to compete with its own subset
i don't want that to happen,
i think this also might be one of the reasons why Dremora didn't create such a program
so i chose to make it as specific as possible

153

(26 replies, posted in General discussion)

.dat is meant as a container to retrieve sizes from, instead of the actual .bin files, when they're not accessible
or when the 1st pass is already done & .img, etc. created:
running the program with a .dat will overwrite only .sub & .ccd, not the .img itself
so it's slightly faster and easier on the HDD for those cases when only testing $8001 vs $0080 is required

lol, Dremora can't manage to write such a tool in 3 years time and you do it in 1 day (and you both live in Latvia? tongue)

So the sbi format is missing relevant data that cannot be generated again? How about a tool which loads the libcrypt sectors from the database instead of using the sbi file? cool

no, no, Dremora had most of it done in psxt001z a long time ago - there is a SUB generator, it just doesn't take a TOC as input
i guess he had his reasons not to make it, or he really is very busy
and so did i - i had written a very similar program recently, to make .SUBs for those Saturn ring tests
so i just had to make a few little changes here and there - it's not a big deal
thanks though smile

did a few more tests - i masked first one MSF column, then the 2nd, in Crash Bash
and it didn't pass either time, no matter the CRC value
so it isn't a threshold (any CRCs pass, but modded MSFs do not)
it's strange - i really thought this would be it
maybe there really is nothing more to this?

and so far all records in the db belong to one of those two patterns, afaict,
so it's 50:50 for about 100 records,
not that bad if somebody just wants to burn some CDs

but for the sake of preservation SBI, as it is now, does not fit, imho
which exact pattern a disc belongs to is lost, so if there were no more DB
guessing would be all that's left, and even if both values would pass on a PSX
this information itself would be lost

batch reading from the db is possible, cHrI8l3 would know better than i
but i think it wouldn't be good to do that - there's no simple way, imho
it would retrieve those 100 records whole, every time
things like that could basically kill the server

one single read to save the manual labour of making a backup in .txt or so - for later processing -
on the other hand, would do no harm

154

(26 replies, posted in General discussion)

well ok,
i've made a program that converts .sbi -> .sub, taking the magic value as an input parameter
sbi2sub

processed following images with it:
Crash Bash
Speed Freaks

and tested with ePSXE 1.70

plugins that do not appear to read subcode at all (they crashed @LC):
SaPu's CD-ROM
Xeven's Cdr

plugins that should have worked but didn't (both games hung prior to LC with those):
P.E.Op.S. CDR
Pete's CDR

plugins that did work:
ePSXe CDR <- the only one that passed and also runs actual CDs instead of images
Moby2 cd disk image driver
and additionally ePSXe's Run ISO command

without subcode:
Crash Bash would crash on the loading screen after character select & intro, before stage select
Speed Freaks hangs while the Neon City stage is loading

with subcode those parts passed with both XOR patterns ($8001 & $0080)
i didn't test further though:
maybe there are later checks that wouldn't pass
maybe there is a certain threshold, for example:
a byte from each MSF + 2 CRC bytes = 4 modified bytes - maybe half of them passing is sufficient
but a CD-R recorded this way would degrade much faster
maybe hardware or even other emulators act differently
so i didn't test all those things - i'll take a closer look later with a debugger - it's quite interesting

but consider this:
if those CRCs are there just for kicks - completely unneeded to pass LibCrypt -
it's a huge flaw in the protection then, imho - this pattern gives away those special blocks
so what if Sony removed it later?
what if there are LC versions without modified CRCs, or at least without a constant XOR (maybe even for USA or Japan)?
we wouldn't pick those up currently, right?
the only way would be to clean the subchannel thoroughly - with multiple rereads -
and compare it later with one from a different CD, made in a similar way
psxt001z is capable of doing this, afaik, but nobody would - it takes forever
it's such a drag, not everyone even tests CDs with the fast option

155

(26 replies, posted in General discussion)

but CRCs are modified as well -
it's either (the CRC of the clean block or the CRC of the modified block) XORed with one of two predefined values:
0x0080 or 0x8001
so what is this all about? why would Sony bother XORing then?
to tell people where LC is, so it could be copied with ease?
maybe emulators cheat? have you tested on console?
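to make the XOR relation above concrete, here's a rough sketch (my own, not from any of the tools mentioned) of how the stored CRC relates to the two patterns - the Q CRC itself is, as far as i know, CRC-16 with poly 0x1021 stored inverted, and the clean/modified Q bytes are placeholders:

def crc16_q(data: bytes) -> int:
    # CRC-16 over the 10 Q data bytes (poly 0x1021, init 0), stored inverted on disc
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc ^ 0xFFFF

def candidate_stored_crcs(q_clean: bytes, q_modified: bytes):
    unmd = crc16_q(q_clean)      # CRC of the untampered Q ('Unmd')
    real = crc16_q(q_modified)   # CRC of the Q as actually written ('Real')
    # one pattern stores real ^ 0x0080, the other stores unmd ^ 0x8001
    return real ^ 0x0080, unmd ^ 0x8001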

i don't think the SBI author really understood how LibCrypt works,
since there's that possibility to include only one of the MSFs (saving 7 bytes?) -
with LibCrypt this seems to never happen - it's always both or none, afaict
the CRC can even be the sole value modified:
http://redump.org/disc/1128/
those sectors are known to be LC,
with them the pattern looks complete: 32 sectors total, divided into groups of 16
the 3rd entry is @offset 0x20 in the SBI, containing only valid bytes
how can modified data be recreated from unmodified values?
btw such blocks would be skipped by the original plugin - it only stores ones with modified MSFs

so this complexity of the format appears completely irrational and quite useless,
it could have been just MSF + the entire Q
overcomplication is often a result of a lack of understanding

156

(26 replies, posted in General discussion)

One thing I can suggest though is that we do need something added to the site to give us a list of the games that do have sbi files on them so that we can quickly know what games have been proven to have LibCrypt protection on them without having to look at each game separately.

http://redump.org/discs/system/psx/libcrypt/2/


btw, do those SBIs even work?

there appears to be no CRC stored within them,
so it's supposed to be recreated, i guess
but since there are multiple patterns,
how would you tell whether to XOR with 0x0080 or with 0x8001?
if it's the byte right after the MSF, something's wrong then - it's never different from 0x01

for example:

same changes on CDs with different patterns:

                      C/A TNO IND M   S   F   Zro aM  aS  aF  CRC      Unmd   LC1    CRC      Real   LC2
MSF: 03:08:05 Q-Data: 41  01  01  07* 06  05  00 *23  08  05  ffb8 xor b838 = 4780 | ffb8 xor ff38 = 0080
MSF: 03:08:05 Q-Data: 41  01  01  07* 06  05  00 *23  08  05  3839 xor b838 = 8001 | 3839 xor ff38 = c701

in the SBI the CRC is lost:

                      C/A TNO IND M   S   F   Zro aM  aS  aF  CRC      Unmd   LC1    CRC      Real   LC2
MSF: 03:08:05 Q-Data: 41  01  01  07* 06  05  00 *23  08  05  ???? xor b838 = 4780 | ???? xor ff38 = 0080
MSF: 03:08:05 Q-Data: 41  01  01  07* 06  05  00 *23  08  05  ???? xor b838 = 8001 | ???? xor ff38 = c701

so to recalculate the CRC it could be either one of these algorithms:
a) Real XOR 0080
b) Unmd XOR 8001
but only one of them is valid for each CD

i.e.
this is as far as SBI goes:
MSF: 03:08:05 Q-Data: 41  01  01  07* 06  05  00 *23  08  05
those bytes are the same for both CDs
http://redump.org/disc/592/
http://redump.org/disc/798/
but each has a different pattern in the DB

it can't be right...

edit:
from what i can tell from the cdr plugin sources
a 0x01 following the MSF indicates both time values being modified
and that's as much as SBI can hold
the other options are 0x02 and 0x03 - for storing only the relative or the absolute MSF
so i'd say there's no way for SBI to hold CRCs - it's an unfortunate limitation of the format
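for illustration, a minimal reader for that layout as i understand it (the field sizes here are my assumption from those sources, so double-check against an actual .sbi):

def read_sbi(path):
    # assumed layout: 4-byte "SBI\0" header, then per entry 3 BCD bytes of
    # absolute MSF, 1 type byte, and 10 replacement Q bytes (no CRC) for
    # type 0x01, or 3 bytes for types 0x02 / 0x03
    entries = []
    with open(path, "rb") as f:
        if f.read(4) != b"SBI\0":
            raise ValueError("not an SBI file")
        while True:
            msf = f.read(3)
            if len(msf) < 3:        # end of file
                break
            kind = f.read(1)[0]
            data = f.read(10 if kind == 0x01 else 3)
            entries.append((msf.hex(), kind, data.hex()))
    return entries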

so what are those SBIs even good for, then?

gap detection never completes?
you could try another interface under EAC Options / Interface
or under 'Drive Options' change the 'Detection accuracy'

it could also be EAC's problem

if those last sectors are visible from IsoBuster or a similar program,
you can extract the last track that way - as a range

remove the leading and trailing bytes, i.e. sync it with the one extracted by EAC,
so they start the same, only this one should have data running longer
where it's already zeroes in the EAC rip.

i'm sorry, i'm not sure what you mean
if EAC hangs on gap detection - you should change to a different Detection Method: A, B or C
usually it would be A
this is described in the guide under 'Setting up EAC the first time'
http://redump.org/guide/cddumping/

since the whole sector is filled with scrambled data, you have to move to the next one until this data runs out
there are 2352 user data bytes in a raw sector = 0x930 = 588 samples
so add 588 samples for each whole sector you skip
and then count the exact number of samples in the last one
that should be your combined offset
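in other words, just restating the above as a tiny helper (sector/sample sizes taken from the numbers given):

BYTES_PER_SECTOR = 2352                        # 0x930
SAMPLES_PER_SECTOR = BYTES_PER_SECTOR // 4     # 588 four-byte stereo samples

def combined_offset(skipped_sectors: int, samples_in_last: int) -> int:
    # 588 samples per fully scrambled sector skipped, plus the samples
    # counted in the last, partially filled sector
    return skipped_sectors * SAMPLES_PER_SECTOR + samples_in_last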

regarding pregaps - hit F4 in EAC to initiate detection
but it will be 0 in cases when there are only 2 tracks, so it's more complicated then

thanks SoulReever
it looks good, but you'll have to wait a while until the admins add you to the dumper list in the db

IsoBuster provides track sizes from the TOC,
but for the db 150 sectors should be removed, as described in the guide (section 'Removing the pregap using Resize')
http://redump.org/guide/cddumping/
usually they would be included with the following audio track when extracted with EAC
so the removed sectors are duplicates - nothing to worry about
(unless there's audio in the gap or right after the gap and the combined offset is negative - EAC can't handle that, but it's quite rare)

for gap detection always use method A
in this case it is a problem with EAC and it should be 0

so you would extract the tracks as usual with IsoBuster/EAC
remove 150 sectors from the end of Track 01, like always
and then also add 150 empty sectors to the beginning of Track 02 - manually this time
(since EAC didn't include them)
as described here, in point '1)'
http://forum.redump.org/post/16045/#p16045
in your case that's 352800 bytes

when you add up the sizes of both files afterwards, the sum should be (last CD sector from TOC + 1) * 2352
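if you want to double-check, something like this quick sketch (filenames are placeholders) confirms the numbers:

import os

SECTOR = 2352
PREGAP = 150 * SECTOR                  # = 352800 bytes of empty pregap

def check(track01, track02, last_sector_from_toc):
    # after the resize, Track 01 + Track 02 together must cover the whole disc
    total = os.path.getsize(track01) + os.path.getsize(track02)
    expected = (last_sector_from_toc + 1) * SECTOR
    print("ok" if total == expected else "off by %d bytes" % (total - expected))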

163

(18 replies, posted in General discussion)

if the records were described properly, there would be no need to worry about additional information
it could be included later on, by different people, perhaps a different project - it wouldn't matter
we'd have done the best we can on our part - preservation of CD images in a way that relates them to the source material
so if somebody later thinks that the release date is important, or the publisher, or something else -
there would be no barriers to including such information
it would relate to the records exactly as it does to the original medium

but unfortunately the lack of information is only one of the problems associated with the current naming scheme

a certain amount of misinformation is being produced:
apparently Sony assigned serials to editions (not titles) for the PSX Japan market
now serials are being included in filenames
but, except for a few instances, you will never be able to find later editions (such as PlayStation the Best, PSone Books, etc.)
in redump.org's PSX Japan set (outside the website, that is)
currently all kinds of serials are being treated on the same level:
Sony-assigned, publisher-assigned, EXE names, others (misprints and such)
and, when the filename is generated, only the first one from the set will be included,
which mostly, but not always, is the EXE name
this creates the illusion that Sony's edition<->serial relationship is present
(which people are used to - since it's the association you'll find everywhere on the web,
so i believe this is how they will interpret a serial: as describing the edition),
when in fact it's lost and crooked

also systems would get increasingly diversified for no sound reason -
solely because different ways are implemented to separate colliding records, when there could be one

Maybe a hand-edited clrmame dat would be sufficient for most info that is now only available on the website.

indeed it is possible to create scripts that would leech this information from redump.org and reassemble the .dat
but it's not a rational solution
it's a hack in a sense
besides, nobody would care about it
why would they, when you're getting data directly from the website and even images (CDs) organized accordingly?

this is of course my subjective view of things
to one problem there can be several answers
so i wouldn't want to propagandize this (maybe deluded) opinion further
but i would rather like to see people think about those matters on their own - not being so passive
i hope then, if there's enough interest, a sensible solution will be reached

164

(18 replies, posted in General discussion)

I guess, it would be better to include all the metadata in the dat.

it could be .dat if there were means to process it afterwards
otherwise it's in vain - it will be lost there

This type of thing has been discussed before over at No-Intro...

that's completely not what i mean
it's not a matter of trust

there's enough information in the db on the website, but it's being lost on its way out
what people get are images that do not mean anything by themselves
the only way to connect 'Game (v1.1)' with the actual media is through the website
i believe there should not be such a dependence
images should stand on their own - a representation of the physical media in electronic form,
an alternative for those that for one reason or another can not have it

it would lack some information, like cover scans and manuals, but currently that is not our concern - we make backups of CDs
if described properly, we could at least go that far

there could be other projects that would fill in the missing information, or redump.org could do it later on
but with the current descriptions, which are the file names, that's impossible

this whole thing we're doing is getting perverted at the last step - that's what's sad - it could be so much better

165

(18 replies, posted in General discussion)

oh, nice, cHrI8l3

166

(18 replies, posted in General discussion)

Can you tell me any site that does correct "preservation"? :x

i can't

in my opinion TOSEC is overdoing it with meaningless information tags while not making backups properly

i'm not familiar with cartridge media, hence cannot comment on No-Intro - how well it's done
but i think at least their naming convention provides a good means for such an implementation

What is missing for actual preservation is the attached metadata with more information. This is currently implemented on the redump.org website in a limited way. Ideally this data should be included with the binary copies that were migrated and should be replicated.
Also everything else that was with the physical media should be preserved (Cover, Manual).

indeed, you're right

i see the .dat and everything derived from it (including the CD images) as an independent set of information on its own.
the website is merely a mechanism to produce it; there shouldn't be further dependence.

So what should be done to make it "real preservation" for you? For myself, including metadata that has more information how the dump has been obtained would be a good starting point.

names should change.

they can hold a lot of this metadata associated with the source material.
as of now they basically serve only to identify the title and then separate records - avoid collisions.
they could describe the source medium better, down to the edition at least - as far as possible.

for example, currently the 'many-to-one' relationship from the db is being reduced to 'one-to-one' in the .dat
when several editions yield identical CRCs, this information is lost outside of redump.org
this problem will escalate with time (and it's only one of several)

should somebody want to add covers or manuals for PSX CDs now - as an independent project linked to redump.org images -
he'd be faced with the fact that it's impossible to do so without manually examining the records on the redump.org website

167

(18 replies, posted in General discussion)

Wow pretty selfish act man...
I don't see the point...
The last thing this site needs...

thanks

EXPENSIVE hard to obtain discs from Japan?

www.ebay.com

This site is about disc PRESERVATION.

well, this site might be - on paper. but when you look at what's really happening -
there's no way to trace those images back to actual CDs - how is this preservation? it's a charade.
it has been discussed a lot among the staff, and nothing has changed
since talking is useless, and being a 'Dumper' myself like you are,
that's about the only other option i have left to try to influence anything here.
but i wouldn't want to hijack this thread with this; it has nothing to do with Merged Compression

BadSector isn't some leecher collecting images just for himself; the guy makes a great effort to distribute dumps to help ensure preservation.

it has nothing to do with BadSector. i personally have great respect for him.
i haven't been uploading for a while; i just mentioned it now.

168

(18 replies, posted in General discussion)

oh ok, i see

i'm sorry about that, but i won't upload anything anymore
i hope those missing images will motivate people to redump them and join the project
maybe they will see things as i see them regarding naming and somehow accelerate a decision
that may be stupid, but it's about the only thing i can do

169

(43 replies, posted in General discussion)

in fact one of our project torrents has a dat file with packiso's CRCs for quick renaming, and we plan to release the dat with every update from now on, which should solve the time issue in renaming those sets.

that's for people who would get those images elsewhere and then compress them accordingly - with PackIso -
to join the torrent at a higher position?
but the thing is, there aren't a lot of those people, as i understand it.
the majority won't be bothered with renaming themselves - they'll take what is given.

so, imho, if somebody feels like recompressing and reuploading, like i suspect xenogears may (to .zip/.rar),
it wouldn't do any harm - the more of those torrents there are, the better
one enforced set without alternatives wouldn't be good
even though i think .zip or .rar in particular aren't rational - still, if somebody feels differently about it - it won't harm
(well, not any more than any torrent based on redump.org .dats in their current state
i see a general problem with names, i think they're terribly wrong,
hence also in torrents and anything else derived from them
but that is a concern of the redump.org crew)

afterwards, to keep names in sync with updates made at redump.org, it would be enough to compare .dat files (redump's):
the one taken when the set was made, and the current one - there's no need to rescan the whole set every time, imho
(or to produce alternative .dats for that reason)
when it's uploaded it's locked to redump.org at a certain point in time (like a snapshot),
so you can say the torrent is a subset of that .dat
when you see the CRCs of some title change from .dat to .dat - you know that CD should be updated
when the same CRCs belong to a different title now - it was renamed
and new records would manifest as new titles with new CRCs
that's all the management there is, and it is a concern solely of the person maintaining the torrent, not everyone downloading it, as i see it
i don't really see an application for a .dat indexing the compressed files
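to illustrate that bookkeeping, something along these lines would do (a sketch only - it assumes both .dats were already parsed into dictionaries mapping title -> frozenset of track CRCs, which i'm leaving out):

def diff_dats(old: dict, new: dict):
    # same title, different crcs -> the CD was updated
    updated = [t for t in new if t in old and new[t] != old[t]]
    # different title, same crcs -> it was only renamed
    renamed = [(o, n) for o in old for n in new if o != n and old[o] == new[n]]
    # title and crcs both unseen -> a genuinely new record
    added = [t for t in new if t not in old and new[t] not in old.values()]
    return updated, renamed, added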

you should browse to the next sector until you see all 0x00 following the scrambled data
at the position where a new sector would usually start: 0x02D0, in this case

02D0 : 00 FF FF FF FF FF FF FF  FF FF FF 00 03 D0 39 61

it's the header of sector 12639
the last one before the gap is likely 12640, so it's a couple more pages forward

if that is so, then the combined offset would be (0x930/4)*2 + (0x2D0/4) = 588*2 + 180 = 1356

but you should check for yourself, maybe i made a mistake there
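the same arithmetic, if you want to check it yourself:

# two whole scrambled sectors plus 0x2D0 bytes into the third, in 4-byte samples
print((0x930 // 4) * 2 + 0x2D0 // 4)   # -> 1356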

171

(10 replies, posted in General discussion)

many of the reprint games with diff serials will have the same data/version #, but this way we can make sure we have at least one of every serial in case they are different versions.

for Japan - yes
but for PAL / NTSC-U it's mostly the same serial for all releases; they could be the same or different -
you won't know unless all of them are verified.

for example:

PSX Redump.org Missing From Database NTSC-US wrote:

#
SLUS-00066    3D Baseball

SLUS-01272    007: The World Is Not Enough isn't listed, but only 'Greatest Hits' was dumped

PSX Redump.org Missing From Database NTSC-US wrote:

SLUS-00887    Action Man - Operation Extreme
SLUS-00777    Activision Classics
SCUS-94502    Adidas Power Soccer

there is an undumped 'Greatest Hits' of Activision Classics with the same serial that isn't listed
http://www.vgrebirth.org/games/search.a … 2&y=11

PSX Redump.org Missing From Database PAL wrote:

SCES-00902    Ace Combat 2    [G]
SLES-03083    Action Man - Destruction X    [M5]
SLES-00015    Actua Golf    [Unknown]

Ace Combat 3: Electrosphere isn't listed as missing, but only 'Original' is dumped; 'Platinum' still missing
http://www.vgrebirth.org/games/search.a … =0&y=0

a list with titles only and a statement explaining that you're just looking for a single version of each of those particular games
(which may not be all there is for a specific region)
would make more sense, imho - people would understand it clearly and have no illusion of its completeness

but still, why would you take such a burden upon yourself?
redump.org can be used directly to see whether a particular game (down to the edition) was dumped

i mean if you see a use for it - sure - i don't object, but to me it just doesn't seem rational

172

(10 replies, posted in General discussion)

it must have been a lot of work, so props for that
but it isn't a good idea generally, imho

first of all redumps aren't any less welcome than new entries
secondly, as mentioned, it's a subset of a list that is incomplete by itself
finally - serials, which i see as a huge problem with redump.org

Please note this list is serial# specific, not title specific.

it doesn't explain further than that, but basically you'd get a Japanese list different from the rest -
it would have all the multiple releases listed while Europe and USA would not,
they'd list only a single entry for each title (with a few exceptions), right?
this is a huge inconsistency.
it means that for Europe and USA a re-release = the original, so when it's dumped you remove the original from the list
but for Japan you'd remove only that exact re-release

There is some argument on proper titles for certain games, which results in a shift in starting letter; always search by serial number. Please do not add serials/titles to this list until you have searched the redump.org database by serial #.

serials are currently messed up on redump.org (imho)
you get serials from the exe name (a mastering artifact) on the same level as Sony's assigned ones
so when you find a serial in the db, it doesn't really mean that a game with that assigned serial
(which your list is based upon) has been dumped

so, for consistency, imho, it would be far better to have a list with a Title -> Edition relation,
or solely Titles - getting rid of serials altogether - if you need a list at all,
but i do not see such a necessity at all

173

(43 replies, posted in General discussion)

i don't think there's anything to worry about
neither i nor cHrI8l3 maintain those sets; they'll likely remain as they are

i myself think it's still way too early for this,
it would be more appropriate when PSX is about 80% or so complete
and even then likely as an alternative to PackIso

though regarding .zip or .rar, i think it's a step backwards
they cater entirely to people with fast connections, a lot of storage space and time to waste

the compression increase a merged set offers is unprecedented
it won't be 8 times of course, but on average it'd easily be x2 over PackIso
and decompression speed is still good
http://img410.imageshack.us/img410/5558/merged.png
so let's say this 80% PSX set takes 1 TB with PackIso
then it would be 500 GB merged
my connection allows a maximum download speed of 500 kilobytes per second, which is about average i guess
so i'd save (1024*1024)/60/60/24 = ~12 full days on the download
that's a lot of space and time saved
the price is slower decompression speed,
but to lose those 12 days i saved, i'd have to decompress really often
let's be generous and say it's a 2 minute overhead per decompression, which is not true
(it's not even true for zip vs merged, but it's ok - let's be generous)
so then 12*24*60/2 = 8640
i'd need to decompress 8640 images one by one to claim 'it wasn't worth it' -
and if i decompressed all merged versions of the same title at once, i'd actually save time
(8640 happens to be about 80% of PSX titles, so i'd need to decompress every single one of them: French, German, etc...
that's very unlikely, and i'd still save space)
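spelled out, those numbers are just (same assumed sizes and speeds as above):

# 500 GB saved on the download at ~500 KB/s, vs a generous 2-minute overhead per extraction
saved_seconds = 500 * 1024 * 1024 / 500   # 500 GB expressed in KB, at 500 KB/s
print(saved_seconds / 60 / 60 / 24)       # ~12.1 days saved
print(12 * 24 * 60 // 2)                  # 8640 extractions needed to spend those 12 days again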

about ease of use - it's not a problem either, imho
a graphical frontend can be made that would allow user-friendly extraction

it's the same
when you have 2 files on the command line, 'temp.bin' and 'track01.bin',
'left' moves data from the beginning of 'track01.bin' to the end of 'temp.bin'
'right' does the opposite

though, if you want to just add blank data (0x00) you'd need to generate a dummy file for that first,
which you can do with psxt001z
and then it's easier to do copy /b
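for reference, the 'left' direction amounts to roughly this (my own sketch - the byte count and filenames are placeholders, the real tool works in samples/sectors):

def move_left(n: int, src: str = "track01.bin", dst: str = "temp.bin") -> None:
    # take n bytes off the start of src and append them to the end of dst
    with open(src, "rb") as f:
        head, rest = f.read(n), f.read()
    with open(dst, "ab") as f:
        f.write(head)
    with open(src, "wb") as f:
        f.write(rest)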

175

(43 replies, posted in General discussion)

Size

Split:

 zip (7z -tzip)       :3996932848 <| 8 * 499616606 - average from 3 samples

 rar (-m5)            :3725107104 <| 8 * 465638388 - average from 3 samples

 PackIso (ECM->7z)    :3020367596 <| taken from cHrI8l3's table but it's slightly off: each archive is about 4..6 bytes smaller

Merged:

 ImageDiff+ECM->7z    : 379311961 <| ImageDiff with default settings

 Xdelta3+ECM->7z      : 387772715 <| Xdelta: -N -D -R -n -0; 7z: -mx=9; it's strange though, patches themselves are smaller uncompressed

 Xdelta3+ECM->FreeArc : 394458697 <| -m4, -m4x (size is the same)

 Xdelta3+ECM->FreeArc : 388348030 <| -m9x

Compression speed

Split:

 zip (7z -tzip)       : 8 *  ~87 =  ~696 seconds

 rar (-m5)            : 8 * ~347 = ~2776

 PackIso (ECM->7z)    : 8 * ~250 = ~2000

Merged:

 ImageDiff+ECM->7z    : 814
  ECM                 : 36
  ImageDiff           : 7 *  ~56 =  ~392
  7z                  : 386

 Xdelta3+ECM->7z:     : 605
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  7z                  : 373

 Xdelta3+ECM->FreeArc : 624
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  FreeArc             : 392               <| -m4

 Xdelta3+ECM->FreeArc : 622
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  FreeArc             : 390               <| -m4x

 Xdelta3+ECM->FreeArc : 797
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  FreeArc             : 565               <| -m9x

Decompression speed

Split:

 zip (7z -tzip)       : 32..256 (8 *  ~32 =  ~256)

 rar (-m5)            : 40..320 (8 *  ~40 =  ~320)

 PackIso (ECM->7z)    : 72..576 (8 *  ~72 =  ~576)

Merged:

 ImageDiff+ECM->7z    : 84(209)..959
  unECM               : 36
  ImagePatch          : 7 * ~125 =  ~875
  7z (1 or many diffs): 48

 Xdelta3+ECM->7z      : 84(118)..322
  unECM               : 36
  Xdelta3             : 7 *  ~34 =  ~238
  7z (1 or many diffs): 48

 Xdelta3+ECM->FreeArc : 97(131)..335
  unECM               : 36
  Xdelta3             : 7 *  ~34 =  ~238
  FA (1 or many diffs): 61                <| -m4, -m4x

 Xdelta3+ECM->FreeArc : 101(135)..339
  unECM               : 36
  Xdelta3             : 7 *  ~34 =  ~238
  FA (1 or many diffs): 65                <| -m9x

Programs used

7-Zip 4.53 (PackIso)
7-Zip 4.65
ECM v1.0
FreeArc 0.50
ImageDiff v0.9.8
RAR 3.80
Xdelta 3.0u

ImageDiff is indeed quite slow with larger files, though the patches it produces, while being larger, compress better for some reason.
replacing it with a similar program, Xdelta3, improved both compression and decompression speeds a lot.
replacing 7z with FreeArc, on the other hand, didn't improve anything, though i tested just a few options:
-m4 - for being suggested as equal to 7z's -mx=9, which i commonly use
and a couple more
(maybe it does beat 7z with some options - i'm not saying it doesn't, they're quite close anyway)
also i didn't test those built-in filter chains

from those results i'd say 'Xdelta3+ECM->LZMA' is the optimal configuration;
if the .ecm is created for the most demanded version from the set (U or E, whichever it is)
it would lose only a few seconds on decompression to PackIso, while improving the ratio a lot
(if the game contained audio tracks it would probably be a tie (TAK vs APE))
it would be worse when patching is required, but still acceptable, imho
also the whole set would compress/decompress considerably faster,
but it's unlikely somebody would do that, imho - not often at least
also the memory requirements for 7z @ x9 are ok: 700 MB / 70 MB

if such a set were created now, it would involve a lot of constant recompression though - whenever a title is added,
so it's too early, imho, but otherwise i like it a lot
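for what it's worth, a rough driver for the 'Xdelta3+ECM->7z' configuration would look something like this (my reconstruction, not the scripts used for the table above - the exact command lines of ecm/xdelta3/7z should be checked against the versions listed):

import subprocess

def merge_set(base_img, variant_imgs, out_archive):
    # ECM the base image, xdelta3-diff every other variant against it
    # (flags as quoted in the table), then pack everything into one 7z archive
    subprocess.run(["ecm", base_img, base_img + ".ecm"], check=True)
    patches = []
    for img in variant_imgs:
        patch = img + ".xd3"
        subprocess.run(["xdelta3", "-e", "-N", "-D", "-R", "-n", "-0",
                        "-s", base_img, img, patch], check=True)
        patches.append(patch)
    subprocess.run(["7z", "a", "-mx=9", out_archive, base_img + ".ecm", *patches],
                   check=True)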