1 (edited by Nexy 2011-10-13 11:13:14)

We would respectfully request sub code dumps of all CDs with CDDA/audio tracks, and of protected games, for research and analysis purposes.

There is some concern about discrepancies with gaps using the current "find your own way" method. There is also curiosity about anomalies in the sub data for protected titles.

It is recommended that the drive you use be capable of properly dumping sub code; most drives support this these days.

Please contact either me or F1reb4ll for the tool we request be used for this purpose.

Would you please post the test.log and test.sub files on a file host so that the moderators and researchers can check and analyze them? I will create a small tutorial on using these sub code dumps to determine how to dump the data tracks with the proper length. You will also need to check the lengths of the dumped audio tracks to make sure that they are correct.
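To make that length check concrete, here is a minimal sketch (not an official tool) that reports the length of each dumped track file in sectors and MSF. It assumes raw 2352-byte sectors with one file per track; the file names passed on the command line are just examples.

```python
# Minimal sanity check for dumped track lengths (a sketch, not an official tool).
# Assumes raw 2352-byte sectors with one file per track.
import os
import sys

SECTOR_SIZE = 2352  # bytes per raw CD sector

def check_track(path):
    size = os.path.getsize(path)
    sectors, remainder = divmod(size, SECTOR_SIZE)
    minutes, rest = divmod(sectors, 60 * 75)
    seconds, frames = divmod(rest, 75)
    status = "OK" if remainder == 0 else f"NOT sector-aligned ({remainder} bytes left over)"
    print(f"{path}: {sectors} sectors ({minutes:02d}:{seconds:02d}:{frames:02d}) - {status}")
    return sectors

if __name__ == "__main__":
    total = sum(check_track(p) for p in sys.argv[1:])
    print(f"total: {total} sectors")
```

The per-track and total counts can then be compared against the sector counts implied by the TOC/cue, which is where the gap discrepancies discussed further down tend to show up.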

Redacted and corrected 13/10/2011

- Nexy

Plextor PX-760A 1.07 (+30) : Plextor PX-716SA 1.11 (+30) : Plextor PX-W5224A 1.04 (+30) : Plextor PX-W4824 1.07 (+30) : Plextor PX-W4012TA 1.07 (+98) : Plextor PX-W1610TA (+99) : Plextor PX-W1210TA 1.10 (+99) : Lite-On LTR-48246S (+6) : Lite-On LTR-52246S (+6) : Lite-On LH-20A1H LL0DN (+6) : BenQ DW1655 BCIB (+618) : ASUS DRW-2014L1 1.02 (+6) : Yamaha CRW-F1 (+733) : Optiarc SA-7290H5 1H44 (+48) : ASUS BW-16D1HT 3.02 (+6)

PCE/TG16CD control is even stricter; that's why no one wants to bother :)

and what will this accomplish?
not all details of the subchannels can be fully represented by a cue sheet (for example, when P doesn't match Q)
so you'd just be replacing one interpretation with another

Please contact either me or F1reb4ll for the tool we will be using for this purpose.

eh, are Truman and Jackal still trying to steal this super advanced and very useful program?



for systems where the subchannel is being analyzed now (yeah, those nobody wants to dump for), what do you get in the end?
discs where an audio sector gets scrambled and stitched to a data track, or vice versa...
it's so much more preserved now, isn't it? so worth it...
future generations will be crying tears of joy seeing this amazing contribution, i'm sure of it.
http://1.bp.blogspot.com/-1FE4-QWBfF0/TfWogu-ZrHI/AAAAAAAAADw/pjt6JSA7YdI/s200/HAPPY+CRYING+FACE+%2528tumblr+meme%2529.png

Themabus: I'm not sure what you're getting at here.

Sure, flags do not get stored, and they can change in the middle of tracks or gaps, which is why the subs are needed.

As for P and Q, from my experience P doesn't matter; only Q matters when it comes to gaps. Many tracks can end up the wrong size if you go by P. It's pretty simple to test: merge the image together into a whole disc and compare the size of the whole thing to how many LBAs there should be in the image. Often this comes out wrong if you don't check the frames. Why is that? Because the Q track number and Q index need to be paid attention to; these can conflict with the TOC and P channel, and usually do. The P channel tends to be off by one sector in almost every single dump with CDDA I've done.

The TOC may point to an area of the track which is marked as gap in the Q channel (CD Lock, for instance). Other discs may say there is a gap of 2:02, which is correct by the Q channel (which is what EAC goes by), but this can conflict with the TOC, which will make the track the wrong size, as the first 2 sectors after where the TOC says the track starts are marked as index 00 in the subs. I can come up with many such examples. Another example is Blood, where the final sector of the data track is filled with 00; it is marked both in the TOC and the subs as part of Track 01 and not Track 02, whereas if you went by the guide, you would mistakenly think it is part of the gap of track 02.
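To illustrate the kind of check being described, here is a rough sketch of how one might walk a sub dump and list every point where the Q channel changes track number or index. It assumes 96 bytes of subcode per sector with the Q channel stored in bytes 12-23 (a CloneCD-style deinterleaved layout, which may not match every tool's output), ADR-1 position packets, and BCD-coded fields; it makes no attempt to verify or repair the Q CRC.

```python
# Rough sketch: list Q-channel track/index transitions from a .sub dump.
# Assumptions (not guaranteed for every tool's output): 96 bytes of subcode per
# sector, stored deinterleaved with the Q channel in bytes 12-23; ADR-1 Q packets
# (position data); BCD-coded fields; no attempt to verify or repair the Q CRC.
import sys

def bcd(b):
    return (b >> 4) * 10 + (b & 0x0F)

def walk_sub(path):
    prev = None
    with open(path, "rb") as f:
        sector = 0
        while True:
            block = f.read(96)
            if len(block) < 96:
                break
            q = block[12:24]
            adr = q[0] & 0x0F
            if adr == 1:  # position packet: track, index, relative MSF, absolute MSF
                track, index = bcd(q[1]), bcd(q[2])
                amin, asec, aframe = bcd(q[7]), bcd(q[8]), bcd(q[9])
                cur = (track, index)
                if cur != prev:
                    print(f"sector {sector:6d}  abs {amin:02d}:{asec:02d}:{aframe:02d}"
                          f"  -> track {track:02d} index {index:02d}")
                    prev = cur
            sector += 1

if __name__ == "__main__":
    walk_sub(sys.argv[1])
```

Comparing the reported index 00 -> 01 transitions with the track starts in the TOC/cue is one way to spot the one-sector disagreements mentioned above.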

Granted, I do PC strictly, and PC discs are just badly mastered; there's nothing to be done about that other than at least trying to get the track sizes correct so that the disc image is the right size. It's most important to have the correct size of the tracks FIRST. The rest can be handled by the cue and subs for any discrepancies in the mastering process.

We should always be trying to dump things to the best of our understanding and using the tools we have at our disposal.

Plextor PX-760A 1.07 (+30) : Plextor PX-716SA 1.11 (+30) : Plextor PX-W5224A 1.04 (+30) : Plextor PX-W4824 1.07 (+30) : Plextor PX-W4012TA 1.07 (+98) : Plextor PX-W1610TA (+99) : Plextor PX-W1210TA 1.10 (+99) : Lite-On LTR-48246S (+6) : Lite-On LTR-52246S (+6) : Lite-On LH-20A1H LL0DN (+6) : BenQ DW1655 BCIB (+618) : ASUS DRW-2014L1 1.02 (+6) : Yamaha CRW-F1 (+733) : Optiarc SA-7290H5 1H44 (+48) : ASUS BW-16D1HT 3.02 (+6)

5 (edited by Jackal 2011-10-08 23:59:52)

Nexy wrote:

Themabus: I'm not sure what you're getting at here.

The dumping format is simply incomplete and out of date. There's little point in asking people to dump subchannels if this data is mostly discarded and only serves a flawed interpretation of what the gaps should be like.

themabus wrote:

eh, are Truman and Jackal still trying to steal this super advanced and very useful program?

afaik it's buggy and hasn't been worked on for years. Just wait some more years for trurip or robotrip or whatever's supposed to come out (or code it yourself, I know you have the skills :P). Maybe we could have a decent open source project going if the authors of these programs didn't have such big egos. Then again, if you have two parties who aren't sharing any knowledge, you can't expect one of them to start doing that.

Please explain what is out of date and why sub code data would be flawed.

The program has been worked on recently; an annoyance with the log, which I requested be fixed, was addressed. sub2cue has also been worked on because it had a bug.

I'm not sure why everyone is on a different page regarding everything which is being done. Also, no progress is made if methods/tools aren't addressed, fixed or coded to begin with. What you're basically saying is that we are wasting our time and should not even bother with this project anymore? Is that what you're saying?

Also, fuck ripper and truman; ripper is a jackass drunk with delusions of grandeur, and truman doesn't bother to contact or talk with anyone else, which makes me think his ego is bigger than ripper's. Until something is released there, it's all just bullshit and vaporware as far as I am concerned.

Plextor PX-760A 1.07 (+30) : Plextor PX-716SA 1.11 (+30) : Plextor PX-W5224A 1.04 (+30) : Plextor PX-W4824 1.07 (+30) : Plextor PX-W4012TA 1.07 (+98) : Plextor PX-W1610TA (+99) : Plextor PX-W1210TA 1.10 (+99) : Lite-On LTR-48246S (+6) : Lite-On LTR-52246S (+6) : Lite-On LH-20A1H LL0DN (+6) : BenQ DW1655 BCIB (+618) : ASUS DRW-2014L1 1.02 (+6) : Yamaha CRW-F1 (+733) : Optiarc SA-7290H5 1H44 (+48) : ASUS BW-16D1HT 3.02 (+6)

IMHO this would just complicate the process to an extent where it would pose more problems than it solves,
as IMHO it is with those systems mentioned

the problem being addressed here is a doubtful interpretation of gaps
but the thing is that a cue sheet defines only the start and end frames of gaps, while in the subcode gaps are referenced in channel P and channel Q, with possibly independent fields and the other oddities you mention
and so the proposed solution is a different interpretation of said gaps, with a subsequent translation to the same limited format
(and there are oddities that can only be guessed at - the data can be such that there won't be a 'correct' answer
when an image format comes with full subcode, the burden of this interpretation
is then placed on the end application that will use this image
it's still always an interpretation though - not caring about channel P is an interpretation,
AFAIK some hardware solutions read P exclusively and don't bother with Q,
maybe some consoles in some reading modes do too
for example, the PSX won't position precisely on audio tracks; it can shoot off by a full sector if i remember correctly
and if the Q-CRC does not pass, it would just return the Q from the last valid frame
but because we are locked into this limited format that requires a gap definition, we have to carry it out ourselves -
make a decision whether a frame belongs to the gap or not)
the process for said solution requires rather time-consuming reading on specific drives
(which does not guarantee the resulting subcode to be precise, as there is no control information for it;
only processing of different discs on different drives would give this certainty (to an extent, as different drives can still share similar chipsets (see the Plextor Mode2 firmware bug for an example) and so on))
this translation to a cue sheet might then require manual analysis of the acquired subcode data
after it's done, one might need to scramble sectors and carry them from one track to another
and when this is done, the actual result is just this carrying of sectors from file to file - no data gained, only carried from file to file
it's just so meaningless, it's ultimately useless to address it this way IMHO

Well, come up with a better solution and suggest it then, rather than making posts like your first one, which I found offensive and bordering on trolling.

There has already been much talk about changing to the CloneCD format as well; I may as well post the idea here and absorb the negative reactions to that too. There are limitations to this format as well, but not as many as with bin/cue, and it's easy enough to convert existing images to it with just .ccd generation and sub code files added.

There is also talk of keeping the entire disc dump scrambled.

There are conflicting opinions on all of these subjects; it's rather annoying imo.

What bothers me the most is the childish way in which ideas get discussed at times. We are all adults, we should act like adults and discuss the possibilities and ideas in a rational manner rather than acting like spoiled 12-year-olds. I for one do not appreciate it at all when I spend thousands of dollars and put hundreds of hours of my own time into this project, only to see such bullshit posted.

Plextor PX-760A 1.07 (+30) : Plextor PX-716SA 1.11 (+30) : Plextor PX-W5224A 1.04 (+30) : Plextor PX-W4824 1.07 (+30) : Plextor PX-W4012TA 1.07 (+98) : Plextor PX-W1610TA (+99) : Plextor PX-W1210TA 1.10 (+99) : Lite-On LTR-48246S (+6) : Lite-On LTR-52246S (+6) : Lite-On LH-20A1H LL0DN (+6) : BenQ DW1655 BCIB (+618) : ASUS DRW-2014L1 1.02 (+6) : Yamaha CRW-F1 (+733) : Optiarc SA-7290H5 1H44 (+48) : ASUS BW-16D1HT 3.02 (+6)

"It's time to REQUIRE sub code dumps of all CD's with CDDA/Audio tracks."
[...]
"Please contact either me or F1reb4ll for the tool we will be using for this purpose."

is this mature to you? i've had this kind of maturity up to my neck; this is the reason why i left IRC and don't generally talk here anymore.

"Well come up with a better solution and suggest it"

thank you, i thought you'd never ask.

in my opinion, when you have a certain set and a subset of this set contains some oddities,
the value of those oddities should then be determined and the necessary course of action defined based on this value.
it's not that different from what you generally propose; i do however object to the form and don't agree on the value.
you probably know that this project started from PSX database and grew around it
i believe that the implementation of how PSX is handled is close to the optimal one; it was thought out well,
it's further down the road where it went astray.
with PSX, those CDs with LC protection are treated slightly differently - oddities are examined and defined in the db (ASCII field),
from where they can be retrieved and processed/analyzed further.
(i do not like the .sbi output, it can't hold all database values, but it's not really that relevant;
another output format was made - .lsd, i think - which addressed this issue,
so it could as well be .sub or a different format, if needed)
this is what a good database is.
binary data is a lazy way to do things, if your motivation is to examine and preserve some sort of information -
it's just conservation you'd be doing, missing the documentation part.
thus i do not agree on generally expanding the image format to .img/.sub/.ccd, .mdf/.mds, 2448-byte sectors + TOC or any other.
this data should be analyzed and stored in the db in a readable format.
(the same with other things, i.e. medium-topology-based protections - timings should be analyzed and stored in a table
.mds (or another necessary format) could then be generated from this data as an output
with SafeDisc and similar protections that are already in the db, this information would be much more useful
if the actual sectors affected were listed in the records - this data could then be examined and worked with,
without the necessity of having actual matching images
etc.)
so, yes, things like this should be discussed and well thought out; it's not a matter to rush
i do not agree on "all CD's with CDDA/Audio tracks"
let me give you an analogy with PSX again:
some PSX CDs have gaps larger than 3 seconds, which makes them partially unreadable on the drives i tested with:
Plextor, Lite-On (most popular ones at that time)
it might be that some drives read them well though; i did swapping for those, however.
is this the reason to read all PSX CDs with multiple tracks with swapping or with specific drives/methods?
no. why would it be? it's only a small fraction of CDs affected.
you are wildly exaggerating there.
IMHO only those specific discs should be treated differently, if the value of the gained data is sufficient.
in the case of gaps, i do not believe it is, as this information is of little significance
and the implementation of said process to gather this data
would complicate the image-making process far too much (see my previous post),
possibly bringing it to a stall

10 (edited by Jackal 2011-10-12 15:08:15)

BTW, I feel that this topic shouldn't be a sticky, because it states opinions rather than general policy

ps. I agree with themabus' feeling that things sorta got 'out of hand' and we just kept adding more systems which are less and less suitable for our initial dumping method. Go have a talk with ripper about 'half sectors' and sector overlaps etc. and you'll learn that your proposed solution prolly isn't even gonna cut it. A custom format would be needed in order to really preserve all these mastering errors and oddities.

And maybe it's a good idea to just mark any dump that isn't preserved correctly under the current standards as 'yellow'. Otherwise we just keep on mixing flawed dumps with known good ones (then again, how can we be sure a dump is good if we keep raising the bar and keep on adding new fields and new requirements?).

Themabus:

My intentions with the first post were not to cause conflict or infighting, but only to spur communication and consensus. I can see where it could be viewed as a bit childish, so I will redact it. With that settled, on with the important stuff rather than communication style...

Yes, most of us are aware that this project started with and grew from the PSX database; that's a good thing, and those efforts are surely appreciated and attractive to people wishing to preserve whatever is in their interest and leads them here to begin with.

Please elaborate on what you mean by "oddities are examined and defined in db (ASCII field)". I'm not entirely sure what you mean by that.

binary data is a lazy way to do things, if your motivation is to examine and preserve some sort of information -
it's just conservation you'd be doing, missing the documentation part.

I'm all for that, but what should we be documenting? Should we be documenting just anomalies in sub code? Topology? User data? Protection mechanisms? Conflicts between sub code and TOC? <insert anomaly here>? All of the above? Do please be specific, as we should all be really interested to know what it is we should be focusing on, rather than just conservation, as you say.

Some of these things are fairly easy to document; others would require a great deal of investigation and documentation. I myself do that on other platforms, so I am not averse to such things, but certain aspects of it may also garner unwanted attention from various entities. However, I believe that should not stop it from being done.

thus i do not agree on generally expanding the image format to .img/.sub/.ccd, .mdf/.mds, 2448-byte sectors + TOC or any other.
this data should be analyzed and stored in the db in a readable format.
(the same with other things, i.e. medium-topology-based protections - timings should be analyzed and stored in a table
.mds (or another necessary format) could then be generated from this data as an output
with SafeDisc and similar protections that are already in the db, this information would be much more useful
if the actual sectors affected were listed in the records - this data could then be examined and worked with,
without the necessity of having actual matching images
etc.)

An interesting point indeed, but how should such information be collected and stored? Again, please elaborate on it. As for the last part, with SafeDisc and also LaserLock, a list of sectors is easy enough to generate. But I think the contents of those sectors should be examined more closely for possible meaningful data which may be contained therein. There are conflicting opinions about that dating back to their first appearance, and maybe it's time someone investigated and figured out exactly what is true or false about such theories.

Your last point I can understand completely, so it need not be discussed any further. As I said, I will redact the first post.

Also, thank you for maintaining some interest in things even through differences in opinion and sometimes conflict and heated discussion. And please try to understand that some people are quite passionate about this, and there is no intention of stalling the project or creating dissent, only to better preserve. =]

Jackal:

BTW. I feel that this topic shouldn't be a sticky, because it states opinions rather than general policy.

The point of such topics is to discuss opinions and make decisions, which in turn create the general policy and procedures; without that, any project would stagnate and become a dead carcass. I feel important topics and discussions should be stickied, as it makes it easier for newcomers to get an idea of what goes on, and they also serve as reference points for reflection by experienced users. IMO there are many relevant posts in this forum that SHOULD be stickied which are not.

ps. I agree with themabus' feeling that things sorta got 'out of hand' and we just kept adding more systems which are less and less suitable for our initial dumping method.

Well, it's a bit too late to do anything about that now; the cat is already out of the proverbial bag.

Go have a talk with ripper about 'half sectors' and sector overlaps etc. and you'll learn that your proposed solution prolly isn't even gonna cut it. A custom format would be needed in order to really preserve all these mastering errors and oddities.

I don't want to talk with ripper about anything, and you know damn well he is very arrogant and condescending. Also, as themabus mentioned in his post, is this about preservation or conservation? That needs to be worked out first, before any other discussions about formats and other things take place. Also, ripper is not the be-all and end-all of information about what's on various discs; the fanboi-ism is getting a bit annoying, actually. I don't want this to just end up in a bickering session though, so if you have information about such things, rather than telling us to go ask so-and-so about it, post it up here for all to know. Isn't that the whole point?

And maybe it's a good idea to just mark any dump that isn't preserved correctly under the current standards as 'yellow'.

Indeed, there are many questionable things; who decides what, though, and based on what... Again, I think "current standards" are not even defined at this point, except for certain systems like PSX and PS2 where that has already been decided, though it could change if the need is found.

- Nexy

Plextor PX-760A 1.07 (+30) : Plextor PX-716SA 1.11 (+30) : Plextor PX-W5224A 1.04 (+30) : Plextor PX-W4824 1.07 (+30) : Plextor PX-W4012TA 1.07 (+98) : Plextor PX-W1610TA (+99) : Plextor PX-W1210TA 1.10 (+99) : Lite-On LTR-48246S (+6) : Lite-On LTR-52246S (+6) : Lite-On LH-20A1H LL0DN (+6) : BenQ DW1655 BCIB (+618) : ASUS DRW-2014L1 1.02 (+6) : Yamaha CRW-F1 (+733) : Optiarc SA-7290H5 1H44 (+48) : ASUS BW-16D1HT 3.02 (+6)

my point of view is such that this extra information should be presented in a format suitable for the database,
i.e. organized as some sort of readable structure, as it is with libcrypt
with this, the purpose of the database extends further than an index of hashes - it can be used independently of the binary data -
one does not need to violate copyright law to work with such data

what should or shouldn't be analyzed must be discussed, imho;
we have different opinions on this, and i wouldn't want to push mine - i agree with you there, i just think it should be discussed

about topology: basically i don't see it much differently from how it was done with libcrypt again
either the .mds can be analyzed and the data filtered from this container, or the medium itself could be examined with specific tools
my understanding is that only a few elements are of interest, as validation takes a few seconds
though it could be that at each execution a subset of some particular elements is selected randomly from a larger set
it would be easier to start from older implementations of such protections and then proceed forth to newer ones
maybe those older ones do not have ring 0 drivers of their own, so SPTI monitoring could be sufficient
or at least hacking shouldn't be too difficult
(StarForce is what i have seen most, so i'm basing this on that)

as for how to organize all this, i think new people should be recruited - coders, hackers, etc.
and it should be defined how things will be organized - by voting? who votes? are all votes equal? what should the staff hierarchy be?
(imho it's on the shoulders of too few people now; i think Jackal was pretty much running the site alone last year,
now much depends on iR0b0t - the 'hit by a bus' factor and such
i myself would gladly resign then), etc.
and then those matters should be looked into and a new database model developed at the same time,
one that would address current issues and take those changes into account
imho the model Dremora had come up with was pretty good as far as addressing issues goes - it could be recycled for ideas

Sorry for the late reply; I wasn't notified that the thread had been replied to.

Yes, let's discuss what should be done and in which ways. Please put your ideas and opinions up for discussion; got to get the ball rolling.

PLEASE don't resign; your input and experience are very valuable and needed!

I'm not familiar with Dremora's model; enlighten me, please.

Yes, we need more people, and knowledgeable ones. I'm trying to recruit such people, and have at least one now who will work behind the scenes at least; see the other new subforum.

So my thoughts on what to keep in the db are a list of error sectors for SafeDisc / CD ring protection / LaserLock. That data is easily kept as a list and doesn't violate any kind of copyright. Some research needs to be done into whether the sectors are just plain bad on purpose, contain some meaningful data which can be extracted through a different unscrambling process, or something unknown.
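As one possible (illustrative, not definitive) way to build such a list from an existing raw image, the sketch below flags sectors whose Mode 1 EDC does not verify; intentionally damaged SafeDisc-style sectors typically fail this check. It assumes 2352-byte raw sectors and the ECMA-130 EDC (CRC-32 with the bit-reversed polynomial 0xD8018001 over bytes 0-2063, stored little-endian at offset 2064); the printed numbers are sector indices relative to the start of the file, not absolute LBAs.

```python
# Sketch: list sectors of a raw Mode 1 image whose EDC does not verify.
# Assumptions: 2352-byte raw sectors, Mode 1 layout, EDC per ECMA-130.
import struct
import sys

def build_edc_table():
    table = []
    for i in range(256):
        edc = i
        for _ in range(8):
            edc = (edc >> 1) ^ (0xD8018001 if edc & 1 else 0)
        table.append(edc)
    return table

EDC_TABLE = build_edc_table()

def edc(data):
    value = 0
    for b in data:
        value = (value >> 8) ^ EDC_TABLE[(value ^ b) & 0xFF]
    return value

def bad_sectors(path):
    bad = []
    with open(path, "rb") as f:
        index = 0
        while True:
            sector = f.read(2352)
            if len(sector) < 2352:
                break
            if sector[15] == 1:  # only check Mode 1 sectors (mode byte at offset 15)
                stored = struct.unpack_from("<I", sector, 2064)[0]
                if edc(sector[:2064]) != stored:
                    bad.append(index)
            index += 1
    return bad

if __name__ == "__main__":
    for index in bad_sectors(sys.argv[1]):
        print(index)
```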

For sub code, maybe just store a hash of that data to check whether it matches another disc, and only mark it blue when matched once and green with multiple matches (fireball's suggestion).
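A trivial sketch of the sort of hash that could be stored; the choice of SHA-1 and of hashing the whole .sub file as-is are just examples here, not an agreed standard.

```python
# Trivial sketch: hash a .sub file so dumps can be matched in the database.
# SHA-1 is used here only as an example hash.
import hashlib
import sys

def sub_hash(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    print(sub_hash(sys.argv[1]))
```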

For topology data, I don't even know where to begin with that; you say you have experience with StarForce, so please elaborate on what it checks for. I have some experience with SecuROM and Tages, and they check seek times. I am not entirely sure how the data is laid out on the discs though, whether they are doubled sectors or just constant/angular velocity changes or what. We certainly need more research into that.
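To show what checking seek times amounts to in practice, here is a very rough DPM-style sketch: read sectors at fixed intervals and record how long each read takes, so velocity or layout changes show up as steps in the timing curve. It assumes a Linux system with the disc in /dev/sr0 and cooked 2048-byte reads through the block device, and it makes no attempt to defeat OS caching or drive read-ahead, which a real measurement tool would have to handle (typically via raw SCSI commands).

```python
# Very rough sketch of a DPM-style pass: time reads at fixed LBA intervals.
# Assumptions: Linux, disc in /dev/sr0, 2048-byte cooked sectors via the block
# device. A real measurement would bypass OS caching and drive read-ahead;
# this is only to show the idea.
import os
import time

DEVICE = "/dev/sr0"   # example device path
SECTOR = 2048         # cooked sector size exposed by the block device
STEP = 500            # measure every 500 sectors

def dpm_pass():
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        total = os.lseek(fd, 0, os.SEEK_END) // SECTOR
        for lba in range(0, total, STEP):
            os.lseek(fd, lba * SECTOR, os.SEEK_SET)
            start = time.perf_counter()
            os.read(fd, SECTOR)
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            print(f"LBA {lba:7d}  {elapsed_ms:7.3f} ms")
    finally:
        os.close(fd)

if __name__ == "__main__":
    dpm_pass()
```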

Plextor PX-760A 1.07 (+30) : Plextor PX-716SA 1.11 (+30) : Plextor PX-W5224A 1.04 (+30) : Plextor PX-W4824 1.07 (+30) : Plextor PX-W4012TA 1.07 (+98) : Plextor PX-W1610TA (+99) : Plextor PX-W1210TA 1.10 (+99) : Lite-On LTR-48246S (+6) : Lite-On LTR-52246S (+6) : Lite-On LH-20A1H LL0DN (+6) : BenQ DW1655 BCIB (+618) : ASUS DRW-2014L1 1.02 (+6) : Yamaha CRW-F1 (+733) : Optiarc SA-7290H5 1H44 (+48) : ASUS BW-16D1HT 3.02 (+6)

thanks Nexy

this is a wip db structure from way back in 2009
http://pics.dremora.com/redump-NGS.png
titles are separated into a table, so there's no limit of 2; sub info is broken apart into a release table, so no need to specify it in brackets; info on languages is more detailed
we were discussing db issues at that time, though more problems have surfaced since then

i haven't messed with StarForce, it's just from observations - it spins up the medium and then does some seeks, apparently clocking the timing between sectors
on some specific sites or forums, .mds files that people made with DPM on can be downloaded, and they can then be used with .iso images, so it's all in there, in this metadata
StarForce is quite popular here, i'd say about every 6th or so game has it, so this could be used to advantage when debugging, comparing information from multiple .exes
sometimes there would even be a DVD and a CD release of the same game, both protected with StarForce, so such cases could be of particular interest