arise, dead topic!

so i had the audacity to rewrite this reCombine program
it's a billion times faster now; has multithreading, support for trillion-terabyte files or something (DVDs)
and can do up to 2^60 combinations (which will take about 6 million years to compute)
so this is the bestest program ever now
and you won't need other programs for the rest of your life

http://www.mediafire.com/download/cyys76gst825vv8

p.s. Gaijin made me do this so it's all his fault
p.p.s. actually, when i think of it, everything is his fault
p.p.p.s. i still don't know when to use 'the'

kthxbye
live long and prosper

hi nocash

nocash wrote:

What is the "Unknown 01h" byte for? Disc number? Subchannel number? Patch type/version? Number of following 10-byte pairs?

this byte defines the format of the following bytes
the specification of the sbi format is given in 'PSEmu Pro CDR plugin interface'

// 1 byte Format: the format can be 1, 2 or 3:
// Format 1: complete 10 bytes sub q data
// Format 2: 3 bytes wrong relative address
// Format 3: 3 bytes wrong absolute address
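the layout above can be turned into a quick parser; here is a minimal Python sketch, assuming the 4-byte 'SBI\0' header and BCD-encoded MSF that existing tools use (those two details are assumptions, not part of the spec quoted above):

```python
FORMAT_SIZES = {1: 10, 2: 3, 3: 3}  # payload bytes per format, per the spec above

def bcd(b):
    """Decode one binary-coded-decimal byte (e.g. 0x16 -> 16)."""
    return (b >> 4) * 10 + (b & 0x0F)

def parse_sbi(data):
    """Return (mm, ss, ff, fmt, payload) records from SBI file bytes."""
    if data[:4] != b"SBI\x00":
        raise ValueError("not an SBI file")
    pos, records = 4, []
    while pos < len(data):
        m, s, f, fmt = data[pos], data[pos + 1], data[pos + 2], data[pos + 3]
        size = FORMAT_SIZES[fmt]
        records.append((bcd(m), bcd(s), bcd(f), fmt, data[pos + 4:pos + 4 + size]))
        pos += 4 + size
    return records
```

so a format 1 record is 14 bytes total (3 MSF + format byte + full 10-byte sub Q), while formats 2 and 3 take 7.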

nocash wrote:

And the http://redump.org/guide/libcrypt/ page says that CRC-16 is also scattered on libcrypt'ed cdrom... but the SBI files don't support that?

yes, it's a pity
the exact data can not be reproduced from .sbi files
though it's enough to fool the protection algorithm
the console does not return erroneous subchannel data; in such cases it returns the previous frame instead
so .sbi basically just stores information on where the error occurred, and that's enough to bypass LC

nocash wrote:

What does that mean? Does it mean that there are 64 sectors being common candidates to contain protection info on PSX discs? And this case, 16 of them actually containing that info? And what is LC1, LC2, and is it PSX related, too?

LC1 and LC2 are LibCrypt patterns from the viewpoint of data, not code
i.e. if you look for documents on LibCrypt, they'll reference changes that took place in code
and variations noted in those guides will not correspond to LCx generations in this DB

as i remember, this program was last updated long ago
so it could be that it does not identify every LC pattern as such
some might just be reported as odd sectors
also, not everything reported as odd means it's LC

generally LC always has those pairs
and a backup copy of the pattern further on the CD, if the data track is large enough
this backup, though, seems to never actually get used by the algorithm
...many strange things with LC, it's as if it was rushed

hola, senoras y senores

been away for a summer (mmmm, sweet summer of leisure and love...)
for which i do not apologize at all, heh

BruteForce3C sends the ReadBuffer (0x3C) command, as described in 'SCSI Primary Commands - 3 (SPC-3)',
to a specified drive,
incrementally going through values 0x00..0xff for bytes 1 and 2
a checksum is calculated on the returned data, and if it differs from the checksum of zero-filled data,
the buffer is logged to a file for further examination
if it works at all, usually the vanilla command 3c 02 00 would do (byte 1: 02h - Data)
for MediaTek-based Lite-Ons it's 3c 01 01 (byte 1: 01h - Vendor specific) for the buffer;
3c 01 02 / 3c 01 e2 for EEPROM;
3c 01 f1 for KEYPARA
and the buffer size is limited on LO
i've seen other custom commands besides LO's, but none that would return usable buffer data
other parameters might be custom too
and there might be completely custom commands, not based on ReadBuffer
in such cases this program will not help
you could try to determine them by other means (forums, sources of programs for specific drives, disasm of the fw and such)
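a sketch of the CDB construction and the zero-baseline comparison described above (the SPTI transport itself is omitted; the 10-byte READ BUFFER layout follows SPC-3, and the allocation length here is an arbitrary example value):

```python
import hashlib

def read_buffer_cdb(mode, buffer_id, alloc_len=0x10000):
    """Build a 10-byte SPC-3 READ BUFFER (3Ch) CDB.
    Bytes 3..5 are the buffer offset, bytes 6..8 the allocation length
    (both big-endian)."""
    return bytes([
        0x3C,                       # operation code: READ BUFFER
        mode & 0x1F,                # byte 1: mode (lower 5 bits)
        buffer_id & 0xFF,           # byte 2: buffer ID
        0, 0, 0,                    # buffer offset
        (alloc_len >> 16) & 0xFF, (alloc_len >> 8) & 0xFF, alloc_len & 0xFF,
        0,                          # control
    ])

def looks_interesting(returned, alloc_len=0x10000):
    """True if the returned data differs from the all-zero baseline,
    i.e. the buffer is worth logging for further examination."""
    return hashlib.md5(returned).digest() != hashlib.md5(bytes(alloc_len)).digest()
```

the brute-force part is then just two nested loops over 0x00..0xff for bytes 1 and 2, sending each CDB and logging whenever `looks_interesting` fires.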

IDE2USB shouldn't matter

numbers at the end of execution are just timing

aboot Segmentation fault

i have
absolutely
no
idea

i think such matters shouldn't affect naming in the database itself
it's the end user's concern what he does with his images
naming shouldn't become cryptic for many just to simplify storage for some
though the database could output more information to the .dat (xml),
which could be parsed afterwards, allowing implementations of various 3rd-party tools
currently pretty much only names go there, while most data is left on the web server
so this data is very fragile and difficult to access, which is wrong, imho

though something very similar was already done in pakkiso - only generic track names, without serials

hi, darthcloud
as i remember, it was decided not to romanize when a Japanese title is based on a foreign term
(there was a discussion somewhere in the Fixes section, it should probably still be there)
for example, romanizing Final Fantasy to Fainaru Fantajī wouldn't make much sense
likely the same with Zeruda - Zelda (could be the term "Zelda" is used inside the game universe); not sure about the rest, though

thanks Nexy

this is a wip db structure from way back in 2009
http://pics.dremora.com/redump-NGS.png
titles are separated into a table, so no limit of 2; sub info is broken out into a release table, so no need to specify it in brackets; info on languages is more detailed
we were discussing db issues at that time, though more problems have surfaced since then

i haven't messed with Starforce; this is just from observations - it spins up the medium and then does some seeks, apparently clocking the timing between sectors
on some specific sites or forums, .mds files that people made with DPM on can be downloaded and then used with .iso images, so it's all in there, in this metadata
Starforce is quite popular here; i'd say about every 6th game or so has it, so this could be used to advantage when debugging, comparing information from multiple .exes
sometimes there would even be DVD and CD releases of the same game, both protected with Starforce, so such cases could be of particular interest
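the "clocking timing between sectors" idea can be illustrated with a toy calculation: given timed seeks to increasing sector numbers, a DPM-style density curve is just the per-sector time delta between measurement points (the sample values below are hypothetical):

```python
def dpm_curve(samples):
    """samples: list of (sector, seconds) pairs from timed seeks, sorted
    by sector. Returns (sector, ms_per_sector) for each interval between
    consecutive measurement points - the kind of curve DPM produces."""
    curve = []
    for (s0, t0), (s1, t1) in zip(samples, samples[1:]):
        curve.append((s0, (t1 - t0) * 1000.0 / (s1 - s0)))
    return curve
```

a pressed original and a copy have different angular layouts, so their curves differ - which is presumably what the protection checks against the values baked into the metadata.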

my point of view is that this extra information should be presented in a format suitable for the database,
i.e. organized as some sort of readable structure, as it is with libcrypt
with this, the purpose of the database extends beyond an index of hashes - it can be used independently of binary data -
one does not need to violate copyright law to work with such data

what should or shouldn't be analyzed must be discussed, imho
we have different opinions on this; i wouldn't want to push mine, i agree with you there, i just think it should be discussed

about topology: basically i don't see it as much different from how it was done with libcrypt, again
either .mds can be analyzed and the data filtered from this container, or the medium itself could be examined with specific tools
my understanding is that only a few elements are of interest, as validation takes a few seconds
though it could be that at each execution a subset of particular elements is selected randomly from a larger set
it would be easier to start from older implementations of such protections and then proceed to newer ones
maybe those older ones do not have ring 0 drivers of their own, so SPTI monitoring could be sufficient
or at least hacking shouldn't be too difficult
(StarForce is what i have seen most, so i'm basing this on that)

as for how to organize this all, i think new people should be recruited - coders, hackers, etc.
and it should be defined how things will be organized - by voting? who votes? are all votes equal? what should the staff hierarchy be?
(imho it's on the shoulders of too few people now; i think Jackal was pretty much running the site alone last year
now much depends on iR0b0t - the hit-by-a-bus factor and such
i myself would gladly resign then), etc.
and then those matters should be looked into and a new database model developed at the same time,
one that would address current issues and take those changes into account
imho the model Dremora had come up with was pretty good as far as addressing issues goes - it could be recycled for ideas

"It's time to REQUIRE sub code dumps of all CD's with CDDA/Audio tracks."
[...]
"Please contact either me or F1reb4ll for the tool we will be using for this purpose."

is this mature to you? i've had this kind of maturity up to my neck; it's the reason why i left IRC and don't generally talk here anymore.

"Well come up with a better solution and suggest it"

thank you, i thought you'd never ask.

in my opinion, when you have a certain set and a subset of this set contains some oddities,
the value of those oddities should be determined and the necessary course of action defined based on this value.
it's not that different from what you generally propose; i do however object to the form and don't agree with the value.
you probably know that this project started from the PSX database and grew around it
i believe the implementation of how PSX is handled is close to optimal; it was thought out well,
it's further down the road where it went astray.
with PSX: those CDs with LC protection are treated slightly differently - oddities are examined and defined in the db (ASCII field),
from where they can be retrieved and processed/analyzed further.
(i do not like the .sbi output, it can't hold all database values, but that's not really relevant here;
another output format was made - .lsd, i think - which addressed this issue,
so it could as well be .sub or a different format, if needed)
this is what a good database is.
binary data is a lazy way to do things; if your motivation is to examine and preserve some sort of information,
then it's just conservation you'd be doing, missing the documentation part.
thus i do not agree on generally expanding the image format to .img/.sub/.ccd, .mdf/.mds, 2448-byte sectors + TOC or any other.
this data should be analyzed and stored in the db in readable format.
(the same with other things, i.e. medium-topology-based protections - timings should be analyzed and stored in a table;
.mds (or another necessary format) could then be generated from this data as an output
with SafeDisc and similar protections that already are in the db, this information would be much more useful
if the actual sectors affected were listed in the records - this data could then be examined and worked with,
without the necessity of having actual matching images
etc.)
so, yes, thing like this should be discussed and well thought out, it's not a matter to rush
i do not agree on "all CD's with CDDA/Audio tracks"
let me give you an analogy with PSX again:
some PSX CDs have gaps larger than 3 seconds, which makes them partially unreadable on the drives i tested with:
Plextor, Lite-On (the most popular ones at that time)
might be that some drives read them well, though; i however did swapping for those.
is this a reason to read all PSX CDs with multiple tracks with swapping or with specific drives/methods?
no. why would it be? only a small fraction of CDs is affected.
you are wildly exaggerating there.
IMHO only those specific discs should be treated differently, if the value of the gained data is sufficient.
in the case of gaps, i do not believe it is, as this information is of little significance
and implementation of said process to gather this data
would complicate the image-making process far too much (see my previous post)
possibly bringing it to a stall

nope, never happened.
i think this was there when i came to this site and it never went anywhere.
the way you can use PSX images is with plugins, or convert .bin/.cue + .sbi to .img/.ccd with the program from my signature (PSXstuff)
(if you convert, for the XOR value enter either one, it doesn't really matter)

IMHO this would just complicate the process to an extent where it would pose more problems than it solves
as it IMHO does with those systems mentioned

the problem being addressed here is a doubtful interpretation of gaps
but the thing is that a cue sheet defines only the start and end frames of gaps, while in subcode, gaps are referenced in channel P and channel Q with possibly independent fields and other oddities you mention
and so the proposed solution is a different interpretation of said gaps with subsequent translation to the same limited format
(and there are oddities that can only be guessed - the data can be such that there won't be a 'correct' answer)
when an image format comes with full subcode, the burden of this interpretation
is placed on the end application that will use the image
it's still always an interpretation though - not caring about channel P is an interpretation;
AFAIK some hardware solutions read P exclusively and don't bother with Q,
maybe some consoles in some reading modes do too
for example, PSX won't position precisely on audio tracks - it can shoot off by a full sector, if i remember correctly -
and if the Q-CRC does not pass, it will just return Q from the last valid frame
but because we are locked into this limited format that requires gap definition, we have to carry out the interpretation ourselves -
make a decision whether a frame belongs to the gap or not
the process for said solution requires rather time-consuming reading on specific drives
(which does not guarantee the resulting subcode to be precise, as there is no control information for it;
only processing different discs on different drives would give this certainty (to an extent, as different drives can still share similar chipsets (see the Plextor Mode2 firmware bug for an example) and so on))
this translation to a cuesheet might then require manual analysis of the acquired subcode data
after it's done, one might need to scramble sectors and carry them from one track to another
and when this is done, the actual result is just this carrying of sectors from file to file - no data gained, only carried from file to file
it's just so meaningless; it's ultimately useless to address it this way IMHO
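to illustrate the "gap is an interpretation" point: a toy sketch where channel P and channel Q each get a per-frame vote, so their disagreements become visible (the per-frame rules here - P's pause flag set vs Q's index being 00 - are simplified assumptions, not a dumping algorithm):

```python
def gap_votes(frames):
    """frames: list of (p_bit, q_index) per subcode frame.
    Returns (gap_by_p, gap_by_q, disagreements): the frame indices each
    channel would assign to the pregap, plus the frames where the two
    interpretations disagree."""
    gap_p = [i for i, (p, q) in enumerate(frames) if p]          # P says pause
    gap_q = [i for i, (p, q) in enumerate(frames) if q == 0]     # Q says index 00
    disagreements = sorted(set(gap_p) ^ set(gap_q))
    return gap_p, gap_q, disagreements
```

whenever `disagreements` is non-empty, any cue sheet produced from that disc is already a judgment call, whichever channel you trust.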

and what will this accomplish?
all details of subchannels can't be fully represented by a cue sheet (for example, when P doesn't match Q)
so you'd just be replacing one interpretation with another

Please contact either me or F1reb4ll for the tool we will be using for this purpose.

eh, are Truman and Jackal still trying to steal this super advanced and very useful program?



for systems where the subchannel is being analyzed now (yeah, those nobody wants to dump for), what do you get in the end?
discs where an audio sector gets scrambled and stitched to the data track, or vice versa...
it's so much more preserved now, isn't it? so worth it...
future generations will be crying tears of joy seeing this amazing contribution, i'm sure of it.
http://1.bp.blogspot.com/-1FE4-QWBfF0/TfWogu-ZrHI/AAAAAAAAADw/pjt6JSA7YdI/s200/HAPPY+CRYING+FACE+%2528tumblr+meme%2529.png

ice.exe was made to automate additional steps in DC dumping (unscrambling/track separation)
AFAIR the error indication in brackets is based on an EDC/ECC check per sector (it determines the sector's nature: data or audio) and was intended for gap management
so there might indeed be cases where the indicated number differs from 75 for a good image, or is 75 for a bad image
75 only means that the gaps are of common configuration
ice.exe was not meant for image validation
validation is performed by obtaining two matching images, preferably on different drives
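a sketch of the kind of per-sector check this implies: the standard CD-ROM EDC (a CRC-32 with reversed polynomial D8018001h, as used in common tools) over a mode 1 sector's first 2064 bytes, compared against the value stored right after them. whether ice.exe does exactly this is an assumption:

```python
import struct

def _make_edc_table():
    """Table for the CD-ROM EDC polynomial, reflected form D8018001h."""
    table = []
    for i in range(256):
        r = i
        for _ in range(8):
            r = (r >> 1) ^ (0xD8018001 if r & 1 else 0)
        table.append(r)
    return table

_EDC = _make_edc_table()

def edc(data):
    """CD-ROM EDC: 32-bit CRC, initial value 0."""
    crc = 0
    for b in data:
        crc = (crc >> 8) ^ _EDC[(crc ^ b) & 0xFF]
    return crc

def is_data_sector(sector):
    """Rough mode 1 test on a 2352-byte raw sector: the stored EDC
    (little-endian at offset 2064) must match the EDC computed over
    bytes 0..2063. A mismatch suggests an audio (or damaged) sector."""
    stored = struct.unpack_from("<I", sector, 2064)[0]
    return edc(sector[:2064]) == stored
```

counting how many sectors around an index point pass or fail a check like this is one plausible way to arrive at a number like the "75" above.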

thanks, np
works well for me now
i can't test GD-ROMs atm, but it reads regular CDs even through my IDE2USB

hi, jamjam

i tried DCdumper.exe 0.2a but it would just crash on my PC (XP SP3)

usually programs would use the SCSI Pass Through Interface (SPTI) to communicate with the drive
(SCSI_PASS_THROUGH_DIRECT / SCSI_PASS_THROUGH_DIRECT_WITH_BUFFER)
it has the fewest problems
and you can get a handle with CreateFile, so only the drive letter is needed - it shouldn't matter if it's USB or ATA or SATA
there are good examples in Kris Kaspersky's book
http://forum.mobilism.org/viewtopic.php … ;view=next
(look for RAW reading through SPTI
there should be precompiled examples and the .c files that came on the CD with the book somewhere on the web too,
but if you can't find those, PM me and i'll send them to you)
and on Truman's homepage
http://www.cdtool.pwp.blueyonder.co.uk/workshop.htm
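for reference, the CreateFile path convention mentioned above, with a hedged sketch of how the handle would be opened on Windows (the actual DeviceIoControl / SCSI_PASS_THROUGH_DIRECT plumbing is omitted; the flag values are the standard Win32 constants):

```python
import ctypes
import sys

def device_path(letter):
    r"""Turn a drive letter into the \\.\X: path CreateFile expects."""
    return "\\\\.\\" + letter.rstrip(":\\") + ":"

# On Windows, opening the handle would look roughly like this:
if sys.platform == "win32":
    GENERIC_READ, GENERIC_WRITE = 0x80000000, 0x40000000
    FILE_SHARE_READ, FILE_SHARE_WRITE, OPEN_EXISTING = 1, 2, 3
    handle = ctypes.windll.kernel32.CreateFileW(
        device_path("D"), GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE, None, OPEN_EXISTING, 0, None)
    # SCSI commands then go through DeviceIoControl with
    # IOCTL_SCSI_PASS_THROUGH_DIRECT and a filled-in CDB.
```

since the path is built from the letter alone, the bus type (USB/ATA/SATA) never enters the picture, which is why IDE2USB bridges don't matter here.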

good luck

no, no, cool beans, thanks
'Disc' would be awkward if there are several double-sided discs
though this reminds me, the DB rewrite never happened
this 'Version (datfile)' construct is a hack
is a rewrite planned, does anyone know? how many people have access to the site?

or is it added just like disc 1/2?


could you please insert a line in the disc submission form to separate rings per disc?

edit:
maybe in the view too, it looks kinda weird now
http://redump.org/disc/18613/
all ring fields blend together
in this case they can be distinguished by pattern
but if the rings were more alike, it would be a mess IMHO

labels in edit form would be great too


ah, i missed this too

how do you make a ring green?
it looks like i have a matching WOW expansion; even the IFPIs are the same
which is pretty strange IMHO
but i don't see an option to set the color
do i add it a 2nd time exactly the same, and it will group and become green?


PSX libcrypt data could be parsed to output the actual 16-bit LC key encoded in the subchannel to the web db
and as a field in the .dat for each protected game
a tool could be made to convert .bin/.cue + this value (from .dat or console) to .img/.ccd
(it's currently a necessity for recording anyway)
.sbi/.lsd files could still be provided, but the set would become usable without them - more independent
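purely as an illustration of the idea (the real bit order and sector list must be taken from the libcrypt guide; this is hypothetical): packing "which of the 16 protected sector pairs carries a deliberately corrupted Q subchannel" into a 16-bit key could look like:

```python
def lc_key(corrupt_flags):
    """Hypothetical illustration: corrupt_flags is 16 booleans, one per
    LibCrypt sector pair in disc order; bit 15 corresponds to the first
    pair. Only shows the 'one pair -> one bit' idea, not the real scheme."""
    assert len(corrupt_flags) == 16
    key = 0
    for flag in corrupt_flags:
        key = (key << 1) | (1 if flag else 0)
    return key
```

stored as a single hex value in the .dat, that's enough for a converter to regenerate the corrupted subchannel frames on demand.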

http://forum.redump.org/topic/5299/cdrx … -pcsx-etc/


PCE had a primitive way of stream interleaving, mixing data/audio tracks,
so it was important for this console to preserve the actual data alignment
however, this is fully handled with raw images
you don't need additional steps for it

this TOC you refer to is no more than a cue sheet
about a decade ago there were those .iso/.mp3 dumps
and the .mp3, when converted to .wav, could have a different size than the original
so you could sometimes spot those this way (.bin or .img originating from .iso/.mp3, not a real CD); that's about it
(i.e. compare the .cue from a raw image to the .cue from the image in question)
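the comparison suggested in the last line can be sketched like this: parse the INDEX positions from both cue sheets into frame counts (75 frames per second) and compare the resulting track lengths:

```python
def msf_to_frames(msf):
    """'MM:SS:FF' -> absolute frame count (75 frames per second)."""
    m, s, f = (int(x) for x in msf.split(":"))
    return (m * 60 + s) * 75 + f

def track_lengths(index_points):
    """index_points: INDEX 01 positions as 'MM:SS:FF', one per track,
    in order. Returns the lengths (in frames) of all tracks but the last,
    for comparing a cue from a raw image against a suspect one."""
    frames = [msf_to_frames(p) for p in index_points]
    return [b - a for a, b in zip(frames, frames[1:])]
```

if the audio-track lengths from the suspect cue drift off by a handful of frames, an .mp3 re-encode somewhere in its history is a likely explanation.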

the result is correct as it is now; it's probably just an unnecessary leftover
there should be quite a lot of commented-out code parts and such
i didn't really invest time in aesthetics
but you could test this unit with raw DVD or CD sectors if you want to be certain

no, it's the name that should have been changed
it's pretty messy like that
this Reed–Solomon unit is there for Lite-On, since those drives erase two bytes from each raw sector
this data has to be regenerated
they would not work at all if there were something wrong

i made that unit on top of the Morelos-Zaragoza & Thirumoorthy code
(3. 'Reed-Solomon errors-and-erasures decoder' @ The Error Correcting Codes (ECC) Page)
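for the curious, the two-erasure case is small enough to sketch: a toy non-systematic Reed-Solomon code with two parity bytes over GF(256), using polynomial 0x11D (the field used on CDs). the real CIRC layout and the full errors-and-erasures decoder are considerably more involved:

```python
# GF(256) exp/log tables, primitive polynomial 0x11D
EXP, LOG = [0] * 512, [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11D
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gdiv(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 255]

def encode(msg):
    """Non-systematic toy encode: codeword = msg(x) * g(x), with
    g(x) = (x+1)(x+2) = x^2 + 3x + 2, so the codeword evaluates
    to 0 at alpha^0 and alpha^1."""
    g = [2, 3, 1]                        # lowest-order coefficient first
    out = [0] * (len(msg) + 2)
    for i, m in enumerate(msg):
        for j, c in enumerate(g):
            out[i + j] ^= gmul(m, c)
    return out

def poly_eval(cw, a):
    """Evaluate the codeword polynomial at a (coefficients lowest-first)."""
    r, p = 0, 1
    for c in cw:
        r ^= gmul(c, p)
        p = gmul(p, a)
    return r

def fix_two_erasures(cw, p1, p2):
    """Recover two erased (zeroed) bytes at known, distinct positions.
    With S0 = e1 + e2 and S1 = e1*a^p1 + e2*a^p2, solve the 2x2 system."""
    s0, s1 = poly_eval(cw, 1), poly_eval(cw, 2)   # syndromes at alpha^0, alpha^1
    a1, a2 = EXP[p1 % 255], EXP[p2 % 255]
    e1 = gdiv(s1 ^ gmul(s0, a2), a1 ^ a2)
    cw[p1] ^= e1
    cw[p2] ^= s0 ^ e1
    return cw
```

the key property is that erasures (known positions) cost one parity symbol each, while unknown-position errors cost two - which is why two parity bytes suffice to regenerate the two bytes the drive drops.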

i don't know, maybe i'll clean it up someday
not anytime soon though, i'm pretty done with this for now

edit:
ah, right, you mean it shifts all the data out now?
probably can just remove this bracket then

ok
thank you flibitijibibo
i'll add a link to this thread in the main topic, for other Linux users

yes, the type is determined with disc_detect_type by simply requesting some sectors ahead of the standard GC disc size
and then, if that passes, ahead of the Wii size
i guess maybe your drive just doesn't return the expected sense code
there was another method left in the source, relying on the content of the DVD, but it was not used for some reason
so i left it alone
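the probing logic described above, as a sketch (the sector thresholds below are derived from the nominal GC mini-DVD and Wii single-layer capacities, and the +16 probe offset is an assumption; neither is necessarily what disc_detect_type uses):

```python
# Assumed capacities in 2048-byte sectors:
# GC mini-DVD: 1,459,978,240 bytes; Wii SL: 4,699,979,776 bytes.
GC_SECTORS = 712_880
WII_SECTORS = 2_294_912

def detect_type(read_ok):
    """read_ok(lba) -> True if the drive returns data, False if it raises
    a sense error. Probe just past each threshold, mirroring the logic
    described above: fail early = smaller disc."""
    if not read_ok(GC_SECTORS + 16):
        return "GC"
    if not read_ok(WII_SECTORS + 16):
        return "Wii"
    return "Wii_DL"
```

this is exactly where a drive returning an unexpected sense code (or a Linux SG_IO path handling it differently) would derail detection.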

maybe you could check what sense your drive returns
on Windows, IsoBuster could do this, but i don't know about Linux tools

edit:
oh, right, i guess you could just compile it to print the sense it returns

edit:
i've rechecked 0.5.3 (that precompiled version from a year back) on my Lite-On with DVD+R and DVD+R DL
and it detects the first one as Wii and the 2nd as Wii_DL
so i would say on Windows it's generally fine

no, no, we tested it on Windows quite a lot, it should be alright
i don't have Wii discs myself, but it reads PC DVDs and GC discs fine
on Linux, though, nobody tested it
i thought maybe it's something specific to your system
but still, it's strange that it crashes
if it were about the size difference, the drive would just be asked for sectors that can not be read and the program would abort
it sounds very much like something has to be fixed for Linux