1

(3,494 replies, posted in General discussion)

Nemok wrote:

But there's a mistake anyway since range 329687-329947 has 261 sectors, not 260...

Hi, I'm not sure what the case was here, but yes, the mentioned range is 261 sectors.

The way I remember it, you need to end up with a run of sequential sectors that differ from the normally read sectors and that are correct, with matching ECC/EDC.

If you are getting different results each time, then each of those ranges is either a mixture of normal and twin sectors, or there may be corrupted/scrambled sectors. I guess you need to do this manually, or make an automated tool that rereads each sector until the twin sector is obtained.
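Something like this could work as a starting point (a rough sketch for plain Mode 1 sectors; read_raw_sector() is a stand-in for whatever drive-access routine you have, not a real API):

def edc(data):
    # standard CD-ROM EDC: CRC-32 with reversed polynomial 0xD8018001
    crc = 0
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (0xD8018001 if crc & 1 else 0)
    return crc

def mode1_edc_ok(sector):
    # Mode 1: EDC over bytes 0..2063, stored little-endian at 2064..2067
    return edc(sector[:2064]) == int.from_bytes(sector[2064:2068], "little")

def hunt_twin(lba, baseline, max_tries=1000):
    for _ in range(max_tries):
        sector = read_raw_sector(lba)   # hypothetical: one 2352-byte raw read
        if sector != baseline and mode1_edc_ok(sector):
            return sector               # a valid twin was obtained
    return None                         # gave up after max_tries rereads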

kingspoons wrote:

It's PAL (UK). As for the ring codes, one is AV05509E-AVL1, which doesn't seem to match the redump db. In fact, it looks like it's one out!

If it's really 09 and not 08, then it's an undumped revision.
Looks like there's a Classics release, so it's probably that. I added it to the undumped wiki.

We document barcodes as they are visible on the box, so include the ">" when it's there, even if it serves no practical purpose for the database.

4

(14 replies, posted in General discussion)

Here's another example: http://redump.org/disc/66596/

Some stupid questions:

- http://redump.org/disc/74810/ - Does the original disc play on a CD-i? What about a backup of the unfixed dump?

- http://redump.org/disc/99290/ - This has 8,848 errors despite being fixed? What's going on?

F1ReB4LL wrote:
superg wrote:

As some of you are already aware, some CD's have a mastering issue where write offset changes across the disc. For the standardization purpose, I will be calling that "offset shift".

That's an incorrect term. There's only 1 offset per disc, while you're talking about leftovers from earlier burning/dumping mastering stages. Those leftovers are physically present on the disc and need to be kept, since they belong to the disc data.

There are some clear cases mentioned in this topic where bad mastering causes dumps to descramble incorrectly, creating tons of erroneous data sectors, because some samples are missing or added at random positions in a data track. That's the main focus of this topic, right? I don't remember whether this also makes the original discs non-functional, or whether the drive performs some sort of on-the-fly correction to output a correct sector?
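If it helps, those shift points should be detectable: the 12-byte sync pattern is not scrambled, so in a raw scrambled image every sync that doesn't fall on the 2352-byte grid marks a point where samples were added or dropped. A minimal sketch (the filename is just an example; hits outside the data track would need a sanity check, since audio can contain sync-like byte runs):

SYNC = b"\x00" + b"\xff" * 10 + b"\x00"

def find_shift_points(path):
    data = open(path, "rb").read()
    pos = data.find(SYNC)
    while pos != -1:
        if pos % 2352 != 0:
            # a misaligned sync: the offset shifted somewhere before here
            print("sync at byte %d, %d bytes past the sector grid" % (pos, pos % 2352))
        pos = data.find(SYNC, pos + 1)

find_shift_points("dump.scm")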

And as F1ReB4LL also pointed out on Discord, there seem to be many cases of discs with scrambled data in the Track02 pregap after offset correction, for example: http://redump.org/disc/1770/ + http://redump.org/disc/1716/ + http://redump.org/disc/7986/
And if I remember correctly, this disc http://redump.org/disc/5479/ also has garbage at the start of the audio. If you remove those bytes, the track matches the PS1 track, so it seems to have been ripped from the PS1 version. And IIRC the same was true for Fighting Force PC vs. PS1. But I'm not sure anymore, as it's been 13-15 years since those were first dumped, time sure flies yikes
It's unclear whether this is caused by, for example, the gold master disc being a CD-R that was burned track-at-once, but the most logical explanation is that an audio track was copied with offset garbage and then burned again. But that's a different issue that we don't have to discuss here?

IIRC Truong / Ripper theorized that erroneous sectors with garbage bytes at the end of a data track were the result of a "split sector" or "half sector" or whatever they called it, that is part data / part audio tongue If you check the scrambled output, is it data and zeroes interleaved, or does the data stop at some position with only zeroes after that?
But errors at the end of the data track also seem to be a different issue, and since the remainder of the disc is audio tracks, performing offset shift corrections for such discs doesn't improve the dump in any meaningful way?

There were some examples recently where DIC was leaving sectors scrambled inside a data track despite a correct sync/header and mostly correct data, resulting in different dumps than before. So either the default descrambling behavior was changed by sarami at some point, or it's a bug. If a sector is inside a data track and the vast majority of it is data, IMO there's no sense in leaving it scrambled, and the descrambled data is indeed more meaningful.
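For reference, the descrambling itself is trivial once a sector is judged to be data: it's the same XOR as scrambling, using the ECMA-130 Annex B sequence. A sketch of generating it (LFSR with polynomial x^15 + x + 1 seeded with 1, applied to bytes 12..2351):

def scramble_table(n=2340):
    state, table = 1, []
    for _ in range(n):
        byte = 0
        for bit in range(8):
            byte |= (state & 1) << bit          # output LSB first
            fb = (state ^ (state >> 1)) & 1     # taps of x^15 + x + 1
            state = (state >> 1) | (fb << 14)
        table.append(byte)
    return bytes(table)                         # starts 01 80 00 60 ...

TABLE = scramble_table()

def unscramble(sector):                         # one 2352-byte raw sector
    return sector[:12] + bytes(b ^ t for b, t in zip(sector[12:], TABLE))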

5

(20 replies, posted in General discussion)

Giving up on finding a missing Resident Evil 2 PS1 PAL. There's no evidence of an undumped release. These are all the known releases:

Original:
5 028587 081194    UK / Rest of Europe    SLES-00972
5 028587 081828    France                 SLES-00973
5 028587 081835    Germany                SLES-00974
5 028587 081842    Spain                  SLES-00976
5 028587 081859    Italy                  SLES-00975
5 028587 081866    Greece / Portugal      SLES-00972
5 028587 081873    Nordic                 SLES-00972

Platinum:
5 028587 082559    UK / Rest of Europe
5 028587 082566    France
5 028587 082573    Germany
5 028587 082580    Spain
5 028587 082597    Italy

No separate Irish, Belgian (French), or Australian releases exist, and there are no hits on any subsequent barcodes.

ss_sector_range

http://forum.redump.org/topic/6073/

7

(20 replies, posted in General discussion)

Current best guesses for RE2:
- Special Edition (Germany) release - claimed to match the normal German release, but pictures are needed as proof
- Ireland - Resident Evil 3 has an Irish release with a blue rating logo on the box

8

(20 replies, posted in General discussion)

The 00977 number is claimed by the bonus disc, so it seems very unlikely that an unknown RE2 release exists with the same number.

9

(17 replies, posted in General discussion)

My final vote would be to correct the offset whenever it's necessary, practical and possible, so follow the base rules that we previously discussed (a code sketch of this decision logic follows below):

0. If there is no non-zero data in the pregap/lead-out, use 0 offset, unless it's possible to manually detect the write offset with a reasonable degree of certainty, in which case combined offset correction can be used.

1. If there is non-zero data in the lead-out and that data can be fully shifted out of there (to the left) without pushing non-zero data into the pregap, correct the offset with the minimum shift required.
2. If there is non-zero data in the pregap and that data can be fully shifted out of there (to the right) without pushing non-zero data into the lead-out, correct the offset with the minimum shift required.

Whenever a disc is dumped with offset correction, this should be documented in comments.

And then for the rare headache cases discussed in your last post, where it's impossible to shift the data out of the lead-out/pregap (the data is wider than the TOC space allocated for it):

3. Use 0 offset and preserve the relevant non-zero data in a separate pregap.bin or leadout.bin. I don't see any advantage in trying to include this data in the main dump through a custom cuesheet format or whatever, but if it's decided otherwise, that's fine by me.

And for DC / PSX or other discs that are missing relevant TOC / pregap / lead-out data, we should also preserve this data in separate files (offset corrected if possible).
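To make sure we're all reading the rules the same way, here's how I would express rules 0-3 as code (just a sketch; the inputs are sample counts measured from the dump, and the sign convention is only illustrative):

def decide_offset_correction(pregap_nonzero, leadout_nonzero,
                             track1_leading_zeros, last_track_trailing_zeros):
    if leadout_nonzero == 0 and pregap_nonzero == 0:
        return 0                    # rule 0: nothing to rescue, use 0 offset
    if leadout_nonzero and leadout_nonzero <= track1_leading_zeros:
        return -leadout_nonzero     # rule 1: minimum shift to the left
    if pregap_nonzero and pregap_nonzero <= last_track_trailing_zeros:
        return +pregap_nonzero      # rule 2: minimum shift to the right
    return None                     # rule 3: use 0 offset, keep the data
                                    # in separate pregap.bin / leadout.bin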

As for offset matching and "universal" checksums: audio checksum databases like AccurateRip and CUETools already ignore leading and trailing zero bytes, so they are essentially already storing "universal" checksums? I think this is beyond the scope of the Redump project and would require too much work and too many changes.

Guess we still need to figure out how to add the separate files to the database, with iR0b0t not around. Maybe resort to storing .dat or checksums in comments for now, similar to Xbox PFI/DMI/SS.

10

(20 replies, posted in General discussion)

None of these sources explain how to extract the list from the BIOS. I guess someone experienced with IDA should disassemble the PS1ID part?

In reply to the issue posted here: https://github.com/saramibreak/DiscImag … issues/121

DIC dumps multisession .cue with these lines added:

REM LEAD-OUT 01:30:00
REM SESSION 02
REM LEAD-IN 01:00:00
REM PREGAP 00:02:00

These lines are added with the intention of documenting the disc layout and describing data that isn't there.
So the REM LEAD-OUT, REM LEAD-IN and REM PREGAP lines describe missing blocks the same way PREGAP (without REM) did before we came up with this. They describe a length, not a relative address.
When I look at previous definitions of REM LEAD-OUT, they also describe a length: https://github.com/libyal/libodraw/blob … m-lead-out (though a couple of lines below there's a typo where it says "The REM LEAD-OUT command is used to specify the LBA corresponding to an MSF.", where they actually meant REM MSF)

It wasn't meant to be a functional implementation. These lines are stored in the database, but as soon as you download the .cue from the redump site, they are omitted. So you end up with a cuesheet that only has the REM SESSION 02 line.
Which leads to the problem that was described before here: http://forum.redump.org/post/94441/#p94441
To have somewhat functional .cue's using the old definitions, I proposed adding PREGAP 02:32:00 after the REM SESSION 02 line to downloaded .cue's. That fix was never implemented, and since then user7 got in contact and IsoBuster implemented some changes that I'm unaware of.

To summarize, there are a couple of different solutions:

1. Add the REM LEAD-OUT 01:30:00, REM LEAD-IN 01:00:00 and REM PREGAP 00:02:00 lines to the redump .cue downloads, and have tool authors interpret these lines as missing blocks, similar to PREGAP (see the sketch below).

2. Add the PREGAP 02:32:00 line to redump .cue downloads. Tell people to use the redump .cue instead of the one produced by DIC.

3. Keep everything as is, but have tool authors interpret a multi-bin .cue with only the REM SESSION 02 line (= the .cue as currently downloaded) as missing a 02:32:00 block between the sessions and hardcode a fix.
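For what it's worth, this is roughly what interpreting the REM lines as lengths (solution 1) amounts to, sketched with the values from the example above hardcoded:

def msf_to_sectors(msf):                # "MM:SS:FF" -> sector count
    m, s, f = (int(x) for x in msf.split(":"))
    return (m * 60 + s) * 75 + f

gap = (msf_to_sectors("01:30:00")       # REM LEAD-OUT (session 1)
       + msf_to_sectors("01:00:00")     # REM LEAD-IN (session 2)
       + msf_to_sectors("00:02:00"))    # REM PREGAP (session 2)
print(gap)                              # 11400 sectors = 02:32:00

A tool would add this gap to the absolute addresses of everything in session 2, which is exactly what the single PREGAP 02:32:00 line of solution 2 achieves.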

12

(3,494 replies, posted in General discussion)

https://mega.nz/folder/0NQh3QgL#Y2vjOQAxojTAPB4xJtjvIw

No PVD is detected on this disc, but sector 16 has a normal PVD, so I wonder what the problem is?

13

(3,494 replies, posted in General discussion)

Also, my input for automatic offset correction:

There should be a limit on the maximum number of non-zero samples that are corrected.
Looking at some of the largest offsets that have been dumped:
-9392 http://redump.org/disc/78870/
(next one is -4707)
+6722 http://redump.org/disc/1838/
(and several +5292)

So a range of -10,000 to +10,000 samples seems logical.

Here is my proposed method for automatic offset detection for Audio CDs (a sketch in code follows the steps below):

Step 1: Determine the amount of non-zero data in the lead-out:
- If there are >10,000 samples of non-zero data in the lead-out, then this is a strange disc that violates the Red Book standard. Offset correction is skipped and the disc is dumped with 0 offset?
- If there are ≤10,000 samples of non-zero data in the lead-out, check whether there are at least as many zero samples at the start of Track01. If so, use this as the custom offset correction (shift data to the left).
- If there are fewer zero samples at the start of Track01 than non-zero samples in the lead-out, then this is a strange disc that violates the Red Book standard. Offset correction is skipped and the disc is dumped with 0 offset?
Step 2: This step should only be performed if no non-zero data was found in the lead-out. Determine the amount of non-zero data in the Track01 pregap, counting backwards from sector -1.
- If no non-zero data is found in the Track01 pregap, then no offset correction is required and the disc is dumped with 0 offset.
- If there are >10,000 samples of non-zero data in the pregap, then this is a strange disc that violates the Red Book standard. Offset correction is skipped and the disc is dumped with 0 offset?
- If there are ≤10,000 samples of non-zero data in the pregap, check whether there are at least as many zero samples at the end of the last track. If so, use this as the custom offset correction (shift data to the right). If there are fewer zero samples at the end of the last track than non-zero samples in the Track01 pregap, then again this is a strange disc and 0 offset should be used?
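In code, the two steps would look something like this (a sketch only; sample counts are measured from the raw dump, and the sign convention is illustrative):

LIMIT = 10_000                           # samples

def auto_offset(leadout_nonzero, track1_leading_zeros,
                pregap_nonzero, last_track_trailing_zeros):
    if leadout_nonzero > 0:              # step 1: data stuck in the lead-out
        if leadout_nonzero > LIMIT or track1_leading_zeros < leadout_nonzero:
            return 0                     # strange disc, dump with 0 offset
        return -leadout_nonzero          # shift data to the left
    if pregap_nonzero == 0:              # step 2: data stuck in the pregap
        return 0                         # nothing to correct
    if pregap_nonzero > LIMIT or last_track_trailing_zeros < pregap_nonzero:
        return 0                         # strange disc, dump with 0 offset
    return +pregap_nonzero               # shift data to the right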

The staff will discuss (and perhaps refine) this proposed solution and then we will inform you.

14

(3,494 replies, posted in General discussion)

http://redump.org/disc/74600/

Why is it counting 150 sectors as part of the write offset? Is this a bug?

Logs: https://drive.google.com/file/d/1lbh0vA … sp=sharing

And also: http://redump.org/disc/77974/

The reason ss_sector_range modifies the SS.bin bytes is to get consistent hashes between different dumps of the same disc. Otherwise you end up with 100 different SS.bin files that are actually the same, except for bytes that change randomly on different reads.

IIRC the "cleaning" was only for SSv1, but please confirm.

Those dumps are missing the video partitions IIRC.

17

(3,494 replies, posted in General discussion)

Hi,

this disc has a strange track01 pregap: http://redump.org/disc/87558/

Logs: https://mega.nz/folder/AYoggTyS#-S138gR9lRjgqzx9d_d-8A

Is it correct?

18

(3,494 replies, posted in General discussion)

Is the latest DIC still outputting incorrect multisession cuesheets? http://forum.redump.org/topic/41246/ibm … intergame/

Also, DIC or MPF always scans the .img instead of the .bin files, resulting in an incorrect error count for multisession discs. The gap between the sessions should not be included in the error count, because it's not included in the .bin tracks.

19

(3,494 replies, posted in General discussion)

-12 seems way more common than 0, but otherwise you're right

iR0b0t posted in 2017: "needs a fix"

21

(3,494 replies, posted in General discussion)

Hello,

Intothisworld wrote:

From what I understand, the table of contents at the beginning of an audio CD lays out all the LBAs (or timestamps?) for tracks throughout the disc. So when a disc has an offset pressing, say shifted +88 from another similar release, are the LBAs/timestamps in the TOC shifted by +88 as well?

The TOC indeed tells you the start and length in LBA of each track. Sample-level offsets have no bearing here, so no, those entries are not shifted.
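For illustration: a TOC entry is an MSF address that converts to an LBA as below, and a pressing offset of +88 samples is only a fraction of the 588 samples in one sector, far below TOC granularity.

def msf_to_lba(m, s, f):
    # 75 frames per second, minus the 150-sector (2 second) pregap offset
    return (m * 60 + s) * 75 + f - 150

print(msf_to_lba(0, 2, 0))   # 0: the first track typically starts at LBA 0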

Intothisworld wrote:

And the second question is in regards to data potentially being shifted into the lead-in or lead-out... Would it be possible to add a feature to DIC where it automatically searches the lead-in/lead-out for non-zero bytes? And then if necessary adjusts accordingly (or at least tells you how to manually adjust accordingly)? Is there any technical limitation to something like that? If it is indeed possible, I'll go ahead and submit a request on the github, but if not, I don't want to waste anyone's time. Thanks again.

This should be possible, and it seems sensible in cases where you want to capture non-zero bytes that would otherwise be lost. There are already some discs in the db that were dumped this way: http://redump.org/discs/quicksearch/off … /audio-cd/
So feel free to request such a feature, and we will allow such dumps.

22

(3,494 replies, posted in General discussion)

FYI, the supposed +30 reference offset that was discovered and announced by http://forum.redump.org/user/48/ was later disputed by his friend Truong (from Trurip). In his tests using an FPGA, he determined the true "zero" read offset to be +48 relative to EAC (used by Pioneer drives as the read offset, as opposed to the +30 used by Plextor). Somewhere in the Red Book there is a part about 3 x 6 samples that could explain the 18-sample difference between the two.

Anyway, it's all hearsay and irrelevant, because Redump uses combined offset correction whenever possible, and for Audio CDs the EAC reference offset is the standard, used by databases like AccurateRip and CUETools that store checksums for many millions of discs. In the end it's just a number, and if you only correct the read offset, regardless of the reference used, you can still end up with data being shifted into the pregap or lead-out that will be missing from the dump, because the write offset isn't corrected.

There is no write offset correction for audio CDs, just read offset correction. IIRC the way these audio databases can link different pressings is by hashing the tracks without leading and trailing zeroes, thereby eliminating any offset differences; by also measuring the number of zeroes and storing it alongside the hashes, it's possible to calculate the offset difference.
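A sketch of that idea (assuming 16-bit stereo audio, so 4 bytes per sample):

import hashlib

def offset_agnostic_hash(track):
    samples = [track[i:i + 4] for i in range(0, len(track), 4)]
    lead = 0
    while lead < len(samples) and samples[lead] == b"\x00\x00\x00\x00":
        lead += 1
    tail = len(samples)
    while tail > lead and samples[tail - 1] == b"\x00\x00\x00\x00":
        tail -= 1
    digest = hashlib.sha1(b"".join(samples[lead:tail])).hexdigest()
    return digest, lead, len(samples) - tail    # hash + zero sample counts

Two pressings with equal digests but leading-zero counts of, say, 500 and 588 would then be the same audio shifted by +88 samples.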

I guess we need to do more research... maybe the SafeDisc scrambled data is also meaningful... but I always assumed that there could be no good/reliable data hidden behind a C2 error.

sarami wrote:

Logs. https://www.mediafire.com/file/js0a6rkd … 29.7z/file
C2 errors: 1445. vs your dump http://redump.org/disc/31708/ is 722

Yeah, but 1 of the 2 sectors is always just 22-23 bits... do C2 errors take offset correction into account?

For intentional C2 errors like here, we always fill with the 0x55 pattern.
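i.e. something like this (a minimal sketch of what the fill amounts to, for a raw 2352-byte/sector image opened in read/write mode):

def fill_c2_sector(image, lba):
    image.seek(lba * 2352)
    image.write(b"\x55" * 2352)   # overwrite the whole raw sector with 0x55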