1

(1 replies, posted in General discussion)

This sounds interesting. Possibly, these discs were pre-recorded with the wobble that is necessary to validate the disc and a basic loader executable on one session, and then you could burn a second session that would be loaded by the executable on the first session. I doubt this would be possible to replicate on a normal burner, most likely these discs were manufactured with a custom process that included the wobble. I would certainly be interested to see one of these discs though, and analyse it myself.

2

(4 replies, posted in General discussion)

The numbers on the barcode are indeed printed with spaces, not without, in most cases. It's best to record the data as thoroughly as possible, which means including spaces when they are present and leaving them out when they are not. We also include the "T" at the beginning of Japanese barcodes that have it, despite it not being a "real" part of the barcode. It's easy to transform our barcode set to "raw" barcodes with a simple regex; the reverse would not be possible if we left these characters out.
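As a sketch of that transformation (the exact regex is my own; it just drops whitespace and a leading "T"):

```python
import re

def raw_barcode(barcode: str) -> str:
    """Strip spaces and a leading 'T' (Japanese discs) to get the raw digits."""
    return re.sub(r"^T|\s+", "", barcode)

print(raw_barcode("T 4 988601 007764"))  # 4988601007764
```

Going the other way (re-inserting the spaces and the "T") is not possible from the raw digits alone, which is the point being made.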

scsi_wuzzy wrote:

Edit: It looks like it's a problem with the firmware I'm using. I updated to 3.10 a while back after Sarami mentioned ripping with that version while I was trying to troubleshoot a disc that wouldn't dump. Looking at the DIC source code, though, it only flags drives running 3.00 and 3.02 as capable of reading the lead-out via 0xF1. I'll probably just downgrade my drive back to 3.02, as I'd rather not modify DIC.

Edit2: I haven't tried the 0 offset disc again, but it looks like /mr works again after the downgrade.

F1h is not supported on 3.10; the command was disabled in that firmware and will just return an error.

I would like a wiki account so I can contribute to the missing lists and IFPI list.

5

(1 replies, posted in General discussion)

There are various region tags that appear to be duplicates of each other with no rhyme or reason as to which is used:

  • USA, Asia/Asia, USA

  • Asia, Europe/Europe, Asia

  • Japan, USA/USA, Japan

It would be a good idea to consolidate these together.

Additionally, "Europe, Germany" makes little sense to be a region tag of its own considering only one disc uses it and a "Europe" tag should suffice ("USA, Germany" and "Australia, Germany" also seem like they could probably be consolidated into their Europe counterparts).

Also, what is the use of the "Export" region?

6

(3,516 replies, posted in General discussion)

Jackal wrote:

-12 seems way more common than 0, but otherwise you're right

I thought so too, but:
http://redump.org/discs/offset/-12 - "Displaying results 1 - 500 of 1679"
http://redump.org/discs/offset/0 - "Displaying results 1 - 500 of 2584"

Ah yeah, forgot for a second that this site wasn't maintained

8

(3,516 replies, posted in General discussion)

I have been doing some thinking and I believe that the "0" offset we currently have is likely correct. "0" is the most common write offset for CDs in the DB, and two of the common offsets listed in the new disc form are +588 and +1176, equal to exactly one half of a sector and one full sector, respectively. Were the true "0" offset actually +30 or +48, these common offsets would instead be +558/+540 and +1146/+1128, which seems less logical than them being even fractions of a full sector.
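Spelling out the arithmetic (assuming, as I read the form, that offsets are counted in 16-bit sample words, so one 2352-byte audio sector is 1176 words):

```python
SECTOR_BYTES = 2352                         # one raw CD audio sector
WORD_BYTES = 2                              # one 16-bit sample word
words_per_sector = SECTOR_BYTES // WORD_BYTES   # 1176

# The common offsets from the new disc form:
assert 1176 == words_per_sector             # exactly one sector
assert 588 == words_per_sector // 2         # exactly half a sector

# If the "true" zero point were +30 or +48, the same physical positions
# would instead read as these less round numbers:
print(1176 - 30, 1176 - 48)  # 1146 1128
print(588 - 30, 588 - 48)    # 558 540
```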

Using the URL http://redump.org/discs/offset/-12 will search for discs with offset of -12, this works just fine, but I cannot search for positive offsets in this way, such as http://redump.org/discs/offset/+2, as the + is interpreted as a space. Is there any way to query the DB for positive offsets?
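One thing worth trying (I haven't confirmed the site's router accepts it) is percent-encoding the plus sign as %2B so it survives URL parsing:

```python
from urllib.parse import quote

offset = "+2"
# quote() with safe="" percent-encodes the '+' instead of leaving it
# to be decoded as a space by the server.
url = "http://redump.org/discs/offset/" + quote(offset, safe="")
print(url)  # http://redump.org/discs/offset/%2B2
```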

10

(3,516 replies, posted in General discussion)

Intothisworld wrote:

Hey there, I'm somewhat new to the community, but I have a question about audio CD drive offsets. I just discovered last night the old forum post where a guy called out EAC and AccurateRip for their drive offset references all being 30 samples off. I know this post made a pretty big splash in the field when it appeared back in 2006, so I'm assuming the DIC creators were aware of it... I'm just curious what the general opinion is of this info amongst the MPF/DIC tech crowd? I'm still very much a learner when it comes to all of this stuff, so beginner-friendly language would be very appreciated :P Thank you for your time.

We correct by using the data track offset, which can easily be determined from the sector sync, so adjusting for the "correct" offset would not affect any dumps, just the stated write offset.

As for that post, IIRC they did not present any compelling proof of their offset being any more correct besides "some guy in the industry said so", which is not good enough for me especially as said guy was basing his findings off the data track offset, which we know can vary depending on the disc.
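The data-track method mentioned above can be sketched roughly like this: scan the raw 2352-byte stream for the 12-byte sector sync and measure how far it lands from a sector boundary. (A simplified illustration of the idea, ignoring scrambling and multi-sector averaging; not DIC's actual code.)

```python
SYNC = bytes([0x00] + [0xFF] * 10 + [0x00])  # 12-byte sector sync pattern
SECTOR = 2352                                # raw sector size in bytes

def write_offset_samples(raw: bytes) -> int:
    """Byte displacement of the first sync from a sector boundary,
    converted to 4-byte stereo samples (simplified)."""
    pos = raw.find(SYNC)
    if pos < 0:
        raise ValueError("no sync found")
    shift = pos % SECTOR           # bytes past the nearest boundary
    if shift > SECTOR // 2:        # closer to the next boundary?
        shift -= SECTOR            # report as a negative offset
    return shift // 4              # bytes -> samples

# A stream shifted by +8 bytes (= +2 samples):
raw = bytes(8) + SYNC + bytes(SECTOR * 2)
print(write_offset_samples(raw))  # 2
```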

11

(3,516 replies, posted in General discussion)

The end of sector 0/the start of sector 1 on both disc 1 and disc 2 of Fair Strike comes back filled with 0x55 when dumping on both ASUS and Plextor drives. Tools like IsoBuster are able to read those sectors fine, so something else is going wrong here.

Logs (later errors are the skipped file): https://rib.s-ul.eu/vsUmxVSe

12

(3,516 replies, posted in General discussion)

sarami wrote:
RibShark wrote:

but .010 and .011 files are still skipped

(link is same)
- fixed: check string length

Thanks, working now!

13

(3,516 replies, posted in General discussion)

sarami wrote:
RibShark wrote:

There is no extension

https://www.mediafire.com/file/eq80y20l … st.7z/file
- fixed:
Only check files.

Too many files are being skipped:
https://i.postimg.cc/nLRkMgf5/image.png
Directory structure:
https://i.postimg.cc/YCSLj92r/image.png
I need to skip the "SNGM" file, but *not* the "SNGM.010" and "SNGM.011" files.

ReadErrorProtect.txt is

# This is a file not to read sector. Please write the file name you want to read skipping
SYSTEM.LSK
SNGM

but .010 and .011 files are still skipped
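For what it's worth, the behaviour I'd want from the skip list is exact-name matching rather than prefix matching, something like this (my own illustration, not DIC's actual code):

```python
# Names as listed in ReadErrorProtect.txt
skip_list = {"SYSTEM.LSK", "SNGM"}

def should_skip(filename: str) -> bool:
    # Exact match only: "SNGM" is skipped, "SNGM.010" and "SNGM.011" are not.
    return filename in skip_list

print([f for f in ("SNGM", "SNGM.010", "SNGM.011") if should_skip(f)])
# ['SNGM']
```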

14

(3,516 replies, posted in General discussion)

sarami wrote:
RibShark wrote:

Is there a way to better define which file to skip so it only skips "/SNGM/SNGM"?

Define only filename and extension.

There is no extension

15

(10 replies, posted in General discussion)

Bump, but I am pretty sure the protection on these discs is LaserLock Marathon rather than Star. Information on Star is scarce, but I found one article from 2002, which is two years before Marathon was supposedly released. Given that the Marathon titles we have are from 2004, the timing indicates that this LaserLock variant is Marathon.

(Have also found a few forum posts indicating that Marathon uses DPM, which matches what is said above; will try and look into that)

16

(3,516 replies, posted in General discussion)

sarami wrote:

Does LaserLock Marathon always have "SNGM.010" and "SNGM.011"?

No, this is for Fair Strike which already has an entry in the DB but I want to verify. Other LaserLock Marathon games use different files.

(sidenote: I am fairly confident that all games in the DB with "LaserLock Star" protection are actually Marathon; the timings match much better to when Marathon was released.)

17

(3,516 replies, posted in General discussion)

I have a LaserLock Marathon disc with the protected file named "SNGM" in a directory called "SNGM" alongside two other files ("SNGM.010" and "SNGM.011") that do not have read errors. Adding "SNGM" to ReadErrorProtect.txt causes DIC to ignore all three files and sector 18 where the directory is defined. Is there a way to better define which file to skip so it only skips "/SNGM/SNGM"?

18

(3,516 replies, posted in General discussion)

Sarami, could you add 0x55 as an option (preferably the default) for /ps so DVD error padding is consistent with that of CDs and other discs (we use 0x55 for Datel protected Nintendo discs too, for example)?

19

(3,516 replies, posted in General discussion)

Jackal wrote:

any way to get BCA data from gamecube discs with a PC drive?

I believe claunia found a single drive that could get this data; it seems the drive needs to at least recognise the disc. You might be able to use the swap trick with a legit DVD that has a BCA to read it on other drives, but that might not work if the drive reads the BCA on insertion and caches it from there.

20

(3,516 replies, posted in General discussion)

F1ReB4LL wrote:
user7 wrote:

I've got a good build bro, thanks.

I will personally nuke all the dumps made with unofficial/hacked tools, we don't need them. We don't accept the dumps from non-plextor drives, why should we accept dumps from some shady builds?

IMO, this is not acceptable behaviour from a redump mod. I would strongly recommend actually making sure you understand someone before immediately taking harsh and counterproductive actions like this. user7 was clearly referring to his PC, not a build of DIC, and if you were unsure, you should have asked him what he meant first. Even if he had meant a build, you could have talked things over in a much less hostile manner.

21

(3,516 replies, posted in General discussion)

F1ReB4LL wrote:

Since we're using the descrambled images for redump checksums, the .scm checksum is the only evidence of the original data.

Do we need such evidence? Besides, the descrambling process is 100% reversible and independent from actually reading the data from the drive.
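To illustrate the reversibility: descrambling is just a fixed XOR with the table defined in ECMA-130 Annex B (an LFSR with polynomial x^15 + x + 1, seeded with 1), so applying it twice restores the original bytes exactly. A sketch:

```python
def scramble_table(length=2340):
    """ECMA-130 Annex B scrambler: LFSR x^15 + x + 1, seed 0x0001, LSB first."""
    reg, table = 0x0001, bytearray()
    for _ in range(length):
        byte = 0
        for bit in range(8):
            out = reg & 1
            byte |= out << bit
            fb = (out ^ (reg >> 1)) & 1          # feedback = bit0 XOR bit1
            reg = (reg >> 1) | (fb << 14)
        table.append(byte)
    return bytes(table)

TABLE = scramble_table()

def descramble(sector: bytes) -> bytes:
    """XOR bytes 12..2351 of a 2352-byte sector with the table.
    The same function also re-scrambles: XOR is its own inverse."""
    body = bytes(b ^ t for b, t in zip(sector[12:], TABLE))
    return sector[:12] + body

# Round trip: descrambling twice gives the original sector back.
sector = bytes(range(256)) * 9 + bytes(48)   # any 2352-byte stand-in
assert descramble(descramble(sector)) == sector
```

So the .scm can always be regenerated from the .img and vice versa; neither holds information the other lacks.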

F1ReB4LL wrote:

And the .img checksum is important to compare with the calculated-by-the-site one + needed for the quick db-vs-log verifications.

Then just calculate the CRC32 (the same way that the site does) and skip the useless MD5 and SHA1.
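Computing only the CRC32 is cheap and streams fine; a sketch using zlib (assuming the site uses the standard CRC-32, which I believe it does):

```python
import zlib

def crc32_of_file(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through zlib.crc32 and return the 8-digit hex digest."""
    crc = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            crc = zlib.crc32(block, crc)
    return f"{crc & 0xFFFFFFFF:08x}"
```

Unlike MD5/SHA-1, this is directly comparable against what the site displays, which covers the db-vs-log verification case.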

22

(3,516 replies, posted in General discussion)

Agreed RE hashing scm/img: we don't store such hashes on the site, they are pretty much useless and just a waste of time.

23

(3,516 replies, posted in General discussion)

sarami wrote:

Plextor sometimes shifts some bytes when dumps the disc.

0900 : 22 F4 AD 60 D8 7A 5A DE  10 9D 21 4F 95 FC 68 16   "..`.zZ...!O..h.
0910 : 1F 18 9A 77 2D 24 68 9A  00 FF FF FF FF FF FF FF   ...w-$h.........
0920 : FF FF FF 00 29 A6 18 61  F6 D7 FC E1 79 F7 96 F9   ....)..a....y...
LBA[127818, 0x1f34a]: Track[01]: Invalid sync. Skip descrambling
========== LBA[127818, 0x1f34a]: Main Channel ==========
       +0 +1 +2 +3 +4 +5 +6 +7  +8 +9 +A +B +C +D +E +F
0000 : 52 FD 0F 7E 76 9F 69 28  24 1E 99 88 65 66 A1 AA   R..~v.i($...ef..
0010 : F2 7F 0A E0 0F 48 0C 36  8A 16 EF 0E C2 04 59 83   .....H.6......Y.

So which dump is correct? Is this a Plextor bug, or are these bytes actually shifted on the disc? If it is a Plextor bug, can it be resolved?

24

(3,516 replies, posted in General discussion)

I have a disc (Rayman CP Calcul) that is consistently giving a couple of bad sectors (same each time) when dumped with a Plextor (755), but the dump appears fine when dumped with an ASUS BW-16D1HT.

The sectors appear to have no sync when dumped with the Plextor.

Plextor logs: https://cdn.discordapp.com/attachments/ … P_logs.zip
Asus logs: https://cdn.discordapp.com/attachments/ … P_logs.zip

25

(3,516 replies, posted in General discussion)

We should really add to the guides to always dump CDs with audio tracks twice, preferably with separate drives. This would solve most issues.

But yeah, the cache reading is not ideal, and I really hope a better solution to reading the lead-out appears (through firmware modification perhaps).