1,776

Thanks, sarami.
The 'Majo to Hyakkihei [Shokai Genteiban]' dump is working OK now.

1,777

sarami, could you add a command to disable hashing? I'm advising the VGHF on a large-scale preservation project; they have too many discs and not enough time, so a flag to disable hashing would be an immense help in speeding things up. I would prefer they use DIC over other options, and this could be a deciding factor.

Thanks for your great work smile

All my posts and submission data are released into Public Domain / CC0.

1,778

Hashing doesn't take much time. At least CRC32 is needed, but even all three checksums don't take more than a couple of seconds unless we're talking about 50GB Blu-rays.
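For a sense of the cost: all three digests can be computed in a single pass over the image, so hashing time is essentially one sequential read of the file. A minimal sketch (Python, for illustration only; not DIC's actual code):

```python
import hashlib
import zlib

def triple_hash(stream, chunk_size=1 << 20):
    """Compute CRC32, MD5 and SHA-1 in a single pass over a stream."""
    crc = 0
    md5 = hashlib.md5()
    sha1 = hashlib.sha1()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        crc = zlib.crc32(chunk, crc)  # running CRC via the second argument
        md5.update(chunk)
        sha1.update(chunk)
    return format(crc & 0xFFFFFFFF, "08x"), md5.hexdigest(), sha1.hexdigest()
```

The whole thing is I/O-bound, which is why the disc size, not the number of checksums, dominates.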

1,779 (edited by user7 2019-04-08 22:59:39)

>unless we're talking about 50GB blurays

Not only are we talking about 50GB Blu-rays, we're talking about thousands of DVD-5s and DVD-9s.

I'd rather hash later than lose dumps.


1,780

Test version
20190409 (Windows)
http://www.mediafire.com/file/eq80y20l9 … or_test.7z
20190409 (Linux)
http://www.mediafire.com/file/uw3e03kdk … est.tar.gz
- changed: hashing order of DVD/XBOX (old: iso -> SS/PFI/DMI.bin, new: SS/PFI/DMI.bin -> iso)

user7 wrote:

sarami, could you add a command to disable hashing? I'm advising the VGHF on a large-scale preservation project; they have too many discs and not enough time, so a flag to disable hashing would be an immense help in speeding things up. I would prefer they use DIC over other options, and this could be a deciding factor.

While DIC is hashing, it no longer accesses the disc. That is, you can eject the disc, insert a new one, and dump it with a new DIC instance while the first is still hashing.
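The overlap described here can be sketched as follows: once the image file is written, hashing is pure file I/O and can run in the background while the drive dumps the next disc. A Python illustration under that assumption, not DIC's actual code:

```python
import hashlib
import threading

def hash_image(path, results):
    """Hash a finished image file; needs no access to the drive."""
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha1.update(chunk)
    results[path] = sha1.hexdigest()

def start_hashing(path, results):
    """Kick off hashing in the background and return immediately,
    leaving the drive free to dump the next disc."""
    t = threading.Thread(target=hash_image, args=(path, results))
    t.start()
    return t
```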

1,781 (edited by user7 2019-04-09 04:02:32)

The VGHF is looking to use it with an automatic disc feeder (Nimbie), so ejecting may not be doable. They have about 10,000 betas to dump in a time-sensitive setting.


1,782

user7 wrote:

The VGHF is looking to use it with an automatic disc feeder (Nimbie), so ejecting may not be doable.

Does that mean the Nimbie can't launch multiple instances of DIC? How does the Nimbie ensure that discs dumped correctly without hashing?

1,783

Hashing happens post-dump; I didn't know it was part of the QC diagnostics for a dump. IsoBuster just tells you if there are read errors, without needing to hash the entire ISO after dumping.


1,784

Drives do not always report errors properly. You know this from Puzzle Bobble 4 for Dreamcast. (To dump track 12, dcdumper rereads several hundred(?) times until the hashes match; dcdumper compares hashes rather than relying on the drive's error detection.)
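The reread-and-compare approach attributed to dcdumper here can be sketched like this, assuming a hypothetical read_sectors(lba, count) callable that returns raw bytes (illustration only, not dcdumper's actual code):

```python
import hashlib

def read_until_stable(read_sectors, lba, count, max_retries=500):
    """Reread a sector range until two consecutive reads return identical
    data, since the drive's own error reporting can't be trusted."""
    prev = None
    for _ in range(max_retries):
        data = read_sectors(lba, count)
        digest = hashlib.sha1(data).digest()
        if digest == prev:
            return data  # two consecutive reads agree
        prev = digest
    raise IOError("no stable read after %d attempts" % max_retries)
```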

1,785

Hmmm, what about for DVD or BD?


1,786

What command should be used for dumping Mil-CD / Dreamcast unlicensed discs: /ms, or the regular CD command?


1,787

Test version
20190413 (Linux)
http://www.mediafire.com/file/uw3e03kdk … est.tar.gz
- fixed: output of _volDesc.txt (many DWORD/LONG were changed to UINT/INT because DWORD/LONG are 8 bytes, not 4 bytes, in the 64-bit build)

user7 wrote:

hmmm, what about for DVD or BD

http://forum.redump.org/post/62345/#p62345
Perhaps this problem hasn't been solved yet.

user7 wrote:

What command should be used for dumping Mil-CD / Dreamcast unlicensed discs: /ms, or the regular CD command?

As you know, the cue format for multi-session discs is being discussed now. The cue format will probably change (also bin & scm?).

1,788

sarami wrote:
user7 wrote:

What command should be used for dumping Mil-CD / Dreamcast unlicensed discs: /ms, or the regular CD command?

As you know, the cue format for multi-session discs is being discussed now. The cue format will probably change (also bin & scm?).

I dumped some Dreamcast Unlicensed cheat discs, with and without /ms
The resulting files (bins, img, scm) were different (with /ms the files were much bigger in all cases).

Example:

-without /ms
5.729.472 DOWNLOAD (Track 1).bin
2.972.928 DOWNLOAD (Track 2).bin
8.702.400 DOWNLOAD.img
8.702.400 DOWNLOAD.scm

-with /ms
32.189.472 DOWNLOAD (Track 1).bin
3.325.728 DOWNLOAD (Track 2).bin
35.515.200 DOWNLOAD.img
35.515.200 DOWNLOAD.scm

Is this expected?
Let me know if you want me to share the logs for your review.

1,789

pool7 wrote:

The resulting files (bins, img, scm) were different (with /ms the files were much bigger in all cases).

        /ms     Read the lead-out of 1st session and the lead-in of 2nd session
                        For Multi-session

The size difference comes from the additional data reads that the /ms flag performs.
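As a sanity check on the sizes posted above: the difference works out to a whole number of raw 2352-byte sectors, and (assuming the standard session-gap sizes apply here) it matches lead-out + lead-in + pre-gap exactly:

```python
SECTOR = 2352  # raw CD sector size in bytes

img_plain = 8_702_400   # DOWNLOAD.img without /ms
img_ms = 35_515_200     # DOWNLOAD.img with /ms

extra = img_ms - img_plain
assert extra % SECTOR == 0
extra_sectors = extra // SECTOR  # 11400 sectors

# 11400 = 6750 (first-session lead-out) + 4500 (second-session lead-in)
# + 150 (pre-gap), the standard inter-session gap, if those sizes apply
assert extra_sectors == 6750 + 4500 + 150
```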

PX-760A (+30), PX-W4824TA (+98), GSA-H42L (+667), GDR-8164B (+102), SH-D162D (+6), SOHD-167T (+12)

1,790

So which one is the "right" dump (ie. the one I should submit)?

1,791

The one without /ms (or both if you want, we may need both at a later time)


1,792

Thanks; submitted here:
http://forum.redump.org/topic/21513/mil … on-thread/

1,793 (edited by Jackal 2019-04-23 18:47:24)

Olo dumped a disc at 8x speed, but the SecuROM data was messed up:

Data from DIC dump:
MSF: 01:08:58 Q-Data: 610101 01:06:59 00 01:08:59 b0a7

correct (unmodified) data:
MSF: 01:08:58 Q-Data: 610101 05:06:58 00 21:08:58 b0a7

And this one was not detected at 8x for some reason, but at 4x it is:
MSF: 01:08:52 Q-Data: 610101 01:06:53 00 01:08:53 6d32

Is it normal that a higher speed causes so many errors? Is there any way to optimize this?
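For reference when eyeballing Q frames like these: each one is 10 data bytes followed by a 16-bit CRC, which (to my understanding) uses the CCITT polynomial 0x1021 with zero initial value and is stored complemented on disc. A sketch, not DIC's code; the frame bytes below are taken from the "correct" line above:

```python
def crc16_q(data: bytes) -> int:
    """CRC-16, polynomial 0x1021, initial value 0 (CRC-16/XMODEM)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# Check a Q frame by comparing the complement of the computed CRC
# against the stored 16-bit value (here the claimed-correct frame):
q = bytes.fromhex("61010105065800210858")
ok = (crc16_q(q) ^ 0xFFFF) == 0xB0A7
```

A frame whose MSF bytes were mangled by the drive would fail this check, which is one way tools flag garbage subchannel reads at high speed.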

Post's attachments

Tzar_subInfo.txt 13.28 kb, 14 downloads since 2019-04-23 


1,794

[ERROR] Number of sector(s) where bad MSF: 2
        Sector: 14098, 14099,
[ERROR] Number of sector(s) where user data doesn't match the expected ECC/EDC: 1
        Sector: 14097,
[ERROR] Number of sector(s) where sync(0x00 - 0x0c) is zero: 1
        Sector: 901,

In fact, the damaged sectors are 14097, 14098, 14099 and 14100. Why does it say 901 instead of 14100? DIC is probably using a wrong algorithm to compute the sector numbers.
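For context, the sector numbers in these reports are decoded from the MSF field in each sector header, so a corrupted header can decode to a wildly different number; that could be how 901 appears instead of 14100. The standard conversion, as a sketch:

```python
def msf_to_lba(m: int, s: int, f: int) -> int:
    """Absolute MSF (minutes:seconds:frames, 75 frames per second) to
    sector number; the 150-sector pre-gap puts MSF 00:02:00 at LBA 0."""
    return (m * 60 + s) * 75 + f - 150

def lba_to_msf(lba: int):
    """Inverse conversion: sector number back to absolute MSF."""
    frames = lba + 150
    return frames // 4500, (frames // 75) % 60, frames % 75
```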

Post's attachments

mk3.7z 1.83 mb, 15 downloads since 2019-04-23 


1,795 (edited by sarami 2019-04-24 13:57:53)

F1ReB4LL wrote:

In fact, the damaged sectors are 14097, 14098, 14099 and 14100. Why does it say 901 instead of 14100? DIC is probably using a wrong algorithm to show the sector numbers.

Replace EccEdc.exe with this
http://www.mediafire.com/file/2quudw2bl … st.7z/file

Jackal wrote:

Olo dumped a disc at 8x speed, but the SecuROM data was messed up

If random errors also exist in the SecuROM data, they are difficult to fix.

1,796 (edited by Jackal 2019-04-25 19:15:37)

KailoKyra dumped this disc: http://redump.org/disc/3927/ but it didn't match the database, because DIC is not descrambling any of the last 3 sectors.

Attached are the last 3 data sectors for this disc and the DIC log:

If I remember correctly, we agreed before that all sectors inside a data track with a valid sync should always be descrambled? But DIC is still not descrambling the first 2 sectors. The third and last sector has an invalid sync and shouldn't be descrambled.

Post's attachments

carpet.zip 19.58 kb, 20 downloads since 2019-04-25 


Descramble algorithm: https://github.com/saramibreak/DiscImag … output.cpp
Line 1699

It's very complex...
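For anyone following along, the table that code XORs against comes from the ECMA-130 Annex B scrambler: a 15-bit LFSR with polynomial x^15 + x + 1 seeded with 0x0001, applied to bytes 12..2351 of a raw sector. A compact sketch (not DIC's actual implementation):

```python
def scramble_table(n=2340):
    """Generate n bytes of the ECMA-130 Annex B scrambling sequence;
    descrambling XORs bytes 12..2351 of a raw sector with this stream."""
    table = bytearray()
    reg = 0x0001  # 15-bit shift register, preset per the standard
    for _ in range(n):
        b = 0
        for bit in range(8):
            b |= (reg & 1) << bit        # output bit is the register LSB
            fb = (reg ^ (reg >> 1)) & 1  # feedback: bit0 XOR bit1
            reg = (reg >> 1) | (fb << 14)
        table.append(b)
    return bytes(table)
```

Since scrambling is a plain XOR, applying the table twice returns the original bytes, so descrambling a sector and re-scrambling it loses nothing; the hard part in DIC is deciding *which* sectors to apply it to.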

1,798

Jackal wrote:

If I remember correctly, we agreed before that all sectors inside a data track with a valid sync should always be descrambled?

Shouldn't all 16 header bytes be correct for descrambling? What's the point in descrambling a sector with non-existent "E1" and "C1" modes?

1,799 (edited by Jackal 2019-04-25 20:27:46)

F1ReB4LL wrote:
Jackal wrote:

If I remember correctly, we agreed before that all sectors inside a data track with a valid sync should always be descrambled?

Shouldn't all 16 header bytes be correct for descrambling? What's the point in descrambling a sector with non-existent "E1" and "C1" modes?

It's a data sector inside a data track, so why not descramble it? The sync is valid, so the sector is assumed to be intact.

I'm fine with discussing and maybe coming to a new method, but then we also have to fix old dumps that were processed differently.

Some of the dumps are here: http://forum.redump.org/topic/16655/ibm … to-redump/

TRUSTEDME + HARD groups. These are the suspicious ones that were picked up last time...