26

(3,536 replies, posted in General discussion)

Glad to see a new /t command!

To summarize what was happening before it:

Forwards reading 287900-288200:

- "cd d: dump.bin 8 /be raw /c2" + manual work         C3E9D88B    no errors
- "data d: dump.bin 8 287900 288200 /be raw"          C3E9D88B    no errors
- "data d: dump.bin 8 287900 288200 /be raw /c2"     C3E9D88B    no errors
- "data d: dump.bin 8 287900 288200 /be raw /c2 /f"    variable        all 0x55 (288173-288183)
- "data d: dump.bin 8 287900 288200 /be raw /f"          variable        all 0x55 (288173-288183)

Backwards reading 287900-288200:

- "data d: dump.bin 8 287900 288200 /be raw /r"          variable *    no errors
- "data d: dump.bin 8 287900 288200 /be raw /r /f"       variable        all 0x55 + invalid mode (288173-288183)

Forwards reading 287922-288172:

- "cd d: dump.bin 8 /be raw /c2" + manual work        DA3582FC        no errors
- "data d: dump.bin 8 287922 288172 /be raw /c2"    DA3582FC        no errors

Backwards reading 287922-288172:

- "data d: dump.bin 8 287922 288172 /be raw /r"         variable **    no errors ***
- "data d: dump.bin 8 287922 288172 /be raw /r /f"      variable        all 0x55 (288173) ***

* often C3E9D88B
** often DA3582FC
*** most of the time

What is happening now with it:

- "data d: dump.bin 8 287900 288200 /be raw /t"

It takes forever to get past 288173-288183 on each retry round, but it should be able to finish given enough time

- "data d: dump.bin 8 287922 288172 /be raw /t"

_Forward.bin
     DA3582FC (match)
_BackToForward.bin
     100127CD, B031FD66, C2DCFD82, 19D8C363, 19D8C363, CF7778EF
-> Identical hash found
     19D8C363 (2 out of 6 attempts) but always with one error right after the first reread

- "data d: dump.bin 8 287922 288162 /be raw /t"

Seems to work pretty well if the last twin sectors are ignored

_Forward.bin
     BA7E0D0B (match)
_BackToForward.bin
     3095FB60, 3095FB60, 3095FB60, 350C16A4, EEB2A8E7, 3095FB60
-> Identical hash found
     3095FB60 (4 out of 6 attempts) with no errors
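
For reference, the CRC-32 values above are just hashes of the dumped range, and this is essentially the "manual work" part. A minimal Python sketch of how a sub-range of a raw image can be extracted and hashed for these comparisons, assuming 2352-byte raw sectors and that the file starts at the first LBA passed to the command (file name and LBAs are only examples):

import zlib

SECTOR_SIZE = 2352  # raw mode: 2352 bytes per sector

def crc32_of_range(path, file_start_lba, first_lba, last_lba):
    # CRC-32 of sectors first_lba..last_lba (inclusive) in a raw image
    # whose first sector corresponds to file_start_lba.
    crc = 0
    with open(path, "rb") as f:
        f.seek((first_lba - file_start_lba) * SECTOR_SIZE)
        remaining = (last_lba - first_lba + 1) * SECTOR_SIZE
        while remaining > 0:
            chunk = f.read(min(remaining, 1 << 20))
            if not chunk:
                raise EOFError("image is shorter than expected")
            crc = zlib.crc32(chunk, crc)
            remaining -= len(chunk)
    return "%08X" % (crc & 0xFFFFFFFF)

# Example: hash only the 251 suspected twin sectors out of a 287900-288200 dump
print(crc32_of_range("dump.bin", 287900, 287922, 288172))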

27

(3,536 replies, posted in General discussion)

Absolutely.
If you need more tests, ask me.

28

(3,536 replies, posted in General discussion)

Getting to the point for http://redump.org/disc/36124/

Testing range 287900-288200 with twindump
- 251 twin sectors in 287922-288172
Backwards reading hashing for range B sectors
- identical CRC-32 found: 57A7CE8A

Testing range 287900-288200 with DIC
Forwards and backwards reading comparison
- 251 twin sectors in 287922-288174 with the last 2 sectors all 0x55
- 251 twin sectors in 287922-288180 with the last 8 sectors all 0x55 or invalid mode
Backwards reading hashing for range B sectors
- no identical CRC-32 found
- hash inconsistency is not reliable enough to help detect the range of twin sectors

29

(3,536 replies, posted in General discussion)

Thanks for the interesting link. I'll try the specific tool as soon as possible.

sarami wrote:

I'm not sure these ranges are correct.

With successive manual retries, I finally found the exact range to be 287920-288165, not 287915-288169.

About DIC, one thing I noticed on a non-protected area: the output without /r for LBA 1000 to 2000 equals the output with /r for LBA 1000 to 1999 (and not 1000 to 2000, otherwise the output has 1 additional sector). Is that normal?

30

(3,536 replies, posted in General discussion)

sarami wrote:

PLEXTOR can get the identical hash when uses /f (cache delete), but sometimes returns non-identical hash.
ASUS can't get the identical hash even if uses /f.

Same thing with this disc: http://redump.org/disc/36124/ using this command: "data d: dump.bin 8 0 293587 /be raw /f /r".
A first error happened near the end of the disc, where twin sectors are expected:

LBA[288180, 0x465b4]: [F:ProcessReadCD][L:282]
    Opcode: 0xbe
    ScsiStatus: 0x02 = CHECK_CONDITION
    SenseData Key-Asc-Ascq: 03-11-05 = MEDIUM_ERROR - L-EC UNCORRECTABLE ERROR
LBA[288180, 0x465b4]: Read error. padding [2352 bytes]
========== LBA[288180, 0x465b4]: Main Channel ==========
       +0 +1 +2 +3 +4 +5 +6 +7  +8 +9 +A +B +C +D +E +F
0000 : 00 FF FF FF FF FF FF FF  FF FF FF 00 63 48 50 00   ............cHP.
LBA[288180, 0x465b4] Reread NG

So I started focusing on 287000-289000:
- attempt #1: EC8893EC
- attempt #2: E43FF68D
- attempt #3: 6D3297FE

Then I narrowed the suspected range down to 287500-288500, since:
- 287000-287500 always gives D7F637B0
- 288500-289000 always gives BB08138A

What do you think of using hash inconsistency to help detect the range of twin sectors by narrowing the suspected range further?
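
To make the idea concrete, this is roughly the procedure I have in mind (a sketch only, not something DIC does; read_range stands for whatever routine dumps a sector range and returns its raw bytes): hash each half of the suspected range over several attempts and keep recursing into the half whose hash varies.

import zlib

def crc32(data):
    return zlib.crc32(data) & 0xFFFFFFFF

def narrow_unstable_range(read_range, first_lba, last_lba, attempts=3, min_span=16):
    # read_range(lo, hi) must return the raw bytes of sectors lo..hi (inclusive).
    # Returns the smallest sub-range whose contents still change between reads.
    while last_lba - first_lba + 1 > min_span:
        mid = (first_lba + last_lba) // 2
        unstable = []
        for lo, hi in ((first_lba, mid), (mid + 1, last_lba)):
            hashes = {crc32(read_range(lo, hi)) for _ in range(attempts)}
            if len(hashes) > 1:      # hash differs between attempts
                unstable.append((lo, hi))
        if len(unstable) != 1:       # neither half or both halves vary:
            break                    # cannot narrow further this way
        first_lba, last_lba = unstable[0]
    return first_lba, last_lba

On the hashes above, 287000-287500 and 288500-289000 immediately come out as stable, leaving 287500-288500 to be split further.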

31

(3,536 replies, posted in General discussion)

Jackal suggested that 'twindump' might help. I'll look for that program.

32

(3,536 replies, posted in General discussion)

I'm a bit confused. I thought that:

- reading forwards kept twin #1 and ignored twin #2 of each duplicated sector, since disc drives considered any repeated sector as erroneous, and so reading backwards kept twin #2 and ignored twin #1, since twin #2 was read first that way.
- twin sectors were called twins because of their identical headers, while possibly containing totally different data, which made it possible to hide data in every twin #2.

If so, comparing forwards and backwards images cannot reveal twins that happen to contain strictly identical data, if such cases exist. But I don't know whether they do.
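
Under that assumption, the twins that do differ show up as a plain sector-by-sector mismatch between the two images. A minimal sketch of such a comparison, assuming both files cover exactly the same LBA range in raw 2352-byte sectors, with the backwards dump already reordered to forward order (like a _BackToForward image):

SECTOR_SIZE = 2352

def differing_sectors(forward_path, backward_path, first_lba):
    # Return the LBAs whose raw sectors differ between the two images.
    diffs = []
    with open(forward_path, "rb") as fwd, open(backward_path, "rb") as bwd:
        lba = first_lba
        while True:
            a = fwd.read(SECTOR_SIZE)
            b = bwd.read(SECTOR_SIZE)
            if not a or not b:
                break
            if a != b:
                diffs.append(lba)
            lba += 1
    return diffs

# Hypothetical file names:
# print(differing_sectors("range_Forward.bin", "range_BackToForward.bin", 287900))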

sarami wrote:

But Jackal says "Disc has 260 twin sectors in range 329687-329947".

As for the principle of that range being larger than the range of sectors with differences you found, I guess it's still possible. But there's a mistake anyway, since the range 329687-329947 contains 261 sectors (329947 - 329687 + 1), not 260...

sarami wrote:

Twin sectors of the Tages are always 260 sectors? If yes, where is the evidence to show that it is correct?

"That is the question." Personally for now, I'm just curious about comparisons between forwards and backwards images for other Tages protected discs.

33

(3,536 replies, posted in General discussion)

I did not; my dump only verified the regular hashes. I initially thought that DIC had provided the info found in the comments, whenever the correct argument was given to it. That's why I wanted to try "/r".

Daemon Tools Pro is supposed to make working dumps of Tages discs, but it creates inconsistent, non-standard mds/mdf images, so using them for comparison with regular images to get that info isn't really straightforward.

34

(3,536 replies, posted in General discussion)

sarami wrote:

I don't know how redump.org stipulates that twin sectors be preserved.

Jackal seems to be the only one who tried to give an example:
http://redump.org/disc/35932/
http://redump.org/disc/34669/

- number of duplicated sectors
- range of duplicated sectors
- type of duplication (twin)
- hashes (with duplicated sectors inserted into the disc image)

35

(3,536 replies, posted in General discussion)

What is the current state of Tages support?

The GitHub page mentions that DIC "can read in reverse, but specifications are not decided". The associated command seems to have been "/r" at one point, as found here: http://wiki.redump.org/index.php?title= … f_Commands, but it is not recognized anymore.

36

(3,536 replies, posted in General discussion)

The issue is fixed. Thanks, sarami.

37

(3,536 replies, posted in General discussion)

sarami wrote:
Nemok wrote:

Same behavior unfortunately, though the error has changed a little:

I tried some mixed mode disc, but it's no problem. Could you upload the logs?

Sure.

Disc tested: http://redump.org/disc/31428/
Log files: https://1drv.ms/u/s!AiA4u0yuSX13pn0G6gM … U?e=BJg5Td

38

(3,536 replies, posted in General discussion)

Same behavior unfortunately, though the error has changed a little:

LBA[083196, 0x144fc]: [F:ReadCDForCheckingSubQAdr][L:1080]
    Opcode: 0xbe
    ScsiStatus: 0x02 = CHECK_CONDITION
    SenseData Key-Asc-Ascq: 03-11-05 = MEDIUM_ERROR - L-EC UNCORRECTABLE ERROR
lpCmd: be, 00, 00, 01, 44, fc, 00, 00, 01, f8, 01, 00
dwBufSize: 2448

39

(3,536 replies, posted in General discussion)

OK, those issues are now fixed.

The program gets further but fails when checking the subQ address for each track:

LBA[088444, 0x1597c]: [F:ReadCDForCheckingSubQAdr][L:1076]
    Opcode: 0xbe
    ScsiStatus: 0x02 = CHECK_CONDITION
    SenseData Key-Asc-Ascq: 03-11-05 = MEDIUM_ERROR - L-EC UNCORRECTABLE ERROR
lpCmd: be, 00, 00, 01, 59, 7c, 00, 00, 01, f8, 01, 00
dwBufSize: 4896

40

(3,536 replies, posted in General discussion)

Indeed, the subchannel is zeroed. So pack mode was never really supported, even though a dump in this mode could be started. Fine.

Also discovered today:

- Discs with data and audio tracks cannot be dumped anymore using the RibShark firmware. DIC throws errors like this one:

LBA[083296, 0x14560]: [F:ReadCDForCheckingSubRtoW][L:1283]
    Opcode: 0xbe
    ScsiStatus: 0x02 = CHECK_CONDITION
    SenseData Key-Asc-Ascq: 03-11-05 = MEDIUM_ERROR - L-EC UNCORRECTABLE ERROR
lpCmd: be, 00, 00, 01, 45, 60, 00, 00, 01, f8, 01, 00
dwBufSize: 2448

- DIC often fails to get the write offset on the first attempt.

41

(3,536 replies, posted in General discussion)

sarami wrote:
Nemok wrote:

Pack mode without /c2 is still broken.

It's not broken. LG/ASUS does not support the pack mode.

How can this be explained? Were previous DIC releases wrong about it?

42

(35 replies, posted in General discussion)

For anyone who is still trying to get the best DPM approximation possible.

I have written a small bash script that moves the DPM content into an old-format MDS container (convert). This allows manual editing of the DPM values with Advanced MDS editor 0.5.5 and BWA edit 1.1, since those tools are incompatible with what the latest Alcohol releases create. The edited DPM content can then be put back inside its new-format container (rebuild).
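
The convert/rebuild steps boil down to copying the DPM byte range out of the container, letting the old editors work on it, and writing the edited bytes back at the same place. A rough illustration of that idea, sketched in Python here rather than in the actual bash script; DPM_OFFSET and DPM_SIZE are placeholders that have to be located in the MDS header, and the real convert step also wraps the data in an old-format container so the editors accept it:

def extract_blob(container, out_path, offset, size):
    # Copy `size` bytes starting at `offset` out of the container ("convert").
    with open(container, "rb") as src, open(out_path, "wb") as dst:
        src.seek(offset)
        dst.write(src.read(size))

def inject_blob(container, blob_path, offset):
    # Write the (edited) blob back at the same offset ("rebuild").
    with open(blob_path, "rb") as src, open(container, "r+b") as dst:
        dst.seek(offset)
        dst.write(src.read())

# DPM_OFFSET and DPM_SIZE are hypothetical; they must be read from the MDS header.
# extract_blob("image.mds", "dpm.bin", DPM_OFFSET, DPM_SIZE)
# ... edit dpm.bin with Advanced MDS editor / BWA edit ...
# inject_blob("image.mds", "dpm.bin", DPM_OFFSET)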

43

(3,536 replies, posted in General discussion)

Raw mode with /c2 is fixed.
Pack mode without /c2 is still broken.

44

(3,536 replies, posted in General discussion)

At least with my setup, the latest release is completely unusable: I get C2 errors on every sector and a warning that the drive does not support subchannel pack mode, while the 20230909 release had no problem (BW-16D1HT + RibShark firmware + SATA-to-USB adapter).

The modified firmware with direct lead-out reading allowed me to solve the issue I had with this particular high positive offset disc and get a dump that matches the one in the database. Provided no regressions were introduced, this firmware is a major step forward. Thanks to RibShark!

bikerspade wrote:

/mr is unreliable. It can sometimes return garbage bytes in the last 24 bytes. Use RibShark’s 3.10 firmware which eliminates the need for /mr

I'll try this firmware when I have time. Thank you.

Using the 0xf1 opcode to retrieve the cache through the /mr command in DIC makes it possible to get the last 21 sectors of the last track and a varying number of lead-out sectors, most of the time 10 to 15.

It seems to work fine for discs with different positive write offsets:
+0    ->    9 lead-out sectors    (= 0 lead-out samples required / 5292 available)
+18    ->    16 lead-out sectors    (= 18 lead-out samples required / 9408 available)
+19    ->    12 lead-out sectors    (= 19 lead-out samples required / 7056 available)
+925    ->    11 lead-out sectors    (= 925 lead-out samples required / 6468 available)
+1362    ->    12 lead-out sectors    (= 1362 lead-out samples required / 7056 available)

But for the disc with the highest positive write offset:
+1644    ->    1 lead-out sector    (= 1644 lead-out samples required / 588 available)

That's why the dump fails in this case.
I still don't know what the maximum write offset is for the BW-16D1HT.
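
In other words, with 588 samples per raw sector (2352 bytes / 4 bytes per 16-bit stereo sample), a +N write offset needs N lead-out samples, and each cached lead-out sector provides 588 of them. A quick check of the values above, as a small Python sketch:

SAMPLES_PER_SECTOR = 588  # 2352 bytes per sector / 4 bytes per sample

for offset, leadout_sectors in [(0, 9), (18, 16), (19, 12), (925, 11), (1362, 12), (1644, 1)]:
    available = leadout_sectors * SAMPLES_PER_SECTOR
    status = "enough" if available >= offset else "not enough"
    print(f"+{offset}: {offset} lead-out samples required / {available} available -> {status}")

Only the +1644 case comes out as "not enough", which matches the failure.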

Please correct me if I'm wrong.

NB:
I noticed that the very last lead-out sector sometimes shows a strange MSF, like:
...
31 Cache LBA 270416, SubQ Trk aa, AMSF 60:07:41 [Lead-out]
32 Cache LBA 270417, SubQ Trk aa, AMSF 60:07:42 [Lead-out]
33 Cache LBA 270418, SubQ Trk aa, AMSF 56:21:06 [Lead-out]

In the case of the +1644 disc, the last sector of the last track also shows a lead-out SubQ:
...
19 Cache LBA 088950, SubQ Trk 12, AMSF 19:48:00
20 Cache LBA 088951, SubQ Trk 12, AMSF 19:48:01
21 Cache LBA 088952, SubQ Trk aa, AMSF 19:48:02
22 Cache LBA 088953, SubQ Trk aa, AMSF 18:41:24 [Lead-out]

I've recently been spending some time on a small number of Mega CD and Saturn discs, and discovered that 1 of my 12 discs did not match any redump hashes using DIC and my Asus BW-16D1HT 3.02.

For that particular disc, however, the old IsoBuster/EAC method and a +667 offset Pioneer BD drive allowed me to get results matching http://redump.org/disc/8167/, a +1644 write offset disc.

So far, the disc with the highest positive write offset that I've successfully dumped with the Asus is http://redump.org/disc/17715/ with a +1362 write offset.

If I understand this correctly, the failure is only due to the Asus' maximum tolerated positive write offset, somewhere between +1362 and +1644, and not to the read method, at least in this case.

What do you think?

49

(35 replies, posted in General discussion)

Hello reentrant

I have also tried cdarchive on a CD, and some 'ranges' appear to be missing. I was able to visually identify 4x25=100 'spikes' on the graph drawn by Alcohol, but the program only saw 34 according to the ranges count. The 'data' values clearly differ from the example you had given when approaching one 'area':

DPM Data: 0 1550 -1 1304
DPM Data: 0 1600 1 1305
DPM Data: 0 1650 1 1306
DPM Data: 1 1700 15 1321
DPM Data: 0 1750 17 1338
DPM Data: 0 1800 4 1342
DPM Data: 0 1850 1 1343

There's a second positive 'sector density difference' right after the first one here, so could that partially explain the issue? With the resolution of 1 value every 50 sectors used by the high-precision sampling, this case is much more likely to occur.
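
For what it's worth, in the excerpt above the last column looks cumulative and the third column looks like its per-interval delta (1321 - 1306 = 15, 1338 - 1321 = 17, ...). Assuming that reading is right, flagging the spikes from the cumulative column is straightforward (a sketch, with an arbitrary threshold):

def find_spikes(samples, threshold=5):
    # samples: (lba, cumulative) pairs taken every 50 sectors.
    # Returns the LBAs where the per-interval delta jumps above the threshold.
    spikes = []
    for (_, prev_cum), (lba, cum) in zip(samples, samples[1:]):
        delta = cum - prev_cum
        if delta >= threshold:
            spikes.append((lba, delta))
    return spikes

# Values from the excerpt above:
data = [(1550, 1304), (1600, 1305), (1650, 1306), (1700, 1321),
        (1750, 1338), (1800, 1342), (1850, 1343)]
print(find_spikes(data))  # -> [(1700, 15), (1750, 17)]

Two consecutive intervals come out above the threshold here, which is exactly the "two positive differences in a row" case mentioned above.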

Maybe you have already fixed it since the thread was created. By the way, have you found out how to patch an MDS file?