Drakkhen wrote:

According to the GD-ROM SD Card Dumping Guide, "If the HD area has more than one track, the tracks will need to be properly split to conform to Redump standards (Fireball knows how to do this)."

The guide doesn't go into any detail on how to properly split multi-track SDRip dumps to conform to Redump standards. Is Fireball around? Can we please get an updated guide on how to properly split multi-track SDRip dumps?

After a failed IDE Drive attempt, SDRip is all I've got to rely on currently, and many of my Dreamcast dumps are multi-track in the HD Area, so a guide on how to get this split up correctly would be great!

I was curious about this myself, so I've dumped a game using both my Dreamcast and a TSST drive. I haven't had time to sort out all the details yet, but it looks like the pregap / offsets are not the same. So it's probably mostly a matter of shifting the bytes in the DC rips so that they have the same offset as in the rips from the TSST (and maybe also adding zero padding since, IIRC, the DC can't read some of the gaps).
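For what it's worth, the byte-shifting step itself is trivial once the offset difference is known. Here's a minimal Python sketch of the kind of correction I have in mind; the sign convention and the example offset value are assumptions on my part until I've actually lined up the two dumps:

def shift_by_samples(data: bytes, offset_samples: int) -> bytes:
    # One CD audio sample is 4 bytes (16-bit stereo), so shift by offset * 4.
    # Positive offset: drop bytes from the front, zero-pad the end;
    # negative offset: zero-pad the front, drop bytes from the end.
    shift = offset_samples * 4
    if shift >= 0:
        return data[shift:] + b"\x00" * shift
    return b"\x00" * -shift + data[:shift]

# Hypothetical usage: re-align a DC track read to match a TSST read.
# aligned = shift_by_samples(open("track04.bin", "rb").read(), 18)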

I may put together some notes if I ever have time to go through it in detail.

I think for data only discs the DC rips should match the TSST rips, though, since there's no offset / gap issue.

Myria wrote:

Has there been consideration to hack CD burner firmware to dump Dreamcast disks?  I don’t think it’s all that difficult, assuming that the laser unit is capable of reading the high density area at all and the firmware is hackable.

There was some brief discussion on this subject a while back (at http://forum.redump.org/topic/29341/tho … led-reads/). Well, not exactly this subject, but the general idea of modifying a firmware to enable scrambled reads. I think if someone made it far enough to hack the TOC to enable scrambled reads and similar, the rest of this could probably be done as well.

I very briefly did some work on reverse engineering the firmware from an old Samsung DVD-/+RW drive. But, I'm absolutely awful at reverse engineering, and I've never done assembly on 8051 or ARM or many of the ISAs that are prominent in optical drive controllers, so it was very much an uphill battle.

But, I still think it's likely a reasonable approach to enabling better drive support for a lot of dumping tasks. It will require someone with a nice mix of RE and optical disc knowledge, plus a lot of free time, and I don't think there are many folks around who meet all of those criteria, though.

Did the pins actually break, or just bend? Some IDE-to-USB adapters have a narrow IDE connector, which makes it possible to accidentally plug the connector in shifted by one column of pins to the left or right. The result is that the two pins in the leftmost or rightmost column of the connector get bent. In that case, though, you can typically bend them back without breaking anything.

user7 wrote:

Actually in hindsight, I'm getting dcdumper with ICE.exe - for which we don't have source code (thus I can't properly fix my System 2 dump http://forum.redump.org/topic/41632/dc- … ice-error/ )

Ahh, that's unfortunate.

If it's any help, the source for the version of DiscImageCreator referenced in that thread (the version that still had the GD-ROM decoding / splitting functionality) is available at http://www.mediafire.com/file/ro6zxax54 … 07.7z/file. That 7z includes both the source and the binary.

I'm not sure how ICE's functionality compares to that version of DIC's, but, at a quick glance, it at least looks like the DIC code is capable of doing the descrambling, locating the TOC within the descrambled data, and splitting based on the located TOC. I know ICE also parses IP.BIN, which it looks like DIC doesn't do, but it might not be too much work to replicate the remaining functionality of ICE based on the relevant parts of the DIC source (in outputGD.cpp).
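For anyone curious, the descrambling step itself is just an XOR against the ECMA-130 scrambling sequence, applied to every sector that carries a data sync header. A minimal Python sketch (this is not DIC's actual code, just the same idea):

SYNC = bytes([0x00] + [0xFF] * 10 + [0x00])

def make_scramble_table(length=2340):
    # ECMA-130 Annex B: 15-bit LFSR, polynomial x^15 + x + 1, seed 1,
    # output taken LSB-first. Covers bytes 12..2351 of a 2352-byte sector.
    table, lfsr = bytearray(length), 1
    for i in range(length):
        b = 0
        for bit in range(8):
            b |= (lfsr & 1) << bit
            lfsr = (lfsr >> 1) | (((lfsr ^ (lfsr >> 1)) & 1) << 14)
        table[i] = b
    return bytes(table)

def descramble_sector(sector: bytes, table=make_scramble_table()) -> bytes:
    # Sync bytes (0..11) are never scrambled; everything after gets XORed.
    if sector[:12] != SYNC:
        return sector  # no sync: treat as audio and pass through untouched
    return sector[:12] + bytes(a ^ b for a, b in zip(sector[12:], table))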

ehw wrote:

Sorry for the bump but does anyone still have a copy of the source code for this program? It seems to have been lost to time...

user7 wrote:

I wish. I have tried to reach themabus in the past, no luck.

I made a post some time back asking for the source. sarami was kind enough to share it. The post is at http://forum.redump.org/topic/40014/sou … -dcdumper/.

As of when I'm posting this, the Mediafire links that sarami posted then are still working.

I looked at doing some improvements, including checking for ECC/EDC errors as the dump occurs and retrying as necessary (i.e., instead of just doing a check at the end of each section, retry every sector that has any ECC/EDC errors as it's read), but I ultimately found that my particular drive sucked too much to ever make any use of it, and I gave up.
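In case it's useful to anyone picking this up: the per-sector check itself is cheap. For a (descrambled) Mode 1 sector, the 32-bit EDC stored at offset 2064 covers bytes 0..2063, so each sector can be validated the moment it comes off the drive. A rough Python sketch, where read_sector() is a hypothetical stand-in for however sectors are actually being fetched (note that Mode 2 forms use different EDC coverage):

# EDC CRC for Mode 1 sectors (same table-driven CRC as in the ecm sources).
EDC_POLY = 0xD8018001
EDC_TABLE = []
for i in range(256):
    v = i
    for _ in range(8):
        v = (v >> 1) ^ (EDC_POLY if v & 1 else 0)
    EDC_TABLE.append(v)

def edc_ok(sector: bytes) -> bool:
    # Mode 1: EDC covers bytes 0..2063, stored little-endian at 2064..2067.
    edc = 0
    for byte in sector[:2064]:
        edc = EDC_TABLE[(edc ^ byte) & 0xFF] ^ (edc >> 8)
    return edc == int.from_bytes(sector[2064:2068], "little")

def dump_with_retries(read_sector, lbas, max_retries=5):
    # read_sector(lba) -> 2352 descrambled bytes is a hypothetical callback.
    image = bytearray()
    for lba in lbas:
        sector = read_sector(lba)
        for _ in range(max_retries):
            if edc_ok(sector):
                break
            sector = read_sector(lba)  # retry immediately, not at section end
        image += sector
    return bytes(image)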

There's definitely a lot of room for these types of improvements that could make DC dumping a lot easier, though.

From what I've seen online, a majority (or maybe all?) of these drives were OEM drives for various manufacturers. If you see one that is the correct model number, but is an HP or Dell OEM, it should be fine to use. However, make sure you read the whole model number. Some OEMs will say something like TS-H353 Rev A in one spot, making it sound like an H353A drive, but then, in another spot, it will explicitly say TS-H353B. The rev number is not part of the model number, so the example drive would be a TS-H353B drive, not a TS-H353A drive.

I don't know any specific models right off the bat that are generally believed to be great with damaged discs, but you might take a look at cdrinfo.pl. Over the years, they did some pretty extensive tests of how different drives perform on various types of damaged media. Most of the content is in Polish (which I don't speak), but Google Translate does a reasonable job with it. Their benchmarks are how I learned that my Plextor drives, which are awful at reading damaged discs, are actually probably performing normally... (at https://www.cdrinfo.pl/artykuly/Plextor … rona14.php) And their summary of the BenQ 5224W indicates that they found it to be good at error correction (at least for audio discs -- I believe they're saying it's not good for data, but I couldn't get that chart to load and the translation isn't quite clear) (https://www.cdrinfo.pl/artykuly/Benq-5224W/strona8.php), so that matches up with what you were told.

I can say from my own experiences that, when I have a damaged disc, I have pretty good luck with LG BD-RE drives (like the WH14NS40 or similar). Optiarc and TSST/Samsung DVD+/-RW drives have also done quite well.

You've probably already observed this yourself, but it seems to be the case that drives perform differently depending on the type of damage present on the disc (or maybe even the specific manufacturer of the disc or some other variables?). Some discs my LG BD-RE can read fine, but the TSST/Samsung drives struggle, and for others, the TSST is best, or the Optiarc. Basically, if you're dealing with damaged discs, it seems like the best plan of attack is just to get a bunch of drives and start trying them all. IsoBuster has some functionality that makes this pretty convenient, where you can save an incomplete image (i.e., one where not all sectors could be read) and then try to complete the image using different drives.

Pokechu22 wrote:

In case it's helpful, I wrote a Ghidra SLEIGH processor spec for the MN102 processor (matsushita/panasonic) a while ago.  I'm not sure if anything recent/readily available still uses the MN102 though.

Thanks for the info! Last time I just picked a random old drive I had lying around to play with, and I'll probably do the same this time when I take a look at it. This gives me one more option if it turns out to be running an odd chipset. IIRC, the last one I looked at was an Intel 8051 ISA firmware, but that was, again, just because that's what I happened to have lying around.

As a general note for this thread, a potentially useful document on firmware reversing is available on the Internet Archive (https://web.archive.org/web/20101225174 … rmware.pdf). It's specifically about reversing a DVD firmware to make it region free, but a lot of what it describes would need to be done for a project like this as well. It has specific notes about locating the code within the firmware that handles A3 and A4 operations. While those aren't operations we'd likely be concerned about, locating them would provide insight into the general structure of the code and a starting point for locating and modifying code that handles other operations (like 0xBE, potentially).


It would be nice if we could ultimately end up with CD images like the ones Near described. Near's article about the CD format hypothesizes an image format that simply stores all the lead-in sectors (including subchannels), giving a single-file raw image that embeds the TOC the same way the real disc does. Such an image format would handle multisession discs easily.

Of course, it'd be really nice if we could hack some model of drive to just give us the CIRC data (or some other low level data from every sector) directly. I guess it would be akin to the Greaseweazle and similar hardware for dumping raw flux from floppy disks.

Agent47 wrote:

If you are going to dump CDs, you should invest in a CD-only Plextor. They have the best read rates. For DVDs, there is zero reason to use a Plextor; any DVD drive can dump those discs properly. DVD PX models are trash, and the laser is known for wearing out. If you plan on dumping CDs, they should not be recommended, ever, yet people still preach buying 716 or 760 drives for dumping CDs. No, full stop.

I agree that the lifetime of the CD-based units is probably better on average. In fact, CD-based Plextors feel damn-near indestructible. However, the CD-based units also have absolutely awful error correction when reading data discs. I recall the Premium was tied for last place in data CD error correction back when CDRinfo.pl tested it. (I believe it was tied for last with another Plextor drive.)

I haven't seen a comparison with error correction on Plextor DVD drives, but, anecdotally, it seems like my PX-760A can read poor condition discs better than my Premium can. The cross-flashed LG can read damaged discs a whole lot better than either one, but it has other drawbacks.

In short, all the drives suck for different reasons, basically.

superg wrote:

Scrambling is simple math involving a shift register; it's a trivial implementation.
A delta is ineffective here; it will be as big as the data track (data is scrambled, audio is unscrambled).
The most annoying thing in this conversion process is actually knowing which sector is audio and which is data. The .scm doesn't have that info, so you will have to extract it from the TOC to be absolutely sure (you can go by the data sync header, but there is no guarantee there won't be such a sequence in an audio sector).

I didn't mean a delta between unscrambled and scrambled. What I meant is that it's not necessarily guaranteed that a re-scrambled data track will match the original scrambled one read from the disc. Specifically, error sectors in the descrambled image are replaced with dummy data, so, when those sectors are re-scrambled, you won't get the original scrambled data back.

Because of this, I was thinking of using deltas between the original scrambled data and the rescrambled data. For most any unprotected disc, the files will match exactly. For discs with errors, the rescrambled data for error sectors won't match the data returned by the drive, so the delta will just have to store the differences for these sectors.

I was thinking this would be a better solution than just storing both the unscrambled and scrambled data, as it would be much smaller. For many discs, it would basically eliminate the storage of the scrambled data altogether, since you can recreate it. For other discs, it would require just storing a delta large enough to reconstruct those sectors that don't rescramble to the original data, which, for most discs, is only a few hundred sectors at most.

Regarding the TOC concerns, I was thinking of something even simpler than that. I don't necessarily care whether I know exactly which sectors are data according to the TOC. I just want a compact way to recreate the original scrambled data from the drive while storing the more useful unscrambled image for day-to-day use.

It should be possible to just go through the unscrambled image looking for the 00 FF ... FF 00 sync pattern at each sector offset and do the XOR with the scrambling table for all the sectors found that way. If there's no sync, just write out the sector as-is. Then, after having done that, compare this rescrambled image with the original scrambled image that DIC read in from the drive.

If the images match, nothing new needs to be stored except a note that the scrambled image can be trivially recreated from the unscrambled one, and we can delete the scrambled image. If they don't match, make a delta between the rescrambled image and the original scrambled one read in from the drive, and then delete the original. We can then recreate the original in the future by rescrambling the unscrambled image and applying the delta.
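Here's a rough Python sketch of that pipeline, to make it concrete. The table generation is the standard ECMA-130 LFSR; the filenames and the choice of xdelta3 for the delta step are just illustrative assumptions:

SYNC = bytes([0x00] + [0xFF] * 10 + [0x00])

def make_scramble_table(length=2340):
    # ECMA-130 Annex B LFSR: x^15 + x + 1, seed 1, LSB-first output.
    table, lfsr = bytearray(length), 1
    for i in range(length):
        b = 0
        for bit in range(8):
            b |= (lfsr & 1) << bit
            lfsr = (lfsr >> 1) | (((lfsr ^ (lfsr >> 1)) & 1) << 14)
        table[i] = b
    return bytes(table)

def rescramble(unscrambled: bytes) -> bytes:
    # Walk the image sector by sector: XOR bytes 12..2351 of anything that
    # starts with the data sync pattern; pass everything else through as-is.
    table = make_scramble_table()
    out = bytearray(unscrambled)
    for off in range(0, len(out) - 2351, 2352):
        if out[off:off + 12] == SYNC:
            for i in range(2340):
                out[off + 12 + i] ^= table[i]
    return bytes(out)

# Hypothetical usage (filenames made up):
# rescrambled = rescramble(open("disc.img", "rb").read())
# if rescrambled == open("disc.scm", "rb").read():
#     record that the .scm is reproducible, then delete it
# else:
#     write rescrambled out and store only a delta, e.g.:
#     xdelta3 -e -s rescrambled.bin disc.scm disc.delta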

My real hope, though, was that Aaru or some other package would have the option to simultaneously store both unscrambled and scrambled using some kind of internal representation like I've described in order to save space. I don't think such a package exists, though.

MrPepka wrote:

I did some research on this topic, and in general a few projects could help with dismantling the firmware:
ala42's program MCSE - http://ala42.cdfreaks.com/MCSE/
scanlime's project coastermelt - https://github.com/scanlime/coastermelt
Repository for firmware's, patcher's etc for CD/DVD drives - http://forum.rpc1.org/
Devilsclaw's project Flasher - https://github.com/devilsclaw/flasher
The first program is only for removing the region lock, but maybe disassembling it would help you understand how it works (after all, it accesses the CD/DVD drive firmware directly). The second project is an attempt to reverse engineer the firmware of Samsung CD/DVD drives; the project is unmaintained, but its resources were left behind, so maybe there would be some use for them. The third is a site with various firmwares, patchers, etc. for CD/DVD drives, so disassembling at least those patchers might help here. And the fourth project is also an attempt to take apart CD/DVD drive firmware, this time for LG drives. It (like the others) is no longer being developed either, but its resources and source code are still on GitHub, so maybe it could also help with something?

I haven't thought about this for a while, but I'm looking for an excuse to get going on a nice RE project, so maybe I'll look into it again as I have time. Full disclosure, though: I'm not remotely good at RE. At all. I haven't even done any serious assembly language work in years.

In any case, my suspicion is that there are definitely people around who have done enough reversing on CD/DVD drive firmwares to do something like this in a jiffy, but a lot of those people have probably long since moved on to newer projects.

But, at least for older drives that don't bother with any kind of encryption / signature for the firmware, it should be possible to pretty easily modify any behavior that's present in the firmware. One issue is going to be that some behavior is surely happening at a lower level. For example, I suspect the actual descrambling on most drives would be done in hardware just using a scrambling table and an XOR. However, something like blocking 0xBE from working on data discs is probably (?) handled at the firmware level. Thus, while maybe it wouldn't be possible to directly play with the descrambling code, it might be possible just to bypass the check that disallows 0xBE on data discs.

It may even be the case that someone out there has a debug firmware that allows manipulating memory values in the drive. If that were the case, it maybe would be as simple as just using such a firmware and then sending a debug command to alter the memory region holding the TOC after a disc was inserted. I.e., change it so that the drive thinks the data track is an audio track.

I would imagine at least some parts of this are doable without huge effort for someone who knows what they're doing. Unfortunately, that someone is not me. But, I'm willing to take another look at it, especially if someone gets some leads.

F1ReB4LL wrote:

If you're using one of the latest DIC versions, it has both scrambled and descrambled image checksums in the "_disc.txt" file. You can scramble the descrambled image back and verify its checksum; if it matches, there's no reason to store the scrambled file itself.

I've thought about doing it this way and then just storing deltas for when the scrambled data doesn't match exactly (since the deltas would allow creation of the scrambled data from the unscrambled data and would typically be much smaller than the entire scrambled image). That's probably what I'll end up doing, but I also may look into adding metadata to an archival format like Aaru if it's possible.

I wanted to make sure there wasn't a better way to do it using some existing, standardized approach before I came up with my own solution. It sounds like there's not, unfortunately.

For most CDs, unscrambled images are more useful on a day-to-day basis than scrambled images. Fortunately, an unscrambled image can be used to exactly or nearly exactly generate the scrambled data, with maybe just a few KB of differences from the actual scrambled data returned by the drive. (In the case of intentional errors or incorrectly mastered sectors that were replaced with dummy sectors during the initial descrambling step, the re-scrambled data will differ from the original scrambled data returned by the drive. For sectors without any errors, though, the re-scrambled data should exactly match the scrambled data received from the drive.)

For archival purposes, I'm storing both unscrambled and scrambled images for all the CDs I dump. This ends up taking quite a lot of storage, because each disc is stored both fully scrambled and fully unscrambled. What I'd ideally like to do is keep just the unscrambled images and, alongside them, a difference file indicating which (if any) bytes differ when the unscrambled data is used to regenerate the scrambled data. This would yield tremendous space savings: the scrambled data would remain fully reconstructible, but only the bytes that cannot be regenerated from the unscrambled image would actually be stored.
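As a sketch of how small that difference file can be: something as simple as a list of (sector index, original 2352 bytes) records would cover it, since every sector not listed rescrambles bit-perfectly. Rough Python, with no attempt at a real container format (the record layout here is just my own made-up encoding):

import struct

SECTOR = 2352

def build_delta(rescrambled: bytes, original_scm: bytes) -> bytes:
    # Keep only the sectors where re-scrambling didn't reproduce the drive's
    # original scrambled data (error sectors, intentional bad sectors, etc.).
    records = []
    for idx in range(len(original_scm) // SECTOR):
        a = rescrambled[idx * SECTOR:(idx + 1) * SECTOR]
        b = original_scm[idx * SECTOR:(idx + 1) * SECTOR]
        if a != b:
            records.append(struct.pack("<I", idx) + b)
    return b"".join(records)

def apply_delta(rescrambled: bytes, delta: bytes) -> bytes:
    # Patch the recorded sectors back in to recover the original scrambled image.
    out = bytearray(rescrambled)
    rec_len = 4 + SECTOR
    for off in range(0, len(delta), rec_len):
        idx = struct.unpack_from("<I", delta, off)[0]
        out[idx * SECTOR:(idx + 1) * SECTOR] = delta[off + 4:off + rec_len]
    return bytes(out)

For a typical disc with at most a few hundred bad sectors, that's well under a megabyte sitting next to a ~700 MB image.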

Is there any existing software / image format that enables this type of storage? It seems like it would potentially be a nice feature for Aaru, though I don't believe it currently supports this. I've thought about maybe writing a utility to do it myself, but it'd feel much tidier if it tied into something that the community was already using for archival. Maybe if Aaru can't do it natively, it would be possible to add some custom metadata field to Aaru images that encodes any differing bytes?

Does anyone have any thoughts / insight? My motivation is that some discs seem to embed data inside of error sectors (e.g., sarami has pointed out previously that some disc has lines from the poem Jabberwocky stored in the erroneous sectors), and this data is thrown away when the descrambled image is built. I'd like to keep that data.

Anamon wrote:

I might try the laser replacement thing, if just for kicks. Redump is really the only reason I got this drive, so if I'm not going to be able to do dumps with it, it's just going to take up space.

It looks like the laser in the PX-712 was used in various drives from NEC (and possibly Lite-On as well). If you've got any other drives kicking around, you might check the PDFs linked from another thread (http://forum.redump.org/topic/42360/hel … -a-px755a/) to see if any of those models have the same / compatible OPU.

Regarding cross-flashing an LG / ASUS drive, I have a couple notes that may or may not be helpful. One, it seems like the cross-flashing process is pretty reliable at this point, assuming you verify that the drive you have has the proper model / SVC code / manufacturing date. The old-style cross-flashing involved flashing from DOS or using a hacked LG / ASUS flasher, but, most recently, I've cross-flashed a couple drives using a frontend to the flasher that comes included with MakeMKV, and I've yet to have any issues with it. It's certainly not without risk, though. If something happens and the drive calibration data gets wiped, you might be out of luck. Two, even a cross-flashed LG / ASUS probably won't be able to dump all the discs the Plextor would be able to if it's made working again. I've been using my BW-16D1HT for most CDs that don't have audio tracks (to avoid wearing out the Plextor), and, every once in a while, I still have to use the Plextor for such discs. The BW-16D1HT has to use a trick to read the lead-out sectors from the drive cache, but certain discs exhibit an issue where the lead-out sector isn't present in the cache, and DiscImageCreator is unable to dump such discs.

So, even if you cross-flash your LG, you may find you need a Plextor for some discs. Thus, even if you opt to cross-flash the LG drive for dumping, I think you'd be wise to be on the lookout for replacement parts / a replacement drive for the Plextor.

littlebalup wrote:

While awaiting the new OPU from China, I found a working NEC ND-3550A donor with an SF-DS10N in it. I swapped the OPU and the PX-755 is back to life. :)
I performed the Self-Test Diagnostics successfully. However, it's not perfect, as it fails to read some DVDs. No issues with CDs so far. Some laser adjustments are probably necessary, but I don't know how.

Glad to hear it worked! I think one of the best arguments for owning a PX-755 or PX-760 over some of the Plextors is that OPUs are actually available for the 755 and 760 (either in the form of donor drives or on AliExpress). That same MyCE thread I linked earlier has a discussion about the inability to find OPUs compatible with the PX-716, so, unlike the 755 and 760, the 716 is basically trash if the OPU dies. A shame!


littlebalup wrote:

Anyway, I found a good source of NEC / Optiarc or LiteOn donor drives for the PX-755/760 to share (credits to Blackened2687):
https%3A%2F%2Fforum.cdrinfo.pl%2Fattach … e_v1.3.pdf
https%3A%2F%2Fforum.cdrinfo.pl%2Fattach … e_v1.3.pdf

Something happened to those links. I believe the correct ones are:
https://forum.cdrinfo.pl/attachments/f1 … e_v1.3.pdf
and
https://forum.cdrinfo.pl/attachments/f1 … e_v1.3.pdf

Yes, CDRinfo.pl has a great deal of useful information for optical drives. I've found lots of good info on crossflashing, repairing, etc. Lots of great stuff on MyCE too. Both of those places have been home to a number of ODD experts over the years, so there's everything from low level discussions of dumping raw CIRC data using a modified CD player to higher level stuff like help with burning software.

Larsenv wrote:

Yeah, that's definitely a sign that the laser died. Sorry about that. I've had 3 Plextors in the past do that to me...

Some Plextors have replacement lasers available on AliExpress, but I don't think there's one for the 755. There isn't one for the 760, so I don't think there's one for the 755 either.

Has the SF-DS10 supply dried up? I haven't done a laser on one of these drives, but I recall a thread on MyCE from maybe 5-6 years ago where someone discussed replacing the SF-DS10K OPU with SF-DS10L and SF-DS10N OPUs.

Actually, I guess maybe they were just sourcing those parts from other drives, and not purchasing new OPUs. It looks like the thread is here at https://club.myce.com/t/replacement-for … a/314376/4, and they mention the L and N variants are in various Lite-On, Sanyo, and NEC drives.


KailoKyra wrote:

Follow-up on that: I managed to grab another copy, same ringcode and all... and this time it worked without issues.
So it's likely something bad is going on with the discs themselves, even if they don't look visually bad.

Out of curiosity, try scanning the bad disc in a flatbed scanner. My near-perfect copy of JSRF would not read correctly in either my Xbox 360 or my Kreon drive, and I couldn't make sense of it. When I scanned the disc on the flatbed, though, there were tons of tiny black spots that I couldn't easily see with the naked eye.

I think maybe one of the Xbox disc manufacturers had an issue with some batches of discs, and they're oxidizing or otherwise rotting away. I've seen reports of other perfectly good looking JSRF discs having the same problem, and I'm sure there are others besides the JSRF discs.

I'm pretty sure my JSRF disc did not have this problem when I purchased it new years ago. I vaguely remember having a modded Xbox extract the disc shortly after purchase, so it was readable at that time, and it's been in the case ever since. I didn't try dumping it properly until I finally got a Kreon drive this year, though I noticed last year that my 360 wouldn't play the disc.


user7 wrote:

Old and new DIC builds, I'm getting an error with a particular disc in good-enough condition. It's a later Xbox (OG) kiosk disc.

I can't recall if this was the same error that my drives threw, but I've had a couple of Xbox discs, in great condition by all appearances, that wouldn't read. I suspect there was some manufacturing error that might have resulted in some discs having a short life. I know that at least one of the discs worked when it was new, but it no longer does. I also noticed that, when I scanned the disc in my flatbed scanner, the scan had lots of tiny little black spots, as if maybe the reflective layer has oxidized in some spots?


I mentioned this in another thread, but it might be more of interest to those in this thread:

I believe the Vinpower variant of the WH16NS48 (SVC Code NS40) firmware (version 1.D3) may be a good firmware for scrambled dumping on SVC Code NS40 drives. (Drives that, AFAIK, generally can't run the BW-16D1HT firmware, because it's designed for drives with SVC Code NS50.) This firmware is somewhat popular in the disc burning community because, unlike other firmwares, it supports quality scans on BD, BD-R, and BD-RE discs. As a bonus, it looks like it also supports scrambled reads via 0xBE and supports 0xF1 for reading from the cache. DIC doesn't currently have the drive in its database of 0xF1-capable drives, but I did a build of DIC with this drive added to the 0xF1-capable list and was able to dump a 0-offset disc (one that requires /mr in order to fetch lead-out sectors), and the dump matched the one I previously made with my BW-16D1HT drive.

Maybe this is common knowledge, but I thought I'd mention it, because I didn't see this firmware mentioned in the Wiki and DIC doesn't recognize it. In fact, DIC doesn't even know the offset (+6 samples), because the WH16NS48 isn't in driveOffset.txt.

RibShark wrote:

F1h is not supported on 3.10; the command will return an error. It was disabled in that firmware.

I guess that explains why it's only listed as supported on 3.00 and 3.02. In any case, I had only updated to 3.10 as an experiment to see if a protected disc that failed on 3.02 would dump on 3.10. It didn't. So, 3.02 it is.

On a somewhat related note, has anyone done any experiments to see if any of the undumpable discs (due to lead-out issues) are dumpable on 3.00 vs. 3.02? I assume it won't have any impact, but I guess it's possible that the cache structure was changed between the two in a way that might somehow alter when the /mr option fails...

Edit: This got me going on a side project. It looks like the Vinpower variant of the WH16NS48 firmware (v. 1.D3), a firmware variant that's notable because it adds the ability to do Blu-ray quality scanning on SVC code NS40 LG drives, also supports 0xF1. Once I patched DIC to flag it as an 0xF1-capable model, it worked fine (at least for this test disc):

StartTime: 2021-11-17T17:44:46-0600
CurrentDriveSize
        Total: 255402758144 bytes
         Used:  95840133120 bytes
        --------------------------
        Space: 159562625024 bytes
         => There is enough disk space for dumping
Set the drive speed: 1411KB/sec
This drive doesn't define in driveOffset.txt
Please input drive offset(Samples): 6
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 0]
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 1]
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 2]
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 4]
LBA[297678, 0x48ace]: [F:ReadCDForCheckingReadInOut][L:801]
        Opcode: 0xbe
        ScsiStatus: 0x02 = CHECK_CONDITION
        SenseData Key-Asc-Ascq: 05-21-00 = ILLEGAL_REQUEST - LOGICAL BLOCK ADDRESS OUT OF RANGE
lpCmd: be, 04, 00, 04, 8a, ce, 00, 00, 01, f8, 00, 00
dwBufSize: 2352
This drive can't read the lead-out
But 0xF1 opcode is supported
========== Reading 297657 - 297677 INTO CACHE ==========
01 Cache LBA 297657, SubQ Trk 01, AMSF 66:10:57
02 Cache LBA 297658, SubQ Trk 01, AMSF 66:10:58
03 Cache LBA 297659, SubQ Trk 01, AMSF 66:10:59
04 Cache LBA 297660, SubQ Trk 01, AMSF 66:10:60
05 Cache LBA 297661, SubQ Trk 01, AMSF 66:10:61
06 Cache LBA 297662, SubQ Trk 01, AMSF 66:10:62
07 Cache LBA 297663, SubQ Trk 01, AMSF 66:10:63
08 Cache LBA 297664, SubQ Trk 01, AMSF 66:10:64
09 Cache LBA 297665, SubQ Trk 01, AMSF 66:10:65
10 Cache LBA 297666, SubQ Trk 01, AMSF 66:10:66
11 Cache LBA 297667, SubQ Trk 01, AMSF 66:10:67
12 Cache LBA 297668, SubQ Trk 01, AMSF 66:10:68
13 Cache LBA 297669, SubQ Trk 01, AMSF 66:10:69
14 Cache LBA 297670, SubQ Trk 01, AMSF 66:10:70
15 Cache LBA 297671, SubQ Trk 01, AMSF 66:10:71
16 Cache LBA 297672, SubQ Trk 01, AMSF 66:10:72
17 Cache LBA 297673, SubQ Trk 01, AMSF 66:10:73
18 Cache LBA 297674, SubQ Trk 01, AMSF 66:10:74
19 Cache LBA 297675, SubQ Trk 01, AMSF 66:11:00
20 Cache LBA 297676, SubQ Trk 01, AMSF 66:11:01
21 Cache LBA 297677, SubQ Trk 01, AMSF 66:11:02
22 Cache LBA 297678, SubQ Trk aa, AMSF 66:11:03 [Lead-out]
23 Cache LBA 297679, SubQ Trk aa, AMSF 66:11:04 [Lead-out]
24 Cache LBA 297680, SubQ Trk aa, AMSF 66:11:05 [Lead-out]
25 Cache LBA 297681, SubQ Trk aa, AMSF 66:11:06 [Lead-out]
26 Cache LBA 297682, SubQ Trk aa, AMSF 66:11:07 [Lead-out]
27 Cache LBA 297683, SubQ Trk aa, AMSF 66:11:08 [Lead-out]
28 Cache LBA 297684, SubQ Trk aa, AMSF 66:11:09 [Lead-out]
29 Cache LBA 297685, SubQ Trk aa, AMSF 66:11:10 [Lead-out]
30 Cache LBA 297686, SubQ Trk aa, AMSF 66:11:11 [Lead-out]
31 Cache LBA 297687, SubQ Trk aa, AMSF 66:11:12 [Lead-out]
-----------------------------------------------------
Cache SIZE: 31 (This size is different every running)
-----------------------------------------------------

Neat.

Has anyone tried the Vinpower version of the WH16NS58 firmware? If it supports 0xF1, I might switch my svc code NS50 drive over to that firmware so that it can do both Blu-ray scanning and scrambled rips.

Edit2: The Vinpower firmware for svc code NS50 drives (WH16NS58 v. 1.V5) does not support 0xF1. It just throws check conditions for "ILLEGAL_REQUEST - INVALID FIELD IN CDB." Bummer -- can't have a firmware on the newer version of the drives that will do both 0xF1 and quality scans.

bikerspade wrote:

I've had the same problem with several discs, mostly audio CDs, but one of them I encountered today was a single-track data CD-ROM disc (an old clip art CD-ROM from 1995). Even if I have it retry the cache a thousand times, DIC is unable to get enough from the cache for it to proceed.

I've had that happen before, but I also see this from some discs, where it doesn't even go through the process of trying to get a different buffer size:

.\Programs\Creator\DiscImageCreator.exe cd F "ISO\test\test.bin" 8 /c2 5000 /ns /sf /mr
AppVersion
        x86, AnsiBuild, 20210701T212154
/c2 val2 was omitted. set [0]
/sf val was omitted. set [60]
/mr val was omitted. set [50]
CurrentDirectory
        D:\mpf
WorkingPath
         Argument: ISO\test\test.bin
         FullPath: D:\mpf\ISO\test\test.bin
            Drive: D:
        Directory: \mpf\ISO\test\
         Filename: test
        Extension: .bin
StartTime: 2021-11-17T14:47:00-0600
Set the drive speed: 1411KB/sec
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 0]
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 1]
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 2]
This drive can read data sectors at scrambled state [OpCode: 0xbe, C2flag: 1, SubCode: 4]
LBA[001686, 0x00696]: [F:ReadCDForCheckingReadInOut][L:801]
        Opcode: 0xbe
        ScsiStatus: 0x02 = CHECK_CONDITION
        SenseData Key-Asc-Ascq: 05-21-00 = ILLEGAL_REQUEST - LOGICAL BLOCK ADDRESS OUT OF RANGE
lpCmd: be, 04, 00, 00, 06, 96, 00, 00, 01, f8, 00, 00
dwBufSize: 2352
This drive can't read the lead-out
EndTime: 2021-11-17T14:47:01-0600

I wonder if it's 0 offset discs that do it? I wish I had been keeping a list of which ones do it, but I haven't, so I can't easily go back and check. I know this one is 0 offset according to the Plextor, though, so there's at least one 0 offset disc that does it.

Edit: It looks like it's a problem with the firmware I'm using. I updated to 3.10 a while back, after sarami mentioned ripping with that version while I was trying to troubleshoot a disc that wouldn't dump. Looking at the DIC source code, though, it only flags drives running 3.00 and 3.02 as capable of reading the lead-out via 0xF1. I'll probably just downgrade my drive back to 3.02, as I'd rather not modify DIC.

Edit2: I haven't tried the 0 offset disc again, but it looks like /mr works again after the downgrade.

I've been doing experiments switching between my Plextors and my WH14NS40 (crossflashed to a BW-16D1HT) for various rips. One thing that periodically comes up is that the BW-16D1HT is unable to dump some discs due to a "this drive can't read into lead out" message from DIC. I thought the /mr switch was meant to resolve this error by fetching the lead-out sectors from the drive cache? However, for this zero-offset disc I just recently tried to dump, even /mr wouldn't allow the disc to be dumped due to the "can't read into lead out" error.

Are there some other steps I'm supposed to be doing to dump such discs?

I've had awful luck dumping GD-ROMs using PC drives. I've got two different recommended models, but they both seem to be unable to read sectors toward both the beginning and end of the high density area. Instead, I've been dumping using a Dreamcast. However, I've got a couple of questions that have occurred to me.

First, I'm confused about why the TOC reported by the Dreamcast seems to differ for some tracks compared to what is in the database. This typically seems to happen when there's an audio track followed by a data track in the same session. For example, for the Japanese release of Guilty Gear X, the database has the last audio track (track 37) as 10405 sectors long, but the Dreamcast seems to report the track as 10330 sectors long for my disc, a difference of exactly 75 sectors (1 second). (And I don't think it's a pressing variation -- I've seen this discrepancy with nearly every multi-track disc I've checked.) I'm assuming it maybe has something to do with the pregap, but if we add on a 3-second pregap for the subsequent track, we get (10330 + (75*3)) = 10555, which makes the track too long. Alternatively, if we add the 2-second pregap for track 37, we get (10330 + (75*2)) = 10480, which is also too long. Is there a way, short of just looking up what's already in the database, to determine the proper track length?

Second, and maybe this ties into the first question, is there any benefit to ripping subchannel data when extracting GD-ROMs? I've been playing with the source code for the GD-Ripper utility that comes with DreamShell. By default, that utility rips to ISO (2048 bytes/sector) and doesn't extract any subchannel data. However, it's a pretty trivial modification to make it extract raw 2352-byte sectors plus 96 bytes of subchannel data. Is there some index data present in the subcodes that can be used to determine actual track boundaries? I'm still learning about subchannels, and I'm not even sure how the hell the subchannel data returned by the Dreamcast is packed (it didn't look like a standard RAW format like we'd get from the MMC command), but I'm curious if this data might be useful to anyone for anything.
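To partially answer my own second question: if the Dreamcast data turns out to be (or can be repacked as) standard interleaved raw PW subchannel, then the Q channel carries exactly the track / index data needed for boundaries; INDEX 00 regions are the pregaps. A Python sketch of the deinterleave plus pulling track / index out of a Q frame (this assumes MMC-style interleaved packing, which, as I said, may not be what the DC actually returns):

def deinterleave_pw(raw96: bytes):
    # Raw interleaved subchannel: 96 bytes per sector, where the MSB of each
    # byte is P, the next bit is Q, and so on down to W in the LSB.
    # Returns 8 channels (P..W) of 12 bytes each.
    channels = [bytearray(12) for _ in range(8)]
    for i, byte in enumerate(raw96):
        for ch in range(8):
            bit = (byte >> (7 - ch)) & 1
            channels[ch][i >> 3] |= bit << (7 - (i & 7))
    return [bytes(c) for c in channels]

def parse_q(q: bytes):
    # Q frame layout: CTRL/ADR, TNO, INDEX, MIN, SEC, FRAME, ZERO,
    # AMIN, ASEC, AFRAME, CRC16 (2 bytes). ADR 1 frames use BCD values.
    if (q[0] & 0x0F) != 1:
        return None  # not a position (ADR 1) frame
    bcd = lambda x: (x >> 4) * 10 + (x & 0x0F)
    return {"track": bcd(q[1]), "index": bcd(q[2]),
            "amsf": (bcd(q[7]), bcd(q[8]), bcd(q[9]))}

# q_info = parse_q(deinterleave_pw(sub)[1])  # channel index 1 is Q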

Right now, I'm ripping with SD Rip 1.1, but I may go back to using my modified GD-Ripper if there's any use in it.

For the DVD-capable Plextors, my only experiences are with the PX-716 and PX-760 drives. I've had pretty bad luck with both, though admittedly we're talking small sample sizes. In total, I've purchased three PX-716 drives, two of which don't work. Those two might technically work, but they clearly have major issues with the OPU and only work when the stars align (and even then usually not for long enough to read an entire disc). I have another PX-716 that works perfectly fine. I've purchased two PX-760 drives. One of those was supposedly brand new when I purchased it, though it had been sitting on a shelf for ~15 years. That drive is unable to read DVDs, but its CD reading works fine. That particular failure worked out well for me, as I was really only interested in dumping CDs with the Plextor anyway. The other PX-760 works fine for both CDs and DVDs.

It's too bad nobody has been able to locate a replacement for the OPU in the PX-716. I recall a discussion ages ago over on MyCE (probably from back when it was CDFreaks) about the PX-716. In that discussion, someone who was doing refurbishing had a stack of PX-716s with dead OPUs, but it was reportedly impossible to find any part that could be cross-referenced to the original OPU, or any source for the original OPU.

I wonder if it'd be possible to figure out which PX-716 are likely to fail quickly? Maybe they were made in a particular time period or have a particular range of serial numbers? Or maybe they were made in a particular location? I can't recall if PX-716 is one of the models that are sometimes made in Japan or if they're all made elsewhere. In any case, it really sucks that those drives become bricks when the OPUs do fail. I've been favoring my PX-760 drives in no small part because I know I can find replacement OPUs in other drives / (possibly fake) ones on AliExpress if the need arises. If I wear out the OPU in the PX-716, it goes in the trash.

I'm looking forward to any information you have to post. I've noticed from your previous posts that you're obviously digging deep into various issues about disc dumping, and it's been very informative.