26

(55 replies, posted in General discussion)

gigadeath wrote:

The dumps Themabus and I verified together had no such problems at all, and they're full raw dumps. Our dumps matched perfectly, and we live thousands of miles from each other using completely different PCs and CD-rom drives :B
You can see them all in the Mega-CD section.

That is fine - but it doesn't answer my question! WHY dump the raw data if it is not user data?

27

(55 replies, posted in General discussion)

Well, thanks for all the additional clarifications.

I now understand that there is a real risk of missing some audio data at the end of the last track, and I think this is a flaw in Steve Snake's proposed method. It may work fine for 99% of all dumps, but would still produce errors in the 1% of cases you gave examples of.

However, I still have a bad feeling about dumping RAW data (2352) from data tracks of systems (Sega CD, Saturn) which DEFINITELY do not make use of these additional 304 bytes of information. What is the point? Those bytes are used for an "internal" (from the system's point of view) error correction mechanism that lets the drive fix sector errors. The user data (from the system's point of view) is just 2048 bytes.
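
To make the 2048 vs. 2352 distinction concrete, here is a minimal sketch that strips a raw data track dump down to its user data. The sector layout constants are from the standard Mode 1 format; the file names are made-up examples:

    RAW_SECTOR = 2352   # 12 sync + 4 header + 2048 user data + 288 EDC/ECC
    SKIP       = 16     # sync pattern and sector header precede the user data
    USER_DATA  = 2048   # the part the console actually uses

    with open("track01.bin", "rb") as raw, open("track01.iso", "wb") as iso:
        while sector := raw.read(RAW_SECTOR):
            iso.write(sector[SKIP:SKIP + USER_DATA])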

By ripping the RAW data from those data tracks and using it as the basis for the checksumming, you actually INCREASE the chance of producing unreliable dumps, because a 2352-byte raw read gets less error correction than a 2048-byte user-data read (seen from the CD drive's perspective).

No such argument applies to the audio section of a CD, because there the 2352 bytes per sector definitely constitute the user data.

28

(55 replies, posted in General discussion)

Vigi wrote:

There is no such thing as "intelligent checksums". There has to be a standard of how reference checksums are calculated before you can disregard the data offset. We prefer to take both the read and the write offset into account when determining the reference, allowing audio tracks (when saved using the standard) to have identical checksums across different regions/games and not just to look at the data integrity the way you are planning to do.

Yes, there has to be a standard. We propose the following one: compute the checksum only over the audio data
- starting at the first non-zero byte in the audio data section after the data track
- ending at the last non-zero byte in the audio data section before the end of the file

The position of this audio data block within the BIN file differs depending on the drive's audio offset. But the checksum of the audio data block is the same regardless of that offset.
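
A minimal sketch of that rule, with CRC32 standing in for whatever checksum algorithm gets standardized (the function name is made up):

    import zlib

    def trimmed_audio_crc32(audio: bytes) -> int:
        """CRC32 over the bytes between the first and the last non-zero
        byte, so the zero padding that a drive's read offset shifts
        around at either end cannot change the result."""
        first = next((i for i, b in enumerate(audio) if b), None)
        if first is None:
            return zlib.crc32(b"")  # pure silence: nothing to checksum
        last = len(audio) - next(i for i, b in enumerate(reversed(audio)) if b)
        return zlib.crc32(audio[first:last])

    # The same audio shifted by a few bytes of zero padding checks out equal:
    assert trimmed_audio_crc32(b"\x00\x12\x34\x00") == \
           trimmed_audio_crc32(b"\x00\x00\x12\x34")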

In combination with the checksum of the data block, and the CUE sheet which lists the tracks, this allows 100% verification of good (I would even call them perfect according to the Red Book standard) SegaCD game dumps.

Vigi wrote:

It makes me wonder if the benefit of speed will really be that great, because even a minor small scratch on any of your cd's will give you problems dumping and verifying them the 'fast' way.

But why is that? Doesn't that affect your own ripping method just as much? If there really is a bad scratch in the audio data section of the disc, there is NO WAY to reproduce the original bytes in those particular sectors.

In our method, we simply rule that out by requiring that every CD be ripped twice. If the resulting BIN files are identical, that is proof that the drive had no problems reading the audio data and can do it reliably over and over again.
A badly scratched CD (I own one that confirms this) will trigger the drive's internal error correction differently each time, producing a different BIN file with every dump.
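
A sketch of that double-rip check, assuming the two rips are named rip1.bin and rip2.bin:

    import hashlib

    def digest(path: str) -> str:
        """SHA-1 of a whole file, read in 1 MiB chunks."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Two independent rips must match byte for byte; if not, the drive
    # could not read the audio data reliably.
    if digest("rip1.bin") == digest("rip2.bin"):
        print("Rips identical - the drive read the disc consistently.")
    else:
        print("Rips differ - the disc is too damaged for a reliable dump.")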

Vigi wrote:

Sooner or later you will probably end up using EAC after all. Anyway, good luck with your projects.

The problem with EAC is that it can't cope with data tracks or extract them directly, and in the end you get one file per track.
I love the simplicity of having a single BIN file per CD, and that is difficult to achieve with EAC.

I would like to give that "PerfectRip" program a try though - I didn't find it on this site, and Google results were inconclusive.

Vigi wrote:

As for GoodGen, I like No-Intro's dat better, because it's more accurate. Then of course I'm not even talking about MAME and how they want to preserve the Sega Genesis roms (splitting the data into separate files exactly like they are stored on the actual rom chips). Most people also consider this 'pointless' while others don't (see the resemblance?).

All of life is a constant stream of defining goals, weighing methods to achieve them, and adjusting the goals to fit those methods... wink

29

(55 replies, posted in General discussion)

Let's first focus on Sega CD and Saturn games. The structure of those CDs is pretty simple:
- mode 1 data track, only 2048 bytes of data
- zero or more audio tracks
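
As an illustration, a CUE sheet for such a disc could look like this (file name, track count and timings are made-up examples):

    FILE "game.bin" BINARY
      TRACK 01 MODE1/2352
        INDEX 01 00:00:00
      TRACK 02 AUDIO
        INDEX 00 23:40:15
        INDEX 01 23:42:15
      TRACK 03 AUDIO
        INDEX 01 26:51:40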

The conclusion of the discussion on the Inn was the following.

There definitely IS a problem in that every drive has a slight offset when reading the audio tracks. However, this offset is less than one sector of the CD, i.e. less than 1/75th of a second. So, practically, it does not matter at all!

We have therefore concluded that the drive offset is a problem of checksumming methodology, not of ripping methodology.

So you need an INTELLIGENT checksumming tool which calculates the CRC32 for the data track and the audio tracks within the BIN file separately, adjusting dynamically for the drive offset.

In practice, that means several BIN/CUE images of a game may still be floating around, with different checksums for the BIN as a whole. But the idea is that for all those BIN files the "intelligent checksum" is always the same, thus enabling comparability! The BIN/CUE files may differ by an audio offset of a few 1/75ths of a second. That in itself is irrelevant again, because you cannot predict the offset introduced when burning a dump back to a real CD or playing it in a real system.
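
A sketch of what such an "intelligent checksum" could look like, assuming the length of the data track in sectors is taken from the CUE sheet and that leading/trailing zero bytes of the audio portion are skipped to cancel out the drive offset (all names are made up):

    import zlib

    SECTOR = 2352  # raw sector size, for data and audio alike

    def intelligent_checksums(bin_path: str, data_sectors: int):
        """Return separate CRC32s for the data track and the audio
        portion of a single-BIN rip. The audio CRC skips leading and
        trailing zero bytes, so the drive's read offset cannot change it."""
        with open(bin_path, "rb") as f:
            image = f.read()
        boundary = data_sectors * SECTOR
        data, audio = image[:boundary], image[boundary:]
        first = next((i for i, b in enumerate(audio) if b), 0)
        last = len(audio) - next((i for i, b in enumerate(reversed(audio)) if b), 0)
        return zlib.crc32(data), zlib.crc32(audio[first:last])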

Hard errors in reading the audio tracks can simply be checked for by ripping the CD twice. If the resulting BIN files are the same, the dump is OK. If they differ in the audio track data, the CD is badly scratched and the drive's error correction kicks in, producing different results with each rip. Meaning, the disc is unsuitable for a "perfect" rip anyway.

Consequently, I've begun working on the GoodSegaCD and GoodSaturn projects on the Inn, hoping that this slightly easier method will be adopted by the Sega retrogaming community as a new de facto standard (similar to the GoodGen stuff).

Looking forward to hearing your thoughts on this!

30

(55 replies, posted in General discussion)

Hi, thanks for your warm welcome.

However, after further discussing the issue with people over at SegaXtreme (your link) and on my own site (Eidolon's Inn - relevant discussion thread), I have decided that making perfect dumps is pretty pointless, at least for as long as they are such a hassle to produce.

Thanks anyway, and good luck with your project - on a free weekend I will at least try to reproduce the results some of your users got. I own some of the same SegaCD and Saturn games that are already in your DB.

Regards,
Eidolon.

31

(55 replies, posted in General discussion)

Hi y'all, I have roughly 50 SegaCD games, 50 Saturn games, 50 Dreamcast games and a couple of PS1 games to contribute to the database.

While doing some research on the net, I also found tosec.org, which takes a similar approach to the matter and also has a large database. What is this site's relationship with them? Wouldn't it be a good idea to merge the databases?

Anyway, the process of producing the "best possible rips" is very cumbersome. Even for just my few games it seems a little like overkill, at least from a time consumption point of view. Is there any chance that programs will be released to automate the process?

Best regards,
Eidolon
http://www.eidolons-inn.net