does it work better when you swap in a regular audio CD instead of a recorded one?

i guess the CD has some scratches or fingerprints on it.
you could try to clean it and set a lower reading speed before dumping with IsoBuster.

does the data track have EDC for Form 2 sectors?
if it does, you could run it through CDMage and the 'Light Scribe' image will likely report some errors.
repairing those will probably make it match the other images.
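
here's a rough sketch (Python, just for illustration - not how CDMage does it) of checking whether Form 2 sectors carry an EDC, assuming a raw 2352-byte/sector image; 'track.bin' is a placeholder name:

SECTOR = 2352
SYNC = b"\x00" + b"\xff" * 10 + b"\x00"

def form2_has_edc(path):
    with open(path, "rb") as f:
        while True:
            sec = f.read(SECTOR)
            if len(sec) < SECTOR:
                return None              # no Form 2 sector found
            if sec[0:12] != SYNC or sec[15] != 2:
                continue                 # not a Mode 2 data sector
            if sec[18] & 0x20:           # subheader submode bit 5 -> Form 2
                edc = sec[2348:2352]     # optional EDC field of Form 2
                return edc != b"\x00\x00\x00\x00"

print(form2_has_edc("track.bin"))        # True = EDC present, False = zero-filled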

53

(22 replies, posted in General discussion)

it could be that some audio data gets cut off along with the gap, because of the large negative offset.
replacing the gap with dummy data isn't 100% safe.
you could take an image of the gap sector range with IsoBuster and look for the 1st byte that differs from 0x00.
the data track image, before the gap is removed, might work as well.
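
a minimal sketch (Python) of that check, assuming the gap range was saved with IsoBuster as a raw 2352-byte/sector file; 'gap.bin' is a placeholder name:

with open("gap.bin", "rb") as f:
    data = f.read()

pos = next((i for i, b in enumerate(data) if b != 0), None)
if pos is None:
    print("gap is silent - replacing it with dummy data should be safe")
else:
    # 2352 bytes per audio sector, 4 bytes per 16-bit stereo sample
    print("first non-zero byte at offset", pos,
          "(sector", pos // 2352, ", sample", pos // 4, ")")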

yes, you're right Specialt1212,
you shouldn't use --fix on those tracks, and the offset is indeed not needed in this case.

it would be a little more difficult if there were a gap between those tracks.
then you'd have to determine its size from the subcode and transfer it manually.
but afaik the USA release hasn't got one.

55

(3 replies, posted in General discussion)

it would have been this way in the new db model Dremora wanted to implement
it was a complete rewrite of the structure
but i don't know what happened to this idea
Dremora isn't around that much any more and AFAIK he's the only person who can make such changes

56

(4 replies, posted in General discussion)

yes, it could be that EAC doesn't overread with a drive that is fully capable of doing it
for instance it won't with the Mediatek based Lite-Ons that i tried
you could then extract the sector range with IsoBuster (include a few sectors after the Lead-Out)
and after correcting the offset this file should match the one extracted with EAC, but with more meaningful data
i.e. where there's 0x00s at the end of the EAC file, there will still be some data in this one
like in the image you provided

57

(14 replies, posted in General discussion)

it does indeed look that way from channel Q

some CDs are very messed up in this regard
with all the gaps weird and such
usually older ones
the mastering equipment must have had some threshold for error tolerance, i guess
larger at first, since consumer hardware wasn't that accurate in those days either
e.g. a PSX would seek to the requested second, not the frame
so a 1:74 or 1:75 marker in the subcode would mean little to it

58

(14 replies, posted in General discussion)

on the 2nd image it is data from the end of sector 130443
so that would mean 0 offset and a 3.0 s gap after all, i guess

59

(14 replies, posted in General discussion)

alright, if you get junk when going back 2.74 sec and a valid data sector at 3 sec
(look for the '00 ff ff ff ff ff ff ff ff ff ff 00' sync pattern)
then it would almost certainly be 2.74
to be completely sure about this offset you could check this CD with a d8 capable drive
or do the CD swapping thing
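
a sketch (Python) of that check, assuming a raw 2352-byte/sector dump covering the area, a combined offset of 0, and TRACK_START being the file offset (in bytes) of the reference point per TOC; 'dump.bin' and TRACK_START are placeholders. in CD time "2.74" means 2 s 74 frames = 2*75+74 = 224 sectors, and "3.0 s" = 225 sectors:

SECTOR = 2352
SYNC = b"\x00" + b"\xff" * 10 + b"\x00"       # data sector sync pattern
TRACK_START = 225 * SECTOR                    # placeholder value

with open("dump.bin", "rb") as f:
    data = f.read()

for label, sectors_back in (("2.74 s", 2 * 75 + 74), ("3.00 s", 3 * 75)):
    off = TRACK_START - sectors_back * SECTOR
    hit = data[off:off + 12] == SYNC
    print(label, "back ->", "valid data sector" if hit else "junk")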

60

(14 replies, posted in General discussion)

which are the problematic tracks?
briefly checking the 1st file, it looks like the data->audio gap = 2s and the audio->audio gaps = 1.74s
which is not an uncommon pattern
btw for some reason RAR complains about an unexpected end of archive for those attachments

61

(32 replies, posted in General discussion)

thanks tossEAC, i'll add this information to the next version
it's good to know streaming is working with the Samsung

62

(14 replies, posted in General discussion)

hi TAurus

basically you would have to read ECMA-130 (chapters 20, 21, 22) and examine the .sub with Subcode Analyzer
here's a brief example with a Mode 2 section (the most frequent cause) affecting EAC's gap size detection
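
(the example above was an attachment; separately, here's a minimal Python sketch of just reading channel Q out of a .sub, assuming the common deinterleaved layout of 96 bytes per sector with 12 bytes per channel in P..W order, so Q is bytes 12..23; 'image.sub' is a placeholder name, values are BCD)

def bcd(b):
    return (b >> 4) * 10 + (b & 0x0F)

with open("image.sub", "rb") as f:
    sector = 0
    while True:
        block = f.read(96)
        if len(block) < 96:
            break
        q = block[12:24]                     # channel Q: bytes 12..23
        if (q[0] & 0x0F) == 1:               # ADR 1 = position information
            print("sector", sector,
                  "track", bcd(q[1]), "index", bcd(q[2]),
                  "rel %02d:%02d.%02d" % (bcd(q[3]), bcd(q[4]), bcd(q[5])),
                  "abs %02d:%02d.%02d" % (bcd(q[7]), bcd(q[8]), bcd(q[9])))
        sector += 1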

63

(32 replies, posted in General discussion)

FriiDump 0.5.3 (src) Linux build
- Fixed failing after 1st DL media layer with non-Hitachi methods.
- Fixed still hashing with 'nohash' parameter when resuming.
- Fixed resuming larger files (~4 GB).
- Fixed unscrambling larger files.
- Faster file unscrambling.
- Slight modifications to methods;
  possible performance increase with Hitachi based devices.
- Restructured methods and added some new ones.
- Added layer break information.
- Added current position output when error occurs.
- Added SH-D162A, SH-D162B, SH-D162C & SH-D162D as supported.

Lite-On going as fast as 5400 MB/h with ordinary DVDs
Hitachi-LG is expected to be as fast as RawDump, and i fixed up everything i could find wrong
so i likely won't be doing any updates to this any longer, unless new drives need to be added

included with it is a program that can be used to determine READ BUFFER command parameters
for as yet unsupported drives
so if you do have one of those and have any luck with it, please report your results here.

hi marzsyndrome

remove can do this
the syntax is a bit wacky though:
remove --size=150sec --direction=left "Track 02.gap" "Track 02.bin"

where 'left' says data will be moved from the 2nd file to the 1st
and so the 1st doesn't have to exist; you can name it whatever you like
if the name of an existing file is given, the data will be appended to it

and '150sec' is the size in sectors
alternatively it can also be specified in samples or bytes, or as a hex value

ok, thank you very much velocity

i think i'll stick this topic for now
since there could be quite a lot of people with Plextor drives
and hence affected by this issue

the worst thing is that, since it's masking erroneous data,
theoretically there could be CDs in the DB that pass as good in e.g. CDMage,
i.e. look absolutely ordinary, but actually were affected

though AFAIR all such CDs i checked still had some mastering artifacts present
as does yours

since CD-ROM decoding is skipped, all of those methods will give scrambled output
so the data track should be passed through a descrambler manually afterwards
http://www.mediafire.com/?q1mbksntoje
in this pack you will find the 'remove' & 'unscramble' programs
so after you have the output from cdtoimg
assuming the CD offset is +2 and Plextor's is +30, resulting in +32 samples or 0x80 bytes
and 'rawdata' was the name of the cdtoimg output file
you could try the following:
remove -size=$80 -direction=left trash rawdata
unscramble rawdata

the resulting file 'rawdata.scr' will have 128 bytes missing from the end
(corresponding to the offset)
but it should match the image extracted with other drives up to that point, so
fc /b rawdata.scr "Track 01.bin" |more
should result in:

Comparing files rawdata.scr and TRACK 01.BIN
FC: TRACK 01.BIN longer than rawdata.scr
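
(where the 0x80 comes from: offsets are counted in 16-bit stereo samples, 4 bytes each - a tiny Python sketch of that arithmetic, using the example numbers from this post only)

disc_offset  = 2        # this CD's factory offset, in samples
drive_offset = 30       # Plextor Premium in this example
combined = disc_offset + drive_offset        # 32 samples
print(combined * 4, hex(combined * 4))       # 128 bytes, 0x80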

it's a strange coincidence with the motherboards, and i would use an IDE2USB converter too
but i've just tried it on an older Gigabyte GA-8I915ME with an almost clean XP SP3 (through USB)
and AFAIR this problem initially occurred when the drive was connected to the internal IDE controller
yet the symptoms remain - must be Plextor's firmware after all
but still it would be really great if you could check this CD on a different system - to be absolutely certain

hi velocity37

thank you for reporting this

i had this same problem with a Plextor Premium in the past
and i thought it was specific to my model
so i guess this is an issue affecting Plextor drives in general then

reading the CD with the D8 command, swapping an audio CD, or fetching the data directly from the buffer
should yield correct output

edit:
what motherboard do you have, btw?
mine is ASUSTeK P5QL-E

yea, i just changed the parser string in the original plug-in
http://www.mediafire.com/?ym2acwgr3dn
place it in the nullDC 'plugins' directory and it should show up with the [quotes hack] tag
for GDIs without quotes you should switch back to the unmodded plugin

the given statement is from the time when the d8 command and swapping were unknown
so the only way to determine the offset was with IsoBuster or a similar program

to get artifacts from a CD with a large negative offset
one would need to cancel it out with a drive having a larger positive offset
(when doing manual reading, the offset manifests in the 1st sector after the data track
so if the combined offset is negative, it's obscured by the data track -
the data track is written on top of it, making detection impossible)
e.g. it would require at least a +299 drive for your CD

detection of larger positive offsets generally wasn't a problem
but you'd need a drive capable of overreading to fully extract the data from such a CD afterwards

while there can be some uncertainty when doing offset calculations manually with IsoBuster,
d8 returns the offset exactly as the drive sees it, i.e. as it is by definition

so, since the Plextor Premium does have the d8 command
and can overread into both the Lead-Out and the 1st pregap,
fortunately all of this shouldn't influence the outcome for you
though even with a Plextor there can still be difficulties with the gaps of some Sega CDs
but such occurrences should be rare

it's cool that you want to be certain about everything
often people rush in head-on and make mistakes

yes, CD = factory offset

it doesn't have to be 0, and mostly it won't be
there is this freedom left in the definition of the CD medium
i wouldn't really know why
i guess maybe data synchronization during mastering, the way they did it,
was a rather complicated process back then
so this would allow them to overcome certain technological difficulties

generally CDs for older systems, like the SCD, will have those offsets more random and over a wider range
the most extreme values currently in the DB must be around several thousand

hi trmchenry

those listed at AccurateRip are drive offsets
they are indeed constant

the value px_d8 returns is the combined offset (drive + CD)
you'd use this value for EAC offset correction
knowing the drive offset, you can derive the CD offset
CD offsets are what's required for the DB

so you must run px_d8 with each CD

from your example, assuming drive offset is +30
you'd rip Ground Zero Texas with offset correction +30 and submit 0 to DB;
Mickey Mania with +32 and submit +2
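
(the same bookkeeping as a small Python sketch, using only the numbers from this example: px_d8 reports the combined offset, so the CD offset for the DB is combined minus drive)

DRIVE_OFFSET = 30                      # from the AccurateRip list

readings = {                           # example px_d8 values from this post
    "Ground Zero Texas": 30,
    "Mickey Mania": 32,
}

for title, combined in readings.items():
    print(title, ": rip with EAC correction %+d," % combined,
          "submit CD offset %+d to the DB" % (combined - DRIVE_OFFSET))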

it would be better to first compare some CDs to already existing DB entries
if there are none, maybe the audio tracks of Mickey Mania are comparable to the European release

72

(32 replies, posted in General discussion)

ok, 0.5.2:
- Corrected handling of standard DVDs
  (type should be forced to 3, when dumping or unscrambling).
- Better response to 'speed' parameter.
- Uniform raw output for all devices: unscrambled data + headers.
- Slight performance increase (~1650 MB/h on LH-18A1H).
- Added LH-18A1P, LH-20A1H, LH-20A1P to list of supported devices.

so the raw output now also depends on the disc type, i.e. the --type (-T) parameter
the default is the Nintendo layout, but for regular DVDs it should always be forced to 3

and i couldn't actually test dual-layer DVDs, because of a lack of free space

73

(32 replies, posted in General discussion)

yes, i haven't fixed this yet. and also Plextor's RAW output would be different from that of other drives.
the author refers to scrambled data + header & EDC as RAW sectors,
and as i understand it most people do so, but the Plextor doesn't provide scrambled output
and either way i don't think this format would be too useful.
so should i instead set this mode to unscrambled+header+EDC for the rest of the drives, like the Plextor?

74

(32 replies, posted in General discussion)

Just so this is clear: -016,32 means that 16 sectors are read with the read command, and 32 are read from buffer

yes

(so with this command, the filesize should increase by ~64kb everytime instead of the normal 32?)? If so, then I don't see why it works fine in readbuf_tool if you read e.g. 206400 bytes from the buffer and why not here with 100 sectors.. maybe there's a bug? Also, since Plextor buffer holds up to 624 sectors, is it possible to raise the limit from 100 to for instance 1000?

readbuf_tool reads data in 64kb chunks; if it's more than that, it will loop until the criterion is met
friidump does the same, and actually about everything i know of does
(e.g. CD tools would usually read 26 sectors at a time, which is 26*2448=63648)
as this is the size that all devices should support; larger than that doesn't have to work, and often it doesn't
the drive is not the only thing responsible for this limit, there's also the controller, drivers and such
for example my IDE controller is weird - picky about data alignment
and an older PC i have in the other room wouldn't read the buffer at all from this very same LH-18A1H
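
(the looping itself looks roughly like this Python sketch - read_chunk() here is a hypothetical stand-in for the actual READ BUFFER transport via SPTI/SG_IO, not friidump's real code)

CHUNK = 26 * 2448                      # 63648 bytes - the usual CD-sized chunk

def read_drive_buffer(total_bytes, read_chunk):
    data = bytearray()
    offset = 0
    while offset < total_bytes:
        length = min(CHUNK, total_bytes - offset)
        data += read_chunk(offset, length)   # one transfer, never above ~64 KiB
        offset += length
    return bytes(data)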

DVD drives return either 2064 or 2384 byte RAW sectors
sectors are grouped in blocks of 16
all of this is taken into account when doing reads with those methods, so that's why i said it's non-linear
100 sectors is more than the 5 blocks that the best Hitachi method has
friidump's author got ~1250 MB/h with it and some guys would get as much as 2500 MB/h
the Lite-On would do about ~1100 on 27 sector requests (27*2384=64368 - about a full page); ~2000 on PC DVDs
so beyond that it wouldn't really matter; it would mean the drive has to fill this amount of cache in between reads,
while calculations are done - which is very unrealistic

dumping fails because there are numerous checks done on data sequence and integrity
there's likely incorrect content returned from the cache, e.g. garbage or sectors from previous reads

1. 'friidump -d d: -c 0 -x 1 -T 3 -0 -r test.iso' reads a normal dvd at ~650 mb/s on Plextor! So looks like you're right and Plextor needs a different approach for GC.
2. 'friidump -d e: -c 1 -x 1 -T 3 -0 -r test.iso' works fine on Samsung and normal DVD, although contents are scrambled (like in Truong's tool).

So, normal DVD's are working fine, but gamecube discs act entirely different on Plextor (with and without swapping) and Samsung (only tested without swapping).. iR0b0t is having the same issues, so I guess that for now, only lite-on drives offer results that are comparable to LG's?

yeah, we have to disable those error corrections
i'll send you a program later tonight to try some things out, ok?

75

(32 replies, posted in General discussion)

oh yes, they're limited to 100, so 512 resets to the default value
but if 64 fails, it's too much already
i guess we'll have to find something special for Plextor

does Samsung work with regular DVDs?