<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
	<title type="html"><![CDATA[Redump Forum — Merged Compression Tests]]></title>
	<link rel="self" href="http://forum.redump.org/feed/atom/topic/4297/" />
	<updated>2009-04-23T22:03:30Z</updated>
	<generator version="1.4.4">PunBB</generator>
	<id>http://forum.redump.org/topic/4297/merged-compression-tests/</id>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/17332/#p17332" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>themabus wrote:</cite><blockquote><div class="quotebox"><blockquote><p>3) Now he shares the pars with other testers to see how many blocks are needed.</p></blockquote></div><p>about equal to archive size, i guess, which would be huge</p></blockquote></div><p>I will come back to this later.<br />I suggested this for stuff like Gamecube that gets no better compression with 7z.<br />If the images are set to the same date and compressed with the same version of winrar and the same settings, you&#039;ll get the same result as packps2, won&#039;t you?</p>]]></content>
			<author>
				<name><![CDATA[Rocknroms]]></name>
				<uri>http://forum.redump.org/user/4288/</uri>
			</author>
			<updated>2009-04-23T22:03:30Z</updated>
			<id>http://forum.redump.org/post/17332/#p17332</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/17056/#p17056" />
			<content type="html"><![CDATA[<p>I see what u are saying, and that&#039;s why we also provide a change list which lists the games that have changed since the last update, though at the moment all of this is done manually and an automated solution would be great. the compressed crc dat files are for people like me who have missed a couple of updates and can use that dat to quickly find which games need to be updated</p><p>about naming of the games, though i am a big fan of serial # coz that&#039;s what i have been using since Psx_renamer days, i do agree we need to somehow provide the info on which version of a game a given dump is.</p><p>i personally don&#039;t mind how long a game name is as long as it provides the following info</p><p>GameName-region-Language-version-edition-serial</p><p>PS: i need patches for a few PSX Asia games dumped by u, and maybe a few tracks/full images, could u provide those somehow?</p>]]></content>
			<author>
				<name><![CDATA[BadSector]]></name>
				<uri>http://forum.redump.org/user/4334/</uri>
			</author>
			<updated>2009-04-16T04:12:49Z</updated>
			<id>http://forum.redump.org/post/17056/#p17056</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/17051/#p17051" />
			<content type="html"><![CDATA[<p>Anyone interested can download the PackISO installer here: <a href="http://www.mediafire.com/?otb3dmmtznh">http://www.mediafire.com/?otb3dmmtznh</a></p>]]></content>
			<author>
				<name><![CDATA[BitLooter]]></name>
				<uri>http://forum.redump.org/user/4317/</uri>
			</author>
			<updated>2009-04-16T01:58:11Z</updated>
			<id>http://forum.redump.org/post/17051/#p17051</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/17026/#p17026" />
			<content type="html"><![CDATA[<div class="quotebox"><blockquote><p>in fact one of our project torrents has a dat file with packiso&#039;s CRCs for quick renaming, and we plan to release the dat with every update from now on, which should solve the time issue in renaming those sets.</p></blockquote></div><p>for people who would get those images elsewhere and then compress them accordingly - with PackIso<br />to join the torrent at a higher position?<br />but the thing is - there aren&#039;t a lot of those people, as i understand it.<br />the majority won&#039;t be bothered with renaming themselves - they&#039;ll take what is given.</p><p>so, imho, if somebody feels like recompressing and reuploading, like xenogears i suspect may (to .zip/.rar)<br />it wouldn&#039;t do any harm - the more of those torrents there are, the better<br />one enforced set without alternatives wouldn&#039;t be good<br />even though i think .zip or .rar in particular aren&#039;t rational - still, if somebody feels differently about it - it won&#039;t harm<br />(well, not any more than any torrent based on redump.org .dats in their current state<br />i see a general problem with names, i think they&#039;re terribly wrong,<br />hence in torrents also and anything else derived from them<br />but that is a concern of the redump.org crew)</p><p>afterwards, to keep names in sync with updates made at redump.org it would be enough to compare .dat files (redump&#039;s):<br />the one taken when the set was made, and the current one - there&#039;s no need to rescan the whole set every time, imho<br />(and to produce alternative .dats for that reason)<br />when it&#039;s uploaded it&#039;s locked to redump.org @ a certain point in time (like a snapshot),<br />so you can say this torrent is a subset of that .dat<br />when you see the CRCs of some title change from .dat to .dat - you know this CD should be updated<br />when the same CRCs belong to a different title now - it was renamed<br />and new records would manifest as new titles with new CRCs<br />that&#039;s all the management there is, and it is a concern solely of the person maintaining the torrent, not everyone downloading it, as i see it<br />i don&#039;t really see an application for a .dat indexing compressed files</p>]]></content>
			<author>
				<name><![CDATA[themabus]]></name>
				<uri>http://forum.redump.org/user/2174/</uri>
			</author>
			<updated>2009-04-15T10:44:09Z</updated>
			<id>http://forum.redump.org/post/17026/#p17026</id>
		</entry>
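The .dat-to-.dat comparison themabus outlines above can be sketched in a few lines. This is a hypothetical illustration, not redump tooling: it assumes each .dat snapshot has already been parsed into a mapping from title to its set of track CRCs (parsing of the actual dat format is omitted).

```python
def diff_dats(old, new):
    """Classify titles between two dat snapshots.

    old, new: dict mapping title -> frozenset of track CRC strings.
    """
    # invert the old snapshot so a CRC set can be looked up by content
    old_by_crc = {crcs: title for title, crcs in old.items()}
    report = {"updated": [], "renamed": [], "new": [], "unchanged": []}
    for title, crcs in new.items():
        if title in old:
            if old[title] == crcs:
                report["unchanged"].append(title)
            else:
                # same title, different CRCs -> the dump was updated
                report["updated"].append(title)
        elif crcs in old_by_crc:
            # same CRCs under a different title -> it was renamed
            report["renamed"].append((old_by_crc[crcs], title))
        else:
            # title and CRCs both unseen -> a newly added dump
            report["new"].append(title)
    return report
```

As the post says, this is a concern only for the person maintaining the torrent: one run against the snapshot dat tells them exactly which discs to re-fetch or rename.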
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/17001/#p17001" />
			<content type="html"><![CDATA[<p>I am not sure about other sets, as the uploaders of those didn&#039;t use the forum post of the project at UG, but packiso has worked a lot better with PSX, and the community has accepted its use; in fact one of our project torrents has a dat file with packiso&#039;s CRCs for quick renaming, and we plan to release the dat with every update from now on, which should solve the time issue in renaming those sets.</p><p>so i believe packiso is doing a good job there at the moment, though i myself would like a merged set at some point to save HDD space; i have 2.5TB of space and it has been almost full for the last 6 months or so.</p><p>PS: we also made a little installer there which makes it easy to work with packiso; if someone wants, they can use that as well.</p>]]></content>
			<author>
				<name><![CDATA[BadSector]]></name>
				<uri>http://forum.redump.org/user/4334/</uri>
			</author>
			<updated>2009-04-15T04:48:14Z</updated>
			<id>http://forum.redump.org/post/17001/#p17001</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16850/#p16850" />
			<content type="html"><![CDATA[<p>i don&#039;t think there&#039;s anything to worry about<br />neither me nor cHrI8l3 maintain those sets, they&#039;ll likely remain as they are</p><p>i myself think it&#039;s still way too early for this,<br />it would be more appropriate when PSX is about 80% or so complete<br />and even then likely as an alternative to PackIso</p><p>though regarding .zip or .rar, i think it&#039;s a step backwards<br />they are uncompromising, in favour of people with fast connections, a lot of storage space and time to waste</p><p>the compression increase a merged set offers is unprecedented<br />it won&#039;t be 8 times of course, but on average it&#039;d be x2 easy over PackIso<br />and decompression speed is still good<br /><span class="postimg"><img src="http://img410.imageshack.us/img410/5558/merged.png" alt="http://img410.imageshack.us/img410/5558/merged.png" /></span><br />so let&#039;s say this 80% PSX set takes 1tb with PackIso<br />then it would be 500gb merged<br />my connection allows a maximum download speed of 500 kilobytes per second, which is about average i guess<br />so i&#039;d save (1024*1024)/60/60/24 = ~12 full days on download<br />that&#039;s a lot of space and time economy<br />the price is slower decompression speed,<br />but to lose those 12 days i saved, i&#039;d have to decompress really often<br />let&#039;s be generous and say it&#039;s 2 minutes of overhead on decompression, which is not true<br />(it&#039;s not even true for zip vs merged, but it&#039;s ok - let&#039;s be generous)<br />so then 12*24*60/2 = 8640<br />i&#039;d need to decompress 8640 images one by one to claim &#039;it wasn&#039;t worth it&#039; -<br />if i decompressed all merged versions of the same title at once, i&#039;d actually save time<br />(8640 happens to be about 80% of PSX titles, so i&#039;d need to decompress each one of them: French, German, etc...<br />it&#039;s very unlikely, and still i&#039;d save space)</p><p>about ease of use - it&#039;s not a problem either, imho<br />a graphical frontend can be made that would allow user-friendly extraction</p>]]></content>
			<author>
				<name><![CDATA[themabus]]></name>
				<uri>http://forum.redump.org/user/2174/</uri>
			</author>
			<updated>2009-04-10T12:43:58Z</updated>
			<id>http://forum.redump.org/post/16850/#p16850</id>
		</entry>
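themabus's space/time arithmetic above checks out; restated as a quick calculation (the 500 GB saving, the 500 KB/s link and the 2-minute overhead are the post's assumptions, not measurements):

```python
# Re-checking the arithmetic in the post above: a 500 GB download saving
# at 500 KB/s, versus a (generous) 2-minute decompression overhead
# per image.  All three figures are the post's own assumptions.
saved_kb = 500 * 1024 * 1024       # 500 GB expressed in KB
speed_kbps = 500                   # download speed in KB/s
download_days = saved_kb / speed_kbps / 60 / 60 / 24   # ~12.1 days saved

overhead_min = 2                   # assumed per-image decompression penalty
break_even = download_days * 24 * 60 / overhead_min
# ~8738 one-by-one decompressions before the saved bandwidth is used up
# (the post rounds 12.1 days down to 12, giving its 8640 figure)
```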
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16838/#p16838" />
			<content type="html"><![CDATA[<div class="quotebox"><cite>Haldrie wrote:</cite><blockquote><p>A lot of our dumps are being shared on torrent networks now as well as usenet and most people in the torrent community want something that is easy to extract and not all of them have the latest and greatest hardware. A lot of them even have a problem figuring out packIso (like that&#039;s real hard to use). Basically if we are going to use this to help spread our dumps we need to make sure it&#039;s going to be usable and accepted by everyone first (or at least the majority of people that know what they are doing) before we start migrating everything to a new format.</p></blockquote></div><p>Seriously, this...<br />How could you share the dumps on UG or similar with patches, or an unusual, ridiculous compression program? 99% of the regular users will just ignore those torrents and the redump.org project will <span class="bbu">never get out of the niche</span>.</p><p>It would annoy more people and give the tosec guys another fact to joke about at our expense. Already now in nearly every UG TOSEC torrent you can read things like &quot;redump.org artificially forges dumps&quot;, &quot;redump.org&#039;s method is the worst on the internet and it&#039;s very difficult to implement&quot;, &quot;redump.org dumps are ghost dumps because 90% of the discs listed on their site don&#039;t exist on the internet&quot; etc... I can already imagine &quot;lol, redump.org dumps are compressed in such an abstruse way that it takes hours to get back to a common format, and maybe you have to be an engineer to make this happen&quot;)</p><p>To go head to head with TOSEC the compression format should be the common torrentzip or winrar.<br />Just my humble opinion, but I think I have a very strong point here.<br />Of course I&#039;ll help to seed on UG when the torrents are out, no matter the final compression standard. Keep up the good work to finally spread the dumps! <img src="http://forum.redump.org/img/smilies/smile.png" width="15" height="15" alt="smile" /></p>]]></content>
			<author>
				<name><![CDATA[xenogears]]></name>
				<uri>http://forum.redump.org/user/4164/</uri>
			</author>
			<updated>2009-04-10T06:04:35Z</updated>
			<id>http://forum.redump.org/post/16838/#p16838</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16830/#p16830" />
			<content type="html"><![CDATA[<p>is anyone else in favour of using patches for storing images instead of merged archives?</p><p>Lil&#039; Update:<br />- the ECM issue I mentioned a few posts ago (the one where you can not ECM inside FreeArc on large files) was found to be caused not by FreeArc but by ECM itself... ECM can not handle large files, so when you add more than 4gb of discs into an archive and run ECM on it... it will not work! let&#039;s hope it will be resolved...</p>]]></content>
			<author>
				<name><![CDATA[cHrI8l3]]></name>
				<uri>http://forum.redump.org/user/27/</uri>
			</author>
			<updated>2009-04-09T21:58:03Z</updated>
			<id>http://forum.redump.org/post/16830/#p16830</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16826/#p16826" />
			<content type="html"><![CDATA[<p><em>Size</em><br /></p><div class="codebox"><pre><code>Split:

 zip (7z -tzip)       :3996932848 &lt;| 8 * 499616606 - average from 3 samples

 rar (-m5)            :3725107104 &lt;| 8 * 465638388 - average from 3 samples

 PackIso (ECM-&gt;7z)    :3020367596 &lt;| taken from cHrI8l3&#039;s table but it&#039;s slightly off: each archive is smaller by about 4..6 bytes

Merged:

 ImageDiff+ECM-&gt;7z    : 379311961 &lt;| ImageDiff with default settings

 Xdelta3+ECM-&gt;7z      : 387772715 &lt;| Xdelta: -N -D -R -n -0; 7z: -mx=9; it&#039;s strange though, patches themselves are smaller uncompressed

 Xdelta3+ECM-&gt;FreeArc : 394458697 &lt;| -m4, -m4x (size is the same)

 Xdelta3+ECM-&gt;FreeArc : 388348030 &lt;| -m9x</code></pre></div><p><em>Compression speed</em><br /></p><div class="codebox"><pre><code>Split:

 zip (7z -tzip)       : 8 *  ~87 =  ~696 seconds

 rar (-m5)            : 8 * ~347 = ~2776

 PackIso (ECM-&gt;7z)    : 8 * ~250 = ~2000

Merged:

 ImageDiff+ECM-&gt;7z    : 814
  ECM                 : 36
  ImageDiff           : 7 *  ~56 =  ~392
  7z                  : 386

 Xdelta3+ECM-&gt;7z      : 605
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  7z                  : 373

 Xdelta3+ECM-&gt;FreeArc : 624
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  FreeArc             : 392               &lt;| -m4

 Xdelta3+ECM-&gt;FreeArc : 622
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  FreeArc             : 390               &lt;| -m4x

 Xdelta3+ECM-&gt;FreeArc : 797
  ECM                 : 36
  Xdelta3             : 7 *  ~28 =  ~196
  FreeArc             : 565               &lt;| -m9x</code></pre></div><p><em>Decompression speed</em><br /></p><div class="codebox"><pre><code>Split:

 zip (7z -tzip)       : 32..256 (8 *  ~32 =  ~256)

 rar (-m5)            : 40..320 (8 *  ~40 =  ~320)

 PackIso (ECM-&gt;7z)    : 72..576 (8 *  ~72 =  ~576)

Merged:

 ImageDiff+ECM-&gt;7z    : 84(209)..959
  unECM               : 36
  ImagePatch          : 7 * ~125 =  ~875
  7z (1 or many diffs): 48

 Xdelta3+ECM-&gt;7z      : 84(118)..322
  unECM               : 36
  Xdelta3             : 7 *  ~34 =  ~238
  7z (1 or many diffs): 48

 Xdelta3+ECM-&gt;FreeArc : 97(131)..335
  unECM               : 36
  Xdelta3             : 7 *  ~34 =  ~238
  FA (1 or many diffs): 61                &lt;| -m4, -m4x

 Xdelta3+ECM-&gt;FreeArc : 101(135)..339
  unECM               : 36
  Xdelta3             : 7 *  ~34 =  ~238
  FA (1 or many diffs): 65                &lt;| -m9x</code></pre></div><p><em>Programs used</em><br /></p><div class="codebox"><pre><code>7-Zip 4.53 (PackIso)
7-Zip 4.65
ECM v1.0
FreeArc 0.50
ImageDiff v0.9.8
RAR 3.80
Xdelta 3.0u</code></pre></div><p>ImageDiff is indeed quite slow with larger files, though the patches it produces, while being larger, compress better for some reason.<br />replacing it with a similar program, Xdelta3, improved both compression and decompression speeds a lot.<br />replacing 7z with FreeArc on the other hand didn&#039;t improve anything, though i tested just a few options:<br />-m4 - for being suggested as equal to -mx=9 of 7z, which i commonly use<br />and a couple more<br />(maybe it does beat 7z with some - i&#039;m not saying it doesn&#039;t, they&#039;re quite close anyway)<br />also i didn&#039;t test those inbuilt filter chains</p><p>from those results i&#039;d say &#039;Xdelta3+ECM-&gt;LZMA&#039; is the optimal configuration,<br />if the .ecm were created for the most demanded version from the set (U or E, whichever it is)<br />it would lose only a few seconds on decompression to PackIso, while improving the ratio a lot<br />(if the game contained audio tracks it&#039;d probably be a tie (TAK vs APE))<br />it would be worse if patching is required, but still acceptable, imho<br />also the whole set would compress/decompress considerably faster,<br />but it&#039;s unlikely somebody would do that, imho, not often at least<br />also the memory requirements for 7z @x9 are ok: 700mb/70mb</p><p>if such a set were created now it would involve a lot of constant recompression, though - whenever a title is added -<br />so it&#039;s too early, imho, but otherwise i like it a lot</p>]]></content>
			<author>
				<name><![CDATA[themabus]]></name>
				<uri>http://forum.redump.org/user/2174/</uri>
			</author>
			<updated>2009-04-08T16:07:33Z</updated>
			<id>http://forum.redump.org/post/16826/#p16826</id>
		</entry>
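The size table in the post above implies just how large the merged-set advantage is; a quick sanity check on the posted byte counts (labels shortened here for readability):

```python
# Ratios implied by the size table in the post above
# (byte counts copied verbatim from the post; 8-disc test set).
sizes = {
    "zip (7z -tzip)":    3996932848,
    "rar (-m5)":         3725107104,
    "PackIso (ECM+7z)":  3020367596,
    "ImageDiff merged":   379311961,
    "Xdelta3 merged":     387772715,
}
packiso_vs_merged = sizes["PackIso (ECM+7z)"] / sizes["Xdelta3 merged"]
zip_vs_merged = sizes["zip (7z -tzip)"] / sizes["ImageDiff merged"]
# merged storage comes out ~7.8x smaller than PackIso
# and ~10.5x smaller than plain zip on this test set
```

So for this particular 8-image set the gain is well beyond the conservative "x2 easy over PackIso" estimate quoted earlier in the thread.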
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16816/#p16816" />
			<content type="html"><![CDATA[<div class="quotebox"><blockquote><p>First of all it looks like this program is still in development and the current version is an alpha build so I don&#039;t think it would be a good idea to migrate everything to this just yet until the program is more perfected by the author.</p></blockquote></div><p>yes, it&#039;s still an unstable release, and every time you want to store an archive you should run a test after compression (-t switch)</p><div class="quotebox"><blockquote><p>Basically if we are going to use this to help spread our dumps we need to make sure it&#039;s going to be usable and accepted by everyone first (or at least the majority of people that know what they are doing) before we start migrating everything to a new format.</p></blockquote></div><p>yes yes i agree! don&#039;t use freearc yet for anything official, just try it for personal use for now, get used to it, learn basic usage etc...<br />It will however be possible to create bundles of FreeArc with other compressors (like ECM) and a config, let&#039;s say e.g. &quot;FreeArc 0.5 (Redump Pirate Release)&quot;, and everything the user will need to do is unpack it, run FreeArc.exe, and select and extract archives made with ECM <img src="http://forum.redump.org/img/smilies/tongue.png" width="15" height="15" alt="tongue" /> no need for installation, no need for messing with the config, that&#039;s one example...</p><div class="quotebox"><blockquote><p>so in &#039;Arc 6&#039; there&#039;s ECMs of all 8 images -&gt; diff<br />decompression and reverse of diff on all 8 images would complete in 7-8 minutes, leaving ECMs, right?</p></blockquote></div><p>yes, there are 8 .ecm files (no diffs!!) understand one thing... i&#039;m looking for a method that will not involve diffs, I added the diff method in the tests only for comparison of the archive sizes you can get with and without diffs, and as you can see both methods give pretty much the same size (with a few MB in favour of diffs and greater speed/convenience in favour of the repetition filter...)</p><div class="quotebox"><blockquote><p>can you make a filter chain then, that would produce one ECM and diffs, akin to &#039;ImageDiffs+ECM+7z&#039;?</p></blockquote></div><p>I don&#039;t think so; the FreeArc config does not treat every file inside the archive separately, but as a merged solid bundle. for patches you would need a more complicated algorithm that would select pairs of files and create patches from those pairs...</p><div class="quotebox"><blockquote><p>I do not think this is meant to be used for spreading the images. Instead it is meant to store many games in very little space.</p></blockquote></div><p>hell yeah <img src="http://forum.redump.org/img/smilies/smile.png" width="15" height="15" alt="smile" /> for everyone who is short on hdd space and doesn&#039;t want too much mess with images</p>]]></content>
			<author>
				<name><![CDATA[cHrI8l3]]></name>
				<uri>http://forum.redump.org/user/27/</uri>
			</author>
			<updated>2009-04-07T13:47:40Z</updated>
			<id>http://forum.redump.org/post/16816/#p16816</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16812/#p16812" />
			<content type="html"><![CDATA[<p>I do not think this is meant to be used for spreading the images. Instead it is meant to store many games in very little space. <img src="http://forum.redump.org/img/smilies/smile.png" width="15" height="15" alt="smile" /></p><p>It might be useful if you want to have something like a &quot;complete psx FF7 collection&quot; and share that with others.</p><p>But I myself prefer the approach of &quot;1 disc - 1 archive&quot;, even when it is space-wasting. <img src="http://forum.redump.org/img/smilies/smile.png" width="15" height="15" alt="smile" /></p><p>(I currently have all games torrentzipped, since it is the easiest way to quickly scan the whole collection :x)</p>]]></content>
			<author>
				<name><![CDATA[Sotho Tal Ker]]></name>
				<uri>http://forum.redump.org/user/4396/</uri>
			</author>
			<updated>2009-04-07T03:10:44Z</updated>
			<id>http://forum.redump.org/post/16812/#p16812</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16811/#p16811" />
			<content type="html"><![CDATA[<p>thank you cHrI8l3, that does sound intriguing</p><p>so in &#039;Arc 6&#039; there&#039;s ECMs of all 8 images -&gt; diff<br />decompression and reverse of diff on all 8 images would complete in 7-8 minutes, leaving ECMs, right?</p><p>can you make a filter chain then, that would produce one ECM and diffs, akin to &#039;ImageDiffs+ECM+7z&#039;?</p>]]></content>
			<author>
				<name><![CDATA[themabus]]></name>
				<uri>http://forum.redump.org/user/2174/</uri>
			</author>
			<updated>2009-04-07T02:04:10Z</updated>
			<id>http://forum.redump.org/post/16811/#p16811</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16809/#p16809" />
			<content type="html"><![CDATA[<p>This definitely looks pretty good and I might try it soon. There are just a few things though. First of all it looks like this program is still in development and the current version is an alpha build so I don&#039;t think it would be a good idea to migrate everything to this just yet until the program is more perfected by the author. Another thing I&#039;ve been thinking of is the possible amount of PC resources people are going to need to extract an archive made with this program. A lot of our dumps are being shared on torrent networks now as well as usenet and most people in the torrent community want something that is easy to extract and not all of them have the latest and greatest hardware. A lot of them even have a problem figuring out packIso (like that&#039;s real hard to use). Basically if we are going to use this to help spread our dumps we need to make sure it&#039;s going to be usable and accepted by everyone first (or at least the majority of people that know what they are doing) before we start migrating everything to a new format.</p>]]></content>
			<author>
				<name><![CDATA[Haldrie]]></name>
				<uri>http://forum.redump.org/user/485/</uri>
			</author>
			<updated>2009-04-06T22:51:22Z</updated>
			<id>http://forum.redump.org/post/16809/#p16809</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16804/#p16804" />
			<content type="html"><![CDATA[<div class="quotebox"><blockquote><p>what about records with FreeArc+NanoZip</p></blockquote></div><p>let&#039;s make one thing clear... FreeArc is a <strong>compression suite</strong> (not a compressor!!) that allows you to link different algorithms<br />it has some nice algorithms built in, like LZMA (the one 7z uses), a repetition filter, exe filters, delta...<br />and you can configure it to work with almost any other command line compressor/data filter, e.g. ECM, NanoZip, Precomp, APE, WinRAR, etc... whatever you need<br />and then... you can create your own packing profiles by linking compressors with filters etc.</p><p>you can run for example <em>arc.exe a -mecm+rep:1gb+lzma:128 -dp&quot;C:\Working Dir&quot; -- &quot;C:\Working Dir\arc.arc&quot; &quot;file1&quot; &quot;file2&quot;</em><br />and it will first ecm both files, then filter repetitions within a 1gb range, and then compress with LZMA using a 128mb dictionary<br />it&#039;s simple <img src="http://forum.redump.org/img/smilies/smile.png" width="15" height="15" alt="smile" /></p><div class="quotebox"><blockquote><p>if you can talk the author into integrating ecm then - it&#039;s even better</p></blockquote></div><p>it is already possible, but ecm has some issues on large sets of data... it can be used on &lt;2gb files though<br />that might be worked out in one of the next releases</p><div class="quotebox"><blockquote><p>(7 is about 10 - 7 approximates to 10) = symmetrical</p></blockquote></div><p>wtf <img src="http://forum.redump.org/img/smilies/smile.png" width="15" height="15" alt="smile" /> the repetition filter takes most of the time, decompressing from LZMA takes less than half of that time</p><p>Try it, definitely!!<br />download and run the installer of v0.50: <a href="http://freearc.org/Download.aspx">http://freearc.org/Download.aspx</a><br />download the update pack and unpack it over the installed version: <a href="http://www.haskell.org/bz/arc1.arc">http://www.haskell.org/bz/arc1.arc</a> (recommended)</p><p>Edit:<br />if you want you can store your own configurations in arc.ini, here is one of mine:<br />cso7=ecm+rep:1gb+lzma:128mb:max:bt4:273<br />and you run it with the -mcso7 switch</p><p>and... the following can also be configured and run when you have audio stored as wav:<br />packiso= $obj =&gt; ecm+7z, $wav =&gt; ape</p><p>and... FA also has a GUI, but you can not run custom configurations from the GUI yet... however that will be resolved soon</p><p>in short... it&#039;s soft for PRO&#039;s and that&#039;s why I thought redump staff might be interested <img src="http://forum.redump.org/img/smilies/tongue.png" width="15" height="15" alt="tongue" /> are you? <img src="http://forum.redump.org/img/smilies/smile.png" width="15" height="15" alt="smile" /></p>]]></content>
			<author>
				<name><![CDATA[cHrI8l3]]></name>
				<uri>http://forum.redump.org/user/27/</uri>
			</author>
			<updated>2009-04-06T19:11:35Z</updated>
			<id>http://forum.redump.org/post/16804/#p16804</id>
		</entry>
		<entry>
			<title type="html"><![CDATA[Re: Merged Compression Tests]]></title>
			<link rel="alternate" href="http://forum.redump.org/post/16803/#p16803" />
			<content type="html"><![CDATA[<p>ok, i&#039;ll clean up the hdd, fetch those images and test for myself<br />asap</p><div class="quotebox"><blockquote><p>symmetric compressors are used in NanoZip</p></blockquote></div><p>i thought it was the same program, like a mode or something<br />so there&#039;s 2?<br />what about records with FreeArc+NanoZip, then? are they compressed twice, like zip+arj?</p><p>edit:<br />i&#039;m basing my statements on what you write<br />i&#039;ve asked you about the only comparable value in that table - whether it&#039;s slower, and you said it is<br />that &#039;fast&#039; part you added later<br />and after that you wrote that FA extracts in 7 minutes,<br />...about the same as the compression value given, ~10 (7 is about 10 - 7 approximates to 10) = symmetrical<br />had you written - FA extracts in 1 minute - i&#039;d think it&#039;s fast (as 7z extracts in 2),<br />but you wrote - in seven </p><p>so what would be the decompression speed difference on exactly the same files, 7z vs FA?<br />have you tested?</p><p>edit:<br /></p><div class="quotebox"><blockquote><p>i don&#039;t mean that 7z is the ultimate archiver btw.<br />if there&#039;s one faster at the same or better ratio (like TAK vs APE &amp; FLAC) - by all means</p></blockquote></div><p>so if FreeArc does that - it&#039;s great<br />if you can talk the author into integrating ecm then - it&#039;s even better</p>]]></content>
			<author>
				<name><![CDATA[themabus]]></name>
				<uri>http://forum.redump.org/user/2174/</uri>
			</author>
			<updated>2009-04-06T15:58:12Z</updated>
			<id>http://forum.redump.org/post/16803/#p16803</id>
		</entry>
</feed>
