I based my port on flamewing's version, since MainMemory based his port on that version too. If you want to do the other compressors and integrate them into KENSSharp, be my guest. I have other projects I'd rather work on (though having done this somewhat helps me with respect to those projects). :P flamewing, I found and fixed a bug in your implementation of moduled Kosinski decompression: in kosinski::decode_internal, you read the stream all the way to the end, so when kosinski::decode calls it again for the next module, there's no data left to process and only the first module gets decompressed.
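The fix, roughly (a C# sketch from memory, not flamewing's actual code; ReadBigEndianUInt16 and DecodeSingleModule are made-up helper names): cap each pass at one module's worth of output and realign the input, instead of letting the first pass consume the whole stream.

```csharp
using System;
using System.IO;

static void DecodeModuled(Stream input, Stream output)
{
    // Total uncompressed size from the two-byte (big-endian) header.
    ushort fullSize = ReadBigEndianUInt16(input);         // hypothetical helper
    while (output.Length < fullSize)
    {
        // Each module decompresses to at most 0x1000 bytes; stop this pass
        // there instead of reading the input stream to the end.
        int moduleSize = (int)Math.Min(0x1000L, fullSize - output.Length);
        DecodeSingleModule(input, output, moduleSize);     // hypothetical per-module core

        // As I understand the format, each compressed module is padded so the
        // next one starts on a 16-byte boundary; skip that padding here.
        input.Seek((0x10 - (input.Position & 0xF)) & 0xF, SeekOrigin.Current);
    }
}
```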
From the SVN repo, I only see code for Kosinski? Here is my source code for all 4 decompressors: http://www.sappharad.com/junk/KensNET_decomp.zip

Output for Kosinski, Nemesis and Enigma is 100% identical to the Win32 command-line tool. The Saxman decompressor seems to stop early, although its output is correct up to that point, and from the input file and the code it looks like it should be stopping there. So either there's something I don't understand yet, or the command-line tool does something else after it hits a pair of 00s. The logic itself is literally a line-by-line equivalent of the original C code, which should be fairly obvious when you see that even the debug comments from the C version are still there.

Also, damn. I wish I had known the code for S2LVL was checked into the SVN repo; I decompiled it with .NET Reflector and rewrote the parts that didn't decompile properly. I will port over the compressors this weekend, if time permits. I'd also like to improve the API a bit after that point, because right now the method signatures are identical to the DLL's, and there should be no reason to write out temp files if we can just return a byte array in memory.
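For example, something along these lines (a hypothetical overload with made-up names, not anything in the library yet):

```csharp
using System.IO;

// Hypothetical convenience overload: take the compressed data as a byte array
// and hand the result back the same way, instead of round-tripping through
// temp files on disk. DecompressKosinski(Stream, Stream) is assumed here.
public static byte[] DecompressKosinski(byte[] compressed)
{
    using (var input = new MemoryStream(compressed))
    using (var output = new MemoryStream())
    {
        DecompressKosinski(input, output);   // hypothetical stream-based core
        return output.ToArray();
    }
}
```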
It would be nice if you could make the public API for them the same as Kosinski's. And if you didn't know S2LVL was on the SVN, perhaps you should subscribe to the RSS feed, or join #svn on BadnikNET (lol advertising). But the code there is in the middle of a large rewrite of the object definition format, and it isn't suitable for public release.
Yes, I've only done Kosinski compression and decompression so far, but as I said, you're welcome to do the other compressors (plus, you've already done the decompressors). Public.cs in the Kosinski project of KENSSharp defines the public API. EDIT: Also, seeing the code again reminded me of the other bug I found in flamewing's code: the use of std::istream::ignore() in kosinski::decode is incorrect in this context. std::istream::ignore() skips bytes until it finds the given delimiter (here, the null byte). The original code skipped over a run of null bytes, but your code discards every byte until it finds a null byte.
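To put it another way, the intended behaviour is "skip a run of null bytes", not "discard everything up to the next null byte". A rough C# sketch of the intended behaviour (not the actual KENSSharp code):

```csharp
using System.IO;

// Skips a run of consecutive null bytes and leaves the stream positioned on
// the first non-null byte -- what the original C code did. By contrast,
// std::istream::ignore(count, '\0') discards bytes *until* it finds a null,
// which is the opposite.
static void SkipNullBytes(Stream input)
{
    int b;
    while ((b = input.ReadByte()) == 0) { }  // consume the run of null bytes
    if (b != -1)
        input.Seek(-1, SeekOrigin.Current);  // put the non-null byte back
}
```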
Thanks for the bug reports. This is what happens when I trust my coding skills enough not to test a particular case -- I screw up :-p You can check out my versions at my Google Code repo (browse the code and look for the libs directory). These decompressors are easy to convert, as are the compressors for Kosinski and Enigma. The Nemesis compressor is an unportable ugly mess which I wasn't able to figure out, so I ended up writing it from scratch based on a description of the format. The output is slightly larger than the KENS version's, but it is much faster and it is portable. What files are you testing as input for the Saxman compressor/decompressor? I haven't converted it to C++ yet because I couldn't find any information on it, or any files compressed with it.
Compression algorithms shouldn't be fast, compression algorithms should be efficient. WinZip is much faster than 7z when it comes to creating a zip file, and of course the zip files created with 7z are much smaller. Don't misread me: I am glad there are new portable versions of these algorithms, but I'm even more glad (glader?) that you're putting these on an SVN, so improvements to the code will always be possible.
My algorithm is fast because it uses a modern version of Huffman coding, which is optimal in compression time, space and ratio; however, the Nemesis format is actually a length-limited Huffman code, which is why the resulting file size is not optimal. I am still learning the Package-Merge algorithm, which will allow for better compression and will probably still be faster than the KENS version.
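For reference, here is a minimal C# sketch of the textbook Package-Merge algorithm as I currently understand it (not the KENS code, and not necessarily what my encoder will end up looking like): packages are built level by level, and a symbol's code length is the number of the 2(n-1) cheapest top-level items that contain it.

```csharp
using System;
using System.Collections.Generic;

// Computes optimal length-limited Huffman code lengths for the given symbol
// frequencies and maximum code length (textbook Package-Merge, sketched).
static int[] PackageMerge(long[] freqs, int maxLength)
{
    int n = freqs.Length;
    if (n > (1 << maxLength))
        throw new ArgumentException("Max code length too small for this many symbols.");

    // An item is a weight plus, per symbol, a count of how many leaves it contains.
    var leaves = new List<(long Weight, int[] Counts)>();
    for (int i = 0; i < n; i++)
    {
        var c = new int[n];
        c[i] = 1;
        leaves.Add((freqs[i], c));
    }
    leaves.Sort((a, b) => a.Weight.CompareTo(b.Weight));

    var packages = new List<(long Weight, int[] Counts)>();
    var merged = new List<(long Weight, int[] Counts)>();
    for (int level = maxLength; level >= 1; level--)
    {
        // Working list for this level: every leaf plus the packages carried
        // up from the level below, cheapest first.
        merged = new List<(long Weight, int[] Counts)>(leaves);
        merged.AddRange(packages);
        merged.Sort((a, b) => a.Weight.CompareTo(b.Weight));
        if (level == 1)
            break;

        // Pair adjacent items into packages for the next level up; a trailing
        // unpaired item is simply dropped.
        packages = new List<(long Weight, int[] Counts)>();
        for (int k = 0; k + 1 < merged.Count; k += 2)
        {
            var c = new int[n];
            for (int i = 0; i < n; i++)
                c[i] = merged[k].Counts[i] + merged[k + 1].Counts[i];
            packages.Add((merged[k].Weight + merged[k + 1].Weight, c));
        }
    }

    // Keep the 2(n-1) cheapest items at the top level; a symbol's code length
    // is the number of kept items that contain it.
    var lengths = new int[n];
    for (int k = 0; k < 2 * (n - 1) && k < merged.Count; k++)
        for (int i = 0; i < n; i++)
            lengths[i] += merged[k].Counts[i];
    return lengths;
}
```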
I felt that the decompressor for Nemesis was an ugly mess too, so I'm not surprised the compressor is worse. My logic there is still mostly a direct line-by-line rewrite; I tried to clean up some of the extra junk, but it wasn't worth the effort. As long as it works, I guess it's fine for now. I was testing it on the data in /sound/music in the Sonic 2 split disassembly. The readme that came with the decompressor lists which songs it works with, so it's not supposed to work with everything. It also doesn't help that I can't find any documentation on the format: everything I find keeps saying to look at "the other guide", and this other guide is nowhere to be found.
Okay, sorry to bump your thread, but I figured this was worth it. I've ported over all 4 compressors, so the initial version of KensNET is done. http://www.sappharad.com/junk/KensNETv1.zip All method signatures should match the originals right now, so you should be able to drop it in place and call the .NET versions of everything.

There were actually some bugs in the original Nemesis and Enigma compressors that I had to fix in the process. There were a few instances where the pointer would go one past the end of the array; I either moved the bounds checks (in some cases they came after the read) or added new ones. Other changes: I fixed a problem with the Nemesis decompressor. I had originally replaced instances of pow(2,x) with (1<<x), assuming it would be faster and always produce the same result. What I wasn't expecting was that x is sometimes negative, and the algorithm actually needs that fractional value to work. This appears to fix the files that weren't working before. I also fixed the problem I was having with Saxman decompression, which was caused because I misinterpreted the _read()==0 checks as checking the byte returned from read, when they were actually checking for the end of the file.

After testing each on a few different files, I'm getting identical results on both compression and decompression compared to the original code. Please consider using this, and let me know if you find any problems. I'd still like to offer some additional signatures to call, as I see no reason why you wouldn't want one that gives you back the results in a MemoryStream so you don't need the overhead of reading/writing temp files.
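To illustrate the pow(2,x) issue above, here's a standalone C# demo of the mismatch (not the library code):

```csharp
using System;

// A left shift only matches pow() for non-negative exponents; the Nemesis
// decompressor sometimes needs the fractional result of a negative one.
Console.WriteLine(Math.Pow(2, 3));   // 8
Console.WriteLine(1 << 3);           // 8  -- same so far
Console.WriteLine(Math.Pow(2, -2));  // 0.25 -- the fractional value the algorithm relies on
Console.WriteLine(1 << -2);          // 1073741824 -- C# masks the shift count to 5 bits, so -2 becomes 30
```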
I set up a GHZ1 file with my file locations and the like, but when I open the S1LVL file with the program, it comes up with a message and closes... no clue why.
No need to apologize -- you had no way of knowing. For what it's worth, now that I have finished coding the Package-Merge version of the Nemesis encoder, I can say that this is the algorithm Nemesis used in the version included in KENS. My version is much faster and portable, and it now gives the same compressed file size as the KENS version did (plus 0.5 bytes on average, because I explicitly pad files of odd length with an extra byte). The files are almost identical, in fact -- the only differences I have seen so far are the order of elements in the header table and the final padding byte. I will update my SVN later today, after I test it on a few other files to make sure it is working correctly. Edit: Correction: my version actually achieves better compression for some files, but the difference is minor.
In case you missed it, I posted my KensNET library on the previous page, with all 4 compressors and decompressors ported and working, with source code. But as I mentioned in FraGag's thread, I agree that his KENSSharp is probably the route to go in the end, since the code will be clean. I encourage you to use my other compressors/decompressors in the meantime if you'd like, until his are done. Also, the graphics corruption in your screenshot above is not caused by his decompression code (just in case you weren't aware of where the problem originates). It appears you broke your code sometime after the beta 4 release: if I plug my library into your beta 4 release, it works fine; if I plug it into your latest SVN code, I see the same corrupt art.
I will only use KENSSharp for (de)compression, mostly because I started it in the first place (although the only thing left from my attempt is the public API and the design of Comp). I know it's not the decompression, as objects appear fine (but with some layering issues, possibly due to the settings in Graphics.SetOptions). The difference between objects and blocks/chunks is that objects use Graphics.DrawImage to put the tiles onto the bitmap, whereas blocks use Bitmap.DrawBitmap (Extensions.cs), because the Graphics class cannot be used with indexed bitmaps. The explanations I can think of are that Bitmap.LockBits is giving me a bad pointer (but then it would probably SIGSEGV, right?) or the data is somehow different. Edit: Somehow, chunk images are being switched around when running in Mono.
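For reference, the kind of LockBits copy I mean looks roughly like this (a simplified sketch, not the exact code in Extensions.cs; pixels is assumed to be one index byte per pixel, row-major):

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// Simplified sketch of writing 8bpp indexed pixel data through LockBits:
// Graphics won't draw onto indexed bitmaps, so the index bytes are copied
// into the locked buffer row by row, respecting Stride.
static void WriteIndexedPixels(Bitmap bmp, byte[] pixels)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format8bppIndexed);
    try
    {
        for (int y = 0; y < bmp.Height; y++)
            Marshal.Copy(pixels, y * bmp.Width, data.Scan0 + y * data.Stride, bmp.Width);
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
```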
Meanwhile, I've added read-only S3K support. Read-only, because I'm not sure how to save the 8x8 and 16x16 files.
Beta 5 is now available, after 59 revisions and 596 file changes.
- Redid object definitions to add proper display of more complex objects.
- Dropped support for the S2 2007 and S1 2005 disassemblies.
- Added some chunk, block and palette editing.
- Added start position editing.
- Added underwater palette support.
- A log file will be saved in the event of a crash.
- Read-only support for Sonic 3 & Knuckles.
Object definitions are on the SVN, in the relevant disassembly folder.
That's disappointing, considering that these two disassemblies are still in wide use today. Of course, I know you want to push usage of the SVN disassemblies as much as possible, but then again, you've said yourself that your loading method is specific and will only work with specific names and such. You should write something to allow wider support; otherwise, how can you convince people to use your program instead of SonED2? Perhaps once Linux support fully works, that will be more of an incentive, but remember: the disassemblies you support see minimal usage compared to the widely available and widely promoted earlier disassemblies.