Keen 1-3 RLE compression

Format type: Compression algorithm
I/O unit size: 1-bit

Commander Keen 1-3 uses RLE compression to compress its 320x200 FINALE.CKx and PREVIEWx.CK1 screens. Dangerous Dave also uses this format to compress its graphics. It was previously used in Catacomb I and II for the level data (not the graphics). This is one of many possible ways to create a functional RLE system.

The compressed file consists of a UINT32LE giving the decompressed size, followed by the compressed data.

The compressed data is processed in a loop. After each byte is read, its value is used to select an action. If the high bit is set (val & 0x80), then one more than the lower seven bits ((val & 0x7F) + 1) gives the number of following bytes to pass through unchanged. If the high bit is not set, then the value plus three (val + 3) gives the number of times the single following byte is repeated.

The next byte is then read and the same procedure followed until there is no more data or the decompressed size in the header has been reached.

An illustration:

80 aa 00 00 81 bb cc 05 dd

This data is interpreted as follows:

80 aa     # Copy AA through unchanged
00 00     # Repeat 00 three times
81 bb cc  # Copy BB and CC through unchanged
05 dd     # Repeat DD eight times

Or as pseudocode:

  • Get the initial dword [Final length]
  • While [Decompressed data length] < [Final length]:
    1. Read a byte [Value]
    2. If [Value] >= 128, then:
      • Copy the next ([Value] - 127) bytes to the output unchanged
      • Return to step 1.
    3. If [Value] < 128, then:
      • Read the next byte and output it ([Value] + 3) times.
      • Return to step 1.
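The loop above can be sketched in Python (a minimal illustration; the function name is my own and input validation is omitted):

```python
def rle_decompress(data: bytes) -> bytes:
    # First four bytes: UINT32LE giving the decompressed size.
    final_len = int.from_bytes(data[:4], 'little')
    out = bytearray()
    i = 4
    while i < len(data) and len(out) < final_len:
        val = data[i]
        i += 1
        if val & 0x80:
            # High bit set: (val & 0x7F) + 1 bytes are copied through unchanged.
            count = (val & 0x7F) + 1
            out += data[i:i + count]
            i += count
        else:
            # High bit clear: the next byte is repeated (val + 3) times.
            out += bytes([data[i]]) * (val + 3)
            i += 1
    return bytes(out)
```

Fed the example stream above (prefixed with a UINT32LE header giving its decompressed size of 14), this yields AA, three 00 bytes, BB, CC, and eight DD bytes.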

It can be seen that this system is not optimal: since the minimum repeat count is three, a run of only two identical bytes must be encoded as non-repeating literals.
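A matching compressor, sketched here as a hypothetical greedy encoder (not taken from any of the games' source), makes this limitation visible: any run shorter than three bytes falls into a literal block.

```python
def rle_compress(data: bytes) -> bytes:
    # Output starts with a UINT32LE giving the decompressed size.
    out = bytearray(len(data).to_bytes(4, 'little'))
    i = 0
    while i < len(data):
        # Measure the run of identical bytes at i (cap 130: code byte 0x7F).
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 130:
            run += 1
        if run >= 3:
            out.append(run - 3)   # 0x00..0x7F: repeat next byte (code + 3) times
            out.append(data[i])
            i += run
        else:
            # Gather literals until a run of 3+ starts (cap 128: code byte 0xFF).
            start = i
            while i < len(data) and i - start < 128:
                r = 1
                while i + r < len(data) and data[i + r] == data[i] and r < 3:
                    r += 1
                if r >= 3:
                    break
                i += 1
            count = i - start
            out.append(0x80 | (count - 1))  # 0x80..0xFF: count literals follow
            out += data[start:i]
    return bytes(out)
```

On the example data this reproduces the illustrated stream, and a two-byte run such as EE EE comes out as the literal block 81 EE EE rather than as a repeat.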

Implementation bugs

Some implementations of this particular RLE algorithm have a bug that causes the decompression code to behave erratically for compressed data whose expanded size exceeds a certain number of bytes. Since the original games were programmed for 16-bit x86 processors, memory pointers need to be adjusted when reaching a 64k boundary. The RLE implementation adjusts the output pointer incorrectly when the offset reaches 0xFF00 (Dangerous Dave) or 0xFFF0 (Commander Keen 1-3, Catacomb II). Reaching said offset causes the code to add offset / 16 to the segment part of the address (which is correct), but then it sets the offset to 0 instead of setting it to the remainder of that division (i.e. the offset bitwise ANDed with 0xF). This effectively makes the decompression code overwrite up to 15 bytes of the already generated output.
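The effect of the faulty normalization can be illustrated with a few lines of Python (the segment/offset values are hypothetical and the helper names are my own):

```python
def normalize_correct(seg, off):
    # Proper 16-bit pointer normalization: move whole 16-byte paragraphs
    # into the segment and keep the remainder in the offset.
    return seg + (off >> 4), off & 0xF

def normalize_buggy(seg, off):
    # The buggy variant: the segment is adjusted the same way,
    # but the offset is simply zeroed.
    return seg + (off >> 4), 0

def linear(seg, off):
    # Real-mode physical address: segment * 16 + offset.
    return seg * 16 + off
```

With, say, seg = 0x1000 and off = 0xFFF3, the correct version leaves the linear address unchanged, while the buggy one moves the write pointer back by off & 0xF = 3 bytes, so the next run overwrites the last 3 bytes of output.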

The offset is only checked after writing a run of repeating or non-repeating data, which makes it hard to predict what the actual offset will be when the limit is reached and the pointer is normalized. If the four least significant bits of the offset are 0 at that point, then the offset will be adjusted correctly and no corruption occurs, otherwise up to 15 bytes are overwritten.

A possible workaround for this bug would be to make sure the RLE compression routines generate data that ends a run at offset 0xFF00 (or 0xFFF0, depending on which number the decompression code uses). This means the compression code needs to keep track of the 16-bit offset value the decompression code is going to use, and produce a special, slightly less efficiently compressed run when that offset is reached.

This implementation bug is usually not a problem, since most data compressed with this algorithm is nowhere near 64k. In Commander Keen 1-3, this form of RLE stores full-screen EGA images, which are only 32000 bytes when decompressed. The first two Catacomb games use this for their levels, which are only 4096 bytes when decompressed.

Note that the limit of 0xFFF0 in Keen 1-3 and Catacomb II also means that certain run lengths can cause an overflow of the offset value that will remain undetected by the decompression code, causing it to start overwriting the last 64k written to the output. For example, if the current offset is 0xFFEF, which is smaller than the limit of 0xFFF0, the code doesn't adjust the offset and proceeds with the next RLE run. If that run writes 17 bytes to the output, the new offset will be 0x0000 instead of 0x10000 (since the code is using 16-bit registers/variables), and the new offset will still be considered smaller than 0xFFF0. Again, this is not a big problem, since these games never use any compressed data large enough to trigger this.
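The undetected overflow is plain 16-bit wraparound, as a quick Python check shows (the offsets are the ones from the example above):

```python
LIMIT = 0xFFF0  # Keen 1-3 / Catacomb II

off = 0xFFEF
assert off < LIMIT         # below the limit, so no normalization happens

off = (off + 17) & 0xFFFF  # a 17-byte run wraps the 16-bit offset
assert off == 0x0000       # 0x10000 truncated to 16 bits
assert off < LIMIT         # still "below the limit": the wrap goes unnoticed
```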

Example code

  • JavaScript: the cmp-rle-id algorithm in gamecomp.js
  • x86 Assembly: CATASM.ASM from Catacomb source code (has a limit of 0xFF00, but not the normalization bug)