ZX0 data compressor

By jltursan

Prophet (2538)

23-09-2021, 10:18

Sometimes you don't need an efficient way, but simply to save some RAM ;)

By Grauw

Ascended (10056)

23-09-2021, 15:27

Well I reckon the decompression code is open source and short, so you can modify it to do what you want, right?

By santiontanon

Paragon (1482)

23-09-2021, 16:00

I think the "standard" version should be fairly easy to adapt to write to VDP. The main instruction used to write the decompressed data is an "ldir", so, that can probably just be replaced by some "otir" (with some extra code at the beginning to set the write address to the VDP, etc.)

By theNestruo

Champion (314)

23-09-2021, 17:04

santiontanon wrote:

I think the "standard" version should be fairly easy to adapt to write to VDP. The main instruction used to write the decompressed data is an "ldir", so, that can probably just be replaced by some "otir" (with some extra code at the beginning to set the write address to the VDP, etc.)

No, it's not so easy... An illustrative (VERY simplified) example: if the unpacked data is ABCABCABC, the algorithm says: 3 literal bytes (and writes the first ABC with an ldir), then 6 bytes from offset -3 (and writes the rest with an ldir... reading from the previously unpacked ABC).
That copy from previously unpacked data (negative offset) is what I think cannot be done in an efficient (neither fast nor short) manner.
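
For comparison, in the plain RAM version that step is trivial; something like this (illustrative only, not the actual dzx0 source), with DE as the current output pointer and BC as the match length:

        ld      hl,-3           ; offset -3 from the current output position
        add     hl,de           ; HL points at the previously unpacked "ABC"
        ldir                    ; copies byte by byte, so it happily re-reads
                                ; bytes it has just written when length > offset

Once the destination sits behind the VDP data port instead of in the address space, there is no cheap equivalent of that ldir.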

By Metalion

Paragon (1444)

24-09-2021, 09:44

Not easy but feasible, albeit probably much slower.
You can always read data from VRAM and then write it back.
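
A rough sketch of that idea (purely illustrative; it assumes ports $98/$99, 14-bit VRAM addresses, interrupts disabled, and ignores the VDP access timing constraints):

        ; HL = VRAM address of the previously unpacked data (destination - offset)
        ; DE = current VRAM destination, BC = number of bytes to copy
CopyLoop:
        ld      a,l
        out     ($99),a
        ld      a,h
        and     $3F             ; bit 6 clear = set up a VRAM read
        out     ($99),a
        in      a,($98)         ; read one previously unpacked byte
        push    af
        ld      a,e
        out     ($99),a
        ld      a,d
        and     $3F
        or      $40             ; bit 6 set = set up a VRAM write
        out     ($99),a
        pop     af
        out     ($98),a         ; write it at the destination
        inc     hl
        inc     de
        dec     bc
        ld      a,b
        or      c
        jr      nz,CopyLoop

Reprogramming the VDP address pointer twice per byte is what makes this so much slower than the RAM ldir, but it still handles the overlapping case (offset smaller than length) correctly because it goes byte by byte.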

By santiontanon

Paragon (1482)

24-09-2021, 14:31

Ah, I see. Indeed! I had not considered that.

By pgimeno

Champion (300)

25-09-2021, 02:44

For benchmarks, this is the mandatory reference (pletter and ZX0 are already listed):
https://github.com/uniabis/z80depacker

It contains speed, size and compression benchmarks.

Manuel wrote:

I'd say that ideally, the decompression speed is maximized, or the size of the data and/or decompressor is minimized, or some good combination of that suitable to the project. Given "infinite" calculation power of a modern computer to create the compressed data that is most suited for the targeted purpose, how far could one go?

Not much farther; in fact, I believe the compressors already take that into account and aim for the best possible compression for their model. The best speed/compression compromise is probably that of the LZ77-based algorithms, and those model repetitions of sequences within the data; so you depend on how many repetitions the data has, and on the specifics of the compression algorithm, such as the expected frequencies of repetitions, lengths, etc., which favour shorter or longer distances or run lengths.

Going for a different algorithm is either going to add complexity to the decoder side, meaning it will be anywhere between "not so fast" and "unbearably slow", or going to decrease compression ratio (e.g. RLE).

By Metalion

Paragon (1444)

26-09-2021, 20:04

pgimeno wrote:

For benchmarks, this is the mandatory reference (pletter and ZX0 are already listed):
https://github.com/uniabis/z80depacker

Well, using that very good reference, I wanted to make a full comparison of all the Z80 decompression tools. So I made an Excel file with all those results and, for each tool, gave it an amount of points based on its rank in each of those 3 measures (the maximum for each measure was 100 points):

. compression ratio (on all files tested, in %),
. Z80 depacker size (in bytes),
. Z80 depacking speed (relative to a comparative LDIR).

Out of a maximum total of 300 points, here are the top 5 decompression tools:

Packer   Depacker               Size (bytes)   Pts   Ratio    Pts   Speed (vs. LDIR)   Pts   Total
zx0      dzx0_standard               68         92   61,81%    67        4,53           54    213
zx1      dzx1_standard_ix            69         90   63,78%    55        3,54           67    212
zx1      dzx1_standard_ix_180        68         92   63,78%    55        3,59           65    212
zx1      dzx1_standard               68         92   63,78%    55        3,69           62    209
lzsa1    unlzsa1_small               67         95   69,54%    21        1,87           92    208

So ZX0 with the standard depacker is number one, with 213 points.

At the moment, those points are not weighted, but it might be interesting to discuss how we could weight them (giving more weight to speed, for example).
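
Just as one possible scheme (not something from the benchmark itself): with three weights that sum to 3, the total would become

    Total = W_size x PTS_size + W_ratio x PTS_ratio + W_speed x PTS_speed

so the unweighted table above is simply the case W_size = W_ratio = W_speed = 1, and something like 0,5 / 0,5 / 2 would favour the fastest depackers while keeping the maximum at 300 points.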

By theNestruo

Champion (314)

26-09-2021, 20:24

Metalion wrote:

At the moment, those points are not weighted, but it might be interesting to discuss how we could weight them (giving more weight to speed, for example).

Thank you for the data!
About points and weights: I'm afraid that, like code optimizations, it completely depends on each particular scenario.
I usually prefer size (a shorter decompressor and a better ratio) over speed, because my game unpacks between screens, where losing one frame (or even a few) is not noticeable... But if, for example, I had to unpack sprite patterns on the fly, I'd likely sacrifice the size of the decompressor to get more speed and avoid dropping frames.
