Working on changing the tilemap format for the rooms/screens. Originally, the game uses a 32×32 meta-tile format. IIRC, it's one byte per 32×32 block, so that's 64 bytes per 256×256 pixel screen. Pretty small. The same 256×256 pixel screen stored as raw 8×8 tiles (with a subpalette per tile) is 2048 bytes. Quite a bit bigger. Cutman's level has 22 rooms. If I made each room unique, then that's ~45k just for that level. Not a big deal if this were strictly a HuCard/ROM project, but that's pushing it a bit for the CD project. I've already decided that I'm going to break the levels into 'loads', and certain things can be loaded directly into VRAM from the CD track at the start of a level, but even then, I still need to keep the space requirements down.
I figured I can build on the 32×32 block system. I could double the map entry with a parallel map, so that each block is two bytes instead of one. That gives me 65,536 possible blocks; too many for a lookup table, but I don't need to use all the bits. With a 16k decode table, I can get about ~630 32×32 blocks: each block decodes into 16 8×8 tiles, and each tile gets 9-bit addressing (access up to 512 tiles in VRAM) plus its own unique 4-bit subpalette. That's 13 bits per tile, so 16 × 13 = 208 bits = 26 bytes per block, and 16384 / 26 ≈ 630. I'd bitpack everything to cram it into the 16k table. I think the original game uses something like 64 32×32 blocks, so this would be a big improvement in just that alone. I could even use the unused bits for collision stuff.

For the backend PPU emulation, I'll add a special 'reg' that the code reads from to get the additional subpalette bits and the MSbit. It'll clear itself after being read, too.