I found an excellent color reduction algorithm that can be used to produce VB artwork from existing images.
Spatial color quantization is a novel technique for decreasing the color depth of an image, first described in the paper “On Spatial Quantization of Color Images” by Jan Puzicha, Markus Held, Jens Ketterer, Joachim M. Buhmann, and Dieter Fellner, researchers at the University of Bonn in Germany. It combines palette selection and dithering with a simple perceptual model of human vision to produce superior results for many types of images.
scolorq, standing for “spatial color quantization”, is a faithful implementation of the highest-quality algorithm described in the paper. Its results have richer colors and often better detail than the widely-used median cut and octree algorithms supplied in bitmap editors like Adobe Photoshop, Paint Shop Pro, and the GIMP, particularly when reducing to very low color depths, like 4, 8, or 16 colors, as the sample images below demonstrate.
Attached is a sample image, before and after. The only problem I can see is that it is a bit slow (an estimated 45 seconds to 1 minute for a single 384×224 image on my 2.4GHz P4). However, some speed might be gained by converting the algorithm to work on an 8-bit grey-scale image (possibly incorporating the grey-scale conversion itself) rather than a 24-bit, full-color one. Perhaps one of the paper's algorithms other than the “highest-quality” one implemented could also be used, without incurring a noticeable quality loss.
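To illustrate the grey-scale idea: the conversion could be folded in up front so the quantizer only ever sees one channel instead of three. This is just a sketch of that preprocessing step, not scolorq's actual internals; the function name and the Rec. 601 luma weights are my own choices.

```python
# Sketch of the proposed speed-up: collapse 24-bit RGB input to a single
# 8-bit luminance channel *before* quantizing, so the quantizer works on
# one component instead of three. Weights are the common Rec. 601 luma
# coefficients; scolorq itself does not necessarily work this way.

def rgb_to_gray(pixels):
    """Convert a list of (r, g, b) tuples to 8-bit luminance values."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in pixels]

# Example: a pure red, green, and blue pixel
print(rgb_to_gray([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))  # → [76, 150, 29]
```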
I would also like to see some kind of stereo optimization incorporated, i.e., find the differences (beyond some threshold) between the two images of a stereo pair, then quantize one image in full and only the differing regions of the other. This would reduce disparity artifacts caused by dithering.
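As a rough sketch of that idea (everything here is hypothetical, since no such mode exists in scolorq): build a mask of pixels where the two eyes differ by more than a threshold, and reuse the left eye's quantized pixels everywhere else, so both eyes dither identically in matching regions.

```python
# Hypothetical stereo-pair sketch. Images are flat lists of 8-bit gray
# values; the threshold of 8 is arbitrary.

def stereo_diff_mask(left, right, threshold=8):
    """True where the two eyes differ enough to need separate quantizing."""
    return [abs(a - b) > threshold for a, b in zip(left, right)]

def merge_right(left_quantized, right_quantized, mask):
    """Keep the right eye's own result only inside the difference mask,
    carrying the left eye's dither pattern over everywhere else."""
    return [r if m else l
            for l, r, m in zip(left_quantized, right_quantized, mask)]

left  = [10, 10, 200, 200]
right = [12, 10, 100, 200]   # only the third pixel differs beyond the threshold
print(stereo_diff_mask(left, right))  # → [False, False, True, False]
```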
One thing to notice about the after shot below: the shades chosen are not equally spaced. On the VB, this will have to be compensated for with appropriate BRTx register settings.
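For what it's worth, here is how that compensation might look, assuming the commonly documented BRTx behavior (shade 1 displays at BRTA, shade 2 at BRTB, and shade 3 at BRTA + BRTB + BRTC); the 0-255 scaling is also an assumption, since real brightness depends on display timing.

```python
# Rough sketch of deriving BRTx settings from three chosen shade levels
# (0-255). Assumes shade 1 = BRTA, shade 2 = BRTB, shade 3 = BRTA+BRTB+BRTC,
# which also exposes a hardware constraint: shade 3 can't be dimmer than
# the sum of the other two.

def shades_to_brt(s1, s2, s3):
    """Map three increasing shade intensities to (BRTA, BRTB, BRTC)."""
    brta = s1
    brtb = s2
    brtc = s3 - s1 - s2          # so that BRTA + BRTB + BRTC == s3
    if brtc < 0:
        raise ValueError("shade 3 must be at least shade 1 + shade 2")
    return brta, brtb, brtc

print(shades_to_brt(32, 96, 208))  # → (32, 96, 80)
```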
That looks nice, but I think there’s too much detail/random noise… I doubt you’d be able to convert that to chars/bgmap (or it’d fill up almost all the Char RAM).
I convert my images in Photoshop, and if I do it with diffusion dithering, it looks nice, but most large images run out of chars… with pattern dithering, though, there are usually quite a few chars that are identical, which of course is good for us. Here’s an example added to the right of yours.
That’s a good point, DogP.
I probably wouldn’t use it for large images like that, unless they were for a cut-scene or something, but it could be useful for converting sprites with lots of colors, where there wouldn’t be a lot of repeating chars anyway, no matter what dither is used.
I’ve used pattern dithering, too, but, as can be seen in your example, a lot of detail is lost, especially in the darker areas. Plus, I just don’t like how it looks, especially when animated.
One of the many (many, many, MANY) projects I still haven’t started is a program that can analyze a set of tiles, find “virtual duplicates” (tiles with, say, the same number and general distribution of each shade), and let you choose which to keep (or just arbitrarily throw one away). It could also take the form of a more intelligent version of Alberto’s “Map Constructor” that looks at the image prior to (or during) conversion.
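Since I haven't written it, here's only a sketch of how the duplicate-finding pass might start: bucket 8×8 tiles by their shade histogram, so tiles with the same count of each shade (a crude stand-in for "same general distribution") land in one candidate group. A real tool would then compare layouts within each group.

```python
# Hypothetical "virtual duplicate" finder. Tiles are tuples of 64 shade
# indices (0-3); tiles whose shade histograms match exactly are grouped
# as candidates for merging.

from collections import defaultdict

def group_virtual_duplicates(tiles):
    """Group tile indices whose shade histograms match exactly."""
    groups = defaultdict(list)
    for i, tile in enumerate(tiles):
        histogram = tuple(tile.count(shade) for shade in range(4))
        groups[histogram].append(i)
    # Only groups with more than one tile are candidate duplicates
    return [idxs for idxs in groups.values() if len(idxs) > 1]

tiles = [
    tuple([0] * 32 + [3] * 32),   # half dark, half bright
    tuple([3] * 32 + [0] * 32),   # same histogram, different layout
    tuple([1] * 64),              # all one shade: no partner
]
print(group_virtual_duplicates(tiles))  # → [[0, 1]]
```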
Another way to save chars would be to find tiles that are the same basic pattern, but lighter or darker (or inverted, etc.). Then, one char could be used with two different palettes (in addition to flipping and/or mirroring).
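A sketch of how those lighter/darker/inverted matches could be detected (again, just an illustration I'm making up here): relabel each tile's shades by order of first appearance, so any two tiles that differ only by a consistent shade remapping reduce to the same canonical pattern, and could then share one char under two palettes. Flip and mirror checks are left out.

```python
# Sketch of spotting "same pattern, different palette" tiles. Two tiles
# can share one char (with different palettes) if one is a consistent
# shade-remapping of the other; relabeling shades by first appearance
# gives both the same canonical form.

def canonical_pattern(tile):
    """Relabel shades by order of first appearance, preserving the pattern."""
    mapping = {}
    out = []
    for shade in tile:
        if shade not in mapping:
            mapping[shade] = len(mapping)
        out.append(mapping[shade])
    return tuple(out)

light = (1, 2, 2, 1)
dark  = (0, 1, 1, 0)   # same pattern, one shade darker
inv   = (3, 2, 2, 3)   # same pattern, inverted
print(canonical_pattern(light) == canonical_pattern(dark) == canonical_pattern(inv))
# → True
```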
Yeah… also, one difference between mine and yours is that mine uses three fixed, equally spaced shades of gray… of course, it could be optimized to look better with different shades of gray.