There has been some fuss recently about a new algorithm that vectorizes pixel art. And yes, judging from the example pictures, this algorithm by Johannes Kopf and Dani Lischinski indeed seems to produce results superior to the likes of hq*x and scale*x or mainstream vectorization algorithms. Let me duplicate the titular example for reference:
Impressive, yes, but as with all such algorithms, the first question that came to my mind was: "But does it handle dithering and antialiasing?" The paper explicitly answers this question: no.
All the depixelization algorithms so far have been successful only with a specific type of pixel art: cartoonish works with clear lines and not too many details. This kind of pixel art may have been mainstream in Japan, but in the Western sphere, especially in Europe, there has been a strong tradition of optimalism: the tendency to maximize the amount of detail and shading within a limited grid of pixels. An average pixel artwork on the Commodore 64 or the ZX Spectrum contains an extensive amount of careful manual dithering. If we wish to find a decent general-purpose pixel art depixelization algorithm, it will definitely need to take care of that.
I once experimented with writing an undithering filter that attempts to smooth out dithering while keeping non-dithering-related pixels intact.
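To give an idea of what such a filter could look like, here is a minimal sketch of a checkerboard-smoothing heuristic. The neighborhood test and the 50% checkerboard assumption are my own illustration for this post, not necessarily how the actual filter works:

```python
def undither(img):
    """Smooth 50% checkerboard dithering, leaving other pixels intact.

    img is a list of rows of (r, g, b) tuples.  A pixel is treated as
    a dithering pixel when its four orthogonal neighbours all share
    one colour that differs from the pixel's own colour, i.e. it sits
    in a perfect checkerboard cell; it is then replaced by the average
    of the two colours.  Border pixels are left untouched.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            neigh = [img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1]]
            if neigh[0] != c and all(n == neigh[0] for n in neigh):
                out[y][x] = tuple((a + b) // 2 for a, b in zip(c, neigh[0]))
    return out
```

A pixel in a solid-colour area never satisfies the test, so contours and flat regions pass through unchanged; only perfect checkerboard cells get averaged.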
Would it be possible to improve the performance of a depixelization algorithm by first piping the picture through my undithering filter? Let's try it out. Here is an example of how the filter copes with a fullscreen C-64 multicolor-mode artwork (from the demoscene artist Frost of Panda Design) and how the results are scaled by the hq4x algorithm:
The undithering works well enough within the smooth areas, and hq4x is even able to recognize the undithered areas as gradients and smooth them a little further. However, when looking at the border between the nose and the background, we'll notice careful manual antialiasing that even adds some lonely dithering pixels to smooth out the staircasing. My algorithm doesn't recognize these lonely pixels as dithering, and neither does it recognize the loneliest pixels on the outskirts of dithered gradients as dithering. It is difficult to algorithmically detect whether a pixel is intended as a dithering pixel or as a contour/detail pixel. Detecting antialiasing would be an entirely different task, requiring a whole new set of processing stages.
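One way to make this ambiguity concrete (again my own illustration, not part of the filter above) is to score how well a pixel's 8-neighbourhood agrees with a checkerboard pattern:

```python
def dither_score(img, x, y):
    """Fraction of the 8 neighbours consistent with a checkerboard:
    orthogonal neighbours should differ from the pixel, diagonal ones
    should match it.  img is a list of rows of colour values; (x, y)
    must not be a border pixel.
    """
    c = img[y][x]
    score = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == dy == 0:
                continue
            n = img[y + dy][x + dx]
            diagonal = (dx + dy) % 2 == 0
            if (n == c) == diagonal:
                score += 1
    return score / 8
```

A pixel deep inside a dithered gradient scores 1.0, but a lone stray pixel scores around 0.5, which is exactly the same range as pixels on the edge of a genuine dithered area, so no fixed threshold separates "lonely dithering" from "intentional detail".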
There still seems to be a lot of work to do. But suppose that, some day, we discover the ultimate depixelization algorithm: an image recognition and rerendering pipeline that successfully recognizes and interprets contours, gradients, dithering, antialiasing and everything else in all conceivable cases, and rerenders it all at high resolution and color depth without any distracting artifacts. Would that be the holy grail? I wouldn't say so.
The fact is that we already have the ultimate depixelization algorithm -- the one running in the visual subsystem of the human brain. It is able to fill in amazing amounts of detail when coping with low amounts of reliable data. It handles noisiness and blurriness better than any digital system. It can extrapolate very well from low-complexity shapes such as silhouette drawings or groups of blurry dots on a CRT screen.
A fundamental problem with the "unlimited resolution" approach to pixel art upscaling is that it attempts to fill in details that aren't there -- a task at which the human brain is vastly superior. Replacing blurry patterns with crisp ones can even effectively switch off the viewer's visual imagination: a grid of blurry dots on the horizon can be just about anything, but if they get algorithmically substituted by some sort of crisp blobs, the illusion disappears. I think it is outright stupid to waste computing resources and watts on something that kills the imagination.
The reason why pixel art upscaling algorithms exist in the first place is that sharp rectangular pixels (the result of nearest-neighbor upscaling) look bad. And I have to agree with this. Too easily recognizable pixel boundaries distract the viewer from the art. The scaling algorithms designed for video scaling partially solve this problem with their interpolation, but the results are still quite bad for the opposite reason -- because there is no respect for the nature of the individual pixel.
When designing a general-purpose pixel art upscaling algorithm, I think the best route would go somewhere between the "unlimited resolution" approach and the "blurry interpolation" approach. Probably something like CRT emulation with some tasteful improvements. Something that keeps the pixels blurry enough for the visual imagination to work while still keeping them recognizable and crisp enough so that the beauty of the patterns can be appreciated.
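As a rough illustration of that middle ground (not actual CRT emulation, and purely my own sketch), each source pixel can be rendered as a small gaussian blob, with the blob width tuning the trade-off between nearest-neighbor crispness and interpolated blur:

```python
import math

def soft_upscale(img, factor, sigma=0.4):
    """Upscale by rendering each source pixel as a gaussian blob.

    img is a list of rows of grey values (0-255).  sigma, in
    source-pixel units, tunes the trade-off: a small sigma gives
    crisp rectangles, a large one approaches bilinear mush.
    """
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        row = []
        sy = (oy + 0.5) / factor - 0.5          # position in source space
        for ox in range(w * factor):
            sx = (ox + 0.5) / factor - 0.5
            total = weight = 0.0
            # sum gaussian-weighted contributions of nearby source pixels
            for y in range(max(0, int(sy) - 1), min(h, int(sy) + 3)):
                for x in range(max(0, int(sx) - 1), min(w, int(sx) + 3)):
                    wgt = math.exp(-((x - sx) ** 2 + (y - sy) ** 2)
                                   / (2 * sigma * sigma))
                    total += wgt * img[y][x]
                    weight += wgt
            row.append(round(total / weight))
        out.append(row)
    return out
```

With sigma around 0.4, pixel centers keep nearly their original value while the boundaries between them melt into short soft ramps, so the individual pixel remains recognizable without the distracting hard rectangle edges.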
I was nevertheless very fascinated by the Kopf-Lischinski algorithm, not because of how it might improve existing art, but for its potential to provide nice, organic and blobby pixels to paint new art with. A super-low-res pixel art painting program implementing this kind of algorithm would make a wonderful toy and could perhaps even revitalize the pixel art scene in a new and refreshing way. Such a revitalization would also be a triumph for the idea of Computationally Minimal Art, which I have been advocating.