We’ve already seen what happens when neural networks attempt to process images, with Google’s DeepDream producing results ranging from weird to nightmarish. This time, a neural network has been let loose on a task with far less potential for horror: colourizing a number of black and white images with what it believes to be the correct colours, based on analysis of a large number of similar images. This is the work of a team of researchers at the University of California, Berkeley, whose findings were published in a paper titled Colorful Image Colorization.
The results are a mixed bag: many of the images with simpler or more limited colour palettes seem almost spot on, while some of the most complex images, such as the Monarch butterfly, are very impressive. Primarily, the algorithm follows rules that seem obvious to us, such as making areas identified as “sky” or water blue, and dirt brown, and builds a coloured picture from these. There are some clear flaws in certain images, though, with colours bleeding outside the “lines” in more detailed and complex scenes, and some difficulties with white, resulting in a rather discoloured heron.
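The intuition described above could be sketched as a lookup from recognized content to a colour prior. This is a deliberately simplified toy, not the paper’s actual method (which trains a convolutional network to predict per-pixel colour distributions in Lab colour space); the region labels and RGB values below are assumptions chosen purely for illustration.

```python
# Toy sketch of the "obvious rules" intuition: map recognized regions
# to plausible colours. The real system learns this mapping from data
# with a CNN rather than using a hand-written table.

# Hypothetical colour priors (illustrative assumption, not from the paper)
COLOUR_PRIORS = {
    "sky": (135, 206, 235),    # light blue
    "water": (70, 130, 180),   # steel blue
    "dirt": (139, 90, 43),     # brown
}

def colourize(label_map):
    """Turn a 2D grid of region labels into a grid of RGB triples.

    Unrecognized regions fall back to neutral grey, loosely mirroring
    how the network struggles when content is ambiguous.
    """
    grey = (128, 128, 128)
    return [
        [COLOUR_PRIORS.get(label, grey) for label in row]
        for row in label_map
    ]

labels = [
    ["sky", "sky", "sky"],
    ["water", "water", "dirt"],
]
coloured = colourize(labels)
```

A real colouriser has no label map to start from; inferring “this patch is sky” from greyscale texture alone is exactly the hard part the network learns.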
The images were also put through a “colorization Turing Test”, in which they fooled human observers into believing they were not originally monochrome 20% of the time. That may not sound impressive, but given that 50% would be the expected rate if the colourized images were indistinguishable from real ones, it is much more so. With neural networks already producing such results from image analysis, you can’t help but imagine what wonders (or horrors) they will produce next.