GIMP and neural networks
Matching Paint Job
Deep learning isn't just for industrial automation tasks. With a little help from GIMP and some special neural network tools, you can add color to your old black-and-white images.
Neural networks (NNs) don't just play the ancient board game Go better than the best human players; they can also solve more practical tasks. For example, a project from Japan colors old black-and-white photos with the help of a neural network – and without requiring the user to do any image editing.
Researchers at Waseda University in Tokyo used an image database containing many different kinds of objects to train a neural model to correctly recognize objects in images and fill them with appropriate color information. Using this model, the network identifies the individual parts of an image, say, trees and people, and assigns matching colors.
The Waseda team presented this deep learning tool at the SIGGRAPH 2016 computer graphics conference [1]; you will find the code for their photo-coloring tool on GitHub [2]. The university website [3] provides a research paper on the subject [4], as well as some sample images.
Neural networks consist of many layers that gradually filter out information. For an image, this information might consist of brightness, edges, and shadows. At the end, the network identifies specific, complex objects. Siri, Google Now, and Cortana use the same principle for speech recognition.
The problem with a conventional neural network is that each layer can make mistakes, and each layer passes its mistakes on to the next. The type of neural network used by the tool described in this article, a convolutional neural network (CNN) [5], has some built-in ways of limiting the effects of these errors.
CNN versus NN
The concept for CNNs comes from biology, although it is not the human brain that serves as a template, but the visual cortex of cats. The convolutional layers take the spatial structure of objects into account.
CNNs differ from conventional neural networks in the type of signal flow between neurons. The signals in NNs typically pass along the input-output channel in one direction, without the ability to iterate through a loop. CNNs take a different approach: The areas on which the neurons operate overlap and are arranged with offsets. Each layer contributes to a more reliable overall result, thus optimizing the detection rate. The network can identify an object even if its position differs from the training templates.
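To picture these overlapping areas, the following minimal Python sketch implements a single convolutional filter sliding over a tiny image. This is a bare-bones illustration, not the network from the article; the image, the vertical-edge kernel, and the function name are my own toy examples.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly: cross-correlation, as in
    most deep learning frameworks): the kernel slides over overlapping
    patches of the input, producing one output value per position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            s = sum(image[y + i][x + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector applied to a tiny image whose right half
# is bright: the response peaks where the brightness jumps.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # [[0, 18, 0], [0, 18, 0]]
```

The filter fires at the edge no matter which rows or columns it crosses, which is the positional tolerance described above.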
Deep learning makes it possible for a computer to identify the objects in an image. This procedure even works when the object on the screen differs significantly from the training model, say, because it has a different background, or because the viewing angle or the lighting conditions have changed [6].
CNNs cope very well with tasks that require visual recognition, but the result depends on the quality and amount of training data, as you will see in the sample pictures later in this article.
The model shown in Figure 1 consists of roughly four parallelized and combined networks. The low-level features network recognizes the corners and edges of an image in high resolution. This data ends up in the global features network, which sends it through four convolutional layers and then through three fully connected layers that each link every neuron of one layer with all the neurons of the next.
The result is a global, 256-dimensional vector representation of the image. In parallel, the mid-level features network extracts textures from the output of the low-level features network.
The results of the global and mid-level features networks are then combined in the fusion layer; the results are resolution-independent thanks to vectorization. Finally, the colorization network adds the color information (chrominance) to the luminance and restores the resolution of the source image.
The end-to-end network thus brings together global and local properties of images and processes them at any resolution. If the global features network suggests that the shot was taken outdoors, the local features network then tends toward natural colors.
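One simplified way to picture the fusion step is as a per-pixel concatenation of the local feature vector with the single image-wide global vector. The real fusion layer in the paper also applies learned weights; the toy sizes below are my own, not the paper's.

```python
def fuse(midlevel, global_vec):
    """Attach one image-wide global feature vector to every spatial
    position of a per-pixel local feature map by concatenation --
    a simplified picture of the paper's fusion layer."""
    return [[local + global_vec for local in row] for row in midlevel]

# Toy sizes: a 2x2 feature map with 3 local features per pixel,
# plus a 4-dimensional global descriptor.
midlevel = [[[1, 2, 3], [4, 5, 6]],
            [[7, 8, 9], [1, 1, 1]]]
global_vec = [9, 9, 9, 9]
fused = fuse(midlevel, global_vec)
print(fused[0][0])  # [1, 2, 3, 9, 9, 9, 9]
```

Every pixel now carries both its local texture information and the scene-level context, which is how a "this is outdoors" signal can steer local color choices.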
Not Only Gray Theory
You can use the software from the Japanese researchers and the GIMP image-processing tool to colorize black-and-white images. You'll need a powerful computer with a reasonably recent graphics card.
In my test, I used Ubuntu 16.04 with Gnome. (The Japanese team used Ubuntu 14.04 with Gnome.) To follow the examples, you need to install Git, GIMP, and the Lua package manager, LuaRocks:
sudo apt-get install git gimp luarocks
With only marginally more effort, you can then install Torch [7], Facebook's deep learning library. Torch is written in Lua [8] and is available under a BSD-style license. The Torch library provides algorithms for deep learning and is easy to install thanks to LuaRocks.
Because Torch uses C backends and a MATLAB-like environment, it is perfect for scientific projects. Torch also includes packages for optimizing graphical models and image processing. The associated nn package produces neural networks and equips them with various abilities.
You can clone Torch yourself from GitHub; then you need to execute the included install scripts:
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch
bash install-deps
./install.sh
This step adds Torch to the PATH variable in your .bashrc; you will want to restart Bash at this point. Now, you need to install some Lua packages on the computer:
luarocks install nn
luarocks install image
luarocks install nngraph
The next step is to set up the actual coloring software. You can download this software from GitHub [2] using git clone; I then used the supplied download_model.sh script to fetch the trained model on my machine:
cd ~
git clone https://github.com/satoshiiizuka/siggraph2016_colorization.git
cd siggraph2016_colorization
./download_model.sh
For my first attempt, I copied the test1.jpg image to the siggraph2016_colorization folder. The test image is a scan of a photo with 638x638 pixels. I trimmed the image to a square shape because the neural network was trained on square images. Then I handed it over to the colorization script:
th colorize.lua test1.jpg test1_color1.jpg
The not-entirely-convincing first results are shown in Figure 2. This uninspiring result is probably due to the fact that the CNN processes images 224x224 pixels in size. Also, my source image was not a true grayscale image.
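The two preparation steps, grayscaling and downscaling, can be sketched in plain Python on a toy pixel matrix. GIMP does the real work in the next attempts; the Rec. 601 luma weights used below are just one common convention, and GIMP's grayscale conversion may weight the channels differently.

```python
def to_gray(pixels):
    """Convert a matrix of (R, G, B) tuples to grayscale values
    using the Rec. 601 luma weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row] for row in pixels]

def resize_nn(pixels, new_w, new_h):
    """Nearest-neighbor resize of a pixel matrix, e.g. down to the
    224x224 input size the network was trained on."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)] for y in range(new_h)]

# A 2x2 test image: red, green, blue, white.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
gray = to_gray(image)          # [[76, 150], [29, 255]]
small = resize_nn(gray, 1, 1)  # [[76]]
```

A real 224x224 conversion works the same way, only on a much larger matrix.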
For my next attempt, I used GIMP to convert the image to grayscale (Image | Mode | Grayscale); this change was not visible in the image. In another attempt, I reduced the image to 224x224 pixels without grayscaling it; this step cost resolution, but at least it improved the color scheme. Finally, I converted the image to grayscale and reduced it to 224x224 pixels (Figure 3). But how do I transfer the significantly better color information to pictures with a higher resolution?
New Layer
GIMP lets you break down an image into the Lab color model [9]. GIMP divides the image into three layers: an L layer for the luminance, an a layer for the hues between green and red, and a b layer for the colors between blue and yellow [10].
The idea is to decompose both the large and the small image into this color space, scale the a and b layers of the small image to the resolution of the larger image, and transfer them to it. When you put the layers back together, you get the larger picture with the color information from the smaller image.
To do this, first open the small colored image and the large picture in GIMP. Then select Colors | Components | Decompose to open a dialog where you can decompose the image; choose LAB as the color mode (Figure 4). Make sure the option for decomposing the image into layers is selected.
In the next step, activate the a layer of the small image, right-click, and select Scale layer to scale it to the higher resolution. Then click on the layer with your mouse and copy it using Ctrl+C. Then select the large image and paste the layer in the Layers dialog.
Pressing the anchor button at the bottom of the Layers dialog lets you embed the floating selection; you then need to repeat this procedure with the b layer. Finally, use Colors | Components | Recompose to put the color layers back together. The result: an image in a higher resolution with the color information from the smaller image. For comparison, Figure 5 once again shows the grayscale image as a starting point.
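The whole GIMP recipe can also be pictured in code. The following pure-Python sketch keeps the large image's L channel, upscales the small image's a and b channels with nearest-neighbor sampling, and recombines them; the toy matrices and the function name are my own stand-ins for real channel data.

```python
def transfer_color(l_large, a_small, b_small):
    """Sketch of the Lab color-transfer recipe: keep the large
    image's L (luminance) channel, upscale the small colorized
    image's a and b channels to the large resolution, and recombine
    into (L, a, b) pixels."""
    h, w = len(l_large), len(l_large[0])

    def upscale(ch):
        # Nearest-neighbor scaling, like GIMP's "None" interpolation.
        oh, ow = len(ch), len(ch[0])
        return [[ch[y * oh // h][x * ow // w] for x in range(w)]
                for y in range(h)]

    a_big, b_big = upscale(a_small), upscale(b_small)
    return [[(l_large[y][x], a_big[y][x], b_big[y][x])
             for x in range(w)] for y in range(h)]

# A 4x4 luminance channel from the large image, plus 2x2 chrominance
# channels from the small colorized image.
l_large = [[10, 20, 30, 40],
           [50, 60, 70, 80],
           [11, 12, 13, 14],
           [15, 16, 17, 18]]
a_small = [[1, 2], [3, 4]]
b_small = [[5, 6], [7, 8]]
big = transfer_color(l_large, a_small, b_small)
print(big[0][0])  # (10, 1, 5): full-resolution L, upscaled a and b
```

Because the human eye is far more sensitive to luminance detail than to color detail, the coarse chrominance channels are barely noticeable in the recombined image.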