Technology

How Artists Can Use Neural Networks to Make Art

David Aslan uses neural network "Deep Style" to transform original photos or paintings into images with other styles.

Blog Author: David Aslan
Blog: https://artplusmarketing.com/how-artists-can-use-neural-networks-to-make-art-714cdab53953#.v62zv94uy

1. How does an artist understand neural networks?

David Aslan was trained in the traditional art of oil painting, but has gained an interest in art-related technologies. He wrote this blog to share his experiences and knowledge with the world, and to help other artists understand neural networks and find interesting ways to incorporate them into their own workflows.

David views the neural network (NN) as a computational approach to problem solving that differs from traditional computing. In his understanding, an NN takes in a large amount of data and initially produces incoherent results; it then learns from these mistakes until it reaches a point of balance, at which it produces a rough approximation of the “correct” answer. David offers a metaphor for this procedure:

You throw in data, you get out junk; you tell the computer that its output is warmer or colder, and then it tries again.

 

2. How does an artist make art using an NN?

Unlike computer scientists, who would like to use neural networks to create art directly, David treats the neural network as a tool for making art.

He uses “Deep Style” (https://github.com/jcjohnson/neural-style) to transform original photos or paintings into images with other styles. Based on the outputs of the neural network, he uses Photoshop to refine and modify details of these outputs.

Currently, his workflow is as follows:

  1. He prepares one content image (original photo) to be transformed and several style images.
  2. He achieves different outputs of Deep Style by using different style images.
  3. He layers those styled images over the original photo, and uses a layer mask in Photoshop to selectively reveal or hide different parts of the style images.
  4. He uses a final layer to refine the details and blend different parts of the image.
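Steps 3 and 4 amount to per-pixel compositing under a layer mask. A minimal sketch of that operation, with hypothetical NumPy arrays standing in for the Photoshop layers and images assumed to be normalized to [0, 1]:

```python
import numpy as np

def blend_with_mask(original, styled, mask):
    """Reveal `styled` where mask == 1 and keep `original` where mask == 0."""
    mask = mask[..., None]              # broadcast the mask over color channels
    return mask * styled + (1.0 - mask) * original

# Example: take the left half from the styled layer, keep the right half as-is.
original = np.zeros((4, 4, 3))
styled = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0
out = blend_with_mask(original, styled, mask)
```

Intermediate mask values (between 0 and 1) give the gradual blending that the layer-mask workflow relies on.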

He calls the third and fourth steps the “Fusion Neural Technique,” which is what makes his work stand out. The following GIF roughly illustrates his workflow.

[GIF: step-by-step demonstration of the Fusion Neural Technique workflow]

3. Key points from his experiments

Differences between NNs and traditional filters in Photoshop

David found that the Artistic filters in Photoshop cannot recognize objects in the input image, and hence only perform a simple computation over all pixels; the “specificity” of the input image is lost. In contrast, neural networks recognize not only the objects (content) but also the texture of the image, which produces a more realistic output image.

Function of “Fusion Neural Technique”

Synthesized images usually have smoothed edges and relatively low resolution. The manual blending layer compensates for some of the information and resolution lost during NN processing, allowing the final result to have sharp edges and better resolution. This part is the artist’s “creation”: as he mentioned, the neural network is just a tool for him.

David’s views on NNs and art

Relationship between art and technology: He thinks that the development of art is linked to the development of technology. For example, Impressionism grew out of scientific discoveries in optics, and the invention of amplified electric instruments led to rock ’n’ roll. By the same logic, the neural network can serve as a tool for art.

4. Technical details behind his work

In this blog, David uses “Deep Style” to make artworks; it is an implementation of the paper A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576).

This paper uses VGG-Net to extract content information from the content image and style information from the style image, and then computes losses between these representations and those of an input image initialized with random noise. By back-propagating these losses into the pixels, the random noise image is gradually transformed into an image with the given content and style.
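The optimization described above can be sketched numerically. In this toy version (all names illustrative), a fixed random linear map stands in for the frozen VGG feature extractor, plain gradient descent stands in for the L-BFGS optimizer, and only the content term is optimized:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))        # stand-in for the frozen feature extractor
p = rng.standard_normal(16)             # content image, flattened
P = W @ p                               # target content features

x = rng.standard_normal(16)             # generated image, initialized as noise
loss0 = 0.5 * np.sum((W @ x - P) ** 2)  # loss before optimization

lr = 0.01
for _ in range(2000):
    F = W @ x                           # features of the current image
    grad = W.T @ (F - P)                # d/dx of 0.5 * ||F - P||^2
    x -= lr * grad                      # the gradient flows into the image pixels

loss = 0.5 * np.sum((W @ x - P) ** 2)
```

The key design point is that the network weights W never change; only the input image x is updated until its features match the target.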

The overview of this model architecture is shown in the following figure:

[Figure: overview of the style-transfer architecture based on VGG-Net]

Let \vec{p} and \vec{x} be the content image and the generated image, and P^l and F^l their feature representations in layer l. The content loss is defined as follows:

L_{content}(\vec{p}, \vec{x}, l) = \frac{1}{2} \sum_{i,j} \left( F^l_{ij} - P^l_{ij} \right)^2

The corresponding partial derivative of the content loss with respect to the activations is:

\frac{\partial L_{content}}{\partial F^l_{ij}} =
\begin{cases}
\left( F^l - P^l \right)_{ij} & \text{if } F^l_{ij} > 0 \\
0 & \text{if } F^l_{ij} < 0
\end{cases}

By using back-propagation, the initial random input image vec{x} is continuously changed until the feature representation is as similar to P^l as possible, which means the content is reconstructed.

This paper also defines the “style” of an image. G^l is the Gram matrix representing the style at layer l; each element of this matrix is defined as:

G^l_{ij} = \sum_k F^l_{ik} F^l_{jk}
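Under the convention that the layer activations F^l are stored as an N_l × M_l matrix (one row per filter map, flattened over spatial positions), the Gram matrix is a single matrix product. A minimal sketch:

```python
import numpy as np

def gram_matrix(F):
    """G[i, j] = sum_k F[i, k] * F[j, k]: correlations between filter maps."""
    return F @ F.T

# Tiny example: 2 filters over 3 spatial positions.
F = np.array([[0.0, 1.0, 2.0],
              [3.0, 4.0, 5.0]])
G = gram_matrix(F)
```

Because the spatial index k is summed out, the Gram matrix captures which filters co-activate but not where, which is why it works as a texture/style descriptor.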

Then let \vec{a} and \vec{x} be the style image and the generated image, and A^l and G^l their style representations in layer l. The contribution of layer l to the style loss, E_l, and the total style loss are:

E_l = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left( G^l_{ij} - A^l_{ij} \right)^2

L_{style}(\vec{a}, \vec{x}) = \sum_{l=0}^{L} w_l E_l

where N_l is the number of filters in layer l, M_l is the size (height times width) of each feature map, and w_l weights each layer’s contribution. The layer loss E_l has the following partial derivative with respect to the activations:

\frac{\partial E_l}{\partial F^l_{ij}} =
\begin{cases}
\frac{1}{N_l^2 M_l^2} \left( (F^l)^T (G^l - A^l) \right)_{ji} & \text{if } F^l_{ij} > 0 \\
0 & \text{if } F^l_{ij} < 0
\end{cases}

Combining the content loss and style loss, the final objective function is:

L_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha L_{content}(\vec{p}, \vec{x}) + \beta L_{style}(\vec{a}, \vec{x})

By minimizing L_{total}, the target image \vec{x} can be generated: it has content similar to the content image, but is rendered in the given artistic style, as shown in the following figure:
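Putting the pieces together, the combined objective can be sketched as follows. The feature and Gram inputs here are tiny stand-ins rather than real VGG activations, and the alpha/beta values are illustrative:

```python
import numpy as np

def content_loss(F, P):
    # 0.5 * squared error between feature maps
    return 0.5 * np.sum((F - P) ** 2)

def style_layer_loss(G, A, N_l, M_l):
    # squared error between Gram matrices, normalized by layer size
    return np.sum((G - A) ** 2) / (4.0 * N_l**2 * M_l**2)

def total_loss(F, P, G, A, N_l, M_l, alpha=1.0, beta=36.0):
    # alpha / beta trades content fidelity against stylization strength
    return alpha * content_loss(F, P) + beta * style_layer_loss(G, A, N_l, M_l)

# Tiny example with N_l = 2 filters and M_l = 3 spatial positions.
F, P = np.ones((2, 3)), np.zeros((2, 3))
G, A = np.ones((2, 2)), np.zeros((2, 2))
L = total_loss(F, P, G, A, N_l=2, M_l=3)
```

Raising beta relative to alpha pushes the result toward the style image’s texture at the expense of the photo’s content.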

[Figure: example outputs combining the content image with different given artistic styles]

5. Thoughts from Reviewer

On one hand, artists prefer to use the neural network as a tool for “drawing” paintings; on the other hand, computer scientists devote themselves to designing neural networks that can directly “create” artworks. Personally, given the present research results in this field, I think neural networks are more like a tool than a creator, because current neural networks are more or less probabilistic models that use given data to predict an output under certain conditions.

It cannot be denied that people also learn new things from experience, much like supervised learning, but this procedure is far more complex. The training data for the neural networks above are only real photos and paintings, between which we would like to find a mapping. However, artists create paintings from real scenes (the equivalent of the “real photos” above) based not only on those scenes, but also on individual experiences that cannot easily be expressed as mathematical data for a computer.

This blog also provides some unique views from artists. In the artist’s opinion, noise in natural images (real scenes) is much higher than in paintings. This may conflict with the understanding of some computer scientists, for example in the paper ArtGAN: Artwork Synthesis with Conditional Categorical GANs (https://pdfs.semanticscholar.org/ec94/874d38378f53319d467412a124809542d3db.pdf?_ga=1.46132652.922857708.1488461012).

The blog author also pointed out a problem with this kind of generative neural network: the generated images often have very smooth edges. That is why he had to use his “Fusion Neural Technique” to sharpen these edges manually in Photoshop. This undesired smoothness is hard to avoid entirely in such networks. In the paper Face Aging with Conditional Generative Adversarial Networks (https://arxiv.org/pdf/1702.01983.pdf), the authors proposed computing an L2 loss between embeddings of the original and generated images, taken from a pre-trained state-of-the-art classification network, in order to reduce the smoothing effect. However, this method can only alleviate the problem, not solve it entirely.
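The embedding-space comparison mentioned above can be sketched as follows, with a fixed random projection standing in for the pretrained classification network (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
W_embed = rng.standard_normal((32, 64))  # stand-in for a frozen pretrained classifier

def embed(image_vec):
    # a real implementation would run the image through the classifier
    # and take an intermediate activation as the embedding
    return W_embed @ image_vec

def embedding_l2_loss(original, generated):
    # compare images in embedding space rather than pixel space
    return np.sum((embed(original) - embed(generated)) ** 2)

a = rng.standard_normal(64)
loss_same = embedding_l2_loss(a, a)
loss_shifted = embedding_l2_loss(a, a + 0.1)
```

The intent is that an embedding trained for recognition is sensitive to structure such as edges, so penalizing embedding distance discourages over-smoothed outputs more than a plain pixel-wise loss would.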

 


Analyst: Yiwen Liao | Editor: Hao Wang, Arac Wu |Localized by Synced Global Team : Xiang Chen
