The image on the left is an example image of a peacock; the images on the right are GP-evolved images using a deep neural network for fitness evaluation. The fitness function tries to maximize the "peacock" classification score assigned by the trained neural network. The classification scores for the evolved images are also shown.
A deep convolutional neural network (CNN) trained on millions of images forms a very high-level, abstract representation of an image. Our primary goal is to use this high-level content information from a given target image to guide the automatic evolution of images using genetic programming (GP). We investigate the use of a pre-trained deep CNN model as a fitness guide for evolution, and consider two different approaches. First, we developed a heuristic technique called the Mean Minimum Matrix Strategy (MMMS) for determining the most suitable high-level CNN nodes to use for fitness evaluation. This pre-evolution strategy identifies the common high-level CNN nodes that show high activation values across a family of images sharing an image feature of interest. Experiments show that, using MMMS, GP can evolve procedural texture images that exhibit the same high-level feature. Second, we use the highest-level fully connected classifier layers of the deep CNN. Here, the user supplies a high-level classification label such as "peacock" or "banana", and GP tries to evolve an image that maximizes the classification score for that target label. Experiments evolved images that often achieved high confidence scores for the supplied labels. However, the evolved images usually display only some key aspect of the target that the CNN requires for classification, rather than the entire subject matter a human would expect. We conclude that deep learning concepts show much potential as a tool for evolutionary art, and future results will improve as deep CNN models become better understood.
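As a concrete illustration of the second approach, the following sketch shows how a classification-score fitness function could be set up around a pre-trained CNN. The model choice (VGG16 via torchvision), the preprocessing constants, and the ImageNet class index are assumptions made for illustration only; they are not the exact setup used in this work.

    # Minimal sketch: score a candidate GP-rendered image by the softmax
    # confidence a pre-trained ImageNet classifier assigns to a target label.
    # Model, preprocessing, and class index are illustrative assumptions.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.vgg16(pretrained=True).eval()  # pre-trained CNN as fitness guide

    # Standard ImageNet preprocessing.
    preprocess = T.Compose([
        T.Resize(224),
        T.CenterCrop(224),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    TARGET_CLASS = 84  # ImageNet index assumed here to denote "peacock"

    def fitness(rendered_image: Image.Image) -> float:
        """Return the CNN's confidence that the image depicts the target label."""
        x = preprocess(rendered_image).unsqueeze(0)   # shape (1, 3, 224, 224)
        with torch.no_grad():
            logits = model(x)                         # shape (1, 1000)
            score = torch.softmax(logits, dim=1)[0, TARGET_CLASS]
        return score.item()                           # GP maximizes this score

In this setup the GP system would render each evolved expression to an image, call fitness() on it, and select individuals with higher classification scores for the next generation.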
Copyright (C) 2019 Fazle Tanjil.