Over the last decade, the transition from TV to digital media has pushed the advertising market from offline to online as well. Retailers have to follow this trend along with the consumer in order to catch – and keep – their attention. This means designing extra content, like banners, to be placed on websites and social media so that potential consumers actually see them again. This increased demand has left creative agencies too short-handed to keep up with retailers’ requests. As a result, creative agencies developed a need for a scalable way to expand their business.
So how do we tackle this problem? In marketing, Artificial Intelligence is a buzzword that seems to be the answer to every problem. Marketing automation has become a real thing over the last couple of years, with the rise of chatbots and personalized advertising. However, it still has to prove itself on the creative side of online advertising – and that is what I will discuss with you today.
In the innovation lab of Greenhouse Group (Labs), we are researching the automated creation of online advertising banners. One particular task is addressed in this research: the design of product compositions at the Creative Hub. Designers at the Hub consider this specific task to be repetitive and tedious work – suggesting it’s both useful and feasible to automate this!
But before we can dive into how we could automate such a creative process, we first have to understand exactly what it is we are planning to automate. Since our Labs team consists of two highly technical people, we needed an introduction to the design space.
Dissecting the design process
The research started by literally looking over the shoulder of a designer while she created a product composition. By asking why she took the steps she did, we found out that much of the work is based on a gut feeling developed over years of experience.
For example, the source images of the elements used in a composition all have the same size, so knowing the real size of each product and rescaling it properly is an essential step for the success of the resulting composition. Creating the composition also requires some trial and error to come up with something that is visually appealing; to do that, designers take into account contrast, hierarchy, visual weight, and color compatibility.
Example of a simple composition
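To make the rescaling step concrete, here is a minimal sketch of what it could look like in code. The file names, real-world heights, and the pixels-per-centimetre scale are made-up assumptions, not values used at the Hub: every cut-out is resized so that one centimetre of real product maps to the same number of pixels.

from PIL import Image

# Hypothetical product cut-outs with their real-world heights in centimetres.
products = {
    "shampoo_bottle.png": 22.0,
    "soap_bar.png": 2.5,
}

PIXELS_PER_CM = 20  # shared scale for the whole composition (assumption)

for filename, real_height_cm in products.items():
    img = Image.open(filename)
    # Map the real-world height to a pixel height at the shared scale,
    # keeping the original aspect ratio.
    target_height = int(real_height_cm * PIXELS_PER_CM)
    scale = target_height / img.height
    resized = img.resize((int(img.width * scale), target_height))
    resized.save("scaled_" + filename)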
What about GANs?
Would it be possible to teach a model to do all of this for us, and if so, how? Deep Learning has shown promising results in generative tasks lately. The Generative Adversarial Network (GAN), invented by Ian Goodfellow in 2014, is nowadays mainly used for artificial image creation. In 2017, the visual computing company NVIDIA created high-definition images of fake celebrities using an updated GAN architecture. After seeing these results, forming product compositions seemed like a piece of cake – provided we had enough proper data of real, hand-made compositions to train an algorithm on. We set out to build our own creative AI, but unfortunately ran into a couple of challenges along the way.
Fake celebrities created by NVIDIA
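As a rough illustration of the GAN idea (and explicitly not the model we ended up using): the structure pairs a generator that turns random noise into images with a discriminator that tries to tell generated images from real, hand-made ones, and the two are trained against each other. A minimal Keras sketch, with the image resolution and network sizes chosen arbitrarily:

import numpy as np
from keras.models import Sequential, Model
from keras.layers import Dense, LeakyReLU, Reshape, Flatten, Input

LATENT_DIM = 100            # size of the random noise vector (assumption)
IMG_SHAPE = (64, 64, 3)     # composition resolution (assumption)

def build_generator():
    # Maps random noise to a fake image.
    return Sequential([
        Dense(256, input_dim=LATENT_DIM),
        LeakyReLU(0.2),
        Dense(int(np.prod(IMG_SHAPE)), activation='tanh'),
        Reshape(IMG_SHAPE),
    ])

def build_discriminator():
    # Classifies an image as real (1) or generated (0).
    model = Sequential([
        Flatten(input_shape=IMG_SHAPE),
        Dense(256),
        LeakyReLU(0.2),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

generator = build_generator()
discriminator = build_discriminator()

# The combined model trains the generator to fool a (frozen) discriminator.
discriminator.trainable = False
noise = Input(shape=(LATENT_DIM,))
validity = discriminator(generator(noise))
gan = Model(noise, validity)
gan.compile(optimizer='adam', loss='binary_crossentropy')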
Challenge 1: Collecting and using the data
The first problem occurred when we tried to use data from compositions that had previously been made at the Creative Hub. Designers at the Hub create the compositions in Adobe Photoshop and export them to an image file. For our research, we needed to extract the composition from the Photoshop file by getting rid of the discount sticker, background, and other attributes. We ran into two problems that made this approach infeasible. First, the Adobe Photoshop files are enormous, which makes them hard to process programmatically. Second, the layers in the files follow no naming convention, which makes extracting the component layers significantly harder.
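To illustrate the second point: with a library such as psd-tools you can walk a Photoshop file's layer tree, but without a naming convention the layer names tell you nothing about which layer holds the product, the discount sticker, or the background. A small sketch (the file name is a placeholder):

from psd_tools import PSDImage

# Placeholder file name; the real project files live on the Hub's servers.
psd = PSDImage.open('example_composition.psd')

def print_layers(layers, depth=0):
    # Recursively print the layer tree with names and layer kinds.
    for layer in layers:
        print('  ' * depth + f'{layer.name} ({layer.kind})')
        if layer.is_group():
            print_layers(layer, depth + 1)

print_layers(psd)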
Challenge 2: Clustering exported images
The next idea was to use the exported files instead. By finding projects similar to the one that needs to be created, we could recommend a product composition to designers. First, all the .jpg and .png files that contained a product composition were retrieved from the company server by hand. The images were then preprocessed by running them through a VGG19 Convolutional Neural Network trained on the popular ImageNet dataset. The values of the neurons in the network's second-to-last layer were used as visual features of the images containing the compositions.
from keras.applications.vgg19 import VGG19, preprocess_input
from keras.preprocessing import image
from tqdm import tqdm
import os
import re
import numpy as np

def feature_extractor(images_dir="PATH"):
    # Collect all .jpg/.png files in the given directory.
    list_images = [f for f in os.listdir(images_dir)
                   if re.search('jpg|JPG|png|PNG', f)]
    features = []
    target_size = (224, 224)
    # VGG19 without the classification head; global max pooling yields
    # one feature vector per image.
    model = VGG19(include_top=False, weights='imagenet', pooling='max')
    for i in tqdm(list_images):
        img = image.load_img(os.path.join(images_dir, i), target_size=target_size)
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        feature = model.predict(x)
        features.append(feature)
    return features
Code to extract image features using VGG19 (Keras Documentation)
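With these feature vectors in hand, recommending an existing composition would come down to a nearest-neighbour lookup. A minimal sketch using cosine similarity; the directory name and query index are placeholders:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Features as returned by feature_extractor() above; stack into one matrix.
features = feature_extractor("compositions")  # placeholder directory
feature_matrix = np.vstack([f.flatten() for f in features])

# Similarity of a (hypothetical) query composition to all others.
query_index = 0
similarities = cosine_similarity(feature_matrix[query_index:query_index + 1],
                                 feature_matrix)[0]

# Indices of the five most similar compositions, excluding the query itself.
most_similar = np.argsort(similarities)[::-1][1:6]
print(most_similar)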
The features were then reduced to two dimensions using t-SNE so that the images could be plotted in a 2-dimensional graph.
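For reference, a minimal sketch of such a projection with scikit-learn and matplotlib; the perplexity value is an assumption, not a tuned setting:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Reduce the VGG19 feature vectors (from feature_extractor above) to 2-D.
feature_matrix = np.vstack([f.flatten() for f in features])
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(feature_matrix)

# Nearby points in the scatter plot should correspond to similar-looking images.
plt.scatter(embedding[:, 0], embedding[:, 1])
plt.title('t-SNE projection of composition features')
plt.show()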
The resulting graph showed that the features mostly capture attributes of the images that do not relate to the composition itself. For example, images with the same background were grouped together as similar. Clearly, this was not the result we hoped for: since we cannot extract information about the composition, this idea turned out to be infeasible as well.
Conclusion
To sum up, the available data cannot provide us with input that can be used to create new product compositions. We needed to think of another solution… and we found one! In part 2, I will elaborate on a solution that can already create product compositions quite well, without the use of data.