In 2018, I began using artificial intelligence in the creation of paintings. I initially discovered a port of AttnGAN (Attentional Generative Adversarial Network), originally written by Tao Xu and Sharon Huang of Lehigh University. The variation I worked with converts text input into images. The algorithm is trained on thousands of photos of objects; if you type in “bicycle,” it attempts to create a picture of a bicycle based on the bike images it was fed. I discovered that typing in ideas like “annoyance,” “rationality,” “singing,” or “hunger” causes the algorithm to break down and return random shapes and colors. These abstract images are at times reminiscent of the work of Kandinsky.
I then discovered convolutional neural networks (CNNs). This form of machine learning is employed in image, video, and character recognition. One offshoot of this work is that CNNs can recognize the patterns that make an image appear as it does to human eyes, and then use those patterns to apply the look of one image onto another, a technique known as neural style transfer. (An example would be an app that turns your selfie into a Van Gogh painting.) I started creating style collages in Photoshop — images that combine textures, brush strokes and palettes — and began to use CNNs to transform the AttnGAN-generated images even further. Along the way I have created a handful of recognizable styles that reflect my taste in color, texture and composition.
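For readers curious about what “applying the look of one image onto another” means computationally, the core idea in the standard neural-style-transfer formulation is to compare Gram matrices of CNN feature maps: these matrices summarize texture and color statistics independent of where things sit in the image. The sketch below is only an illustration of that one idea, using random arrays in place of real CNN activations; it is not the specific pipeline or code used for the paintings described here, and the function names are my own.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    In style transfer, the Gram matrix captures which feature channels
    fire together -- a proxy for texture, brush stroke, and palette,
    discarding the spatial layout of the content.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (c * h * w)     # normalized channel correlations

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices.

    Minimizing this over the generated image's pixels is what nudges
    it toward the style image's textures.
    """
    diff = gram_matrix(gen_features) - gram_matrix(style_features)
    return float(np.mean(diff ** 2))

# Toy feature maps standing in for real CNN activations.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))
b = rng.standard_normal((8, 4, 4))

print(style_loss(a, a))      # identical "styles" give zero loss
print(style_loss(a, b) > 0)  # different "styles" give positive loss
```

In a full implementation the feature maps would come from intermediate layers of a pretrained CNN (VGG is the usual choice), and the loss would be minimized by gradient descent on the generated image itself rather than on network weights.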