Watch An AI Invent Its Own Visual Language

By Kelsey Campbell-Dollaghan

May 01, 2018

Sketching is a crucial part of many artists’ thought processes–a way to see by making. Matisse famously said that drawing is putting a line around an idea. Intriguingly, one computational artist is finding that a similar creative process seems to apply to synthetic brains as well as human ones.


At least, that’s one way to interpret the work of artist Tom White, who teaches computational design at the Victoria University of Wellington School of Design. Each of the drawings in White’s recent series depicts a different everyday object: a forklift. A bra. An iron. Done in a loose, punchy hand, the drawings look like the hasty sketches of an artist experimenting with abstraction. But White’s images aren’t the work of any human artist. He’s experimenting with flipping the creative process, putting AI in the artist’s place while he simply helps these so-called “Perception Engines” express themselves.

[Image: courtesy Tom White]

Take one Perception Engine’s drawing of an electric fan. First, White showed a collection of convolutional neural networks thousands of images of fans. Trained on all those fan images, the system then “sketched” its own depiction of a fan, adding broad strokes and detailed line work based on its knowledge of fans. “Several neural networks simultaneously nudge and push a drawing toward the objective,” White explains in a Medium essay, comparing the sketching process to a “computational Ouija board.”
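
White describes the pipeline in his essay rather than in code, but the core loop is easy to sketch: propose a small change to the drawing, ask the classifier ensemble how “fan-like” the result now looks, and keep the change only if confidence goes up. The toy Python below is a minimal illustration of that idea, not White’s implementation; the two off-the-shelf models, the five-number stroke representation, and the hill-climbing mutation scheme are all assumptions made for the example.

```python
# A toy "perception engine" loop: our own illustration, not White's code.
# Mutate a set of strokes at random and keep any change that makes an
# ensemble of pretrained ImageNet classifiers more confident the drawing
# depicts a fan. Class 545 is ImageNet's "electric fan"; everything else
# (models, stroke format, step counts) is an assumed placeholder.
import random

import torch
from PIL import Image, ImageDraw
from torchvision import models, transforms

FAN = 545  # ImageNet class index for "electric fan"
ensemble = [models.resnet18(weights="DEFAULT").eval(),
            models.squeezenet1_1(weights="DEFAULT").eval()]
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def render(strokes, size=224):
    """Draw each stroke (x0, y0, x1, y1, width) as a black line on white."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    for x0, y0, x1, y1, w in strokes:
        draw.line([(x0, y0), (x1, y1)], fill="black", width=w)
    return img

def fan_score(img):
    """Average the ensemble's probability that the image is a fan."""
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        return sum(m(x).softmax(1)[0, FAN].item() for m in ensemble) / len(ensemble)

# Start from random scribbles, then hill-climb on the ensemble's opinion.
strokes = [[random.randint(0, 223) for _ in range(4)] + [random.randint(2, 12)]
           for _ in range(20)]
best = fan_score(render(strokes))
for _ in range(500):
    candidate = [s[:] for s in strokes]
    stroke = random.choice(candidate)
    i = random.randrange(5)
    stroke[i] = max(1, stroke[i] + random.randint(-15, 15))
    trial = fan_score(render(candidate))
    if trial > best:  # several networks "nudge" the drawing at once
        strokes, best = candidate, trial
render(strokes).save("fan_sketch.png")
```

Run long enough, the strokes drift toward whatever shapes the whole ensemble agrees to call a fan, which is the “computational Ouija board” dynamic White describes.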

The final sketch of a fan may look wildly abstract to human eyes–barely identifiable, to some. But here’s the twist: Other neural networks reliably classify the system’s imaginative drawing as a fan, too. It’s almost like the system is autonomously creating its own visual language. It’s art by AI, for AI.

“Using perception engines inverts the stereotypical creative relationship employed in human computer interaction,” White continues. “Instead of using the computer as a tool, the Drawing System module can be thought of as a special tool that the neural network itself drives to make its own creative outputs.” While he designed the system, he adds, “the neural networks are the ultimate arbiter of the content.”

With grant funding from Google’s Artist and Machine Intelligence group, White used a Riso printer to turn each sketch into a print, which he sells online to fund the process. One series of prints, cleverly titled The Treachery of ImageNet, includes captions that nod to Magritte’s iconic 1929 painting, The Treachery of Images (Ceci n’est pas une pipe).

White’s work is related to hotly debated ideas in machine learning today. Neural networks may be very good at perceiving and identifying what they see in images, but they’re weak in other ways. A small irregularity in an image of a fan that a human would never notice, for instance, might confuse the system into thinking it’s looking at an avocado. This is known as an “adversarial example,” and researchers have recently shown that these seemingly harmless glitches could present serious security issues in some situations. Strengthening a neural network against adversarial examples may depend on helping it recognize higher-level, abstract concepts of the objects it sees–rather than relying on granular, pixel-level recognition.
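
To make that “small irregularity” concrete: the textbook recipe for producing one is the fast gradient sign method (FGSM) of Goodfellow et al., which nudges every pixel a tiny step in whichever direction most increases the classifier’s error. Here is a minimal sketch, with the model choice, class index, and perturbation size all assumed for illustration:

```python
# A minimal FGSM sketch (Goodfellow et al., 2015); the model and epsilon
# are illustrative assumptions, unrelated to White's project.
import torch
from torchvision import models

model = models.resnet18(weights="DEFAULT").eval()

def fgsm(image, label, epsilon=0.03):
    """Shift each pixel by +/- epsilon in the direction that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The change is far too small for a person to notice, yet it can
    # flip the model's prediction to an unrelated class.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)        # stand-in for a real preprocessed photo
x_adv = fgsm(x, torch.tensor([545]))  # 545 = ImageNet "electric fan"
print(model(x_adv).argmax(1))         # top-1 prediction after the perturbation
```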


Meanwhile, White’s Perception Engines touch on bigger questions about how machines “see” and thus how they think, suggesting that these systems are capable of abstraction and conceptual thinking. “My long-term interest is embodied cognition and using the concepts of shared grounding to understand how machine learning systems do or don’t have a shared understanding of the concepts we think they are learning,” he tells Co.Design. In other words, this may not be a fan–but if AI thinks it’s a fan, and people do as well, it might as well be.

You can buy White’s work online here.

This post originally appeared on Co.Design.
