Refik Anadol Trains AI to Dream of New York City

View of Refik Anadol's installation Machine Hallucination, 2019, at Artechouse.
Watching Refik Anadol’s Machine Hallucination (2019) is a dizzying experience, like taking a ride at a carnival. Created with algorithms that found and processed hundreds of millions of images of New York City, Machine Hallucination is a buzzing, immersive audiovisual piece, on view at Artechouse, a new digital art space in New York’s Chelsea Market, through November 17. The gallery is a 6,000-square-foot converted boiler room, sporting the kind of scrubbed ex-industrial chic—exposed brick, soaring ceilings—that has become a cliché in New York. With Anadol’s piece projected on its walls and floor, the space conjures both nightclubs and nightmares.

Machine Hallucination whirs and throbs for thirty minutes, opening what Anadol said in an interview is a window into the “mind of a machine” as it processes images and then responds to them. There are moments when glimmers of New York are completely clear: you feel as if you’re moving over the city’s grid at a great height, or glimpse images of buildings right before they start to morph beyond recognition. Other times, you are looking at the data architecture, at graphic plottings, or metadata tags of the original photos—keywords like COLOR and URBAN and NEWYORK.

Anadol, a media artist originally from Istanbul who now lives in Los Angeles, has been working with data for a decade to make large-scale installations of sound and light—often visualizing open source data about cities—displayed in public space. In 2016, he was a resident at Google’s Artists and Machine Intelligence Program, where he learned how to use artificial intelligence as an artistic tool. He has previously created installations for Artechouse’s locations in Washington, D.C. and Miami, and was invited to inaugurate its New York venue, which opened September 6.

Anadol made Machine Hallucination with the aid of twelve studio assistants. “Data is my medium, and as a team we’ve been working with data and algorithms and trying to explore this hidden emotional experience inside this invisible world of data,” he said. His goal in this work was to turn machine learning into a narrative of sorts: to make visible the actual process of an algorithm taking in and responding to images. Anadol used several algorithms for this project. The main one, called StyleGAN, was developed by researchers at NVIDIA, a tech company that designs high-end graphics processing units (used, among other things, for video games and self-driving cars). Anadol and his studio used the neural network and various modifications to process a gargantuan dataset of publicly available images of New York City: 300 million photos, and 113 million other raw data points.

A custom algorithm crawled the internet to find images of New York on social media, search engines, digital maps, and library sites. Anadol said they all were taken from the public domain because of his concerns about data privacy: his system “never entered a password or breached anyone’s personal belongings.” Once the photos had been collected, an image recognition algorithm defined the context of the original images, and another algorithm erased any people who appeared—faces, crowds, slivers of bodies. Anadol said this choice had to do with privacy concerns, but also with his desire to focus on architecture and the cityscape. A third kind of algorithm, a recurrent neural network often used to generate dynamic media, absorbed recorded sounds from the cityscape—subway sounds, local radio stations, traffic noises—and composed the soundtrack.

After processing the plethora of audio and visual data, the StyleGAN algorithm was then programmed to “dream”—basically, to spit back visual associations it learned as it reviewed images. This process, popularized by Google engineers, can have surreal results, as when computers thought that dumbbells had arms because they were often found in images with arms. The dream state in Machine Hallucination is more abstract—colors and forms coming in waves and patterns, shifting like an Etch A Sketch. To the side of the main space, a small room screens hour-long unedited “machine dreams” made from the dataset.
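For readers curious what this “dreaming” looks like under the hood, here is a minimal sketch. A trained GAN generator maps random latent vectors to images, and walking smoothly between latent vectors produces the wave-like morphing seen in the piece. The generator below is a stand-in (a fixed random linear map), not StyleGAN itself, and the frame count and dimensions are illustrative assumptions.

```python
# A hypothetical sketch of a GAN "latent walk": interpolate between two
# latent vectors and render a frame at each step. The generator here is a
# toy stand-in; in StyleGAN it would be a large trained neural network.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMAGE_PIXELS = 512, 64 * 64

# Stand-in for trained generator weights.
weights = rng.standard_normal((LATENT_DIM, IMAGE_PIXELS))

def generate(z):
    """Map a latent vector to a flat 'image' with values squashed to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-z @ weights / np.sqrt(LATENT_DIM)))

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors, a common way to get
    smooth transitions in GAN latent walks."""
    omega = np.arccos(np.clip(
        np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1)), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Two random latent "memories"; the frames between them form a dream sequence.
z_start = rng.standard_normal(LATENT_DIM)
z_end = rng.standard_normal(LATENT_DIM)
frames = [generate(slerp(z_start, z_end, t)) for t in np.linspace(0, 1, 30)]
```

Played in sequence, each frame differs only slightly from its neighbor, which is why GAN outputs appear to melt continuously from one scene into the next rather than cutting between discrete images.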

Anadol’s is a highly mechanical way of understanding dreams: data in, memory formed, dreams out. Dreams and memory in this equation are not quite interchangeable, but “memory”—or, millions of data points processed and stored—leads directly to the dream. Machine dreams, then, are more like outputs than subconscious experiences. (This makes the title seem a little odd, since the machine isn’t misperceiving or hallucinating; it’s processing existing data.) Anadol thinks of the piece as “optimistic science fiction,” an exploration of the machine’s potential to expand human capacity for imagination. And it does reveal the enormous potential for machines to handle images by visualizing these algorithmic processes and their weird outgrowths.

Near the entrance to the show there’s an “augmented reality bar” where you can download an app and point your phone at mocktails (Artechouse hasn’t received its liquor license yet) to see digital flourishes like an abstract whorl or little blue and white figurines appear on the screen around your glass. This bar is flanked by two pieces on 70-by-40-inch screens, running in eight-minute loops, that Anadol calls “data paintings.” They’re pulled from the same dataset, but they have a quieter, pointillistic quality. Images of New York blend and merge and change quickly, the skyline shifting into a sunrise, trees appearing and disappearing, buildings growing and shrinking—all in muted pastels or black and white. These aren’t photographic representations of the city. They don’t correspond to real places. Rather, they feel like a long supercut of memories of places, nostalgic and low-lit, suggestive but not clear.

As I just did, we often use cinematic or photographic metaphors—snapshot, flashback, reel—to refer to memory. At the same time, it is more and more common to outsource memories to devices and programs; the popular app 1 Second Everyday allows you to create a fast-cut roll of short videos of your own life, slivers of visual memory that you can play back in full for a high dose of nostalgia. Anadol’s data paintings manage simultaneously to scrub the subjectivity from this kind of photographic record while completely capturing its essence. The machine mimics the quality of human memory almost perfectly. In an old-factory-turned-hi-tech-art-space in Chelsea, the data paintings are almost absurdly moving—flickering, colorful nostalgia not for a New York that no longer exists, but one that never was.