Deep Dream triptych of a Van Gogh self-portrait prepared by Kyle McDonald (2015), based on a photo by Seth Johnson.

At a time when employers incentivize workers to don wearable health trackers, artists Tega Brain and Surya Mattu offer a solution: Unfit Bits, household objects, from drills to metronomes, retrofitted to fool a Fitbit into thinking you’ve completed a triathlon. “Does your lifestyle prevent you from qualifying for insurance discounts?” asks the narrator of the infomercial introducing their product. “Maybe you just want to keep your personal data private without having to pay higher insurance premiums for the privilege.”

At last week’s Eyeo Festival, Brain called for a move away from “scary new media art”—works that highlight nefarious possible futures—toward a more nuanced and complex reflection on our relationships with technology today. Eyeo is a three-day art and technology conference that has taken place annually in Minneapolis since 2011. With a roster of speakers from diverse backgrounds—art, coding, academia, the private sector, or a mix of the above—it’s fitting that the event is held at the Walker Art Center, an institution long revered for experimenting with interdisciplinary art and new technologies.

With Unfit Bits, Brain and Mattu critique workplaces that collude with insurance companies to surveil employees’ lives. But rather than leave their audience with a sense of doom and powerlessness in the face of privacy invasion, Brain and Mattu give us humor, and perhaps hope. Unfit Bits simultaneously indicts the policing of bodies via data collection and reminds us that we can always find ways to skirt the system.

The shift away from scary new media art was a thread that ran through the course of Eyeo. During her talk, artist Lauren McCarthy expressed exuberance for the online behaviors we’re taught to be ashamed of. Her recent project Follower is an iOS app that inverts the logic of services like Uber, which let you push a button to summon someone to your location and track his or her every move via GPS. Users of Follower request to be shadowed for a day without knowing who or where their followers are. McCarthy ended up being the only service provider (“Because I love following so much, I decided to be the only follower,” she quipped, adding that this simplified the legal questions considerably). At the end of a session, the user receives a notification saying she is no longer being followed, along with a single snapshot of her, taken by the follower at some point during the day.

Submitting yourself to being tracked may seem like an absurd proposition, but Follower’s accompanying promotional video makes a convincing case. Through the French doors and windows of a suburban home, we see an affluent woman in her mid-forties lounging on a sofa. We then watch her go out and interact with the world, leisurely stopping at a newsstand and coffee shops. Narration fades in, expressing a social anxiety of the earnest and privileged: “I wake up, I get dressed, I go out. I do things. I read a magazine and I find out about people. Why do I know about their lives? Somebody should be knowing about mine.” If McCarthy had only made the promo video and forgone building and using the app, I’d still be hooked.

Like Unfit Bits, Follower doesn’t condemn today’s technologies, but it does hint at a possible twisted future. Here, the choice to be or not be surveilled is posited as a luxury experience, something available only to those who can afford it. But McCarthy also complicates any sense of dystopia. Social media has ensnared society so rapidly and so fully because it promises something that we all intrinsically want: a little recognition.

Google could be considered the ultimate follower. In its nascent stages, Google endeavored to algorithmically translate what we type in our search bars into fodder for advertisers; now, we increasingly rely on its expanded cloud services to store our personal information, images, and ideas.

A year ago, Google’s release of Deep Dream, the so-called “inceptionism” code that yielded visual atrocities like the puppy slug, drove a small segment of the internet to near-hysterics. In his closing keynote at Eyeo, artist Kyle McDonald described how Deep Dream works: a photograph is fed through the kind of image-recognition neural network behind Google Photos, the algorithm nudges the image to amplify whatever patterns the network detects in it, and the result is fed back in again and again, a feedback loop that mutates photographs into trippy and grotesque forms. For some, a more terrifying prospect than the puppy slug was what Deep Dream represents: a glimpse into an artificial intelligence system that humans built but do not fully understand.
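For the technically curious, that feedback loop is less mystical than it sounds: it is gradient ascent on a trained network’s activations. Here is a minimal sketch of the idea, assuming PyTorch and torchvision are installed; the network, layer index, step count, and step size are illustrative choices on my part, not Google’s or McDonald’s actual parameters:

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Any pretrained ImageNet classifier stands in here for the network
# behind Google Photos; they all exhibit the same effect.
net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in net.parameters():
    p.requires_grad_(False)

LAYER = 20  # illustrative: a mid-depth layer; deeper layers hallucinate
            # whole objects, shallower ones mostly textures

def deep_dream(path, steps=30, step_size=0.05):
    img = TF.to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
    img.requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, layer in enumerate(net):  # run the image partway through
            x = layer(x)
            if i == LAYER:
                break
        x.norm().backward()              # amplify whatever this layer "sees"
        with torch.no_grad():
            img += step_size * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
            img.clamp_(0.0, 1.0)         # feed the mutated image back in
    return TF.to_pil_image(img.detach().squeeze(0))
```

Google’s released code adds refinements such as multi-scale “octaves” and pixel jitter, but the loop above is the core of the effect: the network keeps finding faint traces of dogs and eyes, and each pass makes them more real.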

Maybe it’s not Deep Dream at all that we should be worried about, but the biases underlying the software. McDonald reminded the audience that last summer Google found itself in hot water when its automatic tagging system labeled photos of two black people as “gorillas.” “Google Photos, y’all fucked up. My friend’s not a gorilla,” software engineer Jacky Alciné tweeted, later adding, “What kind of sample image data you collected that would result in this son?” This mislabeling is more than just a mistake: it reveals the prejudices inscribed in the data sets that artificial intelligence uses to create new realities.

As machine-learning outputs go, the images generated via neural networks are far less troubling than the gorilla gaffe. Google open-sourced Deep Dream to allow the public to create their own images, which on the whole have been harmless. McDonald experimented with Deep Dream in the hope of programmatically reproducing the styles of Van Gogh, Picasso, and other famous artists, with varying degrees of success. (An article in the Wall Street Journal about an auction of works produced by artists using Google’s software speculated that computer-generated images would be the demise of art, ignoring the long history of procedural art that arguably began with Jean Arp dropping squares onto a sheet of paper.)
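The talk didn’t spell out McDonald’s method, but style reproduction of this kind is commonly done with neural style transfer (Gatys et al., 2015), a close relative of Deep Dream that scores a generated image on how well its feature statistics match a painting’s. A minimal sketch of that style statistic, assuming PyTorch; the function names and layer-matching scheme here are mine:

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # Correlations between a layer's feature channels: these capture
    # brushstroke-like texture while discarding spatial layout.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(generated_feats, painting_feats):
    # Compare channel correlations layer by layer; minimizing this by
    # gradient descent pushes a photo toward, say, Van Gogh's textures.
    return sum(
        F.mse_loss(gram_matrix(g), gram_matrix(p))
        for g, p in zip(generated_feats, painting_feats)
    )
```

The inputs are activations from a pretrained network, one tensor per layer, for the generated image and the reference painting; the optimization then runs on the image pixels, just as in the Deep Dream sketch above.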

When you look beyond Deep Dream’s initial creepiness, the machine-learning process can be quite banal. Images generated by artificial neural networks, McDonald suggested, may well go the way of the Instagram or Snapchat filter: harmless, if kitschy, ornamentation. In other words, what Google has created is the Thomas Kinkade of artificial intelligence. If, in eschewing the “scary new media art” mentality, Brain, Mattu, and McCarthy complicate the notion of a dystopian technological future, then McDonald suggests that this future may be downright humdrum.