The artist Trevor Paglen makes work that shows “what invisibility looks like.” In the past, he has collected items related to classified military projects, photographed the headquarters of shadowy government agencies like the NSA with souped-up telephoto lenses, and scuba dived to the ocean floor to document the fiber-optic cables that are subject to some of the most intense personal data mining by federal agencies and security apparatuses.

“I like playing with the idea that here’s a photograph of this thing that is literally invisible,” Paglen told me in 2015. “But I have all this documentation and research that points to the fact that it exists.”

To most, an underwater telecommunication cable doesn’t look like a massive surveillance infrastructure, and a hazy image of a corporate building in the deep woods of West Virginia might never suggest that a shadowy NSA listening station is contained within. Paglen’s work plays with the idea of watching and being watched, revealing the covert methods with which humans see one another—and how we teach machines to see us, too.

In his latest endeavor, collected in a solo show titled “A Study of Invisible Images” on view at New York’s Metro Pictures gallery, the artist is again focusing on vision. This time, however, he’s exploring images related to computer vision and artificial intelligence algorithms. “Most images these days are made by machines for other machines, with humans rarely in the loop,” Paglen writes in an introduction to the exhibition. “It’s a form of vision that’s inherently inaccessible to human eyes.”

For example, what does the world look like through the “eyes” of a machine, such as a self-driving car system or facial recognition technology? And how do the people who build these machines influence that process? In other words, if computer scientists and engineers in Silicon Valley are the ones building this technology, how will their highly specific view of reality affect the rest of the world once everyone is using AI-enhanced technology?

__*Porn (Corpus: The Humans) Adversarially Evolved Hallucination.* Dye sublimation metal print, 2017.__  Courtesy of the artist and Metro Pictures, New York

Paglen has zeroed in on three types of invisible images: machine-readable landscapes, training libraries (imagery and data that engineers use to “teach” AI systems), and images made by computers for themselves. To do this, Paglen developed custom software that can implement various types of machine vision algorithms in order to depict what a given algorithm is “seeing.”
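
Paglen hasn’t published that software, but the workflow it implements, handing a picture to an arbitrary vision algorithm and rendering back what the algorithm “sees,” can be roughed out with off-the-shelf tools. The sketch below is purely illustrative: the choice of OpenCV’s Haar-cascade face detector and the file names are assumptions, not details of his framework.

```python
# Illustrative sketch only; Paglen's framework is unpublished. The pattern:
# feed a photograph to an off-the-shelf computer vision algorithm (here,
# OpenCV's Haar-cascade face detector) and render what the algorithm "sees"
# back onto the picture for human eyes.
import cv2  # pip install opencv-python

img = cv2.imread("input.jpg")                 # any photograph (hypothetical path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Overlay the machine's reading of the scene onto the human-readable image.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("what_the_algorithm_sees.jpg", img)
```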

In one room of the gallery, there are about a dozen prints created by an AI that Paglen taught to recognize things such as “monsters,” “dreams,” and “porn.” A second AI then interacted with the first in order to, in Paglen’s words, “evolve an image that is entirely synthetic and has no referent in reality, but that the pair of AIs believe are examples of things they’ve been trained to see.” For Vampire (Corpus: Monsters of Capitalism), an AI was fed countless pictures of vampires, and the computer vision systems worked together to vomit out an image that looks more like a surrealist or abstract expressionist portrait than something out of Nosferatu. The artworks in the other sections of the gallery are equally complex, trippy, and seriously mind-blowing.
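
Paglen hasn’t released the code behind the Adversarially Evolved Hallucinations either, but the back-and-forth he describes, one system proposing an image and another judging whether it looks like the target class, can be approximated with a toy loop. Everything below, from the stand-in “classifier” to the hill-climbing mutation step, is an illustrative assumption rather than his method.

```python
# Toy sketch, not Paglen's software: evolve a random image until a fixed
# "classifier" is confident it contains the target class. The classifier here
# is a stand-in (a random linear template) for a network trained to recognize
# something like "vampires."
import numpy as np

rng = np.random.default_rng(0)
H = W = 32                                    # tiny grayscale canvas

# Hypothetical stand-in for a trained recognizer: project the image onto a
# fixed template and squash the result into a 0..1 "confidence."
template = rng.normal(size=(H, W))

def classifier_confidence(img):
    return 1.0 / (1.0 + np.exp(-np.sum(img * template) / np.sqrt(H * W)))

# Hill-climbing "generator": propose small mutations, keep the ones the
# classifier likes better. The result is synthetic, with no referent in reality.
img = rng.uniform(0.0, 1.0, size=(H, W))
best = classifier_confidence(img)
for _ in range(20_000):
    candidate = np.clip(img + rng.normal(scale=0.05, size=img.shape), 0.0, 1.0)
    score = classifier_confidence(candidate)
    if score > best:
        img, best = candidate, score

print(f"classifier confidence in the hallucinated image: {best:.3f}")
```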

“I’m using these tools for purposes that they definitely weren’t intended for,” Paglen explains. “For me, making computer vision see monsters and omens and stuff like that is a way to figuratively embody the kinds of things I see happening with the rise of artificial intelligence,” things that are at once mesmerizing and horrifying.


You said this exhibition took a really long time to complete. Why, exactly?
I probably started working on this project almost 10 years ago. Not full-time, obviously, and there have been projects in the interim, but that’s sort of a typical cycle for me. I’ll start really researching something and it takes years. For this exhibition, it’s a lot of technology. We basically wrote the programming language to do the work that’s in this exhibition. We wrote [what] I guess you could call a “framework,” though it’s almost like a language. We can take almost any computer vision algorithm that we want and say, “Take this image, take this video,” and then use the particular algorithm to show me an image of what that algorithm sees in this image.

It’s a huge technological curve, and when I started working on this there were far fewer open source tools that you could just use. The technology has to come down enough to meet you, and you have to learn enough so that when the technology becomes accessible, you have some ideas about what to do.

Beyond the technological roadblocks, what about the conceptual development of this exhibition?
I think one of the things that took a long time was me trying to figure out how to make this art. In other words, how do you figure out how to use all this technology and do something that’s not just a visualization? It has to be something more than, “Here’s what this landscape looks like to a self-driving car.”

You have to find unexpected things and even kinds of poetry in the systems you’re using. You have to learn all this stuff about AI, learn all this stuff about how to create synthetic images using artificial intelligence, and then ask yourself, “Now what do you want to do? You’ve developed some paint brushes and some paint—you’ve invented the palette—but now what do you want to paint?” It’s so much work just to get to the point where you have paint brushes available to you. You’ve just started [making art] after years of developing these tools.

Some of the work seems to explore how, as we automate things like vision more and more, meaning could become increasingly predetermined or biased.
That’s exactly right. How are meanings hardcoded into autonomous vision systems? Who is determining the meanings of things in autonomous (and likely invisible) imaging systems that are becoming more and more ubiquitous? And how does this enforce particular readings of the world? For example, people build AI systems that can only see male and female as genders, and that enforces those gender binaries on the world. But you can’t see it happening.

That enforcement of meaning is very scary to me because when you look at the history of social movements and the history of people making political claims, there is the ability to say, “I am a man” or “I am a woman” or “I am gay” or “I am x, y, or z.” This is a form of self-representation and self-determination. Every political struggle has also been a struggle over meaning. A big part of feminism, for example, was redefining what it means to be a woman, to be able to claim kinds of meanings, to self-represent. That’s precisely what’s not possible as autonomous vision becomes more and more ubiquitous.

These are very specific people creating meanings, with specific agendas in mind. When we’re talking about AI and computer vision, or whatever, we’re basically talking about three uses for it: one is capitalism—making money. Two is police. Three is military. That’s basically it. Those are the kinds of ethics that are built into the systems. It’s always going to be biased.

The other part of that is that, again, you always have to give AI and computer vision systems training data from the past. For example, there are way more pictures of white, male CEOs than there are pictures of transgendered, black CEOs. So if an AI machine is visualizing a CEO, it will almost always be a white person. This is how these racisms or biases get really embedded in [the technology]. To a certain extent, [computer vision] always reproduces the past. Whoever is controlling or creating that data set is controlling what the meaning is. That’s a very hierarchical form of power.
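
Paglen’s point about training data can be made concrete with a deliberately crude toy. The numbers below are invented for illustration; the only claim is that a system that “visualizes” by drawing on what it has been shown will reproduce whatever imbalance its training library contains.

```python
# Made-up numbers, minimal logic: a "generator" that draws on its training
# library reproduces that library's imbalance. The past becomes the output.
import random
from collections import Counter

random.seed(0)

# Hypothetical training library: 95 images of white male CEOs, 5 of anyone else.
training_library = ["white male CEO"] * 95 + ["any other CEO"] * 5

# Naive generation step: sample from what the system has been shown.
generated = [random.choice(training_library) for _ in range(1000)]
print(Counter(generated))   # roughly 95% "white male CEO"
```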

__*"Fanon" (Even the Dead Are Not Safe)  Eigenface.* Dye sublimation metal print, 2017.__  Courtesy of the artist and Metro Pictures, New York

This work shares some common threads with past exhibitions, but what makes it feel distinct to you?
This one feels more surrealistic to me, and that feels right. For me, surrealism is all about playing with meanings. What is that relationship between being able to recognize something and not being able to recognize something? What is the stuff that common sense is made out of and how do we fuck with that? That feels like the moment that speaks to me the most about this work. I’m trying to break these systems so the underlying ethics of them become available to think about.

A big part of this project is also about questioning the kinds of “common sense” that are built into artificial intelligence systems. As I mentioned, artificial intelligence systems can only see and do things that their programmers tell them to. With computer vision, you could make up your own common sense. There is a version of common sense that says symbols from Freudian psychoanalysis are really, really important, and you should understand the world through that. [Laughs.] That is a version of some kind of common sense that somebody invented. That’s not what Facebook would do, but the point for me is that by trying to build these other, surrealistic images, it makes you ask, “What is the basis for common sense? Whose common sense is being enforced?”

I’m curious about computer vision in relation to mass surveillance, especially with advances in facial recognition. What direction is that going in?
We were talking about the military, police, and consumer applications of this technology. A big area right now is police. It’s kind of interesting. With the growing awareness of Black Lives Matter and police brutality, there’s going to be a huge push to have police wear body cameras. The idea, of course, is that there will be more transparency of what cops are doing. It’s not a bad theory.

Recently Taser, the company, bought a body camera company and an artificial intelligence company. The idea is that Taser’s body cameras will have facial recognition built into them, so when a cop walks through a neighborhood it collects the faces of everyone. Cops will be able to essentially “enroll” people in the police databases. That’s an example of where people could take this idea to have more oversight of the police, but the technology is actually consolidating a huge amount of power in the police.

Could you use this technology to protect yourself in any way?
There have been projects people have tried to do, like wearing make-up patterns to confuse facial recognition software. But I don’t think that’s much of a strategy, to be honest with you. You’re not going to wear a mask all day, and even though you can make something to break the algorithm, now you’re teaching it how to get better. [Laughs.] You’re adding images to the training libraries, and that gets incorporated into [the technology]. I don’t think that’s a strategy. I think it’s more of a policy question. Do we, as a society, need to think about places where we don’t use AI, even though it could be more efficient? Should we say you can’t use AI in criminal justice, or in sentencing? I don’t know, because judges can be biased and racist, too. I think these are big questions.

__*Vampire (Corpus: Monsters of Capitalism) Adversarially Evolved Hallucination.* Dye sublimation metal print, 2017.__ Courtesy of the artist and Metro Pictures, New York
