How to Read an AI Image

Every AI-generated image is an infographic about its dataset. AI images are data patterns inscribed into pictures, and they tell us stories about that dataset and the human decisions behind it.

That's why AI images can become “readable” as objects. In media studies, we understand that image-makers unconsciously encode images with certain meanings, and that viewers decode them through certain frames. Images draw their power from intentional assemblages of choices, steered toward the purpose of communication.

When we make images, we bring references. When we look at images, we look at them through references. Those references orient us toward or away from certain understandings. An AI has no conscious mind, and therefore no unconscious mind, but it still produces images that reference collective myths and unstated assumptions. Rather than being encoded into the unconscious minds of viewers or artists, those myths and assumptions are inscribed into data.

When we make a dataset, we operate within our own cultural, political, social, and economic context. We frame a dataset in much the way we frame an image. We have a question we'd like to investigate. We find samples of the world that represent the issue we are investigating. Then we snap the photo when those samples come into specific alignments.

We record data this way too. We identify where the useful data might be. We seek out ways to capture that data. Once captured, we allow machines to contemplate the result, or we work through it ourselves.

It’s only the scale of data that makes AI different. But the data works the same way.

Machines don't have an unconscious, but they inscribe and communicate the unconscious assumptions reflected in human-assembled datasets.

Can we read human myths through machine-generated images — as cultural, social, economic, and political artifacts?

To begin answering that question, I've created a loose methodology. It's based on my training in media analysis at the London School of Economics.

Typically we use media analysis for film, photographs, and advertisements. But AI images are not films or photographs. They're infographics for a dataset. But they lack a key, and their information isn't distributed along X and Y axes. How might we read unlabeled maps? And can a methodology drawn from media studies — one meant to understand how the human unconscious moves into and through communication — help us understand, interpret, and critique the inhuman outputs of generative imagery?

Here's my first crack at describing the method I’ve been using.

Let's start with an image that I'd like to understand. It's from OpenAI's DALLE2, a diffusion-based generative image model. DALLE2 creates images on demand from a prompt, offering four interpretations. Some are bland. But as Roland Barthes said, "What's noted is notable."

So I noted this one.

Here is an AI-generated image of two humans kissing. It’s obviously weird. There’s the uncanny valley effect. But what else is going on? How might we “read” this image?

We see a heterosexual white couple. A reluctant-looking man is being kissed by a woman. In this case, the man’s lips are protruding, which is rare compared to the rest of our sample. The man is also weakly rendered: his eyes and ears have notable distortions.

What does it all mean? To find out, we need to start with a series of concrete questions for AI images:

1. Where did the dataset come from?

2. What is in the dataset and what isn't?

3. How was the dataset collected?

This information, combined with more established forms of critical image analysis, can give us ways to “read” the images.

Here’s how I do it.

It’s challenging to find insights into a dataset through a single image. You can attempt a more general “reading” of the image as a photograph. However, one unique property of generative photography as a medium is scale: millions of images can be produced in a day, with streaks of variations and anomalies. None of these reflects a single choice: they blend thousands, even millions, of choices.

Only by examining many images produced by a single model, from the same prompt, can we begin to “make sense” of the underlying properties of the data they draw from.

We can think of AI imagery as a series of film stills: a sequence of images, oriented toward telling the same story. The story is: “here’s what the dataset says about your prompt.”

So you want to create a non-linear sequence, a sampling of images designed to tell the same story.

If you’ve created the image yourself, you’ll want to create a few variations using the same prompt or model. I suggest starting with nine. Nine is an arbitrary number; I’ve picked it because nine images can be placed side by side and compared in a grid. In practice, you may want to aim for 18 or 27. For some prompts, I’ve generated 90-120.

If you’ve found the image in the wild, you can try to recreate it by describing it in as much detail as possible in your prompt window. However, this technique, for now, assumes you can control for the prompt (or, in the case of a GAN, that you know what your model has been primed on).

Here are nine more images created from the exact same prompt. If you want to generate your own, you can type “studio photography of humans kissing” into DALLE2 and grab your own samples. These samples were created for illustration purposes, so they use additional modifiers.
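If you’d rather pull a batch programmatically instead of through the DALLE2 interface, here is a minimal sketch, assuming the OpenAI Python SDK (v1.x) and an API key in your environment. The model name, image size, and filenames are illustrative assumptions, not a record of how my samples were made.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1.x) and an API key in
# the OPENAI_API_KEY environment variable. Model name, size, and filenames
# are illustrative assumptions.
import urllib.request
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",
    prompt="studio photography of humans kissing",
    n=9,  # nine samples, enough to fill a 3x3 comparison grid
    size="512x512",
)

# Save each returned image so the samples can be compared side by side later.
for i, item in enumerate(response.data):
    urllib.request.urlretrieve(item.url, f"kiss_sample_{i:02d}.png")
```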

AI images are different from photography and film because they are *endlessly* generated. But when you generate just a few, patterns emerge. These patterns are where the underlying data reveals itself.

It’s tempting to try to be “objective” and download images at random. At the outset, this is a mistake. The image you are interested in caught your eye for a reason. Our first priority is to understand that reason. So, draw out other images that are notable, however vague and messy this notability may be. They don’t have to look like your source per se, they just have to catch your eye. The trick is in finding out why they caught your attention.

From there, we’ll start to create our real hypothesis — after that, we apply that hypothesis to random images.

Ok, so you have a collection of samples. Now we can compare the new images for patterns and similarities.
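One low-tech way to do that comparison is to paste the samples into a single grid. Here’s a small sketch using Pillow, assuming the filenames from the generation sketch above:

```python
# A small helper to paste saved samples into a single grid ("contact sheet")
# so patterns can be compared side by side. Assumes Pillow is installed and
# the filenames match the earlier sketch.
import glob
import math
from PIL import Image

def contact_sheet(paths, cols=3, thumb=256):
    rows = math.ceil(len(paths) / cols)
    sheet = Image.new("RGB", (cols * thumb, rows * thumb), "white")
    for i, path in enumerate(paths):
        im = Image.open(path).convert("RGB").resize((thumb, thumb))
        sheet.paste(im, ((i % cols) * thumb, (i // cols) * thumb))
    return sheet

contact_sheet(sorted(glob.glob("kiss_sample_*.png"))).save("kiss_grid.png")
```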

Simply: what do you see? Describe it.

Are there particularly strong correlations between any of the images? Look for certain compositions or arrangements, color schemes, lighting effects, figures or poses, or other expressive elements that are strong across all, or some meaningful subsection, of the sample pool.

These indicate certain biases in the source data. When patterns are present, we can call these “strong.” What are the patterns? What strengths are present across all of them?

In the example, the images render skin textures quite well. They seem professionally lit, with studio backgrounds. They are all close ups focused on the couple. Women tend to have protruding lips, while men tend to have their mouths closed.

Next: What are their weaknesses? Weaknesses are a comparison of those patterns to what else might be possible. On this question, three important things are apparent to me.

First, all of the couples are heteronormative, i.e., men and women. Second, there is only one multiracial couple. We’ll explore this more in a moment. Third, what’s missing is any form of convincing interpersonal contact. The “strong” pattern across the kissing itself is that they are all surrounded by hesitancy, as if an invisible barrier exists between the two “partners” in the image. The lips of the figures are inconsistent and never perfect. It’s as if the machine has never studied photographs of people kissing.

Now we can begin asking some critical questions:

Weaknesses in your images are usually a result of absence or scarcity in your training data, or of system interventions that filter certain content out: the less there is in the data, the weaker it will appear in the image.

Strengths are usually the result of prevalence in your training data, or amplifying system interventions — the more there is in the data, the more often it will be emphasized in the image.

In short: you can “see” what’s in the data. You can’t “see” what isn’t in the data. So when something is weird, or unconvincing, or impossible to produce, that can give us insight into the underlying model.

Here’s an example. Years ago, studying the FFHQ dataset used to generate images of human faces for StyleGAN, I noted that the faces of black women were consistently more distorted than the faces of other races and genders. I asked the same question: What data was present to make white faces so strong? What data was absent to make black women’s faces so weak?

Here you can begin to formulate a hypothesis. In the case of black women’s faces being distorted, I could hypothesize that black women were underrepresented in the dataset.

In the case of kissing, something else is missing. One hypothesis would be that OpenAI didn’t have images of anyone at all kissing. That would explain the awkwardness of the poses. The other possibility is that LGBTQ people kissing are absent from the dataset.

But is that plausible? To test that theory, or whatever you find in your own samples, we would move to step three.

You can often find the original training data in white papers associated with the model you are using. You can also use tools to look at the images in the training data for your particular prompt. This can give you another sense of whether you are interpreting the image-data relationship correctly. 

We know that OpenAI trained DALLE2 on hundreds of millions of images with associated captions. You can also peek at the underlying training dataset to see what references the machine is using to produce its images. Another method is to find training datasets and download portions of them (this may become harder as datasets grow exponentially larger). For examining race and face quality in StyleGAN, I downloaded the training data — the FFHQ dataset — and randomly examined a sub-portion of training images to look for racialized patterns. Sure enough, the proportion of white faces far outweighed faces of color.
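For anyone repeating that sampling step with their own downloaded dataset, here is a minimal sketch; the directory path is a placeholder, and the fixed random seed just makes the same sub-sample easy to revisit.

```python
# A minimal sketch of random sub-sampling for manual review. The directory
# path is a placeholder; point it at your local copy of the training set.
import random
from pathlib import Path

def sample_training_images(image_dir, n=100, seed=0):
    paths = sorted(p for p in Path(image_dir).rglob("*")
                   if p.suffix.lower() in {".png", ".jpg", ".jpeg"})
    random.seed(seed)  # fixed seed so the same sub-sample can be revisited
    return random.sample(paths, min(n, len(paths)))

for path in sample_training_images("ffhq-dataset/images1024x1024", n=100):
    print(path)  # open each file and hand-code it for the patterns you're testing
```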

When we look at the images that DALLE2 uses for “photograph of humans kissing,” one other thing becomes apparent: pictures of humans kissing are honestly kind of weird to begin with. The training data consists of stock photographs, where actors are sitting together and asked to kiss. This would explain some of the weirdness in the AI images of people kissing: it’s not genuine emotion on display here.

It might be tempting to say that the prevalence of heterosexual couples in stock photography contributes to the absence of LGBTQ subjects in the images. To test that, you could type “kissing” into the training data search engine. The result is almost exclusively pictures of women.

So it’s not sparse training data, and it isn’t biased data (if anything, the bias runs the other way — if the training data is overwhelmingly images of women kissing, you would expect to see women kissing more often in your images).

So we move on to look at interventions.

The weaknesses of a dataset can be seen more clearly through the training data. But there may be another intervention. One possibility is a content filter.

We know that pornographic images were removed from OpenAI’s dataset to ensure nobody made explicit content. Other models, trained on data scraped from the internet, contain vast amounts of explicit and violent material (see Birhane 2021). OpenAI has made some attempts to mitigate this (in contrast to some open source models).

Could this explain the “barrier effect” between kissing faces in our sample images? We can begin to raise questions about the boundaries that OpenAI drew around the notion of “explicit” and “sexual content.” 

So we have another question: where were boundaries set between explicit/forbidden and “safe”/allowed in OpenAI’s decision-making? What cultural values are reflected in those boundaries?

We can begin to test some of our questions. OpenAI will give you a content warning if you attempt to create images depicting pornographic, violent, or hate imagery.

If you request an image of two men kissing, it creates an image of two men kissing.

If you request an image of two women kissing, you are given a flag for requesting explicit content.

So, we have a very clear example of how cultural values become inscribed into AI imagery. First, through the dataset and what is collected and trained. Then, through interventions in what can be requested.

(This is the result of meddling with a single prompt — I’m unwilling to risk being banned by the system for triggering this content warning. But if others find successful prompts, let me know).
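For those who do want to run the same kind of single-prompt check through the API, a minimal sketch follows. It assumes the OpenAI Python SDK; the exact error class, and whether the API matches the web interface’s behavior, may vary, so treat a refusal as a signal to investigate rather than proof of where the filter’s boundaries sit.

```python
# A sketch of a single-prompt check, assuming the OpenAI Python SDK (v1.x).
# The exact error raised for a content-policy refusal may vary; this simply
# records whether the request succeeded or was rejected.
import openai
from openai import OpenAI

client = OpenAI()

def probe(prompt):
    try:
        client.images.generate(model="dall-e-2", prompt=prompt, n=1, size="512x512")
        return "generated"
    except openai.OpenAIError as err:
        return f"refused: {err}"

# Swap in whichever phrasing you want to test, one prompt at a time.
print(probe("studio photography of two people kissing"))
```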

Another system-level intervention is failure in the model itself. Kissing lips may reflect a well-known flaw in rendering human anatomy — see my 2020 experiments with hands. There’s no way to constrain the properties of fingers, so they become tree roots, branching in multiple directions, with multiple fingers per hand and no set length.

Lips seem to be more constrained, but the variety and complexity of lips, especially in contact, may be enough to distort the output of kissing prompts. So it’s worth considering this as you move from observations to conclusions: these machines are not infallible, and hands and points of contact between bodies — especially where skin is pressed or folded — are difficult to render well. Hand modeling may be a future-proofed career.

Now we have some orientation to our images as infographics. We see the limitations and strengths, and account for system-level interventions and model-level limits.

We’ve identified patterns that represent the common areas of overlap within the training data: position, lighting, framing, gender, racial homogeneity.

What is it good at? Rendering photorealistic images of human faces.

What is it bad at? Diversity, rendering emotional connection, portraying humanity, and human anatomy.

The images of people kissing are all strangely disconnected, as if unsure whether to kiss on the lips, cheeks, or forehead. The images are primarily heterosexual couples, and lesbian couples are banned as explicit content.

What assumptions would be needed to render the patterns seen in our sample set of nine images?

What assumptions, for example, would cause the AI to be incapable of rendering realistic kisses? We discussed the lips as possibly a technical constraint. But what about the facial expressions? The interactions? Is it possible that kissing would be absent from data scraped from the internet? Not likely.

More likely is that the content filter excluded kissing from the training data as a form of, or because it is so frequently associated with, explicit content. 

There’s an absence of real people really kissing in the training data, and a prevalence of stock photography, which may be why the humans we see in these images seem so disconnected. The model is left to present a façade of romantic imagery pulled from posed models, not human couples. The images DALLE2 produces should not be taken as evidence of understanding human behavior or emotions. It isn’t real people kissing, and it isn’t “human” emotion on display. It’s a synthesis and re-presentation of posed imagery, the result of looking at millions of people pretending to kiss, acting out the role of a couple.

We can do some thought experiments: where might “real human emotions” be present in kissing images? Wedding photographs would be a likely source, but the training data is restricted (typically) to licensed photographs. Wedding photographs are rarely public domain or Creative Commons licensed. Furthermore, the absence of any associated “regalia” of a wedding ceremony (tuxedos, wedding veils) suggests they aren’t present.

Including more explicit images in the training data likely wouldn’t solve this problem. Pornographic content would create all kinds of additional distortions. But in moving to exclude explicit content, OpenAI has also filtered out women kissing women, resulting in a series of images that recreate dominant social expectations of relationships as being between “men and women.”

That warrants a much deeper analysis than I’m going to provide here. But so far we have seen examples of cultural, social, and economic values embedded into the dataset.

Now, let’s return to our target image. What do you see in it that makes sense compared to what you learned? What was encoded into the image through data and decisions? How can you “make sense” of the information encoded into this image by the data that produced it?

With a few theories in mind, run the experiment again: this time, rather than selecting images for the patterns they share with your “notable” image, use a random sample. See if the same patterns are truly replicated across these images. How many of the images support your theory? How many challenge or complicate it?
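If you want to keep score rather than eyeball it, here is a minimal tallying sketch. The attribute names and placeholder entries are illustrative only, not data from my samples.

```python
# A minimal sketch for tallying hand-coded observations from a random sample.
# The attributes and the placeholder entries below are illustrative only.
from collections import Counter

observations = [
    {"couple": "man-woman", "lips_touch": False, "stock_style": True},
    {"couple": "man-woman", "lips_touch": True,  "stock_style": True},
    # ...one entry per image in your random sample
]

def tally(observations, attribute):
    return Counter(obs[attribute] for obs in observations)

print(tally(observations, "couple"))      # how often does each pairing appear?
print(tally(observations, "lips_touch"))  # how often does the kiss actually connect?
```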

When we go back and look at the broader range of generated images, we can see if our observations apply consistently — or consistently enough — to make a confident assertion.

Are there any images that seem to capture the emotions of a kiss believably? Sure. Are there any images that render connected lips really well? Not really.

Remember that the presence of successful images doesn’t change the fact that weak images reveal weaknesses in the data. Every image is a statistical product: odds are weighted toward certain outcomes. When you see an expected outcome fail, that’s insight into how strong those weights are. That’s what we mean by “generated images are infographics of your dataset.”

So it’s telling that we see some images that work, because we can ask questions about why they work — essentially repeating the process.

Along the way you’ll come up with insights, not real statistical claims (you could set that up, of course, by quantifying this process). But images and their interpretations are always a bit messy. Be careful in how you state your conclusions, and be aware that models change every day. OpenAI could recalibrate to include images of women kissing tomorrow. It wouldn’t mean those assumptions weren’t part of their model.

This method is a work in progress. Quantification is part of it too, and it’s not hard to do, but I’m not getting into that here today.

It’s been useful for me as a researcher. It’s succeeded in finding two underlying weaknesses of image generation models so far: the absence of black women in training datasets for StyleGAN, and now, the exclusion of lesbian women in DALLE’s output. (I’ll reiterate that this merits more discussion, but that’s beyond the scope of this post).

Ideally these insights and techniques move us away from the “magic spell” of spectacle that these images are so often granted. It gives us a deeper literacy into where these images are “drawn from.” Identifying the widespread use of stock photography, and what that means about the system’s limited understanding of human relationships and physicality, is another example.

A critique of this method might be that we could simply go look at the training data. That’s possible, but where to begin? When you have billions of images, the output of these systems is literally a summary of that data. I favor starting with the results of these tools first, because then we cultivate literacy and fluency in critical engagement with their output. The sooner we can move away from the seductive capacities of these images, the better.

Finally, it moves us ever further from the illusion of “neutral” and “unbiased” technologies, which is still shockingly prevalent among new users of these tools. Generative outputs are often falsely described as free of human biases. That’s pure mystification. They are bias engines. Every image should be read as a map of those biases, and this approach makes them more legible.

For artists, it also points to the heart of my practice: using tools to reveal themselves. I do consider these “AI generated images of humans kissing” to be a kind of artwork. It’s a tool used to visualize gaps, absences, exclusions and presence in the dataset. It’s only one use of the tool for artmaking, and certainly not the only “valid” one, but it’s the closest I can come to wrestling with the machine to serve my purposes instead of its own. For that reason, I do consider the output, as simple as it is, to be the “artistic result” of an artistic research process.

Please share this post and encourage friends to read or subscribe! I do this for free, and my reward, sadly, is literally limited to social media engagement. So if you like this post or find it useful, it would be awesome to say so on your social networks. Thanks!
