Field Notes from the AI Apocalypse
Some rough thoughts on flip-flopping about AI art, value, and means-end reasoning.
When AI art first came on the scene there was something a little exciting about it. It was the innocent excitement that comes with seeing something new that, more or less, hasn’t been done before (or at least not quite in the same way). Some people already have nostalgia for the “secret horses” days of AI image generation, when it was less polished, less literally tractable, and less—to succumb to the catchall buzzword—slop. Everything was a bit fidgety, but you could make something that felt at least mildly interesting or cool, tinged with a strange sort of uncanniness.
I was somewhat ambivalent. In fact, I remain somewhat ambivalent. Perhaps for some even this much lenience is disqualifying. But I’ve started to find myself thinking a bit more negatively about the whole phenomenon. In some ways even ambivalence about AI, no matter how lukewarm or reluctant, tended to put you on the defensive. A lot of people hated AI even then, though in a way that was a small, incipient version of the ire AI provokes now. But a lot of the immediate concerns left me somewhat indifferent. For the sake of making sure we’re working with good, sturdy lumber, let’s take a look at some of the arguments I see frequently and which don’t work. Hopefully the contrast will make it a little clearer what I’m talking about.
A Few (Bad) Toy Arguments: Skill and Emotion1
People complained that art would no longer be about mechanical skill or talent. But that was never something I was particularly interested in for a lot of different art forms. Paintings, drawings, and so on are not just the residue of some kind of skilled action (to hearken back to the classic critiques of the “action painting” theory of modern art). Moreover, the past century had filled the artworld with interesting projects and experiments that had nothing to do with mechanical skill. Musical compositions were formed out of random notes or even silences. Artworks were fashioned using rather simple methods—Malevich’s Black Square stands out to me, alongside the rest of his oeuvre, as a genuine artistic triumph. While some artforms might be about this kind of skill (virtuoso piano playing and ballet, for example2), it just didn’t strike me as an essential feature of art or even great art. The whole affair gave off the kind of sneering conservatism that says “my five year old could do that.” If anything, it seemed like AI might stimulate some new experimentation in those curiously overlapping worlds of the avant-garde and the kind of art that gets institutional uptake (e.g. displayed in museums, reviewed and photographed for prestigious art magazines, discussed in academia, and so on).
Then there was the complaint that art is about communicating emotions. The reasoning goes something like this: (1) Artworks communicate emotions. (2) Communication involves A having some emotion and communicating that emotion to B. (3) Fill an AI in for A. AIs don’t have emotions. (4) AIs can’t communicate emotions. (5) Therefore AI “artworks” aren’t artworks. Notice that this argument requires a little more than the claim that artworks produce emotions. Otherwise, the AI artwork-candidate producing emotion in the relevant audience would suffice for saying the artwork-candidate is a genuine artwork. Artworks end up being conduits for zapping emotions from the producer to the viewer.
The stronger claim—that being artwork requires acting as a conduit for emotional transfer—immediately runs into difficulty. Even some emotional artworks don’t lend themselves to the “conduit” view. Threnody for the Victims of Hiroshima was written with the intent of working through some technical problems in music theory. It was only after it was written and given a practice run that Penderecki realized how emotionally heavy the resulting sound turned out to be. But it’s not a matter of having had some pre-existing emotion and giving it to someone else. He might not have felt anything in particular while writing the score.
The weaker version of the claim, namely that being an artwork relies on merely provoking an emotion, runs into similar problems. The class of artworks remains too narrow. Plenty of artworks are not about communicating emotion. John Cage’s 4’33” seems less interested in exploring an emotion than exploring the possibilities afforded by changing up the structure of our attention. Warhol’s Brillo Boxes weren’t so much supposed to make you feel joy, sorrow or anguish, but to call into question an ascendant form of art history that said we could clarify the concept of art from the inside. A wide swath of color field painting is interested in formal exploration rather than emotion. Minimalists concern themselves with simplicity and flatness. Frank Stella is more interested in interrogating the medium than conveying emotions. While these might inspire some feelings, the whole world is saturated in a similar way. It seems to have little to do with whether or not something is art.
This last point brings us to another concern. The emotions-as-art-making view also cuts too broadly. When we’re not simply being flippant, the idea that art is just whatever makes you feel something is just too broad to meet our actual use cases for the word “art.” Getting mugged is liable to give us rather intense emotions. A break-up is emotional too. You wouldn’t nod along if someone said “What a work of art!” in response to you getting mugged or going through a break-up. You would probably say “Leave me the fuck alone!” Now, some particularly aesthetico-Christlike character might embrace the beauty of the slings and arrows, but the crucial point here is that the parse isn’t extensionally adequate to the actual use cases of the word “art” in its primary sense. Of course, there’s another obvious worry: plenty of people might react emotionally to AI art. Clearly this much isn’t conceptually impossible.
These are all rather tough bullets to bite, and for all that you still might not even win the prize. But perhaps you can avail yourself of some other strategy that lets you dodge them. Or, alternatively, suppose that you’re willing to put these difficulties aside. Saying that something isn’t art doesn’t get us all the way to saying it’s worthless.
Let us therefore suppose it’s not about defining art and instead it’s about saying what makes something aesthetically valuable. Let us further grant that AI art doesn’t provoke emotion in any viewer, or at least doesn’t normatively ground emotional responses when understood correctly. The idea at play here is a kind of (restricted) aesthetic empiricism. This idea has been a pernicious one in the history of aesthetics and has recently come under some pretty intense scrutiny. But we shouldn’t get into the details here. Let’s just consider some other ways an artwork could be aesthetically valuable: it could give rise to a distinctive kind of community (Riggle), it could facilitate achievement according to the scaffolding of aesthetic achievement (Lopes), it could be an experience of some primitive (in the sense of basic) property called “beauty” (Shelley), it could be engagement in a process of observing, interpreting, connecting, and so on (Nguyen), it could be some other kind of finally valuable phenomenological experience (Peacocke—this is a sort of empiricist pluralism, as I recall, but less restricted than the emotion claim), it could be the exercise of autonomy (Lopes again, this time channeling Kant), it could be knowledge or something special that looks quite a bit like knowledge (Schopenhauer, perhaps Schelling too), it could be the freedom to evaluate our values in a state of free-play (Schiller), it could be an expression of our individuality (Riggle again, Wilde in some ways), it could be the pleasure of reflecting on our own capacity to learn (Bolzano), it could be a way of coming to see something outside of yourself clearly and practicing holding our attention toward the good and the particular (Murdoch), it could be, it could be, it could be…
The point is that emotion doesn’t seem to exhaust the reasons we have for valuing artworks or even valuing them aesthetically. At least, we have a lot of formidable challengers to fight off if we want to cling to the view. If it turns out that AI art doesn’t communicate or provoke emotions, but does do one of these other things (provided those other things actually explain why it’s aesthetically valuable or perhaps even just valuable), then it seems like we have good enough reason not to throw it out too flippantly.
The General Shape of the Bad Arguments
A lot of the unconvincing arguments try to follow a similar strategy. They start with a definition of art or analysis of aesthetic value (or, more simply put, beauty3) and then try to work out an explanation of how AI art fails to meet the definition of art or fails to instantiate aesthetic value. Each task is rather difficult.
Let’s start with the difficulties of the first branch. We start by defining art. Defining art is a notoriously difficult task on its own. Any solution that one can think up in an hour of reflection typically runs into serious difficulties and fails to accommodate central cases. One part of the issue is that empiricism (this is usually meant in a really narrow way) about art-making features—e.g. that we can tell what things are works of art just by looking at them—has a rather strong intuitive pull. But the empiricist paradigm collapsed alongside attempts to clarify the concept of art “from within the practice,” to borrow the Greenbergian terminology.4 Part of the problem is that aesthetic value and definitions of art were divorced from each other in the 20th century. We decided that the only way to make sense of readymades, conceptual art, and so on was to move away from the concept of beauty in trying to separate artworks from other artifacts. Institutional approaches remain somewhat popular answers to the question of what art is. But this makes it hard to see what the normative upshot of calling something art is. In other words, we can find ourselves saying “AI art doesn’t fall under the description of art that I’ve laid out” and our interlocutor can simply say “Okay. But I don’t care whether or not it’s art. I care if it provides me with something good. Taxonomy building doesn’t tell me what I should do.”
Following the aesthetic value tack might give us a way of showing that a shrug isn’t an appropriate response. Getting a complete and compelling account of what aesthetic value is turns out to be difficult work (but, I think, not impossible). But even if we’re clear on this and can argue completely unambiguously that AI is not a source of this kind of value, it’s much harder to make the case that AI isn’t of any value of any kind at all. People already seem to get some small pleasure from it. It enables them to complete larger projects on their own, engage in some coordinated social practices, and so on.
Both of these strategies ask merely conceptual work to take us too far. They start by asking what makes something valuable in some specific sense or makes it art, and then try to rule out AI art in one way or another. But the starting place is useful in some respects. Let’s look at a slightly different strategy. We can start with a (partial) specification of what aesthetic goods are. From there, we can take a look at some empirical facts about the ways in which technology—both social and mechanical—can shape our perception of value. To give a bit of a preview: my worry is that AI will push some values outside the scope of our attention, and the attendant practices will be lost.
Worrying About Means and Ends
I said before that I had soured a bit on my ambivalence, but all I’ve done so far is eliminate arguments I don’t really like in perhaps too-cursory fashion and express some optimism about what it might do for the hoity-toity art scene. What’s the issue?
A lot of the practices that go into producing, appreciating, preserving, archiving, curating, and so on artworks are autotelic (Nguyen again, and a bit of Setiya perhaps for some added flavor). They might have some instrumental value, but we also value them for their own sake. We don’t just want a finished statue. We want to spend time carving and molding. We don’t just want a finished piece of music. We want to spend time thinking through harmonies, playing back different samples to get the right timbre, working out how voice leading is supposed to work, and attending to the relationship between local and global structure. Fill in the blanks with whatever artistic practice you like. Maybe this is because these things are pleasant. Or perhaps using our capacities is just what living well consists in. Or maybe it’s more basic than that: we seek out engagement for the sake of engaging. I won’t litigate the exact source here. All that we need to see is that we have an interest in participating in practices, not just in finished products.
AI art makes some forms of participation eliminable but not others. You no longer need to pull out your pencils and physically make the sketch, or use your brushes, or sit in front of FL Studio or a piano plugging away on chords. But this isn’t a cause for concern, for two reasons. First, suppose you can get away with dumping all of the paintbrushes, pencils, paper, musical instruments, tap shoes, clay, chisels, carving knives, and pottery wheels into the sea. There’s still a necessary role for curating, prompting, critiquing, narrating, editing, archiving, and many of the other practices that make the artworld tick. Artistic practices remain in the picture. Second, I can still engage in drawing, painting, playing piano, or any other practice by hand. Big AI isn’t marching into my house with a gun and demanding “We’re only doing generative art now!” Moreover, it might turn out that we can see more clearly the autotelic value of engaging in processes of production once we have an easy means of artistic production in hand. In short, we can move up and down levels of artistic production without much issue, and those moves are voluntary.
But maybe that last point about seeing more clearly isn’t so obvious. A wave of people would complain online—and perhaps still do—that the existence of AI art made them lose motivation to do the artistic practices they are engaged in. My initial reaction was really to say that they are too under the thrall of an ideology of means-end reasoning to see what they’re doing and why they value what they’re doing. But I suspect the fetters of this ideology clasp tightly and even self-conscious critics have a hard time shaking loose of them. Technology can reshape our cognitive landscape and blind us to the harder-to-pin-down values that nest in what initially appear to be purely instrumental practices.
Albert Borgmann’s Technology and the Character of Contemporary Life offers some illustrative examples in this direction. Take the shift from the hearth to central heating. Hearths, Borgmann points out, do not heat homes evenly. Only the room with the actual hearth stays toasty, while the other rooms have to make do with whatever warmth seeps through the doorways and cracks. It becomes less comfortable to sit in one of the outer rooms in isolation, so you’re motivated to spend time with the other people of the house around the fire. The hearth also necessitates a division of labor and a process of production: searching the brush for viable tinder, finding trees that can safely be brought down for lumber, tending and sustaining the hearth. New processes of attention can also be honed by some of these tasks. One comes to see the surrounding forest differently when looking for dry firewood, turning attention to dead trees and so on. There are some goods at play here, some social and some appreciative or attentive, that linger in the process rather than the particular end that prescribes the process. Community and more general sociability are incentivized by gathering around the fire in a shared space. The experience of the forest can become more rich and detailed as a result of honing our attention in a certain way.
This latter point can perhaps be brought out more robustly in another example that Borgmann gives. Suppose that there’s a hike up the peak of a mountain with a scenic view. Backpackers regularly make the hike over the course of a few days, feeling the ache of motion in their joints, experiencing the slope of the terrain, the call of hawks and bluejays, the flowers along the mountain path. But one day, they pave an alternative route up the mountain so that cars can travel to the top without much of an effort. Something seems to have gone wrong here. The point is not about the car users stealing the valor of the goods hard-won by the hikers. The idea is that the goods of the process of engagement end up disappearing. There’s something it’s like to feel the slope of the mountain as you work to summit it that disappears when you drive to the end.
Of course, the examples needn’t be so rustic. Cheating in a game is a bit like this as well. When I was a kid I cheated in video games all the time, but could never figure out why it wasn’t satisfying to achieve what I thought was my goal. I wanted to march my forces straight to the end goal. But it wasn’t very fun or satisfying, because there was a rather transparent sense in which I wasn’t playing the game. Playing Warcraft III is about puzzling through how to achieve the goal. It’s about the process of understanding how mana shield and drain mana can play well off of each other, of knowing how to time a death coil or frost nova. Part of the goodness of the activity involves the processes that constitute it, not merely the goals that it achieves.
But these goods of engagement and other sorts of process-nested goods (e.g. community and socialization) can disappear when we drive to the summit or cheat in the game, and so on. Something is lost from our experience of the world. Things become flatter, less differentiated—ultimately annihilated by the narrowness of the things we take as our goals. Technology often operates according to this strongly telic logic under which the ends are the sources of value. What technology provides is disburdening. It works by eliminating the effort-intensive processes that give shape to this slower, more fine-grained form of valuing.
This process of technological commodification works well when the underlying process is sufficiently dreadful: there is little good in coal mining, and certainly even less that cannot be reaped elsewhere at less drastic a cost. The danger, claustrophobia, hard labor and darkness are steep costs. It also works out well where the ends are sufficiently important. Borgmann remarks on whole families being taken by a particularly cold winter, something he can literally see as he walks through a Montana cemetery. Central heating seems to be faring a bit better than hearth-tending on the balance sheet of human life. The achievements of technology are nothing to scoff at. But, just as well, we should be hesitant to accept that alienation from processes of production is always better, and mindful of the tradeoffs between different forms of life that technology engenders.
The question of AI art then comes down to contingent facts about our psychology and the way we arrange ourselves into societal structures; to whether or not we’re capable of holding onto practices whose “official” ends are freely available in an effortless, commodified form.
Perhaps the prospects are already bleak on that front. Again, illustrators online complain that AI makes them never want to draw again. If something can effortlessly make something better than I can with all that arduous labor, the thought goes, then why should I bother? But this was the state of the Medieval artist living under the shadow of God, and they still found it within themselves to take the world into themselves and to make something new and marked by them. AI is probably already capable of producing better music than I am in some sense, but I don’t think it’s hindered the compulsion to make something that is mine.
But the trouble, I fear, is not for people who already inhabit the practices and have come to see for themselves the more elusive goods that one finds in autotelic activity. Some of my pessimism comes from bearing witness to the ways in which the scope of valuing, particularly aesthetically valuing, has become so narrow. Great works of art are often difficult to get a grip on. Goodreads reviews are a testament to the difficulty of reading Mrs Dalloway or any other canonical modernist novel, and many do not find these novels “pleasant” to read. And people will go on quite frequently to say “Well, I want to read what gives me pleasure, so there’s no real point of reading these difficult novels over fun action-packed adventure.” The dominance of hedonism,5 aesthetic and otherwise, creates a blinding effect. We do not even need to appeal to the kind of Bildung that reading a difficult novel offers—though I think that those benefits are quite real.6 Susan Wolf makes some moves in this direction in her criticism of hedonism. The value of reading a difficult novel, appreciating a subtle painting, an intricate musical work, or whatever other blanks you wish to fill in here, is incredibly fine-grained. The kinds of value one gets in the fluidity of prose, the strength of the attack of a particular note or chord by a performer, in teasing apart a metaphor, the imagistic rupture of violence nested in an otherwise tranquil stream of consciousness, the elegance of a tightly controlled ballet gesture, and the delicate patina spread over bronze are all different, difficult to assimilate into one another. But the dominance of pleasure makes the goodness of these things indistinct, as though irrelevant to goodness. The social technology available to us makes all of this harder to see. Like some other artifacts of early liberalism, hedonism7 provides the promise of protecting a healthy pluralism while signaling its death knell.
The means-end form of practical reason enables technology to blind us in a similar way. We merely need to fill “the finished work of art” in for “pleasure” to see how this functions. We no longer go to see for ourselves, because we can achieve the end prescribed by the practice. As a result, we never go to see for ourselves those subtle, incommensurable, and elusive forms of goodness. We’re channeled into uniformity of the path of least resistance. More of the world is tiled over with flat, lifeless instrumentalities to undifferentiated pleasure. This picture of the world is worth resisting.
Loving Aesthetic Practices
I have to confess that I am a bit intrigued by what will happen in the artworld that associates itself with museums, collectors and so on. Experimentation might lead to new, robust practices for artistic creation that realize values we had never previously considered.
But the thrill of newness and potentiality comes tethered to the fear that we might lose some of our most cherished practices. I happen to think that playing music, shaping a vase, and so on realize different values that are irreducible and incommensurable. The idea is that goodness is a plurality and irreducibly so. We might lose some of the diverse array of things that give the world its particular kind of goodness. In this sense I’m something like a conservationist trying to stop a certain breed of bird from dying out, even while fully conscious of the fact that there are goods provided by building new buildings.
We needn’t take such an extreme tack though. Perhaps it’s just enough to say that we love these practices. We fear that what we love—the genuine article of love—might be lost. Love is a de re attitude.8 We love the things we love for being what they are. When we love an aesthetic practice it’s not that we love it just insofar as some aesthetic good comes about from it (though we do love that). We wouldn’t “trade up” for something that gives some marginally increased benefit any more than we’d leave our romantic partners provided someone marginally more optimal9 came along. The worry about the practices being displaced remains.
Tying a Bow
You’ll notice I’ve taken a less than oracular tone here. I don’t claim to have a knock-down argument that shows the utter necessity of destroying AI art. I don’t have an apocalyptic vision of the inevitable results of artificial intelligence. But, if what I’ve said about beauty, aesthetic practices and so on is plausible, and technology creates the kind of blinding effects that I mentioned earlier, there is reasonable cause for concern. Perhaps we can learn to have a plurality of practices that coexist with AI. Perhaps not. This all seems to come down to a host of contingent facts about the technology, our psychologies and the movement of sociological forces. So I end up here, in a state of dimmer, hazier ambivalence.
1. If you’ve taken a philosophy of art class or anything similar, I’d say this part is pretty skippable. I’m trying to get everyone on the same page here.
2. I suspect that a lot of artforms that involve kinesthetic empathy are going to be very interested in expressions of skill.
3. The historical relationship between the terms “aesthetic value” and “beauty” is too complicated to get into here. For most people, “beauty” is deployed in a sufficiently thin sense for us to identify it with aesthetic value. We can also put aside Nehamas’ complaint that aesthetic value seems to communicate something a bit more anemic than beauty—though I have some issues with how he understands the historiography here.
4. Or, if Danto is right, then this process of self-clarification just wrapped itself up.
5. Of course, hedonism doesn’t really enjoy dominance among professional philosophers. But it does enjoy quite a bit of prestige in folk value theory.
6. A variety of novels have had a rather striking role in getting me to figure out how to live my life, understand how other people think, empathize with the particularity of other people’s situations, and so on. The trick is navigating between two claims: (1) an explanation of how aesthetic goods aren’t just trivial, one-off diversions and actually shape our lives and (2) an account of what it is within the actual practice of engagement that makes them not purely instrumental, and therefore not eliminable provided some other practice comes along.
7. Of course, Mill aims for a way out here by talking about qualitatively different pleasures. I think the only way for this to work is to break out of the mold of value monism that he needs to fuel his utilitarianism. Really I think Mill is something of an Aristotelian struggling to fit himself into the mold of utilitarianism provided (perhaps more forceful phrasing is appropriate here) to him by his father.
8. See Kraut’s “Love De Re.” I think there’s good reason to be skeptical of the claim that the modal logical expression here can really give us what we’re after. But the phrase is used often enough in this way and it’s an irresistibly useful way of talking. If you don’t like the formal construction of this then you can take me to be doing some kind of weird Humean irony.
9. When I say marginal I mean a bit more than enough to make the displeasure of the break-up and getting your legal situation in order worth it. Something repugnant remains.
A good piece that, unlike most writing on 'AI art', actually expresses what I found offputting about the use of AI models as a substitute for existing artforms. Drawing, 3D computer graphics, music, writing etc. have all shaped how I encounter and understand the world in the ways you suggest in that hearth story - you could say they have trained my inner neural network and excited new modes of perception, self-understanding narratives, etc etc. (The many cited philosophers sadly go over my head for the most part.)
However, I have over time come to a less pessimistic conclusion than you. I think as time goes on, humans will get bored of the limited outputs of straightforward prompting of image generators, and be drawn to engage more substantially. That may remain within the realm of 'AI' - image-to-image generators, training LoRAs to personal taste, fiddling with noise parameters, chaining together multiple models in ComfyUI - or it might involve branching out into other art forms. So much of this discourse seems to assume that AI must be monolithic, rather than fitting into an evolving ecosystem of tools and practices. The 'friction' that AI models provide is quite different from the 'friction' that painting provides, and will shape our engagement in different ways, but it is not nonexistent and I don't think it can be. I've already seen a handful of interesting uses of AI in animation, such as Seth Ickerman using it to create crawling biological forms over his 3D renders in the video for 'Blood for the Blood God' by Health, or Igorr's use of video models to create uncanny-valley imagery in 'ADHD'. These videos use generative models not to reproduce existing forms, but to reach for forms of expression that wouldn't be possible without them.
When AI models hit the scene, I struggled to know what interesting questions to ask about AI image generations. As the practice of image generation matures and becomes more technical, I feel like there are now questions I can ask about method (sometimes). I could even conceive of using AI image generators for something one day... maybe. Interacting with large language models, probing their capabilities, and exploring the theory around them (character-simulacra and so on) has fed a new interest in neuroscience (predictive coding theory etc.) and new metaphors for self-understanding.
It is certainly possible for technological shifts to sweep aside entire artforms, like digital compositing did to cel animation and the multiplane camera - art forms that only really work at industrial scale. But I don't really think people will stop drawing and painting and making 3D renders and playing instruments and composing music. It was already possible to download images of nearly anything you could think of. People take up drawing not because it fulfils an immediate utilitarian purpose (I want a character portrait in this style with these details), but because they develop a self-narrative about what it means to draw. Most people who draw will never make a penny off it or achieve fame; still more people draw than ever have in history, and the information needed to get started and learn technical aspects is more accessible than ever. So I don't think AI models should change that equation too much.
This weekend I was at the Revision demoparty in Germany. 800+ people travelled to Saarbrücken from across the world, a thousandish more watching live on stream; we saw demos using the latest techniques to do unbelievable things with 4 kilobytes, but equally many targeting oldschool devices and still pushing the medium into new places on 30-year-old computers. The demoscene is steeped in somewhat impenetrable history, but very welcoming to those who will engage with it. This, to me, represents a sort of 'good future' of art-making: done with no expectation of profit, merely the fascination of engaging with a complex medium with a challenging constraint, and the appreciation of peers engaged in the discipline. The same goes for, say, visual novels, or self-published writing to niche audiences.
A small handful of people used AI models as part of their demos, and it was intensely controversial. Over time, I expect it will shake out somehow; we'll decide what constitutes 'appropriate practice' for our particular expression-game. Perhaps we'll separate out AI-using categories from non-AI categories. I don't know!
But I trust in the capacity of humans to always make things more complicated, not less, and explore all the abstruse limits of our latest toy. We're tricky bastards like that.