Editor’s note: This story is the third edition of Link Rot, a new column by Shanti Escalante-De Mattei that explores the intersections of art, technology, and the internet.

Complaints about the use of generative AI in creative fields are often countered with an appeal to historical precedent: if new creative tools have always emerged, why should we be so upset about this one? Similar arguments about ease of use were once deployed to defend photography against painting, the AI defender might argue. Historical particularities are flattened in favor of a notion of technological progress so inevitable that it hardly seems worth worrying about.

Late last month, Eline van der Velden, the creator of AI-generated “actress” Tilly Norwood, displayed this line of thinking in an Instagram post responding to rumors that Norwood would soon be signed by a talent agency. “I see AI not as a replacement for people,” van der Velden wrote, “but as a new tool—a new paintbrush. Just as animation, puppetry, or CGI opened fresh possibilities without taking away from live acting, AI offers another way to imagine and build stories.”

Let’s set aside the fact that this is inaccurate—animation and live-action film emerged more or less simultaneously from the invention of celluloid film, and puppetry has existed as long as stage acting. More importantly, the logic creates a false equivalence that erases the history of labor relations that have shaped different kinds of artisan communities over time—in Hollywood, on Madison Avenue, and beyond.

While creatives across fields view generative AI as an axe held by management to eliminate jobs and cut rates, van der Velden argues against that zero-sum framing. But let’s take her at her word: what does treating generative AI as simply another tool—a “paintbrush”—actually look like for working artists?

Animators and CGI artists have been using digital tools for decades. Change or “innovation” isn’t the issue. Generative AI is contentious because clients are already using the technology to pressure workers—and because of how unethically the tools were developed. For many animators I’ve spoken with, AI has already reshaped their workflow with commercial clients—mostly for the worse, and mostly in ways that cut into their pay. 

Several said that clients now arrive with AI-generated mood boards and reference images, then ask skilled professionals to mimic what they’ve come up with rather than draw on their creative expertise. After the artists begin work, expectations often shift quickly, with clients expecting them to keep pace—as though they should operate with the ease and speed of ChatGPT, Sora, or whatever the darling application of the day happens to be. Sometimes clients explicitly ask animators to use AI; other times, the expectation is implicit in the impossibly tight deadlines they’re given.

“It feels like AI is teaching them that this stuff can be generated really quickly, but it can’t,” said animator Sam Mason, who has directed animated music videos for hip-hop artist Mac Miller, among others, and worked with major commercial clients like Coca-Cola and Toyota.

“They still, at this point, can’t get the AI to do a finished result. But what it does is devalue the whole process by creating this expectation that an artist can create an infinite amount of possibilities in a short amount of time,” he explained.

For Saad Mosajee, another animator who has directed animated music videos (Lil Nas X, Mitski), as well as produced work for Apple and other commercial clients, the pressures to use AI don’t just come from clients, but from studios that are aggressively adopting AI without concerning themselves with the politics of using a technology built on controversial datasets. (Most image and video generators were trained on billions of images and videos scraped from public websites, without permission from the creators.)

“The most ethical and practical solution is to train models on your own work,” Mosajee told me. “Unfortunately, accountability in terms of data sets and training models is not something that’s gained traction, but I find that to be really unfair and a bit oppressive because a lot of these people never consented to their work going into these models.”

For Mosajee and Mason, the issue is a continued lack of shared ethical standards. The way that big tech companies release AI through open-source channels and constant updates is explicitly designed to subvert the kind of friction that produces a common understanding of what is appropriate and what is egregious. And despite all of these updates, gen-AI tools still have a long way to go in terms of actually being designed for artists rather than their bosses.

“For traditional visual artists to have any use for these tools, they need to be built expressly to interface with physical skill-based inputs like drawing, sculpting, and performance,” said Isaiah Saxon, who co-founded the animation studio Encyclopedia Pictura and directed the feature film The Legend of Ochi (2025). Yet Saxon is hopeful that these bespoke tools will become available soon. 

So what is the middle path between Luddism and AI evangelism? Ideally, ethical data sets feeding into applications designed by the creative community would allow generative AI to achieve its potential as a tool rather than a threat to workers. Yet that’s not how the industry currently operates. The real middle path is a messy one.

Animators told me that clients have pressured them to incorporate AI into projects on the assumption that it makes the same work cheaper and faster. Yet in many cases, that’s simply not true. In one instance—I won’t name names—a client was so eager for an animator to use AI that he “faked” doing so, completing the work the old-fashioned way and telling the client he had used the generator. While it may not seem prudent to let a client believe parts of your process can be automated when they can’t, it’s sometimes the “smart” thing to do.

Another animator told me that a client cut his budget and reallocated funds when he insisted on using his traditional process, even though the client was pushing for a timeline and aesthetic only AI could provide. In other cases, animators have managed to convince clients that the technology can’t yet deliver the finished product they want. But even having that conversation can be a risk.

Another way artists are navigating this middle ground is through personal decisions about when to use the technology. For many, AI tools have no place in their passion projects—at least not for now. Why? There are several reasons. One is simply that the technology doesn’t yet work well enough. Another is that the constant need to keep up with new tools and updates can be demoralizing and exhausting. But ultimately, for the three animators mentioned above, the reason is more emotional and instinctive. Like nearly every skilled artist I’ve spoken with on the subject, they describe a similar uneasy feeling when they use AI. Sometimes it’s described as haunting, other times as emptying out—and that feeling is almost always tied to the loss of process.

“For me, the primary motivator for adopting any new technique has always been about following my nose for what seems joyful, interesting, and fun,” said Saxon. “I’ve been drawn to film in the mountains with my friends, to build huge sets and animatronic puppets, to sculpt and paint, to learn stop-motion, to learn 3D animation software, all because these things are a fun adventure. Using AI, at least for now, with the skills I have, is not a fun adventure.”
