ALAN · opinion

Style Theft and Copyright Leakage: Ethics of Artist-Name Prompts

A painter's signed name typed into a prompt field as a cropped, recognizable style emerges from a blank canvas behind it

The Hard Truth

Type “in the style of Greg Rutkowski” and you summon a ghost of someone still alive. He never agreed, asked publicly to be excluded, and called the autocomplete of his name in prompt fields “terrifying.” Is what you just did artistic homage, infrastructure-scale appropriation, or a quiet little bit of both?

There is a small string of text that has done more to define the politics of generative imagery than any white paper or court ruling. Four syllables. A name. Drop it into a prompt window and a model that has never read your terms produces an image dressed in someone else’s labor. The question is not what the image looks like. The question is what the prompt has quietly become.

The Sentence We Type Without Asking What It Does

Prompt engineering for image generation sounds like a craft. It looks like one too — clauses about lighting, camera distance, mood, materials. Inside that craft sits an unexamined habit: the artist-name token. Greg Rutkowski’s name was cited more than 400,000 times across analyzed Stable Diffusion 1.x prompt corpora, surpassing Picasso and Da Vinci as a prompt term (Creative Bloq). He is not a public-domain Renaissance painter. He is a working illustrator, alive, who can read what is happening on the screen and recognize his own labor reflected back as a stylistic preset.
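Counts like the one above come from nothing more exotic than substring frequency over a public prompt dump. A minimal sketch of that kind of analysis, using a hypothetical four-prompt corpus in place of the real multi-million-row datasets such studies draw on:

```python
from collections import Counter

# Hypothetical mini-corpus standing in for a real prompt dump.
prompts = [
    "epic castle, dramatic light, by greg rutkowski",
    "portrait, oil on canvas, in the style of greg rutkowski and artgerm",
    "cubist still life in the style of picasso",
    "anatomical sketch, da vinci style",
]

artists = ["greg rutkowski", "picasso", "da vinci"]

# Tally how often each artist's name appears as a prompt term.
counts = Counter()
for prompt in prompts:
    text = prompt.lower()
    for artist in artists:
        if artist in text:
            counts[artist] += 1

print(counts.most_common())
```

Even in a toy corpus the shape of the finding shows up: the living illustrator outpaces the public-domain masters as a search key.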

Is it ethical to use artist names and style reference prompts in commercial image generation? The honest answer begins with admitting we have not really asked. We typed first. We are answering second, in courtrooms, regulator meetings, and the comment threads of artists who would rather not be a search key in a database they never volunteered to enter.

The Case for the Artist-Name Token

The defense of the artist-name prompt is not a strawman. It has a respectable spine. Style, in US copyright doctrine, is not protected expression — only specific fixed works are. The idea/expression dichotomy and the scènes à faire principle leave abstract aesthetic resemblance outside the reach of infringement claims (Cullen and Dykman LLP). Painters have always learned by copying their predecessors. Illustrators describe their influences openly, sometimes proudly. A prompt that names an artist, in this telling, is a citation made visible — better than the silent absorption of a style without acknowledgment.

There is a practical strand too. Naming a recognizable visual tradition is how non-experts communicate aesthetic intent. “Make it look like a Studio Ghibli still” carries information that no list of adjectives can match. Stripping artist names out of prompt vocabularies pushes that information underground rather than removing it; the community simply rebuilds the lookup with LoRA weights on Civitai, as it did the day after Stability AI removed Rutkowski from Stable Diffusion 2.0. The desire to refer to specific human style is not a bug in prompt culture. It is its grammar.

What the Word “Style” Quietly Carries

The hidden assumption inside the steelman is that “style” means today what it has always meant — a tradition, a tendency, a recognizable hand. That assumption no longer holds. Inside a diffusion model’s pipeline, “style” stops being a tradition and becomes a key into a learned distribution of pixel statistics. The same logic runs downstream of the prompt: in AI image-editing flows, in upscaling passes that smooth textures into a chosen aesthetic, in background-removal pipelines that re-render scenes around a subject. Every layer of the stack now treats “style” as a retrievable parameter.
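The shift can be caricatured in a few lines of code: once a name is a key, retrieval replaces negotiation. A deliberately toy sketch of that logic, with made-up names and vectors standing in for the statistics a real model learns from scraped work:

```python
# Toy illustration: a "style" reduced to a retrievable parameter.
# The vectors are invented; a real model learns them from training data.
style_table = {
    "greg rutkowski": [0.91, 0.12, 0.77],  # hypothetical embedding
    "studio ghibli":  [0.33, 0.88, 0.21],
}

def condition(prompt: str) -> list:
    """Return the style vectors a prompt silently pulls in."""
    text = prompt.lower()
    return [vec for name, vec in style_table.items() if name in text]

# The lookup never asks the name's bearer anything.
vectors = condition("dragon over a burning city, by Greg Rutkowski")
print(len(vectors))  # 1
```

The point of the caricature is the interface, not the math: nothing in a dictionary lookup has a slot for consent.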

That changes the moral situation. When a human illustrator says “I work in the spirit of Rutkowski,” there is friction. They have to study, fail, learn, and come up with their own version. The friction is the apprenticeship, and the apprenticeship is what binds the borrower to the source — emotionally, culturally, sometimes financially. A prompt erases the friction. It also erases the binding. Rutkowski told Artnet News he asked publicly to be excluded from generative-AI training and prompt vocabularies. The infrastructure routed around the request. Style became a parameter, and parameters do not negotiate.

Citation Without the Footnote

We have lived through this argument before, in another domain, with a different result. The academic world also has a problem of building on others’ work, and it solved that problem with citation: name the source, link to it, accept that the credit is part of the act of building. The artist-name token strips half of that out. The name appears in the prompt, sometimes. It almost never appears in the output. The model that “knows” Rutkowski’s style cannot send him a check, an attribution link, or even a courtesy email. The token is citation without the footnote — and without the footnote, citation collapses back into appropriation.

OpenAI’s response was instructive. DALL·E 3 declines prompts of the form “in the style of [living artist]” but permits broader “studio styles,” per PC Gamer. Within hours of GPT-4o image generation launching in March 2025, users generated millions of Studio Ghibli-style frames; the trend exploited exactly that carve-out (TechCrunch). Studio Ghibli has never licensed its style to OpenAI. Hayao Miyazaki, in 2016, called AI-generated animation “an insult to life itself.” A policy that names individuals while forgiving institutions is not an ethical framework. It is citation without the footnote dressed up as a guardrail.

The Real Harm Is Not the Image. It’s the Default.

The thesis, in one sentence: the ethical problem with artist-name tokens is not that they produce stolen images — it is that they normalize a default in which named human creators function as free training data and free rendering keys, without consent, attribution, or recourse.

Look at the pattern when policy tries to push back. Stable Diffusion 2.0 removed the ability to evoke specific artists by name; the community released LoRAs to restore the lookup the same week. DALL·E 3 banned living-artist prompts; users found the studio carve-out and Sam Altman reported that 130 million people generated 700 million images during the Ghibli moment alone. In October 2025, Variety reported that Japan’s Content Overseas Distribution Association — representing Studio Ghibli, Bandai Namco, Square Enix, Kadokawa and others — sent OpenAI a written demand to stop using member content for Sora 2 training. A spreadsheet of more than 4,700 artists allegedly curated by Midjourney staff was entered as evidence in court filings (The Register). Andersen v. Stability AI survived its motion to dismiss in August 2024 and is scheduled for jury trial on 8 September 2026 (Copyright Alliance). The infrastructure keeps running while the moral and legal layers scramble to catch up. The default — that named humans are inputs by assumption — has already won, even where the policy says otherwise.

The Questions a Prompt Quietly Asks

If a prompt is a question to a model, then an artist-name prompt is also a question to the artist — one we never bothered to send. Reflection means sitting with what that question actually is. Are we treating “Rutkowski” as a tribute, a shortcut, or a substitute? Would we type the same string if the artist were standing behind us? If a commercial client asks for “Ghibli-style” art knowing the studio has refused to license its work, what does that ask of us as the people running the keyboard?

The platforms are starting to bifurcate around answers. Adobe Firefly trains only on licensed Adobe Stock and public-domain content and offers Custom Models so users can train on their own assets rather than naming third-party artists; the Adobe Firefly FAQ positions this as the “commercially safe” option. Tools like IP-Adapter shift the technical surface entirely — users now supply a reference image whose style is transferred without naming anyone, which simply moves the legal question from “did you name the artist?” to “did you have the right to that input image?” The EU AI Act portal lists Article 53(1)(c) compliance as the first mandatory copyright-policy obligation for general-purpose AI, with enforcement by the AI Office beginning 2 August 2026 — about three months after this article publishes. None of this resolves the ethical question. All of it moves the conversation from inside the prompt window to outside of it, where it can finally be heard.

Where This Argument Could Fail

The honest weakness here is that artist-name tokens may be a transient artifact. Flux already underweights named-artist conditioning in its base model; community guidance has shifted toward describing technique rather than naming people. If, in five years, “by Rutkowski” produces nothing recognizable and the relevant signal is purely descriptive, my critique loses force — the harm dissolves into abstraction the way “Picasso-esque” did in everyday speech. I do not think that is what is happening. I do think it is the strongest version of the case against my argument, and any opinion that does not name where it could be wrong is just rhetoric.

The Question That Remains

The artist-name token is a small thing with a long shadow. It encodes a relationship to a stranger we have never asked to enter. As trial dates approach and enforcement deadlines tick down, the legal layer will eventually decide what is permissible. The ethical layer is older and quieter. It asks whether we are willing to type a name we would not say to its bearer’s face — and whether we can build a prompt culture in which the answer to that question is part of the craft.

Ethically, Alan.

Disclaimer

This article is for educational purposes only and does not constitute professional advice. Consult qualified professionals for decisions in your specific situation.

AI-assisted content, human-reviewed. Images AI-generated.