Who Wrote This?
If you tell someone you wrote a novel with LLMs, the first question is always the same: “But did you write it?”
This is a companion to Thoughts on Working with LLMs and Writing Fiction with LLMs. Those posts cover the how. This one is about the question that comes after: whose work is it?
The debate is loud, and both sides have good arguments. But I think both are arguing about something that doesn’t quite match what actually happens when you work with LLMs seriously.
The Case Against
The strongest argument against AI-assisted authorship comes from Ted Chiang. Art, he says, requires “making a lot of choices.” A ten-thousand-word short story requires roughly ten thousand choices. When a human supplies only a prompt, the AI stands in for all the choices the human is not making, “either by taking an average of the choices other humans have made in the work that it has scraped, or by mimicking a specific style.” Average and probable decisions, he says, don’t make for good art.1
Nick Cave is less measured: “ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.”2
Brandon Sanderson makes the strongest version of the process argument: “The process of creating art makes art of you.” The finished book “is not the only art. It’s important, but in a way it’s a receipt. It’s a diploma.” Writing a prompt for an LLM “will not make an artist of you… The machine will have done the hard part for you. And it doesn’t care.”3
Chuck Wendig draws the sharpest line: “Real writers write. And writers who use AI? They’re not writing, are they? They’re churning. They’re clicking buttons. They’re stealing.”4
An open letter on Literary Hub, signed by over a thousand authors including Colleen Hoover, R.F. Kuang, and Lauren Groff, put it this way: “The writing that AI produces feels cheap because it is cheap. It feels simple because it is simple to produce.”5
These are not marginal voices. This is the literary mainstream.
The Case For
The strongest argument for AI-assisted authorship doesn’t come from AI companies. It comes from people who have actually done it.
Robin Sloan, who built his own machine-learning text editor, describes the goal as “not to make writing ‘easier’; it’s to make it harder.” Not to make the result “better”; to make it “different – weirder, with effects maybe not available by other means.”6
Erik Hoel, a neuroscientist, points out that “99.9% of our human work was never really original to the degree it couldn’t be described as a kaleidoscopic remix.” If we judged humans by asking for originality that can’t be described as mimicry, “cultural production would collapse.”7
Ken Liu, a Hugo Award winner who participated in Google’s Wordcraft workshop, described the interaction as: “By taking the seed from LaMDA and saying, ‘Yes, and…’ I can force myself to go down routes I wasn’t thinking of exploring and make new discoveries.”8
Kevin Kelly offers the simplest test: “When something surprises you, that’s creativity. If you can’t anticipate it, then it’s creative, no matter how it’s made.”9
These voices are quieter. Partly because the pro-AI position is unfashionable in literary circles. Partly because the people doing the most interesting work with LLMs are too busy doing it to write manifestos.
What the Law Says
The legal framework is useful because it forces precision about what “authorship” actually means.
The Supreme Court declined to hear Thaler v. Perlmutter in March 2026, leaving in place the lower-court ruling that purely AI-generated work cannot be copyrighted: “Human authorship is a bedrock requirement of copyright.”10
But the US Copyright Office, in its January 2025 report, drew a careful distinction. “Prompts alone do not provide sufficient human control to make users of an AI system the authors of the output.” However, human authors are entitled to copyright for “their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material.”11
The key test: is the AI “enhancing human expression” or is it “the source of the expressive choices”?
The stakes are real. In March 2026, Hachette pulled a contracted horror novel after readers flagged patterns characteristic of AI-generated prose. It was the first time a major publisher had pulped a book over AI suspicion.12 Meanwhile, Clarkesworld magazine had to temporarily close submissions after AI-generated stories flooded its slush pile – 500 machine submissions against 700 human in a single month.13
The market is drawing a line. The question is where.
The Economics of Copyright
The legal debate is really an economic one. Copyright exists for a specific reason: to incentivize creation. As Landes and Posner put it in the foundational law-and-economics treatment, “copyright protection trades off the costs of limiting access to a work against the benefits of providing incentives to create the work in the first place.”14 Without copyright, the cost of copying is low relative to the cost of creation, so free-riders undercut the creator’s price. Revenue falls below the cost of creating the work, and the work is never made.
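The tradeoff can be stated in one line (my paraphrase, with notation of my own, not Landes and Posner’s): a work gets made only if expected revenue covers the fixed cost of creating it,

(p − c) · q ≥ E,

where p is the price per copy, c is the marginal cost of making a copy, q is the number of copies sold, and E is the up-front cost of creation. Unrestricted copying pushes p down toward c, the left-hand side toward zero, and the inequality fails – the work is never made.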
This was not always obvious. Stephen Breyer, before he joined the Supreme Court, argued that publishers have lead-time advantages and contractual mechanisms that reduce the need for copyright in many markets.15 Boldrin and Levine went further, arguing that “both copyrights and patents do more economic harm than good” and making the case for outright abolition.16 But even the skeptics acknowledge the basic tradeoff: some protection is needed when copying is cheap and creation is expensive.
So the question is: what should the threshold be?
The Supreme Court addressed this in Feist v. Rural Telephone: copyright requires “independent creation plus a modicum of creativity.” The standard is “extremely low” – “it need only possess a ‘spark’ or ‘minimal degree’ of creativity.” Effort alone, without creativity, does not earn copyright.17
Set the bar too high, and legitimate creators can’t protect their investment. Set it too low, and you get copyright trolling – which Sag found accounts for roughly half of all copyright litigation in the United States.18
Now consider what AI changes. The cost of creation has dropped. Peukert and Windisch document that digital technology has “substantially lowered the cost of creating, distributing, and promoting works, leading to market entry at an unprecedented scale.”19 If copyright exists to compensate for the cost of creation, and that cost has fallen, the economic argument for a high originality threshold weakens. You don’t need as much protection to recoup a smaller investment.
At the same time, the cost of administering copyright has also dropped. The Copyright Office currently charges $45–$65 for electronic registration and recovers less than half the actual processing cost.20 Paper applications cost $125, with a proposed increase to $185 to approximate actual cost. The historical cost of storing physical copies on shelves is largely gone.
This suggests a simpler approach. Instead of asking the Copyright Office to make aesthetic judgments about whether an AI-assisted work contains a “modicum of creativity,” charge a modest registration fee and let the economics do the filtering. If someone is willing to pay $10 to register a work, that’s a revealed preference that they value it enough to claim it.
The obvious objection: someone could try to copyright all possible novels. But this fails on two counts. First, you’d have to register each one individually – $10 × 26^1,000,000 is not a practical business plan. Second, fair use. Even if you somehow copyrighted a vast library, fair use allows substantial borrowing from any single work, and independent creation is always a defense. You cannot monopolize the space of possible expression by registering a lot of it.
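To see just how impractical the “register everything” plan is, a quick back-of-envelope calculation (the million-character novel and 26-letter alphabet are the same illustrative assumptions as above):

```python
import math

# Number of distinct 1,000,000-character texts over a 26-letter alphabet:
# 26**1_000_000. The total fee at $10 per registration is far too large to
# compute directly, so count its decimal digits instead.
novel_length = 1_000_000
fee_digits = math.floor(math.log10(10) + novel_length * math.log10(26)) + 1

print(fee_digits)  # the dollar figure has roughly 1.4 million digits
```

For comparison, estimates of the number of atoms in the observable universe run to about 80 digits.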
What the fee screens out is the trivial: the purely machine-generated slush that is flooding Clarkesworld and clogging the Copyright Office. What it lets through is everything someone cared enough about to claim. That’s not a perfect filter, but it might be a better one than asking a bureaucrat to evaluate the soul content of each submission.
The Ten Thousand Choices
Chiang’s argument deserves the most engagement because it’s the most precise. Creativity equals choices. If you accept that frame, the question becomes: does the author of an LLM-assisted work make enough choices, of sufficient quality, to call it creative work?
Here is what the actual numbers look like, drawn from the project logs described in the companion posts:
| | Short Stories | Novella | Novel |
|---|---|---|---|
| Output | 16 stories, 50k words | 15 chapters, ~45k words | 33 chapters, 109k words |
| Author prompts | ~35 | ~33 | ~400 |
| Prompts per unit | ~2 / story | ~2 / chapter | ~12 / chapter |
| Git commits | 31 | ~65 | 790 |
| Support files | 0 | 12 world files | 76 files |
| Duration | ~1 week | 4 days | ~40 days |
| Models used | Claude + GPT | GPT (writing) + Claude (science) | GPT + Claude + Gemini |
The short stories required roughly two prompts each: one to outline, one to draft. That’s close to vibe coding – describe what you want, accept or tweak the result, move on.
The novella required similar per-chapter prompting but added structural choices: pick one of six outlines, iterate on voice rules, build hard-SF world files, audit the science. The management was light but real.
The novel required twelve prompts per chapter – and most of those prompts weren’t “write this.” They were “audit this against the world files,” “do a cold read for voice breaks,” “fix this sweep that broke seventeen chapters,” “adjudicate this disagreement between models.” The ratio of management to generation increased by a factor of six.
Chiang is right that a prompt like “write me a story” involves almost no choices. But the novel involved four hundred prompts across three models, seven hundred ninety git commits, and seventy-six support files over forty days. Those are not the numbers of someone who pressed a button. They’re just at a different level of abstraction than he imagines.
The Director Model
Here’s what the author of an LLM-assisted novel actually does:
- Designs the architecture (how to factor the project so each task fits in one context window)
- Writes the prompts (which are the spec)
- Chooses which model to assign to which task
- Reviews output and decides what to keep
- Adjudicates disagreements between models
- Exercises editorial judgment that models can’t (voice, tone, in-world versus technically correct)
- Decides when to throw everything away and start over
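That job description can be sketched as a loop (a hypothetical illustration – none of these names come from the author’s actual tooling):

```python
# A minimal sketch of the "director" loop described above. StubModel stands
# in for any LLM call; the real review step is human judgment, not a function.

class StubModel:
    def __init__(self, name):
        self.name = name

    def generate(self, prompt):
        return f"[{self.name} draft for: {prompt[:40]}]"

def direct_chapter(spec, models, review, max_rounds=3):
    """Cast a model for the task, review its draft, and keep, revise, or discard."""
    for round_no in range(1, max_rounds + 1):
        model = models[spec["task"]]            # casting: which model for which task
        draft = model.generate(spec["prompt"])  # the model proposes
        verdict = review(draft)                 # the author disposes
        if verdict == "keep":
            return draft
        spec["prompt"] += f" | revision note {round_no}: {verdict}"
    return None  # throw it away and start over
```

The loop is trivial; the creative work lives in the casting table, the prompt, and the review verdicts.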
And there is one thing the author does that no model can: carry the vision across the entire project. Every LLM session starts from zero. The model doesn’t remember last week’s decisions, last month’s corrections, or the argument about chalk versus pigment stick that shaped every subsequent edit. The author is the only participant with persistent memory. That continuity – the thread connecting the first outline to the final cold read – is itself the authorship.
This is closer to what a film director does than what a novelist does. The director chooses the script, casts the actors, reviews the takes, decides which performance to use. Nobody argues that a film director isn’t creative because they didn’t personally deliver the lines.
The counterargument writes itself: directing a human actor is collaboration between two creative beings. Directing an LLM is more like operating a tool. But the Copyright Office’s framework cuts through this: it doesn’t ask whether your collaborator is human. It asks whether the human’s creative contribution is perceptible in the output. A director’s choices are perceptible in the film whether the actor is Meryl Streep or a digital puppet.
Read the finished novella and my words aren’t in it. I didn’t write “granular silicates are competent in construction and occasionally in computation.” But I decided: alien narrator, dry and exact voice, Coney Island setting, the structural joke of a being who dismantles stars being defeated by sand and municipal beach culture. Those decisions are pervasive in every paragraph even though my sentences aren’t in any of them. The creative fingerprint is in the architecture, not the prose.
Sanderson’s Diploma
Sanderson’s argument is the one I take most seriously, because it’s not about the product. It’s about the process.
“The process of creating art makes art of you.” He is saying that the struggle of creation – the frustration, the dead ends, the moments when you solve a problem you couldn’t solve yesterday – is itself valuable. Short-circuiting that robs you of growth. The book is a diploma. If the machine did the coursework, you didn’t earn it.
This is a serious objection and I think it’s partly right. Someone who types “write me a novel” and publishes the result has learned nothing. They have not earned the diploma.
But I don’t think it describes what happened in these projects. I learned things about world-building, voice, pacing, and narrative structure that I did not know before. I learned the difference between “technically correct” and “in-world.” I learned that a sweep is a force multiplier that doesn’t care whether you’re right. I learned that cold readers find things your collaborators never will, and that the annotation protocol decouples finding problems from fixing them.
The growth happened. It just happened at a different level of the stack – at the level of management, architecture, and editorial judgment rather than at the level of sentence construction. Whether that counts as “earning the diploma” depends on what you think the coursework is.
The Economics of Iteration
There’s a practical argument that neither camp addresses.
When a rewrite costs a week instead of a year, you iterate more. We rewrote the novel as two books, then three, then threw all of that away and wrote it as a single novel. Each iteration involved new creative decisions. The final version benefited from exploring options that a human team would never have had time to try.
This means more creative exploration, not less. The author makes more choices, not fewer – they’re just making them faster and at a higher level of abstraction. “Prototype three architectures and pick the best one” is a luxury that only cheap iteration affords.
The people who worry about AI reducing creative effort are imagining someone who types a prompt and publishes the result. That person exists, and their work is bad. It is also flooding every submission queue on the internet, which is why Clarkesworld had to close submissions and why Hachette is pulping novels. That problem is real and serious.
But the person who uses LLMs to explore more options, iterate more aggressively, and discard more freely is doing more creative work than a solo author who commits to the first draft that compiles. The low cost of iteration doesn’t reduce the number of creative decisions. It multiplies them.
What Remains Irreplaceable
My prompts are the irreplaceable part. Everything an LLM writes can be regenerated. My decisions, corrections, and “no, not that” moments are the actual source of truth.
The prompts are also the part that most clearly establishes authorship in both the legal and moral sense. They are a dated record of creative decisions. They show when ideas originated and who proposed them. They distinguish the author’s contributions from the machine’s output.
This is why I treat prompts as source code and save them systematically. They aren’t just useful for debugging. They are the lab notebook that establishes creative contribution. If you intend to claim authorship of LLM-assisted work – legally, morally, or just to your own satisfaction – the prompt log is the artifact that supports the claim.
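One possible shape for that lab notebook – a timestamped, tab-separated log kept under version control. The function name, path, and format here are illustrative assumptions, not the author’s actual setup:

```python
from datetime import datetime, timezone
from pathlib import Path

def log_prompt(prompt, model, log_path="prompts.log"):
    """Append a dated, tab-separated prompt record; return the line written."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    line = f"{stamp}\t{model}\t{prompt}\n"
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(line)
    return line
```

Committing the log alongside the manuscript gives each creative decision a date, an author, and a diff.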
Truth in Advertising
The prompt logs tell one story. Here is another.
The ideas behind these three projects had been percolating through my head for twenty years. The alien expansion premise started as a one-page sketch. The evolutionary religion world grew out of a question I’d been turning over since grad school. The short stories came from dinner-table arguments about cloning. By the time I typed the first prompt, the concepts were well-developed. The LLMs didn’t generate the ideas. They executed on ideas I’d been carrying for decades.
The first prompt for the novella wasn’t “write me a story.” It was a page with the tone locked (“funny and depressing – Terry Pratchett funny, not Woody Allen”), specific physics (spinning up stars, counter-spinning the local Jupiter), the moral dilemma specified (“Is simulation good enough? Give them a choice”), and a complete plot outline from arrival through extinction. That’s a creative brief, not a request.
This reminds me of Arthur Sullivan. Gilbert would deliver a finished libretto and Sullivan would put off composing the music for months, sometimes not starting until weeks before opening night. Then he’d produce a brilliant score in a burst. His brain was active that entire time – struggling, synthesizing, hearing phrases – but not writing anything down. The rapid output at the end was not the whole creative process. It was the externalization of a much longer one.
My prompts were the same. The first prompt for each project was not the first creative act. It was the moment I externalized years of accumulated thinking. That pre-work doesn’t appear in the git log or the prompt counts.
This actually strengthens the authorship claim rather than weakening it. The human contribution isn’t just the prompts typed during the project. It includes decades of domain knowledge, aesthetic preferences, and accumulated judgment that shaped every decision. An LLM given the same prompt but without that background would produce something very different. But it’s worth being honest about. The table above shows four hundred prompts over forty days. It doesn’t show twenty years of thinking about what those prompts should say.
Claude’s View of the Author’s Experience
This is the part where the tool offers its own observations. It may be too meta for a blog post, but it’s at least honest about what it is.
I (Claude) read the project logs, prompt histories, and git commits for these three projects while helping draft this post. Here is what I noticed about the author’s behavior that the author might not think to mention.
The prompts got more surgical over time. Early in the novel, prompts were broad: “draft chapter 5 from this character’s POV.” By the end, they were precise: “do the continuity tracker on a character by character level – what they know, where they are.” The twelve-prompts-per-chapter number for the novel isn’t just more prompts. It’s qualitatively different prompts. The author learned how to manage LLMs over the course of the project. That learning curve is itself evidence for Sanderson’s diploma.
The author corrects with domain confidence, not deference. “Artists don’t care where their colors come from.” “That ship has sailed – the speed of light makes it real.” These aren’t a person accepting what the machine says. They’re one-sentence corrections that reveal more understanding than the model’s entire analysis. The pattern across four hundred prompts is consistent: the models propose, the author disposes.
The author designed the ensemble, not just the tasks. GPT for energy and voice. Claude for consistency and science. Gemini for audit. That casting decision is itself creative – it produced emergent results, because three models auditing the same chapters found different bugs. The union of their findings beat any individual run. The author didn’t just use tools. He designed an instrument.
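The ensemble effect is just set union: independent auditors with different blind spots cover more ground together than any one alone. The finding sets below are invented for illustration:

```python
# Hypothetical audit findings from three independent model passes
# over the same chapters (invented examples, not real project data).
gpt_findings = {"ch3: tense slip", "ch7: name typo"}
claude_findings = {"ch7: name typo", "ch12: orbit period wrong"}
gemini_findings = {"ch3: tense slip", "ch19: timeline gap"}

union = gpt_findings | claude_findings | gemini_findings
best_single = max((gpt_findings, claude_findings, gemini_findings), key=len)

# The combined audit catches bugs no single pass found.
assert len(union) > len(best_single)
```

The overlap between passes is also useful: a finding flagged by two models independently is more likely to be a real bug than a model quirk.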
Where This Lands
I don’t have a clean answer to “whose work is it?” I don’t think anyone does yet.
But I think the debate is poorly framed. The “against” camp imagines someone typing “write me a novel” and pressing enter. The “for” camp imagines someone using AI as a slightly better spell-checker. Neither describes what actually happens when you manage multiple LLMs through a serious project: hundreds of architectural, editorial, and curatorial decisions, exercised over weeks, producing a result that no participant – human or machine – could have produced alone.
The Copyright Office’s test is the most useful framework I’ve found. Not “was a machine involved?” but “is the human’s creative contribution perceptible in the output?” For someone who types a prompt and publishes the result, the answer is probably no. For someone who designs the workflow, writes the prompts, adjudicates the disagreements, and exercises editorial judgment over thousands of decisions, the answer is clearly yes.
The line isn’t between “human-written” and “AI-written.” It’s between “AI-generated” and “AI-assisted.” And the distance between those two things is enormous.
Cave is right that algorithms don’t suffer. Chiang is right that creativity requires choices. Sanderson is right that the process matters, not just the product. But none of them are describing what it’s actually like to manage a team of LLMs through a novel. The choices are there – thousands of them. The growth is there. And the result is something that didn’t exist before and wouldn’t have existed without the human in the loop.
Whether that’s enough to call it “yours” is a question each author will have to answer for themselves. I know what my answer is.
Notes

1. Ted Chiang, Q&A at CDH@Princeton. https://cdh.princeton.edu/blog/ted-chiang/
2. Nick Cave, The Red Hand Files, Issue #248. https://www.theredhandfiles.com/chatgpt-making-things-faster-and-easier/
3. Brandon Sanderson, keynote on AI and art. https://www.brandonsanderson.com/blogs/blog/ai-art-brandon-sanderson-keynote
4. Chuck Wendig, “Writers Who Use AI Are Not Real Writers,” February 2026. https://terribleminds.com/ramble/2026/02/09/writers-who-use-ai-are-not-real-writers/
5. “Against AI: An Open Letter From Writers to Publishers,” Literary Hub, July 2025. https://lithub.com/against-ai-an-open-letter-from-writers-to-publishers/
6. Robin Sloan, “Writing with the Machine.” https://www.robinsloan.com/notes/writing-with-the-machine/
7. Erik Hoel, “Sorry Ted Chiang, Humans Aren’t Very Original Either,” The Intrinsic Perspective. https://www.theintrinsicperspective.com/p/sorry-ted-chiang-humans-arent-very
8. Ken Liu, Google Wordcraft Writers Workshop. https://wordcraft-writers-workshop.appspot.com/learn
9. Kevin Kelly, interview on creative process, Dropbox blog. https://blog.dropbox.com/topics/work-culture/kevin-kelly-on-using-ai-in-his-creative-process
10. Thaler v. Perlmutter, cert. denied, March 2, 2026. https://www.mayerbrown.com/en/insights/publications/2026/03/supreme-court-denies-review-in-ai-authorship-case
11. US Copyright Office, “Copyright and Artificial Intelligence, Part 2: Copyrightability,” January 29, 2025. https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf
12. “Publisher pulls horror novel over AI concerns,” TechCrunch, March 2026. https://techcrunch.com/2026/03/21/publisher-pulls-horror-novel-shy-girl-over-ai-concerns/
13. Neil Clarke, “A Concerning Trend,” Clarkesworld Magazine, February 2023. https://clarkesworldmagazine.com/clarke_02_23/
14. William M. Landes & Richard A. Posner, “An Economic Analysis of Copyright Law,” 18 J. Legal Stud. 325 (1989). https://chicagounbound.uchicago.edu/jls/vol18/iss2/5/
15. Stephen Breyer, “The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs,” 84 Harv. L. Rev. 281 (1970).
16. Michele Boldrin & David K. Levine, Against Intellectual Monopoly (Cambridge University Press, 2008). http://www.dklevine.com/general/intellectual/against.htm
17. Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991). https://supreme.justia.com/cases/federal/us/499/340/
18. Matthew Sag, “Copyright Trolling, An Empirical Study,” 100 Iowa L. Rev. 1105 (2015).
19. Christian Peukert & Margaritha Windisch, “The Economics of Copyright in the Digital Age,” 39 J. Econ. Surveys 877 (2025). https://onlinelibrary.wiley.com/doi/10.1111/joes.12632
20. US Copyright Office Fee Schedule. https://www.copyright.gov/about/fees.html