I get why this question keeps coming up: AI can write, rewrite, and even suggest arguments. So why can’t it just be listed as a co-author?
Here’s my thesis, based on what the major research ethics bodies and copyright authorities say: under current rules in most places, AI is treated as a tool—not an author. That’s not just a philosophical stance. It’s tied to how authorship, responsibility, and copyright ownership are defined in practice.
To make this concrete, I’m going to walk through what COPE and the ICMJE require (and what they don’t), what publishers like Nature and JAMA say in their author guidance, and how the US Copyright Office frames AI-generated material. Then I’ll share what I actually do to stay compliant when I’m using AI in a workflow.
Key Takeaways
- Most policy frameworks require authors to be human and capable of accountability (COPE/ICMJE-style criteria). That’s why AI typically can’t be listed as a co-author, even if it generated or assisted with text.
- Disclosure is the real battleground: if AI materially helped (drafting, editing, text generation), journals increasingly expect you to disclose that in the submission process or acknowledgements.
- US copyright law turns on “human authorship.” The US Copyright Office’s guidance says material generated solely by AI isn’t copyrightable; protection depends on a human contributing sufficient creative input. (That’s a different question from “public domain,” and it matters.)
- Publishers care about accountability: if something is wrong—plagiarism, data problems, an IP issue—someone has to be responsible. AI doesn’t have legal personhood, so the responsibility stays with the human authors.
- Practical compliance beats speculation: document your AI use, verify what each journal requires, and don’t treat an AI output as automatically “safe” to publish without checking rights and originality.

1. Can AI Be a Co-Author? What the Current Rules Say
The short version: most current authorship rules are built around humans, and AI doesn’t fit the legal/ethical requirements that come with that role.
On the research ethics side, bodies like COPE and the ICMJE don’t treat authorship as “who helped write.” They treat it as “who is accountable for the work.” That’s why, in practice, journals look for authors who can take responsibility for the integrity of the content and for compliance with publication standards.
For example, the ICMJE “Defining the Role of Authors and Contributors” framework (widely used across medicine and beyond) is explicit that authorship is tied to substantial contributions and accountability. In other words, it’s not just about generating text—it’s about being able to stand behind the paper.
COPE’s guidance on authorship and accountability similarly centers on human responsibility: if you can’t sign off on the work in a meaningful way, you can’t be held to the standard the journal expects.
Now, about the “96% of publishers” claim: I’m not going to repeat a number unless there’s a named study, a date, and a methodology. In my experience, that kind of statistic floats around without a primary source attached, and you deserve better than a vague percentage.
What I can say confidently is this: you’ll almost always be required to disclose AI use, and you’ll almost always be told not to list AI as an author. That pattern shows up across many publishers’ AI disclosure statements and submission checklists.
And yes—Nature and JAMA are good examples of the “disclose, don’t credit” approach. Their policies focus on transparency about AI assistance and keep authorship reserved for human contributors.
2. What Major Publishers Disallow for AI Co-Authorship
When you read publisher policies closely, the reasoning is pretty consistent:
- Authorship is accountability, not just output.
- AI can’t verify claims or take responsibility the way a human author can.
- Journals need a responsible party if there’s an error, a dispute, or an IP problem.
Here’s what that looks like in real submission requirements. Many journals now ask you to describe tools used in writing or editing. In many cases, the “AI disclosure” section is separate from the “authorship” section—because it’s treated as assistance, not contribution to authorship.
In practice, if you submit a manuscript where AI generated major sections (instead of supporting minor editing), you can run into two issues:
- Policy mismatch: the journal may require a disclosure statement and may not allow AI to be listed as an author.
- Substantive responsibility mismatch: reviewers and editors expect authors to be able to explain and defend the work. If you can’t, that’s a problem regardless of what the text says.
Let me give you a scenario I’ve seen play out (and that you can plan for). Suppose an author uses an AI tool to draft the entire “Introduction” and “Discussion,” then lightly edits it. If the journal later asks for details about contributions, the human authors have to show what they actually did: how they shaped arguments, verified citations, ensured accuracy, and made final decisions.
That’s why publishers frame AI as a tool—like a reference manager, a statistical package, or a spell-checker. Helpful, but not a person you can hold accountable.
If you want to double-check quickly, look for these exact policy cues on a journal’s site (a small keyword-scan sketch follows this list):
- “Authors must be individuals” (or similar language)
- AI disclosure requirements (often in “Manuscript Preparation” or “Ethics” sections)
- Explicit statements about authorship eligibility (typically human-only)
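To make that check concrete, here’s a minimal sketch of the keyword scan in Python. The URL and cue phrases are illustrative assumptions, not any real journal’s wording; a hit just tells you where to read more closely, and a miss doesn’t mean a policy doesn’t exist.

```python
# Hypothetical sketch: scan a journal policy page for authorship/AI cues.
# The URL and cue phrases below are examples, not any journal's real policy.
import urllib.request

POLICY_CUES = [
    "authors must be individuals",
    "artificial intelligence",
    "large language model",
    "generative ai",
    "authorship eligibility",
]

def find_policy_cues(url: str) -> list[str]:
    """Download a policy page and return the cue phrases that appear in it."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8", errors="replace").lower()
    return [cue for cue in POLICY_CUES if cue in text]

hits = find_policy_cues("https://example.com/journal/author-guidelines")
print("Cues found:", hits or "none; read the page manually")
```

Either way, treat a script like this as a pointer: the authoritative answer is always the policy text itself.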
3. Legal Viewpoints on AI and Authorship Rights
Copyright law is where things get especially tricky, because authorship and copyright ownership aren’t the same as “who wrote the words.” They’re tied to legal definitions of authorship and what kind of creative contribution the law recognizes.
United States (US): The US Copyright Office has taken a clear position on AI-generated material. In its guidance and recent decisions, the Copyright Office emphasizes that copyright protection requires human authorship. If a work is created solely by AI (with no sufficient human creative input), it generally won’t be copyrightable.
So no, “public domain” isn’t a blanket label you can slap on every AI output. What matters is the legal test: was there enough human creative contribution to qualify as “authorship” under the Copyright Office’s standards? If not, you’re not getting copyright protection the way you would for a human-authored work.
United Kingdom (UK) and EU: In many European jurisdictions, the focus is also on human creativity and the legal framework for authorship/rights holders. The details can vary by country and by the type of work, but the direction is similar: AI outputs don’t automatically become protected works just because they were generated.
Liability and infringement risk: If an AI-assisted paper contains an error or infringes someone’s IP, the journal and courts still need a responsible human party. That’s not just a “maybe” issue—it’s a practical necessity. Someone has to be accountable for verification, citation accuracy, and compliance.
That’s why the legal viewpoint tends to reinforce the editorial viewpoint: AI may produce text, but humans remain the ones who can be held responsible for what’s submitted and published.
If you want a policy anchor for your own research workflow, use primary sources first: the US Copyright Office guidance, and the journal’s “AI disclosure” and “authorship criteria” pages. Don’t rely on secondary summaries unless they link back to the original text.
4. Why AI Cannot Act as a True Co-Author
This part sounds obvious, but I think it’s worth spelling out because it explains the “why” behind the rules.
A true co-author isn’t just someone who contributed words. They’re someone who can:
- make and defend creative/interpretive decisions
- take responsibility for the accuracy and integrity of the work
- respond to allegations (plagiarism, misconduct, data issues, IP claims)
AI doesn’t have intent. It doesn’t understand what it’s doing the way a human author does. It doesn’t have moral agency or legal personhood. And it can’t be held to an accountability standard the way human authors can.
Here’s the ethical tension: if you credit AI like a collaborator, you’re implying it shares responsibility. But it doesn’t. You still, as the human author, own the submission.
Also, even when AI outputs are impressive, they don’t automatically solve the core academic problem: verification. Citations can be wrong. Summaries can be misleading. “Fluent” text isn’t the same thing as correct research.
So in my view, the clean framing is: AI can be an assistant, a drafting aid, or a brainstorming tool. But authorship—especially in scholarly publishing—requires human accountability.

5. Ethical and Practical Guidelines for Using AI in Writing
Okay—here’s the part that actually helps when you’re staring at a journal submission form and wondering what to write.
1) Treat disclosure like a checklist, not an afterthought. Before I run AI on anything, I open the target journal’s “AI disclosure” instructions (or the closest equivalent). Then I keep a running note of what the AI did.
2) Keep records of prompts and outputs. This doesn’t mean you need to save every single message forever. But I do keep the following (there’s a minimal logging sketch after this list):
- the prompt(s) used for major drafting or rewriting
- the model/tool name and version (if available)
- what I changed afterward (especially anything affecting claims, structure, or citations)
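Here’s a minimal sketch of what that record-keeping can look like, assuming a JSON Lines file per manuscript. The file name and field names are my own convention, not anything a journal mandates; adapt them to what your target journal’s disclosure form actually asks for.

```python
# A minimal AI-use log, one JSON record per AI-assisted step.
# File name and field names are hypothetical conventions, not a journal requirement.
import json
from datetime import datetime, timezone

LOG_PATH = "ai_use_log.jsonl"

def log_ai_use(tool: str, version: str, prompt: str,
               purpose: str, human_changes: str) -> None:
    """Append one record describing a single AI-assisted step."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                    # model/tool name
        "version": version,              # if the vendor exposes one
        "prompt": prompt,                # prompt used for major drafting/rewriting
        "purpose": purpose,              # what the output was used for
        "human_changes": human_changes,  # what you changed afterward
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use(
    tool="ExampleLLM", version="2025-01",
    prompt="Turn these rough notes into an outline for the Discussion...",
    purpose="first-pass outline of the Discussion section",
    human_changes="restructured the argument, cut two claims, verified all citations",
)
```

When a submission form asks what the AI did, you can summarize straight from this log instead of reconstructing from memory.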
3) Verify citations and factual claims. If AI proposes references, I check them. Every time. I don’t trust “sounds right” when the stakes are publication.
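One way to make that verification systematic: when suggested references carry DOIs, you can at least confirm each DOI resolves to a real record before you read the source. Here’s a hedged sketch using Crossref’s public REST API (api.crossref.org). A successful lookup only proves the record exists and shows its registered title; it doesn’t prove the source supports the claim being cited.

```python
# Sanity-check AI-suggested DOIs against Crossref's public REST API.
# A hit confirms the DOI is registered and shows its title; you still
# have to read the source and confirm it supports the claim.
import json
import urllib.request
from urllib.error import HTTPError

def lookup_doi(doi: str):
    """Return the registered title for a DOI, or None if Crossref has no record."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        titles = data["message"].get("title", [])
        return titles[0] if titles else "(no title on record)"
    except HTTPError:
        return None  # a 404 means the DOI isn't registered

for doi in ["10.1038/s41586-020-2649-2"]:  # replace with the DOIs you need to check
    title = lookup_doi(doi)
    print(doi, "->", title or "NOT FOUND: do not cite without manual checking")
```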
4) Don’t outsource judgment. AI can help with phrasing, organization, and brainstorming. But the intellectual decisions—what to argue, what to include, what to omit—should come from the human authors.
5) Respect licensing and originality. If an AI tool is trained on content you don’t have rights to reuse (or if it outputs text too close to copyrighted material), you can create risk. I always review the tool’s terms and avoid copying outputs verbatim when I can’t verify originality.
6) Use AI for targeted tasks. In my workflow, I prefer using AI for things like:
- turning rough notes into a clearer outline
- suggesting alternative phrasing (not generating final claims)
- helping draft a first-pass section that I then rewrite and verify
What I’ve noticed: journals don’t mind tools being used—they mind when authors can’t explain and defend how the final content was produced and verified.
7) Re-check COPE/ICMJE-style expectations before submission. COPE and ICMJE guidance tends to emphasize transparency and accountability. Even if you’re not in medicine, those principles show up in how editors evaluate misconduct and authorship disputes.
6. Future Trends and Changes in AI and Author Recognition
Will rules change? Probably. But the direction I expect isn’t “AI becomes an author tomorrow.” It’s more like: more disclosure standards, more auditing, and clearer attribution of human vs AI contributions.
Here are the trends I’m watching:
- Stronger disclosure requirements: more journals will require a specific statement about AI tools used in writing/editing.
- More structured contribution reporting: systems that force authors to describe what they did (and what the tool did).
- Policy harmonization attempts: editors and publishers are converging on “human authors only” for accountability, while still allowing AI tools under defined conditions.
- Copyright rules will keep centering human authorship: the US Copyright Office’s approach is likely to remain influential as other jurisdictions react.
Also, don’t ignore the practical reality: even if a court or legislature eventually recognizes new rights around AI-generated works, journals still have to decide who can be held responsible for content integrity. That’s a separate editorial problem.
So while AI may get more formal recognition in some contexts, the safest assumption for now is simple: human authorship remains the standard, and AI is disclosed as assistance.
FAQs
Can AI be listed as a co-author?
In most established research and publishing frameworks, AI can’t be listed as a co-author. Authorship is tied to human intellectual contribution and accountability, and AI doesn’t have legal responsibility or the ability to verify and defend the work in the way authors are expected to.
Do major publishers allow AI to be credited as an author?
Many major publishers do not allow AI to be credited as an author. Instead, they typically require disclosure of AI tools used for writing or editing, while reserving authorship for human contributors who take responsibility for the manuscript.
What does the law say about AI-generated writing?
Legal analysis generally centers on human authorship and liability. In the US, the Copyright Office’s guidance emphasizes that copyright requires human authorship: material generated solely by AI isn’t copyrightable, and protection depends on sufficient human creative input. In practice, that keeps rights and responsibility with human creators or organizations.
Why can’t AI act as a true co-author?
A true co-author must be able to take responsibility and be held accountable for the work. AI lacks legal personhood, intent, and accountability. So it can support drafting or editing, but it can’t fulfill the role authors are expected to play.