| The following is a proposed Wikipedia policy, guideline, or process. The proposal may still be in development, under discussion, or in the process of gathering consensus for adoption. |
| This page in a nutshell: Do not use large language models (LLMs) to write Wikipedia articles or to add unreviewed content to existing articles. |
A large language model (LLM) is a specific kind of program that can generate natural-language text either in response to prompts or by transforming existing text. This includes tools marketed as “AI chatbots” or “AI writing assistants”, such as ChatGPT, Google Gemini, Microsoft Copilot, and similar services, whether used in a browser, an app, or built into other software. It does not cover spellcheckers, grammar checkers, or basic autocomplete.
This guideline applies to all models and all LLM-generated output.
More information about LLMs, including why their usage has been found to be problematic on Wikipedia, can be found at Wikipedia:Large language models.
Do not use an LLM to add unreviewed content
Editors should not use an LLM to generate content for Wikipedia unless they have thoroughly reviewed and verified the output. LLMs are prone to hallucinating facts and citing non-existent sources. Using an LLM to generate new articles or drafts from scratch, or to expand existing articles, is not permitted, even if you plan to review the output later. Do not use LLMs to write comments or replies in discussions.
Editors should not:
- Paste raw or unreviewed LLM output as a new article or as a draft intended to become an article.
- Paste raw or unreviewed LLM output into existing articles as new or expanded prose.
- Paste raw or unreviewed LLM output as new discussions or replies to existing discussions.
Where content is largely or entirely based on raw or unreviewed LLM output, it may be draftified, stubified, nominated for deletion, collapsed, or removed entirely, especially where the content is unverifiable, fabricated, or otherwise non-compliant with existing Wikipedia policies.
Repeatedly making problematic LLM-assisted edits may be treated as a competence issue and can result in the editor being blocked.
Editors are strongly discouraged from using LLMs. If used at all, LLMs should assist only with narrow, well-understood tasks such as copyediting. New editors should not use LLMs when editing Wikipedia.
If an experienced editor nonetheless chooses to use an LLM, they must:
- Not use it to generate the bulk of a new article or major expansion.
- Check the output they intend to use against suitable reliable sources.
- Ensure the output complies with existing Wikipedia policies.
- Not treat the output as authoritative or as a substitute for their own judgement.
If the editor cannot confidently check and correct the output, they should not use an LLM for that task. LLMs should not be used for tasks in which the editor has little or no independent experience, and, as noted above, should not be used to write comments or replies in discussions.
Disclosure and responsibility
Editors should disclose LLM assistance in the edit summary (e.g. “copyedited with the help of ChatGPT 5.1 Thinking”). This helps other editors understand and review the edit.
Regardless of disclosure:
- Editors are wholly responsible for the content they add or change, including LLM-assisted text.
- Disclosure does not make non-compliant content acceptable.
- “The AI wrote it” is not a defence for violations of Wikipedia policy or guideline.
Handling existing LLM-generated content
Where content appears to be substantially or wholly LLM-generated, editors may:
- Remove the problematic material outright, especially in biographies of living persons.
- Replace it with sourced, policy-compliant content.
- Tag the page as LLM-generated using Template:AI-generated.
- Draftify, stubify, or nominate for deletion under the usual processes.
- Mark the page for speedy deletion under criterion G15.