Wikipedia Article Approval Rates: What 1,000+ AfC Submissions Reveal
Wikipedia is more than just an online encyclopedia: it's the backbone of search visibility, a primary training source for LLMs, and a validation of a topic's notability. There is enormous demand for new articles, especially from brands and startups looking to influence how they appear in search. To address this interest, a diverse ecosystem of vendors—ranging from prominent PR firms to fly-by-night operatives—has emerged to handle the research, drafting, and submission of new entries.
Despite these vendors' promises, getting a new Wikipedia article approved remains extraordinarily difficult.
Lumino's exclusive audit of 1,009 Articles for creation (AfC) submissions on the English-language Wikipedia offers a rare, data-driven look at what actually happens at the platform’s front door. The findings challenge common assumptions about how Wikipedia works and reveal deeper structural dynamics shaping the encyclopedia's growth and guidelines.
Key Findings from 1,009 Wikipedia Submissions:
68 percent overall rejection rate: More than two-thirds of the submissions are declined, highlighting just how difficult it is to get a new Wikipedia article approved.
Most submissions lack notability: The most frequent reason (57 percent) drafts are rejected is that they do not meet Wikipedia's notability guidelines, meaning the submission's content and sourcing do not demonstrate that the topic is significant enough to warrant a standalone encyclopedia entry.
Approval rates vary by article type: AfC submissions in the Arts & Culture category have the highest approval rate at 48 percent. Submissions in the Business category have the lowest approval rate at 16 percent. Startups and other tech companies fare especially poorly, with only a 6 percent approval rate. Business executives likewise have a low approval rate: only 12 percent.
AI use is a big problem: In March 2026, Wikipedia’s editor community took the bold step of banning the use of AI to generate new articles or rewrite existing ones. Our dataset underscores the challenge this technology poses to the encyclopedia: 16 percent of AfC drafts were flagged for AI/LLM concerns.
Month-long average review time: Approved submissions have an average review time of 30.5 days, while declined submissions average slightly longer at 31.7 days.
Limited number of reviewing editors: Nearly 40 percent of the submission reviews in our dataset were handled by 11 especially prolific editors.
The full report answers the following questions:
What are the approval rates of drafts submitted to AfC?
What draft topic areas are most likely to be approved or rejected? Why do submissions about entrepreneurs, startups, and tech-based businesses face such scrutiny?
How do editors determine if a subject is notable? Is that analysis subjective?
Do resubmitted drafts have a higher or lower approval rate?
How do editors apply LLM/AI content guidelines? How widespread is LLM/AI usage across AfC drafts?
How does Wikipedia's deletionist vs inclusionist debate impact content guidelines?
Why did Manon Bannerman from Katseye keep getting declined?
The report is organized as follows:
- We provide a high-level summary of how Wikipedia's Articles for creation (AfC) workflow operates, why it's so important, and what our analysis of 1,009 submissions uncovered.
- Wikipedia terminology can be difficult to parse, especially as there are numerous closely related terms with subtle differences in meaning. We offer definitions for key terms and provide examples of how they're deployed by editors.
- We summarize why there is such a large demand for new Wikipedia articles and how vendors attempt to satisfy that market need. Then we explain our own role in that ecosystem and what we wanted to know about the AfC process.
- This section is split into two parts: (1) Wikipedia’s methodology for reviewing AfC submissions, and (2) our methodology for compiling a dataset of submissions and assessing relevant data points.
How we compiled and assessed AfC submissions
To ensure we captured only viable submissions, we pulled our entries from the "1 day ago" bucket of the AfC pending submissions by age category page. We determined that if a submission survived the first 24 hours, it had typically passed the quick-fail assessment and was therefore deemed sufficiently viable to warrant a standard content review.
We pulled "1 day ago" submissions from two date ranges in 2025—September 22 to September 28 and October 25 to October 31—for a total of 1,009 submissions. Pulling in two datasets separated by a month ensured that we captured a range of submissions (since some submissions come in groups linked to the same core subject) and a range of reviewing editors (as some reviewing editors are especially active over short periods of time). We believe these 1,009 submissions constitute a representative, methodologically sound sample.
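For readers who want to replicate this kind of pull, the sketch below queries the public MediaWiki API for everything in the relevant age bucket. It is a minimal illustration rather than our production tooling: the category name and 500-item page size are assumptions, and a real harvest would also need to respect the API's usage etiquette.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
# Assumed category name for illustration; it mirrors the "1 day ago"
# bucket described above.
CATEGORY = "Category:AfC pending submissions by age/1 day ago"

def list_pending_drafts(category: str) -> list[str]:
    """Return the titles of all pages currently in the given category."""
    titles = []
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "500",
        "format": "json",
    }
    while True:
        data = requests.get(API, params=params).json()
        titles += [m["title"] for m in data["query"]["categorymembers"]]
        if "continue" not in data:       # no further pages of results
            break
        params.update(data["continue"])  # follow the continuation token
    return titles

if __name__ == "__main__":
    drafts = list_pending_drafts(CATEGORY)
    print(f"{len(drafts)} drafts currently in the one-day bucket")
```

Because the category only reflects what is pending at the moment of the query, a multi-day sample like ours requires repeated snapshots.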
We assessed and coded the drafts using the following eight categories, which together form one record per submission (sketched in code after the definitions below):
Submission date
The specific date when the draft was submitted.
Submission outcome
This assessment captures the action taken on the specific submission, not the outcome of the draft to date. Some of the declined submissions we assessed were later improved, resubmitted, and approved. Conversely, some of the approved submissions were later deleted from the mainspace. But we only considered the specific action (approval, declination, deletion, or other) taken on the time-dated submission under review.
Approved meant that a reviewing editor approved the submission by moving it to the mainspace.
Declined meant that a reviewing editor used a draft decline template to indicate the submission wasn't ready for Wikipedia.
Deleted meant that the entire submission draft was deleted from Wikipedia, usually because of copyright violations or "unambiguous advertising or promotion." In these instances, reviewing editors decided that Wikipedia should no longer host the content, even in draft form.
We marked drafts as Other when it was not possible to determine the action taken on the submission or the action taken was not applicable—e.g., the AfC submission template was removed from the draft before a review could proceed.
Reason for decline (if applicable)
If a draft was declined, we noted the main decline reason provided by the reviewing editor (e.g., "This draft's references do not show that the subject qualifies for a Wikipedia article"). In some instances, a secondary reason was also provided, but we only captured the first reason.
Reviewing editor
The reviewing editor who approved or declined the submission.
Approval or rejection date
The date on which the approval or rejection was made.
Resubmission status
We noted (yes or no) whether a draft had previously been submitted and rejected. We only looked at previous AfC submissions. In some cases the draft had previously been published and then reverted to draft status; we did not count that as a previous submission, since it wasn't made through AfC.
Draft category
There are two main ways to categorize drafts:
Sorting by primary characteristic (taxonomy / classification)
This method involves placing an item into a single, predetermined, and often hierarchical group based on its most dominant or defining trait.
Tagging all relevant attributes (faceted classification / tagging)
This approach involves assigning multiple, descriptive labels (tags) to an item, acknowledging that an item can belong to several categories at once.
We went with the primary characteristic approach because that's generally (although not always) how Wikipedia handles articles; the contrast between the two approaches is illustrated in the sketch below. This Project page shows how AfC submissions are sorted into different categories by a bot.
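The contrast is easy to see in miniature. Both records below are hypothetical, with a made-up draft title standing in for a real submission:

```python
# Primary characteristic (our approach): one authoritative label per draft.
draft_taxonomy = {"title": "Draft:Acme Robotics", "category": "Business"}

# Faceted tagging: the same draft carries every label that applies.
draft_tagged = {
    "title": "Draft:Acme Robotics",
    "tags": {"Business", "Technology", "Startup"},
}
```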
We classified drafts into five main subjects (Arts & Culture, Business, Person, Organization, and Other). Our subject categories do not entirely align with Wikipedia's categories, most notably because we opted to split businesses and organizations.
Even though businesses and organizations fall under the same notability guidelines (WP:NCORP), in our experience editors assess for-profit businesses differently from other organizations, especially as "organization" is a broad catch-all for everything from religious groups to armed forces to environmental non-profits.
We also created subcategories for each of the five main categories. We discuss these subcategories in greater detail in the Findings section.
Wikipedia is a general-interest encyclopedia that includes entries about nearly every possible subject. Trying to accurately sort draft submissions using consistently applied criteria proved difficult, and some of the subcategory parameters are fuzzy by necessity.
We know there’s significant interest in how Wikipedia handles drafts related to businesses and persons, and those categories (and associated subcategories) were thankfully easier to define and categorize.
Evidence of LLM usage
This was a yes/no assessment of whether editors had left comments indicating concerns about LLM/AI usage.
While LLM tools are not strictly banned for all Wikipedia tasks, editors are forbidden from using them to generate new, unverified articles from scratch. Given this restriction, we wanted to know if reviewing editor feedback, either in the form of a decline notification or a specific draft comment, indicated suspicion of LLM usage (or AI usage more generally).
For this category, we did consider feedback on previous submissions of the same draft, but we ignored feedback that occurred after the specific submission was approved or declined.
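Taken together, these eight categories amount to one structured record per submission. The sketch below shows one plausible shape for such a record; the field names and types are our own shorthand for this report, not an export format from the dataset itself:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class Outcome(Enum):
    APPROVED = "approved"   # moved to the mainspace by a reviewer
    DECLINED = "declined"   # a decline template was applied
    DELETED = "deleted"     # removed from Wikipedia entirely
    OTHER = "other"         # action unknown or not applicable

@dataclass
class AfcSubmission:
    submission_date: date
    outcome: Outcome
    decline_reason: Optional[str]    # populated only for DECLINED
    reviewing_editor: Optional[str]
    decision_date: Optional[date]    # approval or rejection date
    is_resubmission: bool            # a prior AfC submission existed
    category: str                    # one of our five main subjects
    llm_flagged: bool                # an editor raised LLM/AI concerns
```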
- We summarize our findings across data analysis categories: Overall approval rate, Reason for decline, Approval rates by category, Reviewing editors, Time to review, Resubmission status, and LLM usage.
- We provide a deep dive into two core areas that stood out in the data: the widespread use of LLM tools, a new problem facing Wikipedia, and the subjective nature of notability, a contentious subject since the encyclopedia's launch in 2001.
- Articles for creation serves as a human check on AI-generated hallucinations and prose, and on low-quality, poorly sourced content more generally. But like all human decision-making, there is an interpretive element to the AfC assessments—especially regarding notability—that leads to unpredictable approval decisions.