Reddit is rolling out new human-verification requirements to combat a surge of automated accounts, escalating its ongoing battle against the kind of bot activity that recently forced competitor Digg to shut down.
New Labels and Verification Protocols
The platform will now label automated accounts that provide specific services to users, mirroring the “good bot” tagging system currently used on X. Furthermore, accounts flagged by Reddit’s detection systems as suspicious will be required to undergo a human verification process to remain active.
Reddit clarified that this is not a platform-wide requirement. Instead, the check is triggered only when signals such as posting speed or technical markers suggest non-human activity. Accounts that fail to verify their human status may face restrictions.
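For illustration only, here is a minimal sketch of how a behavior-based trigger of this kind can work. The names, thresholds, and logic below are hypothetical, not Reddit's actual detection system:

```typescript
// Hypothetical illustration of a behavior-based verification trigger.
// All names and thresholds here are invented for this sketch.

interface AccountActivity {
  username: string;
  postTimestampsMs: number[]; // recent post times in epoch milliseconds
}

// Flag an account when the median gap between consecutive posts is too
// short to be plausible for a human writing and submitting content.
function looksAutomated(activity: AccountActivity, minMedianGapMs = 2_000): boolean {
  const ts = [...activity.postTimestampsMs].sort((a, b) => a - b);
  if (ts.length < 5) return false; // too few posts to judge

  const gaps = ts.slice(1).map((t, i) => t - ts[i]).sort((a, b) => a - b);
  const medianGap = gaps[Math.floor(gaps.length / 2)];
  return medianGap < minMedianGapMs;
}

// Flagged accounts are routed to a human-verification check rather than
// banned outright, matching the flow described above.
function routeAccount(activity: AccountActivity): "ok" | "verify-human" {
  return looksAutomated(activity) ? "verify-human" : "ok";
}
```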
Privacy-First Verification Methods
To confirm humanness rather than identity, Reddit is leveraging third-party tools, including passkeys from Apple and Google, YubiKey hardware keys, and biometrics like Face ID. In certain jurisdictions, such as the U.K., Australia, and some U.S. states, government IDs may be used to comply with local age-verification regulations, though Reddit emphasized this is not its preferred approach.
“If we need to verify an account is human, we’ll do it in a privacy-first way,” Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. “Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique.”
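Passkey checks of this kind are built on the standard WebAuthn browser API. The sketch below shows the general flow under stated assumptions; the relying-party domain and server endpoint are hypothetical placeholders, and this is not Reddit's implementation:

```typescript
// Browser-side sketch of a passkey ("humanness") check using the standard
// WebAuthn API. Not Reddit's implementation; the rpId and endpoint below
// are hypothetical placeholders.

async function verifyHumanWithPasskey(challenge: Uint8Array): Promise<boolean> {
  // Ask the authenticator (Face ID, a YubiKey, a platform passkey) to sign
  // a server-issued challenge. userVerification: "required" forces a
  // biometric or PIN gesture, which is what proves a person is present.
  const credential = await navigator.credentials.get({
    publicKey: {
      challenge,
      rpId: "example.com",          // hypothetical relying-party domain
      userVerification: "required",
      timeout: 60_000,
    },
  });

  if (!(credential instanceof PublicKeyCredential)) return false;
  const assertion = credential.response as AuthenticatorAssertionResponse;
  const toB64 = (buf: ArrayBuffer) =>
    btoa(String.fromCharCode(...new Uint8Array(buf)));

  // The signed assertion goes back to the server, which checks the signature
  // against the public key registered for this account. No ID document
  // changes hands: the server learns only that a person completed the
  // gesture, not who that person is.
  const response = await fetch("/api/verify-assertion", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      credentialId: credential.id,
      clientDataJSON: toB64(assertion.clientDataJSON),
      authenticatorData: toB64(assertion.authenticatorData),
      signature: toB64(assertion.signature),
    }),
  });
  return response.ok;
}
```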
The Growing Threat of the “Dead Internet”
This initiative addresses a broader web-wide problem: automated accounts used for political influence, misinformation, and fake marketing. Cloudflare data projects that bot traffic, once web crawlers and AI agents are counted, will surpass human traffic by 2027.
Reddit has become a popular destination for bad actors seeking to manipulate narratives or shill products. Additionally, there are concerns that bots are flooding the site with questions specifically to generate training data for AI models. Co-founder Alexis Ohanian has even addressed the “dead internet theory,” which posits that the majority of online interaction is now synthetic.
The Future of Bot Management
While AI-generated content is not strictly against Reddit’s policies, the company is committed to purging malicious spam, currently removing approximately 100,000 accounts per day. Huffman noted that while current verification tools are a necessary step, the company is looking toward better, more decentralized solutions.
“The best long-term solutions will be decentralized, individualized, private, and ideally not require an ID at all,” Huffman added. Developers managing legitimate, automated services can find more information on the new “APP” labeling system within the r/redditdev community.
