/r/golang draws a line on AI-generated projects
New rules aim to stop the subreddit from becoming a dumping ground for effortless LLM-generated tools.
/r/golang recently expanded the scope of their AI policy to cover projects that are submitted as links. Historically, they have allowed people to post their own personal projects to the subreddit if they might be interesting to the community in some way.
Here’s the most notable section of the new policy:
Amount of AI Coding
If your purpose is for review or feedback, please be clear about the amount of AI coding used, and if relevant, the amount of effort put into the project, which should be reflected in the project itself.
Using AI coding tools is not a disqualification for posting. However, in order to align the effort of creating a post-worthy project with reviewing it, the subreddit will remove posts for "vibe-coded" projects with little human input. This is not because such projects are "bad", but precisely because they are so easy to put out they are no longer noteworthy.
What happened?
The subreddit has been flooded with low-effort posts recently. Here is a post that lists some of the threads people are complaining about. Since some of the links have already been deleted and more may be in the future, here they are:
A “production ready” high-speed logger that couldn’t even be benchmarked because it had a memory leak (see the sketch after this list).
Something called “hands-on Go” that was an AI-slop repo.
A terminal-based notetaking app that seems to work, but was posted to Reddit with an LLM summary.
A monitoring tool with astroturfed support. It seems to be a real tool, but again the Reddit post was obviously generated by AI.
A web framework that looks vibe-coded.
A “production ready” 7000-line message queue that was ostensibly implemented in a single commit.
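To make the first complaint concrete, here’s a minimal, entirely hypothetical sketch (not code from any of the actual posts) of how a “high-speed” async Go logger can leak: messages get queued onto a channel that nothing ever drains, so memory and goroutines grow without bound and a benchmark never reaches a steady state.

```go
package main

import (
	"fmt"
	"runtime"
)

// Logger is a hypothetical "high-speed" async logger. The bug: messages
// are queued onto a channel, but no goroutine ever drains it, so every
// Log call permanently retains its message (and, once the buffer fills,
// a blocked goroutine as well).
type Logger struct {
	ch chan string
}

func NewLogger() *Logger {
	return &Logger{ch: make(chan string, 1024)}
}

func (l *Logger) Log(msg string) {
	select {
	case l.ch <- msg: // queued, never consumed
	default:
		// "Never drop a log line": park a goroutine that blocks forever.
		go func() { l.ch <- msg }()
	}
}

func main() {
	l := NewLogger()
	for i := 0; i < 50_000; i++ {
		l.Log(fmt.Sprintf("request %d handled", i))
	}
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// Both numbers scale linearly with call volume, so a testing.B
	// benchmark of Log would never stabilize.
	fmt.Printf("goroutines: %d, heap: %d KiB\n", runtime.NumGoroutine(), m.HeapAlloc/1024)
}
```

Bugs like this are trivial for an LLM to produce and invisible in a README skim; catching them is exactly the review effort the new policy is trying to account for.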
What’s interesting is that these all have different degrees of slop. Sometimes the objection is straightforward: the damn thing was obviously AI-generated and doesn’t work. But sometimes the objection is just that the post to Reddit was written with an LLM. It just goes to show: you can’t separate technical and social problems¹.
This adds to the burden of one of the most thankless jobs: Reddit moderator².
LLM-powered chatbots have obviously existed on Reddit as long as LLMs have been available, and bots have been creating accounts and spamming reposts or tried-and-true formulas for as long as I can remember³.
It’s already unpaid, most of your work is invisible, the visible parts mostly surface when the community is angry, and Reddit can take your mod position away at the drop of a hat. And now you need to scale your effort to catch LLM slop flooding your subreddit, on top of all of your previous moderation duties.
If you’re a moderator of a subreddit like /r/golang, how much do you care about bots in your comments? If you have a bunch of bots spewing gibberish, that’s pretty bad. But if they’re staying on topic, not making obvious errors, and following the community rules? Is it your job to play Turing Test detective? No. Past a certain quality level, it’s Reddit’s job to stop them, not yours. You’re already doing them a favor by moderating. Also, I can say from 18 years of experience that the bottom of a Reddit thread has always been a hive of scum and villainy. So just by virtue of being long-time Redditors, mods develop some immunity to whatever trash is happening, as long as the discussion is largely on topic and nobody is getting reported.
However, something has changed in the vibe-coding era. LLMs have gotten good enough that they can generate a fully functional website, library, anything really. They are fantastic at zero-to-one implementation. They can generate the READMEs, they can create documentation, they can write the Reddit post summarizing it all. And it’s all terrible.
I like how the /r/golang moderators approached the situation. It’s clear that something needed to be done, and they took a thoughtful approach. It’s also clear that LLMs are here to stay — especially for code generation — so by encouraging projects to be open about their LLM usage, they are leaning into the trend instead of futilely trying to block it.
One of my favorite essays of all time, “A Group Is Its Own Worst Enemy,” does a deep dive on online moderation and comes to much the same conclusion.
I don’t know how y’all do it.
³ Over/under 50%: how many false positives would you get if you banned everyone who ever asked about sex or “how do you feel about these current events?” on /r/askreddit?