Discourse is Not Going Closed Source

Cal.com has announced it is closing its codebase and will no longer be an open-source product. Its reasoning is that AI has made open source too dangerous for SaaS companies: code gets scanned and exploited by AI at near-zero cost, and transparency is becoming exposure.

I understand where this is coming from; the industry is changing fast. New AIs with new cybersecurity capabilities are being released every few weeks. It’s a scary world, and I agree completely that open-source companies need to adapt.

I do not agree that closing the source is the solution to the security storm that is upon us.

I do not agree it is the correct narrow decision for SaaS providers, and I do not agree it is the correct decision for the industry at large.

I want to be clear and firm about the position Discourse is taking. We are open source, we’ve always been open source, and we will continue to be open source.

Ever since Jeff, Robin, and I shipped the first commits to the Discourse repository on GitHub, over a decade ago, the repository has been licensed under GPLv2. And that’s not changing.

Cal.com’s position boils down to the claim that if attackers can read your code, AI will let them exploit it faster than you can harden or patch it, and the forced move is to hide the code so you can buy time. There’s truth to the threat – AI has changed the speed at which vulnerabilities can be discovered. Over the past few months, our team has used GPT-5.3 Codex, GPT-5.4, and Claude Opus 4.6 to find and address a large number of latent security issues in our open-source codebase.

OpenAI and Anthropic are both extremely concerned about this attack vector, and in response GPT-5.4-Cyber and Anthropic Mythos are being rolled out cautiously.

But I think the race to close software off misses something. Those same AI systems don’t actually need your source code to find vulnerabilities; they work against compiled binaries and black-box APIs.

Closed source has always been a weaker defense for SaaS than people want to admit. A web application is not something you ship once and keep hidden. Large parts of it are delivered straight into the user’s browser on every request: JavaScript, API contracts, client-side flows, validation logic, and feature behavior. Attackers can inspect all of that already, and AI makes that inspection dramatically cheaper. Closing the repository may hide some server-side implementation detail, but it does not make the system invisible. What it mostly does is reduce how many defenders can inspect the full picture.

The world’s most important internet infrastructure runs on open-source software, especially Linux. That code is exposed to constant scrutiny from attackers, defenders, researchers, cloud vendors, and maintainers across the globe. It is attacked relentlessly, but it is also hardened relentlessly. That is the real lesson of open source in security: transparency does not eliminate risk, but it enables a much larger defensive response.

AI does change the security calculus, but I still believe it favors open source. Yes, AI-powered scanning tools can now surface in hours the kinds of security issues that used to take human researchers weeks to uncover. In its research preview launch, OpenAI said Codex Security scanned more than 1.2 million commits across external repositories in a 30-day beta period and identified 792 critical findings and 10,561 high-severity findings.

That is a staggering volume of vulnerability discovery.

But the key question is: who gets to use those tools?

If your code is open source, your security team can scan it, your contributors can scan it, and independent researchers can scan it too. That does not guarantee defenders will always get there first, but it dramatically increases the number of people who can help find real problems early. If your code is closed, attackers can still study the product from the outside, through the browser, the API, the mobile client, and the behavior of the running system, while only your internal team gets direct access to the full code. That is not a reduction in exposure. It is a reduction in defensive capacity.

At Discourse, we’ve leaned into this reality. Our last monthly release included fixes for 50 security issues identified through multi-day scans using GPT-5.4 xhigh. Open source creates a useful urgency: when your code is public, you assume it will be examined closely, so you invest earlier and more aggressively in finding and fixing issues before attackers do.

In a closed-source environment, you may mistakenly think you are safe because nobody can look. Some fraction of those issues would still be sitting there, undiscovered by defenders and waiting for an attacker to stumble across them. That’s not a better scenario.

Discourse launched in 2013. Jeff Atwood, Robin Ward, and I started it because the state of community software was embarrassing. Forums were running on decade-old PHP codebases with security and upgrade models from the early 2000s.

Facebook was where all the energy was going. They were swallowing community discussion whole and had absolutely no reason to let any of it be portable or user-controlled. We built Discourse as open source because we thought community software should belong to the communities using it, not to whatever platform happened to be hosting it that year.

That was 13 years ago. Today more than 22,000 communities run Discourse – tiny startups, Fortune 500 companies, everything in between. The whole codebase is on GitHub, GPL-licensed. Hundreds of outside developers have contributed security patches.

In 13 years of running Discourse in the open, we have not seen evidence that public source code made us less secure. We have had vulnerabilities, of course; every substantial piece of software does. But the pattern has generally been the one you would hope for: bugs were reported, coordinated disclosures were handled responsibly, CVEs were published, and fixes shipped quickly.

Cal.com is making a bet about the future of software security. They are betting that in an AI-accelerated threat environment, reducing visibility into the codebase will improve their security posture. I think that is the wrong bet. We are making the opposite one: that in a world where AI makes vulnerability discovery dramatically cheaper, the stronger position is to let defenders use the same tools against code they can actually inspect.

Why companies go closed source

I want to be fair to Cal.com here, because I don’t think they’re acting in bad faith. I just think the security argument is a convenient frame for decisions that are actually about something else.

Competitive pressure, mostly. If your code is open, your competitors can read your architecture and your product thinking. That’s painful, and it gets more painful as you grow – especially the first time a well-funded competitor forks your repo and ships a hosted version at half your price.

Governance is the other big one. Open-source communities push back. They file issues about decisions they don’t like. They fork. It’s exhausting to manage, and closing the code makes the noise stop immediately. Then you’ve got investors asking why you’re giving away the thing they just funded, and suddenly “closed source” looks a lot more defensible in a board deck.

These are all legitimate business pressures, and I don’t judge anyone for feeling them. But they’re business decisions, not security decisions. Framing a business decision as a security imperative does a disservice to the open-source ecosystem that helped Cal.com get to where they are.

How we handle security in 2026

Every release cycle, our team deploys the latest AI vulnerability scanners (GPT-5.4 xhigh at the moment, and next up is Opus 4.7 max) for multi-day deep analysis of our codebase. The scans catch the same class of vulnerabilities that an attacker’s AI would find, and we patch them first.

AI scanning is performed in a multi-step process. We loop through hundreds of controllers, examining each one independently for vulnerabilities. Then, for each candidate vulnerability found in the bulk scans, we validate it by directing an agent to write a failing test inside a container running a full working Discourse environment. Only if the agent can demonstrate that the finding is real do we count it and escalate it to the human queue. A big advantage of this process is that we also get a candidate patch to validate.

Full codebase scans are cheap at the moment because they are heavily subsidized. An OpenAI full-source-code scan of Discourse could cost $2,000 if you were paying retail; the same scan costs only $50 or so on a $200-a-month plan. Furthermore, OpenAI and Anthropic graciously offer plans to many open-source companies and contributors. We are extremely confident prices will go down and quality will go up over the coming months and years.

The calculus in the industry is changing very quickly. Last year we spent tens of thousands of dollars on third-party security scans. It is staggering that you can get significantly better quality today for a fraction of the cost.

Our bug bounty program works better because the code is public. Security researchers can do meaningful analysis without reverse engineering. They find real bugs, and we treat them with urgency. Architecture matters too: even if an attacker finds a vulnerability, sandboxed execution environments, aggressive rate limiting, content security policies, and the principle of least privilege across every service boundary limit the blast radius.
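To make one of those blast-radius layers concrete, here is a toy token-bucket rate limiter. This is illustrative only and not Discourse's actual implementation; it just shows the shape of the idea: even if an attacker finds a working exploit, throttling caps how fast it can be driven.

```python
# Illustrative only: a token-bucket rate limiter, one of the
# blast-radius-limiting layers mentioned above.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # over the limit: request rejected

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(7)]
# The initial burst of 5 passes; back-to-back calls 6 and 7 are throttled.
```

In a real deployment the bucket state lives in shared storage so limits hold across app servers, but the per-request logic is the same.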

Bug bounties were built for a world where discovery was relatively scarce. AI is pushing us into a world where discovery is abundant. That is great for defense, but it makes cash rewards much harder to adjudicate fairly. We have paused our rewards for now, but very much appreciate the community of defenders and continue to work with HackerOne on our bounty program.

When a vulnerability is identified, our release pipeline can push a patch to every hosted Discourse instance within hours. Speed of response matters most, and faster discovery, thanks to our open-source nature, means we tend to patch faster. Upstream contributions close the loop: when we find vulnerabilities in our dependencies (Rails, Ember, PostgreSQL, Redis), we report them and contribute fixes. That makes the entire ecosystem more secure, which makes us more secure.

Biological immune systems work because they’re exposed to threats. They encounter pathogens and build memory. An immune system that’s never been challenged will collapse at the first real infection. Open-source codebases work the same way – vulnerabilities that get found and patched make the software harder to attack. Security researchers who read the code add layers of defense, and public audits build institutional knowledge about where the weak points are and how to shore them up.

Closed source can buy some obscurity, but obscurity is brittle. Code gets leaked, binaries get reverse engineered, APIs get mapped, and attackers learn a lot just by interrogating the running system. The real defense is not keeping the code hidden forever. It is building software and operational practices that hold up when scrutiny arrives.

What we owe the ecosystem

Discourse exists because of open source. We were built on Ruby, on Rails, on PostgreSQL, on Redis, on Ember, on Linux, and many other projects. All of them were open and maintained by communities that believed in transparency. We owe them the same thing back.

Cal.com acknowledged this in their announcement. They said closing their code “is not a rejection of what open source gave us.” But in practice, that’s what it is. You can’t take five years of community contributions, close the gate, and claim you’re grateful. I don’t think it works that way.

We will not be closing our source code. Thirteen years of evidence tells us that openness makes us more secure. Our community deserves access to the code that runs their communities. And the best defense against AI-powered attacks is AI-powered defense, deployed by as many people as possible, against code they can actually read.

Open source isn’t dead. But it takes courage to do security properly instead of retreating behind a locked door and hoping nobody has a key. We’ve done it for 13 years and we’re going to keep on doing it.
