How AI Will Transform Open Source Business Models

Tailwind's 80% revenue decline leads to a much more interesting story

15 minutes to read

This is Part 2 of a two-part series. Part 1 covered why AI is accelerating open source adoption, not killing it. This post examines which business models thrive and which struggle in the AI era. Subscribe to our mailing list if you haven’t already.

In Part 1, I argued that AI isn’t killing open source - it’s amplifying adoption at unprecedented rates. Akka.NET’s 35% year-over-year download growth in 2025, combined with industry-wide metrics showing 70-87% growth across major package registries, shows that LLMs are recommending libraries - and driving their adoption - faster than humans ever could.

But there’s a second story buried in the Tailwind CSS saga that deserves its own analysis: AI isn’t killing open source - it’s disrupting specific types of open source businesses.

Adam Wathan captured this perfectly in his GitHub comment:

“Tailwind is growing faster than it ever has and is bigger than it ever has been, and our revenue is down close to 80%.”

Read that again. The framework is thriving. The business is struggling. This isn’t a contradiction - it’s the clearest illustration of where AI is creating winners and losers in the open source economy.

Let me break down what’s actually happening, using both Tailwind’s situation and our own experience at Petabridge as a counter-example.

AI Won't Kill Open Source - It Will Amplify It

Why the doomsayers are wrong: npm, PyPI, and NuGet downloads are exploding

19 minutes to read

Earlier this week, Adam Wathan, the creator of Tailwind CSS, dropped a bombshell about how AI has impacted his company and its employees:

the reality is that 75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business. And every second I spend trying to do fun free things for the community like this is a second I’m not spending trying to turn the business around and make sure the people who are still here are getting their paychecks every month.

The reaction was swift and predictable. Hot takes flooded in declaring the death of open source, the end of OSS sustainability, and the AI apocalypse that would render libraries and frameworks obsolete. Geoffrey Huntley captured the prevailing narrative perfectly:

“AI can generate code, bypassing the need to deal with open-source woes… I’ve found myself using less open source these days… This shift challenges the role of open-source ecosystems.”

A tweet I posted about the Tailwind situation went viral, and dozens of quote tweets and comments echoed the very same sentiment. Variations on “AI eats open source” are everywhere online.

Here’s the problem: everyone is getting this perfectly backwards.

AI isn’t killing open source. It’s amplifying it to unprecedented levels. The Tailwind situation isn’t about declining OSS adoption - it’s about a business model that was working great in a world where learning curves were high and documentation was the bottleneck. That world is gone, and the revenue model built on top of it is collapsing. But the underlying Tailwind library? It’s thriving.

Let me show you why the doomsayers are wrong, backed by actual data from our own experience...

Migrating from CRUD to CQRS and Event-Sourcing with Akka.Persistence

How Akka.Persistence allowed us to break the logjam on Sdkbin's massive technical debt.

15 minutes to read

We’re about a year into the process of completely re-working Sdkbin to better support our needs here at Petabridge and, eventually, other third-party software vendors.

Sdkbin was originally built on an extremely flimsy CRUD architecture that violates most of my personal design preferences - you can read about the history behind that here. But to summarize, I tend to use the following heuristics when building software:

  1. Prefer optionality-preserving designs - make sure your design decisions can be reversed or altered when things inevitably change.
  2. Use as few moving parts as possible - most of Akka.NET is constructed this way.
  3. No magic - if nothing magically works, then nothing magically breaks either.
  4. Ensure that coupling happens only where it’s necessary - coupling usually needs to happen in your integration layer (i.e., your UI or HTTP API). Your accounting system should probably not be coupled to your payments system.

Sdkbin’s original CRUD design violated all of these principles:

  1. Used hard deletes, destroying data with no ability to recover or audit (impossible to reverse);
  2. Relied heavily on AutoMapper-powered generic repositories (magic with lots of moving parts); and
  3. Was highly coupled throughout - Stripe payment events served double-duty as invoices, for instance.

We’ve fixed a ton of these issues already, and one of the most important tools we’re using is Akka.Persistence. We’ve also made some significant improvements to Akka.Persistence in recent releases that made it much easier for us to accomplish our ambitious goals with Sdkbin.

Let me show you what we did.
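
As a taste of what that looks like (a minimal sketch, not Sdkbin’s actual code), here is an event-sourced actor built with Akka.Persistence. The invoice entity, commands, and events are hypothetical stand-ins for illustration:

    // Minimal event-sourced actor sketch using Akka.Persistence.
    // The invoice domain model here is hypothetical, not Sdkbin's actual code.
    using Akka.Actor;
    using Akka.Persistence;

    public sealed record CreateInvoice(string InvoiceId, decimal Amount);
    public sealed record InvoiceCreated(string InvoiceId, decimal Amount);

    public sealed class InvoiceActor : ReceivePersistentActor
    {
        private readonly string _persistenceId;
        private decimal _total;

        // Stable, unique id - ties this actor to its event stream in the journal
        public override string PersistenceId => _persistenceId;

        public InvoiceActor(string invoiceId)
        {
            _persistenceId = $"invoice-{invoiceId}";

            // Recovery: replay previously persisted events to rebuild in-memory state
            Recover<InvoiceCreated>(evt => _total += evt.Amount);

            // Commands: validate, then persist an event rather than mutating rows in place
            Command<CreateInvoice>(cmd =>
            {
                Persist(new InvoiceCreated(cmd.InvoiceId, cmd.Amount), evt =>
                {
                    _total += evt.Amount; // state changes only after the event is durably stored
                    Sender.Tell($"invoice {evt.InvoiceId} recorded");
                });
            });
        }
    }

Nothing gets deleted in place here: every state change is an appended event, which is exactly what restores the reversibility and auditability that the original hard-delete design threw away.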

The Worst Security Vulnerability in Akka.NET - And How to Fix It

Understanding CVE-2025-61778 and securing your Akka.NET clusters with mTLS

15 minutes to read

In October 2025, we disclosed the most critical security vulnerability ever found in Akka.NET: CVE-2025-61778. This vulnerability affects Akka.Remote’s TLS implementation - specifically, we were supposed to implement mutual TLS (mTLS), but we didn’t. The server never validated client certificates, meaning anyone who could reach your Akka.Remote endpoint could potentially join your cluster without any authentication.

The immediate action you should take: upgrade to Akka.NET v1.5.56 or later. The vulnerability has been fully patched in these versions.
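
For orientation, here is a hedged sketch of where TLS is configured on Akka.Remote’s DotNetty transport. The certificate path and password are placeholders, and the additional client-certificate (mTLS) enforcement options introduced in v1.5.56 aren’t shown here - check the advisory and release notes for those:

    // Sketch only: enabling TLS on Akka.Remote's DotNetty transport.
    // Certificate values are placeholders; load real secrets from a secret store.
    using Akka.Actor;
    using Akka.Configuration;

    var config = ConfigurationFactory.ParseString(@"
        akka.actor.provider = remote
        akka.remote.dot-netty.tcp {
            hostname = 0.0.0.0
            port = 4053
            enable-ssl = true
            ssl {
                suppress-validation = false   # never suppress certificate validation in production
                certificate {
                    path = ""path/to/cluster-node.pfx""   # placeholder
                    password = ""changeit""               # placeholder
                }
            }
        }");

    var system = ActorSystem.Create("secured-system", config);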

In this post, we’ll cover the nature of this vulnerability, who was affected, how we fixed it, and - most importantly - security best practices for securing your Akka.NET applications going forward.

This vulnerability was discovered by one of our Production Support customers during a security audit. Within 2-3 weeks of being notified, we shipped four patches (v1.5.52 through v1.5.56) to address the issue. This is exactly the kind of critical response our support customers receive - and the entire Akka.NET community benefits from their vigilance.

How Do You Fix 70% Data Loss Across 1 Million Concurrent Connections?

A case for why you should consider purchasing an Akka.NET Support Plan for your organization.

6 minutes to read

When your Akka.NET application starts dropping 70-80% of incoming connections in production, who do you call? That’s the situation one of our Production Support customers faced this year - and it’s exactly the kind of problem our Akka.NET Support Plans are designed to solve.

Opportunities to purchase developer expertise with a credit card are rare. That’s exactly what we offer - and I want to show you what that looks like in practice.

Akka.NET + Kubernetes: Everything You Need to Know

Production lessons from years of running Akka.NET clusters at scale

37 minutes to read

Running Akka.NET in Kubernetes can feel like a daunting task if you’re doing it for the first time. Between StatefulSets, Deployments, RBAC permissions, health checks, and graceful shutdowns, there are a lot of moving parts to get right.

But here’s the thing: once you understand how these pieces fit together, Kubernetes actually makes running distributed Akka.NET applications significantly easier than trying to orchestrate everything yourself using ARM templates, Bicep scripts, or some other manual approach.

We’ve been running Akka.NET clusters in Kubernetes for years at Petabridge - both for our own products like Sdkbin and for customers who’ve built systems with over 1400 nodes. We’ve learned a lot the hard way, and this post is all about sharing those lessons so you don’t have to make the same mistakes we did.

This isn’t a hand-holding, step-by-step tutorial. Instead, I’m going to focus on the critical decisions you need to make, the pitfalls to avoid, and the best practices that actually matter when running Akka.NET in production on Kubernetes.
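
To give a flavor of what that looks like in code (a sketch under stated assumptions, not the post’s full setup), here is a minimal Akka.Hosting cluster node wired for Kubernetes. The environment variable names and the seed-node approach are illustrative assumptions, and option shapes can vary slightly between Akka.Hosting versions:

    // Sketch: an Akka.NET cluster node hosted with the .NET generic host.
    // POD_NAME and CLUSTER_SEEDS are hypothetical environment variables you would
    // populate from the Kubernetes Downward API / your deployment manifests.
    using System;
    using Akka.Cluster.Hosting;
    using Akka.Hosting;
    using Akka.Remote.Hosting;
    using Microsoft.Extensions.Hosting;

    var builder = Host.CreateApplicationBuilder(args);

    builder.Services.AddAkka("k8s-cluster", akka =>
    {
        var hostname = Environment.GetEnvironmentVariable("POD_NAME") ?? "localhost";
        var seeds = (Environment.GetEnvironmentVariable("CLUSTER_SEEDS")
                     ?? "akka.tcp://k8s-cluster@localhost:4053").Split(',');

        akka
            .WithRemoting(hostname, 4053)
            .WithClustering(new ClusterOptions { SeedNodes = seeds });
    });

    // Running the ActorSystem inside the generic host means SIGTERM from Kubernetes
    // flows into a graceful CoordinatedShutdown instead of an abrupt kill.
    await builder.Build().RunAsync();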

You Don't Need to Use Akka.HealthChecks Anymore

Akka.Hosting now includes built-in health checks.

14 minutes to read

We have just shipped Akka.Hosting v1.5.48.1 and newer, all of which now have built-in Microsoft.Extensions.HealthCheck integration that is significantly easier to configure and customize, and requires fewer packages to install, than what we had with Akka.HealthChecks.

This post consists of three parts:

  1. How to use the new Akka.Hosting health checks;
  2. Why we deprecated Akka.HealthChecks; and
  3. Migration recommendations for existing Akka.HealthChecks users - because the new Akka.Hosting health checks cause conflicts with the older ones.

Let’s dive in.
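
For orientation, here is a hedged sketch of the ASP.NET Core side of Microsoft.Extensions health checks - the half that the new Akka.Hosting integration plugs into. The endpoint paths and tag names are assumptions, and the Akka.Hosting-specific registration call is deliberately omitted; use whatever the full post and docs prescribe there:

    // Sketch: exposing Microsoft.Extensions health checks over HTTP.
    // Akka.Hosting's built-in checks register into this same health-check pipeline.
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Diagnostics.HealthChecks;
    using Microsoft.Extensions.DependencyInjection;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddHealthChecks();
    // ... register the Akka.Hosting health checks here, per the full post ...

    var app = builder.Build();

    // Separate liveness and readiness endpoints, filtered by tag.
    // The paths and tag names ("live", "ready") are illustrative assumptions.
    app.MapHealthChecks("/healthz/live", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains("live")
    });
    app.MapHealthChecks("/healthz/ready", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains("ready")
    });

    app.Run();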

Phobos 2.10: Game-Changing Akka.NET Cluster Monitoring and Actor Performance Dashboards

Find bottlenecks, sources of error, and track changes to your cluster in real-time.

9 minutes to read

Phobos - APM for Akka.NET

Phobos 2.10 is here, and it’s a game-changer for anyone running Akka.NET applications in production. This release doesn’t just incrementally improve observability - it fundamentally transforms how you understand and troubleshoot actor performance in your clusters.

The headline features: accurate backpressure measurement across all actors, a bird’s-eye view of activity across your entire Akka.NET cluster, detailed actor performance analysis dashboards, and the ability to easily separate /system and /user actors from each other. But here’s what makes this release special - it’s not just about the new metrics (though those are substantial). It’s about the beautiful, production-ready dashboards that make all this data instantly actionable.

The Easiest Way to Do OpenTelemetry in .NET: OTLP + Collector

Decouple your observability configuration from your application code with OTLP and collectors

19 minutes to read

We know OpenTelemetry deeply at Petabridge. We’ve built Phobos, an OpenTelemetry instrumentation plugin for Akka.NET, so we understand the low-level bits. Beyond that, we’ve been using OpenTelemetry in production for years on Sdkbin and we’ve helped over 100 customers implement OpenTelemetry configurations very similar to our own. Through all this experience, one thing has become crystal clear: the easiest, most production-ready approach to OpenTelemetry in .NET is using OTLP (the OpenTelemetry Protocol) with a collector.

In this post, I’ll walk you through why this approach beats vendor-specific exporters every time, show you exactly how to configure it, and demonstrate the real-world benefits we’ve experienced at Petabridge. This is the companion piece to my recent YouTube video on the topic.

The Problem with Vendor-Specific Exporters

When you’re adding OpenTelemetry to one of your projects for the first time, you already know your team uses DataDog, or New Relic, or Application Insights. So naturally, the first thing you’ll try is figuring out how to connect your application directly to that specific tool.

You end up with something that looks like this:

    builder.Services
        .AddOpenTelemetry()
        .WithTracing(builder =>
        {
            builder
                .AddHttpClientInstrumentation()
                .AddAspNetCoreInstrumentation()
                // Coupling our app to vendor-specific implementations
                .AddDatadogTracing()        // Application code now depends on DataDog SDK
                .AddNewRelicTracing()       // And New Relic SDK
                .AddAzureMonitorTracing();  // And Azure Monitor SDK
        });

And you’re going to get frustrated doing this because of:

  1. Vendor Coupling: Your application code is now directly coupled to vendor-specific SDKs...
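
To contrast with the vendor-coupled snippet above, here is a hedged sketch of the OTLP-plus-collector approach the post advocates. It assumes the same builder as the snippet above; the service name and collector endpoint are placeholders:

    // One vendor-neutral exporter: ship everything over OTLP to a collector and let
    // the collector - not application code - decide where telemetry ends up.
    using System;
    using Microsoft.Extensions.DependencyInjection;
    using OpenTelemetry.Resources;
    using OpenTelemetry.Trace;

    builder.Services
        .AddOpenTelemetry()
        .ConfigureResource(resource => resource.AddService("my-service")) // placeholder service name
        .WithTracing(tracing =>
        {
            tracing
                .AddHttpClientInstrumentation()
                .AddAspNetCoreInstrumentation()
                // No vendor SDKs referenced here - the collector fans out to
                // DataDog, New Relic, Azure Monitor, or anything else.
                .AddOtlpExporter(options =>
                {
                    options.Endpoint = new Uri("http://otel-collector:4317"); // placeholder
                });
        });

Swapping or adding backends then becomes a collector configuration change rather than an application code change.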

Why Akka.Streams.Kafka is the Best Kafka Client for .NET

Stop writing hundreds of lines of error handling code - there's a better way.

18 minutes to read

If you’re using Kafka in .NET, you’re probably writing hundreds of lines of code just to handle “what happens when my consumer crashes?” or “how do I retry failed messages?” or “what happens when I’m consuming messages too fast?”

What if I told you there was a way to handle all of that in just 5-10 lines of code?

That’s exactly what Akka.Streams.Kafka brings to the table - and it’s one of the most underrated parts of the entire Akka.NET ecosystem.
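
To make that concrete, here is a hedged sketch of a minimal Akka.Streams.Kafka consumer. The bootstrap servers, topic, group id, and processing delegate are placeholders, and the post’s own examples (especially around error handling and commit strategies) may look different:

    // Sketch: a plain Akka.Streams.Kafka consumer. Backpressure is built in -
    // the source only pulls from Kafka as fast as downstream stages can process.
    using System.Threading.Tasks;
    using Akka.Actor;
    using Akka.Streams;
    using Akka.Streams.Dsl;
    using Akka.Streams.Kafka.Dsl;
    using Akka.Streams.Kafka.Settings;
    using Confluent.Kafka;

    var system = ActorSystem.Create("kafka-consumer");
    var materializer = system.Materializer();

    var consumerSettings = ConsumerSettings<Null, string>
        .Create(system, Deserializers.Null, Deserializers.Utf8)
        .WithBootstrapServers("localhost:9092")   // placeholder
        .WithGroupId("my-consumer-group");        // placeholder

    KafkaConsumer.PlainSource(consumerSettings, Subscriptions.Topics("my-topic"))
        .SelectAsync(4, async result =>
        {
            await ProcessAsync(result.Message.Value); // hypothetical per-message handler
            return result;
        })
        .RunWith(Sink.Ignore<ConsumeResult<Null, string>>(), materializer);

    static Task ProcessAsync(string value) => Task.CompletedTask;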