It’s been a while since we’ve published an official roadmap update for Akka.NET. We are still on track to achieve the goals of the previous roadmap, but with a few minor changes that I will explain here.

Akka.NET 1.1 - Akka.Cluster Release to Market; Akka.Streams Beta

We’ve publicly committed to releasing Akka.NET 1.1 on June 14th, 2016.

This release has the following goals:

  1. Officially releasing Akka.Cluster to market, signifying that it is ready for full-blown production use;
  2. Deploying the Helios 2.0 transport to production, which is significantly faster, more memory-efficient, and more reliable than the current Helios 1.4.1-based transport;
  3. Releasing the MultinodeTestRunner and the Akka.Remote.TestKit, which are used for testing distributed systems built with Akka.NET; and
  4. Releasing the very first beta of the new Akka.Streams module, which you can read more about here (and see the sketch after this list).
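
To give a sense of what the Akka.Streams programming model looks like, here is a minimal C# sketch of a Source-to-Sink pipeline. It assumes the beta follows the JVM Akka Streams DSL that the port is based on (Source.From, Select, RunForeach, ActorMaterializer); treat it as illustrative rather than a guaranteed 1.1-beta API surface.

```csharp
// Illustrative Akka.Streams sketch: emit 1..10, double each element, print the results.
using System;
using System.Linq;
using Akka.Actor;
using Akka.Streams;
using Akka.Streams.Dsl;

public static class StreamsDemo
{
    public static void Main()
    {
        var system = ActorSystem.Create("streams-demo");
        var materializer = ActorMaterializer.Create(system); // allocates the actors that run the stream

        Source.From(Enumerable.Range(1, 10))
            .Select(x => x * 2)
            .RunForeach(Console.WriteLine, materializer)
            .Wait();

        system.Terminate().Wait();
    }
}
```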

Akka.Cluster has been available as a beta package for nearly two years and has had thousands of users. It is currently serving production workloads on both Linux and Windows for a wide variety of customers. During this period we have collected lots of bug reports, feedback, and data that we have used to improve its reliability and performance.
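
For teams that have been waiting to take Akka.Cluster into production, joining a cluster is largely a matter of configuration. The sketch below shows roughly what standing up a single clustered node looks like; the system name, address, port, and seed node are illustrative placeholders, not recommendations from this post.

```csharp
// A rough sketch of a single clustered node; names, addresses, and ports are placeholders.
using System;
using Akka.Actor;
using Akka.Cluster;
using Akka.Configuration;

public static class ClusterNode
{
    public static void Main()
    {
        var config = ConfigurationFactory.ParseString(@"
            akka {
                actor.provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
                remote.helios.tcp {
                    hostname = ""127.0.0.1""
                    port = 8081
                }
                cluster.seed-nodes = [""akka.tcp://MyCluster@127.0.0.1:8081""]
            }");

        var system = ActorSystem.Create("MyCluster", config);
        var cluster = Cluster.Get(system);

        // Log when this node has joined the cluster and reached the Up state.
        cluster.RegisterOnMemberUp(() => Console.WriteLine("Node is Up and part of the cluster."));

        Console.ReadLine();
        system.Terminate().Wait();
    }
}
```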

This release will give Akka.NET users a tremendous opportunity to build high-availability systems of all shapes and sizes on any cloud they wish.

Akka.NET 1.5 - TLS, New Serializer, Faster Transports

The next major release we have planned following Akka.NET 1.1 is Akka.NET 1.5. This release will introduce some breaking changes at the dependency level.

We are making the following two important changes:

    ...

The Business Case for Actors and Akka.NET

From the 1980s to Present Day

Akka.NET is a .NET implementation of the actor model.
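
Concretely, an actor in Akka.NET is a class that encapsulates its own state and processes messages one at a time, communicating with other actors only through asynchronous message passing. A minimal, illustrative example (the names here are made up for this post):

```csharp
using System;
using Akka.Actor;

// An actor that simply reacts to string messages.
public class Greeter : ReceiveActor
{
    public Greeter()
    {
        // Messages are processed one at a time, so no locks are needed in user code.
        Receive<string>(name => Console.WriteLine("Hello, {0}!", name));
    }
}

public static class Program
{
    public static void Main()
    {
        var system = ActorSystem.Create("demo");
        var greeter = system.ActorOf(Props.Create(() => new Greeter()), "greeter");

        greeter.Tell("world"); // fire-and-forget, asynchronous message passing

        Console.ReadLine(); // keep the process alive long enough to see the output
        system.Terminate().Wait();
    }
}
```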

The actor model is an old technology, originating in 1973 as an approach to parallel computing at a time when it looked like the computers of the future might be constructed using thousands of small, low-powered CPUs. History didn’t turn out that way thanks to Moore’s Law; CPUs became faster and faster and modern machines were developed with a small number of very high-powered CPUs.

Despite that, the actor model is immensely popular and runs some of the world’s most important software today. Amazon’s SimpleDB, RabbitMQ, Riak, Couchbase, Goldman Sachs, Motorola, Blizzard Games, Cisco, eBay, Credit Suisse, AMN Healthcare, Bank of America, McGraw Hill Financial, and scores of other major organizations use implementations of the actor model to power mission-critical applications.

So why is the actor model so popular today? Why are so many businesses using it for mission-critical applications?

First Adopters of the Actor Model: Telecoms

The truth of the matter is that the actor model has been popular for a long time through the Erlang programming language. Erlang was the first large-scale, production-usable implementation of the actor model, originally developed by Joe Armstrong as a proprietary language at Ericsson in 1986 (and open sourced later) to build telephone exchanges. Today it is used to power the GPRS, 3G, and LTE cellular networks that depend on Ericsson’s products.

Although the actor model was originally developed as a means for running applications on types of computer hardware that never really took off, the emergence of electronic computer networks in the late 70s and early 80s gave the actor model an extremely viable commercial application: distributed and concurrent systems.

As the Internet grew and more...

Performance Testing Should be Mandatory

Plus, How to Actually do it Right

Back in December I released the first publicly available version of NBench - Petabridge’s automated .NET performance testing and benchmarking framework.

NBench has proven to be an invaluable part of our QA process for Akka.NET and scores of other projects, for one critical reason: performance is a mission-critical feature for an increasingly large number of applications. And if you can’t measure performance, then you’re shipping a totally untested feature to your end users.
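
To make that concrete, here is a minimal sketch of what an NBench throughput specification can look like. The class, counter name, and the 1,000,000 ops/sec threshold are illustrative, not values taken from the Akka.NET benchmark suite.

```csharp
// Illustrative NBench spec: measure ConcurrentQueue enqueue throughput and fail
// the run if it drops below a fixed floor.
using System.Collections.Concurrent;
using NBench;

public class QueueThroughputBenchmark
{
    private const string CounterName = "EnqueuedItems";

    private ConcurrentQueue<int> _queue;
    private Counter _counter;

    [PerfSetup]
    public void Setup(BenchmarkContext context)
    {
        _queue = new ConcurrentQueue<int>();
        _counter = context.GetCounter(CounterName);
    }

    [PerfBenchmark(Description = "ConcurrentQueue enqueue throughput",
        NumberOfIterations = 3, RunMode = RunMode.Throughput,
        RunTimeMilliseconds = 1000, TestMode = TestMode.Test)]
    [CounterThroughputAssertion(CounterName, MustBe.GreaterThan, 1000000.0d)]
    public void Enqueue()
    {
        _queue.Enqueue(1);
        _counter.Increment(); // each increment is one measured operation
    }

    [PerfCleanup]
    public void Cleanup()
    {
        _queue = null;
    }
}
```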

The Impact of Poor Performance

Here’s a simple anecdote to illustrate the real-world impact of shipping non-performant software.

One of my favorite musicians, whom I had never seen live before, was coming to town and I wanted to purchase a ticket. I was out of the country the day tickets went on sale, so if I wanted to attend I’d need to buy a ticket on the secondary market.

I decided to give a new company I’d never purchased tickets through before, SeatGeek, a try. I quickly found two tickets for about $100 each, went through the checkout process, put in my payment information, and submitted payment. A few seconds later I got an error message back letting me know that the tickets were no longer available. So I repeated this process a few more times with progressively more expensive tickets, with no success.

Eventually I just gave up and SeatGeek lost about $500 worth of revenue, because I’d lost confidence that their reported ticket inventory was actually available. The crucial error was that the underlying software responsible for reporting inventory availability under-performed: it wasn’t able to keep up with the demand from actual customers, and as a result they nearly lost my business. I tried again on a whim immediately before writing this post and...