Many people in the Akka.NET community have been asking for case studies over the last few weeks, since we shared the MarkedUp case study. There are a ton of production deployments, but getting actual case studies out always has some lag time.

To that end, I wanted to share an email case study that I received from Joel Mueller, an Akka.NET community member, that just BLEW MY MIND (reprinted with permission).

Check out Joel’s story of how Akka.NET changed the trajectory of his business, SNL Financial, which McGraw Hill Financial (owners of this little thing called Standard & Poor’s) just acquired for $2 billion, making them proud new users of Akka.NET:

Joel Mueller, Software Architect, SNL Financial

One of the features of a larger product I’ve been working on for years is a budgeting/forecasting module for community banks. Picture an ASP.NET project that is roughly the equivalent of 200-300 interrelated Excel workbooks, using a custom mostly-Excel-compatible formula engine, lots of business rules, lots of back-end queries against both SQL and Analysis Services, and a front-end in SlickGrid that communicates with the back-end over XHR and Web API. The object model for Forecast instances is (was) stored in ASP.NET session state until changes are saved to the application database. That, of course, means one instance per session, even if two people open the same forecast.

Then, as an afterthought bullet point at the end of a list of other new features being requested, “oh by the way, can you make it work for multiple concurrent users in a single forecast, Google Docs style?”

After I was done freaking out and yelling at people, I sat down to figure out how to implement it. SignalR was the obvious choice in a .NET project to provide real-time updates to multiple clients, but the bigger problem was how to take a mostly-not-thread-safe object model and share it between multiple simultaneous users through SignalR without sprinkling locks all over the existing object model code, which would take forever to implement and probably turn the project into an unmaintainable, slow, deadlocking nightmare.

The back end for this module was already written in F#, so my initial plan was to ignore the Hubs feature of SignalR, because that’s all about calling JavaScript methods from the server or .NET methods from the client, and I didn’t need that. Instead, I planned to implement a pure message-passing infrastructure based on SignalR PersistentConnection and write the back-end message handlers as F# MailboxProcessors.
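(For readers who haven’t used them, a MailboxProcessor is F#’s built-in agent: it processes one message at a time, which is exactly what makes the lock-free sharing above possible. Here’s a rough, hypothetical sketch of what that plan could look like; the message type and forecast state are invented for illustration and aren’t Joel’s actual code.)

```fsharp
// Hypothetical sketch: one agent per forecast serializes every edit, so the
// non-thread-safe object model is only ever touched by one message at a time.
type ForecastMsg =
    | ApplyEdit of cellId: string * formula: string
    | GetSnapshot of AsyncReplyChannel<Map<string, string>>

let startForecastAgent () =
    MailboxProcessor<ForecastMsg>.Start(fun inbox ->
        let rec loop (cells: Map<string, string>) = async {
            let! msg = inbox.Receive()
            match msg with
            | ApplyEdit (cellId, formula) ->
                // edits arrive strictly one at a time -- no locks needed
                return! loop (cells |> Map.add cellId formula)
            | GetSnapshot reply ->
                reply.Reply cells
                return! loop cells
        }
        loop Map.empty)
```

Each incoming SignalR message would simply be posted to the agent for the relevant forecast, and the agent’s mailbox does the serialization for you.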

However, I’ve taken some Scala training classes, so I was already aware of Akka, and I’d been keeping an eye on Akka.NET, since it was something I’d hoped to see for a number of years. The two things that made me pick Akka over MailboxProcessor were the error-kernel pattern and the potential to someday distribute actors across multiple machines without having to significantly modify the actor code. In other words, it was the infrastructure for managing actors that attracted me more than the actors themselves (which are already a language feature in my language of choice, F#).
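(Editor’s aside: the error-kernel pattern Joel mentions means keeping your valuable state in a parent actor and pushing risky work into supervised children, so a crash only restarts the child while the parent’s state survives. A hypothetical Akka.FSharp sketch, with invented actor and message names, might look something like this:)

```fsharp
open Akka.Actor
open Akka.FSharp

let system = System.create "forecast-system" (Configuration.defaultConfig ())

// Risky work (formula evaluation, back-end queries) lives in a child actor.
let calcWorker (mailbox: Actor<string>) =
    let rec loop () = actor {
        let! formula = mailbox.Receive ()
        // pretend this can throw on a bad formula or a failed query
        mailbox.Sender () <! sprintf "evaluated: %s" formula
        return! loop ()
    }
    loop ()

// The "kernel" holds the important state and supervises the worker: if the
// worker throws, only the worker restarts.
let forecastKernel (mailbox: Actor<string>) =
    let worker =
        spawnOpt mailbox.Context "calc-worker" calcWorker
            [ SpawnOption.SupervisorStrategy (Strategy.OneForOne (fun _ -> Directive.Restart)) ]
    let rec loop () = actor {
        let! msg = mailbox.Receive ()
        worker <! msg
        return! loop ()
    }
    loop ()

let kernel = spawn system "forecast" forecastKernel
```

Because the actor code never references a physical machine, a layout like this is part of what later lets you move actors behind remoting or clustering without rewriting them.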

So I spent two days reading Akka.NET documentation and working through most of the bootcamp, and then I started development. After about a week, I took Petabridge’s training. The timing was perfect for me, since I had done enough real work with Akka to have some meaningful questions and to understand more of the subtleties related to designing for Akka. I learned several things I was doing wrong, picked up some useful tips and design patterns, and was able to immediately apply them to my work to good effect.

With two people, and me doing 80% of the work, we were able to completely rewrite the XHR/Web API layer into SignalR/Akka inside of 4 weeks. We now fully support multiple simultaneous users, we added new features, and performance is actually significantly better than it used to be thanks to using a few actors to make long-running calculations non-blocking.
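(For the curious, the non-blocking long-running calculation trick Joel mentions usually looks something like the hypothetical sketch below: the actor starts the slow work asynchronously, keeps processing other messages, and receives the result later as an ordinary message. The names and the fake 5-second query are purely illustrative.)

```fsharp
open Akka.FSharp

type CalcMsg =
    | RunForecast of forecastId: int
    | ForecastDone of forecastId: int * total: decimal

let calcActor (mailbox: Actor<CalcMsg>) =
    let self = mailbox.Self
    let rec loop () = actor {
        let! msg = mailbox.Receive ()
        match msg with
        | RunForecast id ->
            // stand-in for a slow SQL / Analysis Services query plus a formula pass
            async {
                do! Async.Sleep 5000
                self <! ForecastDone (id, 42m)
            } |> Async.Start
            return! loop ()
        | ForecastDone (id, total) ->
            printfn "forecast %d finished with total %M" id total
            return! loop ()
    }
    loop ()

let system = System.create "forecasting" (Configuration.defaultConfig ())
let calcRef = spawn system "calc" calcActor
```

The actor stays responsive while the calculation runs in the background, which is the sort of non-blocking behavior Joel credits for the performance improvement.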

My biggest regret with this project is that the whole actor system is still hosted in ASP.NET. The tight deadlines didn’t allow me enough time to get Remoting/Clustering working to the point where I could be confident hosting the forecast actors in a Windows service. We’re no more exposed to IIS recycles than we were before, since we were using session state before, but remoting/clustering is high on my list of future enhancements after this release ships, and I’m sure I’ll have questions then. I also have plans to use Akka in some other areas of the product unrelated to budgeting/forecasting.

I couldn’t be happier with Akka overall. The people who sign my bonus check are going to be happy too, so I plan to buy you beers if we ever meet. I already talked my bosses into paying for that training class. Since I work at a big multinational company and Petabridge isn’t on the “approved training vendors” list, that actually took some doing, but I’m stubborn… I’ll be promoting Akka.NET within SNL Financial.

(a few days later…)

Performance in the product I described was so great with 7,000 (!) Excel-style grids that our biggest client decided to bump it up to 840,000 (!!) grids, a 120x increase. Yesterday. That got a little bit hectic… We shipped on time, though!

What do YOU see in Joel’s story that resonates with you?

Written by Andrew Skotzko on August 11, 2015
