Consuming message queues using .net core background workers – part 4: adding System.Threading.Channels
Apparently I was not done with this Series yet! A few days ago I got a comment on Part 3, asking how I would mix background workers with System.Threading.Channels.
That comment first led me to write an introduction on the Channels library, which has been sitting on my ToDo list for too long. Then I finally took the time to update the example repository on GitHub with the new implementation.
From the Publisher's perspective, nothing has changed: it's still a simple .NET Core Console application. Once it's running, the user will be prompted to write a text message which will be sent to a RabbitMQ fanout exchange.
On the Subscriber side, on the other hand, I had to make some interesting changes.
First of all, I added the Producer and Consumer classes. They're basically the same as the ones in my introductory article.
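In rough terms they look something like this. This is just a sketch of the idea, not the exact code from the repository: the Message and IMessagesRepository types here are placeholders standing in for whatever the example actually uses.

```csharp
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public class Message
{
    public string Text { get; set; }
}

public interface IMessagesRepository
{
    Task AddAsync(Message message, CancellationToken cancellationToken);
}

public class Producer
{
    private readonly ChannelWriter<Message> _writer;

    public Producer(ChannelWriter<Message> writer) => _writer = writer;

    // Writes the message on the channel, waiting if a bounded channel is full.
    public ValueTask PublishAsync(Message message, CancellationToken cancellationToken = default) =>
        _writer.WriteAsync(message, cancellationToken);
}

public class Consumer
{
    private readonly ChannelReader<Message> _reader;
    private readonly IMessagesRepository _repository;

    public Consumer(ChannelReader<Message> reader, IMessagesRepository repository)
    {
        _reader = reader;
        _repository = repository;
    }

    // Keeps reading from the channel until it gets completed or the token is cancelled.
    public async Task BeginConsumeAsync(CancellationToken cancellationToken = default)
    {
        await foreach (var message in _reader.ReadAllAsync(cancellationToken))
            await _repository.AddAsync(message, cancellationToken);
    }
}
```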
Now, since they’re using async/await to handle the communication, I had to update the RabbitSubscriber class to use an asynchronous consumer instead.
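The gist of an asynchronous RabbitMQ consumer is something along these lines. Again, take it as a sketch: connection settings, queue and exchange names are made up for the sake of the example, and the real RabbitSubscriber in the repository is wired differently.

```csharp
using System;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class RabbitSubscriber : IDisposable
{
    private readonly IConnection _connection;
    private readonly IModel _channel;

    public RabbitSubscriber()
    {
        // DispatchConsumersAsync is required, otherwise async handlers won't be invoked.
        var factory = new ConnectionFactory { HostName = "localhost", DispatchConsumersAsync = true };
        _connection = factory.CreateConnection();
        _channel = _connection.CreateModel();

        _channel.ExchangeDeclare("messages", ExchangeType.Fanout);
        var queue = _channel.QueueDeclare().QueueName;
        _channel.QueueBind(queue, "messages", routingKey: string.Empty);

        var consumer = new AsyncEventingBasicConsumer(_channel);
        consumer.Received += OnMessageReceivedAsync;
        _channel.BasicConsume(queue, autoAck: false, consumer);
    }

    // Callback invoked for every incoming message.
    public Func<string, Task> OnMessage { get; set; } = _ => Task.CompletedTask;

    private async Task OnMessageReceivedAsync(object sender, BasicDeliverEventArgs ea)
    {
        var text = Encoding.UTF8.GetString(ea.Body.ToArray());
        await OnMessage(text);
        _channel.BasicAck(ea.DeliveryTag, multiple: false);
    }

    public void Dispose()
    {
        _channel?.Dispose();
        _connection?.Dispose();
    }
}
```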
Last but not least, the Background Worker no longer adds the incoming messages directly to the repository; instead, it publishes them on a Channel using the Producer.
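In code, the worker ends up being little more than a pass-through. Something along these lines, reusing the hypothetical types from the previous sketches:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class MessagesBackgroundWorker : BackgroundService
{
    private readonly RabbitSubscriber _subscriber;
    private readonly Producer _producer;

    public MessagesBackgroundWorker(RabbitSubscriber subscriber, Producer producer)
    {
        _subscriber = subscriber;
        _producer = producer;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // The worker only hands each message over to the channel;
        // the actual processing happens in the Consumers.
        _subscriber.OnMessage = text =>
            _producer.PublishAsync(new Message { Text = text }, stoppingToken).AsTask();

        return Task.CompletedTask;
    }
}
```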
A certain number of Consumers is registered at bootstrap; the first one available will pick up the message and store it in the repository.
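The bootstrap part might look more or less like this. The exact registration in the real codebase may differ, but the idea is to share a single channel between the Producer and the Consumers and to spin the latter up when the host starts:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class ChannelsRegistrationExtensions
{
    public static IServiceCollection AddMessageChannel(this IServiceCollection services, int consumersCount)
    {
        var channel = Channel.CreateUnbounded<Message>();

        // One Producer writing on the channel, N Consumers reading from it.
        services.AddSingleton(new Producer(channel.Writer));
        for (var i = 0; i < consumersCount; i++)
            services.AddSingleton(sp => new Consumer(channel.Reader, sp.GetRequiredService<IMessagesRepository>()));

        // Starts every registered Consumer when the host starts.
        services.AddHostedService<ConsumersRunner>();
        return services;
    }
}

public class ConsumersRunner : BackgroundService
{
    private readonly IEnumerable<Consumer> _consumers;

    public ConsumersRunner(IEnumerable<Consumer> consumers) => _consumers = consumers;

    protected override Task ExecuteAsync(CancellationToken stoppingToken) =>
        Task.WhenAll(_consumers.Select(c => c.BeginConsumeAsync(stoppingToken)));
}
```

From Startup it would then be a one-liner, something like `services.AddMessageChannel(consumersCount: 10);` (the extension method name and the count are, of course, just for illustration).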
So why in the world would I do that?
Processing an incoming message can be a time-consuming operation. If we can just pull it from the queue and hand it off to a separate thread, wouldn't that free us up to fetch more data? And that's exactly what we're doing.
Every Consumer will asynchronously process the messages, relieving the Background Worker of the responsibility of executing a potentially costly operation. In the demo we are simply adding messages to the repository, but I think you get the point.
It is not so different from adding multiple instances of the Web API, except that in this case we're also able to process multiple messages concurrently within the same subscriber instance.
Of course this is not a magic trick that solves all the performance problems of our applications. We might even get worse results, or increase the infrastructure costs, because now we're consuming more memory and CPU.
As with almost everything else in our field, measuring and profiling are the keys to success.