TPL Dataflow blocks run forever. Forever running producer, consumer modelling and exception handling

Exceptions

I usually wrap the delegates with exception handling because, as you said, a block's exception is stored in its Completion task. Moreover, a faulted block stays faulted, so you would need to replace it to move on.

var block = new TransformBlock<string, int>(number =>
{
    try
    {
        return int.Parse(number);
    }
    catch (Exception e)
    {
        Trace.WriteLine(e);
        return 0; // return a sentinel so the block doesn't fault
    }
});

Capacity

Another important issue is capping. If some part of your workflow can't handle the load, its input queue would simply grow infinitely. That could lead to a memory leak or OutOfMemoryExceptions. So it's important to limit all your blocks with an appropriate BoundedCapacity and decide what to do when that limit is reached (discard items, save them to storage, etc.)
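A minimal sketch of a bounded block (the capacity value 100 is illustrative). When the input queue is full, Post returns false, which is exactly where the drop/persist decision happens:

```csharp
using System;
using System.Threading.Tasks.Dataflow;

class BoundedExample
{
    static void Main()
    {
        var block = new TransformBlock<string, int>(
            s => s.Length,
            new ExecutionDataflowBlockOptions { BoundedCapacity = 100 });

        // Post returns false when the queue is at capacity;
        // decide here whether to drop, retry, or persist the item.
        if (!block.Post("some item"))
        {
            Console.WriteLine("item rejected - queue full");
        }
    }
}
```

If you would rather wait for room instead of rejecting, SendAsync returns a task that completes once the block accepts the item.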

Parallelism

While the default value for BoundedCapacity is -1 (unbounded), the default value for MaxDegreeOfParallelism is 1 (no parallelism). Most applications can easily benefit from parallelism, so make sure to set an appropriate MaxDegreeOfParallelism value. When a block's delegate is purely CPU-intensive, MaxDegreeOfParallelism shouldn't be much higher than the number of available cores. The more I/O-intensive (and less CPU-intensive) the work is, the higher MaxDegreeOfParallelism can be raised.
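A sketch of both settings together, assuming a CPU-bound delegate (the capacity value and the I/O multiplier in the comment are illustrative, not prescriptive):

```csharp
using System;
using System.Threading.Tasks.Dataflow;

var cpuOptions = new ExecutionDataflowBlockOptions
{
    BoundedCapacity = 1000,                              // cap the input queue as well
    MaxDegreeOfParallelism = Environment.ProcessorCount  // CPU-bound delegate
};

// For a mostly I/O-bound delegate a higher degree is reasonable, e.g.:
// MaxDegreeOfParallelism = Environment.ProcessorCount * 4

var block = new ActionBlock<int>(n => { /* CPU-intensive work here */ }, cpuOptions);
```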

Conclusion

Using TPL Dataflow throughout the application's lifetime is straightforward. Just make sure these settings are configurable (e.g., through app.config) and tweak them according to actual results "in the field".
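One way this could look, as a hedged sketch: read the tuning values from appSettings so they can change without recompiling. The keys "BlockCapacity" and "BlockParallelism" are hypothetical names, not from the original answer, and this assumes a reference to System.Configuration.

```csharp
using System.Configuration;
using System.Threading.Tasks.Dataflow;

// Hypothetical appSettings keys; fall back to defaults when absent.
int capacity = int.Parse(
    ConfigurationManager.AppSettings["BlockCapacity"] ?? "1000");
int parallelism = int.Parse(
    ConfigurationManager.AppSettings["BlockParallelism"] ?? "1");

var options = new ExecutionDataflowBlockOptions
{
    BoundedCapacity = capacity,
    MaxDegreeOfParallelism = parallelism
};
```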




