Developer Portal Community


    More efficient BinIo implementation


    Long-established Member

    In BinIo, most of the methods are implemented with a CASE structure and a single case for each IO. According to my understanding, the compiler turns this into very long chains of IF-ELSIF-ELSIF-ELSIF-ELSIF... For each IO, all conditions need to be checked until the right IoIndex is found. For stations with many hundreds of IOs this gets costly.

    A faster implementation is to create an array holding all the data that is currently accessed through the CASE structures: the variable itself (I used a pointer to the original BOOL), the two event numbers and the additional text. Each of the methods then uses the IDX_ variables to access the correct array entry and get the requested data directly. The compiler turns this into an address calculation, which is basically a multiplication and an addition. This has complexity O(1), instead of the O(n) of the current implementation. Considering that having n IOs in your station also means you are probably going to access them n times per PLC cycle, the respective total complexities are O(n) for my proposal versus O(n²) for the current implementation.
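    To illustrate the difference (all names here are illustrative, not the actual BinIo source): a CASE with one branch per IO is evaluated branch by branch, while an array access is a single address calculation.

CASE nIoIndex OF
  IDX_MOTOR_ON:  GetState := bMotorOn;   // checked first
  IDX_DOOR_OPEN: GetState := bDoorOpen;  // checked second
  // ... one case per IO; in the worst case all n conditions
  // are evaluated before the right one is found: O(n)
END_CASE

// Array-based alternative: one address calculation, O(1)
GetState := aItems[nIoIndex].pVariable^;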

    I modified the BinIo export template to test this. See attachment. It also needs a new STRUCT like this:


TYPE BinIoItemStruct :
STRUCT
  pVariable : POINTER TO BOOL;
  EventS0   : DINT;
  EventS1   : DINT;
  AddText   : EVENTADDLTEXT_T;
END_STRUCT
END_TYPE
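
    With this STRUCT, each method collapses to a single array access. A sketch (the array name and method signatures are my guesses at the template, not the shipped code):

VAR
  aItems : ARRAY[0..nMaxIoIndex] OF BinIoItemStruct; // nMaxIoIndex is illustrative
END_VAR

METHOD GetState : BOOL
VAR_INPUT
  nIoIndex : DINT;
END_VAR
GetState := aItems[nIoIndex].pVariable^;

METHOD SetState
VAR_INPUT
  nIoIndex : DINT;
  bState   : BOOL;
END_VAR
aItems[nIoIndex].pVariable^ := bState;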


    My station only has 258 IOs in total (inputs, outputs and flags) and doesn't use them very much. My CPU usage went down from 48 % to 47 %. But I simulated a bigger station by adding some more calls to BinIo.GetState and SetState, and the difference was 51 % to 48 %. So it does make a difference.

    Are there any reasons that speak against this solution? I guess how big the effect on speed is also depends on how the most commonly used objects are implemented, e.g. whether they use GetState/SetState or just grab the addresses with GetAddress and then use these directly. And of course what the station programmer is doing in the application directly.

    I also tested a version where the BOOL variables were directly in the array of the structure instead of a pointer, because that should be even faster, but the difference wasn't noticeable compared to the pointer approach. Also, that would no longer be a compatible change. This is the V2 in the attached zip.


    Community Moderator

    Very interesting idea! As often, it is probably a matter of balancing between memory usage and performance. I will discuss this with my colleagues.

    About the most commonly used objects: At least BasMove uses GetAddress when entering the Operational state and works with pointers after that, so the performance should be good already.
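
    For what it's worth, that pattern looks roughly like this (a sketch only; I don't know the actual BasMove internals, and the GetAddress call and signature are assumed):

// On entering the Operational state: resolve the address once.
pMyInput := BinIo.GetAddress(nIoIndex := IDX_MY_INPUT); // assumed call

// In the cyclic part: work with the cached pointer, no per-cycle lookup.
IF pMyInput^ THEN
  // ...
END_IF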

    Long-established Member

    I expect the bottleneck is rather the CPU and not the memory. I have heard you mention memory concerns before, but I am a bit surprised by this, actually. I don't see how our PLCs would be even remotely constrained by memory at the moment. The CPU, on the other hand, is very noticeably a limited resource: in my example above I'm running at 48 % CPU usage with a 5 ms cycle time, but I'd prefer to go faster than this. Why should I be concerned about memory usage, too?

    Also, I am not familiar with the memory model of the TwinCAT PLC runtime, but it might be a small comfort that the large struct array can be declared as VAR CONSTANT. This way, if "replace constants" is active, all of the data can go into the code or text memory and doesn't have to be on the heap. At the same time, since the methods are much smaller and a lot of the text (in particular the event additional text strings) is no longer compiled into the code or text area, the overall memory usage should actually be quite similar!
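
    What I mean is declaring the lookup table roughly like this (the initializer values are placeholders, and this assumes the pointer member can be filled at initialization; otherwise only the event numbers and texts could be constant):

VAR CONSTANT
  aItems : ARRAY[0..1] OF BinIoItemStruct := [
    (pVariable := 0, EventS0 := 1000, EventS1 := 1001, AddText := 'Input 0'),
    (pVariable := 0, EventS0 := 1002, EventS1 := 1003, AddText := 'Input 1')
  ];
END_VAR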

    I ran the original version and my proposal through my project and got the following compiler output (although I am not 100% sure what these numbers mean and whether they can be trusted and are meaningful):

    Memory area                            Original    New Proposal
    Generated code                         2084112     2066608
    Global data                            4300269     4321077
    Total (code and data)                  7086832     7090136
    Special memory (IO, persistent etc.)   128000      128000

    So the code memory goes down by roughly 17 500 and the data goes up by roughly 21 000. It seems that the compiler either doesn't take "replace constants" very seriously, or the data area also encompasses the text area and that's where the big structure is stored. Anyway, overall it is bigger by less than 4 kB. And a 0.05 % increase in memory seems like an acceptable price for 1 % CPU 😅

    This is a bit off-topic now, but just writing about the "replace constants" settings, is there a recommendation from BCI which setting should be used and if so, is there some documentation that says so and why?

    Community Moderator

    My remark about memory usage was more like a general thought. I just wanted to say we need to have an in-depth look at your proposal before we can adopt it.

    About "Replace constants": This option only applies to scalar types, so it doesn't change anything in this use case. See the bottom of this page: Beckhoff Information System - English

    I am not sure if there is a general recommendation by BCI, but my recommendation is to always activate "Replace constants". It should improve performance slightly (as the Beckhoff documentation confirms) and it makes the HMI start faster, especially with many digital I/Os. The disadvantage is that ADR cannot be used anymore and that no symbol is generated for replaced constants. I don't think these constraints really matter, and actually the reduced number of sub-symbols of the BinIo FB is the reason why the HMI starts faster.
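
    A small illustration of the ADR restriction (sketch only, names made up):

VAR CONSTANT
  nMaxItems : INT := 100;
END_VAR
VAR
  pValue : POINTER TO INT;
END_VAR

// With "Replace constants" active, nMaxItems is inlined into the code
// as a literal, so it has no address and this line no longer compiles:
pValue := ADR(nMaxItems);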

    Long-established Member

    Of course I can understand that you don't just take code from someone on the internet and integrate it into your flagship product 😛

    I'm glad we agree on the Replace Constants settings. I also advise my colleagues to activate this setting, although my main concern is not the performance gain. Instead, my argument is that not having this setting allows the PLC engineers to modify constants at runtime like any other variables. Someone may accidentally (or even on purpose) modify a constant of a running machine to test something and then leave the machine without actually changing the constant in the code. Then a restart of the machine by the customer at some later point in time will restart the PLC program with different behavior than when we left it. Not good.

    Like you said, the constraints are hardly important, whereas the benefits are attractive.