I am wondering if there is a better/faster way to filter my data.
The data table consists of a campaign index, a measurement index, and some measurement data. Within each measurement campaign there are hundreds of rows of measurement results.
| Measurement Campaign | Measurement Index | Result |
| --- | --- | --- |
To do some mathematics for each measurement campaign, I created a table of the measurement indexes, which I feed into a table-row-to-variable loop. Inside the loop I use this variable to filter out the measurement results with the Row Filter node.
To use the variable inport of the Row Filter node, I had to convert the measurement campaign index to string type.
As you can see, this is quite a complex workflow, and because of the data size (500,000+ rows) the filtering takes quite a long time. Do you have any hints for me on how to speed this up?
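For reference, the looped approach described above is roughly equivalent to the following pandas sketch (the column names come from the table above; the sample data and the sum are just placeholders). Each iteration re-scans the full table, which is why the cost grows with the number of campaigns times the number of rows:

```python
import pandas as pd

# Hypothetical sample data mirroring the table structure above
df = pd.DataFrame({
    "Measurement Campaign": [1, 1, 2, 2],
    "Measurement Index": [10, 11, 10, 11],
    "Result": [0.5, 1.5, 2.0, 3.0],
})

# Per-campaign loop: each pass filters the *whole* table again,
# analogous to the loop + Row Filter workflow in KNIME
results = {}
for campaign in df["Measurement Campaign"].unique():
    subset = df[df["Measurement Campaign"] == campaign]
    results[campaign] = subset["Result"].sum()
```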
There are several things I'm a bit confused about. Could you please attach a sample workflow to illustrate the problem?
From the data I see, I would recommend using the GroupBy node. In the Groups tab you can group the data by Measurement Campaign (or Measurement Index, or both). In the Aggregation tab you can then aggregate the values in the result columns with a mathematical or string operation of your choice (Sum, Count, List, etc.).
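To illustrate what the GroupBy node does, here is the equivalent in pandas (a sketch, not KNIME itself; the sample data is invented and Sum stands in for whatever aggregation you need). A single grouped aggregation replaces the whole per-campaign loop:

```python
import pandas as pd

# Hypothetical sample data with the columns from the question
df = pd.DataFrame({
    "Measurement Campaign": [1, 1, 2, 2],
    "Measurement Index": [10, 11, 10, 11],
    "Result": [0.5, 1.5, 2.0, 3.0],
})

# Group by campaign and sum the results in one pass over the data
summary = df.groupby("Measurement Campaign")["Result"].sum().reset_index()
print(summary)
```

Because the grouping is done in a single pass, this scales far better on 500,000+ rows than filtering the table once per campaign.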