Data management

Data management > Basic > Accumulate Records

Measurement on the whole image file is performed frame-by-frame (volume-by-volume). The Accumulate Records action can be used once or multiple times to obtain records (table rows) from all frames (or volumes) of the selected loop. By default (without accumulation), records are generated per frame (or per volume) for performance reasons. To work with a larger record set (sorting, filtering, calculating statistics, ...), use accumulation.

Accumulate loop

Which loop to accumulate. Select All Loops to process all data at once.

Data management > Basic > Append Columns

Columns from the input tables are appended in the order in which the tables appear. This is useful when joining multiple features measured on the same objects. The number of records is expected to be the same across all input tables.

Data management > Basic > Calculated Column

This node works as a calculator with predefined operators. Existing columns or their statistics (see Data management > Grouping > Aggregate Rows) can be used as variables. The Type of the calculated column and its Unit should be set first. Once the calculation is written, confirm it by clicking Apply.
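
For instance, assuming the table contains columns named MeanIntensity and Volume (illustrative names only), a calculated column for the total signal could be defined as:

    MeanIntensity * Volume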

Data management > Basic > Modify Columns

Select which features are displayed in the resulting table, change their order using the arrows, and change their name (New Title), number format (New Format) and Precision.

Note

The Modify Table Columns dialog, which has the same functionality as this node, can be opened directly from Analysis Results.

Data management > Basic > Reduce Records

Groups records and computes the selected statistic over the rows. If the table is grouped, the statistic is calculated for each group. Each group of rows becomes one row in the output table.

For more information about grouping see Data management > Grouping > Group Records, for more information about statistics see Data management > Grouping > Aggregate Rows.

Data management > Basic > Scale Column Data

This node is used for changing units, offsetting values and similar tasks. Select the Column which is about to be changed, define the new Unit and set the value of Offset or Gain (NewVal = OldVal * Gain + Offset). No new column is created; the values in the source column are rescaled in place.
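
A hedged example of the scaling, assuming a length column in µm that should be converted to nm:

    // NewVal = OldVal * Gain + Offset
    // Gain = 1000, Offset = 0  ->  a value of 12.5 µm becomes 12500 nm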

Data management > Grouping > Aggregate Rows

Aggregate Rows computes the selected statistic over the rows. If the table is grouped, the statistic is calculated for each group. Each group of rows becomes one row in the output table.

Table 6. Aggregation Statistics

None               Any value (typically the first).
Total              Number of values, including empty (null) values.
Count              Number of values, not including empty (null) values.*
Distinct           Number of distinct (unique) values.
Mean               Mean value, SUM(values)/N.*
Median             Middle value of all ordered values.*
Sum                All values added.*
Min                Minimum value.*
Max                Maximum value.*
StDev.P            Population standard deviation.*
StDev.S            Sample standard deviation.*
Var.P              Population variance.*
Var.S              Sample variance.*
VarCoef.P          Population coefficient of variation.*
VarCoef.S          Sample coefficient of variation.*
StErr.P            Population standard error.*
StErr.S            Sample standard error.*
Skewness.P         Population skewness.*
Skewness.S         Sample skewness.*
Kurtosis.P         Population kurtosis.*
Kurtosis.S         Sample kurtosis.*
RootMeanSquare.P   Population root mean square.*
RootMeanSquare.S   Sample root mean square.*
Coalesce           First non-empty (non-null) value.
LastMinusFirst     Last value minus first (values[N-1] - values[0]).
First              First value (values[0]).
Last               Last value (values[N-1]).
Mode               Mode.*
Entropy            Entropy.*
HistoUniformity    Uniformity of the histogram (see Measurement > Uniformity).*
QuartileQ1         First quartile.*
QuartileQ3         Third quartile.*
PercentileP01      1st percentile.*
PercentileP05      5th percentile.*
PercentileP10      10th percentile.*
PercentileP90      90th percentile.*
PercentileP95      95th percentile.*
PercentileP99      99th percentile.*

* Does not include empty (null) values.
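
As an illustration of the .P/.S distinction and of how the starred statistics treat empty values, a minimal JavaScript sketch (not the node's implementation):

    // Starred statistics first drop empty (null) values.
    function stDevS(values) {
      const v = values.filter(x => x !== null);
      const mean = v.reduce((s, x) => s + x, 0) / v.length;
      const ss = v.reduce((s, x) => s + (x - mean) ** 2, 0);
      return Math.sqrt(ss / (v.length - 1)); // sample (.S) divides by N - 1
    }
    // The population variant (StDev.P) divides by N instead of N - 1.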

Data management > Grouping > Filter Groups

Filters whole groups (grouped by the selected Column) based on the selected Statistics per group. Define the filter using the Comparator drop-down menu and the Value edit box.

This node uses aggregation statistics. Please see Data management > Grouping > Aggregate Rows.

Data management > Grouping > Group Records

All rows are in one group by default. Rows having equal values in a given column (selected in the Group Records action) form a group. Grouping is used together with statistics (Aggregate Rows) for calculating per-group (BinaryLayer, ZStack, Object, ...) aggregates. The only limitation is that the result of the grouping is currently not visible in the preview or in the final result.
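
A conceptual JavaScript sketch of grouping (the column name ZStackIndex is only an example):

    // Rows with equal ZStackIndex values form one group;
    // Aggregate Rows then turns each group into a single output row.
    const groups = new Map();
    for (const r of rows) {
      if (!groups.has(r.ZStackIndex)) groups.set(r.ZStackIndex, []);
      groups.get(r.ZStackIndex).push(r);
    }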

Data management > Grouping > Ungroup Records

Previously grouped records are ungrouped by this action.

Data management > Sort & Select > Current Records

Filters records associated with the current frame. This node is useful only when the connected table is accumulated.

Data management > Sort & Select > Filter Records

This action can be used to filter table records (rows) using a selected column. The resulting table contains only the rows satisfying the filter condition applied to the given column.

Column

Column to be filtered.

Comparator

Comparison operator.

Value

Comparison operand.

Example 11. Filtering out Spots not having any Cell as a parent.


Example 12. Filter Records actions used in series act as an intersection. Both conditions (10 <= EqDiameter AND 0.9 <= Circularity) must be met at the same time.


Example 13. Filter Records actions used in parallel act as a union. One or both conditions (10 <= EqDiameter OR 0.9 <= Circularity) must be true.
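
The two arrangements correspond to the following logic (a JavaScript sketch with the example columns; not the node's implementation):

    // In series (Example 12): both filters must pass -> AND
    const both = rows.filter(r => r.EqDiameter >= 10)
                     .filter(r => r.Circularity >= 0.9);
    // In parallel (Example 13): either filter may pass -> OR
    const either = rows.filter(r => r.EqDiameter >= 10 || r.Circularity >= 0.9);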


Data management > Sort & Select > Pivot Table

This node "pivots" the input table by the specified Pivot Column ("ObjectId" in the example below): it creates a new column for each "ObjectId" value. The new columns contain the values of the columns having a column Role in the bottom definition table ("Entity" and "EqDiameter" in the example below). E.g. in the second column of the results table, the "EqDiameter" values for objects with ID==1 are arranged in rows by the "ZStackIndex". The number of rows depends on the possible combinations of all values of the columns marked with the role Row. Each source column is created for at most the first number of pivot values specified in the Column Count edit box. The order of the columns is specified by the Column Order switch. In our example below, Append would add 10x Entity followed by 10x EqDiameter, whereas Mix interleaves them: Entity, EqDiameter, Entity, EqDiameter, etc. Column Suffix sets the name of the created columns; PIVOT_NAME ("ObjectId" in our example) and PIVOT_VALUE (the "ObjectId" value) or any other text can be used here.
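
A rough JavaScript sketch of the pivot described above (column names taken from the example; not the node's implementation):

    // input rows: { ZStackIndex, ObjectId, EqDiameter }
    // output: one row per ZStackIndex, one EqDiameter column per ObjectId
    const out = new Map();
    for (const r of rows) {
      const row = out.get(r.ZStackIndex) ?? { ZStackIndex: r.ZStackIndex };
      row['EqDiameter_' + r.ObjectId] = r.EqDiameter; // PIVOT_VALUE as suffix
      out.set(r.ZStackIndex, row);
    }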

Data management > Sort & Select > Select First & Last

Shows the first and last record from the connected table.

Data management > Sort & Select > Select Records

The Select Records action takes a specified number of records (Count) starting from a selected row (First Row). A typical use-case is to select the first row of a sorted record set.

Data management > Sort & Select > Select Top

Reorders the records based on a given column and then shows the defined number of records (either smallest or biggest). This node is a combination of Sort Records, Reverse Records and Select Records.

Data management > Sort & Select > Sort Records

The Sort Records action reorders the records so that the selected column is sorted in ascending order.

Data management > Table Manipulation > Append Records

Appends records (rows) from the input tables one by one into the output table. If an input table contains a column already present in the output table (i.e. it has the same column ID as a column from the preceding input tables), that column is reused and a new column is not appended.

The resulting table is ordered by the identification columns.

Column ID

Every column has an implicit ID (invisible to the user) given to it by the node that creates the column. The ID is used internally to reference columns; therefore, even after a column is renamed, it is still correctly pointed to by subsequent nodes. Consequently, columns are considered "the same" if they have the same ID. Special identification columns such as Loop indexes, Object Entity and Object IDs are given the same ID in every table. Therefore an ObjectID column will be considered the same across all tables. This behavior is usually expected. If not, it can be altered with these nodes: Data management > Table Manipulation > Copy Column ID, Data management > Table Manipulation > New Column ID and Data management > Table Manipulation > Compact Columns.

Appending records is useful when joining two tables with the same columns, for example two disjoint filtered or aggregated record sets.

Data management > Table Manipulation > Compact Columns

Aggregates all columns with the same title (or with titles differing only in a numerical suffix) into the first such column, using the First Valid rule.

This is useful when Data management > Table Manipulation > Join Records or Data management > Table Manipulation > Append Records produces duplicate columns because of their different IDs (please see Data management > Table Manipulation > Append Records for more on Column IDs).

For more control over which columns are merged, use Copy Column ID.

The following example demonstrates how to force two different columns to merge into a single one using Data management > Table Manipulation > Append Records.

After Append Records there are two columns, MeanObjectIntensity and MeanObjectIntensity2, that need to be merged into one.

Data management > Table Manipulation > Copy Column ID

The resulting table contains all columns from table A, with the specified column IDs replaced by the IDs from reference table B.

This is useful when tables A and B are intended to be merged using the Data management > Table Manipulation > Join Records or Data management > Table Manipulation > Append Records node and they contain columns with different IDs (please see Data management > Table Manipulation > Append Records for more on Column IDs) that should be treated as the same column.

There is a simpler, automatic way using Data management > Table Manipulation > Compact Columns.

The following example demonstrates how to force two different columns to merge into a single one using Data management > Table Manipulation > Append Records.

Data management > Table Manipulation > Duplicate Column

Duplicates the selected column.

Data management > Table Manipulation > Join Records

This action is useful for merging incompatible tables:

  • frames and objects

  • cells (objects) and nuclei (objects)

It joins records from two or more unrelated record sets (with different numbers of rows and columns). The resulting table contains a union of all distinct input table columns (loop indexes, Entity and Object IDs are considered the same).

Select the type of join (see below) and add a table row specifying the relation between the joined features taken from two different tables.

Inner Join

Inner Join outputs a Cartesian product of the related rows (rows where the "using" columns have the same value):

CountRows(R) = CountRows(A) × CountRows(B) × CountRows(C) × ...,

where A, B, C, ... are input tables and R is an output table.

A related row (e.g. ZStackIndex = 3) must be present in all tables.
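
The following JavaScript sketch illustrates the inner-join semantics (using ZStackIndex from the example as the "using" column; not the node's implementation):

    // Every matching pair of rows produces one output row,
    // hence the product of row counts for each key value.
    const R = [];
    for (const a of A)
      for (const b of B)
        if (a.ZStackIndex === b.ZStackIndex) R.push({ ...a, ...b });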

Left Join

Left Join is an inner join plus all rows that have no related rows in the tables to the right. All rows from the leftmost table (A) are in the result table (R).

Right Join

Right Join is the same as the Left Join but with the order of the inputs reversed (..., C, B, A).

Outer Join

Outer join is a union of the left and right join.

Please see Join for more information.

Data management > Table Manipulation > New Column ID

Makes the IDs of the specified columns unique (please see Data management > Table Manipulation > Append Records for more on Column IDs).

It is useful when nodes like Data management > Basic > Append Columns, Data management > Table Manipulation > Join Records and others treat a column coming from two different tables as the same one and do not include it twice. This is typically because both columns were made by one node or the column is a "system" column like Loop Index columns, Entity or Object ID.

Without the Data management > Table Manipulation > New Column ID node both source columns are merged (overwritten) into one.

Data management > Table Manipulation > Shift Records

Shifts data in the selected columns by a given number of rows.

Fill

Empty

Rows which are empty after the shift are left empty.

Original

Rows which are empty after the shift are filled with the original values.

Cycle

Rows which are empty after the shift are filled with the values from the other end of the column (wrap-around).
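
A small JavaScript sketch of the three Fill modes for a shift by n rows (illustrative only):

    function shiftColumn(col, n, fill) {
      const N = col.length;
      return col.map((v, i) => {
        const j = i - n;                      // source row after the shift
        if (j >= 0 && j < N) return col[j];
        if (fill === 'Empty') return null;    // leave the row empty
        if (fill === 'Original') return v;    // keep the original value
        return col[((j % N) + N) % N];        // Cycle: wrap around the column
      });
    }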

Data management > Statistics > Aggregate Columns

The Aggregate Columns action creates a new column with the given name (Title) in which the result of a calculation (Aggregation) is placed. The calculation is based on the checked columns.

Title

Name of the new column.

Aggregation

Statistics to be computed.

Column

Checked columns will be used for calculation.

This node uses aggregation statistics. Please see Data management > Grouping > Aggregate Rows.

Data management > Statistics > Binning

Creates a new bin column and assigns each row to one bin based on the value of the selected column. The rows can then be grouped based on the bins.

Source column

Column for binning.

New column

Name of the new bin column.

Column Type

Sets the column type either to a Number or a String (sequence of characters).

Unit

Unit of the new bin column.

Add bin

Adds a new table row (one bin).

Remove selected bin

Deletes a row record (one bin).

Remove all bins

Clears the table (deletes all bins).

Generate bins automatically

Opens the Generate Bins window used to calculate linear or logarithmic bins based on the bin width. Set the lower starting value of the bin set (Min), then the upper ending value (Max) and enter the width of the bins. Choose whether the generated bins will be Linear or Logarithmic with a specified base. Set the Label for each newly generated bin: enter "lo" to start from the lowest generated value or "hi" to start from the lowest generated value + width. Any other mathematical combination of "lo" and "hi" is also possible (see the examples below).

Validate bins

If the bins are created improperly, this function automatically corrects them so that they follow each other.

Copy to clipboard

Copies the full table to clipboard.

Paste from clipboard

Inserts the table data from clipboard into the frequency table.

lo (incl.) / hi (excl.) / value

Fill in the table so that the lo and hi values define each bin's range and value sets a user-defined label for the bin. When lo or hi is empty (null), it is interpreted as negative or positive infinity, respectively. Bins should be mutually exclusive; otherwise the behavior is not defined.
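
A JavaScript sketch of the bin assignment rule (illustrative only):

    // lo is inclusive, hi is exclusive; empty bounds act as ±infinity
    function binLabel(x, bins) {
      for (const { lo, hi, value } of bins) {
        if (x >= (lo ?? -Infinity) && x < (hi ?? Infinity)) return value;
      }
      return null; // the value falls into no bin
    }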

Data management > Statistics > Binning (simple)

Creates a new column and fills it with the label of the bin into which the value in the source column falls. Bins are equidistant intervals between min and max.

New column name

Name of the new column.

Source column

Column for binning.

Min (incl.)

Minimal bin value.

Max (excl.)

Maximal bin value.

Count of bins

Number of bins.

Class label

Sets how each bin is labeled (Start point: min of each bin, Middle point: middle of each bin, Bin class Id: bin index starting from 1).
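
The assignment can be thought of as follows (JavaScript sketch, illustrative only):

    // Equidistant bins between min (incl.) and max (excl.)
    const width = (max - min) / countOfBins;
    const binId = Math.floor((x - min) / width) + 1; // Bin class Id starts at 1
    const start = min + (binId - 1) * width;         // Start point label
    const middle = start + width / 2;                // Middle point label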

Data management > Statistics > Frequency Table

This table can be used to classify the result data to see the number of elements in each defined class. Select the Source column, name the New column and add units (Unit). Fill in the table so that the From and To values represent the bin range (of source values) which will be substituted by the new value specified in the third column.

Other tools are similar to the Data management > Statistics > Binning node.

Data management > Statistics > Generate Distribution

(requires: Local Option)

Creates a table with points of the Probability density function or the Cumulative distribution function. Select the Distribution, its Parameters, the interval Endpoints and the Step. If checked, the y-values of the Critical Region are generated into a separate column. The Tested Value is generated into a separate column as well.

Data management > Statistics > Statistics

Creates a summary statistics table, as seen in the example below.

This node uses aggregation statistics. Please see Data management > Grouping > Aggregate Rows.

Data management > Statistical tests > ANOVA One-way

(requires: Local Option)

Performs One-way ANOVA (analysis of variance). Please see one-way ANOVA.

Select a column with the Samples Data and the parameters of the test. The table must be grouped; every group is one factor (i.e. treatment group). The output results of the analysis are displayed in a one-row table.

Working example from a Wikipedia page:

  1. Create a table with the example data and group it by the "group" column.

    // The two arrays become two table columns: the first holds the
    // "group" labels, the second the measured values (one per record).
    return [
    [ 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3 ],
    [ 6, 8, 4, 5, 3, 4, 8, 12, 9, 11, 6, 8, 13, 9, 11, 8, 7, 12 ]
    ];
  2. Set up the ANOVA and get the results.

Data management > Statistical tests > F-test

(requires: Local Option)

This test can be used to test:

  • the hypothesis that the means of a given set of normally distributed populations, all having the same standard deviation, are equal.

  • the hypothesis that a proposed regression model fits the data well.

  • the hypothesis that two normal populations have the same variance.

    Please see F-test.

Select Sample A from table A, Sample B from table B and the parameters of the test. If the tables are grouped, the number of groups in each table must be the same. The F-test is done for each pair of groups (the i-th group in table A with the i-th group in table B). The output is a table with one row of data for each group pair.

Data management > Statistical tests > t-test 1s

(requires: Local Option)

This is a location test of whether the mean of a population has a value specified in a null hypothesis (please see t-test).

Select the Sample and the parameters of the test. If the table is grouped, the t-test is done for each group. The output is a table with one row of data for each group.

Data management > Statistical tests > t-test 2s pair

(requires: Local Option)

Paired samples t-tests typically consist of a sample of matched pairs of similar units, or one group of units that has been tested twice (a "repeated measures" t-test). A typical example of the repeated measures t-test would be where subjects are tested prior to a treatment, say for high blood pressure, and the same subjects are tested again after treatment with a blood-pressure-lowering medication. By comparing the same patient's numbers before and after treatment, we are effectively using each patient as their own control (please see t-test).

Select Sample A, Sample B and the parameters of the test. If the table is grouped, the t-test is done for each group. The output is a table with one row of data for each group.

Data management > Statistical tests > t-test 2s unpair

(requires: Local Option)

The independent (unpaired) samples t-test is used when two separate sets of independent and identically distributed samples are obtained, one from each of the two populations being compared. For example, suppose we are evaluating the effect of a medical treatment, and we enroll 100 subjects into our study, then randomly assign 50 subjects to the treatment group and 50 subjects to the control group. In this case, we have two independent samples and would use the unpaired form of the t-test (please see t-test).

Select Sample A from table A, Sample B from table B and the parameters of the test. If the tables are grouped, the number of groups in each table must be the same. The t-test is done for each pair of groups (the i-th group in table A with the i-th group in table B). The output is a table with one row of data for each group pair.

Data management > Statistical tests > Z-factor

Calculates the Z-factor for positive and negative control groups.

Labels

Select column containing labels (positive, negative and others).

Values

Select column containing measured control values.

Negative Label

Enter label of rows with negative control (in Labels column).

Positive Label

Enter label of rows with positive control (in Labels column).

Z-factor

Check to create column with Z-factor value and enter its name.
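
The dialog does not display the formula; the Z-factor is commonly defined (Zhang et al., 1999) as follows, and the node presumably follows this standard definition:

Z = 1 - 3 * (SD_positive + SD_negative) / |Mean_positive - Mean_negative|

where the means and standard deviations are computed over the rows labeled as positive and negative controls.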

Data management > Position > Addition

Vectorially adds two positions. Z positions are optional. The result is two or three new columns (X, Y, (Z)).

ResX = X0 + X1, ...

Data management > Position > Difference (vector)

Vectorially subtracts two positions in time. Z positions are optional. The result is two or three new columns (X, Y, (Z)).

ResX = X0 - X1, ...

Data management > Position > Distance

Calculates the distance between two points in time. Z positions are optional. The result is one new column (Dist).

Dist = SQRT( (X0 - X1)^2 + (Y0 - Y1)^2 + (Z0 - Z1)^2 )

Data management > Position > Optimal path

Not documented yet.

Data management > Position > Vector Length

Calculates the length of the vector:

Len = SQRT( X^2 + Y^2 + Z^2 )

If the Z (Vector Z) column is left blank, the 2D length SQRT( X^2 + Y^2 ) is calculated.

Data management > Position > Vector Orientation

Calculates the heading of the vector. Heading is the angle in degrees between the positive X axis and the vector, measured counterclockwise, in the range <0; 360>. Elevation is calculated as well when the Z (Vector Z) column is given. Elevation is the angle in degrees between the XY plane and the vector, in the range <-90; 90>.
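
In JavaScript terms, the two angles could be computed as follows (a sketch of the assumed math, not the node's implementation):

    // Heading: angle from the positive X axis, counterclockwise, in <0; 360)
    const heading = (Math.atan2(y, x) * 180 / Math.PI + 360) % 360;
    // Elevation: angle from the XY plane, in <-90; 90>
    const elevation = Math.atan2(z, Math.hypot(x, y)) * 180 / Math.PI;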

Data management > Position > Stage Transformation

Transforms (recalculates) the position coordinates between the stage (absolute) and the image (relative) system.

Position X,Y,Z

Defines the X, Y, Z positions.

Transformation

Defines the source and destination format of the position.

New Column

Name of the transformed features.

Unit

Unit of the transformed features.

Data management > Processing > Find Local Extrema

Only the local extreme values found in the source Column are copied into the new column. Name the New Column and specify its units (Unit). Select which Extrema are taken into account and adjust the Threshold to filter out subtle changes.

Data management > Processing > Rolling Average

Calculates the rolling average value as an average of three consecutive values (current value and the value before and after the current one). Select the Column from which the rolling average will be calculated, name the New Column and specify its units (Unit). Use Window Radius to expand the number of consecutive values from which the average is calculated.
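
A JavaScript sketch of the rolling average with Window Radius r (the edge handling shown is an assumption; the node may treat borders differently):

    function rollingAverage(col, r) {
      return col.map((_, i) => {
        // window of up to 2*r + 1 values centered on row i
        const w = col.slice(Math.max(0, i - r), Math.min(col.length, i + r + 1));
        return w.reduce((s, x) => s + x, 0) / w.length;
      });
    }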

Data management > Processing > Rolling Median

Calculates the rolling median value as a median of three consecutive values (current value and the value before and after the current one).

Data management > Processing > Rolling Minimum

Calculates the rolling minimum value as a minimum of three consecutive values (current value and the value before and after the current one).

Data management > Processing > Rolling Maximum

Calculates the rolling maximum value as a maximum of three consecutive values (current value and the value before and after the current one).

Data management > Curve Fitting > Curve Fitting

Fits a curve to the data using the method of least squares. Fitted values are placed into a new column. Select the Dependent Column and Independent Column, choose the Curve type and specify the Outputs.

Data

Independent Column

Column with independent data [x].

Dependent Column

Column with dependent data [y = f(x)].

Model

Curve

Type of the curve; in the case of a higher polynomial curve, also its degree n.

Table 7. Curve type

Curve                Formula                              Parameters
Mean                 y = a0                               a0
Linear               y = a1*x + a0                        a1, a0
Quadratic            y = a2*x^2 + a1*x + a0               a2, a1, a0
Higher Polynomial    y = an*x^n + ... + a1*x + a0         an, ..., a2, a1, a0
Simple Exponential   y = A*e^(B*x)                        A, B
Gaussian             y = A*exp(-(x-µ)^2 / (2*σ^2))        A, µ, σ

Note

The Gaussian curve is calculated using weighted linear least squares; if it cannot be calculated this way, the nonlinear least squares method is used instead.

use significant parameters only

A p-value is between 0 and 1; a higher value indicates that the parameter is less significant. You can filter out all parameters with p-values higher than a given value. This is done iteratively until all p-values are lower than the given value.

Outputs

Fitted value

Approximated value given by the resulting equation.

Equation of Trend Line

The resulting equation, which can be used in a line chart.

R2

Displays the goodness of fit, between 0 and 1; higher means better.

Coefficients

Appends columns with parameters.

p-values

Appends columns with p-values.

Data management > Curve Fitting > Dose Response

Fits a dose-response curve to the data using the method of non-linear least squares. Fitted values are placed into a new column.

Data

Independent Column

Column with independent data, [x] i.e. Dose.

Dependent Column

Column with dependent data, [y=f(x)] i.e. Response.

Model

Curve

Type of the curve (4PL or 5PL).

Table 8. Curve type

Curve                Parameters
4PL (Symmetrical)    B, T, E, H
5PL (Asymmetrical)   B, T, E, H, S

Note

You can constrain (fix) any of the parameters.

Table 9. 

Parameter   Meaning
B           Bottom (minimum of the function).
T           Top (maximum of the function).
E           Inflection point of the curve.
H           Hill coefficient; gives the direction and steepness of the response curve.
S           Gives the asymmetry around the inflection point.

EC50, IC50, LD50, ... are the same as E for the 4PL model; for the 5PL model they are derived from E, H and S.

Outputs

Fitted value

Approximated value given by the resulting equation.

Equation of Dose-Response Curve

Resulting equation, can be used in a linechart.

Coefficients

Appends columns with parameters.

Data management > Curve Fitting > Gauss Mixture

Fits multiple Gaussian curves to the data using the method of nonlinear least squares.

Data

Independent Column

Column with independent data, [x].

Dependent Column

Column with dependent data, [y=f(x)].

Model

Number of Peaks

Number of Gaussian curves n; the number of output curves can be smaller.

Table 10. Curve type

Curve                Formula                                        Parameters
Multiple Gaussians   y = SUM( Ai*exp(-(x-μi)^2 / (2*σi^2)) )        Ai, μi, σi

Outputs

Fitted value

Approximated value given by the resulting equation.

Equation of Mixture

The resulting equation containing all the curves; it can be used in a line chart.

Separate equation for each peak

Appends a column for each fitted curve.

Coefficients

Appends columns with parameters.

Data management > Detect > DBSCAN

This Density-Based Spatial Clustering of Applications with Noise (DBSCAN) action clusters 2D points. Cluster IDs are inserted into a new column. Name the new column (New Column Name), select the Position X Column and Position Y Column, and set the Maximal Distance in Cluster and the Minimal Number of Neighbours. Two points are neighbours if their distance from each other is not greater than the Maximal Distance in Cluster. If any point has at least the Minimal Number of Neighbours, this point and all of its neighbours are added to the same cluster. If a point is part of a cluster, all of its neighbours are also part of that cluster.

For details about this action, please see the DBSCAN method.
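
A condensed JavaScript sketch of the clustering rule described above (illustrative only, not the node's implementation):

    function dbscan(pts, maxDist, minNeighbours) {
      const ids = new Array(pts.length).fill(null); // null = noise / unassigned
      const near = i => pts.flatMap((q, j) =>
        j !== i && Math.hypot(pts[i].x - q.x, pts[i].y - q.y) <= maxDist ? [j] : []);
      let cluster = 0;
      for (let i = 0; i < pts.length; i++) {
        if (ids[i] !== null || near(i).length < minNeighbours) continue;
        ids[i] = ++cluster;                 // i is a core point: start a cluster
        const queue = near(i);
        while (queue.length) {
          const j = queue.pop();
          if (ids[j] !== null) continue;
          ids[j] = cluster;                 // neighbours join the cluster
          const nj = near(j);
          if (nj.length >= minNeighbours) queue.push(...nj); // core points expand it
        }
      }
      return ids; // cluster ID per point
    }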

Data management > Detect > Grid Points

Detects a square grid and appends rows with the coordinates of the missing points. Select the columns containing the coordinates of the grid points and select what to ignore during the detection of the missing points.

Data management > Detect > Parse Well Name

Recognizes well names and parses them into a row and a column.

Table 11. 

Input Column   WellRow   WellColumn
AA01           AA        01
01AA           01        AA
AB             AB
01                       01

Data management > Detect > TMA Dearraying

Inserts a new column containing new indexes of the binary objects of the detected TMA (Tissue microarray) cores. Enter the New Column Name and select the Position X Column and Position Y Column containing the positions of the core centers. If you check Skip Gaps, the indexing will not include the numbers of missing cores (binaries will be numbered from 1 to N, where N is the count of binary objects).

This node automatically detects the columns and rows of the grid of TMA cores. Then it assigns new indexes so that cores in an upper row have smaller indexes than cores in a lower row. In each row, cores are indexed from left to right. Numbers of missing cores are not used (for example, if you have 10 cores in a 4x3 grid, the core in the bottom right corner (if not missing) will always have an index of 12).

For good results, the cores should lie in a square grid which is not rotated.

Note

This node can be used on any binary objects lying in a square grid (not just TMA cores).

Data management > Detect > Values Run

(requires: Local Option)

Detects runs of the same values and returns a column with run IDs. The same ID indicates the same value in the Column for subsequent indexes in the Index Column.

Data management > Sequences > Difference

Calculates a new column as the difference (subtraction) between the second and first value, the third and second, etc. Select the Column from which the difference will be calculated, name the New Column and specify its units (Unit). The first record is left blank when First Item Is Empty is selected.

Data management > Sequences > Integrate

Calculates a new column as the sum of the first and second value, the second and third value, etc. The first row stays the same. Select the Column from which the integration will be calculated, name the New Column and specify its units (Unit). Enter a Constant if you want to add a number to each sum of two values.
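
Both nodes follow directly from the descriptions above (a JavaScript sketch, illustrative only; the first-row handling for Difference is an assumption):

    // Difference: out[i] = col[i] - col[i-1]; first row blank or kept as-is
    const diff = col.map((v, i) =>
      i === 0 ? (firstItemIsEmpty ? null : v) : v - col[i - 1]);
    // Integrate: out[i] = col[i-1] + col[i] + Constant; first row unchanged
    const integ = col.map((v, i) => (i === 0 ? v : col[i - 1] + v + constant));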

Data management > Sequences > Position Difference

Computes the difference (vector) of the X, Y, Z columns between every two consecutive rows.

Data management > Sequences > Position Integrate

Adds up the X, Y, Z columns of every two consecutive rows.

Data management > Sequences > High Pass Filter

ΔT is the time interval [s] and fc is the cut-off frequency [Hz].

Please see High Pass Filter for more details.

Data management > Sequences > Low Pass Filter

ΔT is the time interval [s] and fc is the cut-off frequency [Hz].

Please see Low Pass Filter for more details.
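
The filter equations are not reproduced in this manual. A common first-order (RC) discrete-time form, assuming this is what the two nodes implement, is:

    // RC = 1 / (2 * PI * fc); x = input column, y = output column
    // Low pass:  y[i] = y[i-1] + a * (x[i] - y[i-1]),   a = ΔT / (RC + ΔT)
    // High pass: y[i] = b * (y[i-1] + x[i] - x[i-1]),   b = RC / (RC + ΔT)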

Data management > Sequences > Sequence (Int)

Generates a new column with an integer sequence. Name the New Column and set the Start and Step values of the sequence. Optionally, the sequence can respect a selected loop-over column if switched from the default <rows> value.

Data management > Sequences > Sequence (Exp)

Generates a new column with an exponential sequence. Name the New Column and set the Start and Factor (exponent) values of the sequence. To create an inverted exponential sequence, click Invert factor and the Factor value is changed automatically. Optionally, the sequence can respect a selected loop-over column if switched from the default <rows> value.
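
Both sequence generators reduce to simple closed forms (an assumption consistent with the parameter names; i counts rows from 0):

    // Sequence (Int): v[i] = Start + i * Step
    // Sequence (Exp): v[i] = Start * Factor^i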

Data management > JavaScript > JS Create Column

Transforms the source record set into the resulting record set that has a new column. The transformation is to be programmed in JavaScript.

See the dedicated documentation: Extending GA3.

Data management > JavaScript > JS Create Table

Transforms N source record sets into a single resulting record set. The transformation is to be programmed in JavaScript.

See the dedicated documentation: Extending GA3.

Data management > JavaScript > JS Scalar Expression

Outputs a scalar record set by evaluating an expression on its scalar inputs.