NIS.ai

NIS.ai > Preprocessing > Clarify.ai

This function removes out-of-focus blur from the source images using neural networks. It is intended for widefield images and works best on thick samples. It is the preferred choice for under-sampled images, whereas deconvolution is preferred for well-sampled images.

See Deconvolution.

Clarify.ai requires valid image metadata (similar to deconvolution). It is a parameterless method which neither increases resolution nor denoises the image; however, it can be combined with NIS.ai > Denoise.ai. Check the Denoise.ai check box next to a channel to perform denoising before clarifying. Check this box only for very noisy images with an SNR value smaller than 20.
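The SNR < 20 rule can be checked before opening the dialog. A minimal sketch of one common estimate (mean foreground signal divided by the standard deviation of the darkest pixels, assumed to be background); the percentile cut-off is an arbitrary assumption, not a value taken from NIS.ai:

```python
import numpy as np

def estimate_snr(image, background_percentile=10):
    """Rough SNR estimate: mean foreground intensity divided by the
    standard deviation of the darkest pixels (assumed background)."""
    flat = np.asarray(image, dtype=float).ravel()
    cutoff = np.percentile(flat, background_percentile)
    background = flat[flat <= cutoff]
    signal = flat[flat > cutoff]
    noise = background.std()
    if noise == 0:
        return float("inf")
    return signal.mean() / noise

# Decide per channel whether to enable the Denoise.ai check box.
def denoise_first(channel):
    return estimate_snr(channel) < 20
```

This is only a heuristic; real SNR estimation depends on the camera noise model.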

Modality

To handle the out-of-focus planes correctly, it is important to know how exactly the image sequence has been acquired. Select the proper microscopic modality from the combo box.

Pinhole size

Depending on the Modality setting, set the pinhole/slit size value and choose the proper units.

Magnification

Specify magnification of the objective used to capture the image sequence.

Numerical Aperture

Enter the numerical aperture of the objective.

Immersion Refractive Index

Enter the refractive index of the immersion medium used. Predefined refractive indexes of different media can be selected from the pull-down menu.

Calibration

Enter the image calibration in μm/px.

Output

Check this option to create a new document. Otherwise, clarifying is applied to the original image.

Channels

Select which channels will be clarified and which will be denoised. You can also adjust the emission wavelength. To revert the changes, click .

Preview

If checked, the clarifying preview is shown in the original image.

OK

Confirms the settings and performs the clarifying.

Cancel

Closes the window without executing any process.

NIS.ai > Preprocessing > Restore.ai

(requires: Local Option)(requires: 2D Deconvolution)(requires: 3D Deconvolution)

Opens the Restore.ai dialog window. This function is designed to be used when denoise and deconvolution processes are combined. It can be applied on all types of fluorescence images (widefield, confocal, 2D/3D, etc.).

Modality

To handle the out-of-focus planes correctly, it is important to know how exactly the image sequence has been acquired. Select the proper microscopic modality from the combo box.

Magnification

Specify magnification of the objective used to capture the image sequence.

Numerical Aperture

Enter the numerical aperture of the objective.

Refraction Index

Enter the refractive index of the immersion medium used. Predefined refractive indexes of different media can be selected from the nearby pull-down menu.

Calibration

Enter the image calibration in μm/px.

Channels

Image channels produced by your camera are listed in this table. Select which channel(s) to process by checking the check boxes next to the channel names. The emission wavelength value can be edited (except for the Live De-Blur method).

Note

Brightfield channels are omitted automatically.

NIS.ai > Preprocessing > Denoise.ai

Performs image denoising with the use of neural networks. This function is used especially for static scenes because moving objects may get blurred.

NIS.ai > Preprocessing > Cells Presence.ai

Detects whether or not cells are present in a brightfield image. Works even on out-of-focus images. Node outputs:

Verdict

Verdict is 1 if cells are present, 0 if they are not present.

Confidence

Confidence of the detection, ranging from 0 (not confident) to 1 (very confident).
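Downstream logic typically combines the two outputs. A minimal sketch in Python; the 0.8 threshold is an arbitrary assumption, not a value from NIS.ai:

```python
def accept_detection(verdict, confidence, min_confidence=0.8):
    """Treat the node output as a positive detection only when the
    verdict is 1 AND the confidence clears a user-chosen threshold."""
    return verdict == 1 and confidence >= min_confidence
```

Raising `min_confidence` trades missed detections for fewer false alarms.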

NIS.ai > Preprocessing > Cells Localization.ai

Detects cells and outputs a binary image with dots on the detected cell centers.

Works only on brightfield images and in a narrow Z range around the focus (±25 µm).

NIS.ai > Transformations > Enhance.ai

For the function description please see Enhance.ai.

Trained AI

Selects the trained network from a file (click Browse to locate the *.eai file).

Details...

Opens metadata associated with training of the currently selected neural network.

NIS.ai > Transformations > Convert.ai

For the function description please see Convert.ai.

Trained AI

Selects the trained network from a file (click Browse to locate the *.cai file).

Details...

Opens metadata associated with training of the currently selected neural network.

NIS.ai > Segmentation > Segment.ai

For the function description please see Segment.ai.

Trained AI

Selects the trained network from a file (click Browse to locate the *.sai file).

Details...

Opens metadata associated with training of the currently selected neural network.

Advanced >>

Reveals post-processing tools and restrictions used for enhancing the results of the neural network.

NIS.ai > Segmentation > Segment Objects.ai

For the function description please see Segment Objects.ai.

Trained AI

Selects the trained network from a file (click Browse to locate the *.oai file).

Details...

Opens metadata associated with training of the currently selected neural network.

Advanced >>

Reveals post-processing tools and restrictions used for enhancing the results of the neural network.

NIS.ai > Segmentation > Homogeneous Area / Cells.ai

Uses Cells Localization.ai to segment the image into areas with cells and homogeneous areas without cells. The resulting binary image equals 0 at the cells and 1 at the homogeneous areas. The segmentation fails if fewer than 10 cells are found.

Channel

Select the channel for segmentation.

Inversion

Check to invert the resulting binary (change 0s to 1s and vice versa).

NIS.ai > Trained files > Select Trained File.ai

Selects an appropriate trained AI file according to the sample objective magnification. Output is expected to be used as a dynamic input parameter for another AI node. Paths to the trained AI files, either relative or absolute, can be defined using a standard regular expression.
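The magnification-to-file mapping can be sketched as follows. The file-naming convention (`..._20x.sai`) and folder layout are hypothetical; only the idea of matching the objective magnification against a path pattern comes from the node description:

```python
import re

# Hypothetical naming convention: "cells_10x.sai", "cells_20x.sai", ...
PATTERN = re.compile(r"_(\d+)x\.sai$")

def select_trained_file(paths, magnification):
    """Return the first trained-AI file whose embedded magnification
    matches the objective magnification, or None if there is no match."""
    for path in paths:
        m = PATTERN.search(path)
        if m and int(m.group(1)) == int(magnification):
            return path
    return None
```

In NIS.ai itself the pattern is entered in the node dialog and the output feeds the Trained AI parameter of another AI node.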

NIS.ai > Measurement > Quality Estimate.ai

(requires: Local Option)

Estimates the Signal to Noise Ratio (SNR) value as used by AutoSignal.ai.

NIS.ai > Evaluation > Segmentation Accuracy

Calculates the average precision to evaluate AI on objects. This node has two inputs - GT (Ground Truth) and Pred (Prediction). It compares the ground truth binary layer (A) and predicted binary layer (B) generated by segmentation using AI. It also pairs the objects from both layers and classifies them (based on the IoU threshold) into:

  • true positives (TP) matched correctly,

  • false positives (FP) incorrectly segmented objects and

  • false negatives (FN) incorrectly missed objects.

Based on these numbers it calculates:

  • precision = TP / (TP + FP),

  • recall = TP / (TP + FN) and

  • F1 = 2 x precision x recall / (precision + recall)

IoU Threshold

Defines a threshold above which two overlapping objects A and B are considered correctly matched. The threshold applies to the Intersection over Union:

IoU(A, B) = |A ∩ B| / |A ∪ B|
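The pairing and metrics described above can be sketched as follows. The greedy one-to-one matcher is a simplification; the node's actual pairing strategy is not documented here:

```python
def iou(a, b):
    """Intersection over Union of two objects given as sets of pixel coordinates."""
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

def segmentation_accuracy(gt_objects, pred_objects, iou_threshold=0.5):
    """Greedily pair ground-truth and predicted objects, then compute
    precision, recall and F1 from the TP/FP/FN counts."""
    unmatched_pred = list(pred_objects)
    tp = 0
    for gt in gt_objects:
        best = max(unmatched_pred, key=lambda p: iou(gt, p), default=None)
        if best is not None and iou(gt, best) > iou_threshold:
            tp += 1                       # correctly matched pair
            unmatched_pred.remove(best)
    fp = len(unmatched_pred)              # predictions with no matching GT object
    fn = len(gt_objects) - tp             # GT objects never matched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

With a perfect prediction all three metrics equal 1; each extra predicted object lowers precision, each missed ground-truth object lowers recall.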

NIS.ai > Evaluation > Object Segmentation Accuracy

Calculates the average precision to evaluate AI on objects. This node has two inputs - GT (Ground Truth) and Pred (Prediction). It compares the ground truth binary layer (A) and predicted binary layer (B) generated by segmentation using AI. It also pairs the objects from both layers and classifies them (based on the IoU threshold) into:

  • true positives (TP) matched correctly,

  • false positives (FP) incorrectly segmented objects and

  • false negatives (FN) incorrectly missed objects.

Based on these numbers it calculates:

  • precision = TP / (TP + FP),

  • recall = TP / (TP + FN) and

  • F1 = 2 x precision x recall / (precision + recall)

IoU Threshold

Defines a threshold above which two overlapping objects A and B are considered correctly matched. The threshold applies to the Intersection over Union:

IoU(A, B) = |A ∩ B| / |A ∪ B|

(requires: NIS.ai)