The Helper library

The library bundled with the plugins contains a few helper classes under the org.incenp.imagej package. Those classes may be used in other plugins or scripts.

Refer to the API documentation for the complete list of available classes. This page describes the classes expected to be the most useful.

The BatchReader class

This class is intended to facilitate the batch processing of images listed in a CSV input file.

The input file is expected to contain image filenames in the first column; if those filenames are not absolute, they will be considered relative to the directory containing the CSV file itself.
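The path resolution rule can be sketched in plain Python (resolve_image_path is a hypothetical helper used only for illustration, not part of the library):

```python
import os.path

def resolve_image_path(csv_path, filename):
    # Absolute filenames are used as-is; relative filenames are
    # interpreted relative to the directory containing the CSV file.
    if os.path.isabs(filename):
        return filename
    return os.path.join(os.path.dirname(csv_path), filename)
```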

Once a BatchReader instance has been created, use the next() method to iterate through the images listed in the file, and the getImage() method to access the current image, as in the following example:

BatchReader reader = new BatchReader("input.csv");
while ( reader.next() ) {
    ImagePlus image = reader.getImage();
    /* Process the image as required. */
}

The getImage() method will always return a single image, even if the underlying file contains several images: in that case, calling the next() method will move the reader to the next image within that file instead of moving to the next row in the CSV input file, until all the images have been read.

(This is probably the main benefit of this class: it relieves the client code from having to handle a mix of files containing only one image and files containing several images.)

If the CSV file contains more than one column, the contents of the columns for any given row can be accessed using either the getRow() method, which will return an array of strings (the first string being the title of the current image), or the getCell(i) method, which returns a string with the contents of the ith column (0-based). This is useful if your image processing requires any kind of supplementary information not found in the image files themselves.

The first line of the CSV file is expected to be a header line, unless the BatchReader constructor is called with its second, boolean argument set to true. If a header line is present, you can also access the contents of an arbitrary cell within the current row with the getCell(s) method, where s is a string matching the header of the column you want.
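The header-based lookup can be modeled in plain Python (read_rows and get_cell are hypothetical helpers for illustration, not part of the library's API):

```python
import csv
import io

def read_rows(text):
    # Parse a CSV document whose first line is a header line,
    # and return the header plus the remaining data rows.
    rows = list(csv.reader(io.StringIO(text)))
    return rows[0], rows[1:]

def get_cell(header, row, name):
    # Look up a cell by column name, as getCell(s) does
    # when a header line is present.
    return row[header.index(name)]
```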

As a convenience, the fillResultsTable(rt) method will add to the specified ResultsTable a new line containing the title of the current image and the contents of any additional columns in the CSV file for the current row.

Here's a full Jython example:

#@ File (label='Choose a CSV file', style='file') input

from ij.measure import ResultsTable
from org.incenp.imagej import BatchReader

reader = BatchReader(input)
results = ResultsTable()

while reader.next():
    # Get the current image
    image = reader.getImage()
    # Pre-fill the results table with the image title
    # and whatever data are contained in the CSV file
    reader.fillResultsTable(results)
    # Do some meaningful analysis of the image, using
    # the contents of the "Extra1" column in the CSV file
    value = foo_analysis(image, reader.getCell("Extra1"))
    # Add the value to the current row in the table
    results.addValue("Foo", value)

results.show("Foo Results")

The ChannelMasker class

The ChannelMasker class is intended to facilitate creating binary masks and applying them to images.

A ChannelMasker object is created by calling the createMasker static method with a string describing the operations to perform on the images the masker will be applied to. The string is a comma-separated list of operations, each operation being of the form X:OPERATION(ARGUMENTS), where X is the 1-based index of the source channel of the image, or a one-letter code serving as an indirect reference to the channel (more on that below).

If the image the mask is applied to contains several t-frames and/or z-slices, the operation will be applied to all the frames and/or all the slices. That is, the resulting hyperstack will always have the same dimensions as the original image.
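The command-string format described above can be modeled with a short Python sketch (parse_masker_command is a hypothetical helper, not part of the library). Note that commas inside parentheses separate arguments, not operations, so the sketch matches each operation with a regular expression instead of splitting blindly on commas:

```python
import re

def parse_masker_command(command):
    # Split a command string into (channel, operation, arguments)
    # tuples, one per X:OPERATION(ARGUMENTS) item.
    ops = []
    for m in re.finditer(r'([0-9A-Za-z]+):([A-Z]+)\(([^)]*)\)', command):
        channel, op, args = m.groups()
        arglist = [a.strip() for a in args.split(',')] if args else []
        ops.append((channel, op, arglist))
    return ops
```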

Available operations

The MASK operation

The MASK operation creates a binary mask from the source channel by applying a threshold to it; all pixels above the threshold in the source channel will be white in the generated mask, and all pixels below the threshold will be black.

The MASK operation expects at least one argument, which is either the value of the threshold to apply directly, or a string corresponding to a value in the org.incenp.imagej.ThresholdingMethod enumeration representing an automatic thresholding algorithm (method names are matched in a case-insensitive way). If the chosen algorithm is a local thresholding algorithm, an optional second argument may be specified, indicating the radius the algorithm will consider; if omitted, the default radius is 15.

Here are some examples of masking operations:

1:MASK(127)
Creates a mask from the first channel by applying a user-defined threshold of 127.
1:MASK(fixed, 127)
This is the same as the previous example, with the FIXED thresholding method explicitly specified.
2:MASK(huang)
Creates a mask from the second channel by applying the HUANG automatic thresholding algorithm.
2:MASK(otsu_local)
Creates a mask from the second channel by applying the local variant of the Otsu thresholding algorithm, with the default radius of 15 pixels.
2:MASK(otsu_local, 20)
Same as above, but with a radius of 20 pixels.
The APPLY operation

The APPLY operation applies one or several binary masks found in channels of an image to another channel of the same image.

The APPLY operation expects at least one argument, which is the 1-based index of the channel containing a mask to apply. Other channels may be specified after the first one; the masks they contain will be applied successively. The last argument indicates which logical operation to perform when applying the mask(s); it should be one of AND, NAND, OR, NOR, XOR, or XNOR. That argument may be omitted if there is only one mask to apply, in which case it defaults to AND.

Here are some examples:

1:APPLY(2)
Applies to the first channel of the image the binary mask found in the second channel, with the default AND logical operator.
1:APPLY(3, or)
Applies to the first channel the binary mask found in the third channel, with the OR logical operator.
1:APPLY(2,3, and)
Applies to the first channel the binary masks found in the second and third channels, with the AND logical operator.
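The combination logic used by APPLY can be modeled in plain Python (apply_masks is a hypothetical helper covering only three of the six operators, to keep the sketch short; pixels are modeled as flat lists of 0/255 values):

```python
def apply_masks(channel, masks, operator="AND"):
    # Combine one or more binary masks with the given logical
    # operator, then keep only the channel pixels for which the
    # combined mask is white (255).
    ops = {
        "AND": lambda a, b: a & b,
        "OR": lambda a, b: a | b,
        "XOR": lambda a, b: a ^ b,
    }
    combined = masks[0]
    for mask in masks[1:]:
        combined = [ops[operator](a, b) for a, b in zip(combined, mask)]
    return [p if m == 255 else 0 for p, m in zip(channel, combined)]
```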
The COPY and INVERT operations

The COPY operation, as its name suggests, simply copies the source channel verbatim to the destination image. It does not need any argument.

The INVERT operation copies the source channel to the destination, but inverts it first. It also does not take any argument.

Specifying a channel order

When writing a ChannelMasker command, it is possible to refer to a channel by a one-letter code rather than by the channel index directly. A channel order specification is a string providing the necessary information to match the one-letter codes with the actual indexes in the image the ChannelMasker is applied to.

For example, let's consider the following ChannelMasker command:

1:MASK(huang),2:MASK(moments),3:COPY()

This command will always apply the Huang algorithm to the first channel, apply the Moments algorithm to the second channel, and copy the third channel verbatim, whatever image the masker is used on.

The command may however be re-written like this:

A:MASK(huang),B:MASK(moments),C:COPY()
With a channel order specification of ABC, that command will be equivalent to the one above. But if the channel order is set to, say, BAC, then the Huang algorithm will be applied to the second channel and the Moments algorithm will be applied to the first channel (the copy operation will still involve the third channel).

In short, a channel order specification makes it possible to write a ChannelMasker command without knowing, at the time the ChannelMasker object is created, the precise order of the channels. That order can be provided later, through the channel order specification string, at the time the ChannelMasker object is actually used on an image.
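The letter-code resolution can be sketched as follows (resolve_channel is a hypothetical helper; it assumes numeric indexes are used as-is, while letters are looked up in the channel order string):

```python
def resolve_channel(code, order):
    # Map a channel reference to a 1-based channel index.
    if code.isdigit():
        return int(code)          # numeric indexes are used directly
    return order.index(code) + 1  # letters are looked up in the order string
```

With an order of "BAC", the code A resolves to channel 2 and B to channel 1, matching the example above.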

Chaining masker objects

The chain() method makes it possible to chain ChannelMasker objects. When two or more ChannelMasker objects are chained, applying the first masker of the chain to an image will automatically apply the next masker of the chain to the output of the previous masker.

The method returns the object it is called on, allowing a chain to be created as follows (assuming CM1, CM2, and CM3 are all previously created ChannelMasker objects):

CM1.chain(CM2).chain(CM3)
Applying the CM1 masker to an image will return the output of CM3 applied to the output of CM2 applied to the output of CM1.
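The chaining behaviour can be modeled with a minimal Python sketch (this Masker class is purely illustrative, not the library's Java implementation; each masker here just wraps a function):

```python
class Masker:
    def __init__(self, func):
        self.func = func
        self.next_masker = None

    def chain(self, other):
        # Append the other masker at the end of the chain, then
        # return self, so that calls can be written
        # CM1.chain(CM2).chain(CM3).
        masker = self
        while masker.next_masker is not None:
            masker = masker.next_masker
        masker.next_masker = other
        return self

    def apply(self, image):
        # Forward the output of this masker to the next one in
        # the chain, if any.
        result = self.func(image)
        if self.next_masker is not None:
            result = self.next_masker.apply(result)
        return result
```

Applying the first masker of such a chain runs all three functions in order, mirroring the behaviour described above.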

A complete example

This is a complete Jython example, based on a real use case, of using both the BatchReader class and the ChannelMasker class. This code will batch process images that are expected to contain three channels: one channel contains the signal of interest (S), while the other two contain the signal from fluorophores used to mark regions in the observed field (D and M). The aim of this code is to quantify the signal from the S channel in the regions marked by the fluorophores from the D and M channels.

#@ File (label='Choose a CSV file', style='file') input

from ij.measure import ResultsTable
from ij.process import ImageProcessor, ImageStatistics

from org.incenp.imagej.ChannelMasker import createMasker
from org.incenp.imagej import BatchReader

# Create the masker chain. There are 2 steps:
# 1. Threshold the 'D' channel with the "Minimum" algorithm,
#    and the 'M' channel with the "Huang" algorithm;
#    carry over the 'S' channel.
# 2. Apply both masks created in the previous step to the 'S' channel;
#    note that we no longer need to use letter codes here, since after
#    the first masker is applied the order of the channels is always
#    known.
masker = createMasker('D:MASK(Minimum),M:MASK(Huang),S:COPY()').chain(
    createMasker('3:APPLY(1),3:APPLY(2)'))
batch = BatchReader(input)
results = ResultsTable()

while batch.next():
    image = batch.getImage()
    # Get the channel order from the column "Channel order"
    # in the input CSV file
    order = batch.getCell("Channel order")
    # Perform the masking operations
    masked = masker.apply(image, image.getTitle(), order)
    # Quantify the signal on the masked image
    for i, label in enumerate(["Control", "Marked"]):
        # Select the channel
        masked.setC(i + 1)
        # Exclude black pixels (resulting from applying the masks)
        masked.getProcessor().setThreshold(1, 255, ImageProcessor.NO_LUT_UPDATE)
        # Extract mean intensity and area from the thresholded region
        stats = ImageStatistics.getStatistics(masked.getProcessor(),
            ImageStatistics.AREA | ImageStatistics.MEAN | ImageStatistics.LIMIT,
            masked.getCalibration())
        # Add the results to the table
        results.incrementCounter()
        results.addValue("Image", image.getTitle())
        results.addValue("Region", label)
        results.addValue("Area", stats.area)
        results.addValue("Mean", stats.mean)

results.show("Results")