
Japanese version here

How to use

Folder manager:

UNI-EM can open folders that contain consecutive tiff/png image files, Dojo-style files, and other types of files (Figure). Drag and drop target folders onto the window to open them (a). The opened folders appear in the file dropdown menu (b) and can be accessed by the UNI-EM programs. Users can open up to 6 folders simultaneously. Left-click an opened folder to close it (c).
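
UNI-EM expects such image folders to contain sequentially numbered tiff/png files. As a minimal sketch (not part of UNI-EM itself), the snippet below writes a NumPy volume as sequentially numbered 8-bit grayscale PNGs that the folder manager can open; the folder name and zero-padding scheme are illustrative assumptions.

```python
# Minimal sketch: write a 3D volume as sequentially numbered grayscale PNGs.
# The folder name "EMImages" and the zero-padded naming are examples only.
import os
import numpy as np
from PIL import Image

volume = np.random.randint(0, 256, size=(10, 512, 512), dtype=np.uint8)  # (z, y, x)

out_dir = "EMImages"
os.makedirs(out_dir, exist_ok=True)
for z, slice_2d in enumerate(volume):
    Image.fromarray(slice_2d).save(os.path.join(out_dir, f"z{z:04d}.png"))
```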




Dojo proofreader

Dojo is proofreading software developed as part of the Rhoana pipeline (copyright Lichtman/Pfister labs, Harvard, USA). We have extended Dojo for wider use.

- https://www.rhoana.org/dojo/
  1. Select Dojo → Open Dojo Folder from the dropdown menu, and specify the folder containing the sample EM/segmentation Dojo files. Dojo will be launched as a web application.
  2. If Dojo does not respond properly, press the "Reload" button first. Dojo can also be viewed in another web browser by copying and pasting the URL (e.g., [ http://X.X.X.X:888X/dojo/ ]). Users can also use Dojo through the web browsers of other PCs within the same LAN.
  3. The usage of Dojo is described on the original web page [ https://www.rhoana.org/dojo/ ]. For example, users can move between layers by pressing the w/s keys and change the opacity of the segmentation by pressing the c/d keys.
  4. Users can create a Dojo folder from a new pair of EM images and segmentation. Select File → Create Dojo Folder, then specify the folders containing a stack of EM images and a stack of segmentation images through the dialog (sequentially numbered, grayscale png/tiff files; see the sketch after this list).
  5. The edited segmentation can be exported as sequentially numbered, grayscale png/tiff files by selecting Dojo → Export EM Stack / Export.
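
For step 4, the EM and segmentation stacks must be sequentially numbered grayscale png/tiff files. The sketch below shows one way to write a label volume as 16-bit grayscale TIFFs using tifffile; the folder name, file naming, and label values are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch: save a segmentation volume as sequentially numbered
# 16-bit grayscale TIFFs for "Create Dojo Folder". Names are examples only.
import os
import numpy as np
import tifffile

labels = np.zeros((10, 512, 512), dtype=np.uint16)  # hypothetical label volume
labels[:, 100:200, 100:200] = 1                      # one object with ID 1

out_dir = "SegmentationImages"
os.makedirs(out_dir, exist_ok=True)
for z in range(labels.shape[0]):
    tifffile.imwrite(os.path.join(out_dir, f"z{z:04d}.tif"), labels[z])
```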

(Figure: Dojo proofreading software)


3D annotator

Select Annotator → Open from the dropdown menu. The 3D Annotator will be launched.

  1. Check the red crosses in the object table on the right side. The checked objects appear on the left side. Objects in the object table can be sorted by size, so users can make the largest objects visible by clicking their red crosses.
  2. The displayed objects can be rotated, panned, and zoomed in/out with the mouse, and their names and colors (RGB) can be changed through the object table.
  3. Users can control the background color, bounding box, and light projection through the accordion menu "Appearance".
  4. The edited contents of the object table can be saved by clicking the download button under the object table (CSV).

Turn on the toggle switch in the accordion menu "Marker label" (right side), then click on any displayed object. A red marker will appear at the clicked surface location.

  1. The placed markers are registered in the marker table. Their colors (RGB), names, and radii can be edited, and markers can be deleted, through the marker table.
  2. Users can also define the colors, names, and numbers of the next markers through the accordion menu "Marker label" (right side).
  3. The edited contents of the marker table can be saved by clicking the download button under the marker table (CSV; the sketch after this list shows how to load it).
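
The exported marker table is a plain CSV file and can be loaded for downstream analysis. The sketch below only assumes a header row; the actual column names and order depend on the export, so inspect the header of your own file (the file name used here is hypothetical).

```python
# Minimal sketch: load the marker table exported from the 3D Annotator.
# "marker_table.csv" is a hypothetical file name; column names come from the CSV header.
import csv

with open("marker_table.csv", newline="") as f:
    markers = list(csv.DictReader(f))

for m in markers:
    print(m)  # each row is a dict keyed by the header fields of the exported CSV
```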

Click the "Save image" button at the right side. A screenshot of the scene will be saved as "Screenshot.png".


(Figure: 3D Annotator)


2D CNN

We implemented 2D CNN (ResNet/U-Net/Highwaynet/DenseNet)-based segmentation programs. All of the CNNs accept single-channel (grayscale) or three-channel (RGB) images.

- https://github.com/tbullmann/imagetranslation-tensorflow

Requirements:

  1. One-page ground truth of at least 512 x 512 xy-pixels.
  2. Training period: approximately 6 min with an NVIDIA GPU card (GTX 1070).
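
Before training, it can be useful to confirm that the images meet these requirements. A minimal check with Pillow, assuming hypothetical file paths, might look like this:

```python
# Minimal sketch: confirm the ground truth covers at least 512 x 512 pixels and the
# EM image is single-channel ("L") or RGB. File paths are hypothetical examples.
from PIL import Image

gt = Image.open("GroundTruth/0000.png")
assert gt.width >= 512 and gt.height >= 512, "ground truth must be >= 512 x 512 px"

em = Image.open("TrainingImages/0000.png")
assert em.mode in ("L", "RGB"), "EM images should be grayscale or RGB"
print("ground truth size:", gt.size, "| EM image mode:", em.mode)
```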

Procedure:

  1. Select Segmentation → 2D DNN from the pulldown menu. A dialog with two tabs appears: Training and Inference.
  2. Select the Training tab and specify the parameters:
    • Image Folder: Folder containing the EM images (tiff/png).
    • Segmentation Folder: Folder containing the ground truth segmentation (tiff/png).
    • Model Folder: Folder where the trained Tensorflow CNN model will be stored.
    • Generator: "unet", "resnet", "highwaynet", or "densenet"
    • Loss function: "hinge", "square", "softmax", "approx", "dice", or "logistic"
    • Augmentation: {fliplr, flipud, transpose}
    • Maximal epochs
    • Display Frequency
    • Save Parameters
    • Load Parameters
  3. Execute training. For the sample data, specify the folder containing the sample EM images, "[ExampleCNN]/TrainingImages", and the folder containing the ground truth segmentation, "[ExampleCNN]/GroundTruth".
  4. Select Segmentation → Tensorboard to inspect the progress of training. Training on the sample data took 5 min with an NVIDIA GeForce GTX 1070.
  5. The console window indicates the end of training with the message "saving model".
  6. Check that the file "model-XXXXX.data-XXXXX-of-XXXXX" (approximately 800 MB) has been generated in the Model Folder.
  7. Select Segmentation → 2D DNN again, and set the parameters on the Inference tab:
    • Image Folder: Folder containing the EM images (tiff/png).
    • Output Segmentation Folder
    • Model Folder
  8. Execute inference.
  9. Check that the inference results are stored in the Output Segmentation Folder (a quick sanity check is sketched below).
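
As an optional sanity check of step 9, an inference result can be compared with its ground truth, e.g., by a pixel-wise intersection-over-union. The sketch below uses NumPy and Pillow; the file paths, the binarization threshold, and the assumption of binary (foreground/background) masks are illustrative only.

```python
# Minimal sketch: pixel-wise IoU between one inference slice and its ground truth.
# Paths and the threshold of 127 are illustrative assumptions.
import numpy as np
from PIL import Image

pred = np.array(Image.open("OutputSegmentation/0000.png").convert("L")) > 127
gt = np.array(Image.open("GroundTruth/0000.png").convert("L")) > 127

intersection = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
print("IoU:", intersection / union if union else float("nan"))
```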

(Figure: 2D DNN)


3D FFN

Here we wrapped an excellent segmentation program developed by Dr. Michał Januszewski et al.: flood-filling networks (FFN; Nature Methods, vol. 15 (2018), pp. 605-610; https://github.com/google/ffn ). The FFN, a recurrent 3D convolutional network, directly produces 3D volume segmentation with high precision.

Requirements:

  1. 3D ground truth of at least 512 x 512 xy-pixels and 50 Z-slices.
  2. A long training period (roughly one week) using a high-performance NVIDIA GPU card (GTX 1080 Ti or higher).

VAST Lite is recommended for 3D ground truth generation (https://software.rc.fas.harvard.edu/lichtman/vast/ ).
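
A ground-truth stack can be checked against these requirements before preprocessing. The sketch below counts the sequentially numbered slices and checks the xy size of the first one; the folder name and file extension are hypothetical.

```python
# Minimal sketch: verify a ground-truth stack has >= 50 slices of >= 512 x 512 pixels.
# The folder "GroundTruth3D" and the ".png" extension are examples only.
import glob
from PIL import Image

files = sorted(glob.glob("GroundTruth3D/*.png"))
assert len(files) >= 50, f"need at least 50 Z-slices, found {len(files)}"

width, height = Image.open(files[0]).size
assert width >= 512 and height >= 512, "slices must cover at least 512 x 512 xy-pixels"
print(len(files), "slices of size", (width, height))
```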

Procedure:

  1. Select Segmentation → 3D FFN from the pulldown menu. A dialog with four tabs appears: Preprocessing, Training, Inference, and Postprocessing.

  2. Select the Preprocessing tab and specify the folders:

    • Training Image Folder: Folder containing EM images for training (sequentially numbered grayscale tiff/png images).
    • Ground Truth Folder: Folder containing ground truth segmentation (sequentially numbered grayscale tiff/png images).
    • Empty Folder for FFNs: Empty folder to store generated preprocessed files for training.
    • Save Parameters
    • Load Parameters

    Users can use an example EM image volume and its segmentation (kasthuri15) by downloading the example data.

  3. Execute the preprocessing. It takes 5 to 60 min depending on the target image volume and machine speed, and it produces three files in the Empty Folder for FFNs: af.h5, groundtruth.h5, and tf_record_file.

  4. Select the Training tab and specify the parameters:

    • Max Training Steps: The number of FFN training steps; a key parameter.
    • Sparse Z: Check this if the target EM image stack is anisotropic.
    • FFNs Folder: Folder storing the generated preprocessed files.
    • Model Folder: Folder to store the trained Tensorflow model.
  5. Execute the training. It requires at least a few days, depending on the target image volume, machine speed, and Max Training Steps. A few million training steps are required for minimal-quality inference. Users can continue (additive) training by keeping the same parameter settings and increasing "Max Training Steps".

  6. Select the Inference tab and specify the folders:

    • Target Image Folder: Folder containing EM images for inference (sequentially numbered grayscale tiff/png images).
    • Model Folder: Folder storing the trained Tensorflow model, i.e., trios of "model.ckpt-XXXXX.data-00000-of-00001", "model.ckpt-XXXXX.index", and "model.ckpt-XXXXX.meta".
    • FFNs Folder: FFNs Folder. The inference results will be stored in this folder.
    • Sparse Z: Check this if it was checked during training.
    • Checkpoint interval: Interval at which inference checkpoints are saved.
    • Save Parameters
    • Load Parameters
  7. Execute the inference. It requires 5 to 60 min depending on the target image volume and machine speed. It produces the inference results "0/0/seg-0_0_0.npz" and "0/0/seg-0_0_0.prob" in the FFNs Folder, together with "inference_params.pbtxt" (the npz file can also be inspected directly; see the sketch after this procedure).

  8. Select the Postprocessing tab and specify the parameters:

    • FFNs Folder: FFNs Folder that stores inference results, i.e., 0/0/seg-0_0_0.npz.
    • Output Segmentation Folder (Empty): Folder to store generated sequential segmentation images.
    • Output Filetype: Select one of the filetypes. 8-bit color PNG is convenient for visual inspection; 16-bit grayscale filetypes are suited to further analyses.
    • Save Parameters
    • Load Parameters
  9. Execute the postprocessing. It generally requires less than 5 min and produces the segmentation images in the Output Segmentation Folder.

  10. Check the quality of the segmentation (inference) by using the colored images or the Dojo proofreader.
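
For reference, the raw inference output seg-0_0_0.npz (step 7) is a standard NumPy archive and can be inspected directly before postprocessing. The sketch below does not assume a particular array key; it lists the archive contents and reports the shape and label count of the first array (the path is illustrative).

```python
# Minimal sketch: inspect the raw FFN inference output (a NumPy .npz archive).
# The path below is an example; the array key is read from the archive, not assumed.
import numpy as np

with np.load("FFNs/0/0/seg-0_0_0.npz") as npz:
    print("arrays in archive:", npz.files)
    seg = npz[npz.files[0]]

print("shape:", seg.shape, "dtype:", seg.dtype)
print("number of labels (including background):", np.unique(seg).size)
```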

2D/3D Filters

UNI-EM provides a variety of 2D and 3D image filters. Select Plugins → 2D/3D Filters from the dropdown menu of UNI-EM. The 2D/3D Filters dialog will appear.

  1. Specify "Target Folder" and "Output Folder" at the bottom of the Dialog. The target folder contains target images applied to filters, and the filtered images are stored in the output folder. Confirm that the target images appear in the area "Target Image."
  2. Drug a filter from "2D Filter" or "3D Filter" and drop it in "Filter Application". If users apply multiple filters, drug and drop them to "Filter Application".
  3. Specify filter parameters. Parameter setting widget appears if users click the parameter in "Filter Application".
  4. Click "Obtain sample output" to obtain a sample output image after the filter application. Repeat trial-and-error until obtaining a required image.
  5. Click "Execute". The filter(s) are applied to the images in "Target Folder", and the outputs are stored in "Output Folder."
  6. Click "Save Parameters" and "Load Paraemters" to save and load the parameter setting.
  7. Check "Normalized" then image intensity is normalized for visibility.

(Figure: 2D/3D Filters)