Merge pull request #41 from urakubo/210906
210906
urakubo authored Sep 17, 2021
2 parents ec6fcc3 + 33ba736 commit 78ffad8
Showing 44 changed files with 850 additions and 542 deletions.
50 changes: 38 additions & 12 deletions README.ja.md
@@ -43,24 +43,22 @@
## System requirements:
OS: Microsoft Windows 10 (64 bit) or Linux (verified on Ubuntu 18.04)

Recommended: a high-performance NVIDIA graphics card (GPU with compute capability 3.5 or higher, e.g., GeForce GTX 1080 Ti).
Recommended: a high-performance NVIDIA graphics card (GPU with compute capability 3.5 or higher, e.g., GeForce GTX 1080 Ti, RTX 2080 Ti, or RTX 3090).

- https://developer.nvidia.com/cuda-gpus

Caution: the current UNI-EM does not run on the newest NVIDIA GPUs (RTX 30X0, A100, etc.). UNI-EM was developed on TensorFlow 1.X; the newest GPUs support TensorFlow 2.X but not TensorFlow 1.X. See the sites below for details.

- https://www.pugetsystems.com/labs/hpc/How-To-Install-TensorFlow-1-15-for-NVIDIA-RTX30-GPUs-without-docker-or-CUDA-install-2005/
- https://qiita.com/tanreinama/items/6fc3c71f21d64e61e006

## Installation:
We provide both a Pyinstaller version, which requires no Python installation, and the Python source code.

### Pyinstaller version (Windows 10 only):
1. We provide a GPU version and a CPU version. Download either one and unzip it.

- Version 0.90.4 (2021/05/31):
- [CPU version (Ver0.90.4; 363 MB)](https://bit.ly/3uwKHkB)
- [GPU version (Ver0.90.4; 1,068 MB)](https://bit.ly/2QWfFFb)
- Version 0.92 (2021/09/13; supports NVIDIA Ampere (RTX 30X, etc.)):
- [CPU & GPU version (Ver0.92; 2,166 MB)](https://bit.ly/2VFvaDS)

- Previous version 0.90.4 (2021/05/31; does not support NVIDIA Ampere (RTX 30X, etc.)):
- [CPU version (Ver0.90.4; 363 MB)](https://bit.ly/3uwKHkB)
- [GPU version (Ver0.90.4; 1,068 MB)](https://bit.ly/2QWfFFb)

2. Download the public sample data kasthuri15 and unzip it into a folder of your choice.
- https://www.dropbox.com/s/pxds28wdckmnpe8/ac3x75.zip?dl=0
@@ -70,15 +70,30 @@

4. From the dropdown menu at the top, select the leftmost item, Dojo → Open Dojo Folder, and in the dialog specify the mojo folder under the kasthuri15 folder. The sample data will be loaded and Dojo will launch.

* If the following error appears during TensorFlow training or inference, update your NVIDIA driver.
- tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
- https://www.nvidia.com/Download/index.aspx

* The TF1 FFN model and the TF2 FFN model are not identical. A model trained with TF1 cannot be used for further training or inference in TF2.

* Urakubo is aware that the following warning appears during TensorFlow training but has not been able to resolve it. Any help would be appreciated.
- WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.


### Python version:
1. Install Python 3.5-3.7 on Windows 10 or Linux (verified on Ubuntu 18.04).
2. If you use a GPU with TensorFlow 1.14, install CUDA 10.0 and cuDNN 7.4 **[Reference 1]**
1. Install Python 3.6 or later on Windows 10 or Linux (verified on Ubuntu 18.04).
2. UNI-EM works with TensorFlow 1.15 and TensorFlow 2.0 or later. If you use an NVIDIA GPU, install CUDA 11.0 and cuDNN 8.0.4 for TensorFlow 2.4.1, or CUDA 11.2.2 and cuDNN 8.1.1 for TensorFlow 2.5.0 **[Reference 1]**
3. Run the following command to download the necessary programs from GitHub:

- git clone https://github.com/urakubo/UNI-EM.git


4. Referring to requirements-[os]-[cpu or gpu].txt, install the required Python modules (Tensorflow-gpu 1.12, PyQt5, openCV3, pypng, tornado, pillow, libtiff, mahotas, h5py, lxml, numpy, scipy, scikit-image, pypiwin32, numpy-stl) with a command such as pip install -r requirements-[os]-[cpu or gpu].txt.
4. Referring to requirements-[os].txt, install the required Python modules with a command such as pip install -r requirements-[os].txt. On Ubuntu 18.04/20.04, install opencv and pyqt5 with the "apt" commands below:

- sudo apt install python3-dev python3-pip
- sudo apt install python3-opencv
- sudo apt install python3-pyqt5
- sudo apt install python3-pyqt5.qtwebengine

5. In a command prompt, move to the [UNI-EM] folder and run python main.py to launch the control panel.
6. Download the public sample data kasthuri15 and unzip it into a folder of your choice.
@@ -87,6 +100,19 @@

7. From the dropdown menu at the top, select the leftmost item, Dojo → Open Dojo Folder, and in the dialog specify the mojo folder under the kasthuri15 folder. The sample data will be loaded and Dojo will launch.

* If the following error appears during TensorFlow training or inference, update your NVIDIA driver.
- tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
- https://www.nvidia.com/Download/index.aspx

* The TF1 FFN model and the TF2 FFN model are not identical. A model trained with TF1 cannot be used for further training or inference in TF2.

* On Windows 10, we have confirmed that the combination of TF 1.15.4 and CUDA 11.1 below makes training roughly 1.4 times faster.
- https://github.com/fo40225/tensorflow-windows-wheel/tree/master/1.15.4+nv20.12/

* Urakubo is aware that the following warning appears during TensorFlow training but has not been able to resolve it. Any help would be appreciated.
- WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
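The TensorFlow/CUDA/cuDNN pairings in step 2 can be captured as a small lookup table. This is only a sketch based on the combinations named in this README; any version not listed here is an assumption the reader must verify against the official TensorFlow build tables.

```python
# Known-good TensorFlow / CUDA / cuDNN combinations, taken from the
# installation notes above. Versions absent from this table were not
# tested by the authors of this README.
KNOWN_GOOD = {
    "1.14":  ("CUDA 10.0", "cuDNN 7.4"),    # legacy TF1 setup
    "2.4.1": ("CUDA 11.0", "cuDNN 8.0.4"),
    "2.5.0": ("CUDA 11.2.2", "cuDNN 8.1.1"),
}

def required_toolkit(tf_version):
    """Return the (CUDA, cuDNN) pair for a TensorFlow version, or None if untested."""
    return KNOWN_GOOD.get(tf_version)

print(required_toolkit("2.4.1"))  # → ('CUDA 11.0', 'cuDNN 8.0.4')
```

For other releases, consult the compatibility tables linked under **[Reference 1]** rather than extending this table by guesswork.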


## Request:
We welcome feedback from experimental and informatics researchers in Japan (hurakubo at gmail.com; **[Reference 3]**). Because it is difficult for me to continue development alone, we are also looking for co-developers. This application should also be applicable to segmentation of natural images and similar tasks, so we welcome a wide range of comments. Development of this application has been supported by Brain/MINDS, Grants-in-Aid for Scientific Research on Innovative Areas, and a KAKENHI Grant-in-Aid for Scientific Research (C).

53 changes: 34 additions & 19 deletions README.md
@@ -42,32 +42,25 @@ Multiple users can simultaneously use it through web browsers.
## System requirements
Operating system: Microsoft Windows 10 (64 bit) or Linux (Ubuntu 18.04).

Recommendation: a high-performance NVIDIA graphics card whose GPU has compute capability 3.5 or higher.
Recommendation: a high-performance NVIDIA graphics card whose GPU has compute capability 3.5 or higher (e.g., GeForce GTX 1080 Ti, RTX 2080 Ti, and RTX 3090).

- https://developer.nvidia.com/cuda-gpus

Caution: currently, UNI-EM cannot run on the newest NVIDIA GPUs, such as the A100 and RTX 30X0 (in particular, under Microsoft Windows). This is because UNI-EM is based on TensorFlow 1.X, while the newest GPUs are compatible with TensorFlow 2.X but not TensorFlow 1.X. Please refer to the following websites.

- https://www.pugetsystems.com/labs/hpc/How-To-Install-TensorFlow-1-15-for-NVIDIA-RTX30-GPUs-without-docker-or-CUDA-install-2005/
- https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/
- https://github.com/NVIDIA/tensorflow

## Installation
We provide standalone versions (pyinstaller version) and Python source codes.

### Pyinstaller version (Microsoft Windows 10 only)
1. Download one of the following two versions, and unzip it:

- Version 0.90.4 (2021/05/31):
- [CPU version (Ver0.90.4; 363 MB)](https://bit.ly/3uwKHkB)
- [GPU version (Ver0.90.4; 1,068 MB)](https://bit.ly/2QWfFFb)
- Version 0.92 (2021/09/13):
- [CPU & GPU version (Ver0.92; 2,166 MB)](https://bit.ly/2VFvaDS)

- Release summary:
- Bug fixes.
- A bug-fix version of FFNs was used.
- Tentative fix for the “Cannot lock file” error in 2D CNN inference.
- Safe launch of Tensorboard.
- Removed the use of mcube (it caused occasional errors at launch).
- Compatibility with both TensorFlow 1.X and 2.X. It now also works on NVIDIA Ampere GPUs (RTX 30X0, etc.).
- Caution: the FFN model in TF2 is not identical to that in TF1. A model trained with TF1 cannot be used for further training or inference in TF2.
- Revision of the FFN documentation.
- Bug fixes (Tensorboard, 2D/3D watersheds, file types of 2D CNN inference, etc.).


2. Download one of sample EM/segmentation dojo folders from the following link, and unzip it:
- https://www.dropbox.com/s/pxds28wdckmnpe8/ac3x75.zip?dl=0
@@ -77,9 +70,19 @@ We provide standalone versions (pyinstaller version) and Python source codes.

4. Select Dojo → Open Dojo Folder from the dropdown menu, and specify the folder of the sample EM/segmentation dojo files. The proofreading software Dojo will be launched.

* Update your NVIDIA GPU driver if you see the following error.
- tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
- https://www.nvidia.com/Download/index.aspx

* Caution: the FFN model in TF1 is not identical to that in TF2. A model trained with TF1 cannot be used for further training or inference in TF2.

* During training, HU sees the following warning and has not found a way to suppress it. Any help would be appreciated.
- WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.


### Python version
1. Install Python 3.5-3.7 on a Microsoft Windows PC (64 bit) or Linux PC (Ubuntu 18.04 confirmed).
2. Install CUDA 10.0 and cuDNN 7.4 for Tensorflow 1.14 if your PC has an NVIDIA GPU.
1. Install Python 3.6 or later on a Microsoft Windows PC (64 bit) or Linux PC (Ubuntu 18.04 confirmed).
2. Install "CUDA 11.0 and cuDNN 8.0.4 for Tensorflow 2.4.1", or "CUDA 11.2.2 and cuDNN 8.1.1 for Tensorflow 2.5.0" if your PC has an NVIDIA GPU.

- https://www.tensorflow.org/install/source
- https://www.tensorflow.org/install/source_windows
@@ -88,9 +91,12 @@ We provide standalone versions (pyinstaller version) and Python source codes.

- git clone https://github.com/urakubo/UNI-EM.git

4. Install the following Python modules: Tensorflow-gpu, PyQt5, openCV3, pypng, tornado, pillow, libtiff, mahotas, h5py, lxml, numpy, scipy, scikit-image, pypiwin32, numpy-stl. Check "requirements-[os]-[cpu or gpu].txt". Users can install those modules using the following command.
4. Install the required Python modules listed in "requirements-[os].txt", e.g., with pip install -r requirements-[os].txt. Use the following "apt" commands to install opencv and pyqt5 if you use Ubuntu/Linux:

- pip install -r requirements-[os]-[cpu or gpu].txt
- sudo apt install python3-dev python3-pip
- sudo apt install python3-opencv
- sudo apt install python3-pyqt5
- sudo apt install python3-pyqt5.qtwebengine

5. Download one of sample EM/segmentation dojo folders from the following link, and unzip it:
- https://www.dropbox.com/s/pxds28wdckmnpe8/ac3x75.zip?dl=0
@@ -100,6 +106,15 @@ We provide standalone versions (pyinstaller version) and Python source codes.

7. Select Dojo → Open Dojo Folder from the dropdown menu, and specify the sample EM/segmentation dojo folder. The proofreading software Dojo will be launched.

* Update your NVIDIA GPU driver if you see the following error.
- tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
- https://www.nvidia.com/Download/index.aspx

* Caution: the FFN model in TF1 is not identical to that in TF2. A model trained with TF1 cannot be used for further training or inference in TF2.

* During training, HU sees the following warning and has not found a way to suppress it. Any help would be appreciated.
- WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
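Before starting a long training run, it helps to confirm that TensorFlow actually sees the GPU. This is a minimal sketch, not part of UNI-EM itself; it assumes a TF2-style install (where `tf.config.list_physical_devices` is available) and degrades gracefully when TensorFlow is absent.

```python
def visible_gpus():
    """Return TensorFlow's list of visible GPUs, or None if TensorFlow is not installed."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    # An empty list here usually points to a driver/CUDA mismatch, such as
    # the cudaGetDevice() error quoted above.
    return tf.config.list_physical_devices("GPU")

print("Visible GPUs:", visible_gpus())
```

If the list is empty on a machine with an NVIDIA card, updating the driver (see the link above) is the first thing to try.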

## Authors

* [**Hidetoshi Urakubo**](https://researchmap.jp/urakubo/?lang=english) - *Initial work* -
Binary file added data/parameters/Filters.pickle
37 changes: 17 additions & 20 deletions docs/HowToUse.md
@@ -114,39 +114,36 @@ The VAST Lite is recommended for 3D ground truth generation (https://software.rc

#### Procedure:
1. Select Segmentation → 3D FFN in the pulldown menu. A dialog with four tabs appears: Preprocessing, Training, Inference, and Postprocessing.
2. Select the preprocessing tab and specify parameters:
- Image Folder: Folder containing EM images (grayscale sequential tiff/png images).
- Ground Truth Folder: Folder containing ground truth segmentation (grayscale sequential tiff/png images).
- FFN File Folder: Folder storing generated files for training.
2. Select the preprocessing tab and specify folders:
- Training Image Folder: Folder containing EM images for training (sequentially numbered grayscale tiff/png images).
- Ground Truth Folder: Folder containing ground truth segmentation (sequentially numbered grayscale tiff/png images).
- Empty Folder for FFNs: Empty folder to store generated preprocessed files for training.
- Save Parameters
- Load Parameters

Users will see an example EM image volume and its segmentation (kasthuri15) by downloading the following example data.
- ExampleFFN.zip 522MB: https://www.dropbox.com/s/cztcf8w0ywj1pmz/ExampleFFN.zip?dl=0
Users can use an example EM image volume and its segmentation (kasthuri15) by downloading the following example data.
- ExampleFFN.zip 154MB: https://www.dropbox.com/s/06eyzakq9o87cmk/ExampleFFN.zip?dl=0

3. Execute the preprocessing. It takes 5 to 60 min depending on the target image volume and machine speed. It produces three files in the FFN file folder: af.h5, groundtruth.h5, and tf_record_file.
3. Execute the preprocessing. It takes 5 to 60 min depending on the target image volume and machine speed. It produces three files in the Empty Folder for FFNs: af.h5, groundtruth.h5, and tf_record_file.
4. Select the training tab and specify parameters:
- Max Training Steps: The number of FFN training steps, a key parameter.
- Sparse Z: Check it if the target EM-image stack is anisotropic.
- Training Image h5 File: Generated file
- Ground truth h5 File: Generated file.
- Tensorflow Record File: Generated file.
- Tensorflow Model Folder: Folder storing training results.
- FFNs Folder: Folder storing generated preprocessed files.
- Model Folder: Folder to store trained Tensorflow model.
5. Execute the training. It can require several days depending on the target image volume, machine speed, and the Max Training Steps. A few million training steps are required for minimal-quality inference. Users can run additional training by keeping the same parameter settings and increasing "Max Training Steps".
6. Select the inference tab and specify parameters:
- Target Image Folder: Folder containing EM images (sequential grayscale tiff/png images).
- Output Inference Folder: Folder that will store the inference result.
- Tensorflow Model Files: Specify the trained model files. Please remove their suffix, and just specify the prefix such as "model.ckpt-2000000."
6. Select the inference tab and specify folders:
- Target Image Folder: Folder containing EM images for inference (sequentially numbered grayscale tiff/png images).
- Model Folder: Folder storing the trained Tensorflow model, i.e., trios of "model.ckpt-XXXXX.data-00000-of-00001", "model.ckpt-XXXXX.index", and "model.ckpt-XXXXX.meta".
- FFNs Folder: FFNs Folder. Inference results will be stored in this folder.
- Sparse Z: Check it if it was checked during training.
- Checkpoint Interval: Interval at which inference checkpoints are written.
- FFN File Folder: Folder storing generated files for inference "inference_params.pbtxt."
- Save Parameters
- Load Parameters
7. Execute the inference. It requires 5-60 min depending on target image volume and machine speed. It produces inference results "0/0/seg-0_0_0.npz " and " seg-0_0_0.prob " in the Output Inference Folder. It also produces "inference_params.pbtxt" in the FFN file folder.
7. Execute the inference. It requires 5-60 min depending on the target image volume and machine speed. It produces the inference results "0/0/seg-0_0_0.npz" and "0/0/seg-0_0_0.prob" in the FFNs Folder. It also produces "inference_params.pbtxt" in the FFNs Folder.
8. Select the postprocessing tab and specify parameters:
- Target Inference File: Specify inferred segmentation file such as seg-0_0_0.npz.
- Output Inference Folder: Folder storing generated sequential image files.
- Output Filetype: Please select one of them. 16 bit images are recommended.
- FFNs Folder: FFNs Folder that stores inference results, i.e., 0/0/seg-0_0_0.npz.
- Output Segmentation Folder (Empty): Folder to store generated sequential segmentation images.
- Output Filetype: Select one of them. 8 bit color PNG is good for visual inspection; 16 bit gray-scale file types are better for further analyses.
- Save Parameters
- Load Parameters
9. Execute the postprocessing. It generally requires less than 5 min. It produces the segmentation images in the Output Segmentation Folder.
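The postprocessing step essentially unpacks the 3D label volume stored in seg-0_0_0.npz into per-slice segmentation images. The following is a hypothetical sketch of that unpacking; the function name and the toy data are illustrative and not taken from the UNI-EM code, and it assumes numpy is installed.

```python
import numpy as np

def split_into_slices(seg_volume):
    """Split a 3D (z, y, x) segmentation volume into a list of 2D label images."""
    return [seg_volume[z] for z in range(seg_volume.shape[0])]

# Toy stand-in for the real inference volume (2 slices of 3x3 labels):
toy = np.arange(2 * 3 * 3, dtype=np.uint16).reshape(2, 3, 3)
slices = split_into_slices(toy)
print(len(slices), slices[0].shape)  # → 2 (3, 3)
```

Each 2D array would then be written out as a sequentially numbered PNG or TIFF, matching the Output Filetype chosen in the dialog; 16 bit gray-scale preserves label IDs above 255.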
Binary file modified docs/Images/FFN_Prep.png
