diff --git a/README.ja.md b/README.ja.md
index 80ed7de6..790025ee 100644
--- a/README.ja.md
+++ b/README.ja.md
@@ -43,24 +43,22 @@
## System requirements:
OS: Microsoft Windows 10 (64 bit) or Linux (verified on Ubuntu 18.04)
-Recommended: a graphics card with a high-performance NVIDIA GPU (compute capability 3.5 or higher, e.g., GeForce GTX 1080 Ti).
+Recommended: a graphics card with a high-performance NVIDIA GPU (compute capability 3.5 or higher, e.g., GeForce GTX 1080 Ti, RTX 2080 Ti, or RTX 3090).
- https://developer.nvidia.com/cuda-gpus
-Caution: the current UNI-EM does not run on the newest NVIDIA GPUs (RTX 30X0, A100, etc.). UNI-EM is built on Tensorflow 1.X, and the newest GPUs support Tensorflow 2.X but not 1.X. See the sites below for details.
-
-- https://www.pugetsystems.com/labs/hpc/How-To-Install-TensorFlow-1-15-for-NVIDIA-RTX30-GPUs-without-docker-or-CUDA-install-2005/
-- https://qiita.com/tanreinama/items/6fc3c71f21d64e61e006
-
## Installation:
We provide both a Pyinstaller version, which requires no Python installation, and the Python source code.
### Pyinstaller version (Windows 10 only):
1. We provide GPU and CPU versions. Download one of them and unzip it:
-- Version 0.90.4 (2021/05/31):
- - [CPU version (Ver0.90.4; 363MB)](https://bit.ly/3uwKHkB)
- - [GPU version (Ver0.90.4: 1,068 MB)](https://bit.ly/2QWfFFb)
+ - Version 0.92 (2021/09/13; supports NVIDIA Ampere (RTX30X0, etc.)):
+ - [CPU & GPU version (Ver0.92; 2,166 MB)](https://bit.ly/2VFvaDS)
+
+ - Previous version 0.90.4 (2021/05/31; does not support NVIDIA Ampere (RTX30X0, etc.)):
+ - [CPU version (Ver0.90.4; 363 MB)](https://bit.ly/3uwKHkB)
+ - [GPU version (Ver0.90.4: 1,068 MB)](https://bit.ly/2QWfFFb)
2. Download the public sample data kasthuri15 and unzip it in a folder of your choice:
- https://www.dropbox.com/s/pxds28wdckmnpe8/ac3x75.zip?dl=0
@@ -70,15 +68,30 @@ We provide both a Pyinstaller version, which requires no Python installation, an
4. Select Dojo → Open Dojo Folder from the leftmost dropdown menu at the top, and specify the mojo folder under the kasthuri15 folder in the dialog. The sample data is loaded and Dojo launches.
+* If the following error appears during Tensorflow training or inference, update your NVIDIA driver:
+ - tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
+ - https://www.nvidia.com/Download/index.aspx
+
+* The TF1 and TF2 FFN models are not identical. A model trained with FFN on TF1 cannot be used for further training or for inference on TF2.
+
+* Urakubo is aware that the following warning appears during Tensorflow training but has not been able to resolve it. Any help would be appreciated:
+ - WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
+
+
### Python version:
-1. Install Python 3.5-3.7 on Windows 10 or Linux (verified on Ubuntu 18.04).
-2. If you use a GPU for Tensorflow 1.14, install CUDA 10.0 and cuDNN 7.4 **[Ref. 1]**.
+1. Install Python 3.6 or later on Windows 10 or Linux (verified on Ubuntu 18.04).
+2. UNI-EM runs on Tensorflow 1.15 and Tensorflow 2.0 or later. If you use an NVIDIA GPU, install "CUDA 11.0, cuDNN 8.0.4" for Tensorflow 2.4.1, or "CUDA 11.2.2, cuDNN 8.1.1" for Tensorflow 2.5.0 **[Ref. 1]**.
3. Run the following command to download the required programs from Github:
- git clone https://github.com/urakubo/UNI-EM.git
-4. Referring to requirements-[os]-[cpu or gpu].txt, install the required Python modules (Tensorflow-gpu 1.12, PyQt5, openCV3, pypng, tornado, pillow, libtiff, mahotas, h5py, lxml, numpy, scipy, scikit-image, pypiwin32, numpy-stl) with a command such as pip install -r requirements-[os]-[cpu or gpu].txt.
+4. Referring to requirements-[os].txt, install the required Python modules with a command such as pip install -r requirements-[os].txt. On Ubuntu 18.04/20.04, install opencv and pyqt5 with the "apt" commands below:
+
+ - sudo apt install python3-dev python3-pip
+ - sudo apt install python3-opencv
+ - sudo apt install python3-pyqt5
+ - sudo apt install python3-pyqt5.qtwebengine
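+
+ For example, the whole sequence on Ubuntu might look like this (a sketch; the virtual-environment name "uniem-env" is illustrative, and --system-site-packages keeps the apt-installed opencv/pyqt5 visible inside the environment):
+ - python3 -m venv --system-site-packages uniem-env
+ - source uniem-env/bin/activate
+ - pip install -r requirements-linux.txt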
5. At the command prompt, move to the [UNI-EM] folder and run python main.py to launch the control panel.
6. Download the public sample data kasthuri15 and unzip it in a folder of your choice:
@@ -87,6 +100,19 @@ We provide both a Pyinstaller version, which requires no Python installation, an
7. Select Dojo → Open Dojo Folder from the leftmost dropdown menu at the top, and specify the mojo folder under the kasthuri15 folder in the dialog. The sample data is loaded and Dojo launches.
+* If the following error appears during Tensorflow training or inference, update your NVIDIA driver:
+ - tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
+ - https://www.nvidia.com/Download/index.aspx
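+
+* To quickly check whether Tensorflow recognizes the GPU (a minimal check; assumes Tensorflow is already installed):
+ - python -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"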
+
+* The TF1 and TF2 FFN models are not identical. A model trained with FFN on TF1 cannot be used for further training or for inference on TF2.
+
+* On Windows 10, we have confirmed that the combination of TF 1.15.4 and CUDA 11.1 below makes training roughly 1.4 times faster:
+ - https://github.com/fo40225/tensorflow-windows-wheel/tree/master/1.15.4+nv20.12/
+
+* Urakubo is aware that the following warning appears during Tensorflow training but has not been able to resolve it. Any help would be appreciated:
+ - WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
+
+
## Request:
We welcome feedback from experimental and informatics researchers in Japan (hurakubo at gmail.com; **[Ref. 3]**). It is difficult to continue development alone, so we are also looking for co-developers. The application should also be applicable to segmentation of natural images, so we look forward to comments of all kinds. Development of this application is supported by Brain/MINDS, a Grant-in-Aid for Scientific Research on Innovative Areas, and a Grant-in-Aid for Scientific Research (C).
diff --git a/README.md b/README.md
index adc9c72e..df5240a0 100644
--- a/README.md
+++ b/README.md
@@ -42,32 +42,25 @@ Multiple users can simultaneously use it through web browsers. The goal is to de
## System requirements
Operating system: Microsoft Windows 10 (64 bit) or Linux (Ubuntu 18.04).
-Recommendation: High-performance NVIDIA graphics card whose GPU has over 3.5 compute capability.
+Recommendation: High-performance NVIDIA graphics card whose GPU has compute capability 3.5 or higher (e.g., GeForce GTX 1080 Ti, RTX 2080 Ti, and RTX 3090).
- https://developer.nvidia.com/cuda-gpus
-Caution: currently, UNI-EM cannot run on the newest NVIDIA GPUs, such as A100 and RTX30X0 (in particular, under microsoft windows). This is because UNI-EM is based on tensorflow1.X, while the newest GPUs are compatible with tensorflow2.X, but not tensorflow1.X. Please refer to the following website.
-
-- https://www.pugetsystems.com/labs/hpc/How-To-Install-TensorFlow-1-15-for-NVIDIA-RTX30-GPUs-without-docker-or-CUDA-install-2005/
-- https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/
-- https://github.com/NVIDIA/tensorflow
-
## Installation
We provide standalone versions (pyinstaller version) and Python source codes.
### Pyinstaller version (Microsoft Windows 10 only)
1. Download one of the following two versions, and unzip it:
-- Version 0.90.4 (2021/05/31):
- - [CPU version (Ver0.90.4; 363MB)](https://bit.ly/3uwKHkB)
- - [GPU version (Ver0.90.4: 1,068 MB)](https://bit.ly/2QWfFFb)
+- Version 0.92 (2021/09/13):
+ - [CPU & GPU version (Ver0.92; 2,166 MB)](https://bit.ly/2VFvaDS)
- Release summary:
- - Bug fix.
- - Bug fix version of FFNs was used.
- - Tentative solution in “Cannot lock file” error in the inference of 2D CNN.
- - Safe launch of Tensorboard.
- - Abolish the use of mcube (caused an occasional error in launching).
+ - Compatibility with both Tensorflow 1.X and 2.X. It now also works on NVIDIA Ampere GPUs (RTX30X0, etc.).
+ - Caution: the FFN model in TF2 is not identical to that in TF1. A model trained with TF1 cannot be used for further training or inference in TF2.
+ - Revision of documents for FFNs.
+ - Bug fixes (Tensorboard, 2D/3D watersheds, filetypes of 2D CNN inference, etc.).
+
2. Download one of sample EM/segmentation dojo folders from the following link, and unzip it:
- https://www.dropbox.com/s/pxds28wdckmnpe8/ac3x75.zip?dl=0
@@ -77,9 +70,19 @@ We provide standalone versions (pyinstaller version) and Python source codes.
4. Select Dojo → Open Dojo Folder from the dropdown menu, and specify the folder of the sample EM/segmentation dojo files. The proofreading software Dojo will be launched.
+* Update the NVIDIA GPU driver if you see the following error:
+ - tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
+ - https://www.nvidia.com/Download/index.aspx
+
+* Caution: the FFN model in TF1 is not identical to that in TF2. A model trained with TF1 cannot be used for further training or inference in TF2.
+
+* During training, HU sees the following warning and has not found out how to suppress it. Any help would be appreciated:
+ - WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
+
+
### Python version
-1. Install Python 3.5-3.7 in a Microsoft Windows PC (64 bit) or Linux PC (Ubuntu 18.04 confirmed).
-2. Install cuda 10.0 and cuDNN 7.4 for Tensorflow 1.14 if your PC has a NVIDIA-GPU.
+1. Install Python 3.6 or later on a Microsoft Windows PC (64 bit) or Linux PC (verified on Ubuntu 18.04).
+2. Install "CUDA 11.0 and cuDNN 8.0.4" for Tensorflow 2.4.1, or "CUDA 11.2.2 and cuDNN 8.1.1" for Tensorflow 2.5.0, if your PC has an NVIDIA GPU.
- https://www.tensorflow.org/install/source
- https://www.tensorflow.org/install/source_windows
@@ -88,9 +91,12 @@ We provide standalone versions (pyinstaller version) and Python source codes.
- git clone https://github.com/urakubo/UNI-EM.git
-4. Install the following modules of Python: Tensorflow-gpu, PyQt5, openCV3, pypng, tornado, pillow, libtiff, mahotas, h5py, lxml, numpy, scipy, scikit-image, pypiwin32, numpy-stl. Check "requirements-[os]-[cpu or gpu].txt". Users can install those module using the following command.
+4. Install the required Python modules (Tensorflow, PyQt5, openCV, pypng, tornado, pillow, libtiff, mahotas, h5py, lxml, numpy, scipy, scikit-image, pypiwin32, numpy-stl) with a command such as "pip install -r requirements-[os].txt"; see "requirements-[os].txt" for the exact versions. Use the following "apt" commands to install opencv and pyqt5 if you use Ubuntu/Linux:
- - pip install -r requirements-[os]-[cpu or gpu].txt
+ - sudo apt install python3-dev python3-pip
+ - sudo apt install python3-opencv
+ - sudo apt install python3-pyqt5
+ - sudo apt install python3-pyqt5.qtwebengine
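+
+ For example, the whole sequence on Ubuntu might look like this (a sketch; the virtual-environment name "uniem-env" is illustrative, and --system-site-packages keeps the apt-installed opencv/pyqt5 visible inside the environment):
+ - python3 -m venv --system-site-packages uniem-env
+ - source uniem-env/bin/activate
+ - pip install -r requirements-linux.txt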
5. Download one of sample EM/segmentation dojo folders from the following link, and unzip it:
- https://www.dropbox.com/s/pxds28wdckmnpe8/ac3x75.zip?dl=0
@@ -100,6 +106,15 @@ We provide standalone versions (pyinstaller version) and Python source codes.
7. Select Dojo → Open Dojo Folder from the dropdown menu, and specify the sample EM/segmentation dojo folder. The proofreading software Dojo will be launched.
+* Update the NVIDIA GPU driver if you see the following error:
+ - tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
+ - https://www.nvidia.com/Download/index.aspx
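+
+* To quickly check whether Tensorflow recognizes the GPU (a minimal check; assumes Tensorflow is already installed):
+ - python -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"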
+
+* Caution: the FFN model in TF1 is not identical to that in TF2. A model trained with TF1 cannot be used for further training or inference in TF2.
+
+* During training, HU sees the following warning and has not found out how to suppress it. Any help would be appreciated:
+ - WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
+
## Authors
* [**Hidetoshi Urakubo**](https://researchmap.jp/urakubo/?lang=english) - *Initial work* -
diff --git a/data/parameters/Filters.pickle b/data/parameters/Filters.pickle
new file mode 100644
index 00000000..3fd41310
Binary files /dev/null and b/data/parameters/Filters.pickle differ
diff --git a/docs/HowToUse.md b/docs/HowToUse.md
index 659239b1..546c71ae 100644
--- a/docs/HowToUse.md
+++ b/docs/HowToUse.md
@@ -114,39 +114,36 @@ The VAST Lite is recommended for 3D ground truth generation (https://software.rc
#### Procedure:
1. Select Segmentation → 3D FFN in the pulldown menu. A dialog that has the four tabs appears: Preprocessing, Training, Inference, and Postprocessing.
-2. Select the preprocessing tab and specify parameters:
- - Image Folder: Folder containing EM images (grayscale sequential tiff/png images).
- - Ground Truth Folder: Folder containing ground truth segmentation (grayscale sequential tiff/png images).
- - FFN File Folder: Folder storing generated files for training.
+2. Select the preprocessing tab and specify folders:
+ - Training Image Folder: Folder containing EM images for training (sequentially numbered grayscale tiff/png images).
+ - Ground Truth Folder: Folder containing ground truth segmentation (sequentially numbered grayscale tiff/png images).
+ - Empty Folder for FFNs: Empty folder to store generated preprocessed files for training.
- Save Parameters
- Load Parameters
- Users will see an example EM image volume and their segmentation (kasthuri15) by downloading the following example data.
- - ExampleFFN.zip 522MB: https://www.dropbox.com/s/cztcf8w0ywj1pmz/ExampleFFN.zip?dl=0
+ Users can use an example EM image volume and its ground truth segmentation (kasthuri15) by downloading the following example data:
+ - ExampleFFN.zip 154MB: https://www.dropbox.com/s/06eyzakq9o87cmk/ExampleFFN.zip?dl=0
-3. Execute the preprocessing. It takes 5 to 60 min depending on target image volume and machine speed. It produces three files in the FFN file folder: af.h5, groundtruth.h5, and tf_record_file .
+3. Execute the preprocessing. It takes 5 to 60 min depending on the target image volume and machine speed. It produces three files in the Empty Folder for FFNs: af.h5, groundtruth.h5, and tf_record_file.
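+ To sanity-check the generated files, the two HDF5 files can be opened directly (a sketch; assumes h5py is installed and that the image volume is stored under the dataset name "raw", which the UNI-EM training code reads):
+ - python -c "import h5py; f = h5py.File('af.h5', 'r'); print(list(f.keys()))"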
4. Select the training tab and specify parameters:
- Max Training Steps: The number of FFN training steps, a key parameter.
- Sparse Z: Check it if the target EM-image stack is anisotropic.
- - Training Image h5 File: Generated file
- - Ground truth h5 File: Generated file.
- - Tensorflow Record File: Generated file.
- - Tensorflow Model Folder: Folder storing training results.
+ - FFNs Folder: Folder storing generated preprocessed files.
+ - Model Folder: Folder to store trained Tensorflow model.
5. Execute the training. It can take from a day to over a few days depending on the target image volume, machine speed, and Max Training Steps. A few million training steps are required for minimal-quality inference. Users can continue training by executing it again with the same parameter settings and a larger "Max Training Steps".
-6. Select the inference tab and specify parameters:
- - Target Image Folder: Folder containing EM images (sequential grayscale tiff/png images).
- - Output Inference Folder: Folder that will store the inference result.
- - Tensorflow Model Files: Specify the trained model files. Please remove their suffix, and just specify the prefix such as "model.ckpt-2000000."
+6. Select the inference tab and specify folders:
+ - Target Image Folder: Folder containing EM images for inference (sequentially numbered grayscale tiff/png images).
+ - Model Folder: Folder storing the trained Tensorflow model, i.e., trios of "model.ckpt-XXXXX.data-00000-of-00001", "model.ckpt-XXXXX.index", and "model.ckpt-XXXXX.meta".
+ - FFNs Folder: Folder storing the generated preprocessed files. Inference results will be stored in this folder.
- Sparse Z: Check it if it was checked at training.
- Checkpoint interval: Interval at which inference checkpoints are saved.
- - FFN File Folder: Folder storing generated files for inference "inference_params.pbtxt."
- Save Parameters
- Load Parameters
-7. Execute the inference. It requires 5-60 min depending on target image volume and machine speed. It produces inference results "0/0/seg-0_0_0.npz " and " seg-0_0_0.prob " in the Output Inference Folder. It also produces "inference_params.pbtxt" in the FFN file folder.
+7. Execute the inference. It requires 5-60 min depending on the target image volume and machine speed. It produces the inference results "0/0/seg-0_0_0.npz" and "0/0/seg-0_0_0.prob" in the FFNs Folder. It also produces "inference_params.pbtxt" in the FFNs Folder.
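+ The npz result can also be inspected outside UNI-EM (a sketch; the array key names inside the archive follow the FFN storage format and are an assumption here):
+ - python -c "import numpy as np; d = np.load('0/0/seg-0_0_0.npz'); print(d.files)"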
8. Select the postprocessing tab and specify parameters:
- - Target Inference File: Specify inferred segmentation file such as seg-0_0_0.npz.
- - Output Inference Folder: Folder storing generated sequential image files.
- - OUtput Filetype: Please select one of them. 16 bit images are recommended.
+ - FFNs Folder: Folder that stores the inference results, i.e., 0/0/seg-0_0_0.npz.
+ - Output Segmentation Folder (Empty): Folder to store the generated sequential segmentation images.
+ - Output Filetype: Select one of the filetypes. 8-bit color PNG is good for visual inspection; 16-bit grayscale filetypes are good for further analyses.
- Save Parameters
- Load Parameters
9. Execute the postprocessing. It generally requires less than 5 min. It produces the segmentation images in the Output Segmentation Folder.
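+ The 16-bit grayscale outputs can then be loaded for analysis, each gray value being a segment ID (a sketch; the filename is illustrative):
+ - python -c "import numpy as np, PIL.Image; seg = np.array(PIL.Image.open('0000.tif')); print(seg.dtype, len(np.unique(seg)))"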
diff --git a/docs/Images/FFN_Prep.png b/docs/Images/FFN_Prep.png
index 034feac5..41ac9015 100644
Binary files a/docs/Images/FFN_Prep.png and b/docs/Images/FFN_Prep.png differ
diff --git a/docs/Workflow2.ja.md b/docs/Workflow2.ja.md
index 0becca0d..60c47018 100644
--- a/docs/Workflow2.ja.md
+++ b/docs/Workflow2.ja.md
@@ -20,10 +20,10 @@ As an example of 3D FFN segmentation by UNI-EM, ATUM/SEM
#### ● EM images and ground truth segmentation
-1. Download and unzip ExampleFFN.zip below. Replace the contents of the data folder in the UNI-EM folder ([UNI-EM]) with the contents of the unzipped data folder. "[UNI-EM]/data/DNN_training_images" contains the training images (0000.png, ..., 0099.png; 8-bit grayscale png), and "[UNI-EM]/data/DNN_ground_truth" contains the ground truth segmentation (0000.png, ..., 0099.png; 16-bit grayscale png) (**Fig. 1**). We recommend Vast lite for creating ground truth segmentation
-( https://software.rc.fas.harvard.edu/lichtman/vast/ ).
+1. Download and unzip ExampleFFN.zip below. In ExampleFFN, the folder "DNN_training_images" contains the training images (0000.png, ..., 0099.png; 8-bit grayscale), "DNN_ground_truth" contains the ground truth segmentation (0000.png, ..., 0099.png; 16-bit grayscale), and "DNN_test_images" contains the images for inference (0000.png, ..., 0099.png; 8-bit RGB, automatically converted to 8-bit grayscale at inference) (**Fig. 1**). If you create your own ground truth segmentation, we recommend Vast lite
+( https://software.rc.fas.harvard.edu/lichtman/vast/ ). "ffn", "DNN_model_tensorflow", and "DNN_segmentation" are empty folders.
-**ExampleFFN.zip** 522MB: https://www.dropbox.com/s/cztcf8w0ywj1pmz/ExampleFFN.zip?dl=0
+**ExampleFFN.zip** 154MB: https://www.dropbox.com/s/06eyzakq9o87cmk/ExampleFFN.zip?dl=0
@@ -37,11 +37,20 @@ As an example of 3D FFN segmentation by UNI-EM, ATUM/SEM
#### ● Preprocessing
-2. Launch UNI-EM.
+2. Launch UNI-EM, and drag and drop the ExampleFFN folders "DNN_training_images", "DNN_ground_truth", "DNN_test_images", "ffn", "DNN_model_tensorflow", and "DNN_segmentation" onto UNI-EM to load them.
3. Select Segmentation → 3D FFN from the dropdown menu at the top of UNI-EM to launch the 3D FFN dialog.
 - Select the Preprocessing tab (**Fig. 2a**).
- - Click Browse... and confirm that the Training Image Folder "[UNI-EM]/data/DNN_training_images" contains EM images (**Fig. 2b**), that the Ground Truth Folder "[UNI-EM]/data/DNN_ground_truth" contains ground truth segmentation images (**Fig. 2c**), and that the FFN File Folder ("[UNI-EM]/data/ffn") exists (**Fig. 2d**).
+ - From the pulldown menus to the right of "Training Image Folder", "Ground Truth Folder", and "Empty Folder for FFNs" at the bottom, select the folders "DNN_training_images" (8-bit grayscale/RGB, png/tif/jpg; **Fig. 2b**), "DNN_ground_truth" (8/16-bit grayscale, png/tif; **Fig. 2c**), and "ffn" (empty; **Fig. 2d**), respectively. If a folder name does not appear in the pulldown menu, drag and drop the folder onto UNI-EM again, or specify it via "Open..." or "Browse..." on the right.
+
+ Figure 2. FFN Preprocessing
+
diff --git a/docs/Workflow2.md b/docs/Workflow2.md
index 4a3b6b14..c1ea2611 100644
--- a/docs/Workflow2.md
+++ b/docs/Workflow2.md
@@ -21,9 +21,9 @@ Here we try automated membrane segmentation of a stack of EM images from mouse s
#### Target EM images and ground truth
-1. Download the file "ExampleFFN.zip" from the link below and unzip it on your UNI-EM installed PC. Copy and paste the unzipped contents to the "data" folder of UNI-EM ([UNI-EM]). Here the training image is stored in "[UNI-EM]/data/DNN_training_images" (0000.png, ..., 0099.png; 8bit, grayscale png), and the ground truth segmentation is stored in "[UNI-EM]/data/DNN_ground_truth" (0000.png, ..., 0099.png; 16bit, grayscale png; **Fig. 1**). The software Vast lite is recommend to make such ground truth segmentation ( https://software.rc.fas.harvard.edu/lichtman/vast/ ).
+1. Download the file "ExampleFFN.zip" from the link below and unzip it on your UNI-EM installed PC. The EM images for training are stored in the folder "DNN_training_images" (0000.png, ..., 0099.png; 8-bit grayscale), the ground truth segmentations in the folder "DNN_ground_truth" (0000.png, ..., 0099.png; 16-bit grayscale; **Fig. 1**), and the EM images for inference in "DNN_test_images" (0000.png, ..., 0099.png; 8-bit RGB, automatically converted to 8-bit grayscale at inference). The software Vast lite is recommended for making such ground truth segmentation ( https://software.rc.fas.harvard.edu/lichtman/vast/ ). The folders "ffn", "DNN_model_tensorflow", and "DNN_segmentation" are empty.
-- ExampleFFN.zip 522MB: https://www.dropbox.com/s/cztcf8w0ywj1pmz/ExampleFFN.zip?dl=0
+- ExampleFFN.zip 154MB: https://www.dropbox.com/s/06eyzakq9o87cmk/ExampleFFN.zip?dl=0
@@ -35,13 +35,22 @@ Here we try automated membrane segmentation of a stack of EM images from mouse s
#### Preprocessing
-2. Launch UNI-EM.
+2. Launch UNI-EM, and drag and drop the unzipped folders onto UNI-EM. The unzipped file should contain "DNN_training_images", "DNN_ground_truth", "DNN_test_images", "ffn", "DNN_model_tensorflow", and "DNN_segmentation".
3. Select "Segmentation → 3D FFN" from a UNI-EM dropdown menu to launch the dialogue, 3D FFN.
- Select Preprocessing tab (**Fig. 2a**).
- - Confirm that "Training Image Folder" ( [UNI-EM]/data/DNN_training_images ) contains the training EM images (**Fig. 2b**), "Ground Truth Folder" ( [UNI-EM]/data/DNN_ground_truth ) contains the ground truth images (**Fig. 2c**), and the empty "FFN File Folder" ( [UNI-EM]/data/ffn ) exists (**Fig. 2d**).
+ - Select the folder "DNN_training_images" from the pulldown menu of "Training Image Folder". It should contain the training EM images (sequentially numbered image files; 8-bit grayscale/RGB, png/tif/jpg; **Fig. 2b**). Also select the folder "DNN_ground_truth" for "Ground Truth Folder". It should contain the ground truth images (sequentially numbered image files; 8/16-bit grayscale, png/tif; **Fig. 2c**). Select the folder "ffn" for "Empty Folder for FFNs" (or any empty folder; **Fig. 2d**).
-4. Start preprocessing by clicking the "Execute" button (**Fig. 2f**). Four intermediate files are generated in the FFN File Folder. It takes 6-60 min, depending mainly on image volume. Users will see progress messages in the console window (shown below).
+
+ Figure 2. Preprocessing of FFN
+
-
- Figure 2. Preprocessing of FFN
-diff --git a/miscellaneous/DialogImageFolder.py b/miscellaneous/DialogImageFolder.py index 18bf83c9..5df1c18c 100644 --- a/miscellaneous/DialogImageFolder.py +++ b/miscellaneous/DialogImageFolder.py @@ -15,6 +15,7 @@ main_dir = path.abspath(path.dirname(sys.argv[0])) # Dir of main icon_dir = path.join(main_dir, "icons") sys.path.append(main_dir) +import miscellaneous.Miscellaneous as m class _MyListModel(QAbstractListModel): @@ -69,10 +70,9 @@ def __init__(self, parent, title, init_path): self.listview.setViewMode(QListView.IconMode) self.listview.setIconSize(QSize(192,192)) - targetfiles1 = glob.glob(os.path.join( init_path, '*.png')) - targetfiles2 = glob.glob(os.path.join( init_path, '*.tif')) - targetfiles3 = glob.glob(os.path.join( init_path, '*.tiff')) - targetfiles = targetfiles1 + targetfiles2 + targetfiles3 + ## Problem: JPEG + targetfiles = m.ObtainImageFiles(init_path) + lm = _MyListModel(targetfiles, self.parent) self.listview.setModel(lm) @@ -101,10 +101,8 @@ def current_row_changed(self): def on_clicked(self, index): path = self.dirModel.fileInfo(index).absoluteFilePath() - targetfiles1 = glob.glob(os.path.join( path, '*.png')) - targetfiles2 = glob.glob(os.path.join( path, '*.tif')) - targetfiles3 = glob.glob(os.path.join( path, '*.tiff')) - targetfiles = targetfiles1 + targetfiles2 + targetfiles3 + ## Problem: JPEG + targetfiles = m.ObtainImageFiles(path) lm = _MyListModel(targetfiles, self.parent) self.listview.setModel(lm) diff --git a/miscellaneous/Miscellaneous.py b/miscellaneous/Miscellaneous.py index 404d7456..f47a8ceb 100644 --- a/miscellaneous/Miscellaneous.py +++ b/miscellaneous/Miscellaneous.py @@ -9,6 +9,7 @@ import PIL.Image import cv2 import png +import tifffile from itertools import product import glob @@ -89,19 +90,46 @@ def CloseFolder(u_info, dir): u_info.open_files.remove(dir) - +# Due to unicode comaptibitiy # https://qiita.com/SKYS/items/cbde3775e2143cad7455 +# 16bit png seems not to be read in "np.fromfile". 
+# http://jamesgregson.ca/16-bit-image-io-with-python.html + +def imread(filename, flags=cv2.IMREAD_UNCHANGED, dtype=None): -def imread(filename, flags=cv2.IMREAD_COLOR, dtype=np.uint8): try: - n = np.fromfile(filename, dtype) - img = cv2.imdecode(n, flags) + +# n = np.fromfile(filename, dtype) +# img = cv2.imdecode(n, flags) +# root, ext = os.path.splitext(filename) +# +# if ext in ['.png','.PNG']: +# img = png.Reader(filename).read() +# elif ext in ['.TIF','.tif', '.TIFF', '.tiff','.png','.PNG','.jpg', '.jpeg','.JPG', '.JPEG']: +# img = tifffile.imread(filename) +# else: +# + + pil_img = PIL.Image.open(filename) + img = np.array(pil_img) + + if img.dtype == 'int32': + img = img.astype('uint16') + if dtype != None: + img = img.astype(dtype) + if img.ndim == 3: + img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) + if flags == cv2.IMREAD_GRAYSCALE: + img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) +# print('Image dtype: ', img.dtype, img.shape) return img except Exception as e: print(e) return None + + def imwrite(filename, img, params=None): try: ext = os.path.splitext(filename)[1] @@ -234,13 +262,28 @@ def save_npy(self, id_data, filename): np.save(filename, id_data) def save_tif16(id_data, filename): - cv2.imwrite(filename, id_data.astype('uint16')) + imwrite(filename, id_data.astype('uint16')) + +def save_tif8(id_data, filename, compression=5): + + # int(cv.IMWRITE_TIFF_COMPRESSION) == 1: No compression + # int(cv.IMWRITE_TIFF_COMPRESSION) == 5: Lempel-Ziv & Welch (LZW) compression + + if compression == 5: + imwrite(filename, id_data.astype('uint8') ) + else : + imwrite(filename, id_data.astype('uint8'), params=( int(cv2.IMWRITE_TIFF_COMPRESSION), compression) ) -def save_tif8(id_data, filename): - cv2.imwrite(filename, id_data.astype('uint8')) # pilOUT = PIL.Image.fromarray(np.uint8(tile_image)) # pilOUT.save(current_tile_image_name) + +def save_jpg16(id_data, filename): + imwrite(filename, id_data.astype('uint16')) + +def save_jpg8(id_data, filename): + imwrite(filename, id_data.astype('uint8')) + def save_png16(id_data, filename): # Use pypng to write zgray as a grayscale PNG. 
with open(filename, 'wb') as f: @@ -277,11 +320,45 @@ def save_hdf5( file, dataset_name, array ): hdf5.close() def ObtainImageFiles(input_path): - search1 = os.path.join(input_path, '*.png') - search2 = os.path.join(input_path, '*.tif') - search3 = os.path.join(input_path, '*.tiff') - filestack = sorted(glob.glob(search1)) - filestack.extend(sorted(glob.glob(search2))) - filestack.extend(sorted(glob.glob(search3))) - return filestack + Image_files = [] + files_in_folder = glob.glob(os.path.join(input_path, "*")) + for file in files_in_folder: + root, ext = os.path.splitext(file) + if ext in ['.TIF','.tif', '.TIFF', '.tiff','.png','.PNG','.jpg', '.jpeg','.JPG', '.JPEG'] : + Image_files.append(file) + + Image_files = sorted(Image_files) + return Image_files + + +def SaveImage(output_image, filename): + output_dtype = output_image.dtype + root, ext = os.path.splitext(filename) + if ext in ['.TIF','.tif', '.TIFF', '.tiff']: + if output_dtype == 'uint16': + save_tif16(output_image, filename) + elif output_dtype == 'uint8': + save_tif8(output_image, filename) + else: + print('dtype mismatch: ', ext, output_dtype) + return False + elif ext in ['.png','.PNG']: + if output_dtype == 'uint16': + save_png16(output_image, filename) + elif output_dtype == 'uint8': + save_png8(output_image, filename) + else: + print('dtype mismatch: ', ext, output_dtype) + return False + elif ext in ['.jpg', '.jpeg','.JPG', '.JPEG']: + if output_dtype == 'uint16': + save_jpg16(output_image, filename) + elif output_dtype == 'uint8': + save_jpg8(output_image, filename) + else: + print('dtype mismatch: ', ext, output_dtype) + return False + + return True + diff --git a/plugins/Filter2D3D/filters/Skimg.py b/plugins/Filter2D3D/filters/Skimg.py index 500cce2d..86c287c6 100644 --- a/plugins/Filter2D3D/filters/Skimg.py +++ b/plugins/Filter2D3D/filters/Skimg.py @@ -7,7 +7,7 @@ from scipy import ndimage as ndi -from skimage.morphology import watershed +from skimage.segmentation import watershed from skimage.feature import peak_local_max class Skimg(): @@ -16,10 +16,12 @@ def Filter(self, input_image, params): binary_image = np.logical_not(input_image > params['Binarization threshold']) distance = ndi.distance_transform_edt(binary_image) - local_maxi = peak_local_max(distance, min_distance=params['Min distance'], indices=False, footprint=np.ones((20, 20))) - markers, n_markers = ndi.label(local_maxi) + local_maxi = peak_local_max(distance, labels=binary_image, min_distance=params['Min distance']) + mask = np.zeros(distance.shape, dtype=bool) + mask[tuple(local_maxi.T)] = True + markers, n_markers = ndi.label(mask) print('Number of markers: ', n_markers) - labels = watershed(input_image, markers) + labels = watershed(-distance, markers, mask=binary_image) return labels diff --git a/plugins/Filter2D3D/filters/Skimg3D.py b/plugins/Filter2D3D/filters/Skimg3D.py index 5088312a..931151dd 100644 --- a/plugins/Filter2D3D/filters/Skimg3D.py +++ b/plugins/Filter2D3D/filters/Skimg3D.py @@ -7,7 +7,7 @@ from scipy import ndimage as ndi -from skimage.morphology import watershed +from skimage.segmentation import watershed from skimage.feature import peak_local_max class Skimg3D(): @@ -16,10 +16,13 @@ def Filter(self, input_image, params): binary_image = np.logical_not(input_image > params['Binarization threshold']) distance = ndi.distance_transform_edt(binary_image) - local_maxi = peak_local_max(distance, min_distance=params['Min distance'], indices=False ) - markers, n_markers = ndi.label(local_maxi) + local_maxi = peak_local_max(distance, 
labels=binary_image, min_distance=params['Min distance']) + mask = np.zeros(distance.shape, dtype=bool) + mask[tuple(local_maxi.T)] = True + markers, n_markers = ndi.label(mask) print('Number of markers: ', n_markers) - labels = watershed(input_image, markers) + labels = watershed(-distance, markers, mask=binary_image) + return labels diff --git a/plugins/miscellaneous/MiscellaneousFilters.py b/plugins/miscellaneous/MiscellaneousFilters.py index ca4cc242..58077e2d 100644 --- a/plugins/miscellaneous/MiscellaneousFilters.py +++ b/plugins/miscellaneous/MiscellaneousFilters.py @@ -137,14 +137,7 @@ def ObtainTarget(self): # for ofileobj in ofolder.values(): # ofileobj.close() # - search1 = os.path.join(input_path, '*.png') - search2 = os.path.join(input_path, '*.tif') - search3 = os.path.join(input_path, '*.tiff') - filestack = sorted(glob.glob(search1)) - filestack.extend(sorted(glob.glob(search2))) - filestack.extend(sorted(glob.glob(search3))) - - # print('filestack : ', filestack) + filestack = m.ObtainImageFiles( input_path ) return filestack @@ -197,14 +190,13 @@ def Execute3D(self, w): for zi, filename in enumerate(filestack): output_name = os.path.basename(filename) savename = os.path.join(output_path, output_name) - root, ext = os.path.splitext(savename) - if ext == ".tif" or ext == ".tiff" or ext == ".TIF" or ext == ".TIFF": - m.save_tif16(input_volume[:, :, zi], savename) - elif ext == ".png" or ext == ".PNG": - m.save_png16(input_volume[:, :, zi], savename) + print("Save: ",savename) + flag = m.SaveImage(input_volume[:, :, zi], savename) + print('2D/3D filters were applied!') - # Lock Folder - m.LockFolder(self.parent.u_info, output_path) + # Change folder type + self.parent.parent.ExecuteCloseFileFolder(output_path) + self.parent.parent.OpenFolder(output_path) def Execute2D(self, w): @@ -226,26 +218,15 @@ def Execute2D(self, w): # input_image = cv2.imread(filename, cv2.IMREAD_GRAYSCALE) input_image = m.imread(filename, flags=cv2.IMREAD_GRAYSCALE) output_image = self.FilterApplication2D(w, input_image) - output_dtype = output_image.dtype savename = os.path.join(output_path, output_name) - root, ext = os.path.splitext(savename) - if ext == ".tif" or ext == ".tiff" or ext == ".TIF" or ext == ".TIFF": - if output_dtype == 'uint16': - m.save_tif16(output_image, savename) - elif output_dtype == 'uint8': - m.save_tif8(output_image, savename) - else: - print('dtype mismatch: ', output_dtype) - elif ext == ".png" or ext == ".PNG": - if output_dtype == 'uint16': - m.save_png16(output_image, savename) - elif output_dtype == 'uint8': - m.save_png8(output_image, savename) - else: - print('dtype mismatch: ', output_dtype) + + flag = m.SaveImage(output_image, savename) + print('2D filters were applied!') - # Lock Folder - m.LockFolder(self.parent.u_info, output_path) + # Change folder type + self.parent.parent.ExecuteCloseFileFolder(output_path) + self.parent.parent.OpenFolder(output_path) + def FilterApplication2D(self, w, image): diff --git a/requirements-linux-cpu.txt b/requirements-linux-cpu.txt deleted file mode 100644 index 896a49f0..00000000 --- a/requirements-linux-cpu.txt +++ /dev/null @@ -1,46 +0,0 @@ -numpy==1.16.4 -absl-py==0.7.0 -altgraph==0.16.1 -astor==0.7.1 -certifi==2018.11.29 -cloudpickle==0.7.0 -dask==1.1.1 -decorator==4.3.2 -future==0.17.1 -gast==0.2.2 -grpcio==1.18.0 -h5py==2.9.0 -icc-rt==2019.0 -intel-openmp==2019.0 -Keras-Applications==1.0.7 -Keras-Preprocessing==1.0.9 -libtiff==0.4.2 -lxml==4.3.1 -macholib==1.11 -mahotas==1.4.5 -Markdown==3.0.1 -networkx==2.2 
-numpy-stl==2.9.0 -opencv-python==4.0.0.21 -pefile==2018.8.8 -Pillow==6.2.0 -protobuf==3.6.1 -pypng==0.0.19 -PyQt5==5.12.1 -PyQt5-sip==4.19.14 -PyQtWebEngine==5.12.1 -python-utils==2.3.0 -PyWavelets==1.0.1 -scikit-image==0.14.2 -scipy==1.2.0 -sip==4.19.8 -six==1.12.0 -tbb==2019.0 -tbb4py==2019.0 -tensorboard==1.12.2 -tensorflow==1.12.0 -termcolor==1.1.0 -toolz==0.9.0 -tornado==5.1.1 -Werkzeug==0.15.3 -wincertstore==0.2 diff --git a/requirements-linux-gpu.txt b/requirements-linux-gpu.txt deleted file mode 100644 index a08608b4..00000000 --- a/requirements-linux-gpu.txt +++ /dev/null @@ -1,46 +0,0 @@ -numpy==1.16.4 -absl-py==0.7.0 -altgraph==0.16.1 -astor==0.7.1 -certifi==2018.11.29 -cloudpickle==0.7.0 -dask==1.1.1 -decorator==4.3.2 -future==0.17.1 -gast==0.2.2 -grpcio==1.18.0 -h5py==2.9.0 -icc-rt==2019.0 -intel-openmp==2019.0 -Keras-Applications==1.0.7 -Keras-Preprocessing==1.0.9 -libtiff==0.4.2 -lxml==4.3.1 -macholib==1.11 -mahotas==1.4.5 -Markdown==3.0.1 -networkx==2.2 -numpy-stl==2.9.0 -opencv-python==4.0.0.21 -pefile==2018.8.8 -Pillow==6.2.0 -protobuf==3.6.1 -pypng==0.0.19 -PyQt5==5.12.1 -PyQt5-sip==4.19.14 -PyQtWebEngine==5.12.1 -python-utils==2.3.0 -PyWavelets==1.0.1 -scikit-image==0.14.2 -scipy==1.2.0 -sip==4.19.8 -six==1.12.0 -tbb==2019.0 -tbb4py==2019.0 -tensorboard==1.12.2 -tensorflow-gpu==1.12.0 -termcolor==1.1.0 -toolz==0.9.0 -tornado==5.1.1 -Werkzeug==0.15.3 -wincertstore==0.2 diff --git a/requirements-linux.txt b/requirements-linux.txt new file mode 100644 index 00000000..d6bc715d --- /dev/null +++ b/requirements-linux.txt @@ -0,0 +1,63 @@ +numpy==1.19.5 +absl-py==0.13.0 +astunparse==1.6.3 +cachetools==4.2.2 +certifi==2021.5.30 +charset-normalizer==2.0.4 +cycler==0.10.0 +dataclasses==0.8 +decorator==4.4.2 +flatbuffers==1.12 +gast==0.3.3 +google-auth==1.35.0 +google-auth-oauthlib==0.4.6 +google-pasta==0.2.0 +grpcio==1.32.0 +h5py==2.10.0 +idna==3.2 +imageio==2.9.0 +importlib-metadata==4.8.1 +Keras-Preprocessing==1.1.2 +kiwisolver==1.3.1 +Markdown==3.3.4 +matplotlib==3.3.4 +networkx==2.5.1 +numpy-stl==2.16.3 +oauthlib==3.1.1 +opencv-python==4.5.3.56 +opt-einsum==3.3.0 +Pillow==8.3.2 +pkg_resources==0.0.0 +protobuf==3.17.3 +pyasn1==0.4.8 +pyasn1-modules==0.2.8 +pyparsing==2.4.7 +pypng==0.0.21 +PyQt5-Qt5==5.15.2 +PyQt5-sip==12.9.0 +PyQtWebEngine==5.15.4 +PyQtWebEngine-Qt5==5.15.2 +python-dateutil==2.8.2 +python-utils==2.5.6 +PyWavelets==1.1.1 +requests==2.26.0 +requests-oauthlib==1.3.0 +rsa==4.7.2 +ruamel.yaml==0.17.16 +ruamel.yaml.clib==0.2.6 +scikit-image==0.17.2 +scipy==1.5.4 +six==1.15.0 +tensorboard==2.6.0 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.0 +tensorflow==2.4.0 +tensorflow-estimator==2.4.0 +termcolor==1.1.0 +tifffile==2020.9.3 +tornado==6.1 +typing-extensions==3.7.4.3 +urllib3==1.26.6 +Werkzeug==2.0.1 +wrapt==1.12.1 +zipp==3.5.0 diff --git a/requirements-win-cpu.txt b/requirements-win-cpu.txt deleted file mode 100644 index 3bfaeea6..00000000 --- a/requirements-win-cpu.txt +++ /dev/null @@ -1,55 +0,0 @@ -numpy==1.16.4 -absl-py==0.7.0 -altgraph==0.16.1 -astor==0.7.1 -certifi==2018.11.29 -cloudpickle==0.7.0 -dask==1.1.1 -decorator==4.3.2 -future==0.17.1 -gast==0.2.2 -google-pasta==0.2.0 -grpcio==1.18.0 -h5py==2.9.0 -icc-rt==2019.0 -intel-openmp==2019.0 -Keras-Applications==1.0.8 -Keras-Preprocessing==1.0.9 -libtiff==0.4.2 -lxml==4.3.1 -macholib==1.11 -mahotas==1.4.5 -Markdown==3.0.1 -networkx==2.2 -numpy-stl==2.9.0 -opencv-python==4.4.0.46 -opt-einsum==3.3.0 -packaging==20.4 -pefile==2018.8.8 -Pillow==6.2.0 -protobuf==3.6.1 -pyinstaller==4.0 
-pyinstaller-hooks-contrib==2020.10 -pyparsing==2.4.7 -pypng==0.0.19 -PyQt5==5.15.1 -PyQt5-sip==12.8.1 -PyQtWebEngine==5.15.1 -python-utils==2.3.0 -PyWavelets==1.0.1 -pywin32-ctypes==0.2.0 -scikit-image==0.14.2 -scipy==1.2.0 -sip==5.4.0 -six==1.15.0 -tbb==2019.0 -tensorboard==1.14.0 -tensorflow==1.14.0 -tensorflow-estimator==1.14.0 -termcolor==1.1.0 -toml==0.10.2 -toolz==0.9.0 -tornado==5.1.1 -Werkzeug==0.15.3 -wincertstore==0.2 -wrapt==1.12.1 diff --git a/requirements-win-gpu.txt b/requirements-win-gpu.txt deleted file mode 100644 index 24858b76..00000000 --- a/requirements-win-gpu.txt +++ /dev/null @@ -1,55 +0,0 @@ -numpy==1.16.4 -absl-py==0.7.0 -altgraph==0.16.1 -astor==0.7.1 -certifi==2018.11.29 -cloudpickle==0.7.0 -dask==1.1.1 -decorator==4.3.2 -future==0.17.1 -gast==0.2.2 -google-pasta==0.2.0 -grpcio==1.18.0 -h5py==2.9.0 -icc-rt==2019.0 -intel-openmp==2019.0 -Keras-Applications==1.0.8 -Keras-Preprocessing==1.0.9 -libtiff==0.4.2 -lxml==4.3.1 -macholib==1.11 -mahotas==1.4.5 -Markdown==3.0.1 -networkx==2.2 -numpy-stl==2.9.0 -opencv-python==4.4.0.46 -opt-einsum==3.3.0 -packaging==20.4 -pefile==2018.8.8 -Pillow==6.2.0 -protobuf==3.6.1 -pyinstaller==4.0 -pyinstaller-hooks-contrib==2020.10 -pyparsing==2.4.7 -pypng==0.0.19 -PyQt5==5.15.1 -PyQt5-sip==12.8.1 -PyQtWebEngine==5.15.1 -python-utils==2.3.0 -PyWavelets==1.0.1 -pywin32-ctypes==0.2.0 -scikit-image==0.14.2 -scipy==1.2.0 -sip==5.4.0 -six==1.15.0 -tbb==2019.0 -tensorboard==1.14.0 -tensorflow-GPU==1.14.0 -tensorflow-estimator==1.14.0 -termcolor==1.1.0 -toml==0.10.2 -toolz==0.9.0 -tornado==5.1.1 -Werkzeug==0.15.3 -wincertstore==0.2 -wrapt==1.12.1 diff --git a/requirements-win.txt b/requirements-win.txt new file mode 100644 index 00000000..b5eedf90 --- /dev/null +++ b/requirements-win.txt @@ -0,0 +1,67 @@ +numpy==1.19.5 +absl-py==0.13.0 +altgraph==0.17 +astor==0.8.1 +astunparse==1.6.3 +cachetools==4.2.2 +certifi==2021.5.30 +charset-normalizer==2.0.4 +cycler==0.10.0 +flatbuffers==1.12 +future==0.18.2 +gast==0.3.3 +google-auth==1.35.0 +google-auth-oauthlib==0.4.6 +google-pasta==0.2.0 +grpcio==1.32.0 +h5py==2.10.0 +idna==3.2 +imageio==2.9.0 +Keras-Applications==1.0.8 +Keras-Preprocessing==1.1.2 +kiwisolver==1.3.2 +lxml==4.6.3 +mahotas==1.4.11 +Markdown==3.3.4 +matplotlib==3.4.3 +networkx==2.6.2 +numpy-stl==2.16.2 +oauthlib==3.1.1 +opencv-python==4.5.3.56 +opt-einsum==3.3.0 +pefile==2021.9.3 +Pillow==8.3.2 +protobuf==3.17.3 +pyasn1==0.4.8 +pyasn1-modules==0.2.8 +pyinstaller==4.5.1 +pyinstaller-hooks-contrib==2021.3 +pyparsing==2.4.7 +pypng==0.0.21 +PyQt5==5.15.4 +PyQt5-Qt5==5.15.2 +PyQt5-sip==12.9.0 +PyQtWebEngine==5.15.4 +PyQtWebEngine-Qt5==5.15.2 +python-dateutil==2.8.2 +python-utils==2.5.6 +PyWavelets==1.1.1 +pywin32-ctypes==0.2.0 +requests==2.26.0 +requests-oauthlib==1.3.0 +rsa==4.7.2 +scikit-image==0.18.3 +scipy==1.7.1 +six==1.15.0 +tensorboard==2.6.0 +tensorboard-data-server==0.6.1 +tensorboard-plugin-wit==1.8.0 +tensorflow==2.4.1 +tensorflow-estimator==2.4.0 +termcolor==1.1.0 +tifffile==2021.8.30 +tornado==6.1 +typing-extensions==3.7.4.3 +urllib3==1.26.6 +Werkzeug==2.0.1 +wrapt==1.12.1 diff --git a/segment/_2D_DNN/InferenceExe.py b/segment/_2D_DNN/InferenceExe.py index 6c5592f6..59316344 100644 --- a/segment/_2D_DNN/InferenceExe.py +++ b/segment/_2D_DNN/InferenceExe.py @@ -21,13 +21,8 @@ class InferenceExe(): def _Run(self, parent, params, comm_title): + input_files = m.ObtainImageFiles( params['Image Folder'] ) - input_files = glob.glob(os.path.join(params['Image Folder'], "*.jpg")) - input_png = glob.glob(os.path.join(params['Image 
Folder'], "*.png")) - input_tif = glob.glob(os.path.join(params['Image Folder'], "*.tif")) - input_files.extend(input_png) - input_files.extend(input_tif) - input_files = sorted(input_files) if len(input_files) == 0: print('No images in the Image Folder.') return False @@ -100,7 +95,9 @@ def _Run(self, parent, params, comm_title): filename = path.basename(input_file) print(filename+' ') - filename = filename.replace('.tif', '.png') + for ext in ['.TIF','.tif', '.TIFF', '.tiff','.PNG','.jpg', '.jpeg','.JPG', '.JPEG'] : + filename = filename.replace(ext, '.png') + output_files.append(filename) # add fringe X @@ -158,10 +155,10 @@ def _Run(self, parent, params, comm_title): if (num_tiles_x == 1) and (num_tiles_y == 1) : ## Remove fringes filename = os.path.join( tmpdir_output, output_file ) - inferred_segmentation = m.imread(filename) + inferred_segmentation = m.imread(filename, flags=cv2.IMREAD_GRAYSCALE, dtype='uint8') else : ## Merge split images. - inferred_segmentation = np.zeros((converted_size_y, converted_size_x, 3), dtype = int) + inferred_segmentation = np.zeros((converted_size_y, converted_size_x), dtype='uint8') for iy in range( num_tiles_y ): for ix in range( num_tiles_x ): y0 = iy * unit_image_size_y @@ -170,14 +167,30 @@ def _Run(self, parent, params, comm_title): x1 = x0 + unit_image_size_x current_tile_filename = str(ix).zfill(3)[-3:]+'_'+ str(iy).zfill(3)[-3:]+'_'+output_file current_tile_filename = os.path.join( tmpdir_output, current_tile_filename ) - current_tile = m.imread(current_tile_filename) - inferred_segmentation[y0:y1, x0:x1] = current_tile + current_tile = m.imread(current_tile_filename, flags=cv2.IMREAD_GRAYSCALE, dtype='uint8') + inferred_segmentation[y0:y1, x0:x1] = current_tile[:,:] inferred_segmentation = inferred_segmentation[0:image_size_y, 0:image_size_x] - filename = os.path.splitext(os.path.basename(output_file))[0] + ext_image + print('inferred_segmentation: ', inferred_segmentation.shape, inferred_segmentation.dtype) + + ## Save + filename_base = os.path.splitext(os.path.basename(output_file))[0] + filename_base = os.path.join( params['Output Segmentation Folder (Empty)'], filename_base ) + + filetype = params['Output Filetype'] + + if filetype == '8-bit gray scale PNG': + filename = filename_base + '.png' + m.save_png8(inferred_segmentation, filename) + elif filetype == '8-bit gray scale TIFF (Uncompressed)': + filename = filename_base + '.tif' + m.save_tif8(inferred_segmentation, filename, compression=1) + elif filetype == '8-bit gray scale TIFF (Compressed)': + filename = filename_base + '.tif' + m.save_tif8(inferred_segmentation, filename) + else: + print('Internel error: bad filetype.') print(filename) - filename = os.path.join( params['Output Segmentation Folder (Empty)'], filename ) - m.imwrite(filename, inferred_segmentation) ## @@ -203,5 +216,4 @@ def _ChangeIntoColor(self, img): img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR) return img - diff --git a/segment/_2D_DNN/InferenceTab.py b/segment/_2D_DNN/InferenceTab.py index b3a74481..b7275690 100644 --- a/segment/_2D_DNN/InferenceTab.py +++ b/segment/_2D_DNN/InferenceTab.py @@ -18,17 +18,19 @@ def __init__(self, u_info): self.title = '2D Inference' self.tips = [ - 'Path to folder containing images', - 'Path to folder to store segmentation', + 'Path to folder containing images for inference', 'Tensorflow model folder', - 'Large image will be splited into pieces of the unit images' + 'Path to folder to store inferred segmentation', + 'Output Filetype', + 'Unit size of images for inference. 
Large image will be splited into pieces of the unit-size images.' ] self.args = [ ['Image Folder', 'SelectImageFolder', 'OpenImageFolder'], - ['Output Segmentation Folder (Empty)', 'SelectEmptyFolder', 'OpenEmptyFolder'], ['Model Folder', 'SelectModelFolder', 'OpenModelFolder'], + ['Output Segmentation Folder (Empty)', 'SelectEmptyFolder', 'OpenEmptyFolder'], + ['Output Filetype', 'ComboBox', ['8-bit gray scale PNG', '8-bit gray scale TIFF (Uncompressed)', '8-bit gray scale TIFF (Compressed)']], ['Maximal unit image size', 'ComboBox', ["512", "1024", "2048"]] ] diff --git a/segment/_2D_DNN/translate.py b/segment/_2D_DNN/translate.py index d9daa64e..5d7346d3 100644 --- a/segment/_2D_DNN/translate.py +++ b/segment/_2D_DNN/translate.py @@ -3,13 +3,13 @@ from __future__ import print_function #HU{ -import warnings -warnings.filterwarnings('ignore', category=DeprecationWarning) -warnings.filterwarnings('ignore', category=FutureWarning) +#import warnings +#warnings.filterwarnings('ignore', category=DeprecationWarning) +#warnings.filterwarnings('ignore', category=FutureWarning) #}HU -import tensorflow as tf -import tensorflow.contrib as contrib + + import numpy as np import argparse import os @@ -20,16 +20,26 @@ import math import time -#HU{ -if tf.__version__ == '1.12.0': - from tensorflow.python.util import deprecation - deprecation._PRINT_DEPRECATION_WARNINGS = False - -if ('1.14' in tf.__version__) | ('1.15' in tf.__version__): - tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) - -# os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true' +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' in ver ): + import tensorflow.compat.v1 as tf + tf.disable_v2_behavior() +else: + import tensorflow as tf +## +import logging +import warnings +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +warnings.simplefilter(action='ignore', category=FutureWarning) +warnings.simplefilter(action='ignore', category=Warning) +tf.get_logger().setLevel('INFO') +tf.autograph.set_verbosity(0) +tf.get_logger().setLevel(logging.ERROR) +## +#HU{ gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: try: @@ -612,7 +622,7 @@ def create_model(inputs, targets, network=a.network, target_loss=a.loss): ema = tf.train.ExponentialMovingAverage(decay=0.99) update_losses = ema.apply([ loss]) - global_step = tf.contrib.framework.get_or_create_global_step() + global_step = tf.train.get_or_create_global_step() incr_global_step = tf.assign(global_step, global_step+1) return Model( @@ -687,8 +697,8 @@ def append_index(filesets, step=False, image_kinds=("inputs", "outputs", "target def main(): - if tf.__version__.split('.')[0] != "1": - raise Exception("Tensorflow version 1 required") +# if tf.__version__.split('.')[0] != "1": +# raise Exception("Tensorflow version 1 required") if a.seed is None: a.seed = random.randint(0, 2**31 - 1) diff --git a/segment/_3D_FFN/FFNInference.py b/segment/_3D_FFN/FFNInference.py index 1c6ba5a6..5c087c8e 100644 --- a/segment/_3D_FFN/FFNInference.py +++ b/segment/_3D_FFN/FFNInference.py @@ -57,7 +57,7 @@ def _Run(self, parent, params, comm_title): removal_file2 = os.path.join( params['FFNs Folder'], '0','0','seg-0_0_0.prob') if os.path.isfile(removal_file1) or os.path.isfile(removal_file2) : - question = "seg-0_0_0 files were found in the FFNs Folder. Remove them?" + question = "Previous result of inference has been found in the FFNs Folder. Remove them?" 
reply = self.query_yes_no(question, default="yes") if reply == True: @@ -65,7 +65,7 @@ def _Run(self, parent, params, comm_title): os.remove(removal_file1) with contextlib.suppress(FileNotFoundError): os.remove(removal_file2) - print('seg-0_0_0 files were removed.') + print('Inference files were removed.') else: print('FFN inference was canceled.') m.LockFolder(parent.u_info, params['FFNs Folder']) diff --git a/segment/_3D_FFN/FFNPostprocessing.py b/segment/_3D_FFN/FFNPostprocessing.py index 7f7812bb..daa54c9e 100644 --- a/segment/_3D_FFN/FFNPostprocessing.py +++ b/segment/_3D_FFN/FFNPostprocessing.py @@ -89,9 +89,9 @@ def __init__(self, u_info): self.title = 'Postprocessing' self.tips = [ - 'Folder that contains 0/0/seg-0_0_0.npz.', - 'Output segmentation folder.', - 'Output filetype.' + 'Folder that contains 0/0/seg-0_0_0.npz', + 'Output segmentation folder', + 'Output filetype' ] self.args = [ diff --git a/segment/_3D_FFN/FFNTraining.py b/segment/_3D_FFN/FFNTraining.py index 4a411eb4..6187e887 100644 --- a/segment/_3D_FFN/FFNTraining.py +++ b/segment/_3D_FFN/FFNTraining.py @@ -31,7 +31,7 @@ def _Run(self, parent, params, comm_title): record_file_path = os.path.join( params['FFNs Folder'] , "tf_record_file" ) with h5py.File( training_image_file , 'r') as f: - image = f['raw'].value + image = f['raw'][()] image_mean = np.mean(image).astype(np.int16) image_std = np.std(image).astype(np.int16) print('Training image mean: ', image_mean) diff --git a/segment/_3D_FFN/ffn/build_coordinates.py b/segment/_3D_FFN/ffn/build_coordinates.py index 9b464b16..3eabe9db 100644 --- a/segment/_3D_FFN/ffn/build_coordinates.py +++ b/segment/_3D_FFN/ffn/build_coordinates.py @@ -16,7 +16,28 @@ import h5py import numpy as np -import tensorflow as tf + + +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' 
in ver ): + import tensorflow.compat.v1 as tf + tf.disable_v2_behavior() +else: + import tensorflow as tf +## +import os +import logging +import warnings +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +warnings.simplefilter(action='ignore', category=FutureWarning) +warnings.simplefilter(action='ignore', category=Warning) +tf.get_logger().setLevel('INFO') +tf.autograph.set_verbosity(0) +tf.get_logger().setLevel(logging.ERROR) +## + FLAGS = flags.FLAGS @@ -86,9 +107,9 @@ def main(argv): np.random.shuffle(indices) logging.info('Saving coordinates.') - record_options = tf.python_io.TFRecordOptions( - tf.python_io.TFRecordCompressionType.GZIP) - with tf.python_io.TFRecordWriter(FLAGS.coordinate_output, + record_options = tf.io.TFRecordOptions( + tf.io.TFRecordCompressionType.GZIP) + with tf.io.TFRecordWriter(FLAGS.coordinate_output, options=record_options) as writer: for i, coord_idx in indices: z, y, x = np.unravel_index(coord_idx, vol_shapes[i]) diff --git a/segment/_3D_FFN/ffn/ffn/inference/executor.py b/segment/_3D_FFN/ffn/ffn/inference/executor.py index b08850b8..08fbb460 100644 --- a/segment/_3D_FFN/ffn/ffn/inference/executor.py +++ b/segment/_3D_FFN/ffn/ffn/inference/executor.py @@ -22,11 +22,7 @@ from __future__ import division from __future__ import print_function -# HU -import warnings -warnings.filterwarnings('ignore', category=DeprecationWarning) -warnings.filterwarnings('ignore', category=FutureWarning) -# + import logging import os @@ -39,7 +35,20 @@ from concurrent import futures import numpy as np -import tensorflow as tf + + +# import tensorflow as tf +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' in ver ): + import tensorflow.compat.v1 as tf + tf.disable_v2_behavior() +else: + import tensorflow as tf +## + + from .inference_utils import timer_counter diff --git a/segment/_3D_FFN/ffn/ffn/inference/inference.py b/segment/_3D_FFN/ffn/ffn/inference/inference.py index 7b892161..39acd075 100644 --- a/segment/_3D_FFN/ffn/ffn/inference/inference.py +++ b/segment/_3D_FFN/ffn/ffn/inference/inference.py @@ -38,9 +38,32 @@ from scipy.special import logit from skimage import transform -import tensorflow as tf -from tensorflow import gfile +#import tensorflow as tf +#from tensorflow import gfile +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' in ver ): + import tensorflow.compat.v1 as tf + from tensorflow.compat.v1 import gfile + tf.disable_v2_behavior() +else: + import tensorflow as tf + from tensorflow import gfile +## +## +import os +import logging +import warnings +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +warnings.simplefilter(action='ignore', category=FutureWarning) +warnings.simplefilter(action='ignore', category=Warning) +tf.get_logger().setLevel('INFO') +tf.autograph.set_verbosity(0) +tf.get_logger().setLevel(logging.ERROR) +## + from . import align from . import executor from . import inference_pb2 @@ -60,18 +83,6 @@ MAX_SELF_CONSISTENT_ITERS = 32 -#HU{ -if tf.__version__ == '1.12.0': - from tensorflow.python.util import deprecation - deprecation._PRINT_DEPRECATION_WARNINGS = False - -if ('1.14' in tf.__version__) | ('1.15' in tf.__version__): - tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) -tf.logging.set_verbosity(tf.logging.INFO) -os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true' -#}HU - - # Visualization. 
# --------------------------------------------------------------------------- class DynamicImage(object): diff --git a/segment/_3D_FFN/ffn/ffn/inference/movement.py b/segment/_3D_FFN/ffn/ffn/inference/movement.py index ee840f96..3d2bb271 100644 --- a/segment/_3D_FFN/ffn/ffn/inference/movement.py +++ b/segment/_3D_FFN/ffn/ffn/inference/movement.py @@ -23,7 +23,19 @@ import weakref import numpy as np from scipy.special import logit -import tensorflow as tf + + +#import tensorflow as tf +#from tensorflow import gfile +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' in ver ): + import tensorflow.compat.v1 as tf + tf.disable_v2_behavior() +else: + import tensorflow as tf +## from ..training.import_util import import_symbol diff --git a/segment/_3D_FFN/ffn/ffn/inference/resegmentation.py b/segment/_3D_FFN/ffn/ffn/inference/resegmentation.py index fa919a39..9e869caf 100644 --- a/segment/_3D_FFN/ffn/ffn/inference/resegmentation.py +++ b/segment/_3D_FFN/ffn/ffn/inference/resegmentation.py @@ -31,7 +31,29 @@ from scipy import ndimage from scipy.special import expit -from tensorflow import gfile +#from tensorflow import gfile +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' in ver ): + import tensorflow.compat.v1 as tf + from tensorflow.compat.v1 import gfile + tf.disable_v2_behavior() +else: + import tensorflow as tf + from tensorflow import gfile +#### +import os +import logging +import warnings +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +warnings.simplefilter(action='ignore', category=FutureWarning) +warnings.simplefilter(action='ignore', category=Warning) +tf.get_logger().setLevel('INFO') +tf.autograph.set_verbosity(0) +tf.get_logger().setLevel(logging.ERROR) +## + from . import storage from .inference_utils import timer_counter diff --git a/segment/_3D_FFN/ffn/ffn/inference/storage.py b/segment/_3D_FFN/ffn/ffn/inference/storage.py index dc130bf3..bb49ac6b 100644 --- a/segment/_3D_FFN/ffn/ffn/inference/storage.py +++ b/segment/_3D_FFN/ffn/ffn/inference/storage.py @@ -29,7 +29,31 @@ import h5py import numpy as np -from tensorflow import gfile + +#from tensorflow import gfile +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' in ver ): + import tensorflow.compat.v1 as tf + from tensorflow.compat.v1 import gfile + tf.disable_v2_behavior() +else: + import tensorflow as tf + from tensorflow import gfile +## +## +import os +import logging +import warnings +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +warnings.simplefilter(action='ignore', category=FutureWarning) +warnings.simplefilter(action='ignore', category=Warning) +tf.get_logger().setLevel('INFO') +tf.autograph.set_verbosity(0) +tf.get_logger().setLevel(logging.ERROR) +## + from . import align from . import segmentation from ..utils import bounding_box diff --git a/segment/_3D_FFN/ffn/ffn/training/augmentation.py b/segment/_3D_FFN/ffn/ffn/training/augmentation.py index c2b90529..6e9edb65 100644 --- a/segment/_3D_FFN/ffn/ffn/training/augmentation.py +++ b/segment/_3D_FFN/ffn/ffn/training/augmentation.py @@ -19,7 +19,18 @@ from __future__ import print_function import numpy as np -import tensorflow as tf + +#import tensorflow as tf +## HU +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ('1.15' in ver) |( '2.' 
in ver ): + import tensorflow.compat.v1 as tf + tf.disable_v2_behavior() +else: + import tensorflow as tf +## + def reflection(data, decision): diff --git a/segment/_3D_FFN/ffn/ffn/training/convstack_3d.py b/segment/_3D_FFN/ffn/ffn/training/convstack_3d.py index 3f1d250e..09b2f5b8 100644 --- a/segment/_3D_FFN/ffn/ffn/training/convstack_3d.py +++ b/segment/_3D_FFN/ffn/ffn/training/convstack_3d.py @@ -18,38 +18,38 @@ from __future__ import division from __future__ import print_function -# HU -import warnings -warnings.filterwarnings('ignore', category=DeprecationWarning) -warnings.filterwarnings('ignore', category=FutureWarning) -# - -import tensorflow as tf -#HU{ -if tf.__version__ == '1.12.0': - from tensorflow.python.util import deprecation - deprecation._PRINT_DEPRECATION_WARNINGS = False +## Modified by HU + +import pkg_resources +ver = pkg_resources.get_distribution('tensorflow').version +if ( '2.' in ver ): + import tensorflow.compat.v1 as tf + tf.disable_v2_behavior() +elif ('1.15' in ver) : + import tensorflow.compat.v1 as tf + tf.disable_v2_behavior() + import tensorflow.contrib as tf_contrib +else: + import tensorflow as tf + import tensorflow.contrib as tf_contrib +## +#import os +#import logging +#import warnings +#os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +#warnings.simplefilter(action='ignore', category=FutureWarning) +#warnings.simplefilter(action='ignore', category=Warning) +#tf.get_logger().setLevel('INFO') +#tf.autograph.set_verbosity(0) +#tf.get_logger().setLevel(logging.ERROR) +## +## Modified by HU -#if ('1.14' in tf.__version__) | ('1.15' in tf.__version__): -# tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) import os os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true' tf.logging.set_verbosity(tf.logging.INFO) -#gpus = tf.config.experimental.list_physical_devices('GPU') -#if gpus: -# try: -# # Currently, memory growth needs to be the same across GPUs -# for gpu in gpus: -# tf.config.experimental.set_memory_growth(gpu, True) -# logical_gpus = tf.config.experimental.list_logical_devices('GPU') -# print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs") -# except RuntimeError as e: -# # Memory growth must be set before GPUs have been initialized -# print(e) - -#}HU import sys @@ -58,14 +58,15 @@ current_dir = path.join(main_dir, "ffn","training") sys.path.append(current_dir) import model -tf.logging.set_verbosity(tf.logging.INFO) + + # Note: this model was originally trained with conv3d layers initialized with # TruncatedNormalInitializedVariable with stddev = 0.01. 
@@ -58,14 +58,15 @@
 current_dir = path.join(main_dir, "ffn","training")
 sys.path.append(current_dir)
 import model
-tf.logging.set_verbosity(tf.logging.INFO)
+
+
 
 # Note: this model was originally trained with conv3d layers initialized with
 # TruncatedNormalInitializedVariable with stddev = 0.01.
-def _predict_object_mask(net, depth=9):
-  """Computes single-object mask prediction."""
-  conv = tf.contrib.layers.conv3d
-  with tf.contrib.framework.arg_scope([conv], num_outputs=32,
+def _predict_object_mask_TF1(net, depth=9):
+
+  conv = tf_contrib.layers.conv3d
+  with tf_contrib.framework.arg_scope([conv], num_outputs=32,
                                       kernel_size=(3, 3, 3),
                                       padding='SAME'):
     net = conv(net, scope='conv0_a')
@@ -85,6 +86,31 @@
   return logits
 
+## Modified by HU
+
+def _predict_object_mask_TF2(net, depth=9):
+
+  conv = tf.layers.conv3d
+  net = conv(net, filters=32, kernel_size=(3, 3, 3), padding='same', activation=tf.nn.relu, name='conv0_a')
+  net = conv(net, filters=32, kernel_size=(3, 3, 3), padding='same', activation=None, name='conv0_b')
+
+  for i in range(1, depth):
+    with tf.name_scope('residual%d' % i):
+      in_net = net
+      net = tf.nn.relu(net)
+      net = conv(net, filters=32, kernel_size=(3, 3, 3), padding='same', activation=tf.nn.relu, name='conv%d_a' % i)
+      net = conv(net, filters=32, kernel_size=(3, 3, 3), padding='same', activation=None, name='conv%d_b' % i)
+      net += in_net
+
+  net = tf.nn.relu(net)
+  logits = conv(net, filters=1, kernel_size=(1, 1, 1), activation=None, name='conv_lom')
+
+  return logits
+
+## End: modified by HU
+
+
 
 class ConvStack3DFFNModel(model.FFNModel):
   dim = 3
@@ -104,7 +130,12 @@
   def define_tf_graph(self):
     net = tf.concat([self.input_patches, self.input_seed], 4)
 
     with tf.variable_scope('seed_update', reuse=False):
-      logit_update = _predict_object_mask(net, self.depth)
+
+      if ('2.' in ver):
+        logit_update = _predict_object_mask_TF2(net, self.depth)
+      else:
+        logit_update = _predict_object_mask_TF1(net, self.depth)
+
 
     logit_seed = self.update_seed(self.input_seed, logit_update)
diff --git a/segment/_3D_FFN/ffn/ffn/training/inputs.py b/segment/_3D_FFN/ffn/ffn/training/inputs.py
index 506738d1..384c8373 100644
--- a/segment/_3D_FFN/ffn/ffn/training/inputs.py
+++ b/segment/_3D_FFN/ffn/ffn/training/inputs.py
@@ -26,19 +26,22 @@
 import re
 
 import numpy as np
-import tensorflow as tf
-from tensorflow import gfile
-from ..utils import bounding_box
-
-#HU
-if tf.__version__ == '1.12.0':
-  from tensorflow.python.util import deprecation
-  deprecation._PRINT_DEPRECATION_WARNINGS = False
+#import tensorflow as tf
+#from tensorflow import gfile
+## HU
+import pkg_resources
+ver = pkg_resources.get_distribution('tensorflow').version
+if ('1.15' in ver) | ('2.' in ver):
+  from tensorflow.compat.v1 import gfile
+  import tensorflow.compat.v1 as tf
+  tf.disable_v2_behavior()
+else:
+  import tensorflow as tf
+  from tensorflow import gfile
+##
 
-if ('1.14' in tf.__version__) | ('1.15' in tf.__version__):
-  tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
-#
+from ..utils import bounding_box
 
 def create_filename_queue(coordinates_file_pattern, shuffle=True):
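_predict_object_mask_TF2 mirrors the contrib version with tf.layers.conv3d, but the two branches name their variables differently (contrib uses 'weights'/'biases', tf.layers uses 'kernel'/'bias'), which is one reason checkpoints from the TF1 and TF2 paths are not mutually loadable. Below is a minimal shape check of the TF2-style layers, assuming TF 2.x with the compat.v1 shim from this patch; it builds only the first and last layers, not the full residual stack:

    # Sketch: 'same' padding keeps the spatial dims, so the logits match
    # the input field of view (batch, z, y, x, 1).
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    net = tf.placeholder(tf.float32, shape=[1, 33, 33, 33, 2])
    net = tf.layers.conv3d(net, filters=32, kernel_size=(3, 3, 3),
                           padding='same', activation=tf.nn.relu, name='conv0_a')
    logits = tf.layers.conv3d(net, filters=1, kernel_size=(1, 1, 1),
                              activation=None, name='conv_lom')
    print(logits.shape)  # (1, 33, 33, 33, 1)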
diff --git a/segment/_3D_FFN/ffn/ffn/training/mask.py b/segment/_3D_FFN/ffn/ffn/training/mask.py
index 4268fc02..18016325 100644
--- a/segment/_3D_FFN/ffn/ffn/training/mask.py
+++ b/segment/_3D_FFN/ffn/ffn/training/mask.py
@@ -18,7 +18,17 @@
 from __future__ import print_function
 
 import numpy as np
-import tensorflow as tf
+
+# import tensorflow as tf
+## HU
+import pkg_resources
+ver = pkg_resources.get_distribution('tensorflow').version
+if ('1.15' in ver) | ('2.' in ver):
+  import tensorflow.compat.v1 as tf
+  tf.disable_v2_behavior()
+else:
+  import tensorflow as tf
+##
 
 # TODO(mjanusz): Consider integrating this with the numpy-only crop_and_pad,
diff --git a/segment/_3D_FFN/ffn/ffn/training/model.py b/segment/_3D_FFN/ffn/ffn/training/model.py
index b0427044..108f99a5 100644
--- a/segment/_3D_FFN/ffn/ffn/training/model.py
+++ b/segment/_3D_FFN/ffn/ffn/training/model.py
@@ -18,7 +18,18 @@
 from __future__ import division
 from __future__ import print_function
 
-import tensorflow as tf
+
+# import tensorflow as tf
+## HU
+import pkg_resources
+ver = pkg_resources.get_distribution('tensorflow').version
+if ('1.15' in ver) | ('2.' in ver):
+  import tensorflow.compat.v1 as tf
+  tf.disable_v2_behavior()
+else:
+  import tensorflow as tf
+##
+
 import sys
 from os import path
diff --git a/segment/_3D_FFN/ffn/ffn/training/optimizer.py b/segment/_3D_FFN/ffn/ffn/training/optimizer.py
index 9a1633e3..bab23710 100644
--- a/segment/_3D_FFN/ffn/ffn/training/optimizer.py
+++ b/segment/_3D_FFN/ffn/ffn/training/optimizer.py
@@ -18,7 +18,17 @@
 from __future__ import division
 from __future__ import print_function
 
-import tensorflow as tf
+
+# import tensorflow as tf
+## HU
+import pkg_resources
+ver = pkg_resources.get_distribution('tensorflow').version
+if ('1.15' in ver) | ('2.' in ver):
+  import tensorflow.compat.v1 as tf
+  tf.disable_v2_behavior()
+else:
+  import tensorflow as tf
+##
 
 from absl import flags
diff --git a/segment/_3D_FFN/ffn/ffn/training/variables.py b/segment/_3D_FFN/ffn/ffn/training/variables.py
index 8c2a0161..4fd48d2f 100644
--- a/segment/_3D_FFN/ffn/ffn/training/variables.py
+++ b/segment/_3D_FFN/ffn/ffn/training/variables.py
@@ -18,7 +18,16 @@
 from __future__ import division
 from __future__ import print_function
 
-import tensorflow.google as tf
+#import tensorflow.google as tf
+## HU
+import pkg_resources
+ver = pkg_resources.get_distribution('tensorflow').version
+if ('1.15' in ver) | ('2.' in ver):
+  import tensorflow.compat.v1.google as tf
+  tf.disable_v2_behavior()
+else:
+  import tensorflow as tf
+##
 
 class FractionTracker(object):
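The same log-suppression block recurs in resegmentation.py, storage.py, and run_inference_win.py, and its two setLevel calls are redundant: the second (logging.ERROR) overrides the 'INFO' set just before. If it ever gets factored out, one possible shape is sketched below; quiet_tf is an illustrative name, not part of this patch, and note that TF_CPP_MIN_LOG_LEVEL is most reliable when set before tensorflow is first imported:

    # Sketch: shared helper mirroring the repeated log-suppression block.
    import logging
    import os
    import warnings

    def quiet_tf(tf):
        os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'     # silence the C++ backend
        warnings.simplefilter(action='ignore', category=FutureWarning)
        warnings.simplefilter(action='ignore', category=Warning)
        tf.autograph.set_verbosity(0)
        tf.get_logger().setLevel(logging.ERROR)      # the effective level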
diff --git a/segment/_3D_FFN/ffn/run_inference_win.py b/segment/_3D_FFN/ffn/run_inference_win.py
index a129eb7b..5eceb1fb 100644
--- a/segment/_3D_FFN/ffn/run_inference_win.py
+++ b/segment/_3D_FFN/ffn/run_inference_win.py
@@ -34,7 +34,30 @@
 from google.protobuf import text_format
 from absl import app
 from absl import flags
-from tensorflow import gfile
+
+#from tensorflow import gfile
+## HU
+import pkg_resources
+ver = pkg_resources.get_distribution('tensorflow').version
+if ('1.15' in ver) | ('2.' in ver):
+  from tensorflow.compat.v1 import gfile
+  import tensorflow.compat.v1 as tf
+  tf.disable_v2_behavior()
+else:
+  import tensorflow as tf
+  from tensorflow import gfile
+##
+import logging
+import warnings
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+warnings.simplefilter(action='ignore', category=FutureWarning)
+warnings.simplefilter(action='ignore', category=Warning)
+tf.get_logger().setLevel('INFO')
+tf.autograph.set_verbosity(0)
+tf.get_logger().setLevel(logging.ERROR)
+##
+
+
 from ffn.utils import bounding_box_pb2
 from ffn.inference import inference
diff --git a/segment/_3D_FFN/ffn/train.py b/segment/_3D_FFN/ffn/train.py
index 8df0cb41..39b3fa2a 100644
--- a/segment/_3D_FFN/ffn/train.py
+++ b/segment/_3D_FFN/ffn/train.py
@@ -49,11 +49,31 @@
 from scipy.special import expit
 from scipy.special import logit
-import tensorflow as tf
+
+## HU
+import pkg_resources
+ver = pkg_resources.get_distribution('tensorflow').version
+if ('1.15' in ver) | ('2.' in ver):
+  import tensorflow.compat.v1 as tf
+  tf.disable_v2_behavior()
+  from tensorflow.compat.v1 import gfile
+
+else:
+  import tensorflow as tf
+  from tensorflow import gfile
+##
+#import logging
+#import warnings
+#os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+#warnings.simplefilter(action='ignore', category=FutureWarning)
+#warnings.simplefilter(action='ignore', category=Warning)
+#tf.get_logger().setLevel('INFO')
+#tf.autograph.set_verbosity(0)
+#tf.get_logger().setLevel(logging.ERROR)
+##
 
 from absl import app
 from absl import flags
-from tensorflow import gfile
 
 from ffn.inference import movement
 from ffn.training import mask
@@ -66,16 +86,6 @@
 # pylint: enable=unused-import
 
-#HU
-if tf.__version__ == '1.12.0':
-  from tensorflow.python.util import deprecation
-  deprecation._PRINT_DEPRECATION_WARNINGS = False
-
-if ('1.14' in tf.__version__) | ('1.15' in tf.__version__):
-  tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
-tf.logging.set_verbosity(tf.logging.INFO)
-#
-
 FLAGS = flags.FLAGS
diff --git a/segment/_tensorb/_tensorb.py b/segment/_tensorb/_tensorb.py
index 02bebb26..84c1c32c 100644
--- a/segment/_tensorb/_tensorb.py
+++ b/segment/_tensorb/_tensorb.py
@@ -53,7 +53,7 @@ def StartTensorboard(self, newdir):
             self.parent.process_tensorboard = s.Popen(comm, stdout=s.PIPE)
             time.sleep(1)
             self.parent.table_widget.addTab('tensorboard', 'Tensorboard',
-                'http://' + socket.gethostbyname(socket.gethostname()) + ':6006')
+                'http://' + socket.gethostbyname(socket.gethostname()) + ':6006/')
             print("Start tensorboard")
             return True
         except s.CalledProcessError as e:
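StartTensorboard launches the TensorBoard child, sleeps one second, and then opens the tab at port 6006, so a slow start can still produce an empty first load. A hedged alternative is to poll the URL until it answers; the sketch below uses only the standard library, and wait_for_http is an illustrative name, not part of this patch:

    # Sketch: poll the TensorBoard URL instead of a fixed one-second sleep.
    import socket
    import time
    import urllib.request

    def wait_for_http(url, timeout=15.0, interval=0.25):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                urllib.request.urlopen(url, timeout=1.0)
                return True            # server answered something
            except OSError:
                time.sleep(interval)   # not up yet, retry
        return False

    url = 'http://' + socket.gethostbyname(socket.gethostname()) + ':6006/'
    ready = wait_for_http(url)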
diff --git a/segment/_tensorb/launch_tensorboard.py b/segment/_tensorb/launch_tensorboard.py
index d5c91f7e..6ef615da 100644
--- a/segment/_tensorb/launch_tensorboard.py
+++ b/segment/_tensorb/launch_tensorboard.py
@@ -1,46 +1,46 @@
-#import warnings
-#warnings.filterwarnings('ignore', category=DeprecationWarning)
-#warnings.filterwarnings('ignore', category=FutureWarning)
+# Only for tensorboard 2.6.0
+# Modified from:
+# C:\Users\uraku\AppData\Local\Programs\Python\Python38\Lib\site-packages\tensorboard\main.py
+
+import sys
+import os
+from os import path, pardir
+main_dir = path.abspath(path.dirname(sys.argv[0])) # Dir of main
+webfiles = path.join(main_dir,'tensorboard','webfiles.zip')
 
-# Only for tensorboard 1.14
 import sys
+from absl import app
 from tensorboard import default
+from tensorboard import main_lib
 from tensorboard import program
-from tensorboard.compat import tf
 from tensorboard.plugins import base_plugin
+from tensorboard.uploader import uploader_subcommand
 from tensorboard.util import tb_logging
 
-from argparse import ArgumentParser
-#
 import socket
-import time
-from os import path, pardir
-main_dir = path.abspath(path.dirname(sys.argv[0])) # Dir of main
+logger = tb_logging.get_logger()
 
-usage = 'Usage: python tensorb [--logdir] [--host]'
-argparser = ArgumentParser(usage=usage)
-argparser.add_argument('--logdir', type=str,
-                       help='')
-argparser.add_argument('--host', type=str,
-                       help='')
-args = argparser.parse_args()
-argv=[None, '--logdir', args.logdir,'--host', args.host]
-logger = tb_logging.get_logger()
-program.setup_environment()
+def run_main():
+  """Initializes flags and calls main()."""
+  main_lib.global_init()
+
+  tensorboard = program.TensorBoard(
+      plugins=default.get_plugins(),
+      assets_zip_provider=lambda: open(webfiles, 'rb'),
+      subcommands=[uploader_subcommand.UploaderSubcommand()],
+  )
+  try:
+    app.run(tensorboard.main, flags_parser=tensorboard.configure)
+  except base_plugin.FlagsError as e:
+    print("Error: %s" % e, file=sys.stderr)
+    sys.exit(1)
+
+
+if __name__ == "__main__":
+  run_main()
 
-# See
-# Tensorboard/program.py: get_default_assets_zip_provider
-webfiles = path.join(main_dir,'tensorboard','webfiles.zip')
-tb = program.TensorBoard(default.get_plugins() + default.get_dynamic_plugins(), lambda: open(webfiles, 'rb'))
-tb.configure(argv=argv)
-tb.launch()
-try:
-  while True:
-    time.sleep(1)
-except KeyboardInterrupt:
-  print('Tensorboard interrupted!')
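Because run_main() hands flag parsing to tensorboard.configure via absl, the rewritten launcher accepts TensorBoard's standard flags, so the --logdir/--host invocation used by _tensorb.py keeps working. A small usage sketch, with illustrative paths:

    # Sketch: spawning the rewritten launcher the way _tensorb.py does.
    import subprocess
    import sys

    cmd = [sys.executable, 'launch_tensorboard.py',
           '--logdir', './tfdata', '--host', '127.0.0.1']
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)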
diff --git a/specs/main_.spec b/specs/main_.spec
index 86484e5a..8cba5a9d 100644
--- a/specs/main_.spec
+++ b/specs/main_.spec
@@ -4,6 +4,19 @@
 from os import path, pardir
 main_dir = os.path.abspath(SPECPATH)
 main_dir = os.path.dirname(main_dir)
+
+from pathlib import Path
+
+# CUDA_BIN = "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.X\\bin"
+
+CUDA_BIN = os.environ.get('PATH').split(";")
+CUDA_BIN = [s for s in CUDA_BIN if "CUDA" in s]
+CUDA_BIN = [s for s in CUDA_BIN if "bin" in s]
+
+binaries=[(str(i), ".") for i in Path(CUDA_BIN[0]).rglob("*.dll")]
+binaries.append( (path.join(CUDA_BIN[0], "ptxas.exe"), ".") )
+
+
 block_cipher = None
 
 def analysis(_spec_path_list, _pathex, _datas, _hiddenimports, _name):
@@ -15,7 +28,7 @@ def analysis(_spec_path_list, _pathex, _datas, _hiddenimports, _name):
 
     a = Analysis(_spec_path_list,
                  pathex=_pathex,
-                 binaries=[],
+                 binaries=binaries,
                  datas=_datas,
                  hiddenimports=_hiddenimports,
                  hookspath=['./hooks'],
@@ -94,6 +107,7 @@ for dirpath, dirnames, filenames in os.walk( path.join(main_dir, "segment","_3D_
     if os.path.basename(dirpath) != '__pycache__':
         pathex.append(path.join(main_dir, "segment", dirpath))
 
+
 translate=[path.join(main_dir, "segment","_3D_FFN","ffn","train.py")]
 hiddenimports=['scipy._lib.messagestream','pywt._extensions._cwt','gast','astor','termcolor','google.protobuf.wrappers_pb2','tensorflow.contrib']
@@ -107,6 +121,7 @@ for dirpath, dirnames, filenames in os.walk( path.join(main_dir, "segment","_3D_
     if os.path.basename(dirpath) != '__pycache__':
         pathex.append(path.join(main_dir, "segment", dirpath))
 
+
 translate=[path.join(main_dir, "segment","_3D_FFN","ffn","run_inference_win.py")]
 hiddenimports=['scipy._lib.messagestream','pywt._extensions._cwt','PyQt5.sip','gast','astor','termcolor','google.protobuf.wrappers_pb2','tensorflow.contrib']
@@ -120,6 +135,7 @@ for dirpath, dirnames, filenames in os.walk( path.join(main_dir, "segment","_3D_
     if os.path.basename(dirpath) != '__pycache__':
         pathex.append(path.join(main_dir, "segment", dirpath))
 
+
 translate=[path.join(main_dir, "segment","_3D_FFN","ffn","build_coordinates.py")]
 hiddenimports=['scipy._lib.messagestream','pywt._extensions._cwt','tensorflow.contrib']
@@ -133,6 +149,7 @@ for dirpath, dirnames, filenames in os.walk( path.join(main_dir, "segment","_3D_
     if os.path.basename(dirpath) != '__pycache__':
         pathex.append(path.join(main_dir, "segment", dirpath))
 
+
 translate=[path.join(main_dir, "segment","_3D_FFN","ffn","compute_partitions.py")]
 hiddenimports=['scipy._lib.messagestream','pywt._extensions._cwt','tensorflow.contrib']
@@ -156,6 +173,7 @@ coll += analysis(tensorb, pathex, datas, hiddenimports, 'launch_tensorboard')
 ########################## translate ##########################
 pathex=[path.join(main_dir, "segment","_2D_DNN")]
 
+
 translate=[path.join(main_dir, "segment","_2D_DNN","translate.py")]
 hiddenimports=['scipy._lib.messagestream','pywt._extensions._cwt','tensorflow.contrib']
diff --git a/system/FileManager.py b/system/FileManager.py
index 27a39a42..11d32e6b 100644
--- a/system/FileManager.py
+++ b/system/FileManager.py
@@ -271,13 +271,10 @@ def CheckFolderImage(self, folder_path):
                 filetypes.add('tif')
             if ext in ['.png','.PNG'] :
                 filetypes.add('png')
-            if ext in ['.jpg', '.jpeg'] :
+            if ext in ['.jpg', '.jpeg','.JPG', '.JPEG'] :
                 filetypes.add('jpg')
         return list(filetypes)
 
-        tmp = glob.glob(os.path.join(params['Image Folder'], "*.png"))
-        input_files.extend(tmp)
-
 def CheckFolderDojo(self, folder_path):
     tmp_info = Params()
diff --git a/system/MainWindow.py b/system/MainWindow.py
index 8b4af436..e94b16f9 100644
--- a/system/MainWindow.py
+++ b/system/MainWindow.py
@@ -50,7 +50,19 @@ class MainWindow(QMainWindow, FileMenu, DojoMenu, DojoFileIO, Credit, Script):
 
-    def __del__(self):
+    def closeEvent(self, event):
+
+        super(QMainWindow, self).closeEvent(event)
+        # print ('In close event')
+
+        #while self.table_widget.appl != []:
+        #    self.table_widget.closeTab(0)
+
+        if 'tensorboard' in self.table_widget.appl:
+            id = self.table_widget.appl.index('tensorboard')
+            self.table_widget.closeTab(id)
+
+
         for ofile in self.u_info.open_files4lock.values():
             if type(ofile) == dict :
                 for ofileobj in ofile.values():
@@ -272,7 +284,7 @@ def closeTab(self, index):
             return
         ###
         if ('tensorboard' == self.appl[index]):
-            flag = self.parent.process_tensorboard.terminate()
+            flag = self.parent.process_tensorboard.kill()
             if (flag == 1):
                 print('Error ocurred in closing tensorboard.')
                 return
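closeTab now calls kill() so the TensorBoard child cannot linger; note that Popen.terminate() and Popen.kill() both return None, so the `flag == 1` check after the call can never fire. A common middle ground is graceful-then-forceful shutdown; a sketch with the standard library, where proc stands in for self.parent.process_tensorboard:

    # Sketch: ask the child to exit, then force it after a grace period.
    import subprocess

    def stop_child(proc, grace=3.0):
        proc.terminate()              # polite request first
        try:
            proc.wait(timeout=grace)  # give it a moment to exit
        except subprocess.TimeoutExpired:
            proc.kill()               # then force it
            proc.wait()               # reap the process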