Showing 48 changed files with 554 additions and 6,436 deletions.
@@ -153,4 +153,4 @@ version.py
*.out
slurm_logs
BK_folder
@@ -1,51 +1,81 @@

# HowTOs

### 📖 **RESIN: A self-supervised framework for enhancing axial resolution of volumetric imaging data**
> [](https://baidu.com)<br>
> [Author](https://github.com)

Based on BasicSR.
### 🚩 Updates
* **2024.9.16**: XXX

## Web Demo and API

[]()

## Video Demos

*Coming soon*
## 🔧 Dependencies and Installation
- Python >= 3.7 (Anaconda or Miniconda is recommended)
- PyTorch >= 1.7
- Other required packages listed in `requirements.txt`
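For reference, a minimal environment setup could look like the sketch below. It assumes conda; the environment name `resin` and the exact PyTorch install command are placeholders, so pick the build that matches your CUDA version from pytorch.org.

```bash
# Create and activate an isolated environment (the name is arbitrary)
conda create -n resin python=3.9
conda activate resin
# Install a PyTorch build >= 1.7 that matches your CUDA/CPU setup
pip install torch torchvision
```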
### **Installation**

1. Clone the repo.

```bash
git clone https://github.com/wwbwang/sr_3dunet
cd sr_3dunet
```
2. Install dependent packages and RESIN.

```bash
# Install dependent packages (python==3.9.0)
pip install basicsr
pip install -r requirements.txt
# Install RESIN
python setup.py develop
```
3. Switch the branch to RESIN.

```bash
git checkout RESIN
```
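After the three steps above, an optional sanity check can confirm that the core packages import in the new environment. This is just a convenience, not part of the documented setup, and it assumes PyTorch and BasicSR expose `__version__` as usual.

```bash
# Optional: verify that PyTorch and BasicSR are importable
python -c "import torch, basicsr; print(torch.__version__, basicsr.__version__)"
```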
## ⚡ **Quick Inference**

```bash
# Inference on the tif files in a folder
bash inference_from_folder.sh
# Inference on a single h5 file
bash inference_from_h5.sh
```
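If you need to pin a run to a specific GPU, the standard `CUDA_VISIBLE_DEVICES` mechanism should work, assuming the scripts rely on PyTorch's default device selection (an assumption, not something the scripts document):

```bash
# Run the folder inference on GPU 0 only
CUDA_VISIBLE_DEVICES=0 bash inference_from_folder.sh
```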
## 💻 Training

### **Dataset Preparation**

1. Get a big ROI region from your data; the format should be h5/ims (or tif/tiff).
2. Crop the big single image into sub-images, e.g. 128×128. Note that you can apply filtering rules (via the script arguments or your own code) when selecting sub-images, as keeping the dataset clean is essential for successful training. The sub-image size is independent of the training patch size: the dataloader will further randomly crop each sub-image into GT_size×GT_size patches during training.
The scripts are as follows:
```bash
# Get sub-images from a single tif file
python scripts/get_subdatasets_from_bigtif.py --size=128 --percentiles_lower_bound=0.75 --percentiles_upper_bound=0.99 --minmax=450 --maxmin=60000 --input_folder="path" --output_folder="path" --input_tif_name="name" --crop_size=50
# Get sub-images from a single h5 file
python scripts/get_subdatasets_from_ims.py --size=128 --percentiles_lower_bound=0 --percentiles_upper_bound=0.9999 --minmax=400 --maxmin=60000 --input_folder="path" --output_folder="path" --input_tif_name="name" --x_floor=0 --x_upper=8700 --y_floor=0 --y_upper=8400 --z_floor=0 --z_upper=900 --channel=1
```
If the direction of anisotropy is not axial but at a 45° orientation, add `--rotated_flag` at the end of the command. The script will generate 2 datasets (or 3 if `--rotated_flag` is set): `frontground_datasets` and `background_datasets` (plus `rotated_frontground_datatsets`). Note that `background_datasets` is only there for checking.
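For example, the tif command above with the flag appended (all paths and thresholds remain placeholders to substitute with your own values):

```bash
# Same extraction as above, for data whose anisotropy runs at 45°
python scripts/get_subdatasets_from_bigtif.py --size=128 --percentiles_lower_bound=0.75 --percentiles_upper_bound=0.99 --minmax=450 --maxmin=60000 --input_folder="path" --output_folder="path" --input_tif_name="name" --crop_size=50 --rotated_flag
```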
### **Start Training**
Let's take the de-anisotropy pipeline with VISoR's NISSL datasets as an example.
Neuron, NISSL, DAPI, and other data types are supported.