[add] add documentation files
marta-seq committed May 20, 2024
1 parent 3c9bcaa commit 8dbbb06
Showing 68 changed files with 7,281 additions and 0 deletions.
20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
Binary file added docs/build/doctrees/environment.pickle
Binary file not shown.
Binary file added docs/build/doctrees/functions_usage.doctree
Binary file not shown.
Binary file added docs/build/doctrees/index.doctree
Binary file not shown.
Binary file added docs/build/doctrees/notebook_usage.doctree
Binary file not shown.
Binary file added docs/build/doctrees/script_usage.doctree
Binary file not shown.
Binary file added docs/build/doctrees/usage.doctree
Binary file not shown.
4 changes: 4 additions & 0 deletions docs/build/html/.buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: d16f62a72d6416d6b4f4458de1c82549
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added docs/build/html/_images/all_ch_image.png
Binary file added docs/build/html/_images/all_ch_image_2.png
Binary file added docs/build/html/_images/save_table.png
73 changes: 73 additions & 0 deletions docs/build/html/_sources/functions_usage.rst.txt
@@ -0,0 +1,73 @@
Functions usage
===============

Parse Image
-----------
To parse TIFF files into NumPy arrays, you can use the ``ImageParser.parse_image()`` function:

.. autofunction:: ImageParser.parse_image

If your TIFF files are not stacks but page-based TIFFs, use:

.. autofunction:: ImageParser.parse_image_pages

Lastly, to extract the channel names from the TIFF pages, use the ``ImageParser.parse_image_pages_namesCH()`` function.

.. autofunction:: ImageParser.parse_image_pages_namesCH

Preprocessing Image
-------------------
This pipeline has two main preprocessing functions: outlier saturation and normalization.

To saturate outliers you can use:

.. autofunction:: ImagePreprocessFilters.remove_outliers

To normalize, PENGUIN uses:

.. autofunction:: ImagePreprocessFilters.normalize_channel_cv2_minmax
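For intuition, the two steps can be sketched in plain NumPy. This is an illustrative approximation, not the library code; ``saturate_outliers`` and ``minmax_normalize`` are hypothetical stand-ins for ``remove_outliers`` and ``normalize_channel_cv2_minmax``:

```python
import numpy as np

def saturate_outliers(img, down_limit=1, up_limit=99):
    # Clip pixel values to the given lower/upper percentiles.
    lo, hi = np.percentile(img, [down_limit, up_limit])
    return np.clip(img, lo, hi)

def minmax_normalize(img):
    # Rescale a single channel to the [0, 1] range.
    img = img.astype(np.float64)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

channel = np.random.default_rng(0).integers(0, 1000, size=(64, 64)).astype(np.float64)
clean = minmax_normalize(saturate_outliers(channel))
```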

Thresholding
------------
Thresholding discards background signal by removing low-intensity pixels from the (already normalized) images.

The most straightforward approach is thresholding on the pixel value itself: pixels below the threshold are set to 0.

.. autofunction:: ImagePreprocessFilters.out_ratio2

Other thresholding techniques are also available:

.. autofunction:: ImagePreprocessFilters.th_otsu
.. autofunction:: ImagePreprocessFilters.th_isodata
.. autofunction:: ImagePreprocessFilters.th_li
.. autofunction:: ImagePreprocessFilters.th_yen
.. autofunction:: ImagePreprocessFilters.th_triangle
.. autofunction:: ImagePreprocessFilters.th_mean
.. autofunction:: ImagePreprocessFilters.th_local
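Conceptually, value-based thresholding is very simple; a minimal NumPy sketch (illustrative only, not the library's ``out_ratio2`` implementation):

```python
import numpy as np

def threshold_channel(img, th=0.1):
    # Zero out pixels below the threshold; keep the rest unchanged.
    out = img.copy()
    out[out < th] = 0.0
    return out

normalized = np.array([[0.05, 0.50],
                       [0.09, 0.90]])
cleaned = threshold_channel(normalized, th=0.1)
# Background (0.05, 0.09) is removed; signal (0.50, 0.90) is kept.
```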

Percentile Filter
-----------------
In median filters, the center pixel is replaced by the median of the ranked values of its surrounding pixels. They excel at removing impulse noise, since such noise usually ranks at the extreme ends of the brightness scale. Percentile filters generalize median filters: the pixel is replaced by an arbitrary percentile of the window rather than only the median (the 50th percentile). Different markers may benefit from different noise-reduction values, as they may display more or less shot noise.

To apply a percentile filter to each channel:

.. autofunction:: ImagePreprocessFilters.percentile_filter

If you want to apply the hybrid median filter, you can check this implementation:

.. autofunction:: ImagePreprocessFilters.hybrid_median_filter
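As a rough sketch of what a percentile filter does (the parameter names mirror the ``window_size`` and ``percentile`` arguments shown in the script examples; the library's implementation may differ):

```python
import numpy as np

def percentile_filter_2d(img, window_size=3, percentile=50):
    # Replace each pixel by the given percentile of its local window.
    pad = window_size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.percentile(
                padded[i:i + window_size, j:j + window_size], percentile)
    return out

# A lone hot pixel (impulse noise) is suppressed by the 50th percentile (median).
noisy = np.zeros((5, 5))
noisy[2, 2] = 1.0
smoothed = percentile_filter_2d(noisy, window_size=3, percentile=50)
```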

Save Images
-----------
Lastly, to save the denoised images as multi-page TIFFs, use ``ImagePreprocessFilters.save_images()``:

.. autofunction:: ImagePreprocessFilters.save_images

To save as multipage tiffs with page names as metadata:

.. autofunction:: ImagePreprocessFilters.save_images_ch_names

And to save the channel names as page names use:

.. autofunction:: ImagePreprocessFilters.save_img_ch_names_pages


48 changes: 48 additions & 0 deletions docs/build/html/_sources/index.rst.txt
@@ -0,0 +1,48 @@
.. PENGUIN documentation master file, created by
   sphinx-quickstart on Sun May 19 17:43:17 2024.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to PENGUIN's documentation!
===================================

**PENGUIN** - Percentile Normalization GUI Image deNoising is a rapid and efficient image preprocessing pipeline for multiplexed spatial proteomics. In comparison to existing approaches, PENGUIN stands out by eliminating the need for manual annotation or machine learning model training. It effectively preserves signal intensity differences and reduces noise.

PENGUIN's simplicity, speed, and user-friendly interface, deployed both as script and as a Jupyter notebook, facilitate parameter testing and image processing.

This repository contains the documentation files for running PENGUIN.

The general view of PENGUIN:

.. image:: ../../figs/main_figure.png


.. toctree::
   :maxdepth: 2
   :caption: Contents:

   usage
   notebook_usage
   script_usage
   functions_usage

Credits
==========
If you find this repository useful in your research or for educational purposes, please refer to:

License
==========

Developed at the Leiden University Medical Centre, The Netherlands, and the
Centre of Biological Engineering, University of Minho, Portugal.

Released under the GNU General Public License (version 3.0).



Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
51 changes: 51 additions & 0 deletions docs/build/html/_sources/notebook_usage.rst.txt
@@ -0,0 +1,51 @@
Notebook usage
===============

There are two notebooks available.

Use ``check_th_all_ch_per_image`` if each FOV is a stack of channels.

Use ``check_th_one_ch_per_image`` if each FOV is a directory with multiple TIFFs inside (one per channel).

Open the notebook and click Kernel -> Restart and Run all

For all channels in a stack it should look like this:

.. image:: ../../figs/all_ch_image.png
   :alt: layout of the notebook and folder structure for stacks of channels


Change the path to the location of your data and click 'Change Path'.

You can now change the channels to visualize and select different percentiles and thresholds.

The Compare Images tab shows a comparison between the raw image and the cleaned image with the chosen settings.

The Compare Zoom tab plots the images with the Plotly library, which allows zooming into regions of interest.

You can change the number of images displayed and specify an image name.

In the case of stacks of channels, your channel names should be in the page tags. Otherwise, the channel names will be
set as index numbers.

In the case of a directory with multiple files, the channel names should be in the file names.


.. image:: ../../figs/all_ch_image_2.png
   :alt: layout of the notebook with image comparison

Once you have defined the percentile and threshold values, you can save your images by clicking the Save button.

In the case of a file per channel, you can save all the images of the same channel at once.

In the case of stacks with multiple channels per FOV, you need to define the values for each channel in the pop-up table and click Save (see below).

.. image:: ../../figs/save_table.png
   :alt: saving table

Saving images will mimic your folder structure and file names (and page tags) in the
saving directory.

61 changes: 61 additions & 0 deletions docs/build/html/_sources/script_usage.rst.txt
@@ -0,0 +1,61 @@

Script usage
============


If you want to process your images directly without the notebooks, there are two example pipelines: one for images with stacks of channels, and one for images with each channel in a separate file.

In this case, you will not be able to interactively check which thresholds and percentiles best suit each channel.

The scripts apply the pipeline:
- saturation of outliers
- channel normalization
- thresholding
- percentile filtering
- save

The following code is only a snapshot; please check the full script for details.

For stacks of channels, and with all the parameters defined, the general idea is as follows:

.. code-block:: python

   images_original = list(map(IP.parse_image_pages, files))
   imgs_out = map(lambda p: IPrep.remove_outliers(p, up_limit, down_limit), images_original)
   imgs_norm = map(IPrep.normalize_channel_cv2_minmax, imgs_out)
   filtered_images = map(lambda i: preprocess_image(i, thresholds, percentiles), imgs_norm)
   imgs_filtered = list(filtered_images)
   # save with channel names (list() forces the lazy map to execute)
   images_final = list(map(
       lambda p, f: IPrep.save_img_ch_names_pages(p, f, ch_last=True, channel_names=channel_names),
       imgs_filtered, names_save))

``preprocess_image`` is a function defined in the example script; it applies the thresholding and percentile filtering per channel.
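A hedged sketch of such a per-channel helper — thresholding followed by a percentile filter, with the channel axis last as the ``ch_last=True`` calls suggest — might look like this (illustrative only; the script's actual helper may differ):

```python
import numpy as np

def preprocess_image(img, thresholds, percentiles):
    # Hypothetical per-channel pipeline on an (H, W, C) normalized stack.
    # thresholds / percentiles hold one value per channel; None skips a step.
    out = img.astype(np.float64).copy()
    for c, (th, perc) in enumerate(zip(thresholds, percentiles)):
        if th is not None:
            ch = out[..., c]
            ch[ch < th] = 0.0          # value thresholding
        if perc is not None:           # 3x3 percentile filter (stand-in)
            padded = np.pad(out[..., c], 1, mode="edge")
            filt = np.empty_like(out[..., c])
            for i in range(filt.shape[0]):
                for j in range(filt.shape[1]):
                    filt[i, j] = np.percentile(padded[i:i + 3, j:j + 3], perc)
            out[..., c] = filt
    return out

stack = np.random.default_rng(1).random((8, 8, 2))
processed = preprocess_image(stack, thresholds=[0.5, None], percentiles=[50, None])
```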


For channels stored one per file and organized in patient folders, and with all the parameters defined, the general idea is as follows:

.. code-block:: python

   for channel, th, perc in zip(channel_names, thresholds, percentiles):
       file_paths = [file for file in files if str(channel + '.ome.tiff') in str(file)]
       images_original = list(map(IP.parse_image, file_paths))
       imgs_out = map(lambda p: IPrep.remove_outliers(p, up_limit, down_limit), images_original)
       imgs_norm = map(IPrep.normalize_channel_cv2_minmax, imgs_out)
       if isinstance(th, float):
           imgs_filtered = list(map(lambda p: IPrep.out_ratio2(p, th=th), imgs_norm))
       if perc is not None:
           imgs_filtered = map(
               lambda p: IPrep.percentile_filter(p, window_size=3, percentile=perc, transf_bool=True),
               imgs_filtered)
       list(map(lambda p, f: IPrep.save_images(p, f, ch_last=True), imgs_filtered, names_save))

Please check the scripts for additional parameters, and feel free to adjust the code.
46 changes: 46 additions & 0 deletions docs/build/html/_sources/usage.rst.txt
@@ -0,0 +1,46 @@
Set up
======

Installation
--------------
To use PENGUIN, first clone the GitHub repository to your local machine with the following command:

.. code-block:: bash

   git clone https://github.com/your_username/PENGUIN.git

Environment set up
-------------------

You can create the environment by installing the packages manually or by using the YAML file.

To create the environment and install the packages manually, use:

.. code-block:: bash

   conda create --name penguin
   conda activate penguin
   conda install matplotlib pandas panel numpy opencv scikit-image ipywidgets jupyter ipykernel plotly
   pip install apeer-ometiff-library --no-deps

Alternatively, you can create the environment from the YAML file:

.. code-block:: bash

   conda env create --file penguin_env.yml
   conda activate penguin
   pip install apeer-ometiff-library --no-deps
After creating the environment, if you want to use the Jupyter notebooks,
add the environment kernel to Jupyter:

.. code-block:: bash

   python -m ipykernel install --user --name=penguin

Then launch Jupyter and make sure you are running with the penguin kernel.
