Commit
Updated conda env
LisaSikkema committed Mar 21, 2022
1 parent a262437 commit 67b68cc
Showing 2 changed files with 15 additions and 15 deletions.
8 changes: 4 additions & 4 deletions envs/scarches_mapping_conda_env.yml
@@ -7,10 +7,10 @@ dependencies:
 - pip
 - pip:
   - jupyterlab
-  - scanpy==1.8.2
-  - torch==1.8.0
+  - scanpy>=1.8.2
+  - torch>=1.3,<=1.8.0
   - scarches==0.3.5
   - scvi-tools==0.8.1
-  - umap-learn==0.5.2
-  - pynndescent==0.5.5
+  - umap-learn>=0.5.2
+  - pynndescent #>=0.5.5

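The env update above loosens several exact pins (`==`) into ranged specifiers. As an illustrative aside (not part of the repository), the widely available `packaging` library can be used to check whether a given version satisfies a range such as the new `torch>=1.3,<=1.8.0` constraint:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Range taken from the updated env file: any torch from 1.3
# up to and including 1.8.0 is accepted.
torch_spec = SpecifierSet(">=1.3,<=1.8.0")

print(Version("1.8.0") in torch_spec)  # True: upper bound is inclusive
print(Version("1.9.0") in torch_spec)  # False: newer than the pinned maximum
print(Version("1.2.0") in torch_spec)  # False: below the minimum
```

This is the same specifier grammar pip itself uses when resolving the `pip:` section of a conda env file.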
22 changes: 11 additions & 11 deletions notebooks/LCA_scArches_mapping_new_data_to_hlca.ipynb
@@ -11,14 +11,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"In this notebook, we will guide you through how to map your data to the Human Lung Cell Atlas (core reference), perform label transfer, and more. For that purpose we use scArches, a method to map new single cell/single nucleus data to an existing reference (see also Lotfollahi et al., Nature Biotechnology 2021 https://doi.org/10.1038/s41587-021-01001-7). "
+"In this notebook, we will guide you through how to map your data to the [Human Lung Cell Atlas](https://www.biorxiv.org/content/10.1101/2022.03.10.483747v1) (core reference), perform label transfer, and more. For that purpose we use scArches, a method to map new single cell/single nucleus data to an existing reference (see also [Lotfollahi et al., Nature Biotechnology 2021](https://doi.org/10.1038/s41587-021-01001-7)). "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"Import the needed modules. __Note that we use scArches version 0.3.5.__ For efficiency of knn-graph and umap calculation, we recommend using scanpy>=1.8.2, umap-learn>0.5, and installing pynndescent: `pip install pynndescent`."
+"Import the needed modules. __Note that we use scArches version 0.3.5 and scvi-tools 0.8.1.__ For efficiency of knn-graph and umap calculation, we recommend using scanpy>=1.8.2, umap-learn>0.5, and installing pynndescent: `pip install pynndescent`. If you used the conda environment provided on our GitHub repo, all of these packages were automatically installed with the correct versions, so no need to check!"
]
},
{
@@ -211,7 +211,7 @@
"pydevd_tracing NA\n",
"pygments 2.11.2\n",
"pyparsing 3.0.7\n",
-"pytz                        2021.3\n",
+"pytz                        2022.1\n",
"requests 2.27.1\n",
"rich NA\n",
"scarches NA\n",
@@ -246,7 +246,7 @@
"Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-centos-7.9.2009-Core\n",
"112 logical CPU cores, x86_64\n",
"-----\n",
-"Session information updated at 2022-03-18 16:46\n",
+"Session information updated at 2022-03-21 11:43\n",
"\n"
]
}
@@ -326,8 +326,8 @@
"name": "stderr",
"output_type": "stream",
"text": [
-"2022-03-18 16:46:57 URL:https://zenodo.org/record/6337966/files/HLCA_emb_and_metadata.h5ad [217785664/217785664] -> \"HLCA_emb_and_metadata.h5ad\" [1]\n",
-"2022-03-18 16:46:59 URL:https://zenodo.org/record/6337966/files/HLCA_reference_model.zip [5321666/5321666] -> \"HLCA_reference_model.zip\" [1]\n"
+"2022-03-21 11:44:31 URL:https://zenodo.org/record/6337966/files/HLCA_emb_and_metadata.h5ad [217785664/217785664] -> \"HLCA_emb_and_metadata.h5ad\" [1]\n",
+"2022-03-21 11:44:35 URL:https://zenodo.org/record/6337966/files/HLCA_reference_model.zip [5321666/5321666] -> \"HLCA_reference_model.zip\" [1]\n"
]
}
],
@@ -459,7 +459,7 @@
{
"data": {
"text/plain": [
-"('../test/testmeta.csv.gz', <http.client.HTTPMessage at 0x7f0a60ac3590>)"
+"('../test/testmeta.csv.gz', <http.client.HTTPMessage at 0x7f3ffb61b250>)"
]
},
"execution_count": 15,
@@ -810,15 +810,15 @@
"\u001b[34mINFO \u001b[0m Training Unsupervised Trainer for \u001b[1;36m400\u001b[0m epochs. \n",
"\u001b[34mINFO \u001b[0m Training SemiSupervised Trainer for \u001b[1;36m500\u001b[0m epochs. \n",
"\u001b[34mINFO \u001b[0m KL warmup for \u001b[1;36m400\u001b[0m epochs \n",
-"Training...: 57%|█████▋ | 283/500 [02:19<02:00, 1.80it/s]\u001b[34mINFO \u001b[0m Reducing LR on epoch \u001b[1;36m283\u001b[0m. \n",
-"Training...: 57%|█████▋ | 285/500 [02:21<02:01, 1.77it/s]\u001b[34mINFO \u001b[0m \n",
+"Training...: 57%|█████▋ | 283/500 [02:04<01:34, 2.30it/s]\u001b[34mINFO \u001b[0m Reducing LR on epoch \u001b[1;36m283\u001b[0m. \n",
+"Training...: 57%|█████▋ | 285/500 [02:05<01:34, 2.28it/s]\u001b[34mINFO \u001b[0m \n",
" Stopping early: no improvement of more than \u001b[1;36m0.001\u001b[0m nats in \u001b[1;36m10\u001b[0m epochs \n",
"\u001b[34mINFO \u001b[0m If the early stopping criterion is too strong, please instantiate it with different \n",
" parameters in the train method. \n",
-"Training...: 57%|█████▋ | 285/500 [02:21<01:46, 2.01it/s]\n",
+"Training...: 57%|█████▋ | 285/500 [02:05<01:35, 2.26it/s]\n",
"\u001b[34mINFO \u001b[0m Training is still in warming up phase. If your applications rely on the posterior \n",
" quality, consider training for more epochs or reducing the kl warmup. \n",
-"\u001b[34mINFO \u001b[0m Training time: \u001b[1;36m52\u001b[0m s. \u001b[35m/\u001b[0m \u001b[1;36m500\u001b[0m epochs \n"
+"\u001b[34mINFO \u001b[0m Training time: \u001b[1;36m46\u001b[0m s. \u001b[35m/\u001b[0m \u001b[1;36m500\u001b[0m epochs \n"
]
}
],
