From 0e36d6a82284cd28776383269b61752b581b24cc Mon Sep 17 00:00:00 2001 From: cmalinmayor Date: Thu, 22 Aug 2024 15:35:02 +0000 Subject: [PATCH] Commit from GitHub Actions (Build Notebooks) --- exercise.ipynb | 164 ++++++++++++++++++++++++------------------------- solution.ipynb | 162 ++++++++++++++++++++++++------------------------ 2 files changed, 163 insertions(+), 163 deletions(-) diff --git a/exercise.ipynb b/exercise.ipynb index 86a239f..445a6c0 100644 --- a/exercise.ipynb +++ b/exercise.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "0b39ad81", + "id": "138bd7ef", "metadata": {}, "source": [ "# Exercise 9: Tracking-by-detection with an integer linear program (ILP)\n", @@ -45,7 +45,7 @@ }, { "cell_type": "markdown", - "id": "64b847d5", + "id": "a6cfa9e1", "metadata": {}, "source": [ "Visualizations on a remote machine\n", @@ -71,7 +71,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b665c14c", + "id": "ee0decd3", "metadata": {}, "outputs": [], "source": [ @@ -81,7 +81,7 @@ }, { "cell_type": "markdown", - "id": "937b2de0", + "id": "865ea852", "metadata": {}, "source": [ "## Import packages" @@ -90,7 +90,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ab5e2850", + "id": "4559a80c", "metadata": {}, "outputs": [], "source": [ @@ -119,7 +119,7 @@ }, { "cell_type": "markdown", - "id": "0ebad18d", + "id": "e49ab080", "metadata": {}, "source": [ "## Load the dataset and inspect it in napari" @@ -127,7 +127,7 @@ }, { "cell_type": "markdown", - "id": "33e97db7", + "id": "604250b9", "metadata": {}, "source": [ "For this exercise we will be working with a fluorescence microscopy time-lapse of breast cancer cells with stained nuclei (SiR-DNA). It is similar to the dataset at https://zenodo.org/record/4034976#.YwZRCJPP1qt. The raw data, pre-computed segmentations, and detection probabilities are saved in a zarr, and the ground truth tracks are saved in a csv. 
The segmentation was generated with a pre-trained StarDist model, so there may be some segmentation errors, which can affect the tracking process. The detection probabilities also come from StarDist, and are downsampled by a factor of 2 in x and y compared to the detections and raw data."
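As a quick illustration of the 2x downsampling mentioned above: a detection's (y, x) coordinates must be halved before indexing into the probability map. The array shapes and values below are made up for illustration; they are not the exercise's actual data.

```python
import numpy as np

# Hypothetical shapes: raw/segmentation are (t, y, x); the probability
# map is downsampled by 2 in both spatial dimensions.
raw = np.zeros((10, 64, 64))
probabilities = np.zeros((10, 32, 32))
probabilities[3, 20, 15] = 0.9

def detection_probability(prob_map, t, y, x):
    # Halve the spatial coordinates to account for the 2x downsampling.
    return prob_map[t, int(y) // 2, int(x) // 2]

print(detection_probability(probabilities, 3, 40, 30))  # 0.9
```

The integer division also handles the fractional centroid coordinates that come out of a segmentation.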
@@ -188,7 +188,7 @@ }, { "cell_type": "markdown", - "id": "32ce9844", + "id": "4a5a7caf", "metadata": {}, "source": [ "## Read in the ground truth graph\n", @@ -206,7 +206,7 @@ }, { "cell_type": "markdown", - "id": "e55490f9", + "id": "451d5275", "metadata": {}, "source": [ "\n", @@ -230,7 +230,7 @@ { "cell_type": "code", "execution_count": null, - "id": "8c46df68", + "id": "6762408f", "metadata": { "tags": [ "task" @@ -249,7 +249,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7914aca5", + "id": "c4cd5a11", "metadata": {}, "outputs": [], "source": [ @@ -269,7 +269,7 @@ }, { "cell_type": "markdown", - "id": "51c2b76c", + "id": "37392c6b", "metadata": {}, "source": [ "Here we set up a napari widget for visualizing the tracking results. This is part of the motile napari plugin, not part of core napari.\n", @@ -279,17 +279,17 @@ { "cell_type": "code", "execution_count": null, - "id": "0a6e2524", + "id": "a3b8a6ba", "metadata": {}, "outputs": [], "source": [ "widget = plugin_widgets.TreeWidget(viewer)\n", - "viewer.window.add_dock_widget(widget, name=\"Lineage View\", area=\"bottom\")" + "viewer.window.add_dock_widget(widget, name=\"Lineage View\", area=\"right\")" ] }, { "cell_type": "markdown", - "id": "25d8b7ce", + "id": "d73a9db9", "metadata": {}, "source": [ "Here we add a \"MotileRun\" to the napari tracking visualization widget (the \"view_controller\"). A MotileRun includes a name, a set of tracks, and a segmentation. The tracking visualization widget will add:\n", @@ -306,7 +306,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e86bfbe0", + "id": "7e1767ae", "metadata": {}, "outputs": [], "source": [ @@ -321,7 +321,7 @@ }, { "cell_type": "markdown", - "id": "ffd1eff5", + "id": "14d2aaec", "metadata": { "lines_to_next_cell": 2 }, @@ -338,7 +338,7 @@ }, { "cell_type": "markdown", - "id": "5e73893b", + "id": "79a311b4", "metadata": {}, "source": [ "

Task 2: Extract candidate nodes from the predicted segmentations

\n", @@ -361,7 +361,7 @@ { "cell_type": "code", "execution_count": null, - "id": "6ab031cd", + "id": "95b3bad5", "metadata": { "tags": [ "task" @@ -395,7 +395,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c9e0be21", + "id": "4392e656", "metadata": {}, "outputs": [], "source": [ @@ -417,7 +417,7 @@ }, { "cell_type": "markdown", - "id": "cd914270", + "id": "a27f4de4", "metadata": {}, "source": [ "We can visualize our candidate points using the napari Points layer. You should see one point in the center of each segmentation when we display it using the below cell." @@ -426,7 +426,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7d4dd97e", + "id": "0f496da7", "metadata": {}, "outputs": [], "source": [ @@ -437,7 +437,7 @@ }, { "cell_type": "markdown", - "id": "e35ddd5e", + "id": "f07ad5cb", "metadata": {}, "source": [ "### Adding Candidate Edges\n", @@ -450,7 +450,7 @@ { "cell_type": "code", "execution_count": null, - "id": "289bedec", + "id": "bb3e1128", "metadata": {}, "outputs": [], "source": [ @@ -519,7 +519,7 @@ }, { "cell_type": "markdown", - "id": "92e5f251", + "id": "b623b1cb", "metadata": {}, "source": [ "Visualizing the candidate edges in napari is, unfortunately, not yet possible. However, we can print out the number of candidate nodes and edges, and compare it to the ground truth nodes and edgesedges. We should see that we have a few more candidate nodes than ground truth (due to false positive detections) and many more candidate edges than ground truth - our next step will be to use optimization to pick a subset of the candidate nodes and edges to generate our solution tracks." 
@@ -528,7 +528,7 @@ { "cell_type": "code", "execution_count": null, - "id": "48dba093", + "id": "6317ab6f", "metadata": {}, "outputs": [], "source": [ @@ -538,7 +538,7 @@ }, { "cell_type": "markdown", - "id": "09cd906f", + "id": "3b0ce77f", "metadata": {}, "source": [ "## Checkpoint 1\n", @@ -550,7 +550,7 @@ }, { "cell_type": "markdown", - "id": "5ea1e7b0", + "id": "281db9ac", "metadata": {}, "source": [ "## Setting Up the Tracking Optimization Problem" @@ -558,7 +558,7 @@ }, { "cell_type": "markdown", - "id": "3131c8da", + "id": "fc5b5740", "metadata": {}, "source": [ "As hinted earlier, our goal is to prune the candidate graph. More formally we want to find a graph $\\tilde{G}=(\\tilde{V}, \\tilde{E})$ whose vertices $\\tilde{V}$ are a subset of the candidate graph vertices $V$ and whose edges $\\tilde{E}$ are a subset of the candidate graph edges $E$.\n", @@ -573,7 +573,7 @@ }, { "cell_type": "markdown", - "id": "b63205bf", + "id": "2b5106ef", "metadata": {}, "source": [ "## Task 3 - Basic tracking with motile\n", @@ -597,7 +597,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e3f3ec71", + "id": "72cbbc54", "metadata": { "tags": [ "task" @@ -625,7 +625,7 @@ }, { "cell_type": "markdown", - "id": "0126006f", + "id": "aa9f22b9", "metadata": {}, "source": [ "Here is a utility function to gauge some statistics of a solution." 
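To build intuition for what the ILP does before running motile itself, here is a tiny brute-force version of the same idea: enumerate edge subsets, keep only those satisfying the max-one-parent / max-two-children constraints, and pick the subset with the lowest total cost. The graph, the cost values, and the simplification that only nodes incident to selected edges contribute their node cost are all made up for illustration.

```python
from itertools import combinations

# Toy candidate graph: node -> cost, edge -> cost (negative = attractive).
node_cost = {"a0": -1.0, "b1": -1.0, "c1": 2.0}
edge_cost = {("a0", "b1"): -0.5, ("a0", "c1"): -0.5}

def total_cost(edges):
    # Sum costs of selected edges plus the nodes they touch.
    nodes = {n for e in edges for n in e}
    return sum(node_cost[n] for n in nodes) + sum(edge_cost[e] for e in edges)

def feasible(edges):
    # Each node may have at most one parent and at most two children.
    parents, children = {}, {}
    for u, v in edges:
        parents[v] = parents.get(v, 0) + 1
        children[u] = children.get(u, 0) + 1
    return all(c <= 1 for c in parents.values()) and all(c <= 2 for c in children.values())

best = min(
    (subset for r in range(len(edge_cost) + 1)
     for subset in combinations(edge_cost, r) if feasible(subset)),
    key=total_cost,
)
print(best)  # (('a0', 'b1'),)
```

Note that the expensive false-positive node "c1" is left out of the minimum-cost solution, which is exactly the pruning behavior we want — and why at least some costs must be negative, or selecting nothing (cost 0) always wins.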
@@ -634,7 +634,7 @@ { "cell_type": "code", "execution_count": null, - "id": "272a25dc", + "id": "a78e997d", "metadata": {}, "outputs": [], "source": [ @@ -644,7 +644,7 @@ }, { "cell_type": "markdown", - "id": "d3182e66", + "id": "d8a3db9b", "metadata": {}, "source": [ "Here we actually run the optimization, and compare the found solution to the ground truth.\n", @@ -662,7 +662,7 @@ { "cell_type": "code", "execution_count": null, - "id": "016dd730", + "id": "69307c32", "metadata": {}, "outputs": [], "source": [ @@ -676,7 +676,7 @@ }, { "cell_type": "markdown", - "id": "8a44c2f8", + "id": "808d3a01", "metadata": {}, "source": [ "If you haven't selected any nodes or edges in your solution, try adjusting your weight and/or constant values. Make sure you have some negative costs or selecting nothing will always be the best solution!" @@ -684,7 +684,7 @@ }, { "cell_type": "markdown", - "id": "9724a28b", + "id": "157b20a7", "metadata": {}, "source": [ "

Question 1: Interpret your results based on statistics

\n", @@ -696,7 +696,7 @@ }, { "cell_type": "markdown", - "id": "a333b3f8", + "id": "c7c28275", "metadata": {}, "source": [ "

Checkpoint 2

\n", @@ -706,7 +706,7 @@ }, { "cell_type": "markdown", - "id": "fad0fa38", + "id": "2ee5e2e9", "metadata": {}, "source": [ "## Visualize the Result\n", @@ -722,7 +722,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e6c97073", + "id": "5046281c", "metadata": {}, "outputs": [], "source": [ @@ -759,7 +759,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a5a66a45", + "id": "0f02c7cc", "metadata": {}, "outputs": [], "source": [ @@ -774,7 +774,7 @@ }, { "cell_type": "markdown", - "id": "5cdd69ae", + "id": "9e927b27", "metadata": {}, "source": [ "

Question 2: Interpret your results based on visualization

\n", @@ -786,7 +786,7 @@ }, { "cell_type": "markdown", - "id": "e873085d", + "id": "869428e5", "metadata": { "lines_to_next_cell": 2 }, @@ -805,7 +805,7 @@ }, { "cell_type": "markdown", - "id": "31397b18", + "id": "ad0c86c6", "metadata": {}, "source": [ "The metrics we want to compute require a ground truth segmentation. Since we do not have a ground truth segmentation, we can make one by drawing a circle around each ground truth detection. While not perfect, it will be good enough to match ground truth to predicted detections in order to compute metrics." @@ -814,7 +814,7 @@ { "cell_type": "code", "execution_count": null, - "id": "40a0bb7c", + "id": "7047e38f", "metadata": {}, "outputs": [], "source": [ @@ -837,7 +837,7 @@ { "cell_type": "code", "execution_count": null, - "id": "cc4f9769", + "id": "3f17dab5", "metadata": {}, "outputs": [], "source": [ @@ -895,7 +895,7 @@ { "cell_type": "code", "execution_count": null, - "id": "8ccdcb3d", + "id": "a4a68921", "metadata": {}, "outputs": [], "source": [ @@ -906,7 +906,7 @@ }, { "cell_type": "markdown", - "id": "e555529a", + "id": "85b7185d", "metadata": {}, "source": [ "

Question 3: Interpret your results based on metrics

\n", @@ -918,7 +918,7 @@ }, { "cell_type": "markdown", - "id": "a265a335", + "id": "c3c06a4c", "metadata": {}, "source": [ "

Checkpoint 3

\n", @@ -931,7 +931,7 @@ { "cell_type": "code", "execution_count": null, - "id": "83f4bb68", + "id": "c6e252d9", "metadata": { "tags": [ "task" @@ -975,7 +975,7 @@ }, { "cell_type": "markdown", - "id": "656d1f54", + "id": "0ef3a125", "metadata": {}, "source": [ "## Customizing the Tracking Task\n", @@ -990,7 +990,7 @@ }, { "cell_type": "markdown", - "id": "f8ca7c2e", + "id": "c1ae7c8d", "metadata": {}, "source": [ "## Task 4 - Incorporating Known Direction of Motion\n", @@ -1001,7 +1001,7 @@ }, { "cell_type": "markdown", - "id": "8a4c15ff", + "id": "39cdb879", "metadata": {}, "source": [ "

Task 4a: Add a drift distance attribute

\n", @@ -1012,7 +1012,7 @@ { "cell_type": "code", "execution_count": null, - "id": "bbd67721", + "id": "33ae70e2", "metadata": { "tags": [ "task" @@ -1035,7 +1035,7 @@ }, { "cell_type": "markdown", - "id": "e27afa8f", + "id": "331551f1", "metadata": {}, "source": [ "

Task 4b: Use the drift distance in the solver

\n", @@ -1047,7 +1047,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b5141e0a", + "id": "8b4b97c7", "metadata": { "tags": [ "task" @@ -1094,7 +1094,7 @@ }, { "cell_type": "markdown", - "id": "a92849b8", + "id": "ad17dd84", "metadata": {}, "source": [ "Feel free to tinker with the weights and constants manually to try and improve the results.\n", @@ -1103,7 +1103,7 @@ }, { "cell_type": "markdown", - "id": "11401664", + "id": "40c715c8", "metadata": {}, "source": [ "

Checkpoint 4

\n", @@ -1113,7 +1113,7 @@ }, { "cell_type": "markdown", - "id": "fdb636e2", + "id": "def0932f", "metadata": {}, "source": [ "## Bonus: Learning the Weights" @@ -1121,7 +1121,7 @@ }, { "cell_type": "markdown", - "id": "8b8dfbb7", + "id": "a39859eb", "metadata": {}, "source": [ "Motile also provides the option to learn the best weights and constants using a [Structured Support Vector Machine](https://en.wikipedia.org/wiki/Structured_support_vector_machine). There is a tutorial on the motile documentation [here](https://funkelab.github.io/motile/learning.html), but we will also walk you through an example below.\n", @@ -1132,7 +1132,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2bd79aec", + "id": "fdd656b1", "metadata": {}, "outputs": [], "source": [ @@ -1161,7 +1161,7 @@ }, { "cell_type": "markdown", - "id": "7e22fdde", + "id": "69425db4", "metadata": {}, "source": [ "The SSVM does not need dense ground truth - providing only some annotations frequently is sufficient to learn good weights, and is efficient for both computation time and annotation time. Below, we create a validation graph that spans the first three time frames, and annotate it with our ground truth." @@ -1170,7 +1170,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3019d7fd", + "id": "fa336638", "metadata": { "lines_to_next_cell": 2 }, @@ -1186,7 +1186,7 @@ }, { "cell_type": "markdown", - "id": "c6593837", + "id": "813d459f", "metadata": {}, "source": [ "Here we print the number of nodes and edges that have been annotated with True and False ground truth. It is important to provide negative/False annotations, as well as positive/True annotations, or the SSVM will try and select weights to pick everything." 
@@ -1195,7 +1195,7 @@ { "cell_type": "code", "execution_count": null, - "id": "287e2c98", + "id": "77098b62", "metadata": {}, "outputs": [], "source": [ @@ -1210,7 +1210,7 @@ }, { "cell_type": "markdown", - "id": "4baf3508", + "id": "ea31a046", "metadata": {}, "source": [ "

Bonus task: Add your best solver parameters

\n", @@ -1221,7 +1221,7 @@ { "cell_type": "code", "execution_count": null, - "id": "584860d4", + "id": "5f559694", "metadata": { "tags": [ "task" @@ -1241,7 +1241,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1e2df00c", + "id": "4ad37fe2", "metadata": {}, "outputs": [], "source": [ @@ -1264,7 +1264,7 @@ }, { "cell_type": "markdown", - "id": "ca5d4aee", + "id": "ba4f57f6", "metadata": {}, "source": [ "To fit the best weights, the solver will solve the ILP many times and slowly converge to the best set of weights in a structured manner. Running the cell below may take some time - we recommend getting a Gurobi license if you want to use this technique in your research, as it speeds up solving quite a bit.\n", @@ -1275,7 +1275,7 @@ { "cell_type": "code", "execution_count": null, - "id": "503b2181", + "id": "838551d2", "metadata": {}, "outputs": [], "source": [ @@ -1287,7 +1287,7 @@ }, { "cell_type": "markdown", - "id": "e63f3f66", + "id": "771f164b", "metadata": {}, "source": [ "After we have our optimal weights, we need to solve with them on the full candidate graph." @@ -1296,7 +1296,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b437ec1e", + "id": "d82ffd67", "metadata": { "lines_to_next_cell": 2 }, @@ -1314,7 +1314,7 @@ }, { "cell_type": "markdown", - "id": "6ac6c102", + "id": "8b510487", "metadata": {}, "source": [ "Finally, we can visualize and compute metrics on the solution found using the weights discovered by the SSVM." 
@@ -1323,7 +1323,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9cf8213c", + "id": "10cd3b62", "metadata": {}, "outputs": [], "source": [ @@ -1333,7 +1333,7 @@ { "cell_type": "code", "execution_count": null, - "id": "50bb10d1", + "id": "2920af2e", "metadata": {}, "outputs": [], "source": [ @@ -1349,7 +1349,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d0310f69", + "id": "d3f6a632", "metadata": {}, "outputs": [], "source": [ @@ -1360,7 +1360,7 @@ }, { "cell_type": "markdown", - "id": "e7547065", + "id": "3d27c3ac", "metadata": {}, "source": [ "

Bonus Question: Interpret SSVM results

\n", diff --git a/solution.ipynb b/solution.ipynb index 22426e4..090b3a1 100644 --- a/solution.ipynb +++ b/solution.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "0b39ad81", + "id": "138bd7ef", "metadata": {}, "source": [ "# Exercise 9: Tracking-by-detection with an integer linear program (ILP)\n", @@ -45,7 +45,7 @@ }, { "cell_type": "markdown", - "id": "64b847d5", + "id": "a6cfa9e1", "metadata": {}, "source": [ "Visualizations on a remote machine\n", @@ -71,7 +71,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b665c14c", + "id": "ee0decd3", "metadata": {}, "outputs": [], "source": [ @@ -81,7 +81,7 @@ }, { "cell_type": "markdown", - "id": "937b2de0", + "id": "865ea852", "metadata": {}, "source": [ "## Import packages" @@ -90,7 +90,7 @@ { "cell_type": "code", "execution_count": null, - "id": "ab5e2850", + "id": "4559a80c", "metadata": {}, "outputs": [], "source": [ @@ -119,7 +119,7 @@ }, { "cell_type": "markdown", - "id": "0ebad18d", + "id": "e49ab080", "metadata": {}, "source": [ "## Load the dataset and inspect it in napari" @@ -127,7 +127,7 @@ }, { "cell_type": "markdown", - "id": "33e97db7", + "id": "604250b9", "metadata": {}, "source": [ "For this exercise we will be working with a fluorescence microscopy time-lapse of breast cancer cells with stained nuclei (SiR-DNA). It is similar to the dataset at https://zenodo.org/record/4034976#.YwZRCJPP1qt. The raw data, pre-computed segmentations, and detection probabilities are saved in a zarr, and the ground truth tracks are saved in a csv. The segmentation was generated with a pre-trained StartDist model, so there may be some segmentation errors which can affect the tracking process. The detection probabilities also come from StarDist, and are downsampled in x and y by 2 compared to the detections and raw data." 
@@ -135,7 +135,7 @@ }, { "cell_type": "markdown", - "id": "3e95c378", + "id": "3c173137", "metadata": {}, "source": [ "Here we load the raw image data, segmentation, and probabilities from the zarr, and view them in napari." @@ -144,7 +144,7 @@ { "cell_type": "code", "execution_count": null, - "id": "386ab3bf", + "id": "dce30673", "metadata": {}, "outputs": [], "source": [ @@ -157,7 +157,7 @@ }, { "cell_type": "markdown", - "id": "6ce8a47f", + "id": "187715c4", "metadata": {}, "source": [ "Let's use [napari](https://napari.org/tutorials/fundamentals/getting_started.html) to visualize the data. Napari is a wonderful viewer for imaging data that you can interact with in python, even directly out of jupyter notebooks. If you've never used napari, you might want to take a few minutes to go through [this tutorial](https://napari.org/stable/tutorials/fundamentals/viewer.html). Here we visualize the raw data, the predicted segmentations, and the predicted probabilities as separate layers. You can toggle each layer on and off in the layers list on the left." @@ -166,7 +166,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5b34aae4", + "id": "287a8c29", "metadata": { "lines_to_next_cell": 1 }, @@ -180,7 +180,7 @@ }, { "cell_type": "markdown", - "id": "44883cee", + "id": "64794ac6", "metadata": {}, "source": [ "After running the previous cell, open NoMachine and check for an open napari window." 
@@ -188,7 +188,7 @@ }, { "cell_type": "markdown", - "id": "32ce9844", + "id": "4a5a7caf", "metadata": {}, "source": [ "## Read in the ground truth graph\n", @@ -206,7 +206,7 @@ }, { "cell_type": "markdown", - "id": "e55490f9", + "id": "451d5275", "metadata": {}, "source": [ "\n", @@ -230,7 +230,7 @@ { "cell_type": "code", "execution_count": null, - "id": "fb2bac5c", + "id": "6168387e", "metadata": { "tags": [ "solution" @@ -262,7 +262,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7914aca5", + "id": "c4cd5a11", "metadata": {}, "outputs": [], "source": [ @@ -282,7 +282,7 @@ }, { "cell_type": "markdown", - "id": "51c2b76c", + "id": "37392c6b", "metadata": {}, "source": [ "Here we set up a napari widget for visualizing the tracking results. This is part of the motile napari plugin, not part of core napari.\n", @@ -292,17 +292,17 @@ { "cell_type": "code", "execution_count": null, - "id": "0a6e2524", + "id": "a3b8a6ba", "metadata": {}, "outputs": [], "source": [ "widget = plugin_widgets.TreeWidget(viewer)\n", - "viewer.window.add_dock_widget(widget, name=\"Lineage View\", area=\"bottom\")" + "viewer.window.add_dock_widget(widget, name=\"Lineage View\", area=\"right\")" ] }, { "cell_type": "markdown", - "id": "25d8b7ce", + "id": "d73a9db9", "metadata": {}, "source": [ "Here we add a \"MotileRun\" to the napari tracking visualization widget (the \"view_controller\"). A MotileRun includes a name, a set of tracks, and a segmentation. The tracking visualization widget will add:\n", @@ -319,7 +319,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e86bfbe0", + "id": "7e1767ae", "metadata": {}, "outputs": [], "source": [ @@ -334,7 +334,7 @@ }, { "cell_type": "markdown", - "id": "ffd1eff5", + "id": "14d2aaec", "metadata": { "lines_to_next_cell": 2 }, @@ -351,7 +351,7 @@ }, { "cell_type": "markdown", - "id": "5e73893b", + "id": "79a311b4", "metadata": {}, "source": [ "

Task 2: Extract candidate nodes from the predicted segmentations

\n", @@ -374,7 +374,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b9393e3e", + "id": "0181c284", "metadata": { "tags": [ "solution" @@ -417,7 +417,7 @@ { "cell_type": "code", "execution_count": null, - "id": "c9e0be21", + "id": "4392e656", "metadata": {}, "outputs": [], "source": [ @@ -439,7 +439,7 @@ }, { "cell_type": "markdown", - "id": "cd914270", + "id": "a27f4de4", "metadata": {}, "source": [ "We can visualize our candidate points using the napari Points layer. You should see one point in the center of each segmentation when we display it using the below cell." @@ -448,7 +448,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7d4dd97e", + "id": "0f496da7", "metadata": {}, "outputs": [], "source": [ @@ -459,7 +459,7 @@ }, { "cell_type": "markdown", - "id": "e35ddd5e", + "id": "f07ad5cb", "metadata": {}, "source": [ "### Adding Candidate Edges\n", @@ -472,7 +472,7 @@ { "cell_type": "code", "execution_count": null, - "id": "289bedec", + "id": "bb3e1128", "metadata": {}, "outputs": [], "source": [ @@ -541,7 +541,7 @@ }, { "cell_type": "markdown", - "id": "92e5f251", + "id": "b623b1cb", "metadata": {}, "source": [ "Visualizing the candidate edges in napari is, unfortunately, not yet possible. However, we can print out the number of candidate nodes and edges, and compare it to the ground truth nodes and edgesedges. We should see that we have a few more candidate nodes than ground truth (due to false positive detections) and many more candidate edges than ground truth - our next step will be to use optimization to pick a subset of the candidate nodes and edges to generate our solution tracks." 
@@ -550,7 +550,7 @@ { "cell_type": "code", "execution_count": null, - "id": "48dba093", + "id": "6317ab6f", "metadata": {}, "outputs": [], "source": [ @@ -560,7 +560,7 @@ }, { "cell_type": "markdown", - "id": "09cd906f", + "id": "3b0ce77f", "metadata": {}, "source": [ "## Checkpoint 1\n", @@ -572,7 +572,7 @@ }, { "cell_type": "markdown", - "id": "5ea1e7b0", + "id": "281db9ac", "metadata": {}, "source": [ "## Setting Up the Tracking Optimization Problem" @@ -580,7 +580,7 @@ }, { "cell_type": "markdown", - "id": "3131c8da", + "id": "fc5b5740", "metadata": {}, "source": [ "As hinted earlier, our goal is to prune the candidate graph. More formally we want to find a graph $\\tilde{G}=(\\tilde{V}, \\tilde{E})$ whose vertices $\\tilde{V}$ are a subset of the candidate graph vertices $V$ and whose edges $\\tilde{E}$ are a subset of the candidate graph edges $E$.\n", @@ -595,7 +595,7 @@ }, { "cell_type": "markdown", - "id": "b63205bf", + "id": "2b5106ef", "metadata": {}, "source": [ "## Task 3 - Basic tracking with motile\n", @@ -619,7 +619,7 @@ { "cell_type": "code", "execution_count": null, - "id": "86f48686", + "id": "22dec045", "metadata": { "tags": [ "solution" @@ -657,7 +657,7 @@ }, { "cell_type": "markdown", - "id": "0126006f", + "id": "aa9f22b9", "metadata": {}, "source": [ "Here is a utility function to gauge some statistics of a solution." 
@@ -666,7 +666,7 @@ { "cell_type": "code", "execution_count": null, - "id": "272a25dc", + "id": "a78e997d", "metadata": {}, "outputs": [], "source": [ @@ -676,7 +676,7 @@ }, { "cell_type": "markdown", - "id": "d3182e66", + "id": "d8a3db9b", "metadata": {}, "source": [ "Here we actually run the optimization, and compare the found solution to the ground truth.\n", @@ -694,7 +694,7 @@ { "cell_type": "code", "execution_count": null, - "id": "016dd730", + "id": "69307c32", "metadata": {}, "outputs": [], "source": [ @@ -708,7 +708,7 @@ }, { "cell_type": "markdown", - "id": "8a44c2f8", + "id": "808d3a01", "metadata": {}, "source": [ "If you haven't selected any nodes or edges in your solution, try adjusting your weight and/or constant values. Make sure you have some negative costs or selecting nothing will always be the best solution!" @@ -716,7 +716,7 @@ }, { "cell_type": "markdown", - "id": "9724a28b", + "id": "157b20a7", "metadata": {}, "source": [ "

Question 1: Interpret your results based on statistics

\n", @@ -728,7 +728,7 @@ }, { "cell_type": "markdown", - "id": "a333b3f8", + "id": "c7c28275", "metadata": {}, "source": [ "

Checkpoint 2

\n", @@ -738,7 +738,7 @@ }, { "cell_type": "markdown", - "id": "fad0fa38", + "id": "2ee5e2e9", "metadata": {}, "source": [ "## Visualize the Result\n", @@ -754,7 +754,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e6c97073", + "id": "5046281c", "metadata": {}, "outputs": [], "source": [ @@ -791,7 +791,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a5a66a45", + "id": "0f02c7cc", "metadata": {}, "outputs": [], "source": [ @@ -806,7 +806,7 @@ }, { "cell_type": "markdown", - "id": "5cdd69ae", + "id": "9e927b27", "metadata": {}, "source": [ "

Question 2: Interpret your results based on visualization

\n", @@ -818,7 +818,7 @@ }, { "cell_type": "markdown", - "id": "e873085d", + "id": "869428e5", "metadata": { "lines_to_next_cell": 2 }, @@ -837,7 +837,7 @@ }, { "cell_type": "markdown", - "id": "31397b18", + "id": "ad0c86c6", "metadata": {}, "source": [ "The metrics we want to compute require a ground truth segmentation. Since we do not have a ground truth segmentation, we can make one by drawing a circle around each ground truth detection. While not perfect, it will be good enough to match ground truth to predicted detections in order to compute metrics." @@ -846,7 +846,7 @@ { "cell_type": "code", "execution_count": null, - "id": "40a0bb7c", + "id": "7047e38f", "metadata": {}, "outputs": [], "source": [ @@ -869,7 +869,7 @@ { "cell_type": "code", "execution_count": null, - "id": "cc4f9769", + "id": "3f17dab5", "metadata": {}, "outputs": [], "source": [ @@ -927,7 +927,7 @@ { "cell_type": "code", "execution_count": null, - "id": "8ccdcb3d", + "id": "a4a68921", "metadata": {}, "outputs": [], "source": [ @@ -938,7 +938,7 @@ }, { "cell_type": "markdown", - "id": "e555529a", + "id": "85b7185d", "metadata": {}, "source": [ "

Question 3: Interpret your results based on metrics

\n", @@ -950,7 +950,7 @@ }, { "cell_type": "markdown", - "id": "a265a335", + "id": "c3c06a4c", "metadata": {}, "source": [ "

Checkpoint 3

\n", @@ -963,7 +963,7 @@ { "cell_type": "code", "execution_count": null, - "id": "558db1b2", + "id": "a84e25a9", "metadata": { "lines_to_next_cell": 2, "tags": [ @@ -1017,7 +1017,7 @@ }, { "cell_type": "markdown", - "id": "656d1f54", + "id": "0ef3a125", "metadata": {}, "source": [ "## Customizing the Tracking Task\n", @@ -1032,7 +1032,7 @@ }, { "cell_type": "markdown", - "id": "f8ca7c2e", + "id": "c1ae7c8d", "metadata": {}, "source": [ "## Task 4 - Incorporating Known Direction of Motion\n", @@ -1043,7 +1043,7 @@ }, { "cell_type": "markdown", - "id": "8a4c15ff", + "id": "39cdb879", "metadata": {}, "source": [ "

Task 4a: Add a drift distance attribute

\n", @@ -1054,7 +1054,7 @@ { "cell_type": "code", "execution_count": null, - "id": "0af48b09", + "id": "a27e74e2", "metadata": { "tags": [ "solution" @@ -1080,7 +1080,7 @@ }, { "cell_type": "markdown", - "id": "e27afa8f", + "id": "331551f1", "metadata": {}, "source": [ "

Task 4b: Use the drift distance in the solver

\n", @@ -1092,7 +1092,7 @@ { "cell_type": "code", "execution_count": null, - "id": "850889eb", + "id": "7fe039dc", "metadata": { "tags": [ "solution" @@ -1148,7 +1148,7 @@ }, { "cell_type": "markdown", - "id": "a92849b8", + "id": "ad17dd84", "metadata": {}, "source": [ "Feel free to tinker with the weights and constants manually to try and improve the results.\n", @@ -1157,7 +1157,7 @@ }, { "cell_type": "markdown", - "id": "11401664", + "id": "40c715c8", "metadata": {}, "source": [ "

Checkpoint 4

\n", @@ -1167,7 +1167,7 @@ }, { "cell_type": "markdown", - "id": "fdb636e2", + "id": "def0932f", "metadata": {}, "source": [ "## Bonus: Learning the Weights" @@ -1175,7 +1175,7 @@ }, { "cell_type": "markdown", - "id": "8b8dfbb7", + "id": "a39859eb", "metadata": {}, "source": [ "Motile also provides the option to learn the best weights and constants using a [Structured Support Vector Machine](https://en.wikipedia.org/wiki/Structured_support_vector_machine). There is a tutorial on the motile documentation [here](https://funkelab.github.io/motile/learning.html), but we will also walk you through an example below.\n", @@ -1186,7 +1186,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2bd79aec", + "id": "fdd656b1", "metadata": {}, "outputs": [], "source": [ @@ -1215,7 +1215,7 @@ }, { "cell_type": "markdown", - "id": "7e22fdde", + "id": "69425db4", "metadata": {}, "source": [ "The SSVM does not need dense ground truth - providing only some annotations frequently is sufficient to learn good weights, and is efficient for both computation time and annotation time. Below, we create a validation graph that spans the first three time frames, and annotate it with our ground truth." @@ -1224,7 +1224,7 @@ { "cell_type": "code", "execution_count": null, - "id": "3019d7fd", + "id": "fa336638", "metadata": { "lines_to_next_cell": 2 }, @@ -1240,7 +1240,7 @@ }, { "cell_type": "markdown", - "id": "c6593837", + "id": "813d459f", "metadata": {}, "source": [ "Here we print the number of nodes and edges that have been annotated with True and False ground truth. It is important to provide negative/False annotations, as well as positive/True annotations, or the SSVM will try and select weights to pick everything." 
@@ -1249,7 +1249,7 @@ { "cell_type": "code", "execution_count": null, - "id": "287e2c98", + "id": "77098b62", "metadata": {}, "outputs": [], "source": [ @@ -1264,7 +1264,7 @@ }, { "cell_type": "markdown", - "id": "4baf3508", + "id": "ea31a046", "metadata": {}, "source": [ "

Bonus task: Add your best solver parameters

\n", @@ -1275,7 +1275,7 @@ { "cell_type": "code", "execution_count": null, - "id": "1e2df00c", + "id": "4ad37fe2", "metadata": {}, "outputs": [], "source": [ @@ -1298,7 +1298,7 @@ }, { "cell_type": "markdown", - "id": "ca5d4aee", + "id": "ba4f57f6", "metadata": {}, "source": [ "To fit the best weights, the solver will solve the ILP many times and slowly converge to the best set of weights in a structured manner. Running the cell below may take some time - we recommend getting a Gurobi license if you want to use this technique in your research, as it speeds up solving quite a bit.\n", @@ -1309,7 +1309,7 @@ { "cell_type": "code", "execution_count": null, - "id": "503b2181", + "id": "838551d2", "metadata": {}, "outputs": [], "source": [ @@ -1321,7 +1321,7 @@ }, { "cell_type": "markdown", - "id": "e63f3f66", + "id": "771f164b", "metadata": {}, "source": [ "After we have our optimal weights, we need to solve with them on the full candidate graph." @@ -1330,7 +1330,7 @@ { "cell_type": "code", "execution_count": null, - "id": "b437ec1e", + "id": "d82ffd67", "metadata": { "lines_to_next_cell": 2 }, @@ -1348,7 +1348,7 @@ }, { "cell_type": "markdown", - "id": "6ac6c102", + "id": "8b510487", "metadata": {}, "source": [ "Finally, we can visualize and compute metrics on the solution found using the weights discovered by the SSVM." @@ -1357,7 +1357,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9cf8213c", + "id": "10cd3b62", "metadata": {}, "outputs": [], "source": [ @@ -1367,7 +1367,7 @@ { "cell_type": "code", "execution_count": null, - "id": "50bb10d1", + "id": "2920af2e", "metadata": {}, "outputs": [], "source": [ @@ -1383,7 +1383,7 @@ { "cell_type": "code", "execution_count": null, - "id": "d0310f69", + "id": "d3f6a632", "metadata": {}, "outputs": [], "source": [ @@ -1394,7 +1394,7 @@ }, { "cell_type": "markdown", - "id": "e7547065", + "id": "3d27c3ac", "metadata": {}, "source": [ "

Bonus Question: Interpret SSVM results

\n",