
Iterate on predict_yolo example
bnorthan committed May 31, 2024
1 parent 8ae75b6 commit ca81951
Showing 1 changed file with 122 additions and 55 deletions.
177 changes: 122 additions & 55 deletions notebooks/ladybugs/predict_yolo.ipynb
@@ -1,16 +1,33 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## YOLO + SAM Prediction"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"\n",
"## Import Napari\n",
"## there is some error that happens if we import napari after importing other libraries so we import it first\n",
"\n",
"import napari\n",
"viewer = napari.Viewer()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Import Yolo and setup paths"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -25,48 +42,41 @@
]
},
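The import cell above is collapsed in this diff view. A minimal sketch of what it presumably contains (the exact module that exports YoloDetector and the location of parent_path are assumptions, not visible in this commit):

from pathlib import Path

# Assumed import; adjust to wherever YoloDetector lives in the segment_everything package.
from segment_everything import YoloDetector

# Assumed data layout: parent_path holds the YOLO training runs and the test image.
parent_path = Path("../../data/ladybugs")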
{
"cell_type": "code",
"execution_count": 3,
"cell_type": "markdown",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING ⚠️ Ultralytics settings reset to default values. This may be due to a possible problem with your settings or a recent ultralytics package update. \n",
"View settings with 'yolo settings' or at '/home/bnorthan/.config/Ultralytics/settings.yaml'\n",
"Update settings with 'yolo settings key=value', i.e. 'yolo settings runs_dir=path/to/dir'. For help see https://docs.ultralytics.com/quickstart/#ultralytics-settings.\n"
]
}
],
"source": [
"yolo_detector1 = YoloDetector( str(parent_path / r\"YOLO-training-2/100-epochs-ladybug/weights/best.pt\"), \"RegularYOLO\", 'cuda')"
"Set up a default 8m YoloDetector and a second YoloDetector using the weights we trained in the fine tune notebook"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from skimage.io import imread\n",
"\n",
"\n",
"img = imread(parent_path / r\"522_img_crop.png\") "
"yolo_detector1 = YoloDetector( str(parent_path / r\"YOLO-training-2/100-epochs-ladybug/weights/best.pt\"), \"RegularYOLO\", 'cuda')\n",
"yolo_detector_8m = YoloDetector( 'yolov8m.pt', \"RegularYOLO\", 'cuda')"
]
},
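The detectors above are created with a hard-coded 'cuda' device string, which fails on machines without a GPU. A small guard (plain PyTorch, not part of this notebook) can pick the device instead:

import torch

# Fall back to the CPU when no CUDA device is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

yolo_detector1 = YoloDetector(str(parent_path / "YOLO-training-2/100-epochs-ladybug/weights/best.pt"), "RegularYOLO", device)
yolo_detector_8m = YoloDetector("yolov8m.pt", "RegularYOLO", device)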
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load the ladybug image"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x79215cb3b0e0>"
"<matplotlib.image.AxesImage at 0x78c03a23cd10>"
]
},
"execution_count": 5,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
},
@@ -82,23 +92,33 @@
}
],
"source": [
"from skimage.io import imread\n",
"import matplotlib.pyplot as plt\n",
"\n",
"\n",
"img = imread(parent_path / r\"522_img_crop.png\") \n",
"plt.imshow(img)"
]
},
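Since a napari viewer was opened in the first cell, the loaded crop can also be shown there rather than only through matplotlib (standard napari API, not shown in this diff):

# Add the ladybug crop to the napari viewer created at the top of the notebook.
viewer.add_image(img, name="522_img_crop")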
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generate bounding boxes with YOLO"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"0: 416x512 9 ladybugs, 18 spots, 94.8ms\n",
"Speed: 0.9ms preprocess, 94.8ms inference, 59.6ms postprocess per image at shape (1, 3, 416, 512)\n"
"0: 416x512 9 ladybugs, 18 spots, 7.0ms\n",
"Speed: 0.8ms preprocess, 7.0ms inference, 0.7ms postprocess per image at shape (1, 3, 416, 512)\n"
]
}
],
@@ -108,16 +128,16 @@
},
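The source of the prediction cell is collapsed here, but the printed output ("9 ladybugs, 18 spots") and the later use of results[0].boxes show that it produces a standard ultralytics Results list. For reference, the equivalent call with the plain ultralytics API (bypassing the YoloDetector wrapper, whose predict call is not visible in this diff) would look roughly like:

from ultralytics import YOLO

# Load the fine-tuned weights directly and run inference on the crop.
model = YOLO(str(parent_path / "YOLO-training-2/100-epochs-ladybug/weights/best.pt"))
results = model(img)  # list of ultralytics Results, one per input image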
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x79215c9ba780>"
"<matplotlib.image.AxesImage at 0x78c03a23d700>"
]
},
"execution_count": 7,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
},
@@ -140,26 +160,16 @@
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"#model_8m = YOLO('yolov8m.pt') # Transfer the weights from a pretrained model (recommended for training)\n",
"yolo_detector_8m = YoloDetector( 'yolov8m.pt', \"RegularYOLO\", 'cuda')"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"0: 416x512 7 sports balls, 1 baseball glove, 9.1ms\n",
"Speed: 0.7ms preprocess, 9.1ms inference, 0.7ms postprocess per image at shape (1, 3, 416, 512)\n"
"0: 416x512 7 sports balls, 1 baseball glove, 9.6ms\n",
"Speed: 0.8ms preprocess, 9.6ms inference, 0.6ms postprocess per image at shape (1, 3, 416, 512)\n"
]
}
],
@@ -170,16 +180,16 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7921336010d0>"
"<matplotlib.image.AxesImage at 0x78c03d32db50>"
]
},
"execution_count": 10,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
},
@@ -200,19 +210,33 @@
"plt.imshow(test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get the classes and the bounding boxes from the Yolo result"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"classes = results[0].boxes.cls.cpu().numpy()\n",
"bbs=results[0].boxes.xyxy.cpu().numpy()"
]
},
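results[0].boxes.cls holds one class id per detection and results[0].boxes.xyxy holds the matching (x1, y1, x2, y2) corners, so the two arrays line up index for index. A quick way to inspect them, using the class-name map every ultralytics Results object carries:

# Pair each class id with its bounding box and print a readable summary.
names = results[0].names  # mapping from class id to class name
for cls_id, box in zip(classes, bbs):
    print(f"{names[int(cls_id)]}: {box.astype(int)}")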
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Convert bounding boxes and classes to StackedLabels class and display in Napari"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
@@ -222,17 +246,9 @@
},
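How the StackedLabels object is built from the boxes is collapsed in this diff. If you only want to eyeball the YOLO boxes in napari without going through StackedLabels, the xyxy corners can be turned into a shapes layer directly (standard napari API; note that napari expects (row, column) order):

import numpy as np

# Convert each (x1, y1, x2, y2) box into the four (y, x) corners of a rectangle.
rects = [np.array([[y1, x1], [y1, x2], [y2, x2], [y2, x1]]) for x1, y1, x2, y2 in bbs]
viewer.add_shapes(rects, shape_type="rectangle", edge_color="yellow", face_color="transparent")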
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/bnorthan/mambaforge/envs/segment_everything_fresh/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
},
{
"name": "stdout",
"output_type": "stream",
@@ -257,6 +273,57 @@
"stacked_labels_to_napari(stacked_labels)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a new stacked labels by segmenting the first stacked labels (with bounding boxes only) with MobileSAMV2"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"VIT checkpoint loaded successfully\n"
]
}
],
"source": [
"from segment_everything.detect_and_segment import segment_from_stacked_labels\n",
"new_stacked_labels = segment_from_stacked_labels(stacked_labels, \"MobileSamV2\") "
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"area 22.0 1491.0\n",
"label_num 1 27\n",
"solidity 0.5641025641025641 0.975609756097561\n",
"circularity 0.643459194320178 1\n",
"mean_intensity 27.043478260869566 198.64705882352942\n",
"10th_percentile_intensity 0.0 98.60000000000001\n",
"mean_hue 59.130434782608695 85.0\n",
"mean_saturation 177.3913043478261 255.0\n",
"predicted_iou 0.39588719606399536 0.984914243221283\n",
"stability_score 0.7358490824699402 1.0\n"
]
}
],
"source": [
"stacked_labels_to_napari(new_stacked_labels)"
]
},
{
"cell_type": "code",
"execution_count": null,
