Commit a45f2f7 (parent dcf5e2a): Add README, clean requirements and notebook

Showing 10 changed files with 377 additions and 246 deletions.
# ADLC

As of right now, this repository just includes a script for generating a dataset from images and annotations, and a notebook for preliminary testing of a CNN for object detection.

Here is a sample output of `visualize_detections`:

![four images with bounding boxes](./img/output.png)

Since the images are very high resolution, the box lines cover the targets, but the boxes themselves are accurate.
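Because the annotations are XYWH while most drawing APIs expect corner coordinates, conversion plus a resolution-aware line thickness can keep boxes visible without covering small targets. This is a hedged sketch under those assumptions; the helper names are hypothetical and are not the repo's `visualize_detections`:

```python
def xywh_to_corners(box):
    """Convert an (x, y, w, h) box to (x1, y1, x2, y2) corners."""
    x, y, w, h = box
    return (x, y, x + w, y + h)

def line_thickness(image_width, fraction=0.002):
    """Scale box line thickness with image width so boxes stay thin
    but visible on high-resolution frames (hypothetical heuristic)."""
    return max(1, round(image_width * fraction))

print(xywh_to_corners((10, 20, 30, 40)))  # (10, 20, 40, 60)
print(line_thickness(4000))               # 8
```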
|
||
## Data Annotation | ||
|
||
For now, I have done data annotation using OpenCVs GUI tool [`opencv_annotation`](https://docs.opencv.org/4.x/dc/d88/tutorial_traincascade.html#Preparation-of-the-training-data), which uses a XYWH bounding-box format. There are annotations in `data/annotation_238.txt`. | ||
|
||
To include the corresponding images, you will need to download them from the Kraken computer and place them in `data/flight_238/*.jpg`. They are located in `/RAID/Flights/Flight_238/*.jpg`. | ||
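The `opencv_annotation` output is plain text, one image per line: the image path, the number of boxes, then four integers (x, y, w, h) per box. A minimal parser sketch under that assumed format (a hypothetical helper, not part of the repo):

```python
def parse_annotations(lines):
    """Map each image path to a list of (x, y, w, h) boxes."""
    boxes_by_image = {}
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        path, count = parts[0], int(parts[1])
        # Four integers per box, `count` boxes per line.
        values = list(map(int, parts[2:2 + 4 * count]))
        boxes_by_image[path] = [
            tuple(values[i:i + 4]) for i in range(0, len(values), 4)
        ]
    return boxes_by_image

sample = ["data/flight_238/0001.jpg 2 10 20 30 40 50 60 70 80"]
print(parse_annotations(sample))
```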
|
||
## Setup Development Environment | ||
|
||
### Using a Conda/Mamba Environment | ||
|
||
Create a conda environment using: | ||
|
||
```sh | ||
conda env create --file ncsuadlc_condaenv.yaml -n ncsuadlc | ||
conda activate ncsuadlc | ||
|
||
# Some requirements are only up-to-date on PyPi | ||
pip install -r ncsuadlc_pipreqs.txt | ||
``` | ||
|
||
### Pip Only | ||
|
||
```sh | ||
pip install -r requirements.txt | ||
``` | ||
|
||
## Using CuDNN Acceleration on VLC | ||
|
||
NCSU provides VLCs with RTX 2080 GPUs that can be used for training the CNN quickly. CUDA is already installed on these systems but you will need to install CuDNN as well: | ||
|
||
```sh | ||
sudo apt-get install libcudnn8=8.8.0.121-1+cuda12.1 | ||
sudo apt-get install libcudnn8-dev=8.8.0.121-1+cuda12.1 | ||
sudo apt-get install libcudnn8-samples=8.8.0.121-1+cuda12.1 | ||
``` | ||
|
||
To check that CuDNN was set up correctly, run built-in test suite: | ||
|
||
```sh | ||
cp -r /usr/src/cudnn_samples_v8/ $HOME | ||
cd $HOME/cudnn_samples_v8/mnistCUDNN | ||
make clean && make | ||
sudo apt-get install libfreeimage3 libfreeimage-dev | ||
make clean && make | ||
./mnistCUDNN | ||
``` | ||
|
||
You will also need to make sure that Tensorflow has needed GPU dependencies using: | ||
|
||
```sh | ||
pip install tensorflow[and-cuda] | ||
``` |
One changed file was deleted; its contents are not shown.

The new conda environment file, `ncsuadlc_condaenv.yaml`:

```yaml
name: ncsuadlc
channels:
  - conda-forge
dependencies:
  - _libgcc_mutex==0.1
  - _openmp_mutex==4.5
  - bzip2==1.0.8
  - ca-certificates
  - click
  - empy
  - ipykernel
  - lark
  - ld_impl_linux-64==2.40
  - libblas==3.9.0
  - libcblas==3.9.0
  - libexpat==2.5.0
  - libffi==3.4.2
  - libgcc-ng==13.2.0
  - libgfortran-ng==13.2.0
  - libgfortran5==13.2.0
  - libgomp==13.2.0
  - liblapack==3.9.0
  - libnsl==2.0.0
  - libopenblas==0.3.24
  - libsqlite==3.43.0
  - libstdcxx-ng==13.2.0
  - libuuid==2.38.1
  - libzlib==1.2.13
  - ncurses==6.4
  - numpy==1.25.2
  - openssl
  - pandas
  - pillow
  - pip==23.2.1
  - protobuf
  - pycocotools
  - python==3.11.5
  - python_abi==3.11
  - readline==8.2
  - scikit-learn
  - setuptools==68.1.2
  - tk==8.6.12
  - tqdm
  - transforms3d==0.4.1
  - tzdata==2023c
  - wheel==0.41.2
  - xz==5.2.6
```
The new pip requirements file:

```text
absl-py==1.4.0
array-record==0.5.0
astor==0.8.1
asttokens==2.4.0
astunparse==1.6.3
atomicwrites==1.4.1
backcall==0.2.0
backports.functools-lru-cache==1.6.5
cachetools==5.3.2
certifi==2023.7.22
charset-normalizer==3.3.1
click==8.1.7
colorama==0.4.6
comm==0.1.4
contourpy==1.1.1
cycler==0.12.1
Cython==3.0.5
debugpy==1.8.0
decorator==5.1.1
dm-tree==0.1.8
empy==3.3.4
etils==1.5.2
exceptiongroup==1.1.3
executing==1.2.0
flatbuffers==23.5.26
fonttools==4.43.1
fsspec==2023.10.0
gast==0.5.4
google-auth==2.23.4
google-auth-oauthlib==1.0.0
google-pasta==0.2.0
googleapis-common-protos==1.61.0
grpcio==1.59.2
h5py==3.10.0
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.1.0
ipykernel==6.25.2
ipython==8.16.1
jedi==0.19.1
joblib==1.3.2
Js2Py==0.74
jupyter_client==8.4.0
jupyter_core==5.4.0
keras==2.14.0
keras-core==0.1.7
keras-cv==0.6.4
kiwisolver==1.4.5
lark==1.1.7
libclang==16.0.6
Markdown==3.5.1
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.8.0
matplotlib-inline==0.1.6
mdurl==0.1.2
ml-dtypes==0.2.0
munkres==1.1.4
namex==0.0.7
nest-asyncio==1.5.8
oauthlib==3.2.2
opencv-python==4.8.1.78
opt-einsum==3.3.0
packaging==23.2
pandas==2.1.2
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==10.1.0
platformdirs==3.5.1
promise==2.3
prompt-toolkit==3.0.39
protobuf==3.20.3
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycocotools==2.0.6
Pygments==2.16.1
pyjsparser==2.7.1
pyparsing==3.1.1
python-dateutil==2.8.2
pytz==2023.3.post1
pyzmq==25.1.1
regex==2023.10.3
requests==2.31.0
requests-oauthlib==1.3.1
rich==13.6.0
rsa==4.9
scikit-learn==1.3.2
scipy==1.11.3
six==1.16.0
stack-data==0.6.2
tensorboard==2.14.1
tensorboard-data-server==0.7.2
tensorflow==2.14.0
tensorflow-datasets==4.9.3
tensorflow-estimator==2.14.0
tensorflow-io-gcs-filesystem==0.34.0
tensorflow-metadata==1.14.0
termcolor==2.3.0
tfds-nightly==4.9.3.dev202310060044
threadpoolctl==3.2.0
toml==0.10.2
tornado==6.3.3
tqdm==4.66.1
traitlets==5.11.2
typing_extensions==4.8.0
tzdata==2023.3
tzlocal==5.1
urllib3==2.0.7
wcwidth==0.2.8
Werkzeug==3.0.1
wrapt==1.14.1
zipp==3.17.0
```
Another changed file was deleted, and one file was renamed without changes; their contents are not shown.