Real-time face recognition program using Google's FaceNet.
- OpenFace
- This project refers to davidsandberg's facenet repository.
- The shanren7 repository was also a great help with the implementation.
- TensorFlow 1.2.1 (GPU)
- Python 3.5
- Other dependencies are the same as requirements.txt in the davidsandberg repository.
- Pre-trained model: Inception-ResNet-v1, trained on CASIA-WebFace -> 20170511-185253
- You also need det1.npy, det2.npy, and det3.npy from the davidsandberg repository (the MTCNN face-detection weights).
- First, we need to align the face data. Run 'Make_aligndata.py' first; the aligned face images will be saved in the 'output_dir' folder.
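The alignment step reads one subfolder of photos per person (the './data/Daehyun'-style layout described below). As a minimal sketch of that input convention, here is a stdlib-only helper that maps each person's directory name to the image files inside it. The function name and file extensions are illustrative assumptions, not part of 'Make_aligndata.py':

```python
import os
import tempfile

def collect_person_images(data_dir):
    """Map each person's directory name to the image paths inside it.

    Assumes one subfolder per person under `data_dir`,
    e.g. data/Daehyun/*.jpg (illustrative helper, not from the repo).
    """
    exts = ('.jpg', '.jpeg', '.png')  # assumed image extensions
    people = {}
    for name in sorted(os.listdir(data_dir)):
        person_dir = os.path.join(data_dir, name)
        if not os.path.isdir(person_dir):
            continue  # skip stray files at the top level
        people[name] = sorted(
            os.path.join(person_dir, f)
            for f in os.listdir(person_dir)
            if f.lower().endswith(exts)
        )
    return people

# Demo on a throwaway directory tree.
with tempfile.TemporaryDirectory() as root:
    for person, fname in [('Daehyun', 'a.jpg'), ('Byeonggil', 'b.png')]:
        os.makedirs(os.path.join(root, person))
        open(os.path.join(root, person, fname), 'w').close()
    print(sorted(collect_person_images(root)))  # ['Byeonggil', 'Daehyun']
```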
- Second, we need to create our own classifier from the aligned face data.
(In my case, the recognition rate was high when I used 30 pictures per person.)
Your own classifier is a ~.pkl file that loads the pre-trained model mentioned above ('20170511-185253.pb') and embeds the faces of each person.
All of this is done by running 'Make_classifier.py'.
- Finally, we load the 'my_classifier.pkl' file obtained above, open the camera, and start recognition.
(Note: check the paths of the files and folders in every .py script carefully.)
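To illustrate the classifier step (embed faces per person, fit a classifier, save it as a .pkl), here is a stdlib-only sketch. The real embeddings come from the pre-trained FaceNet model and the real classifier is produced by 'Make_classifier.py'; the toy 2-D vectors, the nearest-centroid class, and all names below are assumptions made for the example:

```python
import math
import os
import pickle
import tempfile

class NearestCentroid:
    """Toy stand-in for the classifier stored in my_classifier.pkl.

    Real face embeddings come from the pre-trained model
    ('20170511-185253.pb'); here we use hand-made 2-D vectors.
    """
    def fit(self, embeddings, labels):
        sums, counts = {}, {}
        for vec, label in zip(embeddings, labels):
            acc = sums.setdefault(label, [0.0] * len(vec))
            for i, x in enumerate(vec):
                acc[i] += x
            counts[label] = counts.get(label, 0) + 1
        # Mean embedding per person.
        self.centroids = {
            label: [x / counts[label] for x in acc]
            for label, acc in sums.items()
        }
        return self

    def predict(self, vec):
        # Squared Euclidean distance to each person's centroid.
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(vec, c))
        return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl]))

# Two people, a few toy "embeddings" each.
clf = NearestCentroid().fit(
    [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]],
    ['Daehyun', 'Daehyun', 'Byeonggil', 'Byeonggil'],
)

# Round-trip through a .pkl file, as the classifier step does.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'my_classifier.pkl')
    with open(path, 'wb') as f:
        pickle.dump(clf, f)
    with open(path, 'rb') as f:
        loaded = pickle.load(f)
    print(loaded.predict([0.15, 0.85]))  # Daehyun
```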
- First, clone this repo and change the current directory to the repo's directory on your device. Next, put the images of each person in separate directories inside the 'data' directory, named as you want each person to be shown; for example, Daehyun's photos go in './data/Daehyun' and Byeonggil's in './data/Byeonggil'. Then you can run it in one of two ways:
1. Build the Docker image by running: `docker build -t "your_image_name":"your_image_tag" .`
After the build is done, run: `docker run --device /dev/video0:/dev/video0 "your_image_name":"your_image_tag"`
2. Run: `docker compose up`
(Note that Windows does not give camera access to Docker containers, so you can only run this container on Linux.)
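For the `docker compose` route, a compose file along these lines is assumed; the service name is a placeholder, and the actual file in this repo may differ:

```yaml
# Minimal sketch of a compose file for this project (names are placeholders).
services:
  face-recognition:
    image: your_image_name:your_image_tag
    build: .
    devices:
      - /dev/video0:/dev/video0   # pass the webcam through to the container
```

The `devices` entry mirrors the `--device /dev/video0:/dev/video0` flag from the `docker run` command above.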