Rassibassi/mediapipeDemos


Mediapipe examples

From: https://google.github.io/mediapipe/

Installation

python -m venv env
source env/bin/activate
pip install -r requirements.txt
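After installing, a quick sanity check that the dependencies resolved can be sketched with the standard library. The package names below (mediapipe, cv2, numpy) are assumptions about what requirements.txt installs, not taken from the repository itself:

```python
"""Sanity check: are the demo dependencies importable?

The package names (mediapipe, cv2, numpy) are assumed, not read
from requirements.txt.
"""
import importlib.util


def dependency_status(packages):
    """Map each package name to True if it can be imported, else False."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}


if __name__ == "__main__":
    for pkg, ok in dependency_status(["mediapipe", "cv2", "numpy"]).items():
        print(f"{pkg}: {'OK' if ok else 'missing'}")
```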

For the iris example, put iris_landmark.tflite into the models directory by unpacking the following zip file:

https://github.com/google/mediapipe/files/10012191/iris_landmark.zip
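If you prefer doing this step from Python, the download-and-unpack can be sketched with the standard library. The URL is the one above; the helper names are hypothetical:

```python
"""Sketch: download iris_landmark.zip and unpack it into models/."""
import io
import urllib.request
import zipfile
from pathlib import Path

ZIP_URL = "https://github.com/google/mediapipe/files/10012191/iris_landmark.zip"


def unpack_zip(data: bytes, dest: Path) -> list:
    """Extract every member of a zip archive (given as raw bytes) into dest."""
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(io.BytesIO(data)) as archive:
        archive.extractall(dest)
        return archive.namelist()


def fetch_iris_model(dest: Path = Path("models")) -> list:
    """Download the zip and unpack it; returns the extracted file names."""
    with urllib.request.urlopen(ZIP_URL) as response:
        return unpack_zip(response.read(), dest)


# fetch_iris_model()  # uncomment to download into ./models
```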

The facial expression example uses the trained weights from github.com/zengqunzhao/EfficientFace, converted to tflite. For that example, download both models (fast and slow) into the models directory:

wget -P models https://rassibassi-mediapipedemos.s3.eu-central-1.amazonaws.com/efficient_face_model.tflite
wget -P models https://rassibassi-mediapipedemos.s3.eu-central-1.amazonaws.com/dlg_model.tflite

How to run

Run one of the following:

python facial_expression.py
python face_detection.py
python face_mesh.py
python hands.py
python head_posture.py
python holistic.py
python iris.py
python objectron.py
python pose.py
python selfie_segmentation.py

pose.py and iris.py can also process a video file instead of the webcam input stream. Run them like this:

python iris.py -i /path/to/some/file/i-am-a-video-file.mp4
python pose.py -i /path/to/some/file/i-am-a-video-file.mp4
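The -i handling above can be sketched like this. The flag comes from the README; the helper names and the fallback to webcam device 0 are assumptions about how the scripts are wired:

```python
"""Sketch: switch between a video file and the webcam via an -i flag."""
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="MediaPipe demo input source")
    parser.add_argument(
        "-i", "--input", default=None,
        help="path to a video file; defaults to the webcam",
    )
    return parser


def video_source(args: argparse.Namespace):
    # cv2.VideoCapture accepts either a device index or a file path,
    # so the demos can pass this value straight through.
    return 0 if args.input is None else args.input
```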

NumPy

See pose.py for an example of how to extract a NumPy array from the MediaPipe landmark objects.
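A minimal sketch of that conversion, using a stand-in Landmark tuple in place of mediapipe's landmark message (real landmarks expose the same x, y, z attributes):

```python
"""Sketch: turn a sequence of landmark objects into an (N, 3) NumPy array.

Landmark here is a stand-in for mediapipe's landmark message; the real
objects also carry normalized x, y, z coordinates.
"""
from typing import NamedTuple, Sequence

import numpy as np


class Landmark(NamedTuple):
    x: float
    y: float
    z: float


def landmarks_to_array(landmarks: Sequence[Landmark]) -> np.ndarray:
    # One row per landmark, columns x / y / z.
    return np.array([(lm.x, lm.y, lm.z) for lm in landmarks], dtype=np.float32)
```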

About

Real-time Python demos of Google MediaPipe.
