The official PyTorch implementation of L2CS-Net for gaze estimation and tracking.
Install the package with the following:
pip install git+https://github.com/edavalosanaya/L2CS-Net.git@main
Or, you can git clone the repo and install it with the following:
pip install [-e] .
Now you should be able to import the package with the following command:
$ python
>>> import l2cs
Detect face and predict gaze from webcam
from l2cs import Pipeline, render
import cv2
import torch
import pathlib

CWD = pathlib.Path.cwd()  # adjust if your models/ directory lives elsewhere

gaze_pipeline = Pipeline(
    weights=CWD / 'models' / 'L2CSNet_gaze360.pkl',
    arch='ResNet50',
    device=torch.device('cpu')  # or torch.device('cuda') for GPU
)

cap = cv2.VideoCapture(0)  # webcam index
_, frame = cap.read()

# Process frame and visualize
results = gaze_pipeline.step(frame)
frame = render(frame, results)
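The snippet above grabs and processes a single frame. For continuous tracking you can wrap the same two calls in a capture loop; the following is a minimal sketch (not part of the package), assuming camera index 0 and falling back to CPU when CUDA is unavailable:

```python
# Minimal continuous-webcam sketch: same Pipeline/render calls as above,
# wrapped in a capture loop. Assumes camera index 0 and the gaze360 weights
# stored under models/ as in the demo section below.
import pathlib

import cv2
import torch

from l2cs import Pipeline, render

CWD = pathlib.Path.cwd()

gaze_pipeline = Pipeline(
    weights=CWD / 'models' / 'L2CSNet_gaze360.pkl',
    arch='ResNet50',
    device=torch.device('cuda' if torch.cuda.is_available() else 'cpu'),
)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = gaze_pipeline.step(frame)   # detect faces and predict gaze
    frame = render(frame, results)        # draw the gaze vectors on the frame
    cv2.imshow('L2CS-Net gaze', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Frames with no detected face may need extra handling, depending on how Pipeline.step reports them.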
- Download the pre-trained models from here and store them in models/.
- Run:
python demo.py \
--snapshot models/L2CSNet_gaze360.pkl \
--gpu 0 \
--cam 0
This means the demo will run using the L2CSNet_gaze360.pkl pretrained model.
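The demo overlays the predicted gaze vector on each frame. If you need the raw predictions instead (for logging or downstream processing), you can read them off the object returned by Pipeline.step. The sketch below assumes the result exposes per-face pitch, yaw (in radians), and bboxes arrays, as in the repo's GazeResultContainer; check l2cs/results.py for the exact fields.

```python
# Hedged sketch: print raw gaze angles instead of rendering them.
# Assumes the result object has pitch/yaw (radians) and bboxes attributes,
# one entry per detected face (see l2cs/results.py).
import pathlib

import cv2
import numpy as np
import torch

from l2cs import Pipeline

gaze_pipeline = Pipeline(
    weights=pathlib.Path.cwd() / 'models' / 'L2CSNet_gaze360.pkl',
    arch='ResNet50',
    device=torch.device('cpu'),
)

cap = cv2.VideoCapture(0)
_, frame = cap.read()
cap.release()

results = gaze_pipeline.step(frame)
for pitch, yaw, bbox in zip(results.pitch, results.yaw, results.bboxes):
    print(f"face bbox {bbox}: pitch={np.degrees(pitch):+.1f} deg, "
          f"yaw={np.degrees(yaw):+.1f} deg")
```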
- Gaze Detection and Eye Tracking: A How-To Guide: Use L2CS-Net through an HTTP interface with the open-source Roboflow Inference project.
We provide code to train and test on the MPIIGaze dataset with leave-one-person-out evaluation.
- Download the MPIIFaceGaze dataset from here.
- Apply the data preprocessing from here.
- Store the dataset in datasets/MPIIFaceGaze.
python train.py \
--dataset mpiigaze \
--snapshot output/snapshots \
--gpu 0 \
--num_epochs 50 \
--batch_size 16 \
--lr 0.00001 \
--alpha 1
This means the code will perform leave-one-person-out training automatically and store the models in output/snapshots.
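Concretely, leave-one-person-out means one training run per MPIIFaceGaze subject, with that subject held out as the test person. The sketch below is illustrative only (it is not the repo's training code) and just shows the fold structure over the 15 subjects p00-p14:

```python
# Illustrative leave-one-person-out (LOPO) fold structure for MPIIFaceGaze.
# Not the repo's train.py; it only shows which subjects each fold uses.
subjects = [f"p{i:02d}" for i in range(15)]  # MPIIFaceGaze has 15 subjects

for held_out in subjects:
    train_subjects = [s for s in subjects if s != held_out]
    # train.py trains one model per fold and stores its snapshots under
    # output/snapshots; the held-out subject is used only for testing.
    print(f"fold {held_out}: train on {len(train_subjects)} subjects, test on {held_out}")
```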
python test.py \
--dataset mpiigaze \
--snapshot output/snapshots/snapshot_folder \
--evalpath evaluation/L2CS-mpiigaze \
--gpu 0
This means the code will perform leave-one-person-out testing automatically and store the results in evaluation/L2CS-mpiigaze.
To get the average leave-one-person-out accuracy, use:
python leave_one_out_eval.py \
--evalpath evaluation/L2CS-mpiigaze \
--respath evaluation/L2CS-mpiigaze
This means the code will take the evaluation path and output the leave-one-person-out gaze accuracy to evaluation/L2CS-mpiigaze.
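The final score is simply the mean of the per-fold (per-held-out-subject) angular errors. A minimal sketch of that averaging step, with hypothetical numbers (the actual script reads the per-fold results from the evaluation path):

```python
# Illustrative only: the leave-one-person-out score is the mean of the
# per-fold angular errors. The values below are hypothetical; the real
# numbers come from the result files under the --evalpath directory.
fold_errors_deg = [3.8, 4.1, 3.5, 4.4, 3.9]  # one mean angular error per fold

lopo_error = sum(fold_errors_deg) / len(fold_errors_deg)
print(f"leave-one-person-out mean angular error: {lopo_error:.2f} deg")
```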
We provide code to train and test on the Gaze360 dataset with train-val-test evaluation.
- Download the Gaze360 dataset from here.
- Apply the data preprocessing from here.
- Store the dataset in datasets/Gaze360.
python train.py \
--dataset gaze360 \
--snapshot output/snapshots \
--gpu 0 \
--num_epochs 50 \
--batch_size 16 \
--lr 0.00001 \
--alpha 1
This means the code will perform training and store the models in output/snapshots.
python test.py \
--dataset gaze360 \
--snapshot output/snapshots/snapshot_folder \
--evalpath evaluation/L2CS-gaze360 \
--gpu 0
This means the code will perform testing on snapshot_folder and store the results in evaluation/L2CS-gaze360.
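Both benchmarks report the mean angular error between predicted and ground-truth gaze directions. The sketch below shows that metric under one common convention, assuming gaze is given as (pitch, yaw) in radians; the repo's own utilities may use a different axis convention, so treat it as illustrative:

```python
# Illustrative angular-error metric for gaze estimation.
# Assumes (pitch, yaw) in radians and one common spherical-to-Cartesian
# convention; the repo's utils may order or sign the axes differently.
import numpy as np

def gaze_to_vector(pitch: float, yaw: float) -> np.ndarray:
    """Convert (pitch, yaw) in radians to a 3D unit gaze vector."""
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def angular_error_deg(pred, gt) -> float:
    """Angle in degrees between predicted and ground-truth gaze directions."""
    v_pred, v_gt = gaze_to_vector(*pred), gaze_to_vector(*gt)
    cos_sim = np.clip(np.dot(v_pred, v_gt), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_sim)))

# Hypothetical prediction vs. ground truth, both (pitch, yaw) in radians.
print(angular_error_deg((0.10, 0.20), (0.12, 0.18)))
```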