# Model for 3D Gaze Estimation based on L2CS-Net
*(Demo video: Gaze_Businesswoman.-.129427_hd.mp4)*
This repository provides an appearance-based 3D gaze estimation method that improves the accuracy of the conventional L2CS-Net.
To be presented at IEICE (as of 2023.02.02).
Results on the Gaze360 dataset, using Mean Angular Error (MAE) as the evaluation metric, are as follows.
| | L2CS-Net | Ours |
|---|---|---|
| MAE (degrees) | 10.41 | 10.30 |
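
Here, the mean angular error is the average angle between the predicted and ground-truth 3D gaze vectors. Below is a minimal sketch of how this metric can be computed; the function name and array shapes are illustrative and are not taken from this repository.

```python
import numpy as np

def mean_angular_error(pred, gt):
    """Mean angle (degrees) between predicted and ground-truth 3D gaze vectors.

    pred, gt: arrays of shape (N, 3), each row a gaze direction vector.
    """
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    cos = np.clip(np.sum(pred * gt, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```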
A simple demonstration can be run with a pre-trained model and a webcam.

- Download the pre-trained model from here.

```
python demo.py --snapshot "./"
```

Argument: give the path to the pre-trained model's weights.
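
Such a demo typically loads the snapshot and runs each webcam frame through the network. The sketch below is only an outline of that loop, not the actual demo.py: the model class name, input size, and preprocessing are assumptions, and the eye crops used by this method are omitted for brevity.

```python
import cv2
import torch
from torchvision import transforms

from model import GazeNet  # hypothetical class name; the real class lives in model.py

# Assumed preprocessing: resize to the network input size and normalize like ImageNet.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = GazeNet()  # hypothetical constructor
model.load_state_dict(torch.load("./snapshot.pkl", map_location="cpu"))
model.eval()

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The actual network also takes both eye crops; a single face crop is used
    # here only to keep the sketch short.
    face = preprocess(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).unsqueeze(0)
    with torch.no_grad():
        yaw, pitch = model(face)
    cv2.imshow("demo", frame)  # drawing of the gaze direction is omitted
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```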
To train the model, run:

```
python train.py --image_dir "./" --label_dir "./"
```

Argument: give the paths to the images and labels in your environment.
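
For example, if the pre-processed dataset is placed under `./datasets/Gaze360` as described in the dataset preparation steps below, the invocation might look like this (the `Image` and `Label` sub-directory names are an assumption about the pre-processing output, not confirmed by this README):

```
python train.py --image_dir "./datasets/Gaze360/Image" --label_dir "./datasets/Gaze360/Label"
```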
To test a trained model, run:

```
python test.py --snapshot "./"
```

Argument: give the path where the model you want to test is stored.
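
For example, to evaluate a saved snapshot (the file name below is only illustrative):

```
python test.py --snapshot "./snapshots/epoch_50.pkl"
```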
Environment:

- Ubuntu: 20.04
- Python: 3.7

Install the required packages:

```
pip install -r requirements.txt
```
To improve the accuracy of the conventional L2CS-Net, we devised a method that uses the face image and both eye images as input. Our gaze estimation network is shown below. (The example images are taken from Gaze360.)
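
As a rough illustration of such a multi-input design, the sketch below combines features from a face crop and two eye crops. It is not the actual model.py: the backbone choice, feature fusion, and direct yaw/pitch regression head (instead of L2CS-Net's bin-plus-regression heads) are assumptions made only for brevity.

```python
import torch
import torch.nn as nn
from torchvision import models

class FaceEyesGazeNet(nn.Module):
    """Sketch of a gaze network that takes a face crop and both eye crops."""

    def __init__(self):
        super().__init__()
        # Separate CNN encoders for the face and the eyes (ResNet-18 here for brevity).
        self.face_net = models.resnet18(num_classes=256)
        self.eye_net = models.resnet18(num_classes=128)  # shared weights for left/right eye
        # Fuse the three feature vectors and regress yaw and pitch angles.
        self.head = nn.Sequential(
            nn.Linear(256 + 128 * 2, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),  # (yaw, pitch)
        )

    def forward(self, face, left_eye, right_eye):
        f = self.face_net(face)
        l = self.eye_net(left_eye)
        r = self.eye_net(right_eye)
        return self.head(torch.cat([f, l, r], dim=1))

# Example: a batch of 224x224 face crops and 112x112 eye crops.
model = FaceEyesGazeNet()
out = model(torch.randn(2, 3, 224, 224),
            torch.randn(2, 3, 112, 112),
            torch.randn(2, 3, 112, 112))
print(out.shape)  # torch.Size([2, 2])
```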
The project contains the following files/folders:

- `model.py`: the model code.
- `train.py`: the entry for training and validation.
- `test.py`: the entry for testing.
- `dataset.py`: the data loader code.
- `utils.py`: the utils code.
To prepare the Gaze360 dataset:

- Download the Gaze360 dataset.
- Apply pre-processing to the dataset.
- The path of the dataset should be `./datasets/Gaze360`.
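
After pre-processing, the directory might look roughly like the following; the `Image`/`Label` sub-directories and label file names are assumptions based on common Gaze360 pre-processing for L2CS-Net, not taken from this README.

```
./datasets/Gaze360/
├── Image/          # normalized face (and eye) crops
└── Label/
    ├── train.label
    ├── val.label
    └── test.label
```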
Gaze360:
```
@InProceedings{Kellnhofer_2019_ICCV,
  author    = {Kellnhofer, Petr and Recasens, Adria and Stent, Simon and Matusik, Wojciech and Torralba, Antonio},
  title     = {Gaze360: Physically Unconstrained Gaze Estimation in the Wild},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}
```