This repository mainly contains:
- Training code for DeltaGRU using the dataset from EdgeDRNN-AMPRO
- SystemVerilog HDL code of EdgeDRNN
- Xilinx SDK bare-metal C code for controlling EdgeDRNN on the AVNET MiniZed
```
.
├── hdl             # SystemVerilog HDL code of EdgeDRNN
│   └── tb          # Testbench and stimuli
├── python          # PyTorch training code
│   ├── data        # AMPRO walking dataset
│   ├── modules     # PyTorch modules
│   ├── nnlayers    # PyTorch NN layers
│   └── steps       # PyTorch training steps (pretrain, retrain, export)
└── vivado          # Xilinx Vivado projects
    └── boardfile   # Board file for MiniZed (or add your own board here)
```
This project relies on PyTorch Lightning and was tested on Ubuntu 20.04 LTS.
Install Miniconda:
```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh
```
Create an environment using the following command:
```bash
conda create -n pt python=3.8 numpy matplotlib pandas scipy \
    pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```
Activate the environment:
```bash
conda activate pt
```
Install the nightly-built PyTorch Lightning:
```bash
pip install https://github.com/PyTorchLightning/pytorch-lightning/archive/master.zip
```
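To confirm the environment is usable before training, a quick sanity check like the one below can be run. This is only a minimal sketch; the versions printed on your machine depend on what was installed above.

```python
# Quick sanity check of the training environment (illustrative only; the
# exact versions printed will depend on what was installed above).
import torch
import pytorch_lightning as pl

print("PyTorch version:", torch.__version__)
print("PyTorch Lightning version:", pl.__version__)
print("CUDA available:", torch.cuda.is_available())
```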
DeltaGRU can be trained from randomly initialized parameters or by following a pretrain (GRU) then retrain (DeltaGRU) scheme. The code under `./python` shows how to train a DeltaGRU on the AMPRO dataset using PyTorch Lightning and finally export the network parameters and SystemVerilog testbench stimuli. To run the code, navigate to `./python` in your terminal and run the following commands:
```bash
conda activate pt
python main.py --step pretrain --run_through 1
```
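For intuition, the thresholded delta encoding that DeltaGRU is built around can be sketched as follows. This is a minimal illustration of the concept only, not the repository's implementation; the threshold value and array shapes are assumptions chosen for the example.

```python
# Minimal sketch of thresholded delta encoding (illustrative only, not the
# repository's DeltaGRU implementation; theta and shapes are assumptions).
import numpy as np

def delta_encode(x_seq, theta=0.1):
    """Return delta vectors for a sequence, zeroing changes below theta.

    Components whose change since the last propagated value stays below the
    threshold produce a zero delta, so the corresponding matrix-vector
    products can be skipped -- the temporal sparsity EdgeDRNN exploits.
    """
    x_ref = np.zeros_like(x_seq[0])            # last propagated input values
    deltas = []
    for x_t in x_seq:
        diff = x_t - x_ref
        mask = np.abs(diff) >= theta           # components worth updating
        deltas.append(np.where(mask, diff, 0.0))
        x_ref = np.where(mask, x_t, x_ref)     # update reference only where propagated
    return np.stack(deltas)

# A slowly varying signal yields mostly zero deltas (i.e. skipped updates).
seq = np.cumsum(0.02 * np.random.randn(100, 8), axis=0)
print("fraction of zero deltas:", float(np.mean(delta_encode(seq) == 0.0)))
```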
To make it faster for you to run the code and the functional simulation, the default Python code trains a tiny 2L-16H-DeltaGRU network (two stacked layers with 16 hidden units each). If you want to change the network size or other hyperparameters, modify `./python/project.py`. After training and export are done, three extra folders are created: `./python/logs/`, `./python/save/` and `./python/sdk/`.
- `./python/logs/` contains the metrics logged during training.
- `./python/save/` contains the models saved during training.
- `./python/sdk/` contains the exported model parameters as Xilinx SDK bare-metal C libraries.

Moreover, the Python code also exports SystemVerilog testbench stimuli to `./hdl/tb/`.
- Please download our Example Vivado Project and extract it under the `./vivado/` folder.
- Use Xilinx Vivado 2018.2.
- All source files in the Vivado project are embedded inside the project folder to make sure it runs seamlessly on all machines. If you update the source code, please make sure to update the Vivado project accordingly (overwrite the source files inside the project folder; you can do this directly in the Vivado GUI).
- Before running the functional simulation, make sure to define `SIM_DEBUG` in `hdr_macros.v`.
- Before synthesizing the code, make sure to remove the definition of `SIM_DEBUG` in `hdr_macros.v`.
- Before using Vivado, please install the MiniZed board file by following this guide.
- Before you connect the MiniZed board to your PC, make sure the Xilinx cable drivers are correctly installed.
- To launch the test program on the MiniZed, open Xilinx SDK from Vivado via `File->Launch SDK`.
- In Xilinx SDK, right-click the project `edgedrnn_test` and click `Run As->Launch on Hardware (GDB)`.
If you find this repository helpful, please cite our work.
- [JETCAS 2020] EdgeDRNN: Recurrent Neural Network Accelerator for Edge Inference (AICAS 2020 Best Paper)
```
@ARTICLE{Gao2020EdgeDRNN,
  author={Gao, Chang and Rios-Navarro, Antonio and Chen, Xi and Liu, Shih-Chii and Delbruck, Tobi},
  journal={IEEE Journal on Emerging and Selected Topics in Circuits and Systems},
  title={EdgeDRNN: Recurrent Neural Network Accelerator for Edge Inference},
  year={2020},
  volume={10},
  number={4},
  pages={419-432},
  doi={10.1109/JETCAS.2020.3040300}}
```