Home
The following pages explain how to access the JUMAX machine, how to compile and start Webots on it, and present a benchmark of the two Webots display options.
The following pages explain the basics of accelerated applications, how to start the Maxeler IDE (MaxIDE) to code DFE applications, and how to compile them using MaxCompiler.
- Basics of DFE Applications
- Start MaxIDE
- Run DFE Applications with MaxIDE
- Compile DFE Applications for Webots
- Compilation Debug and Timing Improvement
To estimate the order of magnitude of the gain that can be obtained on the JUMAX FPGAs, the following workflow is applied (a minimal CPU sketch of the MLP forward pass is given after the list):
- Implementation of multilayer perceptrons (MLP) on CPU
- Adapting the multilayer perceptron (MLP) inference for FPGA
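As an illustration of what the CPU baseline measures, here is a minimal sketch of a single fully connected layer, the building block of MLP inference. The layer sizes, variable names, and the tanh activation are assumptions made for the example; they are not taken from the project code.

```c
/* Minimal sketch of one dense (fully connected) MLP layer on the CPU:
 * out = tanh(W * in + b). The activation and dimensions are illustrative. */
#include <math.h>
#include <stddef.h>

/* W is stored row-major with dimensions n_out x n_in. */
static void dense_layer(const float *W, const float *b,
                        const float *in, float *out,
                        size_t n_in, size_t n_out)
{
    for (size_t i = 0; i < n_out; ++i) {
        float acc = b[i];
        for (size_t j = 0; j < n_in; ++j)
            acc += W[i * n_in + j] * in[j];
        out[i] = tanhf(acc);
    }
}
```

A full MLP forward pass simply chains such layers; the FPGA adaptation maps the same arithmetic onto a DFE data-flow pipeline.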
The following steps show the implementation of a state-of-the-art robotic application in simulation that uses a Convolutional Neural Network (CNN) and highlight the difference in performance between CPU and FPGA in this context (a minimal sketch of the core convolution operation is given after the list):
- Implementation of a state-of-the-art robotic application in simulation
- Convolutional Neural Network inference on CPU
- Convolutional Neural Network inference on FPGA
- Deliverable 1: CNN performance comparison CPU/FPGA
- Deliverable 2: CNN performance comparison CPU/FPGA
- How to run the most optimized car simulation
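To make the CPU/FPGA comparison concrete, here is a minimal sketch of the 2D convolution that dominates CNN inference time and is the natural candidate for offloading to the DFE. The single-channel, no-padding layout and the names are assumptions for the example, not the project's actual kernel.

```c
/* Minimal sketch of a valid (no padding) 2D convolution of an h x w
 * single-channel image with a k x k kernel; the output is
 * (h-k+1) x (w-k+1), stored row-major. Dimensions are illustrative. */
#include <stddef.h>

static void conv2d_valid(const float *img, const float *kernel,
                         float *out, size_t h, size_t w, size_t k)
{
    for (size_t y = 0; y + k <= h; ++y) {
        for (size_t x = 0; x + k <= w; ++x) {
            float acc = 0.0f;
            for (size_t ky = 0; ky < k; ++ky)
                for (size_t kx = 0; kx < k; ++kx)
                    acc += img[(y + ky) * w + (x + kx)] * kernel[ky * k + kx];
            out[y * (w - k + 1) + x] = acc;
        }
    }
}
```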