[Xylo IMU] Losing performance when inferencing on XyloIMU #22
-
Hi @PeterRolfes, thanks for reaching out about this. Generally speaking I would expect to see some drop in accuracy due to quantisation error, but this is a larger drop than I would expect. I assume that 97% is the simulation test accuracy, is that correct? You could try running with XyloSim and see whether there is a difference between XyloSim and XyloSamna (there shouldn't be). What quantisation method are you using? You could try both `channel_quantize` and `global_quantize`, and see whether one gives better performance. Finally, you could investigate the internal activity of the simulated and quantised networks, to see if you can identify where the difference arises.
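As a side note on why the choice of quantisation method can matter: below is a minimal, self-contained sketch in plain Python (not the Rockpool API — the function names, weight values, and 8-bit scheme here are all illustrative assumptions) showing how a single global scale can lose far more precision than per-channel scales when weight magnitudes differ widely between channels:

```python
def quantize(weights, scale, bits=8):
    """Map floats to signed integers using a fixed scale, with clipping."""
    q_max = 2 ** (bits - 1) - 1
    return [max(-q_max, min(q_max, round(w / scale))) for w in weights]

def recon_error(weights, scale):
    """Total absolute error after a quantise/dequantise round trip."""
    q = quantize(weights, scale)
    return sum(abs(w - v * scale) for w, v in zip(weights, q))

# Toy data: two output channels with very different weight magnitudes
channels = [[0.9, -0.5, 0.7], [0.004, -0.003, 0.002]]

# Global quantisation: one scale shared by all channels,
# set by the largest weight anywhere in the layer
g_scale = max(abs(w) for ch in channels for w in ch) / 127
global_err = sum(recon_error(ch, g_scale) for ch in channels)

# Per-channel quantisation: each channel gets its own scale
perchan_err = sum(
    recon_error(ch, max(abs(w) for w in ch) / 127) for ch in channels
)

# The small-magnitude channel is almost entirely rounded away under the
# global scale, but is represented accurately with its own scale
print(global_err, perchan_err)
```

In this toy example the per-channel scheme yields a much smaller reconstruction error, because the small-magnitude channel collapses to mostly zeros under the global scale. Comparing the two quantisation methods on your trained network may tell you whether this effect explains part of the accuracy gap.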
-
I have an issue with the Xylo IMU neuromorphic chip when running inference on it. The whole notebook, including the results, is here: https://github.com/PeterRolfes/XyloIMU-Train
The idea is to simply train a spiking neural network using rockpool on some gearbox kaggle data: https://www.kaggle.com/code/abhinavraj24/data-gear-box-fault-detection
The training itself worked just fine, producing accuracies of ~97% on both the training and test data. The problem occurs when I try to run inference on the Xylo IMU: the accuracy suddenly drops to ~82%.
Can someone help me get the performance back close to the original, or explain why it drops so heavily?
Thanks in advance!