Dear all,
I have a question regarding the computations within the RF neurons. The RF neurons are defined by the following equations:
Re(z[t]) = (1 − α)[cos(φ)·Re(z[t−1]) − sin(φ)·Im(z[t−1])] + Re(x[t]) + Re(bias), (1)
Im(z[t]) = (1 − α)[sin(φ)·Re(z[t−1]) + cos(φ)·Im(z[t−1])] + Im(x[t]) + Im(bias). (2)
If the input signal has no imaginary component, equation (2) simplifies to:
Im(z[t]) = (1 − α)[sin(φ)·Re(z[t−1]) + cos(φ)·Im(z[t−1])] + Im(bias). (3)
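For concreteness, here is a minimal sketch of one RF-neuron time step implementing equations (1) and (3) for a real-valued input. The function name and argument names are my own, not from the paper:

```python
import math

def rf_neuron_step(z_re, z_im, x_re, alpha, phi, b_re, b_im):
    """One update of a resonate-and-fire (RF) neuron state per eqs. (1) and (3),
    assuming the input signal is purely real (Im(x[t]) = 0)."""
    decay = 1.0 - alpha
    c, s = math.cos(phi), math.sin(phi)
    z_re_new = decay * (c * z_re - s * z_im) + x_re + b_re  # eq. (1)
    z_im_new = decay * (s * z_re + c * z_im) + b_im         # eq. (3)
    return z_re_new, z_im_new
```

This is just the state-update recurrence; spike generation and any readout are omitted.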
Counting (1 − α)·cos(φ) and (1 − α)·sin(φ) as separate products (4 multiplications per equation), equations (1) and (3) together require 8 multiplications and 5 additions per time step.
For an input array of 192 samples, this gives 1536 multiplications and 960 additions per RF neuron. With a layer of 32 RF neurons, the entire RF layer performs 49,152 multiplications and 30,720 additions.
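The RF-layer arithmetic above can be reproduced with a few lines (the per-step counts of 8 multiplications and 5 additions are the assumption here):

```python
# Per time step, eqs. (1) and (3) together: counting (1-alpha)*cos(phi)
# and (1-alpha)*sin(phi) as separate products gives 4 multiplications
# per equation, i.e. 8 multiplications and 5 additions in total.
MULTS_PER_STEP = 8
ADDS_PER_STEP = 5

samples = 192   # length of the input array
neurons = 32    # RF neurons in the layer

mults_per_neuron = samples * MULTS_PER_STEP  # 1536
adds_per_neuron = samples * ADDS_PER_STEP    # 960
layer_mults = neurons * mults_per_neuron     # 49,152
layer_adds = neurons * adds_per_neuron       # 30,720
```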
Now, assume an FFT is applied to a window of 64 samples, costing N·log2(N) complex additions and (N/2)·log2(N) complex multiplications. For N = 64 this is 384 complex additions and 192 complex multiplications, which corresponds to 1152 real additions and 768 real multiplications (each complex multiplication costing 4 real multiplications and 2 real additions, each complex addition costing 2 real additions). With 32 such 64-sample FFTs, the total is 36,864 real additions and 24,576 real multiplications.
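The FFT figures can be checked the same way; the function below assumes the textbook radix-2 counts and the 4-mult/2-add complex multiplication:

```python
import math

def fft_real_ops(n):
    """Real-arithmetic cost of a radix-2 FFT of length n, assuming
    n*log2(n) complex additions and (n/2)*log2(n) complex multiplications,
    with each complex multiplication = 4 real mults + 2 real adds and
    each complex addition = 2 real adds."""
    log_n = int(math.log2(n))
    c_adds = n * log_n            # 384 for n = 64
    c_mults = (n // 2) * log_n    # 192 for n = 64
    real_mults = 4 * c_mults
    real_adds = 2 * c_adds + 2 * c_mults
    return real_mults, real_adds

mults, adds = fft_real_ops(64)        # (768, 1152)
total_mults, total_adds = 32 * mults, 32 * adds  # 32 windows
```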
So even though the RF layer received only 192 samples (3 windows of 64 samples), it performs more computations than the FFTs, while the FFTs that processed 32 windows of 64 samples are still cheaper. I therefore do not understand how it is claimed that RF neurons are cheaper than an FFT, as stated here: https://link.springer.com/article/10.1007/s11265-022-01772-5
Please correct my understanding if I am missing something. A detailed breakdown of the computational complexity of the RF neurons and of the FFT would also be helpful.