Slow torch_module_forward #55
Replies: 2 comments 7 replies
-
Hi @yohyamamoto, thanks for getting in touch. Are you saying that the inference call is slower than running inference in Python, or that it is slower than the physics-based Fortran scheme you are trying to emulate? Without seeing the code it is difficult to suggest what, if anything, the issue might be.
-
@yohyamamoto Did you make any progress with this? Without more information I still suspect that you need to instantiate the net only once rather than at each iteration. As there has been no update in 6 months I am closing this, but please do re-open if you wish to follow up.
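To illustrate the "instantiate once" point in Python terms: reloading the model inside the loop is roughly what calling `torch_module_load` in the Fortran time loop amounts to. This is a hedged sketch, not the poster's actual code; the architecture and file handling here are illustrative.

```python
import tempfile
import time

import torch

# Small stand-in for the trained network (hypothetical architecture).
net = torch.nn.Sequential(
    torch.nn.Linear(6, 100), torch.nn.ReLU(),
    torch.nn.Linear(100, 1),
)
path = tempfile.NamedTemporaryFile(suffix=".pt", delete=False).name
torch.jit.save(torch.jit.script(net), path)

x = torch.randn(1000, 6)

# Anti-pattern: reloading the model every iteration, analogous to calling
# torch_module_load inside the loop body.
t0 = time.perf_counter()
for _ in range(10):
    model = torch.jit.load(path)
    with torch.no_grad():
        model(x)
reload_time = time.perf_counter() - t0

# Load once, then reuse the handle for every forward call.
model = torch.jit.load(path)
t0 = time.perf_counter()
for _ in range(10):
    with torch.no_grad():
        model(x)
load_once_time = time.perf_counter() - t0

print(f"reload each iteration: {reload_time:.4f}s, load once: {load_once_time:.4f}s")
```

The load step dominates the cost of a small forward pass, so hoisting it out of the loop is usually the first thing to check.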
-
I'm trying to use FTorch to load my trained network in a Fortran code. I followed the installation guide and the example Fortran code in the README for using the model. So far performance is very slow: the torch_module_forward call takes significant time (much slower than calculating O(n**4) integrals). Is this normal? If not, is there a way to speed it up? Any suggestions would be appreciated.
I'm running this on my desktop (CPU machine).
cmake version 3.27.4
pytorch version 2.1.0+cpu
gcc version 13.2.1
My input array has size 80000x6 and my output array is 80000x1.
My model on pytorch looks like this:
Sequential(
(0): Linear(in_features=6, out_features=100, bias=True)
(1): ReLU()
(2): Linear(in_features=100, out_features=100, bias=True)
(3): ReLU()
(4): Linear(in_features=100, out_features=100, bias=True)
(5): ReLU()
(6): Linear(in_features=100, out_features=1, bias=True)
)