When it should run the test phase, the script instead hangs, pinning a single CPU thread.
This happens here, in `test()`, lines 57-65:
```python
with torch.no_grad():
    for data, target in test_loader:
        print(1)
        data, target = data.to(device), target.to(device)
        print(2)
        output = model(data)
        test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
        pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
        correct += pred.eq(target.view_as(pred)).sum().item()
```
The hang occurs between `print(1)` and `print(2)`.
I then kill the process with `pkill pt_main_thread`.
Setting the test batch size to a low value does not help.
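When the process wedges like this, a periodic stack dump can confirm exactly which line it is stuck on without killing it first. This is a generic diagnostic sketch (not part of the example script) using Python's stdlib `faulthandler`:

```python
import faulthandler
import sys

# Diagnostic sketch: arm a repeating stack dump before calling test().
# If the process hangs (e.g. inside data.to(device)), the Python traceback
# of every thread is printed to stderr every 30 seconds, showing where
# execution is stuck.
faulthandler.dump_traceback_later(30, repeat=True, file=sys.stderr)

# ... run test(model, device, test_loader) here ...

# Disarm once the run completes normally.
faulthandler.cancel_dump_traceback_later()
```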
Possible Solution
Pass the `--no-cuda` flag, or set `ROCR_VISIBLE_DEVICES=2`, to run on the CPU instead.
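For the environment-variable workaround, the variable has to be in the environment before the ROCm runtime initializes, so it should be set at the very top of the script (or on the shell command line). A minimal sketch; the device index `2` is taken from the report above, and the commented `--no-cuda` pattern assumes the stock device-selection code from the PyTorch examples:

```python
import os

# Workaround sketch: restrict which GPUs the ROCm runtime can see.
# Must run before torch initializes the runtime (index 2 from the report).
os.environ["ROCR_VISIBLE_DEVICES"] = "2"

# Alternatively, force CPU execution outright, mirroring what the example's
# --no-cuda flag does (assumes the stock device-selection pattern):
# use_cuda = not args.no_cuda and torch.cuda.is_available()
# device = torch.device("cuda" if use_cuda else "cpu")
```

Equivalently, from the shell: `ROCR_VISIBLE_DEVICES=2 python main.py` (the script name is hypothetical).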
Context
Your Environment
Expected Behavior
The trained model should be evaluated on the test set.
Failure Logs [if any]