Hi, I recently read your paper and I'm very interested in it. Could you provide the code for the comparison experiments? For example, when I reproduced the BigCLAM method, the result was very low (NMI = 0.05, whereas your paper reports NMI = 0.26), so I'm very curious about what I did wrong. Thank you very much.
Hi, the link to the TensorFlow 1.0 code used to run the experiments is provided in README.md (https://figshare.com/s/30894e4172505d5dc070). I haven't ported the baselines like BigCLAM to PyTorch, but that should be relatively easy: just make F a learnable nn.Parameter and clip its negative values to zero after each gradient descent step (see the sketch below).
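For illustration, here is a minimal sketch of what that projected-gradient-descent port might look like. This is not the code used in the paper; the function name, the dense adjacency input `A`, and the optimizer settings are all assumptions made for the example.

```python
import torch

# Minimal sketch of BigCLAM via projected gradient descent in PyTorch.
# `A` (an N x N dense binary adjacency matrix) and `K` (the number of
# communities) are assumed inputs for illustration.
def fit_bigclam(A: torch.Tensor, K: int, lr: float = 0.05, steps: int = 500) -> torch.Tensor:
    N = A.shape[0]
    F = torch.nn.Parameter(torch.rand(N, K))   # non-negative affiliation matrix
    opt = torch.optim.SGD([F], lr=lr)
    eps = 1e-8
    off_diag = ~torch.eye(N, dtype=torch.bool)  # exclude self-loops
    for _ in range(steps):
        opt.zero_grad()
        S = F @ F.t()                           # pairwise affiliation scores F_u . F_v
        # BigCLAM log-likelihood: edges contribute log(1 - exp(-F_u . F_v)),
        # non-edges contribute -F_u . F_v; we minimize its negative.
        ll = A * torch.log(1 - torch.exp(-S) + eps) - (1 - A) * S
        loss = -ll[off_diag].sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            F.clamp_(min=0)                     # project negative entries back to zero
    return F.detach()
```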
I don't remember the details very well now (I wrote it about 3 years ago), but I think the two main reasons for BigCLAM's poor performance can be:

1. Poor initialization. You can have a look at the original code to see how the F matrix was initialized for BigCLAM.
2. Poor choice of the threshold for assigning nodes to communities after training. IIRC, 0.5 was a good choice for the balanced loss (edges balanced with non-edges); otherwise it was either 0.01 or 0.1, I don't remember exactly.
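For reference, the thresholding step from point 2 might look like the following; the function name and the default threshold are just illustrative, so adjust the threshold per the note above.

```python
def assign_communities(F: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # Node u is assigned to community c whenever its affiliation weight
    # F[u, c] meets the threshold; returns a boolean N x K membership matrix.
    return F >= threshold
```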