
Comparison experiment #9

Open
haory-95 opened this issue Sep 11, 2021 · 1 comment

Comments

@haory-95

Hi, I recently read your paper and I'm very interested in it. Could you provide the code for the comparison experiments? For example, I reproduced the BigCLAM method and got a very low result (NMI = 0.05, whereas you report NMI = 0.26), so I am curious about what I did wrong. Thank you very much.

@shchur
Owner

shchur commented Sep 20, 2021

Hi, the link to the TensorFlow 1.0 code used to run the experiments is provided in README.md (https://figshare.com/s/30894e4172505d5dc070). I haven't ported the baselines like BigCLAM to PyTorch, but that should be relatively easy: just make F a learnable nn.Parameter and clip its negative values to zero after each gradient descent step.

I don't remember the details very well now (I wrote the code about 3 years ago), but I think the two main reasons for the poor performance of BigCLAM can be:

  1. Poor initialization. You can have a look at the original code to see how the F matrix was initialized there for BigCLAM.
  2. Poor choice of the threshold for assigning nodes to communities after training. IIRC, 0.5 was a good choice for balanced loss (edges balanced with non-edges), otherwise it was either 0.01 or 0.1, I don't remember exactly.
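The recipe above (a learnable nonnegative affiliation matrix F, clipping negatives to zero after each gradient step, then thresholding F to assign nodes to communities) can be sketched as projected gradient ascent on the BigCLAM log-likelihood. This is a minimal NumPy illustration, not the paper's or the original BigCLAM code; the function names, learning rate, iteration count, and the dense gradient formulation are all illustrative choices. In a PyTorch port, F would be an nn.Parameter clamped in-place after each optimizer step.

```python
import numpy as np

def bigclam(adj, k, lr=0.05, iters=200, eps=1e-8, seed=0):
    """Projected gradient ascent for the BigCLAM log-likelihood.

    adj : (n, n) symmetric 0/1 adjacency matrix (zero diagonal)
    k   : number of communities
    Returns a nonnegative affiliation matrix F of shape (n, k).
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    F = rng.uniform(0.0, 1.0, size=(n, k))  # nonnegative random init
    for _ in range(iters):
        S = F @ F.T                  # pairwise dot products F_u . F_v
        P = 1.0 - np.exp(-S)         # model's edge probabilities
        # Gradient of: sum_edges log(1 - exp(-S_uv)) - sum_non_edges S_uv
        coef = adj * (np.exp(-S) / np.clip(P, eps, None)) - (1.0 - adj)
        np.fill_diagonal(coef, 0.0)  # no self-loop terms
        F += lr * (coef @ F)         # gradient ascent step
        F = np.maximum(F, 0.0)       # projection: clip negatives to zero
    return F

def assign_communities(F, threshold=0.1):
    """Assign node u to community c whenever F[u, c] exceeds the threshold."""
    return F > threshold
```

The final threshold (0.5, 0.1, or 0.01, as discussed above) determines the community memberships, so it is worth tuning it the same way the original experiments did.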
