Experiments with Large-Scale Data #1
Comments
@ZoomWang666 @lipan00123
Hello, I think you could try reducing the PE dimension to save GPU memory. PEG-LE+ and PEG-DW+ need more GPU memory than PEG-LE and PEG-DW, and we are planning to address this issue.
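For PEG-LE, the PE dimension corresponds to the number of Laplacian eigenvectors used, so shrinking it directly shrinks both the PE matrix and every layer that consumes it. Below is a minimal sketch of that idea; the `laplacian_pe` helper and its scipy-based construction are illustrative assumptions, not the repository's actual code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_pe(edge_index, num_nodes, pe_dim=32):
    """Hypothetical helper: pe_dim-dimensional Laplacian-eigenvector PE.
    Lowering pe_dim (e.g. 128 -> 32) reduces memory roughly linearly in
    the PE width. edge_index is a (2, num_edges) numpy array."""
    row, col = edge_index
    A = sp.coo_matrix((np.ones(row.shape[0]), (row, col)),
                      shape=(num_nodes, num_nodes))
    A = ((A + A.T) > 0).astype(np.float64)            # symmetrize
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = sp.eye(num_nodes) - d_inv_sqrt @ A @ d_inv_sqrt  # normalized Laplacian
    # k+1 smallest eigenpairs; the first (trivial) one is dropped
    vals, vecs = eigsh(L, k=pe_dim + 1, which='SM')
    return vecs[:, 1:]
```

On a 700k-node graph, `eigsh` with `which='SM'` can be slow; shift-invert (`sigma=0`) or approximate eigensolvers are common workarounds, though that is separate from the memory question.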
@ZoomWang666
What do you think about this one?
This project is still ongoing. The ICLR paper mainly focuses on the theoretical understanding of positional encoding. We are now working on a standard framework of PEG for large-scale networks (100M+ nodes), and that framework will be released later.
Thank you for your reply.
I am looking forward to it!
@dcm-nakashima Thanks for checking our work. May I ask a follow-up question? When you decreased the hidden_channels dimension and used PEG-DW or PEG-LE instead of PEG-DW+ or PEG-LE+, did the code run successfully on your dataset with 700k nodes? I am also curious whether you can run a standard GCN on your network with your 16 GB GPU. If even a standard GCN cannot run, the current PEG pipeline will not work either, and the graph partitioning/downsampling-based pipeline that we are working on would be needed.
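As a concrete version of that sanity check, here is a sketch of a full-batch GCN memory probe. It assumes PyTorch Geometric is installed; the two-layer model, 128-dimensional random features, and random edges are placeholders for the real data, not part of PEG.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

device = torch.device('cuda')
num_nodes, num_edges = 700_000, 6_000_000    # sizes from the question
x = torch.randn(num_nodes, 128, device=device)          # placeholder features
edge_index = torch.randint(num_nodes, (2, num_edges), device=device)

model = GCN(128, 256, 128).to(device)
out = model(x, edge_index)       # full-batch forward pass
out.sum().backward()             # backward roughly doubles the footprint
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```

If this full-batch pass alone exceeds 16 GiB, the current PEG pipeline will not fit either, and mini-batching over graph partitions becomes necessary.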
@lipan00123
Thanks for sharing the script.
I am trying to run an experiment on my own large dataset:
| #Nodes | #Edges |
| --- | --- |
| 700,000 | 6,000,000 |
I would like to obtain node embeddings for this graph.
Unfortunately, I ran into CUDA out-of-memory errors even on the collab dataset in my environment.
I used a g4dn.16xlarge AWS instance (GPU: 1× NVIDIA T4, 16 GB).
Can I use PEG while saving GPU memory?
Have you done any experiments with datasets of this size?
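One memory-saving direction, in line with the partitioning pipeline mentioned in the replies above, is cluster-based mini-batching. A minimal sketch with PyTorch Geometric's ClusterData/ClusterLoader follows (this requires the METIS backend from pyg-lib or torch-sparse; whether and how it composes with PEG's positional encodings is an assumption here, not something the repository currently provides):

```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import ClusterData, ClusterLoader
from torch_geometric.utils import to_undirected

# Placeholder graph at the question's scale (features are random).
num_nodes, num_edges = 700_000, 6_000_000
data = Data(
    x=torch.randn(num_nodes, 128),
    edge_index=to_undirected(torch.randint(num_nodes, (2, num_edges))),
)

# METIS-partition into subgraphs small enough for a 16 GB GPU and
# train on a handful of partitions per step instead of the full graph.
cluster_data = ClusterData(data, num_parts=1024)
loader = ClusterLoader(cluster_data, batch_size=4, shuffle=True)

for batch in loader:
    batch = batch.to('cuda')
    # forward/backward on this subgraph only; positional encodings would
    # need to be precomputed or computed per partition (an assumption
    # about adapting PEG, not its released pipeline)
    ...
```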