List boundary discards one token in the context window #10
Why create a boundary like this, instead of simply using the `window_size` value directly?
Because with the random function you implicitly give "more importance" to the closest words in the neighbourhood, by creating more training pairs with those "close" tokens.
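To make the effect concrete, here is a minimal sketch, assuming the boundary is drawn with `np.random.randint(1, window_size)` as at the linked line in `data_reader.py` (the window size value is a made-up example). It estimates how often a token at distance `d` from the centre word falls inside the random window; closer tokens are sampled more often, which is the implicit weighting described above:

```python
import numpy as np

WINDOW_SIZE = 5      # assumed value for illustration
TRIALS = 100_000

# counts[d - 1] = how many trials include the token at distance d
counts = np.zeros(WINDOW_SIZE, dtype=int)
for _ in range(TRIALS):
    boundary = np.random.randint(1, WINDOW_SIZE)  # draws 1 .. WINDOW_SIZE - 1
    counts[:boundary] += 1  # distances 1..boundary fall inside the window

for d, c in enumerate(counts[: WINDOW_SIZE - 1], start=1):
    print(f"distance {d}: included {c / TRIALS:.2%} of the time")
```

With `WINDOW_SIZE = 5` this prints roughly 100%, 75%, 50%, 25% for distances 1 through 4, i.e. a token at distance `d` is kept with probability `(WINDOW_SIZE - d) / (WINDOW_SIZE - 1)`.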
Do you mean explicitly? And why not just reduce the window size and keep it fixed, instead of using a boundary? Is this use of a boundary found in other implementations?
Can the concept of a boundary be applied to CBOW-style training? I implemented it and I'm stuck: the size of the context varies from phrase to phrase as the boundary changes, and putting it all into a single tensor creates big problems for me! (See the sketch below for one workaround.)
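One common way around this (not something the repo itself does, just a sketch under that assumption) is to pad the variable-length contexts into a rectangular batch and mask the padding when averaging the context embeddings. The token ids, `PAD_ID`, and embedding sizes below are all hypothetical:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Hypothetical variable-length CBOW contexts (token ids), one per
# centre word; lengths differ because the random boundary differs.
contexts = [
    torch.tensor([3, 7, 9]),
    torch.tensor([2, 5]),
    torch.tensor([1, 4, 6, 8]),
]

PAD_ID = 0  # assumed padding index; reserve it in your vocabulary

# Pad to a rectangular (batch, max_len) tensor.
batch = pad_sequence(contexts, batch_first=True, padding_value=PAD_ID)

# Mask out the padding so the CBOW mean covers only real tokens.
mask = (batch != PAD_ID).float()                     # (B, L)
emb = torch.nn.Embedding(50, 16, padding_idx=PAD_ID)  # toy sizes
ctx = emb(batch)                                      # (B, L, D)
mean = (ctx * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
print(mean.shape)  # torch.Size([3, 16])
```

With `padding_idx` set on the embedding, the pad vector also stays zero and receives no gradient, so the padding does not leak into the averaged context.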
word2vec-pytorch/word2vec/data_reader.py, line 102 at commit 36b93a5
I think `i + boundary` should include a `+ 1` to make the slice inclusive; otherwise the right context takes one token less than the left in the resulting skipgrams.
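A minimal sketch of the proposed fix (paraphrasing the snippet at the linked line, not quoting it verbatim). Python slice ends are exclusive, so without the `+ 1` the right context loses its furthest token:

```python
def skipgram_pairs(word_ids, boundary):
    # `i + boundary + 1`: the +1 compensates for the exclusive slice
    # end, so the right context keeps all `boundary` tokens.
    return [
        (u, v)
        for i, u in enumerate(word_ids)
        for v in word_ids[max(i - boundary, 0): i + boundary + 1]
        if u != v
    ]

print(skipgram_pairs([10, 20, 30, 40, 50], boundary=2))
```

For the centre word 30 this yields the symmetric pairs (30, 10), (30, 20), (30, 40), (30, 50); with the original `i + boundary` end, the pair (30, 50) would be dropped.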