This is a feature request to change the attention transformer's output dimension to equal the number of features rather than the number of post-embedding dimensions.
This change has shown improved convergence and better RMSE on the Forest Cover Type dataset.
It may also fix training with large embedding dimensions, which inflate the mask size.
One problem I see is that sparsemax does not know which columns come from the same embedded column, which could force the model to learn two things at once:
- create embeddings that make sense
- mask embeddings without destroying them; since sparsemax is sparse, it is very unlikely that all the columns from the same embedding are kept, so you lose the power of your embedding
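One way around this would be to aggregate the mask per original feature before applying it, so that every column of the same embedding shares a single attention value and sparsity cannot zero out part of an embedding. A minimal NumPy sketch of that idea (the function name and `group_sizes` argument are illustrative, not the library's API):

```python
import numpy as np

def group_mask(mask, group_sizes):
    """Collapse a post-embedding attention mask so each embedded
    feature's columns share one value.

    mask: (batch, sum(group_sizes)) attention over embedded columns.
    group_sizes: number of embedding columns per original feature.
    Returns a mask of the same shape where each group's columns carry
    the mean of their per-column values, so a sparse mask keeps or
    drops each embedding as a whole rather than column by column.
    """
    out = np.empty_like(mask)
    start = 0
    for size in group_sizes:
        cols = mask[:, start:start + size]
        out[:, start:start + size] = cols.mean(axis=1, keepdims=True)
        start += size
    return out

# Two features embedded into 3 and 2 columns respectively: a sparse
# per-column mask keeps only one column of each embedding...
mask = np.array([[0.0, 0.6, 0.0, 0.4, 0.0]])
# ...but after grouping, each embedding gets one shared weight.
print(group_mask(mask, [3, 2]))
```

This keeps the mask interpretable at the original-feature level, which is the same motivation as shrinking the attention transformer's output to the number of features.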
See dreamquark-ai PR #443.