Continuous-Representation-Experiment

In this project I train and evaluate three methods for next-word prediction using LSTMs with continuous-valued inputs and outputs. In the approach I call sequence to token (S2T), an input sequence is encoded as float-valued embeddings and used to predict a final masked token in the sequence.
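As a rough illustration of the S2T setup described above, the sketch below shows an LSTM that reads a sequence of continuous embeddings and predicts the masked final token as logits over the vocabulary. This is a minimal PyTorch sketch under assumed names and dimensions (`S2TModel`, `vocab_size`, `embed_dim`, `hidden_dim` are all hypothetical), not the repository's actual code.

```python
# Hypothetical sketch of sequence-to-token (S2T) prediction: an LSTM
# consumes a context of continuous token embeddings and predicts the
# masked final token. Architecture details are assumptions.
import torch
import torch.nn as nn

class S2TModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # token id -> float embedding
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)      # final hidden state -> token logits

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) context tokens; the final token is masked out
        x = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)         # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])          # (batch, vocab_size) logits for the masked token

model = S2TModel()
context = torch.randint(0, 1000, (2, 10))  # batch of 2 sequences, 10 context tokens each
logits = model(context)
print(logits.shape)  # torch.Size([2, 1000])
```

Training such a model would typically minimize cross-entropy between these logits and the true final token.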
