From ca1356ef7bc808cbacc870d5d14e0f91b8e0ac99 Mon Sep 17 00:00:00 2001
From: BobConanDev
Date: Fri, 22 Nov 2024 14:30:26 -0500
Subject: [PATCH] Updated Readme.md, fix typo(s)

---
 Readme.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Readme.md b/Readme.md
index a14aa2de..f0053ac5 100644
--- a/Readme.md
+++ b/Readme.md
@@ -7,7 +7,7 @@ If you are new to Torch/Lua/Neural Nets, it might be helpful to know that this c
 
 ## Update: torch-rnn
 
-[Justin Johnson](http://cs.stanford.edu/people/jcjohns/) (@jcjohnson) recently re-implemented char-rnn from scratch with a much nicer/smaller/cleaner/faster Torch code base. It's under the name [torch-rnn](https://github.com/jcjohnson/torch-rnn). It uses Adam for optimization and hard-codes the RNN/LSTM forward/backward passes for space/time efficiency. This also avoids headaches with cloning models in this repo. In other words, torch-rnn should be the default char-rnn implemention to use now instead of the one in this code base.
+[Justin Johnson](http://cs.stanford.edu/people/jcjohns/) (@jcjohnson) recently re-implemented char-rnn from scratch with a much nicer/smaller/cleaner/faster Torch code base. It's under the name [torch-rnn](https://github.com/jcjohnson/torch-rnn). It uses Adam for optimization and hard-codes the RNN/LSTM forward/backward passes for space/time efficiency. This also avoids headaches with cloning models in this repo. In other words, torch-rnn should be the default char-rnn implementation to use now instead of the one in this code base.
 
 ## Requirements
 