Text2ASCII

Goals:

  1. Figure out how to train GPT-3

  2. Collect datasets we can use for training

Proposal Feedback for easy access:

This is a great project idea and I acknowledge the risk involved. I think you should try doing this. I'm fine even if your model only works in some restricted scenario -- e.g. generate ASCII art for emoticons or something like that. I think you have good ideas here. Directly connecting an existing text-to-image + image-to-ASCII model and evaluating on some text-ASCII dataset of your creation could be a good minimum viable project solution, ensuring you have something to deliver. On top of that, your other ideas are also good. For example, fine-tuning a regular language model, e.g. GPT-2, would be more feasible if you plan to use your own resources and not give money to OpenAI. Another solution would be to train the text-to-image and image-to-ASCII models end-to-end, e.g. backprop gradients from the image-to-ASCII model into the text-to-image model. Lots of good ideas, please feel free to start.
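As a rough sketch of the GPT-2 fine-tuning route mentioned in the feedback (assuming a hypothetical `pairs.jsonl` file of caption/ASCII-art pairs; the file name, separators, and hyperparameters are illustrative placeholders, not settled choices):

```python
# Minimal sketch: fine-tune GPT-2 on (caption -> ASCII art) pairs with Hugging Face.
# Assumes a hypothetical pairs.jsonl with one {"caption": ..., "ascii": ...} object per line.
import json

from datasets import Dataset
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Concatenate caption and art into one training string with clear separator tokens.
def load_pairs(path):
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            yield {"text": f"{ex['caption']}\n<ART>\n{ex['ascii']}\n<END>"}

dataset = Dataset.from_list(list(load_pairs("pairs.jsonl")))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-text2ascii",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

This keeps everything local (no OpenAI costs), which matches the feedback's point about using our own resources.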

Resources:

https://github.com/norahsakal/fine-tune-gpt3-model
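The linked repo walks through OpenAI's (legacy) fine-tuning workflow, which expects a JSONL file of prompt/completion pairs. A minimal sketch of building such a file from collected examples (the example data, file name, and separator strings below are assumptions for illustration, not taken from that repo):

```python
# Sketch: convert collected (description, ascii_art) examples into the JSONL
# prompt/completion format expected by OpenAI's legacy fine-tuning endpoint.
import json

# Placeholder examples; the real dataset would come from our collection step.
examples = [
    {"description": "smiling cat face", "ascii_art": "=^.^="},
    {"description": "shrug emoticon", "ascii_art": r"¯\_(ツ)_/¯"},
]

with open("text2ascii_finetune.jsonl", "w") as f:
    for ex in examples:
        record = {
            "prompt": f"{ex['description']}\n\n###\n\n",   # separator marks end of prompt
            "completion": f" {ex['ascii_art']} END",       # leading space + stop sequence
        }
        f.write(json.dumps(record) + "\n")
```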
