This is super cool! #72
-
@turnipsforme which model did you download? - There's a bug I found recently; will patch soon to enable models with smaller context windows like Dolly.
-
I couldn't get any model to work other than MPT 7B, and that one primarily returned gibberish (speaking in symbols, Chinese, or just random sentences). This was actually pretty funny; at one point it said "Hello, ���� come to my world" in Chinese in response to me saying hello (in English haha)
-
@turnipsforme how much RAM do you have? Can you try the Wizard model? Those should work very well, at least on my machine, if you can load MPT :-?...
-
@turnipsforme The MPT models are very bound to their prompt formats, especially if you use the Instruct or Chat versions. Furthermore, this project doesn't use the huggingface tokenizers yet, which results in some weird decoding errors.
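For illustration, here is a rough sketch of the dolly-style instruction template that mpt-7b-instruct is tuned on (the template wording follows the published model card; the helper name is my own, not part of this project). Raw input like a bare "hello" falls outside this template, which is one reason decoded output can look like gibberish:

```python
# Sketch: dolly-style prompt template expected by mpt-7b-instruct.
INSTRUCTION_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

def format_mpt_instruct_prompt(instruction: str) -> str:
    """Wrap a user instruction in the template before tokenizing."""
    return INSTRUCTION_TEMPLATE.format(instruction=instruction)
```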
-
Ooh right, I need to add the remote vocab soon; need to embed it into the model look-up somehow :d.....
-
@turnipsforme Just shipped an update which should enable more models such as Dolly and GPT-J, please try them out. Also, I'd test out the Wizard model if your machine can run it :)
-
Yess, both GPT-J and Dolly work! I only have 8 gigs of RAM, maybe that's why? Wizard works too (it's the closest so far to ChatGPT in terms of smarts, everything else has been a little.. off haha). Regardless, Dolly responds suuuper quickly and the app feels much better now! Looking forward to seeing this continue to grow!
-
I just wanted to also give some thoughts on this in terms of uses. I would personally primarily use this to help with text transformation (shorter/longer/remove elements/rewrite as a checklist…), since sensitive work documents can't go into ChatGPT. If the app were better suited for this kind of work, it would be amazing (I think a lot of people's work is sensitive and has to stay local)! Either way, best of luck!! :)
-
@turnipsforme yeah, 8GB of RAM is too little to run a 7B model, and even a 3B model IMO. You need at least 12GB (16GB preferred) for a 3B model to run at an OK speed.
That's one of the core purposes of local.ai IMO!
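As a back-of-envelope sketch of where those numbers come from (the helper name and overhead figure are my own assumptions, not part of local.ai): resident weight memory is roughly parameter count times bytes per weight, plus a fixed allowance for the runtime, KV cache, and the OS.

```python
def estimate_ram_gb(n_params_billion: float,
                    bytes_per_weight: float = 2.0,
                    overhead_gb: float = 1.0) -> float:
    """Rough RAM needed to hold model weights plus runtime overhead.

    bytes_per_weight: 2.0 for fp16 weights, ~0.5 for 4-bit quantized.
    overhead_gb: assumed fixed allowance for runtime buffers and cache.
    """
    return n_params_billion * bytes_per_weight + overhead_gb

# A 7B model in fp16 needs roughly 15 GB; 4-bit quantization brings
# the weights themselves down near 4.5 GB, before OS memory use.
```

On an 8GB machine the OS already claims a few gigabytes, which is why even a quantized 7B model can be a tight fit.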
-
This project looks so cool, but unfortunately on my MBP M1 with 16GB RAM I cannot even start a new thread... I can download models fine (though I have to delete them manually, because the integrated trash button doesn't react either). Any idea what I can do to be able to use it?
-
Hi! LOOOVE the idea behind this (for privacy reasons, but also the idea of running a model locally is just super cool). I can't get past downloading a model and sending a message; I'm not getting anything back. On an M1 MacBook Air. Thanks!