What happened?

So I'm running this on a Chromebook that llamafile reports as having the following CPU: Intel Core i5-8200Y CPU @ 1.30GHz (skylake)
I have successfully run SmolLM2 (135M & 360M), TinyLlama 1.1B and Qwen 2.5 0.5B on this machine.
As is my wont, I used the following command line (after renaming the downloaded model to something less generic):
./llamafile --chat -f watchmen_chronology.txt -m Falcon3-1B-Instruct-1.58bit-GGUF
It crashed immediately with the following error:
llama.cpp/ggml.c:19663: GGML_ASSERT(0 <= info->type && info->type < GGML_TYPE_COUNT) failed
error: Uncaught SIGABRT (SI_TKILL) at 0x3e8000040ce on penguin pid 16590 tid 16590
./llamafile
No error information
Linux Cosmopolitan 4.0.2 MODE=x86_64; #1 SMP PREEMPT_DYNAMIC Thu, 7 Nov 2024 16:44:28 +0000 penguin 6.6.54-05528-gdd4efe62d86b
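That assert is ggml's GGUF loader rejecting a tensor whose declared type id falls outside the range this build knows about, so the 1.58-bit file most likely uses a quantization type (e.g. one of the newer ternary quants) added to llama.cpp after the snapshot bundled in llamafile v0.9.0. One way to check is a minimal sketch like the one below, assuming the `gguf` Python package that ships with the llama.cpp repo (pip install gguf); the path is the file from the report:

```python
# Minimal sketch, assuming the `gguf` package from llama.cpp.
# Lists each tensor's quantization type; a type id newer than the
# bundled ggml would trip the loader's range check on info->type.
from gguf import GGUFReader

reader = GGUFReader("Falcon3-1B-Instruct-1.58bit-GGUF")
for tensor in reader.tensors:
    print(tensor.name, tensor.tensor_type.name, tuple(tensor.shape))
```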
Version

llamafile v0.9.0

What operating system are you seeing the problem on?

Linux
I followed up by trying a less exotic Falcon 3 quant from tiiuae/Falcon3-1B-Instruct-GGUF.
Running ./llamafile --verbose --chat -f watchmen_chronology.txt -m Falcon3-1B-Instruct-q4_k_m.gguf crashed with the following error:
llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'falcon3'
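This failure is one step later: the tensor types in the q4_k_m file load fine, but the loader only accepts pre-tokenizer names it was built to recognize, and 'falcon3' apparently postdates the llama.cpp snapshot inside llamafile v0.9.0. A minimal sketch for reading the field the loader is rejecting, again assuming the `gguf` package (the string extraction via parts[-1] is my assumption about gguf-py's field layout):

```python
# Minimal sketch, assuming the `gguf` package (pip install gguf).
# Prints the tokenizer.ggml.pre metadata value that the loader rejects.
from gguf import GGUFReader

reader = GGUFReader("Falcon3-1B-Instruct-q4_k_m.gguf")
field = reader.fields.get("tokenizer.ggml.pre")
if field is None:
    print("no tokenizer.ggml.pre field present")
else:
    # Assumption: for a simple string field, the last part holds the bytes.
    print(field.parts[-1].tobytes().decode("utf-8"))  # expect: falcon3
```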