
Update to more recent ggml format #4

Open

Phylliida opened this issue Jun 20, 2023 · 0 comments

Phylliida commented Jun 20, 2023

I ported your llama.cpp changes onto the most recent llama.cpp.

Then I had to modify LLaMPPL/llamppl/llama_cpp.py to use the new code from llama_cpp_python; you can see the new file here.

The easier change on your end is probably to merge main into your llama_cpp branch and edit llama_cpp.py yourself, but these are here if needed.

Edit: hmm, I'm hitting this issue with my changes when I offload to the GPU; hold on, let me look into it:

GGML_ASSERT: C:\...\llama-cpp-python\vendor\llama.cpp\ggml.c:15154: tensor->src0->backend == GGML_BACKEND_CPU

Edit 2: never mind, those asserts fire just because eval_multi doesn't have GPU support yet.
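For anyone who hits the same assert before eval_multi gains GPU support, a minimal workaround sketch is to keep all layers on the CPU when creating the context, so eval_multi never touches GPU-resident tensors. This assumes the mid-2023 llama-cpp-python-style low-level bindings mirrored in llamppl/llama_cpp.py (llama_context_default_params, the n_gpu_layers field, llama_init_from_file); the model path is a placeholder.

```python
# Workaround sketch, not the project's official fix: disable GPU offload so
# every tensor stays on GGML_BACKEND_CPU, which is what the assert checks.
# Binding names assume the mid-2023 llama_cpp low-level API; adjust to the
# version actually vendored in llamppl/llama_cpp.py.
import llamppl.llama_cpp as llama_cpp

params = llama_cpp.llama_context_default_params()
params.n_gpu_layers = 0  # keep all layers on the CPU until eval_multi supports GPU

# Placeholder model path; the C API of that era takes a bytes path.
ctx = llama_cpp.llama_init_from_file(b"models/7B/ggml-model.bin", params)
```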
