[Usage] How do you run the LLaVA NeXT-Vicuna 7B Baseline #5

Open
DanaOsama opened this issue Feb 10, 2025 · 1 comment
@DanaOsama

Describe the issue

Issue:
I am unable to run the LLaVA-NeXT-Vicuna-7B baseline. Could you tell me the exact Hugging Face checkpoint you used?
I am currently using llava-hf/llava-v1.6-vicuna-7b-hf, but it is not working.
I have also followed the steps outlined in the lmms-eval repo, but to no avail. Could you please specify the exact command for running this baseline?

@jungle-gym-ac
Collaborator

Hi, we use the official LLaVA-1.6 checkpoint liuhaotian/llava-v1.6-vicuna-7b (available on the Hugging Face Hub). You can use this model name and follow the steps outlined in the lmms-eval repo for evaluation.
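
In case it helps, below is a sketch of the kind of command lmms-eval expects for this checkpoint. It is not our exact baseline command; the task name (mme), process count, and conv_template value are illustrative and may need to be adjusted for your lmms-eval version.

```bash
# Sketch of an lmms-eval run for the official LLaVA-NeXT (v1.6) Vicuna-7B checkpoint.
# The task (mme), number of processes, and conv_template here are placeholders.
accelerate launch --num_processes=8 -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.6-vicuna-7b,conv_template=vicuna_v1" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```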

By the way, compared to the original data used to train liuhaotian/llava-v1.6-vicuna-7b, the released 779K dataset lmms-lab/LLaVA-NeXT-Data lacks 15K user-instruction samples due to license issues and policy concerns. To ensure a fair comparison, we trained a LLaVA-NeXT model on this released 779K data with the official LLaVA-NeXT codebase and use that model as our baseline.
