
Question about loading local models #8

Open
ChenHong30 opened this issue Feb 24, 2025 · 1 comment

Comments
@ChenHong30

Hi, thank you for your awesome work. While trying to reproduce your work on my local machine, I was wondering whether I can use local models (e.g., Llama3.2-1B / Llama3.1-8B downloaded from HF)?

@siyan-zhao
Collaborator

Hi, thank you for your interest! Yes, you can use local models from Huggingface; you'll just need to add the Huggingface generation code to the generate_message function.
Our current code accepts different model types (Claude, Mistral, Llama, GPT) through the Amazon Bedrock API. For local models, you'll need to replace the Bedrock-specific API calls with Huggingface's generation function and ensure you format the prompts correctly according to the chat template expected by your specific Llama model version. Let me know if you have further questions!
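For anyone else landing here, a minimal sketch of what that replacement might look like with the transformers library. This is not the repo's actual code: the function name generate_message is taken from the comment above, while the model id (meta-llama/Llama-3.2-1B-Instruct), argument names, and helper structure are assumptions for illustration.

```python
# Hypothetical sketch: swap the Bedrock call inside generate_message for a
# local Huggingface model, applying the model's own chat template.

def build_chat_messages(system_prompt, user_prompt):
    """Assemble the chat-format message list expected by apply_chat_template."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

def load_local_model(model_id="meta-llama/Llama-3.2-1B-Instruct"):
    """Load a model/tokenizer from a local path or the HF cache.

    Imports are kept inside the function so the pure-Python helper above
    works even without transformers installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return model, tokenizer

def generate_message(model, tokenizer, user_prompt, system_prompt=None,
                     max_new_tokens=256):
    """Generate a reply using the tokenizer's built-in chat template."""
    messages = build_chat_messages(system_prompt, user_prompt)
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens,
                            do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
```

Usage would be something like `model, tokenizer = load_local_model("path/to/local/model")` followed by `generate_message(model, tokenizer, "your prompt")`. Using `apply_chat_template` rather than hand-building prompt strings is what keeps the formatting correct across different Llama versions.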
