This repository has been archived by the owner on Aug 30, 2024. It is now read-only.
When I load the "meta-llama/Meta-Llama-3-8B-Instruct" model like this:
```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # Hugging Face model_id or local model
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
```
the process hangs, and the only way to recover is to restart the instance.
Is there an issue with my spec?
My instance: Ubuntu, 32 GB RAM.