
Support with sentence transformers #161

Open
chrisconstant opened this issue Jan 10, 2025 · 0 comments

Comments


chrisconstant commented Jan 10, 2025

I've been trying to use your library with Sentence Transformers for contrastive supervised fine-tuning, but with no luck. I get the following error:
element 0 of tensors does not require grad and does not have a grad_fn

Minimal reproducible example:

import torch
from llm2vec import LLM2Vec

l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
    peft_model_name_or_path="McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse",
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

query = "What is the capital of France?"
doc = "The capital of France is Paris"
labels = torch.tensor([1.0])  # CosineEmbeddingLoss expects float targets of 1 or -1
loss = torch.nn.CosineEmbeddingLoss()

query_token = l2v.tokenizer(query, return_tensors="pt")
doc_token = l2v.tokenizer(doc, return_tensors="pt")
query_emb = l2v.encode(query_token)
doc_emb = l2v.encode(doc_token)

out = loss(query_emb, doc_emb, labels)
out.backward()  # fails here: element 0 of tensors does not require grad and does not have a grad_fn

Are there any plans to add support for the Sentence Transformers library?
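
As a possible workaround in the meantime, here is a minimal sketch that bypasses encode() and runs the wrapped Hugging Face model directly so gradients flow. This is not verified against the library internals: the l2v.model attribute and plain mean pooling over the last hidden state are assumptions for illustration, not confirmed LLM2Vec API (the library's own pooling strategy may differ).

import torch

# Assumption: the underlying Hugging Face model is reachable as l2v.model
# (hypothetical attribute for this sketch); encode() appears to run without
# gradient tracking, so we call the model's forward pass ourselves.
def embed_with_grad(l2v, text):
    tokens = l2v.tokenizer(text, return_tensors="pt").to(l2v.model.device)
    outputs = l2v.model(**tokens, output_hidden_states=True)
    hidden = outputs.hidden_states[-1]            # (1, seq_len, dim)
    mask = tokens["attention_mask"].unsqueeze(-1)
    # Simple mean pooling over non-padding tokens; an assumption here,
    # LLM2Vec may use a different pooling (e.g. weighted mean or EOS token).
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

query_emb = embed_with_grad(l2v, "What is the capital of France?")
doc_emb = embed_with_grad(l2v, "The capital of France is Paris")
loss = torch.nn.CosineEmbeddingLoss()
out = loss(query_emb, doc_emb, torch.tensor([1.0], device=query_emb.device))
out.backward()  # gradients now reach the trainable (PEFT adapter) weights

If something like this matches the intended training path, exposing it behind a documented forward/training API would make Sentence Transformers integration much easier.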
