
Is it normal for the embedding model to take about a minute to find the top-k vectors when the corpus has tens of millions of tokens? #5210

Open
Glen-Chen-Blue opened this issue Feb 2, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@Glen-Chen-Blue
I am currently using Ollama's mxbai-embed-large as the embedding model. The database holds one thousand papers, roughly 60 million characters in total. Initializing the vector store takes four to five hours, and answering a single query takes about a minute. Is this normal? My GPU is a 4090 with 24 GB of VRAM.

@Glen-Chen-Blue Glen-Chen-Blue added the bug Something isn't working label Feb 2, 2025
@Glen-Chen-Blue
Author

My vector store is the default FAISS.
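As a point of reference, the default FAISS index for this kind of setup is a flat (exact) index, which computes one similarity score per stored vector at query time. The sketch below (a minimal illustration with synthetic data, assuming a scaled-down corpus of 50k chunk vectors at mxbai-embed-large's 1024-dim output) shows what that brute-force top-k scan amounts to in plain NumPy; on modern hardware it completes in well under a second, which suggests a one-minute query is more likely dominated by the embedding call than by the vector search itself.

```python
import numpy as np

d = 1024                      # mxbai-embed-large embedding dimension
n = 50_000                    # hypothetical chunk count (scaled down for the demo)
rng = np.random.default_rng(0)
db = rng.standard_normal((n, d)).astype("float32")   # stand-in stored vectors
q = rng.standard_normal(d).astype("float32")         # stand-in query embedding

# Brute-force top-k: one dot product per stored vector, then a partial sort.
# This is essentially what a flat FAISS index does on every query.
scores = db @ q
k = 5
topk = np.argpartition(-scores, k)[:k]         # k best, unordered
topk = topk[np.argsort(-scores[topk])]         # order them best-first
```

If the search step does turn out to be the bottleneck at larger scale, FAISS offers approximate indexes (e.g. IVF variants) that trade a little recall for much faster queries than the flat scan sketched here.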
