Updated to allow the selection of GPU for embedding where there is mo… #1734
base: main
@@ -7,7 +7,7 @@
 from private_gpt.settings.settings import Settings

 logger = logging.getLogger(__name__)

+import torch

 @singleton
 class EmbeddingComponent:
@@ -28,9 +28,31 @@ def __init__(self, settings: Settings) -> None:
                         "Local dependencies not found, install with `poetry install --extras embeddings-huggingface`"
                     ) from e

+                # Get the number of available GPUs
+                num_gpus = torch.cuda.device_count()
Review comment: Adding code to the codebase just to print information is not good practice. I'd remove this whole block of prints.
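If the device information is still useful for debugging, one alternative (not suggested in the review, purely illustrative) would be to route it through the module's existing logger instead of print. A minimal sketch, with the log level chosen arbitrarily:

import logging

import torch

logger = logging.getLogger(__name__)

# Log the detected CUDA devices at debug level instead of printing them.
num_gpus = torch.cuda.device_count()
if num_gpus > 0:
    for i in range(num_gpus):
        logger.debug("CUDA device %d: %s", i, torch.cuda.get_device_name(i))
else:
    logger.debug("No CUDA devices available, falling back to CPU")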
+
+                if num_gpus > 0:
+                    print("Available CUDA devices:")
+                    for i in range(num_gpus):
+                        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
+                else:
+                    print("No CUDA devices available. Switching to CPU.")
+
+                # Check if CUDA is available
+                if torch.cuda.is_available():
+                    # If settings.embedding.gpu is specified, use that GPU index
+                    if hasattr(settings, 'huggingface') and hasattr(settings.huggingface, 'gpu_type'):
+                        device = torch.device(f"{settings.huggingface.gpu_type}:{settings.huggingface.gpu_number}")
+                    else:
+                        device = torch.device('cuda:0')
+                else:
+                    # If CUDA is not available, use CPU
+                    device = torch.device("cpu")
Review comment: What happens with laptops using a GPU that is not Nvidia based, for example a MacBook running a Metal GPU? Will this make embedding slower by forcing them onto the CPU?

Review comment: This logic looks similar to llama_index.core.utils.infer_torch_device, which handles Metal (mps).
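For reference, a rough sketch of how the helper the reviewer mentions could replace the manual CUDA check. pick_embedding_device is a hypothetical name, and the gpu_type/gpu_number arguments simply mirror the settings used in this diff; infer_torch_device returns "cuda", "mps" or "cpu", so Metal-equipped Macs would not be silently forced onto the CPU:

from llama_index.core.utils import infer_torch_device


def pick_embedding_device(gpu_type: str | None = None, gpu_number: int | None = None) -> str:
    # Honour an explicit GPU selection when one is configured...
    if gpu_type is not None and gpu_number is not None:
        return f"{gpu_type}:{gpu_number}"
    # ...otherwise let llama-index detect the backend (CUDA, Metal/mps, or CPU).
    return infer_torch_device()


# pick_embedding_device()           -> e.g. "mps" on an Apple Silicon MacBook
# pick_embedding_device("cuda", 1)  -> "cuda:1"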
+
+                print("Embedding Device: ",device)
                 self.embedding_model = HuggingFaceEmbedding(
                     model_name=settings.huggingface.embedding_hf_model_name,
                     cache_folder=str(models_cache_path),
+                    device=device
                 )
             case "sagemaker":
                 try:
Review comment: I'd move this to the try block within the "huggingface" case. There is no "torch" general dependency declared in pyproject.toml, so this could break the whole execution for people not using huggingface. Actually, we may need to add torch to embeddings-huggingface = ["llama-index-embeddings-huggingface"] in pyproject.toml. I think the huggingface package from llama-index already depends on torch, but given we are now importing it explicitly we should also depend on it.
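A sketch of what that could look like in pyproject.toml, assuming the same Poetry optional-dependency/extras mechanism the project already uses for the huggingface extra; the version constraint is a placeholder:

[tool.poetry.dependencies]
# Optional, so users who never run the huggingface embedding mode do not pull in torch.
torch = {version = "*", optional = true}

[tool.poetry.extras]
embeddings-huggingface = ["llama-index-embeddings-huggingface", "torch"]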