
HAMi: vGPU core compute limit (vcores) does not appear to be strictly enforced #786

Open
yizhouv5 opened this issue Jan 8, 2025 · 2 comments

Comments


yizhouv5 commented Jan 8, 2025

Please provide an in-depth description of the question you have:
On a GPU node with 8 A100 cards, I requested two vGPUs (using an annotation to pin each to a specific physical GPU). One job requested 10% of the compute and the other requested 50%. From the monitoring data and the AI job metrics, the vcores percentage setting does not appear to have any effect on workload performance.
Job 1: [screenshot 01]

Job 2: [screenshot 02]

Monitoring metrics: [screenshot 03]

What do you think about this question?:
Is this core limit related to the CUDA version, or is there something wrong with the job YAML?

Environment:

  • HAMi version: v2.4.1
  • Kubernetes version: v1.29.7
  • CUDA: 12.6
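
For reference, a minimal sketch of what a job spec with a 10% compute cap could look like, assuming the default HAMi resource names (`nvidia.com/gpucores`, `nvidia.com/gpumem`) and the `nvidia.com/use-gpuuuid` annotation for pinning to a physical GPU. The resource names, annotation key, image, and values here are illustrative assumptions and may differ from the actual deployment:

```yaml
# Hypothetical pod spec; resource/annotation names follow the HAMi defaults
# and may differ if the device plugin was deployed with custom names.
apiVersion: v1
kind: Pod
metadata:
  name: vgpu-job-1
  annotations:
    # Pin the vGPU to a specific physical GPU by UUID (placeholder value).
    nvidia.com/use-gpuuuid: "GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
spec:
  containers:
    - name: cuda-workload
      image: nvidia/cuda:12.4.1-base-ubuntu22.04   # placeholder image
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpu: 1         # number of vGPUs
          nvidia.com/gpumem: 20000  # device memory in MiB
          nvidia.com/gpucores: 10   # percentage of compute (10% for job 1)
```

If the core limiter is active, utilization of the shared GPU should stay capped near the requested percentage under sustained load; whether that enforcement depends on the CUDA version in the container is the open question here.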
@archlitchi
Collaborator

Is the device-memory successfully limited?

@yizhouv5
Author

The device-memory can be successfully limited.
