diff --git a/docs/user-manuals/device-scheduling-gpu-share-with-hami.md b/docs/user-manuals/device-scheduling-gpu-share-with-hami.md
index dc7e19702..839558de4 100644
--- a/docs/user-manuals/device-scheduling-gpu-share-with-hami.md
+++ b/docs/user-manuals/device-scheduling-gpu-share-with-hami.md
@@ -24,7 +24,6 @@ The scheduled GPU devices are bound to the container requires support from the r
 | Runtime Environment | Installation |
 | --------------------------------------------- | ------------------------------------------------------------ |
 | Containerd >= 1.7.0<br/>Koordinator >= 1.6 | Please make sure NRI is enabled in containerd. If not, please refer to [Enable NRI in Containerd](https://github.com/containerd/containerd/blob/main/docs/NRI.md) |
-| others | Please make sure koord-runtime-proxy component is correctly installed in you cluser. If not, please refer to [Installation Runtime Proxy](installation-runtime-proxy). |
 
 #### HAMi-Core Installation
 
@@ -59,7 +58,7 @@ spec:
         - /bin/sh
         - -c
         - |
-          cp -f /lib64/libvgpu.so /data/bin && sleep 3600000
+          cp -f /k8s-vgpu/lib/nvidia/libvgpu.so /data/bin && sleep 3600000
         image: docker.m.daocloud.io/projecthami/hami:v2.4.0
         imagePullPolicy: Always
         name: name
@@ -73,10 +72,6 @@ spec:
         volumeMounts:
         - mountPath: /data/bin
           name: data-bin
-      hostNetwork: true
-      hostPID: true
-      runtimeClassName: nvidia
-      schedulerName: kube-scheduler
       tolerations:
       - operator: Exists
       volumes:
@@ -94,7 +89,7 @@ DeviceScheduling is *Enabled* by default. You can use it without any modificatio
 
 ## Use GPU Share With HAMi
 
-1. Create a Pod to apply for a GPU card with 50% computing power and 50% video memory, and specify the need for hami-core isolation through the Pod Label koordinator.sh/gpu-isolation-provider
+1. Create a Pod that requests 50% of the computing power and 50% of the GPU memory of one GPU card, and request HAMi-core isolation through the Pod label `koordinator.sh/gpu-isolation-provider`.
 
 ```yaml
 apiVersion: v1
@@ -150,5 +145,8 @@ metadata:
 ```
 
 You can find the concrete device allocate result through annotation `scheduling.koordinator.sh/device-allocated`.
 
-2. 通过 kubectl exec 进入 Pod，NVIDIA-SMI 观察 Pod 能够使用的内存上限
+2. Enter the Pod with `kubectl exec`; the upper limit of GPU memory visible to programs inside the Pod is the value shown in the allocation result above.
+```bash
+$ kubectl exec -it -n default pod-example -- bash
+```
\ No newline at end of file
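
For orientation, here is a minimal sketch of the kind of Pod manifest that step 1 of the patched doc describes. It is not the doc's own example (that YAML is elided in the hunk above): the Pod name, namespace, image, `schedulerName`, and the label value `HAMi-core` are assumptions, and the extended resource names `koordinator.sh/gpu-core` / `koordinator.sh/gpu-memory-ratio` should be checked against the Koordinator release in use.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example           # placeholder name/namespace
  namespace: default
  labels:
    koordinator.sh/gpu-isolation-provider: HAMi-core  # label key from the doc; value assumed
spec:
  schedulerName: koord-scheduler  # assumed; use the scheduler name configured in your cluster
  containers:
  - name: main
    image: nvidia/cuda:12.4.1-base-ubuntu22.04  # placeholder image
    command: ["sleep", "infinity"]
    resources:
      limits:
        koordinator.sh/gpu-core: 50          # 50% of one GPU card's computing power
        koordinator.sh/gpu-memory-ratio: 50  # 50% of one GPU card's memory
```

With HAMi-core selected as the isolation provider, the 50/50 request is meant to be enforced inside the container by the injected `libvgpu.so`, not only accounted for at scheduling time.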
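Similarly, a hedged sketch of how step 2 can be verified end to end: the annotation key comes from the patched doc, `pod-example`/`default` are the placeholder name and namespace used above, and running `nvidia-smi` inside the Pod follows the original (Chinese) wording of the step.

```bash
# Inspect the scheduler's allocation result recorded on the Pod
# (annotation key from the doc; Pod name and namespace are placeholders).
kubectl get pod pod-example -n default \
  -o jsonpath='{.metadata.annotations.scheduling\.koordinator\.sh/device-allocated}'

# Run nvidia-smi inside the Pod; the GPU memory total it reports should match
# the allocated share shown in the annotation above.
kubectl exec -it -n default pod-example -- nvidia-smi
```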