Commit

Merge pull request #255 from tokk-nv/dev/nvme-message
Change storage requirement message
tokk-nv authored Jan 22, 2025
2 parents 658c687 + 51e315a commit c27aa46
Showing 34 changed files with 39 additions and 34 deletions.
2 changes: 1 addition & 1 deletion docs/agent_studio.md
@@ -44,7 +44,7 @@ Rapidly design and experiment with creating your own automation agents, personal

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `nano_llm` container image
- Space for models (`>5GB`)
2 changes: 1 addition & 1 deletion docs/cosmos.md
@@ -24,7 +24,7 @@ inference scripts and generate videos.

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `12.26GB` for [`cosmos`](https://hub.docker.com/r/dustynv/cosmos) container image
- Space for models and datasets (`>50GB`)
5 changes: 5 additions & 0 deletions docs/css/colors.css
@@ -246,6 +246,11 @@ a.nv-buy-link:hover,.load-buy-link:hover {
background: #222; color: #FFF; font-size: 0.8em; border-radius: 0.3em; padding-left: 0.3em; padding-right: 0.3em; margin: 0.2em;
}

+.markedYellow {
+  background: #dfff00;
+  padding: 0.2em
+}

.highlightYellow {
background: #ffc105;
border-radius: 0.5em;
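For context, the new `.markedYellow` rule styles the inline spans this commit adds to every tutorial's prerequisites list. A minimal sketch of how the class is applied in the docs' markdown (the snippet mirrors the added lines below; the yellow `background: #dfff00` supplies the highlight and the `0.2em` padding keeps the text off the edges of the highlight box):

```html
<!-- Highlight the storage recommendation in a tutorial's prerequisites -->
3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space
```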
2 changes: 1 addition & 1 deletion docs/lerobot.md
@@ -17,7 +17,7 @@ Let's run HuggingFace [`LeRobot`](https://github.com/huggingface/lerobot/) to tr

<span class="blobPink2">JetPack 6 GA (L4T r36.3)</span> <span class="blobPink1">JetPack 6.1 (L4T r36.4)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `16.5GB` for [`lerobot`](https://hub.docker.com/r/dustynv/lerobot) container image
- Space for models (`>2GB`)
2 changes: 1 addition & 1 deletion docs/llama_vlm.md
@@ -20,7 +20,7 @@ While quantization and optimization efforts are underway, we have started with r

<span class="blobPink2">JetPack 6 (L4T r36)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `12.8GB` for `llama-vision` container image
- Space for models (`>25GB`)
2 changes: 1 addition & 1 deletion docs/nerf.md
@@ -19,7 +19,7 @@

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `17.6GB` for [`nerfstudio`](https://hub.docker.com/r/dustynv/nerfstudio) container image
- Space for models and datasets (`>5GB`)
2 changes: 1 addition & 1 deletion docs/openvla.md
@@ -45,7 +45,7 @@ OpenVLA reserves 256 of the least-frequently used tokens out of the Llama-7B voc

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `nano_llm` container image
- Space for models and datasets (`>15GB`)
2 changes: 1 addition & 1 deletion docs/ros.md
@@ -19,7 +19,7 @@ The [`ros2_nanollm`](https://github.com/NVIDIA-AI-IOT/ros2_nanollm) package prov
<span class="blobPink2">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `nano_llm:humble` container image
- Space for models (`>10GB`)
2 changes: 1 addition & 1 deletion docs/tensorrt_llm.md
@@ -17,7 +17,7 @@ We've provided pre-compiled TensorRT-LLM [wheels](http://jetson.webredirect.org/

<span class="blobPink2">JetPack 6.1 (L4T r36.4)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `18.5GB` for `tensorrt_llm` container image
- Space for models (`>10GB`)
2 changes: 1 addition & 1 deletion docs/tutorial_api-examples.md
@@ -16,7 +16,7 @@ It's good to know the code for generating text with LLM inference, and ancillary
<span class="blobPink2">JetPack 5 (L4T r35)</span>
<span class="blobPink2">JetPack 6 (L4T r36)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `l4t-text-generation` container image
- Space for models (`>10GB`)
2 changes: 1 addition & 1 deletion docs/tutorial_audiocraft.md
@@ -14,7 +14,7 @@ Let's run Meta's [AudioCraft](https://github.com/facebookresearch/audiocraft), t

<span class="blobPink1">JetPack 5 (L4T r35.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `10.7 GB` for `audiocraft` container image
- Space for checkpoints
2 changes: 1 addition & 1 deletion docs/tutorial_holoscan.md
@@ -25,7 +25,7 @@ So, let's walk through how to run the Surgical Tool Tracking example application

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>
-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `13.7 GB` for `efficientvit` container image
- `850 Mb` for Tool Tracking ONNX model + example video
2 changes: 1 addition & 1 deletion docs/tutorial_jetson-copilot.md
@@ -20,7 +20,7 @@
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6 GB` for `jetrag` container image
- About `4 GB` for downloading some default models (`llama3` and `mxbai-embed-large`)
2 changes: 1 addition & 1 deletion docs/tutorial_jps.md
@@ -18,7 +18,7 @@ Jetson Plaform Services (JPS) provide a platform to simplify development, deploy

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>
-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

To get started with Jetson Platform Services, follow the quickstart guide to install and setup JPS. Then explore the reference workflows to learn how to use DeepStream, Analytics, Generative AI and more with JPS:

2 changes: 1 addition & 1 deletion docs/tutorial_live-llava.md
@@ -27,7 +27,7 @@ It uses models like [LLaVA](https://llava-vl.github.io/){:target="_blank"} or [V

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `nano_llm` container image
- Space for models (`>10GB`)
2 changes: 1 addition & 1 deletion docs/tutorial_llamaindex.md
@@ -15,7 +15,7 @@ Let's use [LlamaIndex](https://www.llamaindex.ai/), to realize RAG (Retrieval Au
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `5.5 GB` for `llama-index` container image
- Space for checkpoints
2 changes: 1 addition & 1 deletion docs/tutorial_llamaspeak.md
@@ -23,7 -23,7 @@ The [`WebChat`](https://dusty-nv.github.io/NanoLLM/agents.html#web-chat){:target

<span class="blobPink2">JetPack 6 (L4T r36)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `nano_llm` container image
- Space for models (`>10GB`)
4 changes: 2 additions & 2 deletions docs/tutorial_llava.md
@@ -33,7 +33,7 @@ In addition to Llava, the [`NanoVLM`](tutorial_nano-vlm.md) pipeline supports [V
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.2GB` for `text-generation-webui` container image
- Space for models
@@ -90,7 +90,7 @@ Go to **Chat** tab, drag and drop an image into the **Drop Image Here** area, an
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.1GB` for `llava` container
- `14GB` for Llava-7B (or `26GB` for Llava-13B)
2 changes: 1 addition & 1 deletion docs/tutorial_minigpt4.md
@@ -17,7 +17,7 @@ Give your locally running LLM an access to vision, by running [MiniGPT-4](https:
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `5.8GB` for container image
- Space for [pre-quantized MiniGPT-4 model](https://github.com/Maknee/minigpt4.cpp/tree/master#3-obtaining-the-model)
2 changes: 1 addition & 1 deletion docs/tutorial_nano-vlm.md
@@ -27,7 +27,7 @@ This FPS measures the end-to-end pipeline performance for continuous streaming l

<span class="blobPink2">JetPack 6 (L4T r36)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `nano_llm` container image
- Space for models (`>10GB`)
2 changes: 1 addition & 1 deletion docs/tutorial_nanodb.md
@@ -18,7 +18,7 @@ Let's run [NanoDB](https://github.com/dusty-nv/jetson-containers/blob/master/pac
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `16GB` for container image
- `40GB` for MS COCO dataset
2 changes: 1 addition & 1 deletion docs/tutorial_ollama.md
@@ -28,7 +28,7 @@ In this tutorial, we introduce two installation methods: (1) the default native
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `7GB` for `ollama` container image
- Space for models (`>5GB`)
2 changes: 1 addition & 1 deletion docs/tutorial_openwebui.md
@@ -22,7 +22,7 @@ It can work with Ollama as a backend as well as other backend that is compatible
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `7GB` for `open-webui` container image
2 changes: 1 addition & 1 deletion docs/tutorial_slm.md
@@ -33,7 +33,7 @@ Based on user interactions, the recommended models to try are [`stabilityai/stab

<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `22GB` for `nano_llm` container image
- Space for models (`>5GB`)
2 changes: 1 addition & 1 deletion docs/tutorial_stable-diffusion-xl.md
@@ -16,7 +16,7 @@
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.8GB` for container image
- `12.4GB` for SDXL models
2 changes: 1 addition & 1 deletion docs/tutorial_stable-diffusion.md
@@ -18,7 +18,7 @@ Let's run AUTOMATIC1111's [`stable-diffusion-webui`](https://github.com/AUTOMATI
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.8GB` for container image
- `4.1GB` for SD 1.5 model
2 changes: 1 addition & 1 deletion docs/tutorial_text-generation.md
@@ -18,7 +18,7 @@ Interact with a local AI assistant by running a LLM with oobabooga's [`text-gene
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.2GB` for container image
- Spaces for models
2 changes: 1 addition & 1 deletion docs/tutorial_voicecraft.md
@@ -14,7 +14,7 @@ Let's run [VoiceCraft](https://github.com/jasonppy/VoiceCraft), a Zero-Shot Spee
<!-- <span class="blobPink1">JetPack 5 (L4T r35.x)</span> -->
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `15.6 GB` for `voicecraft` container image
- Space for models
2 changes: 1 addition & 1 deletion docs/tutorial_whisper.md
@@ -16,7 +16,7 @@ Let's run OpenAI's [Whisper](https://github.com/openai/whisper), pre-trained mod
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.1 GB` for `whisper` container image
- Space for checkpoints
2 changes: 1 addition & 1 deletion docs/vit/tutorial_efficientvit.md
@@ -16,7 +16,7 @@ Let's run MIT Han Lab's [EfficientViT](https://github.com/mit-han-lab/efficientv
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>
-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `10.9 GB` for `efficientvit` container image
- Space for checkpoints
2 changes: 1 addition & 1 deletion docs/vit/tutorial_nanoowl.md
@@ -18,7 +18,7 @@ Let's run [NanoOWL](https://github.com/NVIDIA-AI-IOT/nanoowl), [OWL-ViT](https:/
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `7.2 GB` for container image
- Spaces for models
2 changes: 1 addition & 1 deletion docs/vit/tutorial_nanosam.md
@@ -18,7 +18,7 @@ Let's run NVIDIA's [NanoSAM](https://github.com/NVIDIA-AI-IOT/nanosam) to check
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.3GB` for container image
- Spaces for models
2 changes: 1 addition & 1 deletion docs/vit/tutorial_sam.md
@@ -18,7 +18,7 @@ Let's run Meta's [`SAM`](https://github.com/facebookresearch/segment-anything) o
<span class="blobPink1">JetPack 5 (L4T r35.x)</span>
<span class="blobPink2">JetPack 6 (L4T r36.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.8GB` for container image
- Spaces for models
2 changes: 1 addition & 1 deletion docs/vit/tutorial_tam.md
@@ -15,7 +15,7 @@ Let's run [`TAM`](https://github.com/gaomingqi/Track-Anything) to perform Segmen

<span class="blobPink1">JetPack 5 (L4T r35.x)</span>

-3. Sufficient storage space (preferably with NVMe SSD).
+3. <span class="markedYellow">NVMe SSD **highly recommended**</span> for storage speed and space

- `6.8GB` for container image
- Spaces for models