This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Commit

remove use of "venv" input to actions
Per discussion in the PR and at standup: the workflows that call these actions already run the setup-python action in an earlier step, so the actions don't need to use Python virtual environments at all.
derekk-nm committed Jul 15, 2024
1 parent bca6d22 commit 0053719
Showing 6 changed files with 0 additions and 46 deletions.
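For reference, the pattern this change relies on is that each calling workflow provisions Python with the setup-python action before invoking these composite actions, so the actions can use the job's interpreter directly instead of activating a pyenv virtual environment. A minimal sketch of that pattern (the job name, runner label, Python version, and config-file path are illustrative placeholders, not values from these workflows):

jobs:
  benchmark:
    # illustrative runner label; the real workflows target their own runners
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Python is set up once for the whole job...
      - name: set up python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"   # placeholder version

      # ...so the composite actions below no longer take `python` or `venv` inputs
      # and run pip3 against the job's interpreter directly.
      - name: install whl
        uses: ./.github/actions/nm-install-whl/

      - name: run benchmarks
        uses: ./.github/actions/nm-benchmark/
        with:
          benchmark_config_list_file: path/to/benchmark-configs.txt   # placeholder path
          output_directory: benchmark-results

With the venv activation gone, pip3 in these actions installs into whichever interpreter setup-python puts on PATH for the job.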
12 changes: 0 additions & 12 deletions .github/actions/nm-benchmark/action.yml
@@ -7,13 +7,6 @@ inputs:
   output_directory:
     description: 'output directory to store the benchmark results'
     required: true
-  python:
-    description: 'python version, e.g. 3.10.12'
-    required: true
-  venv:
-    description: 'name for python virtual environment'
-    required: false
-    default: ""
 runs:
   using: composite
   steps:
@@ -23,11 +16,6 @@ runs:
       # move source directories
       mv vllm vllm-ignore || echo "no 'vllm' folder to move"
       mv csrc csrc-ignore || echo "no 'csrc' folder to move"
-      if [ ! -z "${{ inputs.venv }}" ]; then
-        COMMIT=${{ github.sha }}
-        VENV="${{ inputs.venv }}-${COMMIT:0:7}"
-        source $(pyenv root)/versions/${{ inputs.python }}/envs/${VENV}/bin/activate
-      fi
       pip3 install -r neuralmagic/benchmarks/requirements-benchmark.txt
       SUCCESS=0
       .github/scripts/nm-run-benchmarks.sh ${{ inputs.benchmark_config_list_file }} ${{ inputs.output_directory }} || SUCCESS=$?
13 changes: 0 additions & 13 deletions .github/actions/nm-install-whl/action.yml
@@ -1,13 +1,5 @@
 name: install whl
 description: 'installs found whl based on python version into specified venv'
-inputs:
-  python:
-    description: 'python version, e.g. 3.10.12'
-    required: true
-  venv:
-    description: 'name for python virtual environment'
-    required: false
-    default: ""
 runs:
   using: composite
   steps:
@@ -17,11 +9,6 @@ runs:
       mv vllm vllm-ignore
       mv csrc csrc-ignore
       # activate and install
-      if [ ! -z "${{ inputs.venv }}" ]; then
-        COMMIT=${{ github.sha }}
-        VENV="${{ inputs.venv }}-${COMMIT:0:7}"
-        source $(pyenv root)/versions/${{ inputs.python }}/envs/${VENV}/bin/activate
-      fi
       pip3 install -r requirements-dev.txt
       WHL=$(find . -type f -iname "nm_vllm*.whl")
       WHL_BASENAME=$(basename ${WHL})
13 changes: 0 additions & 13 deletions .github/actions/nm-lm-eval/action.yml
@@ -1,13 +1,6 @@
 name: run lm-eval accuracy test
 description: 'run lm-eval accuracy test'
 inputs:
-  python:
-    description: 'python version, e.g. 3.10.12'
-    required: true
-  venv:
-    description: 'name for python virtual environment'
-    required: false
-    default: ""
   lm_eval_configuration:
     description: 'file containing test configuration'
     required: true
@@ -16,12 +9,6 @@ runs:
   steps:
     - id: lm-eval
       run: |
-        if [ ! -z "${{ inputs.venv }}" ]; then
-          COMMIT=${{ github.sha }}
-          VENV="${{ inputs.venv }}-${COMMIT:0:7}"
-          source $(pyenv root)/versions/${{ inputs.python }}/envs/${VENV}/bin/activate
-        fi
        pip3 install git+https://github.com/EleutherAI/lm-evaluation-harness.git@262f879a06aa5de869e5dd951d0ff2cf2f9ba380
        pip3 install pytest openai==1.3.9
3 changes: 0 additions & 3 deletions .github/workflows/nm-benchmark.yml
@@ -120,15 +120,12 @@ jobs:
      - name: install whl
        id: install_whl
        uses: ./.github/actions/nm-install-whl/
-        with:
-          python: ${{ inputs.python }}

      - name: run benchmarks
        uses: ./.github/actions/nm-benchmark/
        with:
          benchmark_config_list_file: ${{ inputs.benchmark_config_list_file }}
          output_directory: benchmark-results
-          python: ${{ inputs.python }}

      - name: store benchmark result artifacts
        if: success()
3 changes: 0 additions & 3 deletions .github/workflows/nm-lm-eval.yml
@@ -105,11 +105,8 @@ jobs:
      - name: install whl
        id: install_whl
        uses: ./.github/actions/nm-install-whl/
-        with:
-          python: ${{ inputs.python }}

      - name: run lm-eval-accuracy
        uses: ./.github/actions/nm-lm-eval/
        with:
-          python: ${{ inputs.python }}
          lm_eval_configuration: ${{ inputs.lm_eval_configuration }}
2 changes: 0 additions & 2 deletions .github/workflows/nm-test.yml
@@ -122,8 +122,6 @@ jobs:

      - name: install whl
        uses: ./.github/actions/nm-install-whl/
-        with:
-          python: ${{ inputs.python }}

      - name: run buildkite script
        run: |

2 comments on commit 0053719

@github-actions

smaller_is_better

Benchmark suite (current: 0053719, previous: 9daca33). All rows: VLLM Serving - Dense, sparsity None, benchmark_serving with nr-qps-pair 300,1 and dataset sharegpt, on NVIDIA L4 x 1, vllm 0.5.1, python 3.10.12 (main, Jun 7 2023, 13:43:11) [GCC 11.3.0], torch 2.3.0+cu121.

mean_ttft_ms, meta-llama/Meta-Llama-3-8B-Instruct (max-model-len 4096): current 191.18171456001014 ms, previous 183.1591711100085 ms, ratio 1.04
mean_tpot_ms, meta-llama/Meta-Llama-3-8B-Instruct (max-model-len 4096): current 85.74251426419947 ms, previous 83.6536911143672 ms, ratio 1.02
mean_ttft_ms, facebook/opt-350m (max-model-len 2048): current 30.73935461667437 ms, previous 23.46367938000109 ms, ratio 1.31
mean_tpot_ms, facebook/opt-350m (max-model-len 2048): current 6.87611778979108 ms, previous 6.091775282016893 ms, ratio 1.13

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions

⚠️ Performance Alert ⚠️

A possible performance regression was detected for the benchmark suite 'smaller_is_better'.
The benchmark results of this commit are worse than the previous results, exceeding the 1.10 ratio threshold.

Benchmark suite (current: 0053719, previous: 9daca33). Both rows: VLLM Serving - Dense, facebook/opt-350m, max-model-len 2048, sparsity None, benchmark_serving with nr-qps-pair 300,1 and dataset sharegpt, on NVIDIA L4 x 1, vllm 0.5.1, python 3.10.12 (main, Jun 7 2023, 13:43:11) [GCC 11.3.0], torch 2.3.0+cu121.

mean_ttft_ms: current 30.73935461667437 ms, previous 23.46367938000109 ms, ratio 1.31
mean_tpot_ms: current 6.87611778979108 ms, previous 6.091775282016893 ms, ratio 1.13

This comment was automatically generated by workflow using github-action-benchmark.
