[EZ] Remove remaining amz2023 runner variant references (pytorch#136540)
Validated that no jobs use the amz2023 runner variant anymore ([proof](https://github.com/search?type=code&q=org%3Apytorch+%2F%5Cbamz2023%5Cb%2F+&p=1)), so this removes all remaining references to it.

Explicit references to the amz2023 runner type variants were removed in the following PRs:
- pytorch/ignite#3285
- pytorch/ao#887
- pytorch/fbscribelogger#1
- pytorch#134355
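
For context, a minimal sketch of the kind of reference those PRs removed, assuming a typical workflow job (the job name and steps here are illustrative, not taken from any of the PRs above):

```yaml
jobs:
  build:
    # previously: runs-on: amz2023.linux.2xlarge  (explicit Amazon Linux 2023 runner variant)
    runs-on: linux.2xlarge  # plain label; the scale-config diffs below show the base runner
                            # types already use the same al2023 AMI, so the amz2023. prefix
                            # no longer adds anything
    steps:
      - uses: actions/checkout@v4
```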

Pull Request resolved: pytorch#136540
Approved by: https://github.com/huydhn, https://github.com/malfet
ZainRizvi authored and BoyuanFeng committed Sep 25, 2024
1 parent 323bd5e commit 8166ee9
Showing 5 changed files with 2 additions and 50 deletions.
24 changes: 0 additions & 24 deletions .github/actionlint.yaml
@@ -32,30 +32,6 @@ self-hosted-runner:
- lf.linux.8xlarge.nvidia.gpu
- lf.linux.16xlarge.nvidia.gpu
- lf.linux.g5.4xlarge.nvidia.gpu
# Organization-wide AWS Linux Runners with new Amazon 2023 AMI
- amz2023.linux.large
- amz2023.linux.2xlarge
- amz2023.linux.4xlarge
- amz2023.linux.12xlarge
- amz2023.linux.24xlarge
- amz2023.linux.arm64.2xlarge
- amz2023.linux.arm64.m7g.4xlarge
- amz2023.linux.arm64.m7g.4xlarge.ephemeral
- amz2023.linux.4xlarge.nvidia.gpu
- amz2023.linux.8xlarge.nvidia.gpu
- amz2023.linux.16xlarge.nvidia.gpu
- amz2023.linux.g5.4xlarge.nvidia.gpu
# Pytorch/pytorch AWS Linux Runners with the new Amazon 2023 AMI on Linux Foundation account
- amz2023.lf.linux.large
- amz2023.lf.linux.2xlarge
- amz2023.lf.linux.4xlarge
- amz2023.lf.linux.12xlarge
- amz2023.lf.linux.24xlarge
- amz2023.lf.linux.arm64.2xlarge
- amz2023.lf.linux.4xlarge.nvidia.gpu
- amz2023.lf.linux.8xlarge.nvidia.gpu
- amz2023.lf.linux.16xlarge.nvidia.gpu
- amz2023.lf.linux.g5.4xlarge.nvidia.gpu
# Repo-specific IBM hosted S390x runner
- linux.s390x
# Organization wide AWS Windows runners
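
The deleted labels above sit under actionlint's `self-hosted-runner` allowlist, which tells the linter which custom `runs-on` labels to accept when checking workflow files. An abridged sketch of what remains after this change (only a few of the surviving labels shown):

```yaml
# .github/actionlint.yaml (abridged): any `runs-on` label not listed here is
# flagged by actionlint as an unknown runner.
self-hosted-runner:
  labels:
    - lf.linux.8xlarge.nvidia.gpu
    - lf.linux.16xlarge.nvidia.gpu
    - lf.linux.g5.4xlarge.nvidia.gpu
    # Repo-specific IBM hosted S390x runner
    - linux.s390x
```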
12 changes: 0 additions & 12 deletions .github/lf-canary-scale-config.yml
@@ -36,9 +36,6 @@ runner_types:
max_available: 1000
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.c.linux.10xlarge.avx2:
disk_size: 200
instance_type: m4.10xlarge
@@ -112,9 +109,6 @@ runner_types:
max_available: 1000
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.c.linux.4xlarge.nvidia.gpu:
disk_size: 150
instance_type: g3.4xlarge
@@ -129,9 +123,6 @@ runner_types:
max_available: 400
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.c.linux.g4dn.12xlarge.nvidia.gpu:
disk_size: 150
instance_type: g4dn.12xlarge
@@ -174,9 +165,6 @@ runner_types:
max_available: 50
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.c.linux.large:
max_available: 1200
disk_size: 15
12 changes: 0 additions & 12 deletions .github/lf-scale-config.yml
@@ -36,9 +36,6 @@ runner_types:
max_available: 1000
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.linux.10xlarge.avx2:
disk_size: 200
instance_type: m4.10xlarge
@@ -112,9 +109,6 @@ runner_types:
max_available: 1000
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.linux.4xlarge.nvidia.gpu:
disk_size: 150
instance_type: g3.4xlarge
@@ -129,9 +123,6 @@ runner_types:
max_available: 400
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.linux.g4dn.12xlarge.nvidia.gpu:
disk_size: 150
instance_type: g4dn.12xlarge
@@ -174,9 +165,6 @@ runner_types:
max_available: 50
os: linux
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
variants:
amz2023:
ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
lf.linux.large:
max_available: 1200
disk_size: 15
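
Both scale-config files drop the same `variants:` block from each runner type. Judging by the labels that were also removed from actionlint.yaml above, a variant appears to expose an extra runner label built by prefixing the variant name (e.g. `amz2023.lf.linux.4xlarge.nvidia.gpu`) and overriding fields such as `ami`; since the base entries already point at the same al2023 AMI, the variant had become a no-op. A hedged reconstruction of one entry before this change (fields not visible in the hunks above are marked as illustrative):

```yaml
# .github/lf-scale-config.yml, before this commit (reconstructed sketch)
runner_types:
  lf.linux.4xlarge.nvidia.gpu:
    disk_size: 150
    instance_type: g3.4xlarge
    max_available: 1000                 # illustrative; not shown in the hunks above
    os: linux
    ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
    variants:
      amz2023:
        # same AMI as the base entry, which is why the variant is now redundant
        ami: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
```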
2 changes: 1 addition & 1 deletion benchmarks/dynamo/pr_time_benchmarks/benchmark_base.py
@@ -40,7 +40,7 @@
# The weight of the record according to current sampling rate
25: optional i64 weight;
- # The name of the current job. Derived from JOB_NAME, e.g., linux-jammy-py3.8-gcc11 / test (default, 3, 4, amz2023.linux.2xlarge).
+ # The name of the current job. Derived from JOB_NAME, e.g., linux-jammy-py3.8-gcc11 / test (default, 3, 4, linux.2xlarge).
26: optional string github_job;
# The GitHub user who triggered the job. Derived from GITHUB_TRIGGERING_ACTOR.
2 changes: 1 addition & 1 deletion torch/_logging/scribe.py
@@ -44,7 +44,7 @@ def inner(**kwargs: TLazyField) -> None:
# A unique number for each run of a particular workflow in a repository, e.g., 238742. Derived from GITHUB_RUN_NUMBER.
10: optional string github_run_number_str;
- # The name of the current job. Derived from JOB_NAME, e.g., linux-jammy-py3.8-gcc11 / test (default, 3, 4, amz2023.linux.2xlarge).
+ # The name of the current job. Derived from JOB_NAME, e.g., linux-jammy-py3.8-gcc11 / test (default, 3, 4, linux.2xlarge).
11: optional string job_name;
# The GitHub user who triggered the job. Derived from GITHUB_TRIGGERING_ACTOR.
