Describe the bug
When VPC2s are attached to bare metal servers or instances, the destroy doesn't follow the correct order, or it times out, because the actual detaching is handled by an asynchronous process behind the scenes.
To Reproduce
Create a bare metal server and/or an instance with one or more VPC2 resources attached (see the configuration sketch below), then run terraform destroy.
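For illustration, a minimal configuration along these lines should reproduce it. This is only a sketch, not the exact config used; the resource names, plans, region, and VPC2 blocks are taken from the plan output further down, and vpc2_ids is used to attach the networks:

# Sketch only: values taken from the plan output in this issue.
resource "vultr_vpc2" "foo" {
  description   = "foo"
  region        = "mel"
  ip_type       = "v4"
  ip_block      = "10.10.0.0"
  prefix_length = 16
}

resource "vultr_vpc2" "bar" {
  description   = "bar"
  region        = "mel"
  ip_type       = "v4"
  ip_block      = "10.11.0.0"
  prefix_length = 16
}

resource "vultr_bare_metal_server" "bmvpc" {
  label    = "tf-vpc2-test"
  region   = "mel"
  plan     = "vbm-6c-32gb"
  os_id    = 1743
  tags     = ["test", "tf"]
  vpc2_ids = [vultr_vpc2.foo.id]
}

resource "vultr_instance" "instvpc" {
  label    = "tf-vpc2-test"
  region   = "mel"
  plan     = "vc2-2c-4gb"
  os_id    = 1743
  vpc2_ids = [vultr_vpc2.foo.id, vultr_vpc2.bar.id]
}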
$ terraform destroy
vultr_vpc2.bar: Refreshing state... [id=abd1a734-9f1a-4055-9c6f-3dd28ca7e51d]
data.vultr_instance.instance_list: Reading...
vultr_vpc2.foo: Refreshing state... [id=00104d0f-7473-42f9-8a4d-42085f9b062a]
vultr_bare_metal_server.bmvpc: Refreshing state... [id=8b8d846c-79a4-41c6-914c-0a59962deaa4]
vultr_instance.instvpc: Refreshing state... [id=1e436faf-eeed-4403-ae1a-4496e621504a]
data.vultr_instance.instance_list: Read complete after 7s [id=6059cec4-2b7e-4660-b2cf-eca62bef60e6]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy
Terraform will perform the following actions:

  # vultr_bare_metal_server.bmvpc will be destroyed
  - resource "vultr_bare_metal_server" "bmvpc" {
      - app_id          = 0 -> null
      - cpu_count       = 6 -> null
      - date_created    = "2023-09-05T19:58:34+00:00" -> null
      - disk            = "2x 960GB SSD" -> null
      - gateway_v4      = "67.219.104.1" -> null
      - id              = "8b8d846c-79a4-41c6-914c-0a59962deaa4" -> null
      - label           = "tf-vpc2-test" -> null
      - mac_address     = 66988331878112 -> null
      - main_ip         = "67.219.104.165" -> null
      - netmask_v4      = "255.255.254.0" -> null
      - os              = "Ubuntu 22.04 LTS x64" -> null
      - os_id           = 1743 -> null
      - plan            = "vbm-6c-32gb" -> null
      - ram             = "32768 MB" -> null
      - region          = "mel" -> null
      - status          = "active" -> null
      - tags            = [
          - "test",
          - "tf",
        ] -> null
      - v6_network_size = 0 -> null
      - vpc2_ids        = [
          - "00104d0f-7473-42f9-8a4d-42085f9b062a",
        ] -> null
    }
  # vultr_instance.instvpc will be destroyed
  - resource "vultr_instance" "instvpc" {
      - allowed_bandwidth   = 9 -> null
      - app_id              = 0 -> null
      - backups             = "disabled" -> null
      - date_created        = "2023-09-07T18:10:40+00:00" -> null
      - ddos_protection     = false -> null
      - default_password    = (sensitive value) -> null
      - disk                = 80 -> null
      - features            = [] -> null
      - gateway_v4          = "67.219.98.1" -> null
      - hostname            = "vultr.guest" -> null
      - id                  = "1e436faf-eeed-4403-ae1a-4496e621504a" -> null
      - kvm                 = "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8UWhwbkM0d1lWWXFpTDk3N3BfbUZwQXVYTEk4QVdOSWN8X64wQlHqOMJ8AhjBFYVs8-1MeEr1fh2RNcZV93M8uIRY0XuZhJBHfXx1arPVGEsQ0TgsoqEP3nCO201EfSxvC7YV3Bu4D5Z2iWPTBVl-RytKVrmgS894lLD5ZRpMy159Pof0LYgcdQDmWMqq9D_tbqeo1ax28A9fbqaUIFdbj3BW4jisk2L5_XHj-Z_4GGfIa3Ij0qOkUWedKN1D7Q" -> null
      - label               = "tf-vpc2-test" -> null
      - main_ip             = "67.219.98.234" -> null
      - netmask_v4          = "255.255.254.0" -> null
      - os                  = "Ubuntu 22.04 x64" -> null
      - os_id               = 1743 -> null
      - plan                = "vc2-2c-4gb" -> null
      - power_status        = "running" -> null
      - private_network_ids = [] -> null
      - ram                 = 4096 -> null
      - region              = "mel" -> null
      - server_status       = "ok" -> null
      - status              = "active" -> null
      - tags                = [] -> null
      - v6_network_size     = 0 -> null
      - vcpu_count          = 2 -> null
      - vpc2_ids            = [
          - "00104d0f-7473-42f9-8a4d-42085f9b062a",
          - "abd1a734-9f1a-4055-9c6f-3dd28ca7e51d",
        ] -> null
      - vpc_ids             = [] -> null
    }
  # vultr_vpc2.bar will be destroyed
  - resource "vultr_vpc2" "bar" {
      - date_created  = "2023-09-06T18:56:21+00:00" -> null
      - description   = "bar" -> null
      - id            = "abd1a734-9f1a-4055-9c6f-3dd28ca7e51d" -> null
      - ip_block      = "10.11.0.0" -> null
      - ip_type       = "v4" -> null
      - prefix_length = 16 -> null
      - region        = "mel" -> null
    }
  # vultr_vpc2.foo will be destroyed
  - resource "vultr_vpc2" "foo" {
      - date_created  = "2023-09-06T18:52:29+00:00" -> null
      - description   = "foo" -> null
      - id            = "00104d0f-7473-42f9-8a4d-42085f9b062a" -> null
      - ip_block      = "10.10.0.0" -> null
      - ip_type       = "v4" -> null
      - prefix_length = 16 -> null
      - region        = "mel" -> null
    }
Plan: 0 to add, 0 to change, 4 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
vultr_bare_metal_server.bmvpc: Destroying... [id=8b8d846c-79a4-41c6-914c-0a59962deaa4]
vultr_instance.instvpc: Destroying... [id=1e436faf-eeed-4403-ae1a-4496e621504a]
vultr_instance.instvpc: Destruction complete after 4s
vultr_vpc2.bar: Destroying... [id=abd1a734-9f1a-4055-9c6f-3dd28ca7e51d]
╷
│ Error: error destroying VPC 2.0 (abd1a734-9f1a-4055-9c6f-3dd28ca7e51d): {"error":"The following servers are attached to this VPC 2.0 network: 67.219.98.234\n","status":400}
│
│
╵
╷
│ Error: error detaching VPCs 2.0 prior to deleting instance 8b8d846c-79a4-41c6-914c-0a59962deaa4 : {"error":"Invalid instance-id.","status":404}
│
│
╵
Expected behavior
The provider should retry or wait until the VPC2 has actually been detached from all servers before attempting to delete it.
Additional context
#358 might be a relevant example of this functionality.