
azurerm_redhat_openshift_cluster - Allow multiple ingress_profile blocks, with a name #25048

Open

martin-aders opened this issue Feb 27, 2024 · 5 comments

@martin-aders
martin-aders commented Feb 27, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Terraform Version

1.7.4

AzureRM Provider Version

3.90.0

Affected Resource(s)/Data Source(s)

azurerm_redhat_openshift_cluster

Terraform Configuration Files

# The main difference from the provider's main Terraform example is the ingress_profile below.
# A second ingress controller has to be added manually after cluster creation to reproduce the issue.
data "azurerm_subscription" "current" {}

resource "azuread_application" "application" {
  display_name = "OpenShift Cluster"
}

resource "azuread_service_principal" "sp" {
  client_id   = azuread_application.application.application_id
}

resource "azuread_service_principal_password" "sp_password" {
  service_principal_id = azuread_service_principal.sp.object_id
}

data "azuread_service_principal" "aro_builtin_service_principal" {
  // This is the Azure Red Hat OpenShift RP service principal id, do NOT delete it
  client_id = "f1dd0a37-89c6-4e07-bcd1-ffd3d43d8875"
}

resource "azurerm_role_assignment" "assign_1" {
  scope                = data.azurerm_subscription.current.id
  principal_id         = azuread_service_principal.sp.object_id
  role_definition_name = "Contributor"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "aro-vnet"
  address_space       = ["10.123.0.0/16"]
  location            = "eastus"
  resource_group_name = "rg-aro"
}

resource "azurerm_subnet" "main_subnet" {
  name                                          = "aro-main"
  resource_group_name                           = "rg-aro"
  virtual_network_name                          = azurerm_virtual_network.vnet.name
  address_prefixes                              = ["10.123.1.0/24"]
  service_endpoints                             = ["Microsoft.Storage", "Microsoft.ContainerRegistry"]
  private_link_service_network_policies_enabled = false
}

resource "azurerm_subnet" "worker_subnet" {
  name                 = "aro-worker"
  resource_group_name  = "rg-aro"
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.123.123.0/24"]
  service_endpoints    = ["Microsoft.Storage", "Microsoft.ContainerRegistry"]
}

resource "azurerm_role_assignment" "network_role1" {
  scope                = azurerm_virtual_network.vnet.id
  role_definition_name = "Network Contributor"
  principal_id         = azuread_service_principal.sp.object_id
}

resource "azurerm_role_assignment" "network_role2" {
  scope                = azurerm_virtual_network.vnet.id
  role_definition_name = "Network Contributor"
  principal_id         = data.azuread_service_principal.aro_builtin_service_principal.object_id
}

# Not strictly needed for the minimal example, but it shows the use case for having access to all of the ingress_profile information.
# (Referring to the public IP resource rather than copying the address would be even nicer, but I did not yet find that information easily accessible in the Terraform state.)
resource "azurerm_dns_a_record" "dns_a_console" {
  name                = "console.apps"
  zone_name           = "example.org"
  resource_group_name = "rg-dns"
  ttl                 = 300
  records             = [ azurerm_redhat_openshift_cluster.mycluster.ingress_profile[0].ip_address ]
  # target_resource_id = "/subscriptions/<subscriptionId>/resourceGroups/<clusterRG>/providers/Microsoft.Network/publicIPAddresses/<clusterName>-<randomID>-<pip-v4-or-similar>" # referring to the IP resource would be better than just copying its address
}

resource "azurerm_redhat_openshift_cluster" "mycluster" {
  name                = "mycluster"
  location            = "eastus"
  resource_group_name = "rg-mycluster"

  cluster_profile {
    domain      = "example.org"
    version     = "4.13.23"
  }

  network_profile {
    pod_cidr     = "10.128.0.0/14"
    service_cidr = "172.30.0.0/16"
  }

  main_profile {
    vm_size   = "Standard_D8s_v5"
    subnet_id = azurerm_subnet.main_subnet.id
  }

  api_server_profile {
    visibility = "Public"
  }

  ingress_profile {
  #  name = "default"
    visibility = "Public"
  }

  # The provider currently does not allow adding this second ingress_profile block here
  #ingress_profile {
  #  name = "internal"
  #  visibility = "Private"
  #}

  worker_profile {
    vm_size      = "Standard_D4s_v5"
    disk_size_gb = 32
    node_count   = 3
    subnet_id    = azurerm_subnet.worker_subnet.id
  }

  service_principal {
    client_id     = azuread_service_principal.sp.client_id
    client_secret = azuread_service_principal_password.sp_password.value
  }

  depends_on = [
    azurerm_role_assignment.network_role1,
    azurerm_role_assignment.network_role2,
  ]
}

Debug Output/Panic Output

# azurerm_redhat_openshift_cluster.mycluster must be replaced
-/+ resource "azurerm_redhat_openshift_cluster" "mycluster" {
      ...
      ~ ingress_profile { # forces replacement
          ~ ip_address = "x.x.x.x" -> (known after apply)
          ~ name       = "default" -> (known after apply)
            # (1 unchanged attribute hidden)
        }
      - ingress_profile { # forces replacement
          - ip_address = "x.x.x.x" -> null
          - name       = "internal" -> null
          - visibility = "Private" -> null # forces replacement
        }

Expected Behaviour

I would expect terraform plan to detect that a new ingress is listed. Once the user adds a second ingress_profile block to the azurerm_redhat_openshift_cluster resource, the plan should propose fetching the data for this second ingress and updating the state accordingly, without recreating the cluster.

I am not sure whether the Azure API allows creating an ARO cluster with two ingresses defined. However, it should at least be possible for a second ingress to exist after the ARO cluster has been created; otherwise the cluster can no longer be managed with Terraform at all.

At a minimum, all non-default ingresses should be ignored when computing the plan.
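As a concrete illustration of the use case (and assuming the provider exposed both profiles with their names in state), the DNS record above could then select the internal ingress IP by name rather than by index. This is a hypothetical expression, not something the current provider supports for a second profile:

records = [
  one([
    for p in azurerm_redhat_openshift_cluster.mycluster.ingress_profile :
    p.ip_address if p.name == "internal"
  ])
]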

Actual Behaviour

Terraform plan suggests recreating the cluster based on a detected change on the ingress_profile (see the plan excerpt in the debug output above).

Steps to Reproduce

  1. Create a new ARO cluster, e.g. using the supplied sample configuration.
  2. Add a second IngressController to the cluster (see the sketch after the API output below). Wait until the corresponding load balancer is ready in the Azure Portal.
  3. Run terraform plan.
  4. Be surprised that your cluster must be recreated because of a detected change on ingress_profile.
  5. Query the Azure OpenShift API and see that ingressProfiles is indeed a list containing two ingress profiles:
az rest --method GET --uri "https://management.azure.com/subscriptions/<subscription>/resourceGroups/<clusterRgName>/providers/Microsoft.RedHatOpenShift/openShiftClusters/<clusterName>?api-version=2023-09-04"
....
  "properties": {
    ...
    "ingressProfiles": [
      {
        "ip": "x.x.x.x",
        "name": "default",
        "visibility": "Public"
      },
      {
        "ip": "x.x.x.x",
        "name": "internal",
        "visibility": "Private"
      }
    ],
...
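For step 2, one way to add a second IngressController to the cluster is via the OpenShift ingress operator. The manifest below is only a sketch with illustrative values (the internal domain and replica count are assumptions); scope: Internal is what yields the Private visibility shown in the API output above:

oc apply -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  domain: apps-internal.example.org   # assumption: any domain distinct from the default ingress domain
  replicas: 2                         # assumption
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal                 # provisions an internal Azure load balancer, i.e. a Private ingress profile
EOF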

Important Factoids

No response

References

@martin-aders
Author

Tested the patch. Azure only creates a load balancer but no corresponding IngressController resource on the cluster. However, the second ingress only appears in the ingressProfiles list (in the Azure API and hence the Terraform state) once an IngressController has been applied to the cluster. That is not very elegant, but at least the patch solves the immediate problem of forced cluster recreation. Suggestions?

martin-aders added a commit to martin-aders/terraform-provider-azurerm that referenced this issue Mar 27, 2024
…locks, with a name (hashicorp#25028)

* Fixes hashicorp#25048
* Provide multiIngress acceptance test.
* Preserve using "default" as the default ingress name, if not specified.
* Provide unique name using an index suffix if no name is defined for further ingresses.
@martin-aders changed the title from "Terraform should not recreate OpenShift cluster when using more than one IngressController" to "azurerm_redhat_openshift_cluster - Allow multiple ingress_profile blocks, with a name" on Mar 27, 2024
@martin-aders
Author

The patch has been updated according to the PR review, and an acceptance test has been added to verify that multiple load balancers are actually created. Waiting for the acceptance tests to be re-run.

martin-aders added a commit to martin-aders/terraform-provider-azurerm that referenced this issue Apr 9, 2024
…locks, with a name (hashicorp#25028)

* Fixes hashicorp#25048
* Provide multiIngress acceptance test.
* Preserve using "default" as the default ingress name, if not specified.
* Provide unique name using an index suffix if no name is defined for further ingresses.
@Honken77

Honken77 commented Nov 7, 2024

Hi, @martin-aders

How did you end up circumventing this? I saw that your PR got closed.

lifecycle {
  ignore_changes = [
    tags,
    ingress_profile,
    cluster_profile[0].version,
  ]
}

This block still generates changes and forces recreation.

EDIT: Nevermind. For some reason it works now.

@martin-aders
Author

Nice idea to work around the issue, @Honken77. The PR just got closed because I reacted too slowly; I am still using the patched provider locally. I need to revive the PR once I'm doing bigger changes in this area again.
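For reference, a sketch of how a locally built provider can be wired in via the Terraform CLI configuration (e.g. ~/.terraformrc); the override path here is an example and not necessarily the exact setup used:

provider_installation {
  dev_overrides {
    # example path: wherever the locally built terraform-provider-azurerm binary lives
    "hashicorp/azurerm" = "/home/me/go/bin"
  }
  # all other providers continue to be installed from the registry as usual
  direct {}
}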

@Honken77

Honken77 commented Nov 8, 2024

It would for sure be a lot better if we could define our ingress controllers at cluster creation within the resource. I can understand if we cannot touch the default one, but we should definitely be able to create additional ones.
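A sketch of what that could look like, following the behaviour described in the referenced patch (the first block keeps the name "default" when none is given; additional blocks carry an explicit name). This is proposed syntax, not something the current provider accepts:

ingress_profile {
  visibility = "Public"      # name defaults to "default" when not specified
}

ingress_profile {
  name       = "internal"
  visibility = "Private"
}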
