Commit

initial commit
Phil Stevenson authored and committed Jul 9, 2020
0 parents commit 8e99b8a
Showing 30 changed files with 2,259 additions and 0 deletions.
19 changes: 19 additions & 0 deletions .github/release-drafter.yml
@@ -0,0 +1,19 @@
name-template: "v$NEXT_PATCH_VERSION 🌈"
tag-template: "v$NEXT_PATCH_VERSION"
categories:
  - title: "🚀 Features"
    labels:
      - "feature"
      - "enhancement"
  - title: "🐛 Bug Fixes"
    labels:
      - "fix"
      - "bugfix"
      - "bug"
  - title: "🧰 Maintenance"
    label: "chore"
change-template: "- $TITLE @$AUTHOR (#$NUMBER)"
template: |
  ## Changes
  $CHANGES
9 changes: 9 additions & 0 deletions .gitignore
@@ -0,0 +1,9 @@
.terraform
terraform.tfstate.backup
terraform.tfstate

kubeconfig
.DS_Store
.terraform.tfstate.lock.info
.tmp
istio_yaml/config.yaml
54 changes: 54 additions & 0 deletions KNOWNBUGS.md
@@ -0,0 +1,54 @@
# List of bugs

## No matches for kind "Certificate" in version "cert-manager.io/v1alpha3"

| Release |
|:-------|
| v1.3 |

### Issue

This message shows up when deploying cert-manager without the external-dns component:

```hcl
enable_cert_manager = true
enable_external_dns = false
```

It complains that the `Certificate` kind is not recognised in the files inside the `/istio_component_ingress_yaml/` directory.

### Temporary solution

Always deploy cert-manager and external-dns at the same time.
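
For example, a combination of flags that deploys cleanly:

```hcl
# Deploy both components together, as described above.
enable_cert_manager = true
enable_external_dns = true
```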

## Unable to access some of the Istio dashboards

| Release |
|:-------|
| v1.3 |

### Issue

There appears to be a timing issue when deploying Istio: the Istio Gateway and VirtualService belonging to a specific dashboard are deployed and marked as healthy, but remain inaccessible from outside the cluster.

### Temporary solution

We need to recreate the affected resources by deleting them and applying Terraform again.

This example fixes the Prometheus dashboard:

```bash
❯ terraform taint 'module.sandbox_eks-eu-west-1.null_resource.istio_component_ingress_yaml["../../tfm_aws_eks/istio_component_ingress_yaml/prometheus.yaml"]'
Resource instance module.sandbox_eks-eu-west-1.null_resource.istio_component_ingress_yaml["../../tfm_aws_eks/istio_component_ingress_yaml/prometheus.yaml"] has been marked as tainted.
Releasing state lock. This may take a few moments...

❯ kubectl delete gateways.networking.istio.io istio-prometheus -n istio-system
gateway.networking.istio.io "istio-prometheus" deleted

❯ kubectl delete virtualservices.networking.istio.io istio-prometheus -n istio-system
virtualservice.networking.istio.io "istio-prometheus" deleted

❯ terraform apply
Acquiring state lock. This may take a few moments...
module.sandbox_eks-eu-west-1.random_string.random: Refreshing state... [id=06xcvx]
```
77 changes: 77 additions & 0 deletions README.md
@@ -0,0 +1,77 @@
AWS EKS Terraform module
========================

This module deploys AWS EKS into an existing VPC, along with the following components:

- AWS EFS for ReadWriteMany Kubernetes volume support (optional).
- Kubernetes cluster-autoscaler across all the subnets provided in `private_subnets` and their respective AZs: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
- Kubernetes Dashboard: https://github.com/kubernetes/dashboard
- cert-manager: https://github.com/jetstack/cert-manager
- external-dns: https://github.com/kubernetes-sigs/external-dns

Features:

- SSM Session Manager access instead of bastion-host access.
- CloudWatch alarms for EFS-related metrics (including loss of burst credits).
- CloudWatch alarms for loss of CPU credits on burstable (T-family) instance types.
- Notifications for autoscaling operations.

Infrastructure requirements
===========================

EKS has very few infrastructure requirements; the general rules are documented here: https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html

Software requirements
=====================

- AWS CLI tools installed (the `aws` command).
- The `kubectl` tool.
- Helm > v3.1.
- A local installation of Istio, as per https://istio.io/docs/setup/install/istioctl/ (config location: `/istio_yaml/`).

Inputs
======

These are the parameters supported by this module:

| Name                        | Type         | Default         | Description                                                                                                                                         |
|-----------------------------|:------------:|:---------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| vpc_id                      | String       |                 | ID of the VPC this project is going to be deployed on.                                                                                               |
| private_subnets             | Strings List |                 | List of private subnets to deploy EKS on.                                                                                                            |
| public_subnets              | Strings List |                 | List of public subnets to deploy external load balancers on.                                                                                         |
| project_tags                | Map          |                 | A key/value map containing tags to add to all resources; `project_name` is compulsory.                                                              |
| cluster_version             | String       |                 | Kubernetes version for the cluster (needs to be supported by EKS).                                                                                   |
| workers_pem_key             | String       | ""              | PEM key for SSH access to the worker instances.                                                                                                      |
| workers_instance_type       | String       |                 | Instance type for the EKS workers.                                                                                                                   |
| asg_min_size                | Number       |                 | Minimum number of instances in the workers autoscaling group.                                                                                        |
| asg_max_size                | Number       |                 | Maximum number of instances in the workers autoscaling group.                                                                                        |
| workers_root_volume_size    | Number       | 100             | Size of the root volume for the EKS workers.                                                                                                         |
| enable_eks_public_endpoint  | Bool         | true            | Whether to expose the EKS endpoint to the Internet.                                                                                                  |
| eks_public_access_cidrs     | Strings List | [ "0.0.0.0/0" ] | List of CIDRs that have access to the public endpoint.                                                                                               |
| enable_eks_private_endpoint | Bool         | false           | Whether to create an internal EKS endpoint for access from the VPC.                                                                                  |
| enable_efs_integration      | Bool         |                 | Whether to deploy an EFS volume to provide support for ReadWriteMany volumes.                                                                        |
| existing_efs_volume         | String       | ""              | Volume ID of an existing EFS, used for disaster-recovery purposes.                                                                                   |
| enable_istio                | Bool         | ""              | Whether to deploy Istio on the cluster.                                                                                                              |
| sns_notification_topic_arn  | String       | ""              | SNS notification topic used to send alerts to Slack.                                                                                                 |
| k8s_dashboard_version       | String       |                 | Version of the container from https://github.com/kubernetes/dashboard/releases ; must match the deployed Kubernetes version.                        |
| k8s_autoscaler_version      | String       |                 | Version of the container from https://github.com/kubernetes/autoscaler/releases ; must match the deployed Kubernetes version.                       |
| enable_external_dns         | Bool         | false           | Whether to create the external-dns service: https://github.com/kubernetes-sigs/external-dns                                                         |
| external_dns_version        | String       | [""]            | The Helm chart version of external-dns (chart repo: https://charts.bitnami.com/bitnami ).                                                            |
| dns_zone_names              | Strings List | [""]            | Names of the AWS Route 53 zones used by external-dns, cert-manager, and base services; the first in the list is the primary for internal services.  |
| enable_cert_manager         | Bool         | false           | Whether to deploy cert-manager: https://github.com/jetstack/cert-manager                                                                             |
| cert_manager_version        | String       | [""]            | The Helm chart version of cert-manager (chart repo: https://github.com/jetstack/cert-manager/tree/master/deploy/charts/cert-manager ).              |
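
A minimal, illustrative invocation — every ID and version below is a placeholder, and the `../../tfm_aws_eks` source path mirrors the example in KNOWNBUGS.md:

```hcl
module "sandbox_eks" {
  source = "../../tfm_aws_eks"

  vpc_id          = "vpc-0123456789abcdef0"            # placeholder
  private_subnets = ["subnet-priv-a", "subnet-priv-b"] # placeholders
  public_subnets  = ["subnet-pub-a", "subnet-pub-b"]   # placeholders

  project_tags = {
    project_name = "sandbox" # `project_name` is compulsory
  }

  cluster_version       = "1.16" # must be supported by EKS
  workers_instance_type = "t3.large"
  asg_min_size          = 2
  asg_max_size          = 6

  enable_efs_integration = false
  enable_istio           = true

  # Per KNOWNBUGS.md, deploy cert-manager and external-dns together.
  enable_cert_manager = true
  enable_external_dns = true
  dns_zone_names      = ["example.com"] # first entry is the primary zone

  # Container versions must match the deployed Kubernetes version.
  k8s_dashboard_version  = "v2.0.0"  # placeholder
  k8s_autoscaler_version = "v1.16.5" # placeholder
}
```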

Outputs
=======

The module outputs the following:

| Name | Description |
|------------------------|-------------------------------------------------------------------------------------------|
| kubeconfig | Content of the kubeconfig file |
| path_to_kubeconfig | Path to the created kubeconfig |
| host | AWS EKS cluster endpoint |
| cluster_ca_certificate | The cluster CA Certificate (needs base64decode() to get the actual value) |
| token | The bearer token to use for authentication when accessing the Kubernetes master endpoint. |
| dashboard_access        | URL to access the dashboard after running `kubectl proxy`                                  |
| istio_urls              | URLs to access the Istio components                                                        |
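
For reference, a sketch of wiring these outputs into the Terraform `kubernetes` provider (assuming the module instance from the example above is named `sandbox_eks`; note that `cluster_ca_certificate` needs `base64decode()` as stated in the table):

```hcl
provider "kubernetes" {
  host                   = module.sandbox_eks.host
  cluster_ca_certificate = base64decode(module.sandbox_eks.cluster_ca_certificate)
  token                  = module.sandbox_eks.token
}
```
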
114 changes: 114 additions & 0 deletions autoscaler.tf
@@ -0,0 +1,114 @@
locals {
  cluster_autoscaler_service_name = "cluster-autoscaler"
  k8s_service_account_name        = "cluster-autoscaler-aws-cluster-autoscaler"
}

# Dedicated namespace for the autoscaler; created once the EKS control plane
# is reachable.
resource "kubernetes_namespace" "cluster-autoscaler" {
  depends_on = [
    null_resource.wait_for_cluster
  ]

  metadata {
    annotations = {
      name = local.cluster_autoscaler_service_name
    }

    name = local.cluster_autoscaler_service_name
  }
}

resource "helm_release" "cluster-autoscaler" {
  depends_on = [
    null_resource.wait_for_cluster
  ]

  name      = local.cluster_autoscaler_service_name
  chart     = "stable/cluster-autoscaler"
  namespace = kubernetes_namespace.cluster-autoscaler.id

  set {
    name  = "awsRegion"
    value = data.aws_region.current.name
  }

  set {
    name  = "rbac.create"
    value = "true"
  }

  # Annotate the service account with the IAM role ARN so the autoscaler can
  # assume it via IRSA (IAM Roles for Service Accounts).
  set {
    name  = "rbac.serviceAccountAnnotations.eks\\.amazonaws\\.com/role-arn"
    value = module.iam_assumable_role_admin.this_iam_role_arn
    type  = "string"
  }

  set {
    name  = "autoDiscovery.clusterName"
    value = module.eks-cluster.cluster_id
  }

  set {
    name  = "autoDiscovery.enabled"
    value = "true"
  }

  set {
    name  = "image.tag"
    value = var.k8s_autoscaler_version
  }
}

# IAM role assumable through the cluster's OIDC provider, scoped to the
# autoscaler's Kubernetes service account.
module "iam_assumable_role_admin" {
  source                        = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version                       = "~> v2.6.0"
  create_role                   = true
  role_name                     = "${data.aws_region.current.name}-${var.project_tags.project_name}-${local.cluster_autoscaler_service_name}"
  provider_url                  = replace(module.eks-cluster.cluster_oidc_issuer_url, "https://", "")
  role_policy_arns              = [aws_iam_policy.cluster_autoscaler.arn]
  oidc_fully_qualified_subjects = ["system:serviceaccount:${kubernetes_namespace.cluster-autoscaler.id}:${local.k8s_service_account_name}"]
}

resource "aws_iam_policy" "cluster_autoscaler" {
  name_prefix = "${data.aws_region.current.name}-${var.project_tags.project_name}-${local.cluster_autoscaler_service_name}"
  description = "EKS cluster-autoscaler policy for cluster ${module.eks-cluster.cluster_id}"
  policy      = data.aws_iam_policy_document.cluster_autoscaler.json
}

data "aws_iam_policy_document" "cluster_autoscaler" {
  # Read-only discovery calls are allowed against all resources.
  statement {
    sid    = "clusterAutoscalerAll"
    effect = "Allow"

    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "ec2:DescribeLaunchTemplateVersions",
    ]

    resources = ["*"]
  }

  # Scaling actions are restricted to ASGs tagged as owned by this cluster
  # and explicitly enabled for the autoscaler.
  statement {
    sid    = "clusterAutoscalerOwn"
    effect = "Allow"

    actions = [
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "autoscaling:UpdateAutoScalingGroup",
    ]

    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "autoscaling:ResourceTag/kubernetes.io/cluster/${module.eks-cluster.cluster_id}"
      values   = ["owned"]
    }

    condition {
      test     = "StringEquals"
      variable = "autoscaling:ResourceTag/k8s.io/${local.cluster_autoscaler_service_name}/enabled"
      values   = ["true"]
    }
  }
}