Add markdownlint CI check and refactor md files (#2128)
* Add markdownlint CI check and refactor md files

* Test

* Revert "Test"

This reverts commit 8d24c99.

* Add markdownlint make commands and add it to pre-commit
albertogdd authored Sep 18, 2023
1 parent 39680e0 commit f16d0b4
Showing 17 changed files with 154 additions and 36 deletions.
6 changes: 5 additions & 1 deletion .github/pre-commit
@@ -1,13 +1,16 @@
#!/bin/bash

# Check linting and unit tests before committing
# if linting failes the commit will be aborted and the linting will start auto fixing
# if linting fails the commit will be aborted and the linting will start auto fixing
# if unit tests fail the commit will be aborted

# Linting

echo Shell: "${SHELL}"

echo Linting markdown files...
make markdown/lint

## this will retrieve all of the .go files that have been
## changed since the last commit
STAGED_GO_FILES=$(git diff --cached --name-only -- '*.go')
@@ -26,6 +29,7 @@ else
done
fi


make go/golangci
res=${?}
if [[ ${res} -ne 0 ]]; then
11 changes: 11 additions & 0 deletions .github/workflows/ci.yaml
@@ -94,6 +94,17 @@ jobs:
version: v1.54.2
args: --build-tags containers_image_storage_stub,e2e --timeout 300s --out-${NO_FUTURE}format colored-line-number

markdown-lint:
name: Lint markdown files
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3
- name: Lint markdown files
uses: docker://avtodev/markdown-lint:v1
with:
args: './**.md'

prepare:
name: Prepare properties
runs-on: ubuntu-latest
40 changes: 35 additions & 5 deletions ARCHITECTURE.md
@@ -1,8 +1,10 @@
# Architecture

This document describes the high-level architecture of `Dynatrace Operator`.
If you want to familiarize yourself with the code base, you are just in the right place!

## Bird's Eye View

```mermaid
graph LR
A[fa:fa-user User] -->|creates| B(fa:fa-file CR)
@@ -21,93 +23,121 @@ graph LR
On a very high level: for a given `CustomResource` (CR) provided by the user, the `Operator` deploys _one or several_ Dynatrace components into the Kubernetes Environment.

A bit more specifically:

- A `CustomResource`(CR) is configured by the user, where they provide what features or components they want to use, and provide some minimal configuration in the CR so the `Dynatrace Operator` knows what to deploy and how to configure it.
- The `Operator` not only deploys the different Dynatrace components, but also keeps them up to date.
- The `CustomResource`(CR) defines a state, the `Dynatrace Operator` enforces it, makes it happen.
- The `CustomResource`(CR) defines a state, the `Dynatrace Operator` enforces it, makes it happen.

### Dynatrace Operator components

The `Dynatrace Operator` is not a single Pod; it consists of multiple components, encompassing several Kubernetes concepts.

#### Operator

This component/pod is the one that _reacts to_ the creation/update/delete of our `CustomResource(s)`, causing the `Operator` to _reconcile_.
A _reconcile_ simply means that the `Operator` checks what is in the `CustomResource(s)` and, according to that, creates/updates/deletes resources in the Kubernetes environment (so the state of the Kubernetes Environment matches the state described in the `CR`).

Relevant links:

- [Operator Pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
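
To make the reconcile idea more concrete, below is a minimal, hypothetical sketch of the pattern using `controller-runtime`. The type name `DynaKubeReconciler` and the steps in the comments are illustrative only and do not mirror the operator's actual implementation.

```go
package controllers

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// DynaKubeReconciler is a placeholder name for a controller that reacts to a CR.
type DynaKubeReconciler struct {
    client.Client
}

// Reconcile is called whenever the watched CustomResource (or a dependent
// object) changes. It compares the desired state from the CR with the state
// of the cluster and creates/updates/deletes resources to close the gap.
func (r *DynaKubeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // 1. Fetch the CustomResource referenced by req.NamespacedName.
    // 2. Derive the desired child objects (deployments, services, secrets, ...).
    // 3. Create or update them so the cluster matches the CR; delete leftovers.
    // 4. Optionally requeue to re-check the state later.
    return ctrl.Result{}, nil
}
```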

#### Webhook

This component/pod is the one that _intercepts_ creation/update/delete of Kubernetes Resources (only those that are relevant), then either mutates or validates them.

- Validation: We only use it for our `CustomResource(s)`; it's meant to catch known misconfigurations. If the validation webhook detects a problem, the user is warned and the change is denied and rolled back, as if nothing happened.
- Mutation: Used to modify Kubernetes Resources "in flight", so the Resource is created/updated in the cluster as if it had been applied with the added modifications.
- We have 2 use-cases for this:
- Seamlessly modifying user resources with the necessary configuration needed for Dynatrace observability features to work.
- Handle time/timing sensitive minor modifications (labeling, annotating) of user resources, which is meant to help the `Operator` perform more reliably and timely.
- We have 2 use-cases for this:
- Seamlessly modifying user resources with the necessary configuration needed for Dynatrace observability features to work.
- Handle time/timing sensitive minor modifications (labeling, annotating) of user resources, which is meant to help the `Operator` perform more reliably and timely.

Relevant links:

- [What are webhooks?](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
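
For illustration only, a validating admission handler built on `controller-runtime` could look roughly like the sketch below; the check and the messages are placeholders and do not reflect the operator's real validation rules.

```go
package validation

import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// crValidator is a placeholder handler that admits or rejects our CustomResource(s).
type crValidator struct{}

// Handle receives the admission request, inspects the object and returns
// either an "allowed" or a "denied" response with a human-readable reason.
func (v *crValidator) Handle(ctx context.Context, req admission.Request) admission.Response {
    // In a real handler the raw object would be decoded into the CR type and
    // checked for known misconfigurations.
    if len(req.Object.Raw) == 0 {
        return admission.Denied("request contains no object to validate")
    }
    return admission.Allowed("")
}
```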

#### Init Container

Some configurations need to happen on the container filesystem level, like setting up a volume or creating/updating configuration files.
To achieve this we add our init-container (using the `Webhook`) to user Pods. As init-containers run before any other container, we can set up the environment of user containers to enable Dynatrace observability features.

Relevant links:

- [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)
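
A highly simplified sketch of what injecting such an init-container amounts to is shown below; the container name, image and volume are made up for illustration.

```go
package mutation

import (
    corev1 "k8s.io/api/core/v1"
)

// addSetupInitContainer appends an init container that prepares the pod's
// filesystem (e.g. placing binaries and config files on a shared volume)
// before the application containers start.
func addSetupInitContainer(pod *corev1.Pod) {
    pod.Spec.InitContainers = append(pod.Spec.InitContainers, corev1.Container{
        Name:  "observability-setup",         // placeholder name
        Image: "example.com/setup-image:1.0", // placeholder image
        VolumeMounts: []corev1.VolumeMount{
            {Name: "shared-config", MountPath: "/mnt/config"}, // placeholder volume
        },
    })
}
```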

#### CSI-Driver

A component that is present on all nodes, meant to provide volumes (based on the node's filesystem) so the capabilities provided by the `Operator` use less disk space and perform better.

Relevant links:

- [CSI volume](https://kubernetes.io/docs/concepts/storage/volumes/#csi)

## Code Map

> TODO: Improve folder structure before documenting it more deeply, as it's kind of a mess now. If I didn't mention it now, then I probably don't like its current location.
### `config`

Contains the `.yaml` files that are needed to deploy the `Operator` and its components into a Kubernetes cluster.

- most `.yaml` files are part of the Helm chart
- other `.yaml` files are relevant for different marketplaces

### `hack`

Collection of scripts used for:

- CI tasks
- Development (build, push, deploy, test, etc.)

### `hack/make`

Where the `make` targets are defined. We don't have a single makefile with all the targets as it would be quite large.

### `test`

E2E testing code. Unit tests are NOT found here, they are in the same module that they are testing, as that is the Golang convention.

### `src/api`

Contains the `CustomResourceDefinitions`(CRDs) as Golang `structs` that the `Operator` reacts to. The `CustomResourceDefinition` yaml files are generated based on these `structs`.
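
For orientation, a CRD expressed as Go `structs` typically looks like the hypothetical example below; the type and field names are invented, and tooling such as `controller-gen` turns markers like these into the CRD yaml.

```go
package v1beta1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ExampleResourceSpec is an invented spec to show the shape of such structs.
type ExampleResourceSpec struct {
    // APIURL tells the operator which environment to talk to.
    APIURL string `json:"apiUrl,omitempty"`
}

// +kubebuilder:object:root=true

// ExampleResource is an invented CustomResource type; the real CRDs live in `src/api`.
type ExampleResource struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec ExampleResourceSpec `json:"spec,omitempty"`
}
```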

### `src/cmd`

Where the entry point for each `Operator` subcommand is found. The `Operator` is not a single container, but we still use the same image for all our containers, to simplify the caching for Kubernetes and the mirroring of the `Operator` image in private registries. So each component has its own subcommand.
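
The "one image, several subcommands" idea can be pictured with the hypothetical `cobra` sketch below; the subcommand names are placeholders.

```go
package main

import (
    "os"

    "github.com/spf13/cobra"
)

func main() {
    root := &cobra.Command{Use: "operator-image"}

    // Each component shipped in the single image is started via its own subcommand.
    root.AddCommand(
        &cobra.Command{Use: "operator", RunE: func(*cobra.Command, []string) error { return nil }},
        &cobra.Command{Use: "webhook-server", RunE: func(*cobra.Command, []string) error { return nil }},
        &cobra.Command{Use: "csi-driver", RunE: func(*cobra.Command, []string) error { return nil }},
    )

    if err := root.Execute(); err != nil {
        os.Exit(1)
    }
}
```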

### `src/controllers`

A Controller is a component that listens/reacts to some Kubernetes Resource. The `Operator` has several of these.

### `src/controllers/certificates`

The `Operator` creates and maintains certificates that are meant to be used by the webhooks. Certificates are required for a webhook to work in Kubernetes, and hard-coding certificates into the release of the `Operator` is not an option; the same is true for requiring the user to set up `cert-manager` to create/manage certs for the webhooks.

### `src/controllers/csi/driver`

Main logic for the CSI-Driver's `server` container. Implements the CSI gRPC interface, and handles each mount request.
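
The fragment below is only a sketch of that gRPC surface, using the upstream CSI spec package for Go; the real mount logic is reduced to comments.

```go
package csidriver

import (
    "context"

    "github.com/container-storage-interface/spec/lib/go/csi"
)

// server is a placeholder for the type that implements the CSI node service.
type server struct{}

// NodePublishVolume is invoked by the kubelet for each volume mount request;
// a driver typically answers it by exposing an already prepared directory at
// the requested target path (e.g. via a bind mount).
func (s *server) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
    targetPath := req.GetTargetPath()
    _ = targetPath // the prepared per-volume directory would be mounted here
    return &csi.NodePublishVolumeResponse{}, nil
}
```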

### `src/controllers/csi/provisioner`

Main logic for the CSI-Driver's `provisioner` container. Handles setting up the environment (filesystem) on the node, so the `server` container can complete its task quickly without making any external requests.

### `src/controllers/dynakube` and `src/controllers/edgeconnect`

Main logic for the two `CustomResources` the `Operator` currently has.

### `src/controllers/node`

The `Operator` keeps track of the nodes in the Kubernetes cluster; this is necessary to notice intentional node shutdowns so the `Operator` can notify the `Dynatrace Environment` about them. Otherwise the `Dynatrace Environment` would produce warnings when a node is shut down even when it was intentional.
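
Purely as an illustration, tracking node removal with `controller-runtime` could look like the sketch below; the "notify upstream" step is only a placeholder comment.

```go
package nodes

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// nodeReconciler is a placeholder controller that watches cluster nodes.
type nodeReconciler struct {
    client.Client
}

// Reconcile is triggered for node events; when a tracked node no longer
// exists, the operator could report the (intentional) shutdown upstream.
func (r *nodeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var node corev1.Node
    if err := r.Get(ctx, req.NamespacedName, &node); err != nil {
        if errors.IsNotFound(err) {
            // Placeholder: notify the monitoring environment that this node
            // was removed on purpose, so no false warnings are raised.
            return ctrl.Result{}, nil
        }
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil
}
```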

### `src/webhook/mutation`

Mutation webhooks meant for intercepting user Kubernetes Resources, so they can be updated at the instant the updates are required.

### `src/webhook/validation`

Validation webhooks meant for intercepting our `CustomResources` managed by the users, so they can be checked for well-known misconfigurations and the user warned if any problems are found.

### `src/standalone`
Main logic for the init-container injected by the `Operator`.

Main logic for the init-container injected by the `Operator`.
6 changes: 4 additions & 2 deletions CONTRIBUTING.md
@@ -7,7 +7,7 @@
- [Remove all Dynatrace pods in force mode (useful debugging E2E tests)](#remove-all-dynatrace-pods-in-force-mode-useful-debugging-e2e-tests)
- [Copy CSI driver database to localhost for introspection via sqlite command](#copy-csi-driver-database-to-localhost-for-introspection-via-sqlite-command)
- [Add debug suffix on E2E tests to avoid removing pods](#add-debug-suffix-on-e2e-tests-to-avoid-removing-pods)
- [Debug cluster nodes by opening a shell prompt (details here)](#debug-cluster-nodes-by-opening-a-shell-prompt-details-here)
- [Debug cluster nodes by opening a shell prompt (details here)](#debug-cluster-nodes-by-opening-a-shell-prompt)

## Steps

@@ -90,7 +90,9 @@ kubectl cp dynatrace/dynatrace-oneagent-csi-driver-<something>:/data/csi.db csi.
make test/e2e/cloudnative/proxy/debug
```

### Debug cluster nodes by opening a shell prompt ([details here](https://www.psaggu.com/upstream-contribution/2021/05/04/notes.html))
### Debug cluster nodes by opening a shell prompt

[Details here](https://www.psaggu.com/upstream-contribution/2021/05/04/notes.html)

```sh
oc debug node/<node-name>
7 changes: 4 additions & 3 deletions HACKING.md
@@ -10,20 +10,21 @@
There are automatic builds from the master branch. The latest development build can be installed as follows:

#### Kubernetes

```sh
$ make deploy/kubernetes
make deploy/kubernetes
```

#### OpenShift

```sh
$ make deploy/openshift
make deploy/openshift
```

#### Tests

The unit tests can be executed as follows:

```sh
$ make go/test
make go/test
```
20 changes: 12 additions & 8 deletions README.md
@@ -1,4 +1,5 @@
# Dynatrace Operator

[![GoDoc](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](http://godoc.org/github.com/Dynatrace/dynatrace-operator)
[![CI](https://github.com/Dynatrace/dynatrace-operator/actions/workflows/ci.yaml/badge.svg?branch=main)](https://github.com/Dynatrace/dynatrace-operator/actions/workflows/ci.yaml)
[![codecov](https://codecov.io/gh/Dynatrace/dynatrace-operator/parse/branch/main/graph/badge.svg)](https://codecov.io/gh/Dynatrace/dynatrace-operator)
@@ -42,13 +43,14 @@ objects like permissions, custom resources and corresponding StatefulSets.
To create the namespace and apply the operator, run the following commands

```sh
$ kubectl create namespace dynatrace
$ kubectl apply -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes.yaml
kubectl create namespace dynatrace
kubectl apply -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes.yaml
```

If using `cloudNativeFullStack` or `applicationMonitoring` with CSI driver, the following command is required as well:

```sh
$ kubectl apply -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes-csi.yaml
kubectl apply -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes-csi.yaml
```

A secret holding tokens for authenticating to the Dynatrace cluster needs to be created upfront. Create access tokens of
@@ -59,7 +61,7 @@ to [Create user-generated access tokens](https://www.dynatrace.com/support/help/
The token scopes required by the Dynatrace Operator are documented on our [official help page](https://www.dynatrace.com/support/help/shortlink/full-stack-dto-k8#tokens)

```sh
$ kubectl -n dynatrace create secret generic dynakube --from-literal="apiToken=DYNATRACE_API_TOKEN" --from-literal="dataIngestToken=DATA_INGEST_TOKEN"
kubectl -n dynatrace create secret generic dynakube --from-literal="apiToken=DYNATRACE_API_TOKEN" --from-literal="dataIngestToken=DATA_INGEST_TOKEN"
```

#### Create `DynaKube` custom resource for ActiveGate and OneAgent rollout
@@ -75,27 +77,29 @@ The recommended approach is using classic Fullstack injection to roll out Dynatr
In case you want to make adjustments, please have a look at [our DynaKube Custom Resource examples](assets/samples).

Save one of the sample configurations, change the API url to your environment and apply it to your cluster.

```sh
$ kubectl apply -f cr.yaml
kubectl apply -f cr.yaml
```

For detailed instructions see
our [official help page](https://www.dynatrace.com/support/help/shortlink/full-stack-dto-k8).


## Uninstall dynatrace-operator

> For instructions on how to uninstall the dynatrace-operator on Openshift,
> head to the [official help page](https://www.dynatrace.com/support/help/shortlink/full-stack-dto-k8#uninstall-dynatrace-operator)
Clean-up all Dynatrace Operator specific objects:

```sh
$ kubectl delete -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes.yaml
kubectl delete -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes.yaml
```

If the CSI driver was installed, the following command is required as well:

```sh
$ kubectl delete -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes-csi.yaml
kubectl delete -f https://github.com/Dynatrace/dynatrace-operator/releases/latest/download/kubernetes-csi.yaml
```

## Hacking
4 changes: 2 additions & 2 deletions config/helm/README.md
@@ -1,10 +1,10 @@
# Tool Prerequisites
## Tool Prerequisites

* Install mpdev, see [google documentation](https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/docs/tool-prerequisites.md) for more information
* Create an empty GKE cluster
* Apply Google's Application CRD, see [google documentation](https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/docs/tool-prerequisites.md) for more information

# Installation
## Installation

* Run `hack/gcr/deployer-image.sh` to build and push a new deployer image containing the helm charts
* Run `hack/gcr/deploy.sh` to deploy the deployer image
7 changes: 6 additions & 1 deletion config/helm/chart/default/README.md
@@ -5,6 +5,7 @@ The Dynatrace Operator supports rollout and lifecycle of various Dynatrace compo
This Helm Chart requires Helm 3.

## Quick Start

Migration instructions can be found in the [official help page](https://www.dynatrace.com/support/help/shortlink/k8s-dto-helm#migrate).

Install the Dynatrace Operator via Helm by running the following commands.
@@ -15,19 +16,23 @@ Install the Dynatrace Operator via Helm by running the following commands.
> [official help page](https://www.dynatrace.com/support/help/shortlink/k8s-helm)
Add `dynatrace` helm repository:
```

```console
helm repo add dynatrace https://raw.githubusercontent.com/Dynatrace/dynatrace-operator/main/config/helm/repos/stable
```

Install `dynatrace-operator` helm chart and create the corresponding `dynatrace` namespace:

```console
helm install dynatrace-operator dynatrace/dynatrace-operator -n dynatrace --create-namespace --atomic
```

## Uninstall chart

> Full instructions can be found in the [official help page](https://www.dynatrace.com/support/help/shortlink/k8s-helm#uninstall-dynatrace-operator)
Uninstall the Dynatrace Operator by running the following command:

```console
helm uninstall dynatrace-operator -n dynatrace
```