
# Operator Box

Software requirements:

- kubectl (Kubernetes command-line tool)
- Helm 3+ for deploying th2 components into Kubernetes
- Chrome 75 or newer
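To confirm the command-line tools are in place, a quick check (assuming both are on the PATH):

```sh
# Print client versions to verify the installations
kubectl version --client
helm version --short
```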

# QA Box

- Chrome 75 or newer

# Cassandra Box

- Cassandra 3.11.6
  - Cassandra installed in-cluster in Kubernetes (development mode)
  - Cassandra cluster installed separately (production mode)
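To verify which Cassandra version a node is running (assuming shell access to the node):

```sh
# Report the Cassandra version of the local node
nodetool version

# Alternatively, query it over CQL
cqlsh -e "SELECT release_version FROM system.local;"
```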

## Apache Cassandra cluster hardware requirements

Though a single-node Cassandra installation is possible, a cluster of at least 3 nodes is generally recommended. The requirements for each node are the same.

| Apache Cassandra node | Memory (MB) | CPU (cores) | Disk space (GB) |
| --- | --- | --- | --- |
| Cassandra node_n | 4000 MB | 2 | 15 GB for the `/` mount + 200 GB for the `/var` mount |

# External Resources

- Git repositories for apps and infrastructure code

# Test Platform Box

Component calculations:

th2 env = Base + Core + Building blocks + Custom + Cassandra*

\* see the Cassandra Box section above.

| Base & Core components | Memory (MB) | CPU (millicores) | Comment |
| --- | --- | --- | --- |
| th2 infra | 1000 MB | 800 m | helm, infra-mgr, infra-editor, infra-operator |
| th2 core | 2500 MB | 2000 m | mstore, estore, rpt-provider, rpt-viewer |
| RabbitMQ (replica 1) | 2000 MB | 1000 m | needs testing |
| Monitoring | 1500 MB | 2000 m | |
| Other supporting components | 500 MB | 250 m | e.g. in-cluster CD system, ingress, etc. |
| **Total** | 7500 MB | 6050 m | |
| Custom & Building blocks components | Memory (MB) | CPU (millicores) | Comment |
| --- | --- | --- | --- |
| th2 in-cluster connectivity services | 200 MB * n | 200 m * n | Depends on the number of connectivity instances, e.g. for 10 instances: 200 MB * 10 = 2000 MB |
| th2 codec, act | 200 MB * n | 200 m * n | |
| th2 check1 | | | |
| th2 Java read | 200 MB * n | 200 m * n | |
| th2 recon | 200 MB * n | 200 m * n | cacheSize = (podMemoryLimit - 70 MB) / (AvrRawMsgSize * 10 * SUM(number of groups in rule)) |
| th2 check2 | 800 MB * n | 200 m * n | |
| th2 hand | 300 MB * n | 400 m * n | |
| **Total** | 1900 MB * n | 1400 m * n | |
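As a rough worked example of the sizing formula above (a sketch only; n = 10 is illustrative, and Custom components and Cassandra are excluded):

```sh
#!/bin/sh
# Hypothetical sizing: Base & Core totals from the tables above, n = 10
n=10
base_core_mb=7500          # Base & Core total (MB)
blocks_mb=$(( 1900 * n ))  # Building blocks total: 1900 MB * n
echo "Memory (excluding Custom and Cassandra): $(( base_core_mb + blocks_mb )) MB"

base_core_m=6050           # Base & Core total (millicores)
blocks_m=$(( 1400 * n ))   # Building blocks total: 1400 m * n
echo "CPU (excluding Custom and Cassandra): $(( base_core_m + blocks_m )) m"

# Recon cacheSize, with hypothetical numbers: podMemoryLimit = 200 MB,
# AvrRawMsgSize = 0.001 MB, 13 groups across all rules:
# cacheSize = (200 - 70) / (0.001 * 10 * 13) = 1000 messages
```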

Software requirements:

- Kubernetes cluster accessible from the other boxes
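A quick reachability check from another box (assuming a kubeconfig for the cluster is in place):

```sh
# Confirm the cluster API is reachable and nodes are ready
kubectl cluster-info
kubectl get nodes
```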

# Unallocated

The following requirements have not yet been assigned to a specific box.

## Kubernetes: before you begin

- Docker CE installed with the following parameters in `/etc/docker/daemon.json` (restart Docker after editing; see the sketch after this list):

  ```json
  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m"
    },
    "storage-driver": "overlay2"
  }
  ```
- Overlay2 storage driver prerequisites

- Docker registry with push permissions for storing containerized application images

- Kubernetes cluster installed (single master node for development mode; master and 2+ workers for production mode) with the flannel CNI plugin, per [Creating a cluster with kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/). A bootstrap sketch follows this list.

  Flannel CNI installation:

  ```sh
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  ```

  If you want to be able to schedule Pods on the control-plane node, for example in a single-machine development cluster, run:

  ```sh
  kubectl taint nodes --all node-role.kubernetes.io/master-
  ```
- Python

- Java 11
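A minimal bootstrap sketch tying the steps above together (assumptions: systemd-managed Docker and flannel's default pod CIDR of 10.244.0.0/16; adjust for your environment):

```sh
# Restart Docker after editing /etc/docker/daemon.json (systemd assumed)
sudo systemctl restart docker

# Initialise the control plane; flannel expects this pod network CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install the flannel CNI plugin (same manifest as above)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```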

Hardware requirements:

High-availability cluster configuration:

- Three machines that meet kubeadm's minimum requirements for the workers

- One or more machines running one of:
  - Ubuntu 16.04+
  - Debian 9+
  - CentOS 7
  - Red Hat Enterprise Linux (RHEL) 7
  - Fedora 25+
- Full network connectivity between all machines in the cluster (a public or private network is fine)

- Unique hostname, MAC address, and product_uuid for every node (see the kubeadm installation docs for details)

- Certain ports open on your machines (see the kubeadm installation docs for the full list)

- Swap disabled: swap MUST be disabled for the kubelet to work properly (a sketch follows this list)

- sudo privileges on all machines

- SSH access from one device to all nodes in the system

- kubeadm and kubelet installed on all machines; kubectl is optional
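Disabling swap, as required above (a minimal sketch; the sed pattern assumes standard swap entries in /etc/fstab):

```sh
# Turn off swap immediately
sudo swapoff -a

# Comment out swap entries so the change survives reboots
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```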