The Ginkgo-based integration tests do a decent job of exercising the internal mechanics of the operator, but they do NOT provide full coverage of the overall system as a "working unit."
Formalize a plan for running automated, full-stack validation of Fabric networks constructed with the operator. The operator provides several routes for realizing a Fabric network, and each should be tested independently as a recurring validation of system behavior.
The "acceptance" tests can be run continuously, but at a minimum they MUST be run at release intervals.
Whatever "platform" is used, it should complete the end-to-end scenario validation in a fully predictable and automated fashion: everything, even to the point of dynamically provisioning an ephemeral EKS, IKS, KIND, OCP, etc. cluster as the base Kubernetes, if that is possible.
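As one illustration of the "ephemeral cluster" idea, a KIND cluster can be created and discarded entirely from the command line. This is a minimal sketch; the cluster name is arbitrary and illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Create an ephemeral KIND cluster for this test run.
# "fabric-acceptance" is an illustrative cluster name.
kind create cluster --name fabric-acceptance --wait 5m

# ... run the acceptance suite against this cluster ...

# Tear the cluster down when the run completes.
kind delete cluster --name fabric-acceptance
```

The same shape applies to EKS/IKS/OCP, just with the provider's own provisioning CLI substituted for `kind`.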
We have had early, very positive results integrating cloud-native workflow engines, such as Argo and Tekton, into automated provisioning workflows. One route to an acceptance test bed would involve:
General idea:

1. Provision a Kubernetes cluster (or reference an existing one).
2. `kubectl apply` an Nginx ingress controller.
3. `kubectl apply` Argo / Tekton.
4. Submit a Workflow (or Tekton Pipeline) to run natively in the cluster as a sequence of orchestrated containers:
   - Apply peers, orderers, CAs, etc. via CRDs (or Ansible -> console SDKs).
   - Issue `peer`, `osnadmin`, etc. CLI routines (or Ansible -> console SDKs) to create channels.
   - Compile chaincode images, prepare packages, and install/commit.
   - Execute E2E test / consuming-application scenarios.
5. Finally: tear down the k8s cluster at the completion of the suite.
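The steps above could be sketched as a single driver script, assuming `kind`, `kubectl`, and the `argo` CLI are available. The manifest URLs are the publicly documented install manifests for ingress-nginx and Argo Workflows; the workflow file name is hypothetical:

```shell
#!/usr/bin/env bash
set -euo pipefail

CLUSTER=fabric-acceptance   # illustrative cluster name

# 1. Provision an ephemeral Kubernetes cluster.
kind create cluster --name "$CLUSTER" --wait 5m
# Guarantee teardown (step 5) even if a later step fails.
trap 'kind delete cluster --name "$CLUSTER"' EXIT

# 2. Apply an Nginx ingress controller (KIND-specific manifest).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# 3. Install the Argo Workflows controller.
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/latest/download/install.yaml

# 4. Submit the acceptance Workflow and block until it completes.
#    network-acceptance.yaml is a hypothetical Workflow that applies the
#    node CRDs, creates channels, deploys chaincode, and runs the E2E
#    scenarios as orchestrated containers.
argo submit -n argo --wait network-acceptance.yaml

# 5. Teardown is handled by the trap above.
```

A Tekton variant would follow the same outline, with `tkn pipeline start` in place of `argo submit`.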
Workflows and Pipelines should be relatively modular, if possible, so that they can be assembled in the future as building blocks for additional test and automation scenarios.
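As a sketch of what "building blocks" could look like with Argo, a step can be published once as a `WorkflowTemplate` and then composed into larger Workflows via `templateRef`. All names, images, and commands below are illustrative placeholders, not a working channel-creation step:

```shell
# Register a reusable step once...
kubectl apply -n argo -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: create-channel                      # illustrative template name
spec:
  templates:
    - name: main
      container:
        image: hyperledger/fabric-tools:2.4 # illustrative image tag
        command: [sh, -c]
        args: ["peer channel create ..."]   # placeholder command
EOF

# ...then compose it into any number of test Workflows by reference.
kubectl apply -n argo -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: acceptance-
spec:
  entrypoint: run
  templates:
    - name: run
      steps:
        - - name: channel
            templateRef:
              name: create-channel
              template: main
EOF
```

Tekton offers the equivalent composition unit in its reusable `Task` resources referenced from `Pipeline` definitions.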