
Develop a framework for running end-to-end and scenario acceptance test suites #30

Open
jkneubuh opened this issue Jul 20, 2022 · 0 comments


The Ginkgo-based integration tests do a decent job of exercising the internal mechanics of the operator, but they do NOT provide full coverage of the overall system as a "working unit."

Formalize a plan for running automated, full-stack coverage of Fabric networks constructed with the operator. The operator provides several routes for realizing a Fabric network, and each should be tested independently as a recurring validation of system behavior.

The "acceptance" tests can be run continuously, but the expectation is that they MUST be run at release intervals.

Whatever "platform" is used, it should complete the end-to-end-to-end scenario validation in a 100% predictable and automated fashion. Like: everything, even to the point of dynamically provisioning an ephemeral EKS, IKS, KIND, OCP, etc. cluster as a base kubernetes, if that is possible.

We have had early, very positive results integrating cloud-native workflow engines, such as Argo and Tekton, into automated provisioning workflows. One route to an acceptance test bed would involve the following.

General idea:

  1. Provision a Kubernetes cluster (or reference an existing one, if available)
  2. kubectl apply an Nginx ingress controller
  3. kubectl apply Argo Workflows / Tekton
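The setup steps above can be sketched with kind as the ephemeral cluster; the manifest URLs below are illustrative of the current upstream install routes and should be pinned to specific releases in a real pipeline:

```shell
# 1. Provision an ephemeral cluster (kind used here as an example)
kind create cluster --name fabric-acceptance

# 2. Install the Nginx ingress controller (kind-specific manifest)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# 3. Install Argo Workflows (pin a release version in practice)
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/latest/download/install.yaml
```

The same three steps would translate directly to an EKS/IKS/OCP provisioner (e.g. eksctl) when a cloud-backed cluster is wanted instead of kind.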

Submit a Workflow (or tkn Pipeline) to run natively in the cluster as a sequence of orchestrated containers:

  1. "Install Fabric" to a namespace (See Improve the "installation" experience : kubectl apply -f URL  #27) via k8s apply URL
  2. apply peers, orderers, CAs, etc. via CRD (or Ansible -> console SDKs)
  3. Issue peer, osnadmin, etc. CLI routines (or Ansible -> console SDKs) to create channels
  4. Compile chaincode images, prepare packages, and install/commit
  5. Execute E2E test / consuming application scenarios.
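For step 3, the channel-creation containers would wrap the standard Fabric CLI routines. A hedged sketch, where the host names, ports, and file paths are placeholders for whatever the operator actually provisions:

```shell
# Join the ordering node to the channel (Fabric 2.3+ channel participation API)
osnadmin channel join \
  --channelID mychannel \
  --config-block ./mychannel_genesis_block.pb \
  -o orderer.example.com:9443 \
  --ca-file ./ca-cert.pem

# Join a peer to the channel using the same genesis block
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
peer channel join -b ./mychannel_genesis_block.pb
```

Each invocation maps naturally onto one container step in a Workflow or Pipeline, with the MSP material and genesis block passed between steps as artifacts.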

Finally: tear down the k8s cluster at the completion of the suite.
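Put together, the suite could be expressed as a single Argo Workflow whose onExit handler guarantees teardown whether the suite passes or fails. This is a hypothetical skeleton: the template names, images, and commands are illustrative, not part of the operator today:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: fabric-acceptance-
spec:
  entrypoint: suite
  onExit: teardown                  # always runs, pass or fail
  templates:
    - name: suite
      steps:
        - - name: install-fabric    # kubectl apply -f URL (see #27)
            template: run
            arguments: {parameters: [{name: cmd, value: "kubectl apply -f <install-URL>"}]}
        - - name: create-nodes      # peers, orderers, CAs via CRDs
            template: run
            arguments: {parameters: [{name: cmd, value: "kubectl apply -f network-crds/"}]}
        - - name: create-channels   # peer / osnadmin CLI routines
            template: run
            arguments: {parameters: [{name: cmd, value: "./scripts/create-channels.sh"}]}
        - - name: e2e-scenarios
            template: run
            arguments: {parameters: [{name: cmd, value: "./scripts/run-scenarios.sh"}]}
    - name: run                     # shared, parameterized step template
      inputs: {parameters: [{name: cmd}]}
      container:
        image: bitnami/kubectl      # placeholder tools image
        command: [sh, -c, "{{inputs.parameters.cmd}}"]
    - name: teardown
      container:
        image: bitnami/kubectl
        command: [sh, -c, "echo 'tear down the ephemeral cluster here'"]
```

Because each step delegates to one shared, parameterized template, the steps remain the kind of modular building blocks described below and can be recomposed into other test and automation scenarios.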

Workflows and Pipelines should be relatively modular, if possible, such that they can be assembled in the future as building blocks for additional test and automation scenarios.
