OpenShift CI/CD with Tekton

OpenShift is Red Hat’s distribution of Kubernetes, with many additional features. Interestingly, many organisations are now considering Kubernetes/OpenShift not just for container orchestration but also for setting up their CI/CD. This post demonstrates setting up a Tekton CI/CD pipeline within an OpenShift cluster.
Tekton is an open-source project developed and maintained by the community; Google, Red Hat, IBM, and many individuals are among its major contributors.
You are probably already using one of the usual CI/CD servers: Jenkins/CloudBees, TeamCity, Bamboo, TFS, etc. All of these servers must run 24/7 for the entire lifetime of a project, even when no one is doing any coding over Christmas and New Year.
Tekton is a serverless (Knative-based) CI/CD framework that can be installed within the OpenShift cluster. You can set up a webhook in GitHub to trigger the CI/CD pipeline that is created in OpenShift using Tekton.
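As a sketch of how such a GitHub webhook could be wired up with Tekton Triggers (the resource names, Secret, binding, and template below are illustrative assumptions, not resources from this cluster), an EventListener receives the webhook and instantiates a PipelineRun:

```yaml
# Illustrative only: an EventListener that starts a PipelineRun
# when GitHub delivers a push event. All names here are hypothetical.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener                 # hypothetical name
spec:
  serviceAccountName: pipeline
  triggers:
    - name: on-push
      interceptors:
        - ref:
            name: github                # Tekton's built-in GitHub interceptor
          params:
            - name: secretRef
              value:
                secretName: github-webhook-secret   # hypothetical Secret
                secretKey: token
            - name: eventTypes
              value: ["push"]
      bindings:
        - ref: github-push-binding      # extracts the repo URL and revision
      template:
        ref: java-cicd-trigger-template # creates the PipelineRun
```

The EventListener is exposed via a Route, and that Route's URL is what you register as the webhook endpoint in the GitHub repository settings.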
I’m assuming you already have an OpenShift cluster up and running. My OpenShift cluster looks like this:
(jegan@tektutor.org)$ oc version
Client Version: 4.10.0-202203141248.p0.g6db43e2.assembly.stream-6db43e2
Server Version: 4.10.5
Kubernetes Version: v1.23.3+e419edf

(jegan@tektutor.org)$ tkn version
Client version: 0.22.0
Pipeline version: v0.34.1

(jegan@tektutor.org)$ oc get nodes
NAME                        STATUS   ROLES           AGE    VERSION
master-1.ocp.tektutor.org   Ready    master,worker   4d1h   v1.23.3+e419edf
master-2.ocp.tektutor.org   Ready    master,worker   4d1h   v1.23.3+e419edf
master-3.ocp.tektutor.org   Ready    master,worker   4d1h   v1.23.3+e419edf
worker-1.ocp.tektutor.org   Ready    worker          4d     v1.23.3+e419edf
worker-2.ocp.tektutor.org   Ready    worker          4d     v1.23.3+e419edf
You can install the Tekton Pipelines Operator via the CLI or the OpenShift web console.
Installing Tekton Pipelines via the OpenShift CLI (of course, you need to be a kubeadmin to do this):

oc new-project tekton-pipelines
oc adm policy add-scc-to-user anyuid -z tekton-pipelines-controller
oc adm policy add-scc-to-user anyuid -z tekton-pipelines-webhook
oc apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.notags.yaml
The Tekton CI/CD framework adds many Custom Resources to OpenShift. To name a few:
- Task
- Pipeline
- TaskRun
- PipelineRun
You can check the CRDs installed in OpenShift as shown below:
(jegan@tektutor.org)$ oc get crds
NAME                                        CREATED AT
addresspools.metallb.io                     2022-03-27T01:42:23Z
alertmanagerconfigs.monitoring.coreos.com   2022-03-27T00:55:51Z
...
clustertasks.tekton.dev                     2022-03-29T07:23:16Z
conditions.tekton.dev                       2022-03-29T07:23:16Z
...
pipelineresources.tekton.dev                2022-03-29T07:23:16Z
pipelineruns.tekton.dev                     2022-03-29T07:23:16Z
pipelines.tekton.dev                        2022-03-29T07:23:16Z
...
runs.tekton.dev                             2022-03-29T07:23:16Z
...
taskruns.tekton.dev                         2022-03-29T07:23:16Z
tasks.tekton.dev                            2022-03-29T07:23:16Z
tektonaddons.operator.tekton.dev            2022-03-29T06:17:22Z
tektonconfigs.operator.tekton.dev           2022-03-29T06:17:22Z
tektoninstallersets.operator.tekton.dev     2022-03-29T06:17:22Z
tektonpipelines.operator.tekton.dev         2022-03-29T06:17:22Z
tektontriggers.operator.tekton.dev          2022-03-29T06:17:22Z
...

(Output truncated; only the Tekton-related CRDs are shown in full.)
Let’s get familiar with some basic Tekton terminology before creating a Tekton CI/CD pipeline.
Workspace
- is a directory used by Tasks in a Tekton pipeline
- ConfigMaps can be mounted in a workspace
- Secrets can be mounted in a workspace
- PersistentVolumeClaims can be mounted in a workspace
Task
- is the most basic unit that can be deployed and executed independently in Tekton
- each Task creates a separate Pod
- has one or more Steps
- each Step creates a separate container within the Task Pod
- tasks can be executed independently via TaskRun
- tasks can be shared by one or more Pipelines
- tasks can take parameters
- tasks use one or more Workspace(s) to retrieve inputs and store outputs
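To illustrate these points, here is a minimal, self-contained Task (the name, image, and logic are purely illustrative, not part of the pipeline built later in this post) with two Steps, each of which runs as a separate container inside the Task’s single Pod:

```yaml
# Illustrative Task: one Pod, two Steps (i.e. two containers),
# a parameter, and a workspace shared between the Steps.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: greet                        # hypothetical Task name
spec:
  params:
    - name: who
      type: string
      default: world
  workspaces:
    - name: scratch                  # directory shared by the Steps
  steps:
    - name: write-message
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "Hello $(params.who)" > $(workspaces.scratch.path)/msg.txt
    - name: print-message
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        cat $(workspaces.scratch.path)/msg.txt
```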
TaskRun
- represents a session of Task execution
- each time a Task is executed, it creates a separate TaskRun
- holds the logs that were created during the execution
- supplies parameter values required by a Task
- specifies what should be mounted into the workspace
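For example, a TaskRun (with hypothetical names) that executes a Task named `greet`, supplies its parameter value, and backs its workspace with an emptyDir volume could look like this:

```yaml
# Illustrative TaskRun: supplies a parameter value and mounts an
# emptyDir volume into the Task's workspace. Names are hypothetical.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: greet-run-    # each execution gets its own TaskRun object
spec:
  taskRef:
    name: greet               # hypothetical Task to execute
  params:
    - name: who
      value: Tekton
  workspaces:
    - name: scratch
      emptyDir: {}
```

Because `generateName` is used instead of `name`, you create it with `oc create -f` (not `oc apply`), and every invocation yields a fresh TaskRun whose logs you can inspect independently.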
Pipeline
- is a chain of Tasks that can be executed in sequence, in parallel, or in any combination of the two, as your requirements dictate
- each time the Pipeline is executed, it creates a separate PipelineRun
- represents a CI/CD pipeline
PipelineRun
- represents one session of Pipeline execution
- supplies the parameter values and workspace configurations to the Pipeline
- holds the logs that were created during the execution
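To illustrate how sequential and parallel execution are expressed (all task and pipeline names below are hypothetical), `runAfter` controls ordering, and tasks that share the same `runAfter` entry run in parallel:

```yaml
# Illustrative Pipeline ordering: build runs first, then unit-test
# and lint run in parallel, and deploy waits for both to finish.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ordering-demo              # hypothetical Pipeline
spec:
  tasks:
    - name: build
      taskRef:
        name: build-task           # hypothetical Task
    - name: unit-test
      runAfter: [build]
      taskRef:
        name: test-task            # hypothetical Task
    - name: lint
      runAfter: [build]            # runs in parallel with unit-test
      taskRef:
        name: lint-task            # hypothetical Task
    - name: deploy
      runAfter: [unit-test, lint]  # waits for both parallel branches
      taskRef:
        name: deploy-task          # hypothetical Task
```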
A Tekton Pipeline can be created via the CLI, the Tekton Pipeline Dashboard, or the OpenShift web console.
Let’s create a Tekton CI/CD pipeline via a manifest file.
java-tekton-cicd-pipeline.yml
# Reserve 500 MB of disk space as a Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tektutor-tekton-pv
spec:
  capacity:
    storage: 500Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: "192.168.1.80"
    path: "/mnt/nfs_share"
---
# Request 500 MB of disk space from one of the Persistent Volumes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tektutor-tekton-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
---
# Create a pipeline
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: java-cicd-pipeline
spec:
  tasks:
    - name: clone-git-repo
      params:
        - name: url
          value: 'https://github.com/tektutor/spring-ms.git'
        # This is the git branch that you wish to clone
        - name: revision
          value: master
      taskRef:
        kind: Task
        name: git-clone
      workspaces:
        - name: output
          workspace: source-code
    # Task that compiles your maven project
    - name: compile
      params:
        - name: GOALS
          value:
            - -Dmaven.repo.local=$(workspaces.maven-settings.path)
            - compile
      runAfter:
        - clone-git-repo
      taskRef:
        kind: Task
        name: maven
      workspaces:
        - name: source
          workspace: source-code
        - name: maven-settings
          workspace: maven-repo
    # Task that performs unit-test on your maven project
    - name: test
      params:
        - name: GOALS
          value:
            - -Dmaven.repo.local=$(workspaces.maven-settings.path)
            - test
      runAfter:
        - compile
      taskRef:
        kind: Task
        name: maven
      workspaces:
        - name: source
          workspace: source-code
        - name: maven-settings
          workspace: maven-repo
    # Task that packages your unit-tested maven project
    - name: package
      params:
        - name: GOALS
          value:
            - -Dmaven.repo.local=$(workspaces.maven-settings.path)
            - package
      runAfter:
        - test
      taskRef:
        kind: Task
        name: maven
      workspaces:
        - name: source
          workspace: source-code
        - name: maven-settings
          workspace: maven-repo
    # Task that installs the packaged binaries into your local maven repository
    - name: install
      params:
        - name: GOALS
          value:
            - -Dmaven.repo.local=$(workspaces.maven-settings.path)
            - install
      runAfter:
        - package
      taskRef:
        kind: Task
        name: maven
      workspaces:
        - name: source
          workspace: source-code
        - name: maven-settings
          workspace: maven-repo
  workspaces:
    - name: source-code
    - name: maven-repo
---
# PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: java-cicd-pipeline-run
spec:
  pipelineRef:
    name: java-cicd-pipeline
  serviceAccountName: default
  workspaces:
    - name: source-code
      persistentVolumeClaim:
        claimName: tektutor-tekton-pvc
      subPath: source
    - name: maven-repo
      persistentVolumeClaim:
        claimName: tektutor-tekton-pvc
      subPath: m2
We need to install the following tasks from Tekton Hub (https://hub.tekton.dev/):
tkn hub install task git-clone
tkn hub install task maven
In case the above commands didn’t work for you, you may install the tasks as shown below.
oc apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.5/git-clone.yaml
oc apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/maven/0.2/maven.yaml
You may now create the pipeline as shown below:
oc new-project tektutor
oc create -f java-tekton-cicd-pipeline.yml
The expected output is:
(jegan@tektutor.org)$ oc create -f java-tekton-cicd-pipeline.yml
persistentvolume/tektutor-tekton-pv created
persistentvolumeclaim/tektutor-tekton-pvc created
pipeline.tekton.dev/java-tekton-cicd-pipeline created
pipelinerun.tekton.dev/java-tekton-cicd-pipeline-run-d56fg created
You can also view the pipeline in the OpenShift web console.
You can follow the pipeline run logs from the CLI as shown below:
tkn pipelinerun logs -f --last
The pipeline looks much more impressive in the Red Hat OpenShift web console.
Hope you found this article useful.