A `PipelineRun` allows you to instantiate and execute a `Pipeline` on-cluster. A `Pipeline` specifies one or more `Tasks` in the desired order of execution. A `PipelineRun` executes the `Tasks` in the `Pipeline` in the order they are specified until all `Tasks` have executed successfully or a failure occurs.
Note: A `PipelineRun` automatically creates corresponding `TaskRuns` for every `Task` in your `Pipeline`.
The `status` field tracks the current state of a `PipelineRun` and can be used to monitor progress. This field contains the status of every `TaskRun`, as well as the full `PipelineSpec` used to instantiate this `PipelineRun`, for full auditability.
A `PipelineRun` definition supports the following fields:
- Required:
  - `apiVersion` - Specifies the API version. For example, `tekton.dev/v1beta1`.
  - `kind` - Indicates that this resource object is a `PipelineRun` object.
  - `metadata` - Specifies the metadata that uniquely identifies the `PipelineRun` object. For example, a `name`.
  - `spec` - Specifies the configuration information for this `PipelineRun` object.
    - `pipelineRef` or `pipelineSpec` - Specifies the target `Pipeline`.
- Optional:
  - `resources` - Specifies the `PipelineResources` to provision for executing the target `Pipeline`.
  - `params` - Specifies the desired execution parameters for the `Pipeline`.
  - `serviceAccountName` - Specifies a `ServiceAccount` object that supplies specific execution credentials for the `Pipeline`.
  - `serviceAccountNames` - Maps specific `serviceAccountName` values to `Tasks` in the `Pipeline`. This overrides the credentials set for the entire `Pipeline`.
  - `taskRunSpecs` - Specifies a list of `PipelineTaskRunSpec` entries that allow setting a `ServiceAccountName` and a `Pod` template for each `Task`. This overrides the `Pod` template set for the entire `Pipeline`.
  - `timeout` - Specifies the timeout before the `PipelineRun` fails.
  - `podTemplate` - Specifies a `Pod` template to use as the basis for the configuration of the `Pod` that executes each `Task`.
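Putting the required fields together, a minimal `PipelineRun` might look like the following sketch (the `Pipeline` and `PipelineRun` names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline
```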
You must specify the target `Pipeline` that you want the `PipelineRun` to execute, either by referencing an existing `Pipeline` definition or by embedding a `Pipeline` definition directly in the `PipelineRun`.

To specify the target `Pipeline` by reference, use the `pipelineRef` field:
spec:
pipelineRef:
name: mypipeline
To embed a `Pipeline` definition in the `PipelineRun`, use the `pipelineSpec` field:
spec:
pipelineSpec:
tasks:
- name: task1
taskRef:
name: mytask
The `Pipeline` in the `pipelineSpec` example displays morning and evening greetings. Once you create and execute it, you can check the logs for its `Pods`:
kubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-morning)
Good Morning, Bob!
kubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-night)
Good Night, Bob!
You can also embed a `Task` definition in the embedded `Pipeline` definition:
spec:
pipelineSpec:
tasks:
- name: task1
taskSpec:
steps:
...
In the `taskSpec` in `pipelineSpec` example, it's `Tasks` all the way down!
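As a fuller sketch of this pattern, the following embeds a complete `Task` inline; the step contents are an illustrative assumption, not the exact greetings example referenced above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-echo-greetings
spec:
  pipelineSpec:
    tasks:
      - name: echo-good-morning
        taskSpec:
          steps:
            - name: echo
              image: ubuntu
              script: |
                echo "Good Morning, Bob!"
```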
A `Pipeline` requires `PipelineResources` to provide inputs and store outputs for the `Tasks` that comprise it. You must provision those resources in the `resources` field in the `spec` section of the `PipelineRun` definition.
A `Pipeline` may require you to provision a number of different resources. For example:

- When executing a `Pipeline` against a pull request, the triggering system must specify the commit-ish of a `git` resource.
- When executing a `Pipeline` manually against your own environment, you must provision your GitHub fork using the `git` resource; your image registry using the `image` resource; and your Kubernetes cluster using the `cluster` resource.
You can reference a `PipelineResource` using the `resourceRef` field:
spec:
resources:
- name: source-repo
resourceRef:
name: skaffold-git
- name: web-image
resourceRef:
name: skaffold-image-leeroy-web
- name: app-image
resourceRef:
name: skaffold-image-leeroy-app
You can also embed a `PipelineResource` definition in the `PipelineRun` using the `resourceSpec` field:
spec:
resources:
- name: source-repo
resourceSpec:
type: git
params:
- name: revision
value: v0.32.0
- name: url
value: https://github.com/GoogleContainerTools/skaffold
- name: web-image
resourceSpec:
type: image
params:
- name: url
value: gcr.io/christiewilson-catfactory/leeroy-web
- name: app-image
resourceSpec:
type: image
params:
- name: url
value: gcr.io/christiewilson-catfactory/leeroy-app
Note: All `persistentVolumeClaims` specified within a `PipelineRun` are bound until their respective `Pods` or the entire `PipelineRun` are deleted. This also applies to all `persistentVolumeClaims` generated internally.
You can specify `Parameters` that you want to pass to the `Pipeline` during execution, including different values of the same parameter for different `Tasks` in the `Pipeline`. For example:
spec:
params:
- name: pl-param-x
value: "100"
- name: pl-param-y
value: "500"
You can execute the `Pipeline` in your `PipelineRun` with a specific set of credentials by specifying a `ServiceAccount` object name in the `serviceAccountName` field in your `PipelineRun` definition. If you do not explicitly specify this, the `TaskRuns` created by your `PipelineRun` will execute with the credentials specified in the `configmap-defaults` `ConfigMap`. If this default is not specified, the `TaskRuns` will execute with the default service account set for the target namespace.

For more information, see `ServiceAccount`.
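For example, the following sketch runs every `TaskRun` in the `PipelineRun` under a single `ServiceAccount` (the account and `Pipeline` names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline
  serviceAccountName: build-bot
```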
If you require more granularity in specifying execution credentials, use the `serviceAccountNames` field to map a specific `serviceAccountName` value to a specific `Task` in the `Pipeline`. This overrides the global `serviceAccountName` you may have set for the `Pipeline` as described in the previous section.

For example, if you specify these mappings:
spec:
serviceAccountName: sa-1
serviceAccountNames:
- taskName: build-task
serviceAccountName: sa-for-build
for this `Pipeline`:
kind: Pipeline
spec:
tasks:
- name: build-task
taskRef:
name: build-push
- name: test-task
taskRef:
name: test
then `test-task` will execute using the `sa-1` account while `build-task` will execute with `sa-for-build`.
You can specify a `Pod` template configuration that will serve as the configuration starting point for the `Pod` in which the container images specified in your `Tasks` will execute. This allows you to customize the `Pod` configuration specifically for each `TaskRun`.
In the following example, the `Task` defines a `volumeMount` object named `my-cache`. The `PipelineRun` provisions this object for the `Task` using a `persistentVolumeClaim` and executes it as user 1001.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: mytask
spec:
steps:
- name: writesomething
image: ubuntu
command: ["bash", "-c"]
args: ["echo 'foo' > /my-cache/bar"]
volumeMounts:
- name: my-cache
mountPath: /my-cache
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: mypipeline
spec:
tasks:
- name: task1
taskRef:
name: mytask
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: mypipelinerun
spec:
pipelineRef:
name: mypipeline
podTemplate:
securityContext:
runAsNonRoot: true
runAsUser: 1001
volumes:
- name: my-cache
persistentVolumeClaim:
claimName: my-volume-claim
The `taskRunSpecs` field specifies a list of `PipelineTaskRunSpec` entries, each of which contains a `PipelineTaskName`, a `TaskServiceAccountName`, and a `TaskPodTemplate`. Each entry is mapped to the corresponding `Task` in the `Pipeline` based on its name; that `Task` then runs with the configured `TaskServiceAccountName` and `TaskPodTemplate`, overriding the pipeline-wide `ServiceAccountName` and `podTemplate` configuration. For example:
spec:
podTemplate:
securityContext:
runAsUser: 1000
runAsGroup: 2000
fsGroup: 3000
taskRunSpecs:
- pipelineTaskName: build-task
taskServiceAccountName: sa-for-build
taskPodTemplate:
nodeSelector:
disktype: ssd
If used with this `Pipeline`, `build-task` will use the task-specific `PodTemplate` (where `nodeSelector` has `disktype` equal to `ssd`).
If your `Pipeline` specifies one or more `Workspaces`, you must map those `Workspaces` to the corresponding physical volumes in your `PipelineRun` definition. For example, you can map a `PersistentVolumeClaim` volume to a `Workspace` as follows:
workspaces:
- name: myworkspace # must match workspace name in Task
persistentVolumeClaim:
claimName: mypvc # this PVC must already exist
subPath: my-subdir
For more information, see the following topics:

- For information on mapping `Workspaces` to `Volumes`, see Specifying `Workspaces` in `PipelineRuns`.
- For a list of supported `Volume` types, see Specifying `VolumeSources` in `Workspaces`.
- For an end-to-end example, see `Workspaces` in a `PipelineRun`.
In order to consume only the bare minimum amount of resources needed to execute one `Step` at a time from the invoked `Task`, Tekton requests only the maximum values for CPU, memory, and ephemeral storage from within each `Step`. This is sufficient because `Steps` execute only one at a time in the `Pod`. Requests other than the maximum values are set to zero.
When a `LimitRange` parameter is present in the namespace in which `PipelineRuns` are executing and minimum values are specified for container resource requests, Tekton searches through all `LimitRange` values present in the namespace and uses the minimums instead of 0. For more information, see the `LimitRange` code example.
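As a sketch of the kind of `LimitRange` such a namespace might define (the name and minimum values are illustrative assumptions):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
    - type: Container
      min:
        cpu: 100m
        memory: 99Mi
```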
You can use the `timeout` field to set the `PipelineRun's` desired timeout value. If you do not specify this value in the `PipelineRun`, the global default timeout value applies. If you set the timeout to 0, the `PipelineRun` fails immediately upon encountering an error.

The global default timeout is set to 60 minutes when you first install Tekton. You can set a different global default timeout value using the `default-timeout-minutes` field in `config/config-defaults.yaml`.
The `timeout` value is a duration conforming to Go's `ParseDuration` format. For example, valid values are `1h30m`, `1h`, `1m`, and `60s`. If you set the global timeout to 0, all `PipelineRuns` that do not have an individual timeout set will fail immediately upon encountering an error.
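For example, the following sketch fails the `PipelineRun` if it runs longer than 90 minutes (the `Pipeline` name is illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline
  timeout: "1h30m"
```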
To cancel a `PipelineRun` that's currently executing, update its definition to mark it as cancelled. When you do so, the spawned `TaskRuns` are also marked as cancelled and all associated `Pods` are deleted. For example:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "PipelineRunCancelled"