Continuous Integration (CI) FAQs
Can I use Harness CI for mobile app development?
Yes. Harness CI offers many options for mobile app development.
I have a macOS build. Do I have to use Homebrew as the installer?
No. Your build infrastructure can be configured to use whichever tools you like. For example, Harness Cloud build infrastructure includes pre-installed versions of Xcode and other tools, and you can install other tools or versions of tools that you prefer to use. For more information, go to the CI macOS and iOS development guide.
Build infrastructure
What is build infrastructure and why do I need it for Harness CI?
A build stage's infrastructure definition, the build infrastructure, defines "where" your stage runs. It can be a Kubernetes cluster, a VM, or even your own local machine. While individual steps can run in their own containers, your stage itself requires a build infrastructure to define a common workspace for the entire stage. For more information about build infrastructure and CI pipeline components go to:
What kind of build infrastructure can I use? Which operating systems are supported?
For supported operating systems, architectures, and cloud providers, go to Which build infrastructure is right for me.
Can I use multiple build infrastructures in one pipeline?
Yes, each stage can have a different build infrastructure. Additionally, depending on your stage's build infrastructure, you can also run individual steps on containers rather than the host. This flexibility allows you to choose the most suitable infrastructure for each part of your CI pipeline.
Local runner build infrastructure
Can I run builds locally? Can I run builds directly on my computer?
Yes. For instructions, go to Set up a local runner build infrastructure.
How do I check the runner status for a local runner build infrastructure?
To confirm that the runner is running, send a cURL request such as curl http://localhost:3000/healthz.
If the runner is running, you should get a valid response, such as:
{
"version": "0.1.2",
"docker_installed": true,
"git_installed": true,
"lite_engine_log": "no log file",
"ok": true
}
How do I check the delegate status for a local runner build infrastructure?
The delegate should connect to your instance after you finish the installation workflow above. If the delegate does not connect after a few minutes, run the following commands to check the status:
docker ps
docker logs --follow <docker-delegate-container-id>
The container ID should be the container with image name harness/delegate:latest.
Successful setup is indicated by a message such as Finished downloading delegate jar version 1.0.77221-000 in 168 seconds.
Runner can't find an available, non-overlapping IPv4 address pool.
The following runner error can occur during stage setup (the Initialize step in build logs):
Could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network.
This error means the number of Docker networks has exceeded the limit. To resolve this, you need to clean up unused Docker networks. To get a list of existing networks, run docker network ls, and then remove unused networks with docker network rm or docker network prune.
Docker daemon fails with invalid working directory path on Windows local runner build infrastructure
The following error can occur in Windows local runner build infrastructures:
Error response from daemon: the working directory 'C:\harness-DIRECTORY_ID' is invalid, it needs to be an absolute path
This error indicates there may be a problem with the Docker installation on the host machine.
- Run the following command (or a similar command) to check if the same error occurs:
  docker run -w C:\blah -it -d mcr.microsoft.com/windows/servercore:ltsc2022
- If you get the working directory is invalid error again, uninstall Docker and follow the instructions in the Windows documentation to Prepare Windows OS containers for Windows Server.
- Restart the host machine.
How do I check if the Docker daemon is running in a local runner build infrastructure?
To check if the Docker daemon is running, use the docker info command. An error response indicates the daemon is not running. For more information, go to the Docker documentation on Troubleshooting the Docker daemon.
Runner process quits after terminating SSH connection for local runner build infrastructure
If you launch the Harness Docker Runner binary within an SSH session, the runner process can quit when you terminate the SSH session.
To avoid this with macOS runners, use this command when you start the runner binary:
./harness-docker-runner-darwin-amd64 server >log.txt 2>&1 &
disown
For Linux runners, you can use a tool such as nohup when you start the runner, for example:
nohup ./harness-docker-runner-linux-amd64 server >log.txt 2>&1 &
Self-hosted VM build infrastructures
Can I use the same build VM for multiple CI stages?
No. The build VM terminates at the end of the stage and a new VM is used for the next stage.
Why are build VMs running when there are no active builds?
With self-hosted VM build infrastructure, the pool value in your pool.yml specifies the number of "warm" VMs. These VMs are kept in a ready state so they can pick up build requests immediately.
If there are no warm VMs available, the runner can launch additional VMs up to the limit in your pool.yml.
If you don't want any VMs to sit in a ready state, set your pool to 0. Note that having no ready VMs can increase build time.
For AWS VMs, you can set hibernate in your pool.yml to hibernate warm VMs when there are no active builds. For more information, go to Configure the Drone pool on the AWS VM.
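For reference, here is a trimmed pool.yml sketch, modeled on the Windows pool example later in this FAQ, that highlights where pool, limit, and hibernate sit. The AMI, region, and instance size are placeholders:
version: "1"
instances:
  - name: linux-ci-pool
    default: true
    type: amazon
    pool: 2      ## Number of warm VMs kept in a ready state.
    limit: 10    ## Maximum number of VMs the runner can create for this pool.
    platform:
      os: linux
    spec:
      account:
        region: us-east-2
      ami: ami-XXXXXXXXXXXXXX
      size: t3.large
      hibernate: true   ## AWS only: hibernate warm VMs when there are no active builds.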
Do I need to install Docker on the VM that runs the Harness Delegate and Runner?
Yes. Docker is required for self-hosted VM build infrastructure.
AWS build VM creation fails with no default VPC
When you run the pipeline, if VM creation in the runner fails with the error no default VPC, then you need to set subnet_id in pool.yml.
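For illustration, assuming your runner version accepts a network section in the pool spec (as with the security_groups setting shown later in this FAQ), the subnet might be declared like this; confirm the exact field placement against the AWS VM build infrastructure documentation:
    spec:
      account:
        region: us-east-2
      network:
        subnet_id: subnet-XXXXXXXXXXXXXX   ## Subnet in which to launch build VMs when there is no default VPC.
        security_groups:
          - sg-XXXXXXXXXXXXXX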
AWS VM builds stuck at the initialize step on health check
If your CI build gets stuck at the initialize step on the health check for connectivity with lite engine, either lite engine is not running on your build VMs or there is a connectivity issue between the runner and lite engine.
- Verify that lite-engine is running on your build VMs.
  - SSH/RDP into a VM from your VM pool that is in a running state.
  - Check whether the lite-engine process is running on the VM.
  - Check the cloud init output logs to debug issues related to startup of the lite engine process. The lite engine process starts at VM startup through a cloud init script.
- If lite-engine is running, verify that the runner can communicate with lite-engine from the delegate VM.
  - Run nc -vz <build-vm-ip> 9079 from the runner.
  - If the status is not successful, make sure the security group settings in runner/pool.yml are correct, and make sure your security group setup in AWS allows the runner to communicate with the build VMs.
  - Make sure there are no firewall or anti-malware restrictions on your AMI that are interfering with the cloud init script's ability to download necessary dependencies. For details about these dependencies, go to Set up an AWS VM Build Infrastructure - Start the runner.
AWS VM delegate connected but builds fail
If the delegate is connected but your AWS VM builds are failing, check the following:
- Make sure the AMIs specified in pool.yml are still available.
  - Amazon reprovisions their AMIs every two months.
  - For a Windows pool, search for an AMI called Microsoft Windows Server 2019 Base with Containers and update ami in pool.yml.
- Confirm your security group setup and security group settings in runner/pool.yml.
Use internal or custom AMIs with self-hosted AWS VM build infrastructure
If you are using an internal or custom AMI, make sure it has Docker installed.
Additionally, make sure there are no firewall or anti-malware restrictions interfering with initialization, as described in CI builds stuck at the initialize step on health check.
Where can I find logs for self-hosted AWS VM lite engine and cloud init output?
- Linux
  - Lite engine logs: /var/log/lite-engine.log
  - Cloud init output logs: /var/log/cloud-init-output.log
- Windows
  - Lite engine logs: C:\Program Files\lite-engine\log.out
  - Cloud init output logs: C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log
Harness Cloud
What is Harness Cloud?
Harness Cloud lets you run builds on Harness-hosted runners that are preconfigured with tools, packages, and settings commonly used in CI pipelines. It is one of several build infrastructure options offered by Harness. For more information, go to Which build infrastructure is right for me.
How do I use Harness Cloud build infrastructure?
Configuring your pipeline to use Harness Cloud takes just a few minutes. Make sure you meet the requirements for connectors and secrets, then follow the quick steps to use Harness Cloud.
Account verification error with Harness Cloud on Free plan
Recently, Harness has been the victim of several cryptocurrency-mining attacks that use our Harness-hosted build infrastructure (Harness Cloud) to mine cryptocurrencies. Harness Cloud is available to accounts on the Free tier of Harness CI. Unfortunately, to protect our infrastructure, Harness now limits the use of the Harness Cloud build infrastructure to business domains and blocks general-use domains, like Gmail, Hotmail, Yahoo, and other unverified domains.
To address these issues, you can do one of the following:
- Use the local runner build infrastructure option, or upgrade to a paid plan to use the self-hosted VM or Kubernetes cluster build infrastructure options. There are no limitations on builds using your own infrastructure.
- Create a Harness account with your work email and not a generic email address, like a Gmail address.
What is the Harness Cloud build credit limit for the Free plan?
The Free plan allows 2,000 build minutes per month. For more information, go to Harness Cloud billing and build credits.
Can I use Xcode for a macOS build with Harness Cloud?
Yes. Harness Cloud macOS runners include several versions of Xcode as well as Homebrew. For details, go to the Harness Cloud image specifications. You can also install additional tools at runtime.
Can I use my own secrets manager with Harness Cloud build infrastructure?
No. To use Harness Cloud build infrastructure, you must use the built-in Harness secrets manager.
Connector errors with Harness Cloud build infrastructure
To use Harness Cloud build infrastructure, all connectors used in the stage must connect through the Harness Platform. This means that:
- GCP connectors can't inherit credentials from the delegate. They must be configured to connect through the Harness Platform.
- Azure connectors can't inherit credentials from the delegate. They must be configured to connect through the Harness Platform.
- AWS connectors can't use IRSA, AssumeRole, or delegate connectivity mode. They must connect through the Harness Platform with access key authentication.
For more information, go to Use Harness Cloud build infrastructure - Requirements for connectors and secrets.
To change the connector's connectivity mode:
- Go to the Connectors page at the account, organization, or project scope. For example, to edit account-level connectors, go to Account Settings, select Account Resources, and then select Connectors.
- Select the connector that you want to edit.
- Select Edit Details.
- Select Continue until you reach Select Connectivity Mode.
- Select Change and select Connect through Harness Platform.
- Select Save and Continue and select Finish.
Built-in Harness Docker Connector doesn't work with Harness Cloud build infrastructure
Depending on when your account was created, the built-in Harness Docker Connector (account.harnessImage) might be configured to connect through a Harness Delegate instead of the Harness Platform. In this case, attempting to use this connector with Harness Cloud build infrastructure generates the following error:
While using hosted infrastructure, all connectors should be configured to go via the Harness platform instead of via the delegate. \
Please update the connectors: [harnessImage] to connect via the Harness platform instead. \
This can be done by editing the connector and updating the connectivity to go via the Harness platform.
To resolve this error, you can either modify the Harness Docker Connector or use another Docker connector that you have already configured to connect through the Harness Platform.
To change the connector's connectivity settings:
- Go to Account Settings and select Account Resources.
- Select Connectors and select the Harness Docker Connector (ID: harnessImage).
- Select Edit Details.
- Select Continue until you reach Select Connectivity Mode.
- Select Change and select Connect through Harness Platform.
- Select Save and Continue and select Finish.
Can I change the CPU/memory allocation for steps running on Harness Cloud?
Unlike with other build infrastructures, you can't change the CPU/memory allocation for steps running on Harness Cloud. Step containers running on Harness Cloud build VMs automatically use as much CPU/memory as required, up to the available resource limit of the build VM.
Does gsutil work with Harness Cloud?
No, gsutil is deprecated. You should use gcloud-equivalent commands instead, such as gcloud storage cp instead of gsutil cp.
However, neither gsutil nor gcloud are recommended with Harness Cloud build infrastructure. Harness Cloud sources build VMs from a variety of cloud providers, and it is impossible to predict which specific cloud provider hosts the Harness Cloud VM that your build uses for any single execution. Therefore, avoid using tools (such as gsutil or gcloud) that require a specific cloud provider's environment.
Can't use STO steps with Harness Cloud macOS runners
Currently, STO scan steps aren't compatible with Harness Cloud macOS runners, because Apple's M1 CPU doesn't support nested virtualization. You can use STO scan steps with Harness Cloud Linux and Windows runners.
How do I configure OIDC with GCP WIF for Harness Cloud builds?
Go to Configure OIDC with GCP WIF for Harness Cloud builds.
When a build runs on Harness Cloud, which delegate is used? Is it a delegate running in my local infrastructure, or a delegate hosted in Harness?
When the build runs on Harness Cloud, a delegate hosted in Harness Cloud is used.
Do I need to keep a delegate running in my local infrastructure if I'm only running builds on Harness Cloud?
No. You don't need a delegate running in your local infrastructure if you are only running builds on Harness Cloud.
Can I run CD steps/stages on Harness Cloud in the same way that I can run CI steps/stages on Harness Cloud?
Running CD steps/stages on Harness Cloud is not currently supported.
Kubernetes clusters
What is the difference between a Kubernetes cluster build infrastructure and other build infrastructures?
For a comparison of build infrastructures go to Which build infrastructure is right for me.
For requirements, recommendations, and settings for using a Kubernetes cluster build infrastructure, go to:
- Set up a Kubernetes cluster build infrastructure
- Build and push artifacts and images - Kubernetes cluster build infrastructures require root access
- CI Build stage settings - Infrastructure - Kubernetes tab
Can I run Docker commands on a Kubernetes cluster build infrastructure?
If you want to run Docker commands when using a Kubernetes cluster build infrastructure, Docker-in-Docker (DinD) with privileged mode is required. For instructions, go to Run DinD in a Build stage.
If your cluster doesn't support privileged mode, you must use a different build infrastructure option, such as Harness Cloud, where you can run Docker commands directly on the host without the need for Privileged mode. For more information, go to Set up a Kubernetes cluster build infrastructure - Privileged mode is required for Docker-in-Docker.
Can I use Istio MTLS STRICT mode with Harness CI?
Yes, but you must create a headless service for Istio MTLS STRICT mode.
How can you execute Docker commands in a CI pipeline that runs on a Kubernetes cluster that lacks a Docker runtime?
You can run Docker-in-Docker (DinD) as a service with sharedPaths set to /var/run. Subsequent steps in the stage can then run Docker commands. This works regardless of the Kubernetes container runtime.
The DinD service does not connect to the Kubernetes node daemon. It launches a new Docker daemon on the pod, and then other containers use that Docker daemon to run their commands.
For details, go to Run Docker-in-Docker in a Build stage.
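For reference, here is a minimal stage sketch that runs DinD as a Background step and shares /var/run so a later Run step can use the Docker CLI. The connector IDs, images, and build command are placeholders, and the stage's infrastructure settings are omitted:
- stage:
    name: build
    identifier: build
    type: CI
    spec:
      cloneCodebase: true
      sharedPaths:
        - /var/run
      execution:
        steps:
          - step:
              type: Background
              name: dind
              identifier: dind
              spec:
                connectorRef: YOUR_DOCKER_CONNECTOR_ID
                image: docker:dind
                shell: Sh
                privileged: true
          - step:
              type: Run
              name: docker_build
              identifier: docker_build
              spec:
                connectorRef: YOUR_DOCKER_CONNECTOR_ID
                image: docker:latest
                shell: Sh
                command: |-
                  # Wait for the DinD daemon to come up before running Docker commands.
                  while ! docker ps; do
                    echo "Waiting for the Docker daemon..."
                    sleep 2
                  done
                  docker build -t my-image:latest .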
Resource allocation for Kubernetes cluster build infrastructure
You can adjust CPU and memory allocation for individual steps running on a Kubernetes cluster build infrastructure or container. For information about how resource allocation is calculated, go to Resource allocation.
What is the default CPU and memory limit for a step container?
For default resource request and limit values, go to Build pod resource allocation.
Why do steps request less memory and CPU than the maximum limit? Why do step containers request fewer resources than the limit I set in the step settings?
By default, resource requests are always set to the minimum, and additional resources (up to the specified maximum limit) are requested only as needed during build execution. For more information, go to Build pod resource allocation.
How do I configure the build pod to communicate with the Kubernetes API server?
By default, the namespace's default service account is auto-mounted on the build pod, through which it can communicate with the API server. To use a non-default service account, specify the Service Account Name in the Kubernetes cluster build infrastructure settings.
Do I have to mount a service account on the build pod?
No. Mounting a service account isn't required if the pod doesn't need to communicate with the Kubernetes API server during pipeline execution. To disable service account mounting, deselect Automount Service Account Token in the Kubernetes cluster build infrastructure settings.
What types of volumes can be mounted on a CI build pod?
You can mount many types of volumes, such as empty directories, host paths, and persistent volumes, onto the build pod. Use the Volumes in the Kubernetes cluster build infrastructure settings to do this.
How can I run the build pod on a specific node?
Use the Node Selector setting to do this.
Is it possible to configure tolerations at the project, org, or account level?
Tolerations in a Kubernetes cluster build infrastructure can only be set at the stage level.
I want to use an EKS build infrastructure with an AWS connector that uses IRSA
You need to set the Service Account Name in the Kubernetes cluster build infrastructure settings.
If you get error checking push permissions or similar, go to the Build and Push to ECR error article.
Why are build pods being evicted?
Harness CI pods shouldn't be evicted due to autoscaling of Kubernetes nodes because Kubernetes doesn't evict pods that aren't backed by a controller object. However, build pods can be evicted due to CPU or memory issues in the pod or using spot instances as worker nodes.
If you notice either sporadic pod evictions or failures in the Initialize step in your Build logs, add the following Annotation to your Kubernetes cluster build infrastructure settings:
"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
AKS builds timeout
Azure Kubernetes Service (AKS) security group restrictions can cause builds running on an AKS build infrastructure to timeout.
If you have a custom network security group, it must allow inbound traffic on port 8080, which the delegate service uses.
For more information, refer to the following Microsoft Azure troubleshooting documentation: A custom network security group blocks traffic.
How do I set the priority class level? Can I prioritize my build pod if there are resource shortages on the host node?
Use the Priority Class setting to ensure that the build pod is prioritized in cases of resource shortages on the host node.
What's the default priority class level?
If you leave the Priority Class field blank, the PriorityClass is set to the globalDefault, if your infrastructure has one defined, or to 0, which is the lowest priority.
Can I transfer files into my build pod?
To do this, use a script in a Run step.
How are step containers named within the build pod?
Step containers are named sequentially, starting with step-1.
When I run a build, Harness creates a new pod and doesn't run the build on the delegate
This is the expected behavior. When you run a Build (CI) stage, each step runs on a new build farm pod that isn't connected to the delegate.
What permissions are required to run CI builds in an OpenShift cluster?
For information about building on OpenShift clusters, go to Permissions Required and OpenShift Support in the Kubernetes Cluster Connector Settings Reference.
What are the minimum permissions required for the service account role for a Kubernetes Cluster connector?
For information about permissions required to build on Kubernetes clusters, go to Permissions Required in the Kubernetes Cluster Connector Settings Reference.
How does the build pod communicate with the delegate? What port does the lite-engine listen on?
The delegate communicates with the temporary pod created by the container step through the build pod's IP address. Build pods run a lite engine that listens on port 20001.
Experiencing OOM on java heap for the delegate
Check CPU utilization and try increasing the CPU request and limit amounts.
Your Java options must use UseContainerSupport instead of UseCGroupMemoryLimitForHeap, which was removed in JDK 11.
I have multiple delegates in multiple instances. How can I ensure the same instance is used for each step?
Use single replica delegates for tasks that require the same instance, and use a delegate selector by delegate name. The tradeoff is that you might have to compromise on your delegates' high availability.
Delegate is not able to connect to the created build farm
If you get this error when using a Kubernetes cluster build infrastructure, and you have confirmed that the delegate is installed in the same cluster where the build is running, you may need to allow port 20001 in your network policy to allow pod-to-pod communication.
If the delegate is not able to connect to the created build farm with Istio MTLS STRICT mode, and you are seeing that the pod is removed after a few seconds, you might need to add the Istio ProxyConfig setting "holdApplicationUntilProxyStarts": true. This setting delays application start until the pod is ready to accept traffic so that the delegate doesn't attempt to connect before the pod is ready.
For more delegate and Kubernetes troubleshooting guidance, go to Troubleshooting Harness.
If my pipeline consists of multiple CI stages, are all the steps across different stages executed within the same build pod?
No. Each CI stage execution triggers the creation of a new build pod. The steps within a stage are then carried out within the stage's dedicated pod. If your pipeline has multiple CI stages, distinct build pods are generated for each individual stage.
When does the cleanup of build pods occur? Does it happen after the entire pipeline execution is finished?
Build pod cleanup takes place immediately after the completion of a stage's execution. This is true even if there are multiple CI stages in the same pipeline; as each build stage ends, the pod for that stage is cleaned up.
Is the build pod cleaned up in the event of a failed stage execution?
Yes, the build pod is cleaned up after stage execution, regardless of whether the stage succeeds or fails.
How do I know if the pod cleanup task fails?
To help identify pods that aren't cleaned up after a build, pod deletion logs include details such as the cluster endpoint targeted for deletion. If a pod can't be located for cleanup, then the logs include the pod identifier, namespace, and API endpoint response from the pod deletion API. You can find logs in the Build details.
How does the Istio proxy config "holdApplicationUntilProxyStarts" delay the application start?
When you set holdApplicationUntilProxyStarts to true, the sidecar injector injects the sidecar at the start of the pod's container list and configures it to block the start of all other containers until the proxy is ready.
How do I configure the Istio proxy config "holdApplicationUntilProxyStarts" for a pod?
Add it as a pod annotation: proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
Why does a DinD build fail with an OOM error even after increasing the memory of the Run step that runs the docker build command?
When you run the docker build command in a DinD setup, the build executes in the DinD container. To fix OOM errors during the build, increase the memory of the DinD Background step rather than the Run step.
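For example, on a Kubernetes cluster build infrastructure, a sketch of raising the DinD Background step's limits might look like the following; the connector ID and resource values are illustrative:
- step:
    type: Background
    name: dind
    identifier: dind
    spec:
      connectorRef: YOUR_DOCKER_CONNECTOR_ID
      image: docker:dind
      shell: Sh
      privileged: true
      resources:
        limits:
          memory: 4Gi   ## Increase this, rather than the Run step's memory, to fix docker build OOM errors.
          cpu: "2"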
Self-signed certificates
Can I mount internal CA certs on the CI build pod?
Yes. With a Kubernetes cluster build infrastructure, you can make the certs available to the delegate pod, and then set DESTINATION_CA_PATH. For DESTINATION_CA_PATH, provide a list of paths in the build pod where you want the certs to be mounted, and mount your certificate files to opt/harness-delegate/ca-bundle. For more information, go to Configure a Kubernetes build farm to use self-signed certificates.
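As a rough sketch of what this can look like in a Kubernetes delegate manifest: the secret name, certificate paths, and the comma-separated DESTINATION_CA_PATH format below are assumptions, so confirm them against Configure a Kubernetes build farm to use self-signed certificates:
# Under the delegate container spec:
env:
  - name: DESTINATION_CA_PATH
    value: "/etc/ssl/certs/ca-certificates.crt,/kaniko/ssl/certs/additional-ca-cert-bundle.crt"
volumeMounts:
  - name: custom-certs
    mountPath: /opt/harness-delegate/ca-bundle   # Mount your certificate files here.
    readOnly: true
# Under the delegate pod spec:
volumes:
  - name: custom-certs
    secret:
      secretName: addcerts   # Kubernetes secret containing your internal CA certs (placeholder name).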
Can I use self-signed certs with local runner build infrastructure?
With a local runner build infrastructure, you can use CI_MOUNT_VOLUMES to use self-signed certificates. For more information, go to Set up a local runner build infrastructure.
How do I make internal CA certs available to the delegate pod?
There are multiple ways you can do this:
- Build the delegate image with the certs baked into it, if you are custom building the delegate image.
- Create a secret/configmap with the certs data, and then mount it on the delegate pod.
- Run commands in the INIT_SCRIPT to download the certs while the delegate launches and make them available to the delegate pod file system.
Where should I mount internal CA certs on the build pod?
The required mount location depends on the container images used for the steps you intend to run on the build pod; each base image expects certificates at its own default location.
Git connector SCM connection errors when using self-signed certificates
If you have configured your build infrastructure to use self-signed certificates, your builds may fail when the code repo connector attempts to connect to the SCM service. Build logs may contain the following error messages:
Connectivity Error while communicating with the scm service
Unable to connect to Git Provider, error while connecting to scm service
To resolve this issue, add SCM_SKIP_SSL=true to the environment section of the delegate YAML. For example, here is the environment section of a docker-compose.yml file with the SCM_SKIP_SSL variable:
environment:
- ACCOUNT_ID=XXXX
- DELEGATE_TOKEN=XXXX
- MANAGER_HOST_AND_PORT=https://app.harness.io
- LOG_STREAMING_SERVICE_URL=https://app.harness.io/log-service/
- DEPLOY_MODE=KUBERNETES
- DELEGATE_NAME=test
- NEXT_GEN=true
- DELEGATE_TYPE=DOCKER
- SCM_SKIP_SSL=true
For more information about self-signed certificates, delegates, and delegate environment variables, go to:
- Delegate environment variables
- Docker delegate environment variables
- Install delegates
- Set up a local runner build infrastructure
- Configure a Kubernetes build farm to use self-signed certificates
Certificate volumes aren't mounted to the build pod
If the volumes are not getting mounted to the build containers, or you see other certificate errors in your pipeline, try the following:
- Add a Run step that prints the contents of the destination path. For example, you can include a command such as:
  cat /kaniko/ssl/certs/additional-ca-cert-bundle.crt
- Double-check that the base image used in the step reads certificates from the same path given in the destination path on the delegate.
Windows builds
Error when running Docker commands on Windows build servers
Make sure that the build server has the Windows Subsystem for Linux installed. This error can occur if the container can't start on the build system.
Docker commands aren't supported for Windows builds on Kubernetes cluster build infrastructures.
Is rootless configuration supported for builds on Windows-based build infrastructures?
No, currently this is not supported for Windows builds.
What is the default user set on the Windows Lite-Engine and Addon image? Can I change it?
The default user for these images is ContainerAdministrator. For more information, go to Run Windows builds in a Kubernetes build infrastructure - Default user for Windows builds.
Can I use custom cache paths on a Windows platform with Cache Intelligence?
Yes, you can use custom cache paths with Cache Intelligence on Windows platforms.
How do I specify the disk size for a Windows instance in pool.yml?
With self-hosted VM build infrastructure, the disk configuration in your pool.yml specifies the disk size (in GB) and type.
For example, here is a Windows pool configuration for an AWS VM build infrastructure:
version: "1"
instances:
- name: windows-ci-pool
default: true
type: amazon
pool: 1
limit: 4
platform:
os: windows
spec:
account:
region: us-east-2
availability_zone: us-east-2c
access_key_id:
access_key_secret:
key_pair_name: XXXXX
ami: ami-088d5094c0da312c0
size: t3.large ## VM machine size.
hibernate: true
network:
security_groups:
- sg-XXXXXXXXXXXXXX
disk:
size: 100 ## Disk size in GB.
type: "pd-balanced"
Step continues running for a long time after the command is complete
In Windows builds, if the primary command in a PowerShell script starts a long-running subprocess, the step continues to run for as long as the subprocess runs (or until it reaches the step timeout limit). To resolve this:
- Check if your command launches a subprocess.
- If it does, check whether the process is exiting, and how long it runs before exiting.
- If the run time is unacceptable, you might need to add commands to sleep or force exit the subprocess.
Example: Subprocess with two-minute life
Here's a sample pipeline that includes a PowerShell script that starts a subprocess. The subprocess runs for no more than two minutes.
pipeline:
identifier: subprocess_demo
name: subprocess_demo
projectIdentifier: default
orgIdentifier: default
tags: {}
stages:
- stage:
identifier: BUild
type: CI
name: Build
spec:
cloneCodebase: true
execution:
steps:
- step:
identifier: Run_1
type: Run
name: Run_1
spec:
connectorRef: YOUR_DOCKER_CONNECTOR_ID
image: jtapsgroup/javafx-njs:latest
shell: Powershell
command: |-
cd folder
gradle --version
Start-Process -NoNewWindow -FilePath "powershell" -ArgumentList "Start-Sleep -Seconds 120"
Write-Host "Done!"
resources:
limits:
memory: 3Gi
cpu: "1"
infrastructure:
type: KubernetesDirect
spec:
connectorRef: YOUR_KUBERNETES_CLUSTER_CONNECTOR_ID
namespace: YOUR_KUBERNETES_NAMESPACE
initTimeout: 900s
automountServiceAccountToken: true
nodeSelector:
kubernetes.io/os: windows
os: Windows
caching:
enabled: false
paths: []
properties:
ci:
codebase:
connectorRef: YOUR_CODEBASE_CONNECTOR_ID
build:
type: branch
spec:
branch: main
Default user, root access, and run as non-root
Which user does Harness use to run steps like Git Clone, Run, and so on? What is the default user ID for step containers?
Harness uses user 1000 by default. You can use a step's Run as User setting to use a different user for a specific step.
Can I enable root access for a single step?
If your build runs as non-root (meaning you have set runAsNonRoot: true in your build infrastructure settings), you can run a specific step as root by setting Run as User to 0 in the step's settings. This uses the root user for that specific step while preserving the non-root user configuration for the rest of the build. This setting is not available for all build infrastructures.
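For example, here is a minimal Run step sketch with Run as User set to 0 on a Kubernetes cluster build infrastructure; the connector, image, and command are illustrative:
- step:
    type: Run
    name: run_as_root
    identifier: run_as_root
    spec:
      connectorRef: YOUR_DOCKER_CONNECTOR_ID
      image: alpine:latest
      shell: Sh
      command: whoami   ## Prints "root" because this step overrides the stage's non-root default.
      runAsUser: "0"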
When I try to run as non-root, the build fails with "container has runAsNonRoot and image has non-numeric user (harness), cannot verify user is non-root"
This error occurs if you enable Run as Non-Root without configuring the default user ID in Run as User. For more information, go to CI Build stage settings - Run as non-root or a specific user.
Codebases
What is a codebase in a Harness pipeline?
The codebase is the Git repository where your code is stored. Pipelines usually have one primary or default codebase. If you need files from multiple repos, you can clone additional repos.
How do I connect my code repo to my Harness pipeline?
For instructions on configuring your pipeline's codebase, go to Configure codebase.
What permissions are required for GitHub Personal Access Tokens in Harness GitHub connectors?
For information about configuring GitHub connectors, including required permissions for personal access tokens, go to the GitHub connector settings reference.
Can I skip the built-in clone codebase step in my CI pipeline?
Yes, you can disable the built-in clone codebase step for any Build stage. For instructions, go to Disable Clone Codebase for specific stages.
Can I configure a failure strategy for a built-in clone codebase step?
No, you can't configure a failure strategy for the built-in clone codebase step. If you have concerns about clone failures, you can disable Clone Codebase, and then add a Git Clone step with a step failure strategy at the beginning of each stage where you need to clone your codebase.
Can I recursively clone a repo?
Currently, the built-in clone codebase step doesn't support recursive cloning. However, you can disable Clone Codebase, and then add a Run step with git clone --recursive. This is similar to the process you would follow to clone a subdirectory instead of the entire repo.
If you want to recursively clone a repo in addition to your default codebase, you can pull the Git credentials from your code repo connector to use in your Run step.
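For example, here is a sketch of a Run step that recursively clones an additional repo using a token stored as a Harness secret; the repo URL and secret name are placeholders:
- step:
    type: Run
    name: clone_additional_repo
    identifier: clone_additional_repo
    spec:
      shell: Sh
      command: |-
        # Clone an additional repo, including its submodules, alongside the default codebase.
        git clone --recursive https://<+secrets.getValue("YOUR_TOKEN_SECRET")>@github.com/YOUR_ORG/YOUR_REPO.git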
Can I clone a specific subdirectory rather than an entire repo?
Yes. For instructions, go to Clone a subdirectory.
Does the built-in clone codebase step fetch all branches? How can I fetch all branches?
The built-in clone codebase step fetches only one or two branches from the repo, depending on the build type (tag, PR, or branch).
If you need to clone all branches in a repo, you can execute the necessary git commands in a Run step.
Can I clone the default codebase to a different folder than the root?
The built-in clone codebase step always clones your repo to the root of the workspace, /harness. If you need to clone elsewhere, you can disable Clone Codebase and use a Git Clone or Run step to clone your codebase to a specific subdirectory.
What is the default clone depth setting for CI builds?
The built-in clone codebase step has the following depth settings:
- For manual builds, the default depth is 50.
- For webhook or cron triggers, the default depth is 0.
For more information, go to Configure codebase - Depth.
Can I change the depth of the built-in clone codebase step?
Yes. Use the Depth setting to do this.
How can I reduce clone codebase time?
There are several strategies you can use to improve codebase clone time:
- Depending on your build infrastructure, you can set Limit Memory to 1Gi in your codebase configuration.
- For builds triggered by PRs, set the Pull Request Clone Strategy to Source Branch and set Depth to 1.
- If you don't need the entire repo contents for your build, you can disable the built-in clone codebase step and use a Run step to execute specific git clone arguments, such as to clone a subdirectory.
What codebase environment variables are available to use in triggers, commands, output variables, or otherwise?
For a list of <+codebase.*> and similar expressions you can use in your build triggers and otherwise, go to the CI codebase variables reference.
What expression can I use to get the repository name and the project/organization name for a trigger?
You can use the expressions <+eventPayload.repository.name> or <+trigger.payload.repository.name> to reference the repository name from the incoming trigger payload.
If you want both the repo and project name, and your Git provider's webhook payload doesn't include a single payload value with both names, you can concatenate two expressions together, such as <+trigger.payload.repository.project.key>/<+trigger.payload.repository.name>.
The expression eventPayload.repository.name causes the clone step to fail when used with a Bitbucket account connector.
Try using the expression <+trigger.payload.repository.name> instead.
Codebase expressions aren't resolved or resolve to null.
Empty or null values primarily occur due to the build type (tag, branch, or PR) and start conditions (manual or automated trigger). For example, <+codebase.branch> is always null for tag builds, and <+trigger.*> expressions are always null for manual builds.
Other possible causes for null values are that the connector doesn't have API access enabled in the connector's settings or that your pipeline doesn't use the built-in clone codebase step.
For more information about when codebase expressions are resolved, go to CI codebase variables reference.
How can I share the codebase configuration between stages in a CI pipeline?
The pipeline's default codebase is automatically available to each subsequent Build stage in the pipeline. When you add additional Build stages to a pipeline, Clone Codebase is enabled by default, which means the stage clones the default codebase declared in the first Build stage.
If you don't want a stage to clone the default codebase, you can disable Clone Codebase for specific stages.
The same Git commit is not used in all stages
If your pipeline has multiple stages, each stage that has Clone Codebase enabled clones the codebase during stage initialization. If your pipeline uses the generic Git connector and a commit is made to the codebase after a pipeline run has started, it is possible for later stages to clone the newer commit, rather than the same commit that the pipeline started with.
If you want to force all stages to use the same commit ID, even if there are changes in the repository while the pipeline is running, you must use a code repo connector for a specific SCM provider, rather than the generic Git connector.
Git fetch fails with invalid index-pack output when cloning large repos
- Error: During the Initialize step, when cloning the default codebase, git fetch throws fetch-pack: invalid index-pack output.
- Cause: This can occur with large code repos and indicates that the build machine might have insufficient resources to clone the repo.
- Solution: To resolve this, edit the pipeline's YAML and allocate memory and cpu resources in the codebase configuration. For example:
properties:
ci:
codebase:
connectorRef: YOUR_CODEBASE_CONNECTOR_ID
repoName: YOUR_CODE_REPO_NAME
build:
type: branch
spec:
branch: <+input>
sslVerify: false
resources:
limits:
memory: 4G ## Set the maximum memory to use. You can express memory as a plain integer or as a fixed-point number using the suffixes `G` or `M`. You can also use the power-of-two equivalents `Gi` and `Mi`. The default is `500Mi`.
cpu: "2" ## Set the maximum number of cores to use. CPU limits are measured in CPU units. Fractional requests are allowed; for example, you can specify one hundred millicpu as `0.1` or `100m`.
Clone codebase fails due to missing plugin
- Error: Git clone fails during stage initialization, and the runner's logs contain Error response from daemon: plugin "<plugin>" not found.
- Platform: This error can occur in build infrastructures that use a Harness Docker Runner, such as the local runner build infrastructure or the VM build infrastructures.
- Cause: A required plugin is missing from your build infrastructure container's Docker installation. The plugin is required to configure Docker networks.
- Solution:
  - On the machine where the runner is running, stop the runner.
  - Set the NETWORK_DRIVER environment variable to your preferred network driver plugin, such as export NETWORK_DRIVER="nat" or export NETWORK_DRIVER="bridge". For Windows, use PowerShell variable syntax, such as $Env:NETWORK_DRIVER="nat" or $Env:NETWORK_DRIVER="bridge".
  - Restart the runner.
How do I configure the Git Clone step? What is the Clone Directory setting?
For details about Git Clone step settings, including Clone Directory, go to the Git Clone step documentation.
Does Harness CI support Git Large File Storage (git-lfs)?
Yes. For more information, go to the Harness CI documentation on Git Large File Storage.
Can I run git commands in a CI Run step?
Yes. You can run any commands in a Run step. With respect to Git, for example, you can use a Run step to clone multiple code repos in one pipeline, clone a subdirectory, or use Git LFS.
How do I handle authentication for git commands in a Run step?
You can store authentication credentials as secrets and use expressions, such as <+secrets.getValue("YOUR_TOKEN_SECRET")>, to call them in your git commands.
You could also pull credentials from a git connector used elsewhere in the pipeline.
Can I use codebase variables when cloning a codebase in a Run step?
No. Codebase variables are resolved only for the built-in Clone Codebase functionality. These variables are not resolved for git commands in Run steps or Git Clone steps.
Git connector fails to connect to the SCM service. SCM request fails with UNKNOWN
This error may occur if your code repo connector uses SSH authentication. To resolve this error, make sure HTTPS is enabled on port 443. This is the protocol and port used by the Harness connection test for Git connectors.
Also, SCM service connection failures can occur when using self-signed certificates.
How can I see which files I have cloned in the codebase?
You can add a Run step to the beginning of your Build stage that runs ls -ltr. This returns all content cloned by the Clone Codebase step.
SCM status updates and PR checks
Does Harness support pull request status updates?
Yes. Your PRs can use the build status as a PR status check. For more information, go to SCM status checks.
How do I configure my pipelines to send PR build validations?
For instructions, go to SCM status checks - Pipeline links in PRs.
What connector does Harness use to send build status updates to PRs?
Harness uses the pipeline's codebase connector, specified in the pipeline's default codebase configuration, to send status updates to PRs in your Git provider.
Can I use the Git Clone step, instead of the built-in clone codebase step, to get build statuses on my PRs?
No. You must use the built-in clone codebase step (meaning, you must configure a default codebase) to get pipeline links in PRs.
Pipeline status updates aren't sent to PRs
Harness uses the pipeline's codebase connector to send status updates to PRs in your Git provider. If status updates aren't being sent, make sure that you have configured a default codebase and that it is using the correct code repo connector. Also make sure the build that ran was a PR build and not a branch or tag build.
Build statuses don't show on my PRs, even though the codebase connector's token has all repo permissions.
If the user account used to generate the token doesn't have repository write permissions, the resulting token won't have sufficient permissions to post the build status update to the PR. Specific permissions vary by connector. For example, GitHub connector credentials require that personal access tokens have all repo, user, and admin:repo_hook scopes, and the user account used to generate the token must have admin permissions on the repo.
For repos under organizations or projects, check the role/permissions assigned to the user in the target repository. For example, a user in a GitHub organization can have some permissions at the organization level, but they might not have those permissions at the individual repository level.
Can I export a failed step's output to a pull request comment?
To do this, you could:
- Modify the failed step's command to save output to a file, such as your_command 2>&1 | tee output_file.log.
- After the failed step, add a Run step that reads the file's content and uses your Git provider's API to export the file's contents to a pull request comment, as shown in the sketch after this list.
- Configure the subsequent step's conditional execution settings to Always execute this step.
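For example, assuming a GitHub repo, the follow-up Run step could post the saved output through the GitHub REST API. The org, repo, token secret, and the availability of jq in the step image are assumptions:
- step:
    type: Run
    name: post_pr_comment
    identifier: post_pr_comment
    when:
      stageStatus: All   ## Run this step even if a previous step failed.
    spec:
      shell: Sh
      command: |-
        # Trim the saved output and post it as a comment on the PR that triggered this build.
        BODY=$(head -c 1000 output_file.log)
        jq -n --arg body "$BODY" '{body: $body}' | curl -s -X POST \
          -H "Authorization: Bearer <+secrets.getValue("YOUR_GITHUB_TOKEN")>" \
          -H "Accept: application/vnd.github+json" \
          --data @- \
          "https://api.github.com/repos/YOUR_ORG/YOUR_REPO/issues/<+codebase.prNumber>/comments"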
Does my pipeline have to have a Build stage to get the build status on the PR?
Yes, the build status is updated on a PR only if a Build (CI) stage runs.
My pipeline has multiple Build stages. Is the build status updated for each stage or for the entire pipeline?
The build status on the PR is updated for each individual Build stage.
My pipeline has multiple Build stages, and I disabled Clone Codebase for some of them. Why is the PR status being updated for the stages that don't clone my codebase?
Currently, Harness CI updates the build status on a PR even if you disabled Clone Codebase for a specific build stage. We are investigating enhancements that could change this behavior.
Is there any character limit for the PR build status message?
Yes. For GitHub, the limit is 140 characters. If the message is too long, the request fails with description is too long (maximum is 140 characters).
What identifiers are included in the PR build status message?
The pipeline identifier and stage identifier are included in the build status message.
What is the format of the content in the PR build status message?
The PR build status message format is PIPELINE_IDENTIFIER-STAGE_IDENTIFIER — Execution status of Pipeline - PIPELINE_IDENTIFIER (EXECUTION_ID) Stage - STAGE_IDENTIFIER was STATUS
I don't want to send build statuses to my PRs.
Because the build status updates operate through the default codebase connector, the easiest way to prevent sending PR status updates would be to disable Clone Codebase for all Build stages in your pipeline, and then use a Git Clone or Run step to clone your codebase.
You could try modifying the permissions of the code repo connector's token so that it can't write to the repo, but this could interfere with other connector functionality.
Removing API access from the connector is not recommended because API access is required for other connector functions, such as cloning the codebase.
Why was the PR build status not updated for an Approval stage?
Build status updates occur for Build stages only.
Failed pipelines don't block PR merges
Although Harness can send pipeline statuses to your PRs, you must configure branch protection rules and other checks in your SCM provider.
Troubleshoot Git event (webhook) triggers
For troubleshooting information for Git event (webhook) triggers, go to Troubleshoot Git event triggers.
Pipeline initialization and Harness CI images
Initialize step fails with a "Null value" error
This can occur if an expression or variable is called before its value is resolved. In Build (CI) stages, steps run in separate containers/build pods, and you can only use expressions after they are resolved. For example, if a step that runs before a repeat loop completes uses an expression referencing an output variable from a step inside that repeat loop, then the expression's value isn't available to the step that requested it.
Initialize step occasionally times out at 8 minutes
Eight minutes is the default timeout limit for the Initialize step. If your build is hitting the timeout limit due to resource constraints, such as pulling large images, you can increase the Init Timeout in the stage Infrastructure settings.
Problems cloning code repo during initialization.
For codebase issues, go to Codebases.
When a pipeline pulls artifacts or images, are they stored on the delegate?
Artifacts and images are pulled into the stage workspace, which is a temporary volume that exists during stage execution. Images are not stored on the delegate during pipeline execution. In a Kubernetes cluster build infrastructure, build stages run on build pods that are cleaned automatically after the execution.
Can I get a list of internal Harness-specific images that CI uses?
For information about the backend/Harness-specific images that Harness CI uses to execute builds, including how to get a list of images and tags that your builds use, go to Harness CI images.
How often are Harness CI images updated?
Harness publishes updates for all CI images on the second and fourth Monday of each month. For more information, go to Harness CI images - Harness CI image updates.
How do I get a list of tags available for an image in the Harness image registry?
To list all available tags for an image in app.harness.io/registry, call the following endpoint and replace IMAGE_NAME with the name of the image you want to query.
https://app.harness.io/registry/harness/IMAGE_NAME/tags/list
What access does Harness use to pull the Harness internal images from the public image repo?
By default, Harness uses anonymous access to pull Harness images.
If you have security concerns about using anonymous access or pulling Harness-specific images from a public repo, you can change how your builds connect to the Harness container image registry.
Can I use my own private registry to store Harness CI images?
Yes, you can pull Harness CI images from a private registry.
Build failed with "failed to pull image" or "ErrImagePull"
- Error messages: ErrImagePull, or some variation of the following, which may have a different image name, tag, or registry: Failed to pull image "artifactory.domain.com/harness/ci-addon:1.16.22": rpc error: code = Unknown desc = Error response from daemon: unknown: Not Found.
- Causes:
  - Harness couldn't pull an image that is needed to run the pipeline.
  - ErrImagePull can be caused by networking issues or if the specified image doesn't exist in the specified repository.
  - Failed to pull image - Not Found means that a Harness-specific image or tag, in this case ci-addon:1.16.22, isn't present in the specified artifact repository, and you are using the account.harnessImage connector to pull Harness images. You can use this connector to pull from your own registry or to pull images from any Docker registry, but it is also used to pull Harness-required CI images. Modifying this connector can cause it to fail to pull the necessary Harness CI images.
- Solutions:
  - If you modified the built-in Harness Docker connector, check the connector's configuration to make sure it uses one of the compatible methods for pulling Harness-required images, as described in Connect to the Harness container image registry.
  - If you are trying to pull images from your own registry, check your configuration for pulling Harness images from a private registry. You might need to use a different connector than the built-in Harness Docker connector.
  - If you modified tags for some images, check that your configuration uses valid tags that are present in the repository from which Harness is attempting to pull the tags.
  - If the issue seems to be networking-related, try again later if you think the issue is transient, or check your connector or network configuration to make sure Harness is able to connect to the given registry.
What pipeline environment variables are there for CI pipelines?
Go to CI environment variables reference.
Docker Hub rate limiting
By default, Harness uses anonymous Docker access to pull Harness-required images. If you experience rate limiting issues when pulling images, try the solutions described in Harness CI images - Docker Hub rate limiting.
Build and push images
Where does a pipeline get code for a build?
The codebase declared in the first stage of a pipeline becomes the pipeline's default codebase. If your build requires files from multiple repos, you can clone additional repos.
How do I use a Harness CI pipeline to build and push artifacts and images?
You can use Build and Push steps or Run steps. For information about this go to Build and push artifacts and images.
I need to get the Maven project version from pom.xml and pass it as a Docker build argument
To do this, you can:
- Use a Run step to get the version and assign it to a variable. For example, you could use a command like:
  version=$(cat pom.xml | grep -oP '(?<=<version>)[^<]+')
- Specify this variable as an output variable from the Run step.
- Use an expression to reference the output variable in your build arguments, such as in the Build and Push to Docker step's Build Arguments or docker build commands executed in a Run step. See the sketch after this list.
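Putting those pieces together, here is a sketch; the connector, repo, and expression path follow the documented output-variable pattern, but adjust the identifiers to match your pipeline:
- step:
    type: Run
    name: get_version
    identifier: get_version
    spec:
      shell: Sh
      command: |-
        # Extract the first <version> value from pom.xml.
        version=$(cat pom.xml | grep -oP '(?<=<version>)[^<]+' | head -1)
        echo "Maven project version: $version"
      outputVariables:
        - name: version
- step:
    type: BuildAndPushDockerRegistry
    name: build_and_push
    identifier: build_and_push
    spec:
      connectorRef: YOUR_DOCKER_CONNECTOR_ID
      repo: YOUR_ORG/YOUR_IMAGE
      tags:
        - <+execution.steps.get_version.output.outputVariables.version>
      buildArgs:
        VERSION: <+execution.steps.get_version.output.outputVariables.version>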
Where do I store Maven project settings.xml in Harness CI?
For information about this, go to Maven settings.xml.
How do I publish maven artifacts to AWS CodeArtifact?
Typically, this is configured within your Maven settings.xml file to publish artifacts upon build, as explained in the AWS documentation on Use CodeArtifact with mvn.
However, if you're not publishing directly via Maven, you can push directly using the AWS CLI or cURL, as explained in the AWS documentation on Publishing with curl.
How do I enable the Gradle daemon in builds?
To enable the Gradle daemon in your Harness CI builds, include the --daemon option when running Gradle commands in your build scripts (such as in Run steps or in build arguments for a Build and Push step). This option instructs Gradle to use the daemon process.
Optionally, you can use Background steps to optimize daemon performance.
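For example, a minimal Run step sketch; the connector, image, and Gradle wrapper command are placeholders:
- step:
    type: Run
    name: gradle_build
    identifier: gradle_build
    spec:
      connectorRef: YOUR_DOCKER_CONNECTOR_ID
      image: gradle:jdk17
      shell: Sh
      command: ./gradlew build --daemon   ## --daemon tells Gradle to use the daemon process.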
Out of memory errors with Gradle
If a Gradle build experiences out of memory errors, add the following to your gradle.properties file:
-XX:+UnlockExperimentalVMOptions -XX:+UseContainerSupport
Your Java options must use UseContainerSupport instead of UseCGroupMemoryLimitForHeap, which was removed in JDK 11.
Can I push without building?
Harness CI provides several options to upload artifacts. The Upload Artifact steps don't include a "build" component.
Can I build without pushing?
You can build without pushing.
What drives the Build and Push steps? What is kaniko?
With Kubernetes cluster build infrastructures, Build and Push steps use kaniko. Other build infrastructures use drone-docker. kaniko requires root access to build the Docker image.
For more information, go to:
- Build and push artifacts and images - Kubernetes clusters require root access
- Harness CI images - Images list
Can I set kaniko and drone-docker runtime flags, such as skip-tls-verify or custom-dns?
Yes, you can set plugin runtime flags on any Build and Push step.
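One way to set these, sketched below, is as PLUGIN_-prefixed stage variables; the flag names and values are illustrative, and you should confirm the exact mechanism and supported flags in the plugin runtime flags documentation:
- stage:
    name: Build
    identifier: Build
    type: CI
    variables:
      - name: PLUGIN_SKIP_TLS_VERIFY
        type: String
        value: "true"
      - name: PLUGIN_CUSTOM_DNS
        type: String
        value: "8.8.8.8"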
Can I run Build and Push steps as non-root? Does kaniko support non-root users?
With a Kubernetes cluster build infrastructure, Build and Push steps use the kaniko plugin. kaniko requires root access to build Docker images, and it does not support non-root users. However, you can use the buildah plugin to build and push with non-root users.
Can I run Build and Push steps as root if my build infrastructure runs as non-root?
If your build infrastructure is configured to run as a non-root user (meaning you have set runAsNonRoot: true), you can run a specific step as root by setting Run as User to 0 in the step's settings. This uses the root user for that specific step while preserving the non-root user configuration for the rest of the build. This setting is not available for all build infrastructures.