GitLab Multibranch Pipeline Setup on Jenkins Kubernetes with Shared Libraries (Step-by-Step Guide)

Introduction

In this blog post, we’ll walk through how to configure a GitLab multibranch pipeline with shared libraries on Jenkins running on Kubernetes. We assume that Jenkins is already deployed on Kubernetes and connected to GitLab. We will cover step-by-step instructions to set up the pipeline and integrate shared libraries.

Step 1: Configure Jenkins Cloud and Kaniko Pod Template

First, ensure that Jenkins is configured with the Kubernetes plugin. In the Jenkins dashboard, go to **Manage Jenkins > Clouds**. Add a new Kubernetes cloud configuration if one is not already present. Set the Jenkins URL. When Jenkins runs inside the same Kubernetes cluster, this is usually the in-cluster address of the ClusterIP service created when you installed Jenkins.
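For example, if Jenkins was installed into a `jenkins` namespace with a ClusterIP service also named `jenkins` listening on port 8080 (hypothetical values; adjust to your installation), the in-cluster URL follows the standard Kubernetes service DNS scheme:

```shell
# Hypothetical service/namespace/port -- adjust to your installation.
JENKINS_SVC=jenkins
JENKINS_NS=jenkins
JENKINS_PORT=8080

# Kubernetes service DNS: <service>.<namespace>.svc.cluster.local
echo "http://${JENKINS_SVC}.${JENKINS_NS}.svc.cluster.local:${JENKINS_PORT}"
```

You can confirm the actual service name and port with `kubectl get svc -n <your-jenkins-namespace>`.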



Next, define a Pod Template for Kaniko. This pod template will be used to build Docker images inside Kubernetes without requiring Docker-in-Docker. Include containers for Kaniko and any required build tools. Ensure service account and permissions are correctly set for image pushing to your container registry.

In **Manage Jenkins > Clouds > Pod Templates**, add a new pod template and paste the following YAML into the Raw YAML field (change the secret name accordingly).

```yaml
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.23.2-debug
    imagePullPolicy: Always
    command:
    - sleep
    args:
    - 99d
    volumeMounts:
    - name: cache
      mountPath: /images
    - name: jenkins-docker-cfg
      mountPath: /kaniko/.docker
  volumes:
  - name: cache
    persistentVolumeClaim:
      claimName: image-cache
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: ugd-registry
          items:
          - key: .dockerconfigjson
            path: config.json
```
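The pod template above assumes two cluster-side resources already exist: a PersistentVolumeClaim named `image-cache` for Kaniko's layer cache and a registry secret named `ugd-registry`. A minimal sketch of both (the storage size is an assumption; the names must match the pod template):

```yaml
# Hypothetical manifests matching the names used in the pod template above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-cache
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: ugd-registry
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config containing your registry credentials
  .dockerconfigjson: <base64-encoded docker config.json>
```

Alternatively, the secret can be created directly with `kubectl create secret docker-registry ugd-registry --docker-server=... --docker-username=... --docker-password=...`.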

Step 2: Create and Configure a Multibranch Jenkins Pipeline

Navigate to the Jenkins dashboard and select **New Item**. Choose **Multibranch Pipeline** and provide a name.

In the pipeline configuration:

– Add your GitLab source under **Branch Sources**.

– Select your appropriate GitLab credentials.

– Add the owner and namespace of your project (Jenkins should automatically detect it)

– Configure the discovery of branches and merge requests as needed.

– Optionally, configure build triggers to automatically detect changes from GitLab webhooks.

– Add your shared library repository with the correct credentials under **Pipeline Libraries**. You can fork this repo and use the library.

– You can now import and use shared library functions inside your `Jenkinsfile` with the `@Library` annotation, using the name you provided.

Step 3: Define Jenkinsfile with Shared Libraries

Inside your GitLab repository, create a `Jenkinsfile` at the root. For this example, we will use the function `K8SContinuousIntegrationPipeline` defined in the previous repo (https://github.com/taxrakoto/shared_librairies). Here's our final Jenkinsfile, assuming `my_pipeline` is the name you chose when adding the library to your pipeline configuration.

```groovy
@Library('my_pipeline') _

K8SContinuousIntegrationPipeline {
    BUILD_CONTEXT = '.'
    DOCKERFILE = 'Dockerfile'
    REGISTRY_URL = 'registry.gitlab.com/my_user/my_project'
    IMAGE_TAG = 'staging'
}
```

Do not forget the trailing underscore (`_`) on the first line; it is required for the `@Library` annotation to take effect. You can change the location and name of your Dockerfile through the `BUILD_CONTEXT` and `DOCKERFILE` fields.
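For readers curious how a step like this works internally: shared-library steps that accept a configuration closure, as `K8SContinuousIntegrationPipeline` does, commonly rely on Groovy's closure-delegate pattern. Below is a hypothetical sketch of what `vars/K8SContinuousIntegrationPipeline.groovy` might look like; the actual implementation lives in the linked repository and may differ.

```groovy
// vars/K8SContinuousIntegrationPipeline.groovy -- hypothetical sketch,
// not the actual implementation from the linked repository.
def call(Closure body) {
    // Collect the fields assigned in the Jenkinsfile closure
    // (BUILD_CONTEXT, DOCKERFILE, REGISTRY_URL, IMAGE_TAG) into a map.
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    // Run the build on the Kaniko pod template defined in Step 1.
    podTemplate(label: 'kaniko') {
        node('kaniko') {
            stage('Checkout') {
                checkout scm
            }
            stage('Build & Push') {
                container('kaniko') {
                    sh """
                        /kaniko/executor \
                          --context=${config.BUILD_CONTEXT} \
                          --dockerfile=${config.DOCKERFILE} \
                          --destination=${config.REGISTRY_URL}:${config.IMAGE_TAG}
                    """
                }
            }
        }
    }
}
```

The delegate pattern is what lets the Jenkinsfile pass configuration as simple `KEY = value` assignments inside the closure body.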

Step 4: Trigger and Verify Builds

Push your changes to GitLab. Jenkins will automatically scan the repository, detect branches and merge requests, and trigger builds according to your configuration. Monitor the build logs from the Jenkins UI to ensure that the Kaniko build executes successfully and that the shared library is being used as expected.

Conclusion

With Jenkins running on Kubernetes, integrated with GitLab, and enhanced by shared libraries, you now have a scalable CI pipeline setup. The multibranch pipeline automatically manages multiple branches and merge requests, while shared libraries help you maintain clean and reusable pipeline code. You can extend this setup further with additional stages like security scanning, artifact storage, or deployment automation.  For the deployment part (CD), we will use ArgoCD and Helm, but that will be part of another blog.
