Optimizing Jenkins for Scalability & Performance : My Personal Insights

Hello! Today, I'll be sharing some insights into optimizing Jenkins for scalability and resilience.

Jenkins has been an indispensable tool in my DevOps toolkit, but like any powerful tool, it does require fine-tuning to ensure it runs smoothly and efficiently.

Here’s a detailed look at the steps I’ve taken to make Jenkins work better for me, and how you can do the same :) 💡

Performance Optimization

1 --> Minimize Build History

I'd recommend limiting the number of builds retained in the file system.

Why?

While configuring Jenkins jobs, you should specify how many builds you'd like to retain and for how long.

  • The more builds retained, the greater the disk-storage overhead

  • It's also a potential source of performance degradation (Jenkins has to load and display a huge list of retained builds — something rarely needed.)

  • And you don't want really long loading times for job pages and dashboards

Solution:- 👇

I'd suggest enabling the "Discard Old Builds" feature 📌
It lets you specify how many builds to keep, and for how long.

I'll add the code snippet for this configuration:-

pipeline {
    agent any
    options {
        // Discard old builds to manage storage and performance
        buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '30'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
                // Add your build steps here
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
                // Add your test steps here
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'
                // Add your deployment steps here
            }
        }
    }
}

2 --> Define a Heap Size

Set an appropriate heap size for Jenkins.

The heap is basically the amount of memory allocated to store objects created by the application.

So, if we're looking at improving Jenkins' stability and resilience, it's worthwhile to concentrate our efforts on optimising this.

Why?

A proper heap size = efficient memory utilisation + no OutOfMemory errors.

  • Efficient memory utilisation

    --> Jenkins must have enough heap memory to perform its operations and store objects without excessive garbage collection. (I'll talk about this in detail subsequently)

  • Preventing OutOfMemory errors

    --> We're averting potential crashes or unpredictable behaviour that might occur with long-running jobs.

    A decent, sufficient heap size means Jenkins won't go out of memory.
    (Something that can happen far too frequently with insufficient memory)

But how do we determine the heap size needed? 🤔

How to determine the heap size?

Step 1 -->

I'd suggest making use of tools like top, htop and the Jenkins Monitoring plugin.
That'll help identify typical memory usage patterns.

[ --> Any peaks in usage / memory consumed during builds, tests, deployments would hint at the memory requirements.] 👍

Step 2 -->

Enable + analyse GC logs --> They reveal the garbage collector's behaviour, so you can alter the heap size accordingly. Snippet to enable GC logging:-

java -Xlog:gc*:file=gc.log:time,uptime:filecount=5,filesize=10M -jar jenkins.war

Step 3 -->

You'd have to then tune JVM options such as -Xmx and -Xms to set appropriate heap sizes.

java -Xms2g -Xmx4g -jar jenkins.war

These settings ensure that the JVM starts with at least 2 GB of heap memory and can use up to 4 GB if needed during the application's execution.

You can adjust this accordingly after monitoring and analysis of the estimated memory requirements.
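On a typical Linux package install, these flags go into Jenkins' service configuration rather than a manual java command. Here's a sketch, assuming a systemd-managed install where the unit is named jenkins.service (paths and variable names vary by distribution — on older Debian/Ubuntu packages, the equivalent setting is JAVA_ARGS in /etc/default/jenkins):

```shell
# Set the Jenkins JVM heap options via a systemd drop-in.
# (Assumes a systemd-managed Jenkins install; adjust paths for your distro.)
sudo mkdir -p /etc/systemd/system/jenkins.service.d
sudo tee /etc/systemd/system/jenkins.service.d/override.conf > /dev/null <<'EOF'
[Service]
Environment="JAVA_OPTS=-Djava.awt.headless=true -Xms2g -Xmx4g -Xlog:gc*:file=/var/log/jenkins/gc.log:time,uptime:filecount=5,filesize=10M"
EOF

# Reload systemd and restart Jenkins so the new heap settings take effect
sudo systemctl daemon-reload
sudo systemctl restart jenkins
```

The heap values and GC log path are illustrative — pick them based on the monitoring you did in Step 1, and make sure the log directory is writable by the Jenkins user.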

3 --> Consider your Plugins Again

Yes, you heard that right.

Please, please make sure to review all of your installed plugins. Uninstall the ones that are unnecessary.

Plugins do extend the functionality Jenkins offers. However, excessive plugins slow down Jenkins' performance.

That's because some plugins ship default settings / global configurations that can hamper Jenkins' performance tuning.

Solution:
Keep Jenkins bloat-free. Eliminate unnecessary plugins. 👍
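To get a quick inventory before pruning, the Script Console (Manage Jenkins → Script Console) can list what's installed. A small sketch using the standard Jenkins core API (the output format is my own choice):

```groovy
// List installed plugins with their versions and enabled state,
// sorted by name -- handy for spotting candidates to remove.
Jenkins.instance.pluginManager.plugins
    .sort { it.shortName }
    .each { plugin ->
        println "${plugin.shortName}:${plugin.version} (enabled: ${plugin.isEnabled()})"
    }
```

Before uninstalling anything, check the plugin's "Installed" tab for dependency warnings — removing a plugin that others depend on can break them.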

4 --> Tune the Garbage Collector

Optimize the garbage collector to reduce pause times.

The garbage collector manages memory automatically, but it can cause application freezes.

Sounds good, but how do you tune the GC?

Step 1 --> Choose the right GC:-

  1. Serial GC --> Small applications in single-threaded environments.

    • -XX:+UseSerialGC

  2. Parallel GC --> Applications that can afford longer pauses but benefit from high throughput.

    • -XX:+UseParallelGC

  3. Concurrent Mark-Sweep (CMS) GC --> Low-pause applications, more suitable for web servers / interactive applications. (Note: deprecated since JDK 9 and removed in JDK 14.)

    • -XX:+UseConcMarkSweepGC

  4. G1 GC (Garbage-First GC) --> Balances high throughput and low pause times; for applications requiring predictable pause times.

    • -XX:+UseG1GC

  5. ZGC (Z Garbage Collector) --> Aims for low latency, with pause times not exceeding a few milliseconds.

    • -XX:+UseZGC
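Putting this together for a Jenkins controller: G1 is a reasonable default on modern JDKs (it's the JVM default since JDK 9). A sketch of a launch command — the heap sizes and pause goal here are illustrative, not recommendations:

```shell
# Illustrative: run Jenkins with G1 and a target max pause time.
# Tune the heap sizes and pause goal against your own monitoring data.
java -Xms2g -Xmx4g \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -jar jenkins.war
```

MaxGCPauseMillis is a goal, not a guarantee — G1 will try to keep pauses under it, trading some throughput to do so.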

Step 2 --> Adjust the GC Parameters:-

Tuning the GC threads:

Adjust the number of threads used by the garbage collector to match the application's needs.

We've already spoken about enabling GC logging and setting the right heap size.

Example for G1 GC: -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2

java -Xlog:gc*:file=gc.log:time,uptime:filecount=5,filesize=10M -jar jenkins.war

Review the logs and check for frequent full GCs or noticeably long pauses, which might hint at tweaking the settings.

5 --> Optimize Pipelines

Run tasks in parallel and define pipelines as code (using a Jenkinsfile).

Why?

Parallel tasks reduce build times; we're essentially speeding up the feedback loop, and workflows become more streamlined.

Plus, defining pipelines as code brings in some consistency and predictability across environments.

Caveat: This might bring in some complexity in configurations. Use wisely!

Quick Code Snippet:-

pipeline {
    agent any
    stages {
        stage('Build') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'make test'
                    }
                }
                stage('Integration Tests') {
                    steps {
                        sh 'make integration-test'
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                sh 'make deploy'
            }
        }
    }
}
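One small refinement worth knowing on top of the snippet above: a stage containing parallel branches accepts failFast, which aborts the sibling branches as soon as one fails instead of letting them run to completion. A sketch (standard declarative pipeline syntax; the make targets are carried over from the example above):

```groovy
stage('Build') {
    // Abort the remaining parallel branches as soon as one fails,
    // instead of waiting for all of them to finish.
    failFast true
    parallel {
        stage('Unit Tests') {
            steps { sh 'make test' }
        }
        stage('Integration Tests') {
            steps { sh 'make integration-test' }
        }
    }
}
```

This saves executor time when a fast branch (like unit tests) fails early.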

Scalability Optimization

1 --> Containerization with Kubernetes (EKS)

Deploy Jenkins as a containerised application on top of a Kubernetes cluster.

Kubernetes offers features like self-healing, scalability & availability, distributing Jenkins pods across nodes for infrastructure-level scaling.

Enhanced scalability and resilience through container orchestration👍

Code Snippet:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1  # The controller is stateful -- keep a single replica unless you have a dedicated HA setup
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080   # Web UI
        - containerPort: 50000  # Inbound agent (JNLP) traffic
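One thing the Deployment above doesn't cover is persistence: JENKINS_HOME holds all job and configuration state, so it should live on a PersistentVolume rather than inside the container. A sketch of the claim — the storage size is illustrative, and the claim would be mounted at /var/jenkins_home via a volume in the pod spec:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

Without this, a pod restart (which Kubernetes does freely) wipes your Jenkins configuration.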

2 --> Set Up the Right Agents (Slaves)

Use easy-to-manage agent nodes that can be quickly replaced or added.

Why?

Efficient agent management ensures that jobs continue running smoothly even if an agent crashes.

We're reducing downtime and improving job execution efficiency.
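One way to get exactly this kind of disposable, quickly-replaced agent is the Kubernetes plugin, which spins up a fresh agent pod per build and discards it afterwards. A sketch — this assumes the Kubernetes plugin is installed and a cloud is configured; the maven image and build command are illustrative:

```groovy
pipeline {
    // Request an ephemeral agent pod from the Kubernetes plugin.
    // The pod (and any state on it) disappears when the build ends.
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.9-eclipse-temurin-17
    command: ['sleep']
    args: ['infinity']
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn -B package'
                }
            }
        }
    }
}
```

Because every build gets a clean pod, a crashed agent costs you nothing — the next build simply requests a new one.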

3 --> Use Multiple Jenkins Masters

Use multiple Jenkins masters for different teams or projects.

Why?

This ensures that changes in one project don’t affect others and allows for project-specific configurations.

Also, we're bringing in resilience (service continuity remains unaffected, even in the face of one master's failure)

Snippet:-

apiVersion: v1
kind: Service
metadata:
  name: jenkins-master
spec:
  selector:
    app: jenkins-master
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Summary of all that we've discussed today:-

Performance Optimization:

  • Minimize build history --> limit the number of builds you retain, and for how long

  • Set a decent size for the memory heap
    --> Making sure we're preventing OutOfMemory errors & preventing unnecessary crashes

  • Optimize pipelines plus minimize unnecessary plugins

Scalability Optimization:

  • Deploy Jenkins as a containerised application on top of an EKS cluster.
    --> Make the CI/CD pipeline actually resilient by leveraging K8s' capabilities

  • Set up the right agents & use multiple Jenkins masters. Service continuity ++ 👍

That's it for today! I'll post another article shortly, geared towards enhancing Jenkins for availability, plus DR strategies for resilience.

Please feel free to connect with me on my LinkedIn handle. Do mention your thoughts in the comments. Eager to hear your thoughts!

This is Tanishka Marrott signing off!

