Pipelines are an important part of the Jenkins ecosystem for CI/CD, and one important iteration of the pipeline concept is the declarative pipeline. SD Times, in partnership with CloudBees, recently hosted a webinar called “Modernize Your Pipelines with Best Practices Built-In” to discuss the benefits of declarative pipelines.

But to understand why CloudBees, which is an enterprise sponsor of Jenkins, is pushing for people to use declarative pipelines, we have to go back and take a look at the history of pipelines in Jenkins. 

Darin Pope, developer advocate at CloudBees, explained that the first iteration of a pipeline was essentially just chaining together freestyle jobs, which are sequential tasks that the developer can define.  

According to Pope, freestyle jobs have some downsides: they are not auditable without installing extra plugins, and they bind you to the Jenkins UI, which doesn’t support more complex workflow features such as flow control, running steps on multiple agents, or running steps in parallel.

Around 2015 or 2016, Pope recalled, Jenkins 2 was released, and it introduced a new Groovy-based syntax called the scripted pipeline. But because scripted pipelines accept arbitrary code, developers could end up doing bad things when writing them, and there wasn’t much to stop them.
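As a rough illustration of that flexibility, a scripted pipeline is just Groovy code inside a `node` block, so arbitrary logic is allowed (the `make` commands below are hypothetical placeholders, not from the webinar):

```groovy
// Scripted pipeline: plain Groovy, minimal guardrails.
node {
    stage('Build') {
        sh 'make build'   // hypothetical build command
    }
    stage('Test') {
        // Arbitrary Groovy control flow is legal here --
        // flexible, but easy to turn into hard-to-maintain logic.
        if (env.BRANCH_NAME == 'main') {
            sh 'make test'
        }
    }
}
```

Because nothing constrains the structure, two teams can write the same workflow in very different, and not always maintainable, ways.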

Declarative pipelines were then introduced to put some guardrails in place for developers so that they wouldn’t do those bad things to their projects. 

“Me as a developer, on Joe Schmoe’s team running an e-commerce shop, I simply can’t go in anymore and modify that pipeline via the UI,” said Ian Kurtz, solution architect at CloudBees. “Nope, I have to open a PR, right? So it almost enables the best practices that you see, these days, as far as using pipelines.”

Declarative pipelines use the same execution engine as scripted pipelines, but they add a number of new benefits. They are easier to learn, have richer syntax, provide syntax checking, and are easily restartable using the “Restart from stage” functionality. 
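For comparison, a declarative pipeline follows a fixed, checkable structure: a top-level `pipeline` block with `agent`, `stages`, and optional `post` sections (again, the `make` commands here are hypothetical placeholders):

```groovy
// Declarative pipeline: structured syntax that Jenkins can
// validate up front, with each stage restartable individually.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
    post {
        failure {
            echo 'Pipeline failed'
        }
    }
}
```

The rigid shape is the point: Jenkins can syntax-check the file before running it, and because stages are declared explicitly, a failed stage can be rerun with “Restart from stage” instead of repeating the whole build.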

Allowing the pipeline to restart from a particular stage ensures that there’s not as much time lost when something goes wrong on these long builds. 

“I’ve worked with customers where they have infrastructure issues that are intermittent, you know, maybe they have networking problems, or an NFS drive that’s flaky. And that fails at the end of a six-hour pipeline or a two-hour pipeline, and it’s just one stage. So you can just rerun that stage if you’re using declarative,” Ray Kivisto explained. 

Kurtz added: “A lot of people with us today have seen the direct business impact of the failure of long running pipelines, right? It impacts your release cycle, impacts your build times, and it just negatively impacts your software delivery process as a whole.”

To hear the speakers expand on those benefits with real-world examples, and for best practices for creating pipelines in Jenkins, watch the webinar recording.