XebiaLabs’ Tim Buntel, Rogue Wave’s Rod Cope, HPE’s John Jeremiah, CA’s Stephen Feloney, Tasktop’s Nicole Bryan, and CollabNet’s Thomas Hooker make a case for their companies’ continuous integration and delivery solutions.

Tim Buntel, vice president of products at XebiaLabs:
For small companies or greenfield projects where you’re starting from scratch or you’re working with one team and a narrow set of technologies, implementing DevOps is fairly straightforward. Where companies begin to run into difficulties is when they start to scale their DevOps operations across disparate teams, and when projects start involving a lot of different individuals with varied skill sets, different technologies for building software, and deployment to many different environments.

The XebiaLabs DevOps Platform is built with all these variations and this complexity in mind. Our platform offers many integration points that make it easy for you to bring all your tools and teams into a Continuous Delivery pipeline, with full visibility for both technical and non-technical team members. It allows you to proactively spot bottlenecks and potential failures anywhere in the pipeline, and it lets you automate deployments to all kinds of environments, as well as workflow changes, compliance requirements, audit reports, and release management tasks.

Rod Cope, CTO of Rogue Wave:
Rogue Wave Software’s Continuous Delivery Assessment provides a blueprint for continuous delivery, helping companies develop an effective roadmap to adopting automation in software delivery processes, including architecting and optimizing open source-powered build and delivery pipelines. Open source tools like Jenkins, Maven, Puppet, Chef, Docker, and Kubernetes drive continuous integration and delivery in the modern enterprise, but getting them all working well together can be difficult.   

Each of these tools has many deployment options, dozens of configuration settings, and a large range of performance tuning capabilities.  Developers may get something running, but is it optimized?  Will it break at scale?  Is it secure? We guide teams to implement a successful continuous delivery process, adopt DevOps best practices, and implement open source tools for maximum efficiency.
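
To make the handoffs concrete, here is a minimal sketch, in Python, of the build-package-deploy stages that a CI server such as Jenkins typically orchestrates across Maven, Docker and Kubernetes. The project layout, image name and manifest path are hypothetical placeholders, not a prescribed setup.

```python
# Minimal sketch of the build-package-deploy handoffs a CI server such as
# Jenkins would orchestrate. The image name and manifest path are hypothetical.
import subprocess

def run(cmd):
    """Run one pipeline stage as a shell command and fail fast on error."""
    print("$ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

def pipeline(image="registry.example.com/demo-app:1.0"):
    # 1. Build and unit-test the application with Maven.
    run(["mvn", "clean", "verify"])
    # 2. Package the result into a Docker image and publish it.
    run(["docker", "build", "-t", image, "."])
    run(["docker", "push", image])
    # 3. Roll the new version out to a Kubernetes cluster.
    run(["kubectl", "apply", "-f", "k8s/deployment.yaml"])

if __name__ == "__main__":
    pipeline()
```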

John Jeremiah, IT and software marketing leader at HPE:
One of the problems facing enterprises is gaining visibility across pipelines and integrating different tools to get a handle on overall speed, quality and security. We’re innovating on a number of CI/CD-related initiatives; I’ll highlight a couple. We’ve been building ALM Octane as a platform to help enterprises manage and maintain visibility into the health of their delivery. Octane helps address enterprise needs for mature, CI/CD-based delivery.

Because automated testing is so critical, UFT Pro extends open source tools like Selenium and makes it easier for teams to create and maintain their battery of automated functional test scripts. These tools are critical enablers of the speed, quality and security of faster app delivery, but they have to do the work.
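
For a sense of what such an automated functional test script looks like, below is a minimal Selenium WebDriver test using Selenium’s Python bindings. It is a generic illustration rather than UFT Pro output, and the URL, form fields and expected page title are hypothetical.

```python
# Minimal Selenium WebDriver functional test (illustrative only; the URL and
# locators are hypothetical, not tied to any specific product).
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_flow():
    driver = webdriver.Chrome()  # assumes Chrome and a matching driver are available
    try:
        driver.get("https://example.com/login")
        # Drive the UI the way a user would.
        driver.find_element(By.NAME, "username").send_keys("demo-user")
        driver.find_element(By.NAME, "password").send_keys("demo-pass")
        driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
        # Assert on the functional outcome.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login_flow()
```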

Stephen Feloney of CA:
Again and again, we hear from our customers that they are missing the mark on continuous delivery because they still have manual testing silos, manual release processes and a disconnected DevOps toolchain.

It is evident that companies cannot achieve continuous delivery if they do not modernize their testing and release practices, and CA solutions are purpose-built to solve these bottlenecks. We enable teams to generate test scripts from requirements, simulate test environments anywhere, access robust test data when it’s needed, and execute performance and security testing early and often. We orchestrate continuous “everything” — development, testing, release and improvement — with robust integrations to open source, commercial and homegrown solutions across the DevOps toolchain, including planning, CI, testing and deployment tools.
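
As a rough illustration of what simulating a test environment can mean in practice, the sketch below uses only the Python standard library to stand up a tiny HTTP stub that mimics a downstream dependency, so tests can run before the real service is available. The endpoint and payload are hypothetical and unrelated to any vendor’s implementation.

```python
# Tiny HTTP stub that stands in for a downstream payments service during
# testing (a generic illustration of simulating a test environment; the
# endpoint and payload are hypothetical).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualPaymentsService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/payments/status":
            body = json.dumps({"status": "APPROVED", "latency_ms": 12}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Tests point at http://localhost:8080 instead of the real dependency.
    HTTPServer(("localhost", 8080), VirtualPaymentsService).serve_forever()
```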

Nicole Bryan, vice president of product management at Tasktop:
Many CI/CD tools focus on solving “the right side” of the DevOps delivery pipeline, i.e., connecting releases to builds, automating deployments and monitoring production applications. In other words, they improve the time to value from code completion to code in production.

Of equal importance is “the left side” of the pipeline, i.e., efficiently turning customer requests into requirements, features, epics and stories. These two sides must be interlinked because the success of the left side influences the success of the right side (and vice versa). Both must be integrated and communicating to create a value stream that continuously builds software that meets customers’ needs.

An organization’s main problem is giving knowledge workers access to the right information at the right time, and that is what Tasktop solves. Integration relies on the real-time flow of project-critical information.

Thomas Hooker, vice president of marketing at CollabNet:
When you select a solid continuous integration or continuous delivery tool, one of the things it has to have is visibility, so that managers and teams can see what is going on at all times. I think that is key, and you have to be able to provide that visibility based on the persona of the user.

Our products give customers that visibility across the “whole forest,” so you get to see all the different work steps from a common dashboard. This is what CollabNet’s DevOps Lifecycle Manager gives managers: a single view of all the value streams in their portfolio, as well as insight into the ways in which these applications contribute to the value the organization delivers.