Application delivery teams and the application lifecycle management tools they use are reaching a breaking point as they struggle to support continuous delivery of applications with an expanding array of architectures and form factors. According to Ashish Kuthiala, Austin-based senior director for Agile and DevOps portfolio offerings at Hewlett Packard Enterprise Software, three key disruptions are reshaping ALM and testing: DevOps, increasing application complexity, and Cloud and SaaS models. In response, enterprise development’s next wave of productivity will be increasingly automated, collaborative and powered by big data.
SD Times spoke with Kuthiala about these disruptions and how predictive analytics and machine learning will be necessary tools for building quality software from the start.
SD Times: What’s your background and how has that positioned you to see these disruptions?
Kuthiala: I’ve held different roles across the software development value chain: developer, member of early Extreme Programming teams, and hands-on roles on the operations side of the house. When I found myself under pressure to accelerate this delivery chain due to business urgency and new technology, I could see that seismic changes were needed to keep up.
Since you mentioned Extreme Programming, do you feel that test-first programming and XP ultimately led us to DevOps?
Paradigms such as Extreme Programming were precursors and accelerators to DevOps, but a lot of those methodologies focused on just the dev-test teams. Agile focused on development, testing and users, but it fell short on delivering value quickly to end users. You’d work fast and hard on smaller deliverables to get them right, but then wait to bundle them up and throw them over to production teams.
QA has a lot of ingrained processes and systems — and historically, that was the right thing to do. The QA organization’s main charter was not to let shoddy code slip by, so processes, tool sets and teams were built not so much for speed as for quality. Now there’s a lot of pressure on QA teams to rethink their processes, because quality processes that take long cycles cannot hold up the speed of delivery to the end user — and more importantly, quality cannot be achieved by a siloed team. Quality assurance needs to be pervasive throughout the software value delivery chain.
So DevOps is the first disruptor.
Today, application design, development and testing happen simultaneously, requiring test creation and execution earlier in the lifecycle, even before coding begins. This puts new pressures on QA to adapt or risk being cut out of the DevOps process.
First, test definition must start with user stories and requirements, before code is written, facilitated by proven practices such as Behavior-Driven Development (BDD) and Test-Driven Development (TDD); a test-first sketch follows this list.
Second, testers must skill up to increase their use of automation at every phase: unit, functional, regression, load and security testing, while applying automation best practices to achieve good functional test design and reusability.
Third, testers must align with the teams implementing the Continuous Integration (CI) toolchain so that unit, functional, regression, performance and security tests execute within the continuous build cycle, inside a single sprint.
Finally, with the complexity of today’s application landscape, there is always more to test than time allows. Testers should get comfortable leveraging production analytics to understand how apps are actually being used in the wild, and use that insight to focus their test activities (see the usage-based sketch after this list).
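To make the first point concrete, here is a minimal test-first sketch using Python and pytest: the tests are written from a user story before any implementation exists, so they drive the design. The story, module and function names are all hypothetical.

```python
# test_cart_discount.py -- written before the implementation exists (TDD).
# User story: "As a shopper, I get 10% off any order over $100."
# `cart.pricing.apply_discount` is a hypothetical module/function the tests drive.
import pytest

from cart.pricing import apply_discount  # fails until the code is written -- that's the point


def test_orders_over_100_get_ten_percent_off():
    assert apply_discount(150.00) == pytest.approx(135.00)


def test_orders_at_or_under_100_pay_full_price():
    assert apply_discount(100.00) == pytest.approx(100.00)
```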
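And for the last point, a deliberately simple sketch of how production usage analytics might focus a limited test budget: regression suites are scheduled in order of each feature’s share of real traffic until the budget runs out. All feature names and numbers here are invented for illustration.

```python
# Focus test effort using production analytics: weight each feature's
# regression suite by its observed share of real traffic.
USAGE_SHARE = {"checkout": 0.55, "search": 0.30, "wishlist": 0.10, "gift_cards": 0.05}
SUITES = {"checkout": 420, "search": 210, "wishlist": 150, "gift_cards": 90}  # test counts

budget = 500  # tests we have time to run this cycle
plan = {}
# Fill the budget starting with the most-used features in production.
for feature in sorted(USAGE_SHARE, key=USAGE_SHARE.get, reverse=True):
    take = min(SUITES[feature], budget - sum(plan.values()))
    plan[feature] = take

print(plan)  # {'checkout': 420, 'search': 80, 'wishlist': 0, 'gift_cards': 0}
```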
Then the second disruptor is complexity — but this is where your approach gets interesting, because you advocate using predictive analytics within QA.
When we talk about software complexity, the whole model of how software is built and delivered is radically different than it was one, two or even three years ago. Development teams are adopting new architectural models to create and deliver applications in the continuous delivery model. We are also seeing an explosion in the sheer number of distinct platforms and form factors, such as web, mobile and the Internet of Things, from which software is consumed.
These fundamental changes in delivery cadence, software platforms and software architecture are increasing the complexity of lifecycle management, to the point of chaos. Heterogeneous development processes, apps built from shared services and APIs, widespread open source in code and tools, and the differing protocols and delivery characteristics of IoT and mobile applications all challenge app delivery teams. Even the smallest code change has so many ramifications — it’s a network effect.
For example, changing even one line of code can have a severe impact beyond the module in which it is contained. How do we analyze and track these impacts? Tapping into and analyzing the vast amount of data that lies across your software development ecosystem can provide the insights and alerts you need to make fast, intelligent decisions.
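One common way to track that network effect is to walk a reverse dependency graph outward from the changed module. A minimal sketch, assuming such a graph has already been extracted from the codebase (the module names are hypothetical):

```python
from collections import deque

# Hypothetical reverse dependency graph: edges point from a module to the
# modules that depend on it.
DEPENDENTS = {
    "billing.tax": ["billing.invoice", "reports.revenue"],
    "billing.invoice": ["checkout.flow"],
    "reports.revenue": [],
    "checkout.flow": [],
}

def impacted_modules(changed: str) -> set[str]:
    """Breadth-first walk to find every module a change can reach."""
    seen, queue = set(), deque([changed])
    while queue:
        module = queue.popleft()
        for dependent in DEPENDENTS.get(module, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(impacted_modules("billing.tax"))
# {'billing.invoice', 'reports.revenue', 'checkout.flow'} -- one line, three ripples
```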
What kinds of predictive analytics and machine learning would you look for?
Today, if I make a change, it may take five hours to see whether it passes all the quality tests — and that’s if you’re doing things really well. Sometimes it takes a week or even a month in many organizations. To deliver at the speed the business wants to move, this is increasingly unacceptable.
If I were to embrace data-based machine learning and analytics-driven testing, the metrics I would want as a developer or a tester are: Does the number of tests I have to run for my code changes go down with each cycle? Do the test cycle times get faster? What is my confidence level in accepting the machine learning recommendations about my tests? Is there learning from each cycle? Perhaps the first time I ran 120 tests, and the next time I only have to run 25 based on past learning.
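As a toy illustration of that kind of learning, the sketch below recommends tests from past failure history rather than a trained model: as history accumulates, fewer tests are recommended for a given change. The records, file and test names, and the confidence threshold are all assumptions made for the example.

```python
# A toy history-based test selector: each record notes which files changed in a
# past cycle and which tests then failed. Names are hypothetical.
from collections import defaultdict

HISTORY = [
    {"changed": {"cart.py"}, "failed": {"test_cart_total", "test_discount"}},
    {"changed": {"auth.py"}, "failed": {"test_login"}},
    {"changed": {"cart.py"}, "failed": {"test_discount"}},
]

def tests_to_run(changed_files, all_tests, min_confidence=0.5):
    failures = defaultdict(int)  # test name -> failure count for these files
    relevant = 0                 # past cycles that touched any changed file
    for record in HISTORY:
        if record["changed"] & changed_files:
            relevant += 1
            for test in record["failed"]:
                failures[test] += 1
    if relevant == 0:
        return all_tests  # no history yet, so run the full suite
    return {t for t in all_tests if failures[t] / relevant >= min_confidence}

suite = {"test_cart_total", "test_discount", "test_login", "test_signup"}
print(tests_to_run({"cart.py"}, suite))
# -> {'test_cart_total', 'test_discount'}: half the suite, based on past learning
```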
Your third disruptor is cloud and SaaS (software-as-a-service). What effect does that have on the lifecycle?
There’s increasing adoption of infrastructure models that can be instantiated at the click of a button — scalable cloud and SaaS models that are fundamentally changing the way applications are both composed and consumed.
Meanwhile, legacy systems aren’t going away — they have a long half-life. When you start to manage a mix of such models, how do you rapidly provision or consume services from these different platforms? How do you scale up and down based on your needs? How do you test and replicate all the hybrid platforms: Amazon, Azure, on-premises, mobile, web…?
The proven cost savings garnered from moving to the cloud, and the elasticity of cloud delivery, are enabling teams to rapidly deliver against business requirements and meet unpredictable consumption loads, but there are huge challenges in harnessing these models to your benefit.
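To illustrate one answer to the "how do you test all the hybrid platforms" question, the same functional check can be parametrized across deployment targets so every footprint is exercised identically. The endpoint URLs below are placeholders; a real suite would load them from configuration.

```python
# Run one functional check against every deployment target in the hybrid mix.
import urllib.request

import pytest

TARGETS = {
    "on-prem": "https://app.internal.example.com",
    "aws": "https://app.aws.example.com",
    "azure": "https://app.azure.example.com",
}

@pytest.mark.parametrize("base_url", TARGETS.values(), ids=list(TARGETS))
def test_health_endpoint(base_url):
    # Every target, wherever it runs, must honor the same health contract.
    with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
        assert resp.status == 200
```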
What is HPE’s solution to these problems?
We believe that application delivery teams need to prepare for a hybrid-cloud world by investing in skills and tools to test and manage software composed of on-premises and cloud services, and investigate a hybrid-cloud approach to application delivery management as well.
HPE’s ADM software suite supports hybrid cloud delivery with a highly elastic, cost-effective choice of consumption models. First, we provide a choice of on-premises or cloud-based automated lifecycle management; functional, performance and security testing; and the ability to set up a flexible, on-demand test lab in the cloud.
Second, HPE’s ADM suite can rapidly provision and scale all forms of testing globally across on-premises, private and public cloud footprints, with your choice of where the application under test, the integrated services and user devices reside: on-premises or in the cloud.
Third, we build in service and network virtualization. Service virtualization enables continuous development and testing across teams even when dependent services are not yet ready, or sit in the cloud and are difficult if not impossible to access; network virtualization simulates the global network behavior that can otherwise create obstacles to quality and performance.
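Service virtualization itself is a rich capability, but the core idea fits in a few lines: stand up a stub that answers like the missing or unreachable service, so development and testing can proceed against it. The route and payload below are hypothetical, and real virtualization tools record and replay far richer behavior than this sketch.

```python
# A minimal service stub standing in for a dependency that isn't built yet.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by request path -- invented for illustration.
CANNED = {"/api/v1/rates": {"USD_EUR": 0.92, "source": "stub"}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not stubbed"}).encode())

if __name__ == "__main__":
    # Point the application under test at localhost:8080 instead of the real service.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```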
Given these three disruptors, what is a simple yet bold move a development organization can take right now?
Businesses — and therefore their IT colleagues — are under relentless pressure to innovate faster than their competition. This transformation needs to cut across teams, processes and the tooling underneath them. It cannot be an overnight change; it’s an ongoing journey of continuous improvement. Start by attacking your biggest problem or bottleneck in the system. Be ready to experiment, fail and learn fast. Analyze your data to learn and get better.
Once you solve this problem, you move on to the next bottleneck, and so on. That mindset is what we see in the organizations that are most successful.