Rapid innovation and the digitalization of everything are increasing application complexity and the complexity of the environments in which applications run. While there’s an increasing emphasis on continuous testing as more DevOps teams embrace CI/CD, some organizations are still disproportionately focused on functional testing.

“Just because it works doesn’t mean it’s a good experience,” said Thomas Murphy, senior director analyst at Gartner. “If it’s my employee, sometimes I make them suffer but that means I’m going to lose productivity and it may impact employee retention. If it’s my customers, I can lose retention because I did not meet the objectives in the first place.”

Today’s applications should facilitate the organization’s business goals while providing the kind of experience end users expect. To accomplish that, software teams must take a more holistic approach to testing than they have traditionally, one that involves more types of tests and more roles in testing.

“The patterns of practice come from architecture and the whole idea of designing patterns,” said Murphy. “The best practices 10 years ago are not best practices today and the best practices three years ago are probably not the best practices today. The leading practices are the things Google, Facebook and Netflix were doing three to five years ago.”

Chris Lewis, engineering director at technology consulting firm DMW Group, said his enterprise clients are seeing the positive impact a test-first mindset has had over the past couple of years.

“The things I’ve seen [are] particularly in the security and infrastructure world where historically testing hasn’t been something that’s been on the agenda. Those people tend to come from more traditional, typically full-stack software development backgrounds and they’re now wanting more control of the development processes end to end,” said Lewis. “They started to inject testing thinking across the life cycle.”

Nancy Kastl, executive director of testing services at digital transformation agency SPR, said a philosophical evolution is occurring regarding what to test, when to test and who does the testing. 

“Regarding what to test, the movement continues away from both manual [and] automated UI testing methods and toward API and unit-level testing. This allows testing to be done sooner, more efficiently and fosters better test coverage,” said Kastl.

“When” means testing earlier and throughout the software development life cycle (SDLC).

“Companies are continuing to adopt Agile or improve the way they are using Agile to achieve its benefits of continuous delivery,” said Kastl. “With the current movement to continuous integration and delivery, the ‘shift-left’ philosophy is now embedded in continuous testing.”

However, when everyone’s responsible for testing, arguably nobody is, unless it’s clear who should test what, when and how. Testing can no longer be the sole domain of testers and QA engineers, because finding and fixing bugs late in the SDLC is inadequate, unnecessarily costly and untenable as application teams continue to shrink their delivery cycles. As a result, testing must shift left to developers and right to production, involving more roles.

“This continues to be a matter of debate. Is it the developers, testers, business analysts, product owners, business users, project managers [or] someone else?” said Kastl. “With an emphasis on test automation requiring coding skills, some argue for developers to do the testing beyond just unit tests.”

Meanwhile, the scope of tests continues to expand beyond unit, integration, system and user acceptance testing (UAT) to include security, performance, UX, smoke, and regression testing. Feature flags, progressive software delivery, chaos engineering and test-driven development are also considered part of the testing mix today.

Security goes beyond penetration testing
Organizations irrespective of industry are prioritizing security testing to minimize vulnerabilities and manage threats more effectively.

“Threat modeling would be a starting point. The other thing is that AI and machine learning are giving me more informed views of both code and code quality,” said Gartner’s Murphy. “There are so many different kinds of attacks that occur and sometimes we think we’ve taken these precautions but the problem is that while you were able to stop [an attack] one way, they’re going to find different ways to launch it, different ways it’s going to behave, different ways that it will be hidden so you don’t detect it.”

In addition to penetration testing, organizations may use a combination of tools and services that can vary based on the application. Some of the more common ones are static and dynamic application security testing, mobile application security testing, database security testing, software composition analysis and appsec testing as a service.

DMW Group’s Lewis said his organization helps clients improve the way they define their compliance and security rules as code, typically working with people in conventional security architecture and compliance functions.

“We get them to think about what the outcomes are that they really want to achieve and then provide them with expertise to actually turn those into code,” said Lewis.
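As a loose illustration of the rules-as-code idea, a compliance outcome such as “storage must be encrypted and kept private” can be expressed as an executable check rather than a policy document. The config shape and rule names below are hypothetical, not taken from any particular tool DMW Group uses:

```python
# Illustrative sketch of a security rule expressed as code.
# The resource dictionary shape and field names are made up for the example.

def check_storage_encryption(resource: dict) -> list[str]:
    """Return a list of rule violations for a storage resource config."""
    violations = []
    if not resource.get("encryption_at_rest", False):
        violations.append(f"{resource['name']}: encryption at rest disabled")
    if resource.get("public_access", False):
        violations.append(f"{resource['name']}: public access enabled")
    return violations

# A hypothetical bucket definition pulled from infrastructure-as-code config.
bucket = {"name": "audit-logs", "encryption_at_rest": True, "public_access": True}
print(check_storage_encryption(bucket))  # → ['audit-logs: public access enabled']
```

Because the rule is code, it can run in the pipeline on every change, rather than being verified once during an audit.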

SPR’s Kastl said continuous delivery requires continuous security verification to provide early insight into potential security vulnerabilities.

“Security, like quality, is hard to build in at the end of a software project and should be prioritized through the project life cycle,” said Kastl. “The Application Security Verification Standard (ASVS) is a framework of security requirements and controls that define a secure application when developing and testing modern applications.”

Kastl said that includes:

  • adding security requirements to the product backlog with the same attention to coverage as the application’s functionality;
  • a standards-based test repository that includes reusable test cases for manual testing and to build automated tests for Level 1 requirements in the ASVS categories, which include authentication, session management, and function-level access control;
  • in-sprint security testing that’s integrated into the development process while leveraging existing approaches such as Agile, CI/CD and DevOps;
  • post-production security testing that surfaces vulnerabilities requiring immediate attention before opting for a full penetration test;
  • and penetration testing to find and exploit vulnerabilities and to determine whether previously detected vulnerabilities have been fixed. 
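For instance, an ASVS Level 1 session-management requirement — session cookies flagged Secure, HttpOnly and SameSite — can be captured as a small, reusable automated test. The helper below is a simplified sketch that inspects a Set-Cookie header string; a real suite would run it against live HTTP responses:

```python
# Simplified sketch of an automated session-management check.
# Parses a Set-Cookie header and reports missing security attributes.

def session_cookie_violations(set_cookie: str) -> list[str]:
    """Return the security attributes missing from a Set-Cookie header."""
    attrs = {part.strip().split("=")[0].lower() for part in set_cookie.split(";")}
    return [req for req in ("secure", "httponly", "samesite") if req not in attrs]

# Hypothetical header captured from an application under test.
header = "SESSIONID=abc123; Path=/; Secure"
print(session_cookie_violations(header))  # → ['httponly', 'samesite']
```

Checks like this are cheap enough to run in-sprint on every build, which is exactly the integration Kastl describes.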

“The OWASP Top 10 is a list of the most common security vulnerabilities,” said Kastl. “It’s based on data gathered from hundreds of organizations and over 100,000 real-world applications and APIs.”

Performance testing beyond load testing
Load testing ensures that the application continues to operate as intended as the workload increases with emphasis on the upper limit. By comparison, scalability testing considers both minimum and maximum loads. In addition, it’s wise to test outside of normal workloads (stress testing), to see how the application performs when workloads suddenly spike (spike testing) and how well a normal workload endures over time (endurance testing).
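One way to see the difference between these test types is the shape of the virtual-user curve over time. The profiles below are an illustrative sketch with made-up numbers, not a real load-generation tool:

```python
# Illustrative virtual-user profiles for different performance test types.
# All parameters (seconds, user counts) are invented for the example.

def load_profile(t: int, ramp: int = 60, peak: int = 100) -> int:
    """Load test: ramp steadily toward the expected upper limit."""
    return min(peak, int(peak * t / ramp))

def spike_profile(t: int, base: int = 20, spike: int = 200,
                  start: int = 30, end: int = 40) -> int:
    """Spike test: a sudden burst far above the normal workload."""
    return spike if start <= t < end else base

def endurance_profile(t: int, steady: int = 50) -> int:
    """Endurance (soak) test: normal load held flat for a long duration."""
    return steady

print(load_profile(30))        # → 50 (halfway up the ramp)
print(spike_profile(35))       # → 200 (inside the spike window)
print(endurance_profile(3600)) # → 50 (unchanged after an hour)
```

Stress and scalability tests reuse the same idea: push the curve past the expected peak, or sweep it from minimum to maximum load.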

“Performance really impacts people from a usability perspective. It used to be if your page didn’t load within this amount of time, they’d click away and then it wasn’t just about the page, it was about the performance of specific elements that could be mapped to shopping cart behavior,” said Gartner’s Murphy.

For example, GPS navigation and wearable technology company Garmin suffered a multi-day outage when it was hit by a ransomware attack in July 2020. Its devices were unable to upload activity to Strava’s mobile app and website for runners and cyclists. The situation underscores the fact that cybersecurity breaches can have downstream effects.

“I think Strava had a 40% drop in data uploads. Pretty soon, all this data in the last three or four days is going to start uploading to them so they’re going to get hit with a spike of data, so those types of things can happen,” said Murphy.

To prepare for that sort of thing, one could run performance and stress tests on every build or use feature flags to compare performance with the prior build.

Instead of waiting for a load test at the end of a project to detect potential performance issues, performance tests can be used to baseline the performance of an application under development.

“By measuring the response time for a single user performing specific functions, these metrics can be gathered and compared for each build of the application,” said Kastl. “This provides an early warning of potential performance issues. These baseline performance tests can be integrated with your CI/CD pipeline for continuous monitoring of the application’s performance.”
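A minimal sketch of that baseline idea, with a hypothetical operation and a made-up 20% regression tolerance: time a single-user function on each build and fail the pipeline when it drifts too far from the stored baseline.

```python
import time

def measure_response_ms(fn, runs: int = 5) -> float:
    """Median wall-clock time of a single-user operation, in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return sorted(samples)[len(samples) // 2]

def within_baseline(measured_ms: float, baseline_ms: float,
                    tolerance: float = 0.20) -> bool:
    """Pass if this build is no more than 20% slower than the baseline."""
    return measured_ms <= baseline_ms * (1 + tolerance)

def search_catalog():
    # Stand-in for a real user-facing operation in the application under test.
    sum(i * i for i in range(10_000))

ms = measure_response_ms(search_catalog)
print(within_baseline(ms, baseline_ms=50.0))
```

In a CI/CD pipeline, the baseline figure would be stored with the project and updated deliberately, so a slow creep in response time surfaces as a failed build rather than a surprise in UAT.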

Mobile and IoT devices, such as wearables, have increased the need for more comprehensive performance testing and there’s still a lot of room for improvement.

“As the industry has moved more to cloud-based technology, performance testing has become more paramount,” said Todd Lemmonds, QA architect at health benefits company Anthem, a Sauce Labs customer. “One of my current initiatives is to integrate performance testing into the CI/CD pipeline. It’s always done more toward UAT which, in my mind, is too late.”

To effect that change, developers need to think about performance and how the analytics need to be structured in a way that allows the business to make decisions. The artifacts can be used later during a full systems performance test.

“We’ve migrated three channels on to cloud, [but] we’ve never done a performance test of all three channels working at capacity,” said Lemmonds. “We need to think about that stuff and predict the growth pattern over the next five years. We need to make sure that not only can our cloud technologies handle that but what the full system performance is going to look like. Then, you run into issues like all of our subsystems are not able to handle the database connections so we have to come up with all kinds of ways to virtualize the services, which is nothing new to Google and Amazon, but [for] a company like Anthem, it’s very difficult.”

DMW Group’s Lewis said some of his clients have ignored performance testing in the cloud, assuming that elastic environments make it unnecessary.

“We have to bring them back to reality and say, ‘Look, there is an art form here that has significantly changed and you really need to start thinking about it in more detail,’” said Lewis.

UX testing beyond UI and UAT
While UI and UAT testing remain important, UI testing is only a subset of UX testing, and traditional UAT happens late in the cycle. Feature flagging helps by providing early insight into what is and isn’t resonating with users while generating valuable data. There’s also usability testing, including focus groups, session recording, eye tracking and quick one-question in-app surveys that ask whether the user “loves” the app or not.
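A common way to implement such flags is deterministic, hash-based bucketing, so a given user always lands in the same cohort and the two groups can be compared fairly. The flag name and rollout percentage below are hypothetical:

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user: the same user always gets the same answer."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < rollout_pct

# Roll a hypothetical redesigned checkout out to 10% of users, then compare
# their behavior (conversion, errors, ratings) against the other 90%.
print(flag_enabled("new-checkout", "user-42", rollout_pct=10))
```

Because bucketing is stable across sessions, the flagged cohort behaves like a long-running experiment group rather than a random sample that changes on every page load.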

One area that tends to lack adequate focus is accessibility testing, however. 

“More than 54 million U.S. consumers have disabilities and face unique challenges accessing products, services and information on the web and mobile devices,” said SPR’s Kastl. “Accessibility must be addressed throughout the development of a project to ensure applications are accessible to individuals with vision loss, low vision, color blindness or learning loss, and to those otherwise challenged by motor skills.”

The main issue is a lack of awareness, especially among people without firsthand or secondhand experience of disabilities. And while enforcement of accessibility regulations is limited, accessibility-related lawsuits are growing rapidly. 

“The first step to ensuring an application’s accessibility is to include ADA Section 508 or WCAG 2.1 Accessibility standards as requirements in the product’s backlog along with functional requirements,” said Kastl.

Non-compliance with an accessibility standard on one web page tends to be repeated on all web pages or throughout a mobile application. To detect non-compliant practices as early as possible, wireframes and templates for web and mobile applications should be reviewed for potentially non-compliant design components, Kastl said.

In addition to the design review, there should be a code review in which development teams perform self-assessments, using tools and practices to identify standards that have not been followed in coding. Corrective action should be taken by the team prior to the start of application testing. Then, during in-sprint testing activities, assistive technologies and tools such as screen readers, screen magnification and speech recognition software should be used to test web pages and mobile applications against accessibility standards. Automated tools can detect and report non-compliance.
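As a small example of what automated accessibility checks look for, the sketch below uses Python’s standard html.parser to flag images missing the alt text that screen readers rely on (WCAG’s text-alternatives requirement); real scanners of the kind Kastl describes cover far more rules than this:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flag <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.get_starttag_text())

# Hypothetical page fragment: one compliant image, one missing its alt text.
page = '<div><img src="logo.png" alt="Company logo"><img src="chart.png"></div>'
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # → ['<img src="chart.png">']
```

Run against every template, a check like this catches the repeated-on-every-page class of violation before manual testing with assistive technologies begins.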

Gartner’s Murphy said organizations should be monitoring app ratings and reviews as well as social media sentiment on an ongoing basis.

“You have to monitor those things, and you should. You’re feeding stuff like that into a system such as Statuspage or PagerDuty so that you know something’s gone wrong,” said Murphy. “It may not just be monitoring your site. It’s also monitoring those external sources because they may be the leading indicator.”