With agile development methods and continuous build and deployment, organizations can produce several builds in a day. But if it takes 30 hours to test a build, how do testers get it done by tomorrow? How do they keep up?

By leveraging the automation capabilities of the very tools that are threatening to bury them, testers can do more testing and increase test efficiency.

A couple of years ago, I was testing the customer-facing and internal-facing portals for a company that provided VoIP services. It took about 30 minutes for me to manually deploy a Git tag to my virtual Unix/Tomcat test environment. After that, it took about 60 hours of testing to go through the major scenarios and validate the results.

Eventually I cut it down to 30 hours of testing by adding some automated scripts to prepare the database with the test accounts and profiles that my test scripts needed. This saved me hours of manual data entry in the portal interface. I also added some automated scripts to help quick-test the basic portal features, but the real testing had to be done manually.

Sales and Marketing were evolving the “perfect” feature package plans. As a result, the customer’s billing statement changed with every new build. The requirements were fuzzy and constantly changing as they tried out different ideas.


Also, as we neared the release date, I discovered that certain rapidly proliferating inventory items in the production database could not be deleted when they ceased to exist in the real world. So we started testing database fixes that could not be completely verified in QA because of differences between the QA and production environments; they would have to be tested again in production.

In this same timeframe, unscheduled and untested changes were made to some IP tables during the weekend. The resulting errors caused major outages in the customer’s phone services. The director called a big meeting and told everyone to slow down and be more careful. He said, “If someone asks you to do something you are not comfortable with, raise your hand and say, ‘I am not comfortable doing this.’ ”

After several extensions to the release date (due to continuous changes in the application), Development decided to schedule the release to production. They told me I would be getting the release-candidate tag for testing the next morning (Friday), and they were going to deploy the release on Saturday. So, I could verify the release in production on Sunday, giving me a sleepless 24 hours to do my 30 hours of testing in QA, and another 24 hours to repeat the feat in production.

Given the risk to production, I wrote an e-mail saying the magic words: “I don’t feel comfortable doing this…” My contract was cancelled an hour after I sent the e-mail, and they released the portals on Sunday as planned (without my testing).

Lesson learned: Test faster
Everyone needs to get code into production faster. So, even if management seems sympathetic to the problem, the fact remains that we have to deploy and test faster, or get left by the wayside.

I haven’t really found a way to test faster. What I have found are some ways to get the code moved through the test stages faster, without taking on a risk that I am “not comfortable” with.

I have been working with some excellent continuous-delivery (CD) automation tools as well as DevOps methods over the past year. I have discovered that the same tools that have made it possible to build and deploy an application in minutes (and bury the testers) can automate testing just as effectively as they automate deployment. The first, best trick I learned is to leverage that automation to automate my testing. Based on this experience, I can say that there are several good test-automation opportunities in CD and DevOps.

Here are three opportunities I have discovered to speed up testing using automated deployment tools in an agile/DevOps environment:

1. Institute collaboration, and allow self-serve deployments for developers and testers.

2. Add test automation to the deployment process to:
• Automate infrastructure and structural tests. This is mechanical, repeatable, simple and boring, and should always be automated. There are a number of good tools that fill this slot well (see the sketch after this list).
• Automate test data preparation and distribution to your test environments.

3. Do less testing. Function, story, scenario, user acceptance and exploratory testing are all types of testing that everyone agrees should be done manually. Don't try to automate these tests; instead, test only what has changed. (Scary, I know, but with CD you can actually do this without adding much risk or discomfort.)
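
To make the structural-testing bullet in item 2 concrete, here is a minimal sketch of the kind of checks that can run automatically right after a deployment. The host names and port are placeholders I made up; a real suite would probe whatever your environment definition promises.

import socket
import sys
import urllib.request

BASE_URL = "https://qa.example.com"              # hypothetical QA host
DB_HOST, DB_PORT = "qa-db.example.com", 5432     # hypothetical database endpoint

def check_http(path="/"):
    # Structural check: the deployed app answers HTTP 200 on its landing page.
    with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
        assert resp.status == 200, f"{path} returned {resp.status}"

def check_db_port():
    # Structural check: the database is listening where the config says it is.
    with socket.create_connection((DB_HOST, DB_PORT), timeout=5):
        pass

if __name__ == "__main__":
    for check in (check_http, check_db_port):
        try:
            check()
            print(f"PASS {check.__name__}")
        except Exception as exc:
            print(f"FAIL {check.__name__}: {exc}")
            sys.exit(1)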

Collaboration and self-serve deployments
Many testers in entrepreneur-driven shops do their own deployments before they test. This saves the time you used to spend waiting for someone else to deploy your application. Testers in this role actually have more to test than ever before: not just the application functions, stories, integrations, configuration and dependencies, but also the deployment process itself.

Testing the deployment process is a new opportunity in CD and DevOps: anyone doing self-serve deployments is testing the deployment. Look for more on this topic in the future. The most efficient arrangement is one core deployment process for the application, reused in every environment in the SDLC; each environment gets its own environment definitions, so the same core process can deploy the application at every stage.
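
As an illustration of the one-core-process idea, here is a sketch in Python: the deployment routine never changes, and only the environment definitions vary. The hosts, paths and scp invocation are hypothetical; a commercial deployment tool would model all of this for you.

import subprocess

# Hypothetical environment definitions; a real tool would manage these.
ENVIRONMENTS = {
    "dev":  {"host": "dev.example.com",  "webapps": "/opt/tomcat/webapps"},
    "qa":   {"host": "qa.example.com",   "webapps": "/opt/tomcat/webapps"},
    "prod": {"host": "prod.example.com", "webapps": "/opt/tomcat/webapps"},
}

def deploy(artifact, env_name):
    # The core process is identical everywhere; only the definitions differ.
    env = ENVIRONMENTS[env_name]
    subprocess.run(
        ["scp", artifact, f"{env['host']}:{env['webapps']}/"],
        check=True,
    )
    print(f"Deployed {artifact} to {env_name} ({env['host']})")

deploy("portal-1.4.2.war", "qa")   # the same call works for dev, qa or prod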

I have run a lot of deployments, mostly manual in 2011 and automated in 2012. Automated deployments under DevOps are wonderful compared to traditional manual or script-driven deployments.

The first difference is collaboration. In traditional, manual deployments, debugging a failed deployment requires system-administration knowledge, lots of permissions, and probably a lot of time. I am not a system administrator; I am a tester.

I had a manual deployment at the VoIP phone company take two days because someone changed the configuration of the Tomcat server. The Unix administrator was not available, and I didn’t have enough permission to even look at the tail of the log. These are classic IT deployment problems that DevOps is determined to fix through collaboration and breaking down traditional silo barriers.

CD and DevOps are both concerned with managing and controlling the configuration of the environments as well as the applications. So, in CD and DevOps, the tool could have prevented the failure by deploying the correct configuration as part of the process.

Further, if the deployments are automated using a good commercial tool, every step in the process will report success or failure. When I have a deployment fail for reasons I don’t understand, I need to be able to reach out to the folks who can figure it out. With the tool, I don’t need special permissions to go poking around in the system; I can send the log to a developer or IT person for help, right from the tool. I usually get a diagnosis and help in minutes.

Add test automation to the process
CI has proven the value of running automated unit tests to establish the quality of the build. Test automation in CD can verify the quality of the application, the configuration/infrastructure of the environment where it is deployed, and the deployment process itself. It can also be used to create and deploy test artifacts, like test data, that are necessary for test execution.

When I polled testers who were doing deployments (both manual and automated), several reported deploying test data when they deploy their applications, but no one reported doing any automated testing when they deploy. There have to be people taking advantage of this test automation opportunity, but I haven’t been able to find any.

So why aren’t testers running automated tests when the application is deployed? There seem to be three main reasons.

First, automated function and scenario testing usually fails to deliver, but it's the first "automated testing" people think of, so they focus on it and ignore other opportunities.

Second, once you do get some test automation running, it quickly stops finding bugs and people lose interest.

And third, we have been ignoring the really good automated tests because they aren’t function tests, and they don’t find many bugs most of the time.

What kind of automated tests should we run at deployment time? Automate the stuff that can be automated reliably, and put it into the deployment process the same way unit tests are part of CI. CI has made automated unit test suites viable in the build cycle because the cost of running these simple tests is negligible, and they provide a baseline measurement of code quality. Just because they pass doesn't mean that the code is perfect; it means that the baseline testing has been accomplished.
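
Here is a sketch of what "putting it into the deployment process" can look like: baseline checks gate the end of the deployment the way unit tests gate a CI build. Each check function below is an illustrative stand-in for a real probe, not anyone's actual API.

# Stand-ins for real probes (HTTP 200 on the homepage, the expected
# config file in place, sentinel test data present).
def check_homepage_up():
    return True

def check_config_applied():
    return True

def check_test_data_loaded():
    return True

BASELINE_CHECKS = [check_homepage_up, check_config_applied, check_test_data_loaded]

def finish_deployment():
    # Run as the last step of the deployment; passing means the baseline
    # testing has been accomplished, not that the code is perfect.
    failures = [c.__name__ for c in BASELINE_CHECKS if not c()]
    if failures:
        raise RuntimeError(f"Baseline tests failed: {failures}")
    print("Baseline tests passed; deployment verified")

finish_deployment()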

There are several types of automated test tools that are very reliable and provide valuable information with zero maintenance. If you add them to the deployment process, they have a chance of becoming institutionalized in the same way that automated unit testing is now a given in continuous integration.

a. Use automated dynamic analysis tools in QA: The boss kept coming into the Web team’s office and asking why this link or that link was broken, or why there was a misspelled word or grammar error on a page. I found this seriously embarrassing.

You can’t easily check 1,000 pages manually for a whole list of structural errors. But there are good tools out there that can test these things lightning fast in any environment, from test to production. The first time I ran a test tool against the site, it reported that about 6% of the links were broken. The team went into shocked denial when I reported this. “The tool is broken,” they said vehemently.

I started testing the tool. It was not broken. Next, I ran it against our competitor’s sites. Big companies or small, database-driven or hand-coded and static, they averaged 7% broken links. This got my interest up, and I looked further.

The automated tool detected tons of broken inbound links. This is bad for your Google Penguin quality ratings. There were hundreds of duplicate file names, which is bad for your Google Analytics numbers when the hits on your homepage are split over three different variations of the name. Then there were the server errors and the CMS errors… Get the idea?

For Web applications, I have had great luck using dynamic analysis tools as part of the test-system deployment to “quick-test” the application and the environment. I invoke the tool at the end of the deployment process to provide baseline tests on the deployed website. It produces several reports, and it can e-mail them to whomever you want.

I like running it from a box that is outside the test and production environment, because then I am also testing the network connectivity, DNS, login credentials, server configuration, and so on. Running it with the right credentials will also test the security boundaries. And, it only takes about five minutes to test a website with a thousand pages.
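
For the curious, a bare-bones version of such a link check can be written with nothing but the Python standard library. The commercial tools do far more (inbound links, duplicate names, server and CMS errors), but this shows the principle; the start URL is a placeholder.

from html.parser import HTMLParser
from urllib.error import URLError
from urllib.parse import urljoin
from urllib.request import urlopen

START_URL = "https://qa.example.com/"   # placeholder site under test

class LinkCollector(HTMLParser):
    # Gathers the href of every <a> tag on the page.
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href") if tag == "a" else None
        if href:
            url = urljoin(START_URL, href)
            if url.startswith(("http://", "https://")):
                self.links.add(url)

page = urlopen(START_URL, timeout=10).read().decode("utf-8", errors="replace")
collector = LinkCollector()
collector.feed(page)

broken = []
for link in sorted(collector.links):
    try:
        urlopen(link, timeout=10)
    except (URLError, OSError) as exc:   # HTTPError is a subclass of URLError
        broken.append((link, exc))

total = len(collector.links) or 1
print(f"{len(broken)} of {len(collector.links)} links broken "
      f"({100 * len(broken) / total:.1f}%)")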

There are other automated test tools that can run in this space, like the readability tester. It has pointed out all sorts of content in our pages that are unintelligible to humans. There is also a tool that checks metadata on Web pages and videos when they are delivered, so it can test the database, the CMS, the Web application or the static content, wherever it is. I could spend a whole article talking about all the ways SEO gets messed up when pages don’t have the correct metadata. If you have a thousand application pages, pictures and videos, do you have time to check that every one of them is properly titled, described, and tagged every time you deploy?
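
A metadata check follows the same pattern. This sketch flags pages that are missing a title or a meta description; the page list is a placeholder, and a real tool would also cover images, videos and the rest of the SEO checklist.

from html.parser import HTMLParser
from urllib.request import urlopen

PAGES = [   # placeholder list; a crawler or sitemap would feed a real run
    "https://qa.example.com/",
    "https://qa.example.com/pricing",
]

class MetaAudit(HTMLParser):
    # Records whether the page carries a <title> and a meta description.
    def __init__(self):
        super().__init__()
        self.has_title = False
        self.has_description = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self.has_title = True
        elif tag == "meta" and a.get("name", "").lower() == "description" and a.get("content"):
            self.has_description = True

for url in PAGES:
    audit = MetaAudit()
    audit.feed(urlopen(url, timeout=10).read().decode("utf-8", errors="replace"))
    missing = [name for name, ok in
               (("title", audit.has_title), ("description", audit.has_description))
               if not ok]
    print(url, "OK" if not missing else f"missing: {', '.join(missing)}")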

b. Automate data preparation and distribution: The best ROI in test automation comes from automated data preparation: preparing a database with all the data that the testers need to run their tests. Good test data design and automated data preparation ensure that the historical data is correct and that no time is wasted manually re-keying data during the test execution stage.

This really speeds up manual testing. For example, student records with unfinished homework or failed exams are necessary to trigger alerts to the teacher. Or a student not being in class for two weeks triggers an alert to the teacher and the principal. Or a customer’s record showing that he or she has signed a contract triggers the next step in an order fulfillment scenario.

Deployment automation can execute SQL scripts to prepare and deploy the test data when and where you need it. The deployment-automation tool sees this as just another automated deployment process. This really pays off if you have multiple test servers that all need the data deployments. The ability to automatically distribute test data where and when it is needed represents an enormous time-saver for the test effort.
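
Here is a sketch of the idea, using SQLite so it runs anywhere: the same seed script is executed against every test target, making test data just another deployment artifact. The table, rows and target list are illustrative; a real setup would run your SQL scripts against your actual test databases.

import sqlite3

SEED_SQL = """
CREATE TABLE IF NOT EXISTS students (id INTEGER PRIMARY KEY, name TEXT, last_seen TEXT);
DELETE FROM students;
-- A student absent two weeks, to trigger the teacher/principal alert scenario:
INSERT INTO students VALUES (1, 'Absent Student', date('now', '-15 days'));
INSERT INTO students VALUES (2, 'Current Student', date('now'));
"""

TEST_TARGETS = ["qa1.db", "qa2.db", "staging.db"]   # placeholder databases

for target in TEST_TARGETS:
    conn = sqlite3.connect(target)
    conn.executescript(SEED_SQL)   # the same script runs against every target
    conn.commit()
    conn.close()
    print(f"Seeded test data into {target}")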

Only test what has changed (if possible)
The most time-consuming testing we do is function, story and user acceptance testing, because they are manual. What is the best way to cut down on the time we spend on manual testing? Run fewer tests: only test what has changed. I see this going on all the time. The problem is that we can't normally be sure of what has changed in a manual environment, so there is a lot of (uncomfortable) risk involved.

What if you knew exactly what had changed (since the last deployment) in every tier of the application you are testing? Could you be comfortable testing only the changes? There is no single answer to this question. But, if I had been sure about what had changed in the application at the VoIP phone company, I would have been able to reduce my test coverage by about 80% and cut the 30 hours down to six without feeling uncomfortable.

Here is why this can be comfortable: The CI server stores the build artifacts in an artifact repository. (That's a Definitive Software Library under ITIL.) The deployment tool takes the artifacts from there and keeps a complete inventory of every version of everything: artifacts, processes, configuration, environments, etc. I point this out because an inventory like this is necessary for the technique, and not all deployment-automation tools keep one, so be cautious.

I also have to point out that even if the tool has an inventory capability, there may be someone in the organization who can’t resist the odd tweak or diddle. There has to be a strong quality process in place to support (and protect) the tools, the code and the pipeline.

If all the changes are made using the tools, you will always know exactly what code is where, how it got there, who ordered it, who approved it, what the outcome was, and so on. So, when the CI server tells you what is different in the build of every tier in your application, and deployment automation deploys that version of the application, you can have great confidence that you know exactly what has changed. There may be dependencies and integration issues to consider of course, but for the most part I have had very good luck with this strategy.
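
Here is the whole strategy in miniature: if the tool's inventory records a version for every tier, diffing two inventories tells you exactly what to retest. The component names and versions below are made up for illustration.

# Inventories as the deployment tool might record them (illustrative data).
previous = {"portal-web": "1.4.1", "billing": "2.0.3", "db-schema": "7", "ip-config": "r112"}
current  = {"portal-web": "1.4.2", "billing": "2.0.3", "db-schema": "8", "ip-config": "r112"}

changed   = sorted(k for k in current if previous.get(k) != current[k])
unchanged = sorted(k for k in current if previous.get(k) == current[k])

print("Retest:", changed)              # ['db-schema', 'portal-web']
print("Skip (unchanged):", unchanged)  # ['billing', 'ip-config']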

Marnie Hutcheson is a technical tester with more than 15 years of experience designing and automating software test systems running on the Internet. Her specialty is systems integration testing and release management in large real-time systems. She holds an Engineering degree from the University of Colorado in Denver.