API testing has become more important than ever as three-tier architectures and monolithic applications give way to something far more complex, driving up the number of APIs in applications.

According to Joachim Herschmann, senior director and analyst on the application design and development team at Gartner, the number of APIs in applications has grown tremendously over the past few years because APIs offer a great way to extend the functionality of an application without having to write code. However, the sheer number of API tests has created many challenges for developers, similar to those found in other types of testing.

API calls are often difficult to test because they require a testing environment to be set up and maintained, which can be a time-consuming task for developers who already have a lot on their plates. There are also many different types of API testing that testers must handle, including functional testing, load testing, security testing, fuzz testing, and performance testing.

Organizations are having to move API testing up from their developers, who may have written some initial API tests, frameworks, or patterns, and shift it to test engineers or quality engineers, who may need to maintain those tests, build new versions of them, and create frameworks to help them do that, explained Coty Rosenblath, CTO at Katalon.

However, many organizations still rely on QA working in tandem with developers.

“Since we can’t gain the knowledge and experience overnight, it’s important that we ask for help and use the knowledge and experience of other people — like developers. They offer great support and they can teach QA a lot,” said Adam Lochno, a quality assurance analyst at The Software House. 

Certain platforms make this easier by enabling testers to run different types of tests in one place. Often, teams first adopt a platform for automated functional testing and, once they have those tools, extend them to the API world.

“This enables the combination of functional and API testing in a way that lets testers and engineers do things like setting up a context for testing using the API and then check its functionality. Or vice versa, they can do a functional test suite, and then check its performance and confirm that it did the things it was supposed to through an API so that you’re not having to deal with some of the vagaries of functional testing except when you really want to know the specifics,” Rosenblath said. 

Another challenge is tracking the performance and behavior of those tests, a problem that is amplified in API testing.

It is especially critical in API testing to track how quickly an API responds, because slowdowns can scale up and cascade. Any given functional test may depend on a number of different API calls, and those calls may stack up. Testers want to understand the baseline performance, recognize when it goes off the rails, and tackle the problem, because it can have wide-ranging implications.

“If your login API or your tracking API starts to spike in terms of maybe 10-20% performance, it could have an impact across your application. So having a system to track not only the working capability of the API, but did it perform its function, and how does it perform, it is important,” Rosenblath said. 
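One minimal way to track that kind of drift is to record call durations and compare each new call against a rolling baseline. This is a sketch, not any vendor's implementation; the 20% default tolerance mirrors the kind of spike Rosenblath mentions:

```python
import statistics
import time

def call_with_timing(fn, timings, *args, **kwargs):
    # Invoke an API call and record its wall-clock duration.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    timings.append(time.perf_counter() - start)
    return result

def exceeds_baseline(timings, latest, tolerance=0.2):
    # Flag a call that runs more than `tolerance` (20% by default)
    # over the median of previously observed durations.
    baseline = statistics.median(timings)
    return latest > baseline * (1 + tolerance)
```

Using the median rather than the mean keeps a single earlier outlier from skewing the baseline.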

API standards also vary widely, so testers need to keep up with the state of the art in API testing and its complexity. Whether it’s SOAP or REST APIs or GraphQL, each has its own authentication protocols and network configurations that need to be tracked, Rosenblath explained.

The Software House’s Lochno has found that documentation is sometimes lacking for REST APIs, leaving testers without information about what fields endpoints take, which is a frequent blocker during tests. In such situations, he said, it is necessary to find a person who can provide testers with this information.

“Any form of documentation is invaluable. When we have it, we can easily find the information we need, such as the required fields in the body request, or what response we can expect. Swagger deserves special attention. This automatically created documentation not only presents all the data on the tray but also allows you to ‘shoot the API’ directly from the documentation,” said Lochno.  

In the absence of documentation, testers are forced to turn to the developer console and spend a lot of time sifting through its output to find the answer, which is often buried under unnecessary information.

API tests are ideal for automation

While APIs and how they behave can be very complex, API testing is very suitable for automated testing and even autonomous testing, which can generate tests. 

“It’s comparatively straightforward these days to take API definitions like Swagger files, or similar definitions, read them, and create test cases right out of that. And most vendors have capabilities to do that,” Herschmann said. “On the other hand, for UI tests, there is not necessarily such a thing as a definition of a user interface. There’s a lot of change going on, which makes UI tests a lot more brittle. So API tests are more stable in the sense that these interfaces or the contracts or definitions change less frequently than a user interface does.”
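Deriving test cases from a definition file can be sketched in a few lines. The spec fragment below is a minimal, hand-written stand-in for a real Swagger/OpenAPI document:

```python
# Minimal OpenAPI-style fragment, invented for illustration.
spec = {
    "paths": {
        "/users": {
            "get":  {"responses": {"200": {"description": "list users"}}},
            "post": {"responses": {"201": {"description": "create user"}}},
        },
        "/users/{id}": {
            "get": {"responses": {"200": {"description": "fetch one user"}}},
        },
    }
}

def generate_test_cases(spec):
    # Turn each path/method/response triple into a test-case stub.
    cases = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            for status in op["responses"]:
                cases.append({"method": method.upper(),
                              "path": path,
                              "expect_status": int(status)})
    return cases

cases = generate_test_cases(spec)
```

A real tool would also derive request bodies and parameter values from the schema definitions, but the stability Herschmann describes comes from exactly this: the contract is machine-readable, so the tests can be regenerated whenever it changes.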

Another common way to do API testing is to create test cases by recording the traffic against the API, similar to how UI tests are performed. 

The developer or tester uses a recorder mechanism to capture the raw blueprint of the test case. They can record a day’s worth of traffic, then replay that same traffic on another system to see whether the new system can handle the same kind of load.
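A record-and-replay harness can be sketched as two small functions, with plain callables standing in for the original and new systems. This is an illustrative simplification, not a production traffic recorder:

```python
def record(handler, requests):
    # Capture traffic as (request, response) pairs against the original system.
    return [(req, handler(req)) for req in requests]

def replay(handler, recording):
    # Re-issue recorded requests against another system; report any
    # responses that differ from what was originally captured.
    mismatches = []
    for req, expected in recording:
        actual = handler(req)
        if actual != expected:
            mismatches.append((req, expected, actual))
    return mismatches
```

An empty mismatch list means the new system handled the recorded traffic identically.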

A useful way to perform API tests is by directly accessing the business logic that is accessible through the APIs rather than first going through whatever interface layer is there. 

Through service virtualization, developers or testers can virtualize a service and instantiate it in another environment to carry out some of this testing.

“Service virtualization is really intelligent in the sense that it implements the behavior of a more complex set of APIs that interact with one another,” Herschmann said. He used the example of wanting customer data, and a single request goes out, but on the back end, it triggers several subsequent requests – one for customer name, another for the customer address, and another for the financial details of that account. All of those might go to different databases. “It triggers a real simulated activity of that service. And so that allows me to do very complex testing in environments where the actual services may not be available,” he said. 
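Herschmann's customer-data example can be sketched as a virtual service that fans a single request out to simulated back ends. All names and data here are invented for illustration:

```python
# Each "back end" simulates a downstream service (name, address, financial
# databases); none of the real services need to be available.
VIRTUAL_BACKENDS = {
    "name":      lambda cid: {"name": f"Customer {cid}"},
    "address":   lambda cid: {"address": f"{cid} Main St"},
    "financial": lambda cid: {"balance": 0.0},
}

def virtual_customer_service(customer_id):
    # One incoming request triggers several simulated subsequent requests,
    # mimicking the composite behavior of the real service.
    result = {"id": customer_id}
    for backend in VIRTUAL_BACKENDS.values():
        result.update(backend(customer_id))
    return result
```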

API tests are more accessible to computer processing than traditional functional tests, according to Katalon’s Rosenblath. With API tests you have plain text that can be parsed to understand what was asked for and what was returned.

An automated testing tool can track the actual execution of APIs by watching API logs as they occur in the live system, using them to inform test configurations and context. The tool can, for example, look at a stream of API calls, understand the dependencies between them, structure the test so that everything occurs in the right order, and then examine the data being passed back.
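Ordering calls by their inferred dependencies is essentially a topological sort. A minimal sketch, assuming the dependencies have already been extracted from the logs (and omitting cycle detection for brevity):

```python
def order_calls(calls, depends_on):
    # Return `calls` ordered so every call runs after its prerequisites,
    # e.g. "create order" before "pay order" before "ship order".
    ordered, done = [], set()

    def visit(call):
        if call in done:
            return
        for dep in depends_on.get(call, []):
            visit(dep)          # schedule prerequisites first
        done.add(call)
        ordered.append(call)

    for call in calls:
        visit(call)
    return ordered
```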

Once normal data patterns are established from the tests, teams can infer what the outliers might be and feed those in as potential edge-case tests.
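One simple way to surface such outliers is a standard-deviation cutoff: values far from the observed mean become candidate edge-case inputs. A rough sketch, not a specific product's algorithm:

```python
import statistics

def edge_case_candidates(observed, k=3.0):
    # Values more than k standard deviations from the mean of the
    # observed data are promoted to candidate edge-case tests.
    mean = statistics.mean(observed)
    sd = statistics.pstdev(observed)
    return [v for v in observed if abs(v - mean) > k * sd]
```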

According to Herschmann, organizations are generally not doing enough testing at the API layer and are rather focusing on UI testing. But that might change as more organizations are focused on creating tools for API tests. 

“Instinctively, the first way to think about testing is through the user interface, because we humans interact with an application through the user interface. But basically, humans only see the tip of the iceberg, literally, the front end. We don’t see that, potentially, the website makes hundreds of calls to the back end,” Herschmann said. 

However, some organizations go the other way and start at the API testing level. These tend to be organizations building highly interactive, financial-network types of solutions, where it’s all about low latency and fast connections to all servers.

Testing vendors that have long focused on UI testing tools are now shifting their focus toward APIs, either by growing tool capabilities organically or through acquisitions, according to Gartner’s Herschmann.

While API testing needs to be in the spotlight, organizations want to make sure that they have a good balance of all types of testing, according to Rosenblath. 

Some organizations have extensive API suites but have neglected their functional testing, which can leave them blind to user experience issues.

If testing is handled purely within the development organization, the engineers may build plenty of API tests but not enough functional tests, and vice versa for companies that concentrate their testing efforts in the QA organization.

All in all, API tests need constant maintenance and a testing team around them to handle changes that the engineers are not fully aware of.

“You may have new tests that are needed, that aren’t the result of changes to the API code itself, but changes to the way the business is operating, maybe something upstream, or maybe something in the data infrastructure has changed. And that changes the way the API behaves. That’s something the engineering organization is probably not dealing with,” Rosenblath said. “But your test organization needs to take that into account and needs to be connected with their business organization and put those tests in place quickly so that they know and they can assure you as a business that you’ve got that new business requirement covered in your API.”

One company’s journey into API testing 

The job search site Jooble has a distributed monolith architecture but is trying to move to a microservice architecture. Andrii Rybalko, a developer at Jooble, said that in their case, the hardest thing is separating APIs from their dependencies, where a dependency is any executable component that is not part of the app itself: other APIs, databases, RabbitMQ, Redis, and so on.

The team at Jooble divides dependencies into two groups: APIs and everything else. For APIs, they create stubs or mocks. For databases, RabbitMQ, Redis, and the rest, they use real instances in a special testing environment, such as Docker or virtual machines, that is recreated from scratch on each test run.

“At first, we were creating static stubs of APIs, but this way we cannot check that some endpoint was called, and it’s unclear which setup of the stub corresponds to which test. So, we moved to another approach where we dynamically set up mocks of APIs inside the test itself,” Rybalko said. “This solves the previous problems, but introduces new ones – setups can intersect each other, and when you run tests in parallel, this can cause some strange bugs. To deal with that, we generate pseudo-random data, which differentiates from one setup to another.”

This approach with pseudo-random data is also used for databases and other dependencies, for the same reason – intersections of tests.
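The combination Rybalko describes, per-test mocks plus pseudo-random data, might look roughly like this. `MockAPI` and `unique_email` are hypothetical names invented for illustration, not Jooble's actual code:

```python
import uuid

def unique_email(prefix="user"):
    # Pseudo-random test data: parallel test runs cannot collide on it.
    return f"{prefix}-{uuid.uuid4().hex[:8]}@example.test"

class MockAPI:
    # Per-test mock: each test registers its own stubs, so it is always
    # clear which setup belongs to which test.
    def __init__(self):
        self.calls = []
        self.stubs = {}

    def stub(self, endpoint, response):
        self.stubs[endpoint] = response

    def call(self, endpoint):
        self.calls.append(endpoint)   # recorded so the test can assert it was hit
        return self.stubs[endpoint]
```

Because each test instantiates its own `MockAPI` and generates its own data, parallel runs no longer intersect, which was the source of the "strange bugs" Rybalko mentions.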

This gives the team the ability to deploy logical units independently and automate regression, which leads to daily releases and a reduction in slow and at times unreliable manual testing. That decreases the time needed to test business hypotheses and provides reliable, stable development speed.

Contract testing offers a more holistic approach to automated testing 

An important aspect of API testing is contract testing, which focuses on ensuring that spec files such as Swagger/OpenAPI or RAML properly capture the contracts between API consumers and producers.

“Contract testing is important because it’s the type of technology that allows you to gain some level of confidence with the aspects that contract testing tests, while also increasing velocity. It’s a lower overhead method that’s actually piggybacking or leveraging something that’s already enumerated, in many of these cases: the API contract,” said Abel Mathew, CTO at Sauce Labs.

Contract testing can also validate spec files in an automated way by capturing how an API consumer and producer communicate with each other.

By creating this contract, developers have a mechanism for specifying the behavior of their APIs. They can also leverage code-generation techniques: from a spec file, developers can generate much of the boilerplate often associated with an API, according to Mathew.

“I can leverage that same file to take the same requirements, the request and response that I use, and then I can say, well, now that I know what this should look like from a black box testing perspective, I can effectively validate what the request and response should look like in terms of both form and content,” Mathew said. “Now, because it’s using a spec file that’s created at the beginning of the development cycle, the first benefit that we have here is that it introduces some form of testing early on in the development cycle.”
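Validating a response's form and content against a contract, as Mathew describes, can be sketched with a hand-written fragment. Real contract-testing tools work from full Swagger/OpenAPI documents, but the principle is the same:

```python
# Minimal, hand-written contract fragment: each operation promises a
# response shape (field names and types). Invented for illustration.
contract = {
    "GET /users/{id}": {
        "response": {"id": int, "name": str, "active": bool},
    }
}

def validate_response(operation, body, contract):
    # Return a list of contract violations; an empty list means the
    # response satisfies the contract in both form and content type.
    expected = contract[operation]["response"]
    errors = []
    for field, ftype in expected.items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], ftype):
            errors.append(f"wrong type for {field}: {type(body[field]).__name__}")
    return errors
```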

Without this type of testing, an error might surface only in functional testing, and one then has to drill down further to identify the cause.

This method of test generation also makes it fairly easy to test third-party APIs, according to Mathew. For example, if an API calls out to some third-party service, one can effectively enumerate the expected contract from that API. Testers then get more holistic testing, which is a huge advantage compared with writing functional tests, Mathew said.