Software is like a house: It must be properly maintained, or peeling wallpaper might reveal that the structure underneath isn’t as stable as you thought. The metaphor takes on new meaning in mergers and acquisitions, as companies on both the buy side and the sell side attempt to patch up rough edges that could ruin a deal.

When buying a house in a seller’s market, one might forgo the due diligence of an engineering inspection of the electrical, plumbing, roof, and ceiling joists. If it’s not clearly a seller’s market, a thorough inspection is a good idea: It can remove some of the information asymmetry and inform the purchase decision, or at least the price negotiation.

What does this have to do with software development and IT management?

When companies enter into an acquisition, they also take on the technology assets of the selling company, including all the buggy software and hodgepodge applications that have been cobbled together to support operational needs. More often than not, these technology assets are not examined as part of the M&A due diligence process, which leaves the buy side without an objective understanding of the application portfolio it is acquiring. Put bluntly, a buyer could be acquiring a system that simply does not work, without knowing it.


So for CIOs and IT managers who don’t have a chance to vet this new set of technology before a company merger, there are two key things to measure: reliability and technical debt.

According to the latest data from CAST Research Labs, there are more than 550 million lines of code in today’s average application. That’s a lot to get through, and ideally IT should review acquired applications within the first three to six weeks after a merger to set benchmarks and prepare for a successful integration.

Check the reliability status of your assets
U.S. companies lose up to US$26 billion in revenue to downtime every year. That’s a big number, even for big business, and routine application benchmarking can eliminate much of that loss.

Consider the recent missteps of the Washington State Department of Corrections, which accidentally released more than 3,200 prisoners early over a span of 12 years because of an unaddressed software glitch. Considering the U.S. keeps detailed records on the 2.2 million individuals incarcerated nationwide, that’s a lot of data to manage, and this was just one state. In this case, some officials knew the miscalculation was happening but failed to take practical steps to eliminate the core problem. The result was a complete breakdown of the system and the release of potentially dangerous criminals back into society.

While it takes some elbow grease, IT managers must measure not just the output of the data in their systems, but the structural quality of the systems themselves. Uncovering vulnerabilities in critical systems exposes issues that could disrupt services, hurt customer satisfaction, and damage the company’s brand, and it does so before those issues take hold. By establishing an application reliability benchmark, managers can see whether systems have stability and data-integrity issues before they’re deployed into production, which means secure data, 24×7 uptime, and happy customers.
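What might such a benchmark look like in practice? Here is a minimal sketch, assuming the findings come from some static-analysis tool; the severity weights and passing threshold are numbers a team would pick for itself, not an industry standard.

```python
# Minimal sketch of a reliability gate: aggregate static-analysis findings
# into a single score and compare it to an agreed benchmark before an
# acquired application is promoted to production.
# The severity weights and passing threshold are illustrative assumptions.

SEVERITY_WEIGHTS = {"critical": 25, "high": 10, "medium": 3, "low": 1}
PASSING_SCORE = 85.0  # benchmark agreed with the business, not a standard

def reliability_score(findings: dict, kloc: float) -> float:
    """Turn defect counts into a 0-100 score, normalized per 1,000 lines."""
    weighted = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in findings.items())
    return max(0.0, 100.0 - weighted / kloc)

acquired_app = {"critical": 3, "high": 12, "medium": 40, "low": 95}
score = reliability_score(acquired_app, kloc=25)  # a 25-KLOC application
print(f"Reliability score: {score:.1f}")
if score < PASSING_SCORE:
    print("Below benchmark: hold integration until critical defects are fixed.")
```

The exact scoring model matters less than applying the same one to every acquired application, so that a low score is a signal about the system rather than about the measurement.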

Identify technical debt
Technical debt refers to the accumulated cost and effort required to fix problems that remain in code after an application has been released. CAST Research Labs estimates the technical debt of an average-sized application at $1 million, and according to Deloitte, more CIOs are turning their attention to technical debt to build business cases for core renewal projects, prevent business disruption, and prioritize maintenance work.
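To make that $1 million figure concrete, here is a back-of-the-envelope calculation in the spirit of the usual definition: violations worth fixing, times effort per fix, times a blended hourly rate. Every input below is an illustrative assumption, not a CAST or Deloitte number.

```python
# Back-of-the-envelope technical-debt estimate:
#   debt = violations worth fixing x hours per fix x blended hourly rate
# All inputs below are illustrative assumptions.

violations = {"high": 400, "medium": 2_500, "low": 9_000}    # from analysis
fix_fraction = {"high": 0.50, "medium": 0.25, "low": 0.10}   # share worth fixing
hours_per_fix = 2.5   # average remediation-plus-retest effort per violation
hourly_rate = 75.0    # blended developer cost in USD

debt = sum(
    count * fix_fraction[sev] * hours_per_fix * hourly_rate
    for sev, count in violations.items()
)
print(f"Estimated technical debt: ${debt:,.0f}")  # ~$323,000 with these inputs
```

Plugging in an acquired portfolio’s real violation counts turns an abstract industry average into a figure a deal team can actually negotiate with.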

Despite the heightened awareness of technical debt, many businesses still struggle to estimate it correctly and make the case to business decision-makers. When it comes to Big Data, technical debt can be exacerbated by the enthusiasm and urgency that often come with closing a deal and merging two companies together.

To overcome this, IT managers should task development teams with configuring structural quality tools to measure the cost of remediating and improving core systems. This includes the code quality and structural quality of both buy-side and sell-side applications.
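What that tasking might look like varies by tool, so the sketch below is deliberately generic: a hypothetical harness in which run_quality_scan is a stand-in (returning hard-coded illustrative numbers) for whatever analyzer the team uses. The point it illustrates is running the same measurement over buy-side and sell-side applications so the results are directly comparable.

```python
# Hypothetical diligence harness: apply one structural-quality measurement
# across buy-side and sell-side applications so the numbers are comparable.
# run_quality_scan() is a stand-in for the team's actual analyzer; the
# figures it returns here are illustrative, not real scan output.

from dataclasses import dataclass

@dataclass
class ScanResult:
    app: str
    side: str               # "buy" or "sell"
    violations: int         # structural + code-quality findings
    remediation_usd: float  # estimated cost to fix what must be fixed

def run_quality_scan(app: str, side: str) -> ScanResult:
    sample = {  # placeholder output; in practice, parse the tool's report
        "billing-core": (1_200, 180_000.0),
        "crm-legacy":   (4_800, 610_000.0),
        "etl-pipeline": (2_100, 240_000.0),
    }
    violations, cost = sample[app]
    return ScanResult(app, side, violations, cost)

portfolio = [("billing-core", "buy"), ("crm-legacy", "sell"), ("etl-pipeline", "sell")]
results = [run_quality_scan(app, side) for app, side in portfolio]
for r in sorted(results, key=lambda r: r.remediation_usd, reverse=True):
    print(f"{r.side:>4} | {r.app:<12} | {r.violations:>5} findings | ${r.remediation_usd:,.0f}")
```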

Structural quality measures can identify code and architectural defects that pose risks to the business if they are not fixed prior to a system release. These are carry-forward defects whose remediation will require effort charged to future releases. Code quality addresses coding practices that can make software more complex, harder to understand, and more difficult to modify, a major source of technical debt.
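For a flavor of the defects such measures flag, the snippet below shows two classic carry-forward problems in generic Python, each with the fix a reviewer would typically recommend. The example is not tied to any particular analyzer.

```python
# Two classic carry-forward defects and their fixes; conn is any DB-API
# connection (e.g., sqlite3). Generic examples, not tied to a specific tool.

# Defect: SQL built by string concatenation -- an injection risk, and flagged
# as a structural-quality violation by most analyzers.
def find_user_unsafe(conn, name):
    return conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchone()

# Fix: a parameterized query keeps data separate from the SQL statement.
def find_user_safe(conn, name):
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchone()

# Defect: file handle that is never reliably closed (leaks if read() raises).
def read_config_unsafe(path):
    f = open(path)
    return f.read()

# Fix: a context manager guarantees cleanup on every code path.
def read_config_safe(path):
    with open(path) as f:
        return f.read()
```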

Equipped with real-world estimates of technical debt (including the remediation and validation of code defects), businesses can work these figures into their M&A strategy, reducing the profit lost to fixing problems by tackling them on the front end.

Understanding that an objective view of technology assets plays a key role in driving value is the first step to a successful merger or acquisition. It’s not enough to simply slap on a fresh coat of paint; real success can only be achieved by taking care to ensure the support beams are as solid as the foundation.