Today’s networks, websites, communications, apps and complex IT infrastructures are churning out massive amounts of data, so much so that there were 2.7 billion terabytes of digital content in 2012, according to IDC. That number is predicted to skyrocket to more than 8 billion terabytes by 2015.
To capitalize on this trend, it seems that every day another new company joins the ranks of those able to monitor, analyze, store and ultimately sell Big Data to help organizations maximize efficiencies and growth.
“Operational Intelligence,” as it’s been coined, is being offered up as a service to businesses seeking real-time analysis of machine data for better visibility, optimization and predictability for their bottom line. There is obviously great value in this.
What many businesses are also starting to realize is that there is another massive pool of company information that, when properly analyzed, can yield an even more positive impact on the bottom line. That information is computer source code, which constitutes some of the most elemental intellectual property for almost every business. Taking a Big Data approach to source code and software development enables businesses to drive down costs and bring products and services to market faster and more efficiently than their competitors.
“Code intelligence” is the process of scrutinizing lines of code for quality and security issues, turning large amounts of raw source code into actionable information for the business. And with more than 60 million lines of software code being written every day, code intelligence is the best way for companies to gain visibility into the state of their software quality and security (and risks!), to ensure fast, efficient software delivery and, in the end, make better business decisions.
The average coding flaw costs US$2,000 to fix when it’s caught in the development stage. Once the code is finalized, an error spotted in QA costs $24,000 on average to fix. However, 80% of software development effort is spent on rework, that is, work done after a defect, glitch or breach has taken place.
After a release cycle, companies spend upward of $100,000 to fix a development flaw, not including the money spent on brand and reputation management or on rapid responses to customer complaints.
Across industries, companies are realizing the time- and money-saving benefits of analyzing their source code and using this code intelligence. I know executives in the manufacturing industry who have used it to cut their product release cycles in half. There are QA teams in the automotive industry who have used it to reduce development costs by 25%. One financial services provider told me that code intelligence allowed them to fix more than 3,500 software defects before the release reached customers’ hands. And on the customer-facing side, tech support volumes have dropped significantly, and complaint calls are down by more than 50% for some companies.
We should all be leveraging code intelligence to gain continuous visibility into the software quality and security within our organizations. The process will not only allow us to make better decisions on our product and service offerings, but will also transform software development into a competitive advantage.
In a “volume, velocity and variety”-driven world, code intelligence is the Big Data that we need to monitor, analyze and capitalize on to innovate and, even more importantly, keep our customers coming back.
Anthony Bettencourt is president and chairman of the board of testing company Coverity.