We had a chance to talk unit testing with Eli Lopian, CEO of test company Typemock. Here are his thoughts on that and other matters:

SD TIMES: Unit testing has been compared to going to the gym in the sense that everyone knows it’s good for them, but the time and effort to implement seem daunting. How do companies prep themselves to overcome that initial hurdle, especially when implementing unit testing in legacy code?

ELI LOPIAN: This is a great point, as developers are typically hesitant to touch legacy code. Most developers want to work on the shiny new code; they still see legacy code as a threat and work in “survival” mode – minimal changes, and therefore limited growth and innovation.

As you are probably aware, 70% of software costs are attributed to working on legacy code (I personally think that the number is higher). Many don’t know that 60% of the work on legacy code is actually code enhancements, which means that feature work in legacy code amounts to roughly 140% of the feature work in greenfield code (!): 0.7 × 0.6 puts it at about 42% of total effort, versus at most the 30% spent on new code.

The fun fact is that the better engineered the code is (i.e., it has unit tests), the more maintenance that code gets, as developers are not scared to modify it and add new features.

Therefore, in order to get over that initial hurdle, leading companies do the following:

  • They celebrate covering their legacy code and declare “We touch legacy code”, since that is where most of the features go. The subject should be on the table: for many developers it is convenient to set it aside, and it is the organization’s duty to shed light on it in order to remove the technical debt.
  • They figure out which legacy code is the right code to unit test. It is the classic 80-20 paradigm: identify the 80% of value that can be achieved in 20% of the time by finding the areas that get the most changes, that are prone to bugs and that are critical to the system.
  • Getting the right tools for the job also helps, be it the build server, the unit test framework, the isolation framework or a test-generation solution such as Suggest.
  • Last but not least, these companies take it all in small bites and don’t try to do it all in one go. They take it slowly, get one team up and running, and then expand toward a zero-technical-debt policy.


SD TIMES: Are Typemock customers more frequently those looking to streamline preexisting unit testing and TDD practices or are they hoping for a way to ease themselves into the practice?

We see a mix of both. Many companies use our tools to ease themselves into unit testing, getting unit tests up and running without any need to modify their existing code. We also see others that use our tools to fill in coverage gaps in a streamlined way.

We are actually witnessing a different segmentation, between those who think only developers should write the tests and those who let the computer write them. We cater to both: mocking tools that let you test code which depends on objects that are not yet created or that require a slow and complex setup, without constraining your design, as well as Suggest, which writes the tests for you just like a compiler.
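
For readers who want to picture the isolation idea, here is a minimal sketch using Python’s built-in unittest.mock rather than Typemock’s own frameworks (which target .NET and C++ and have their own APIs). The legacy function and its dependency are hypothetical names invented for the illustration; the point is the one Lopian makes: the dependency is swapped out at test time, so the existing code runs unchanged and needs no redesign before the first test.

```python
from unittest import TestCase, main
from unittest.mock import patch


def fetch_invoices(customer_id):
    """Stand-in for a slow call to a real system (database, web service, ...)."""
    raise RuntimeError("not reachable from a unit test")


def total_owed(customer_id):
    """Hypothetical legacy code: it calls its dependency directly, with no injection point."""
    invoices = fetch_invoices(customer_id)
    return sum(invoice["amount"] for invoice in invoices)


class TotalOwedTest(TestCase):
    # Replace the dependency in place for the duration of the test.
    @patch(f"{__name__}.fetch_invoices",
           return_value=[{"amount": 10.0}, {"amount": 32.5}])
    def test_sums_invoice_amounts(self, fake_fetch):
        # The legacy code runs unchanged, yet the test is fast and deterministic.
        self.assertEqual(total_owed(7), 42.5)
        fake_fetch.assert_called_once_with(7)


if __name__ == "__main__":
    main()
```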

We put in a lot of hard work so that developers can make their own choice. They can accept all the tests provided by our solution, or review each test and accept, reject or even modify it.

SD TIMES: What sort of research was done before setting out to write the AI model in Typemock Suggest?

We have been working with unit tests, extreme programming, Agile and DevOps for over 15 years now and have accumulated vast experience, data and know-how. Our experience showed us that AI is not enough for creating correct tests that are also readable, easy to maintain and built for future enhancements. We needed something else… a little bit of “magic”.

Several years ago I held a series of meetings with customers, from which I understood that I hadn’t yet solved the problem of unit testing. I then met with my employees and was frustrated that I didn’t have any way of helping them with our product development.

I went to the beach, looked at the sea and tried to think of how I could really solve the unit testing problem. I decided that in order to clear my mind I would go out for a swim.

I was in the sea, breathing slowly, seeing the shoreline between the waves every few strokes, and while I was swimming I suddenly had a thought. I remembered thinking about how I would write a program to solve Sudoku: I wouldn’t teach it the same methods that I use; I would simply let it fill in numbers systematically until it reached an answer. I suddenly understood that I had been trying to automate writing tests the human way instead of the computer way – trying to get the computer to understand the context.
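
To make the analogy concrete, here is a tiny sketch (mine, not Typemock’s) of the ‘computer way’ Lopian describes for Sudoku: plain systematic trial and backtracking, with none of the strategies a human solver would use and no notion of context.

```python
def solve(board):
    """Fill a 9x9 Sudoku board (0 = empty) by systematic trial and backtracking."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for digit in range(1, 10):
                    if allowed(board, r, c, digit):
                        board[r][c] = digit
                        if solve(board):        # move on to the next empty cell
                            return True
                        board[r][c] = 0         # dead end: undo and try the next digit
                return False                    # no digit fits here: backtrack
    return True                                 # no empty cells left: solved


def allowed(board, r, c, digit):
    """Check the row, column and 3x3 box constraints."""
    if digit in board[r]:
        return False
    if digit in (board[i][c] for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != digit
               for i in range(3) for j in range(3))
```

The solver never ‘understands’ the puzzle; it just generates candidates and checks them, which is the shift Lopian describes applying to test generation.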

Once I grasped that, we managed to make the breakthrough and create Suggest, an innovative solution that learns your code base and builds unit tests to cover it, using Fuzzy Logic and patented algorithms to create and rate tests without the need to ‘understand’ the context.
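
To give a flavour of what generating tests without ‘understanding’ the code can look like, here is a deliberately tiny, self-contained toy in Python. It calls an existing function with a grid of candidate inputs and records whatever the function does today as the expected behaviour. Everything in it (the shipping-cost function, the inputs, the approach) is invented for this article; it is not Typemock Suggest’s actual engine, which learns the code base and rates the tests it creates.

```python
import itertools


def legacy_shipping_cost(weight_kg, express):
    """Hypothetical existing function we want to pin down with tests."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    cost = 5.0 + 1.2 * weight_kg
    if express:
        cost *= 1.5
    return round(cost, 2)


def generate_characterization_tests(func, candidate_args):
    """Call func systematically and record what it actually does today.

    No understanding of what the function is 'for' is needed; each recorded
    outcome becomes an assertion a developer can accept, reject or edit.
    """
    tests = []
    for args in candidate_args:
        try:
            tests.append((args, "returns", func(*args)))
        except Exception as exc:             # raised errors are behaviour too
            tests.append((args, "raises", type(exc).__name__))
    return tests


if __name__ == "__main__":
    candidates = itertools.product([-1, 0, 0.5, 10], [False, True])
    for args, kind, outcome in generate_characterization_tests(
            legacy_shipping_cost, candidates):
        print(f"legacy_shipping_cost{args} -> {kind} {outcome}")
```

A real tool would emit these as runnable tests in your test framework and rate them before offering them up, which is where the accept, reject or modify step mentioned earlier comes in.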

SD TIMES: What methods were used to train the model?

Our goal is to get to at least 80% coverage within a reasonable time frame, with no false positives.

In order to do this we built the engine’s algorithms using our accumulated know-how. True to an Agile mindset, we started with tests that let us see whether we had reached our goals. We then went on to perfect the results by analyzing millions of lines of code and letting the engine operate on dozens of open-source and private code bases. It generated tens of thousands of unit tests, which we reviewed and rated in order to tweak the algorithms and achieve better results. This process is ongoing, and Suggest gets better with every new version we launch.