Every developer wants to create great software that users appreciate. Unfortunately, creating software that disappoints is easy, while creating high-quality software is hard. Fortunately, there are a number of things you can do, and tools you can use, to create high-quality software. This article covers what developers should know and which tools they can use to do just that.

It is easy to get wrapped up in the fun and creativity of building software. As a developer, there are days when you are “in the zone” writing code, and you look up to find that you have just written hundreds of lines of code. Sometimes you are writing completely new features; other times you are refactoring an existing piece of code. In either case, you are eager to commit your code to source control because you want to share it with everyone. It is a demonstration of your creativity, of your abilities, of your value.

At this point you should be concerned. Certainly I would have concerns if all I had done was write code. Developers who know how to write high-quality code do not have these concerns, because they have taken steps to mitigate risks and to ensure quality. They have a process.

A software process includes phases such as planning, requirements/analysis, design, development, test/acceptance, and release/deployment. Examples of software processes include agile, waterfall, iterative and incremental development, test-driven development, and Extreme Programming.

It is important to decide early which process your team will follow. No matter what process you choose, the team needs to agree on it. The quality of your software correlates with how faithfully the process was followed. Some of the worst-run software projects never got agreement on the process early on. Teams that can self-organize and agree upon a process quickly are on the road to building high-quality software. These days I am finding that the teams I work with use an agile methodology with elements of iterative and incremental development, test-driven development, and Extreme Programming.

No matter what process you pick, you need tools to help you follow it. Such tools are often referred to as application life-cycle management (ALM) tools. There are many available: comprehensive products like Visual Studio Team Foundation Server (TFS) from Microsoft; suites of tools like those from Atlassian (JIRA, Bamboo, Clover and Confluence) or JetBrains (TeamCity, dotPeek, dotCover, ReSharper); and specific-use tools like Xamarin Test Cloud for mobile application testing or Octopus Deploy for deployment. Teams that I work with mostly use TFS for requirements, work-item tracking and source control. We find TFS very useful because its integration with Visual Studio makes a developer’s life easier.

Know that you don’t need to stick with just one tool or suite of products. Depending on your budget, you may decide to buy multiple tools that overlap in functionality, because certain tools are stronger in specific areas and may offer an advantage there. For example, we use ReSharper for code analysis: it has an extensive set of rules for analyzing your code, and it offers quick fixes to resolve the issues it finds.

Know your requirements
Requirements management is extremely important to developing a quality product. One aspect of quality is known as functional quality: a measure of what the software does vs. what it is supposed to do. To measure functional quality, you need to capture requirements and convey them to the development team.

One of the most widely used products for capturing requirements for .NET developers is Visual Studio with Team Foundation Server. Visual Studio is the development environment for .NET, and Team Foundation Server extends it with full ALM capabilities.

In agile development, requirements are captured using user stories. A user story describes what the user does or needs to do as part of his or her job; for example, “As a store manager, I want to see daily sales totals so that I can plan staffing.” It is not a comprehensive description of the entire job, but a small vertical slice of functionality. Good user stories fit into a single sprint of work, which usually lasts about two weeks.

Any user story longer than a single sprint is referred to as an epic. Epics should be broken down into two or more smaller user stories. Consider the testing of the functionality when creating a user story, too: if testing the functionality takes more time than the sprint allows, then your user story is still an epic and you need to break the functionality down further into multiple user stories.

Breaking functionality down into small, manageable user stories is a best practice and helps later on during the testing phase. You will find it is much easier to write tests, and over time you will be able to build up a suite of tests for the product.

Create a definition of done
All user stories should have acceptance criteria: the requirements that have to be met for a user story to be assessed as complete. One cannot measure functional quality without specifying acceptance criteria, so no user story should be started unless acceptance criteria are associated with it.

Ensuring functional quality is not enough to develop quality software. There are a number of non-functional quality measures that affect the overall quality of the application and shape users’ impression of it. Examples of non-functional quality measurements are performance, throughput, code readability, and test code coverage.

One way to help ensure that both functional and non-functional quality are addressed is to create a definition of “done”: a list of the steps a developer must complete before marking a task as done. The following is an example of a definition of done:

  • Code written using development standards
  • Code commented appropriately
  • Builds with no warnings or errors
  • Code committed to source control
  • Unit tests written and passing
  • Tests with expected and unexpected conditions
  • Minimum code coverage achieved
  • No TODOs left in the code
  • Peer and/or team code review completed
  • Tasks marked as completed

This might seem unattainable to some, but I am here to tell you that it is absolutely attainable. You just need unwavering commitment. I have found that not following a definition of done such as this has always led to some form of regret. That regret often manifests itself in extra work that you did not plan on doing.

Breaking down the work
Now that you have the requirements and a definition of done, you need to define some tasks and write some code. The process of task writing is an art as much as it is a science. Tasks should have enough detail so that the developer knows what functionality needs to be created.

Most of the time we find that developers on the team are best suited to create tasks. That does not mean that the developer who creates a task does the work; it just means that a developer is best at breaking down the work into smaller units of work (i.e. tasks). The tasks that are created should include all the work that is needed, such as developing, testing and deploying.

Writing quality code
There are many attributes that we can use to describe quality code. Many developers can agree that code should be readable, understandable, consistent, maintainable, testable and elegant. When you write code, you should incorporate these attributes.

Readable: All code is readable by machines, but good code is readable by human beings. All developers on your team (both present and future) should be able to read your code. Most of the time you will rely on other developers to help review your code. You will know if your code is not readable because they will start complaining.

One time I asked a “non-developer” to read a particularly important piece of code. I figured that if they could read and make sense of my code, then I could be relatively confident that the other developers on my team could read it too.

Understandable: Just because code is readable does not make it understandable. The intent of the code needs to be evident from reading it (which means no guessing).

Sometimes it’s hard to convey intent without help. This is where naming standards and code comments come in. When something is not obvious, you should add information that helps explain the intent of the code.

One of my favorite things to look for is variable names that don’t make sense. How many times have you seen a variable named “x” and you don’t know what it means? A simple change to a name that conveys meaning such as “numberOfEmployees” is a significant improvement.
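
To make this concrete, here is a small before-and-after sketch (the names and the 40-hour rule are hypothetical):

    public class TimesheetRules
    {
        private const int MaximumRegularHoursPerWeek = 40;

        // Before: the reader has to guess what "x" means.
        public bool Check(int x)
        {
            return x > MaximumRegularHoursPerWeek;
        }

        // After: the same logic, with names that convey meaning.
        public bool IsOvertime(int numberOfHoursWorked)
        {
            return numberOfHoursWorked > MaximumRegularHoursPerWeek;
        }
    }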

Consistent: Imagine a novel written by multiple authors. You might not make sense of it given the different conventions and styles of each author. When you work on a team of developers, it is the same problem: Each developer has his or her own way of writing code. Consistency is needed to help the code make sense. This is where naming conventions, code standards and a style guide help.

Maintainable: Code should be organized and extensible so that modifications are easy to make, which keeps it easy to update and maintain over time.

I am on a customer engagement at this very moment where maintainability is a big problem. We are tasked with updating code with bug fixes and new features. Unfortunately, in order to make one change, we have to touch five different projects and multiple files in each project. The maintenance costs associated with this application are very high. This is when you advocate for refactoring (i.e. rewriting) code for maintainability.

Testable: Code should be structured to support testability. This includes abstracting external dependencies so that code can be isolated for testing. Examples of external dependencies include databases and Web services.

One of the most common ways for isolating code for testability is to use interface development to abstract away your dependencies. Then, during testing, you can use dependency injection to provide an alternative implementation that relies only on local resources.
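
Here is a minimal sketch of that approach (the repository and service names are hypothetical): the business logic depends only on an interface, and the concrete implementation is injected through the constructor.

    using System;

    // The external dependency (a database) is hidden behind an interface.
    public interface IEmployeeRepository
    {
        int GetEmployeeCount();
    }

    // Production implementation that would talk to the real database.
    public class SqlEmployeeRepository : IEmployeeRepository
    {
        public int GetEmployeeCount()
        {
            // ... query the database here ...
            throw new NotImplementedException();
        }
    }

    // The business logic depends only on the abstraction.
    public class PayrollService
    {
        private readonly IEmployeeRepository repository;

        public PayrollService(IEmployeeRepository repository)
        {
            this.repository = repository;
        }

        public bool IsHeadcountOverLimit(int limit)
        {
            return repository.GetEmployeeCount() > limit;
        }
    }

During testing, an in-memory or mock implementation of IEmployeeRepository takes the place of SqlEmployeeRepository, so the test relies only on local resources.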

Tools and frameworks such as Microsoft Fakes, JustMock from Telerik, NMock, Rhino Mocks, and Moq help abstract away external resources that your application depends on by providing fakes, stubs and shims.

Now you need to write tests that exercise your code. So what tests should you write? We advocate strongly for unit testing with code coverage analysis. This is the easiest type of test a developer can create. Unit testing needs to be integrated with both your IDE and your Continuous Integration server. If done properly, you will see significant gains in quality and productivity.
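
Continuing the hypothetical PayrollService sketch above, a unit test can use Moq to stand in for the repository, covering both expected and unexpected conditions:

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Moq;

    [TestClass]
    public class PayrollServiceTests
    {
        [TestMethod]
        public void IsHeadcountOverLimit_ReturnsTrue_WhenCountExceedsLimit()
        {
            // Arrange: the mock replaces the database-backed repository.
            var mockRepository = new Mock<IEmployeeRepository>();
            mockRepository.Setup(r => r.GetEmployeeCount()).Returns(150);
            var service = new PayrollService(mockRepository.Object);

            // Act and assert: the expected condition.
            Assert.IsTrue(service.IsHeadcountOverLimit(100));
        }

        [TestMethod]
        public void IsHeadcountOverLimit_ReturnsFalse_WhenThereAreNoEmployees()
        {
            // A boundary condition: an empty company.
            var mockRepository = new Mock<IEmployeeRepository>();
            mockRepository.Setup(r => r.GetEmployeeCount()).Returns(0);
            var service = new PayrollService(mockRepository.Object);

            Assert.IsFalse(service.IsHeadcountOverLimit(100));
        }
    }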

A friend of mine once asked me for a review of a library used throughout our company. During the review I had suggested a significant change to his code. He agreed that it was a good change. A half-hour later, he sent out an e-mail to all of IT stating that a new version of the library was available in our NuGet repository. I said, “What, are you crazy?”

He said, “…But I have 100% code coverage, dozens of unit tests and all my unit tests passed.” He was confident that his new code would “just work.”

You might not be able to achieve 100% code coverage for your applications. In fact, most projects cannot achieve that amount of code coverage. The general rule in the industry seems to be around 80% to 85% code coverage, but even these numbers are sometimes hard to achieve. I would be concerned with any project that had less than 70% code coverage.

Hopefully you have a culture of testability ingrained within your development team. In my experience, good developers write unit tests, and the best developers estimate their work to include unit tests.

Elegant: Elegant code is another name for simple code. Unfortunately, many developers (including myself) thrive in the complex.

Writing simple code is hard. You need to resist the urge to start writing code immediately and instead take a step back. Some of the best code I have ever written has very few lines, and that is because I took my time and thought about what I needed to do. In the end, I wrote less code than I originally thought I needed.

There is no tool that helps you write elegant code; that ability comes with experience. The one piece of advice that I give developers is “Always be learning.” In our industry, there is always some new, better way of doing something you previously have done.

Beyond unit tests
Having unit tests that meet an acceptable minimum for code coverage is a great start. There are other types of tests that can help improve the quality of your code: functional, load, stress and performance tests are but a few. Often you need dedicated infrastructure to support such tests, and they also require more effort in terms of time and skill.

Functional testing is a significant step above unit tests. The goal of a functional test is to test the features of the application. Some examples of tools that support functional testing are Coded UI tests and Web tests in Visual Studio. Web tests help you exercise your Web pages and services.

A practice known as behavior-driven development is related to functional testing and test-driven development in that specialized tools are used during the software process to help test the requirements of the application. Examples of tools that support this type of testing are SpecFlow (Cucumber for .NET) and NBehave. These tools are great if you have a large set of functional use cases that need to be tested. I had the opportunity to work with a development team that used SpecFlow. They had more than 15,000 tests that tested the functionality of the application each night.
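
As a rough sketch of how this fits together (the scenario and step definitions are hypothetical), a Gherkin scenario in a .feature file is bound to C# step definitions:

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using TechTalk.SpecFlow;

    // The scenario lives in a .feature file, for example:
    //   Scenario: Successful login
    //     Given a registered user "alice"
    //     When the user logs in with a valid password
    //     Then the dashboard is displayed

    [Binding]
    public class LoginSteps
    {
        private bool loginSucceeded;

        [Given(@"a registered user ""(.*)""")]
        public void GivenARegisteredUser(string userName)
        {
            // ... create or look up the test user ...
        }

        [When(@"the user logs in with a valid password")]
        public void WhenTheUserLogsInWithAValidPassword()
        {
            // Placeholder for driving the application under test.
            loginSucceeded = true;
        }

        [Then(@"the dashboard is displayed")]
        public void ThenTheDashboardIsDisplayed()
        {
            Assert.IsTrue(loginSucceeded);
        }
    }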

Load testing tests the throughput of your application. Often the goal is to measure how many users an application can support. This starts by creating a test script that performs the actions that a user would perform. Instances of the test script are created until the application hits a maximum number of simultaneous users, or until the service-level agreements are no longer met.
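
Conceptually, a load test script does something like this toy sketch (the URL and user count are hypothetical); real load-testing tools manage the scripting, ramp-up and reporting for you:

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class ToyLoadTest
    {
        public static async Task Main()
        {
            const int simultaneousUsers = 50; // hypothetical load level
            using (var client = new HttpClient())
            {
                // Each task simulates one user performing the scripted action.
                var userActions = Enumerable.Range(0, simultaneousUsers)
                    .Select(async _ =>
                    {
                        var stopwatch = Stopwatch.StartNew();
                        await client.GetAsync("http://localhost/app/login");
                        stopwatch.Stop();
                        return stopwatch.ElapsedMilliseconds;
                    });

                long[] responseTimesMs = await Task.WhenAll(userActions);

                // Compare the results against your service-level agreement.
                Console.WriteLine($"Average response: {responseTimesMs.Average()} ms");
                Console.WriteLine($"Slowest response: {responseTimesMs.Max()} ms");
            }
        }
    }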

Visual Studio has had the ability to create and run load tests for years now. Recently this capability was extended to take advantage of Visual Studio Online (formerly Team Foundation Service), which allows you to run your load tests in the cloud. This is a great way to do load testing without setting up dedicated infrastructure, and it clearly demonstrates the benefits of cloud-based infrastructure.

Stress testing is similar to load testing, except that the test is usually confined to a single operation. The goal is to determine the maximum capacity of a given set of hardware to perform the operation. This type of testing is useful in finding areas of your application that break.

One of my litmus tests for any application is to perform a stress test on a basic feature such as the login. If a simple and widely used operation like logging in fails under stress, you know that the rest of the application will most likely have similar problems.

Finally we get to performance tests. The goal of performance tests is to measure the response time of operations under load. Unacceptable response times have a direct impact on how users perceive your software. If things take too long, then users may get impatient and not want to use your software. The key to measuring performance is to perform the measurement under load. This can be accomplished using both the load and stress tests previously mentioned.

Continuous Integration
The goal of today’s Continuous Integration (CI) practices is to automate the build, test and release of an application. At its core, CI automates building the application and running unit tests. The goal is to never let anything into source control that could degrade the quality of the application. For example, if a developer has code that does not build, you don’t want that code to enter source control. Instead, you want that developer to save their changes without checking in, and check in only when the code is ready.

TFS supports a number of build triggers such as manual, CI, rolling builds, gated check-in, and scheduled. For the purpose of improving quality, we look to either CI or gated check-in.

The developer workflow described above is supported using gated check-ins and shelvesets. A gated check-in is a check-in that is honored only if the submitted changes merge and build. This protects the code that resides in source control from getting into an unusable state. When you run into situations where your code does not build, you can use a shelveset.

Shelvesets are great for a few reasons. First, they allow you to save your work in progress and continue with a different task. Second, they allow you to share code that is not ready for check-in. Finally, they allow you to share code for code reviews using the Code Review feature in Visual Studio.
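
From the command line, the shelveset workflow looks roughly like this (the shelveset name and comment are hypothetical):

    rem Save work in progress on the server without checking in
    tf shelve "LoginRefactoring" /comment:"Work in progress - does not build yet"

    rem Later (or on a teammate's machine), restore the shelved changes
    tf unshelve "LoginRefactoring"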

Continuous Delivery
Continuous Delivery (CD) is the process of automating deployment of software so that it can be released at any time. This process works with CI to produce deployments once code is submitted to source control. Tools in this space include Visual Studio Release Management, Octopus Deploy, Atlassian’s Bamboo, and JetBrains’ TeamCity.

CD allows you to deliver your software more frequently to other environments, and it gets the software in the hands of your QA and users earlier. This helps quality assurance and user acceptance testing occur more often.

Application monitoring
By far the best tools that I have come to use over the past decade are application monitoring tools, which usually fall in the application performance management (APM) space. And there are many tools out there: AppDynamics, New Relic, Crittercism, Xamarin Insights, and Visual Studio Application Insights, just to name a few I’ve used. These offer insight into your application and tell you a great deal about how it actually works.

The figure below shows a dashboard from Visual Studio Application Insights that allows users to correlate server metrics to specific operations. There is a wealth of information on this dashboard, such as the Top 10 slowest requests, response time by operation, exception rate, CPU usage, and memory usage. APM tools often allow you to drill into these types of metrics to perform analysis. This is great if you are troubleshooting an error in your application or looking for the root cause of performance issues.

While all the tools I mentioned have their strengths, I would be remiss if I did not mention the one tool that has never let me down: Dynatrace. Using its PurePath technology, you are able to see things that many of the other tools don’t give you. For example, Dynatrace can produce a single stack trace of a business operation that spans synchronous, asynchronous and parallel code paths. This one feature alone let us troubleshoot a production issue for a customer of mine, saving them US$40 million. The time it took to install the product and find that issue: 30 minutes.

Spotlight: JetBrains ReSharper
One of the best tools to create readable, understandable and consistent code is JetBrains ReSharper. ReSharper analyzes your code on the fly and makes suggestions on how to improve it. It has features such as code formatting, hints and suggestions, and refactoring.

You might be asking, “Doesn’t Visual Studio provide some of these features?” Yes, but not to the extent that ReSharper does. ReSharper provides more than 1,700 code inspections that help increase the quality of your code. It helps developers so much that I recommend that every developer in an organization purchase a copy.

Spotlight: Code Contracts
One newer feature of .NET is Code Contracts, which provide a way to specify preconditions, postconditions and object invariants in your code. Preconditions are requirements that must be met when entering a method or property. Postconditions describe expectations at the time the method or property exits. Object invariants describe the state an object must maintain at all times in order to be considered valid.
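
Here is a minimal sketch using System.Diagnostics.Contracts (the Order class is hypothetical; note that runtime enforcement requires the Code Contracts binary rewriter to be enabled for the project):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics.Contracts;

    public class Order
    {
        private readonly List<string> lines = new List<string>();

        public void AddLine(string line)
        {
            // Precondition: callers must pass a non-null line;
            // otherwise an ArgumentNullException is thrown.
            Contract.Requires<ArgumentNullException>(line != null);

            // Postcondition: the order holds exactly one more line on exit.
            Contract.Ensures(lines.Count == Contract.OldValue(lines.Count) + 1);

            lines.Add(line);
        }

        // Object invariant: the backing list always exists.
        [ContractInvariantMethod]
        private void ObjectInvariant()
        {
            Contract.Invariant(lines != null);
        }
    }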

Recently I started working on a project where the developers were already writing code. There was a bug that we had for months where we would get a null reference to an image in our UI code. Unfortunately we were never able to reproduce the bug. It would just occur several times a day at random.

To mitigate the problem, the developer working on the UI wrote code to check for that null condition. It wasn’t until one of our developers put preconditions and postconditions in our business logic code that we were able to determine the root cause. The price of not having preconditions was extra code and months of guessing at the problem.

One of the biggest improvements you can make to the quality of your code is to declare preconditions on all public methods and write unit tests that exercise them. Having these preconditions will save time for everyone who uses your code, since they will find bugs early and often.
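
Continuing the hypothetical Order sketch above (with the rewriter enabled so that Contract.Requires<ArgumentNullException> throws), a unit test can exercise the precondition directly:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderTests
    {
        [TestMethod]
        [ExpectedException(typeof(ArgumentNullException))]
        public void AddLine_Throws_WhenLineIsNull()
        {
            var order = new Order();

            // Violates the precondition; the contract rejects the call.
            order.AddLine(null);
        }
    }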

Spotlight: Xamarin Test Cloud
Xamarin Test Cloud allows you to automate the testing of your mobile application across thousands of devices. Anyone who has built mobile software can attest to the fact that testing across the many combinations of devices and operating-system versions is all but impossible without tools such as this.