Frequent communication about progress within a team and among external stakeholders is essential to adopting a more agile approach to developing software and ensuring project success. Kurt Bittner, CTO of consultancy firm Ivar Jacobson International, Americas, said he came to this conclusion after 28 years in the industry in a variety of leadership roles. He spoke about the subjects of his November webinar, entitled “Practical Strategies for Software Project Success: Reviewing Results and Setting Expectations,” with SD Times.

SD Times: What is the purpose of frequent reviews of intermediate results?
Kurt Bittner: Building confidence and trust is essential to being successful in any endeavor. In a software development effort, the best way to do this is to show frequent progress in the form of executing code. Demonstrations of progress help to build confidence that the project is on the right path and that it is working effectively. Building this confidence increases trust and opens the door for the inevitable hard discussions about project scope and resources.

How truly valuable or essential are early results of an intermediate review?
The problem with the traditional status report based on whether the project is “on track” (which is based on schedule adherence) is that it is really not meaningful to the business. It assumes that the original plan was good, and that deviations from the plan are bad, but if all that is being done is producing a bunch of documents, no one really knows whether the project is really on track or not.

Seeing something working, even if it is just a small part of the system, is essential to knowing whether real progress is being made. There is also the problem that users often don't really know what they want (or need, which is often a different matter), so the faster you can show them something, the sooner you will know what you really need to build.

What are the concerns that surround how early results may be received?
There is a tendency for people to look at early versions of the system and see more than is really there. Just because something looks nice does not mean that all the work behind the screens is done. External reviewers can get ahead of themselves and assume that, because something looks good early on, the schedule can be accelerated. I have never seen this to be the case. This is where setting expectations comes in: The stakeholders have to understand what they are seeing and where it fits in the big picture.

Why do these concerns exist?
Mostly because there is not enough communication between stakeholders and the project team about how the project will run and how stakeholders should be, and need to be, involved. When people are shown something without context for where it fits, or for what is needed from them in the review, they fill in the gaps themselves.

Does embracing early results really determine the success of a project? How so?
Yes, in a number of ways. First, the relationship between business and IT is often tenuous at best—the trust on both sides can be quite low. Showing results early and often builds this trust by proving, in visible ways, that the project is on track.

Second, it uncovers errors in assumptions about what is really needed earlier. Traditional approaches that rely on documents and sign-offs are actually very poor at uncovering errors in requirements, because no one reads big documents, and because documents often cannot convey the real essence of how things should look and work.

Finally, documents are a really poor vehicle for communication. Showing results early creates the kind of dialogue between the project team and the business that results in better solutions.

How do you set expectations, and how do you conduct reviews for maximum benefit?
First, we have to change what should be reviewed to track project status. As noted above, tracking against a schedule does not really tell you if the project is headed in the right direction; if the original plan is flawed, then the schedule-based progress measures will actually punish teams for doing the right thing by continually making them look bad for deviating from the flawed schedule. We believe that executed and tested code is the only valid measure of progress.

Second, we think that a conscious focus on risk, especially technical risk, should be an equal partner along with customer priority in driving the early project work. In doing so, risk is reduced while progress is being made, resulting in greater certainty that the right solution is being built.

When reviewing results, stakeholders need to understand what they are seeing and why, and what is expected of them. Setting these expectations will yield the best result for time invested. The reviews themselves should be driven by business scenarios to ensure a clear connection to things that are meaningful to all the stakeholders.

What should one review and when?
In addition to the executing code, the important things to look at—at the end of every iteration—are trends on risks, progress (as measured by executed and tested business scenarios), quality (as measured by outstanding defects), and the cost and time to complete. Iterations may vary between two and six weeks in length, with four weeks being a reasonable starting point for most organizations.

What do you find to be most problematic when a team has to communicate the results from a review? Why is this so? What can be done to rectify the issue?
The easiest part is demonstrating some working code; the challenge is putting the right business context around what has been done and showing where it fits into the project as a whole. The harder things to talk about relate to risks and issues: Being really open about these challenges is sometimes awkward. Of course, all this assumes that the team actually has something to show. If things have gone badly, they may not, and that does not make for a fun situation.

Assuming that the team has made progress, the discussion of risks and issues needs to be as open as possible. The typical approach to risk is to assign it to someone to work on.

In the typical approach, when a risk is assigned to someone, that person investigates it and puts together a plan for mitigating it and a plan for responding to it if it occurs. We believe that risk needs to be owned by the whole team, and that the best approach to mitigating a risk is to find a requirement that, if implemented and tested, will force the risk to be confronted.

A concrete example is performance. With most applications, there is a risk that the system will not perform well when it is forced to deal with larger volumes of data or users or both. The best way to mitigate this risk is to implement the parts of the system that are likely to be sensitive to load and then do real load testing. This usually does not require implementing more than a small proportion of the system, and doing so early uncovers poor design decisions that would be costly to fix later.

Mitigating risk by doing real development work is far superior to the normal approach where risks are treated like side issues, separate from the mainstream of the project work. We prefer to build risk mitigation right into the iteration plans.

If there is a risk that a particular subject matter expert will not be available, causing delays in the project, the best way to confront the risk is to force it to happen and then show the consequences of it happening. Doing so makes the problem unavoidably visible, which enables one to escalate the issue through the appropriate management chain. The business management can then decide whether the other work the subject matter expert is doing is more important than the project.

If there is a risk that a particular technology is unproven, we choose a scenario whose implementation will require that technology. The technology is tested by using it to implement the related critical functionality and then testing the result to see if it is sufficient for the task at hand. Testing occurs in the context of doing real work, not as a "side project."

The discussion of risk then becomes a team effort, and we focus that discussion on what the team will do in the next iteration to reduce or eliminate the top risks. Iteration by iteration, we keep applying that approach until we have acceptably reduced or eliminated all the risks and delivered the solution.