27 September 2011

Managing the Cone of Uncertainty

Some of you may not be familiar with the term "Cone of Uncertainty". In the Project Management and Requirements Management world it is often called "Scope". So what is the Cone of Uncertainty? If you have ever watched a weather report as forecasters track a hurricane, you will have noticed that the farther the hurricane is from shore, the wider the band of possible landfall points. As the hurricane nears, that band narrows and the predicted impact becomes more and more precise as predictability improves.

It is no different in system development. Consider the early phases of a project, when we are far from understanding the potential impact of the system. Our estimates are based on what we used to call WAGs (Wild A** Guesses). Somehow these become finalized in a project schedule; I feel I must state here that we never came close to meeting that schedule. As we worked through the requirements, our understanding of the system increased and our estimates changed, drawing closer to the truth with each phase of development.
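As a rough illustration, here is a short sketch of how the range around a nominal estimate narrows by phase. The multipliers are the ones commonly cited in the Cone of Uncertainty literature (Boehm, McConnell), not figures from any particular project; the 12-month nominal estimate is hypothetical.

```python
# Commonly cited Cone of Uncertainty multipliers (Boehm/McConnell);
# real variability depends heavily on the project and the organization.
PHASES = [
    ("Initial concept",             0.25, 4.00),
    ("Approved product definition", 0.50, 2.00),
    ("Requirements complete",       0.67, 1.50),
    ("Design complete",             0.80, 1.25),
]

def estimate_range(nominal_months, low_mult, high_mult):
    """Return the (low, high) schedule range implied by the multipliers."""
    return nominal_months * low_mult, nominal_months * high_mult

nominal = 12.0  # a hypothetical 12-month nominal estimate
for phase, lo, hi in PHASES:
    low, high = estimate_range(nominal, lo, hi)
    print(f"{phase:30s} {low:5.1f} - {high:5.1f} months")
```

Notice that at initial concept the same "12-month" project could plausibly take anywhere from 3 to 48 months; only the work of requirements and design narrows that range.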

So what can we do to manage this uncertainty? There is no silver bullet, of course. But there are some things we can do to mitigate this risk.
  • Know your environment and how quickly and often it changes. Many environments change so slowly that they can be considered static.
  • Before making any significant investment, reduce uncertainty to a comfortable level. Snap decisions are never a good thing. If there is uncertainty around new technology or new approaches, mitigate the risk with the input of experts.
  • Systems engineering is volatile, and external pressures increase uncertainty over time, affecting the entire development process. When such pressures arise, assess their importance and identify their impact on the current development plans.
  • You must actively and continuously work to reduce the uncertainty level. Just ignoring uncertainty will not make things any better. Identify uncertainty and meet it head on.
  • The Cone of Uncertainty is narrowed both by research and by decisions that reduce variability. This is a key part of the System Engineering tasks. Document decisions and rationale and the resulting requirements so you can move on.
  • These decisions are about scope: what is included and what is not. Continually assess what is in and what is out. Remember that today a capability might be in, and tomorrow it may be out (as Heidi Klum says on Project Runway). In an Agile environment this assessment is made almost daily.

Many process improvement efforts today are targeted towards reducing the size of the cone of uncertainty.
  • Quality requirements from solid elicitation techniques
  • Improved estimation techniques
  • Formal requirements management
  • Formal change management
  • Proper and complete testing

Remember that if these decisions change later in the project, the cone will widen again. As always, requirements play a key role in managing the Cone of Uncertainty.

By: Marcia Stinson

21 September 2011

Implementing a Requirements Management Tool on a Complex System

Early in my career I spent several years working on a very complex weapon control system. As you can imagine, the requirements were large, complex, and changed often. We spent a lot of time just trying to manage those pesky changes that kept being submitted, both by customers and by the developers. In those early days, we did not have any requirements management tools to help us assess these changes. We were using Interleaf and Excel (I can hear groans of pain now). Everything was manual, including our complex traceability. We had a couple of folks who did nothing but maintain the traceability matrices and assess the impact of changes. At this time we only had traceability from the Concept of Operations to System Requirements to Subsystem Requirements. I say “only”, but at that time just having this level of traceability was a big accomplishment.

When we had accumulated enough changes we issued a new system requirements document and a new subsystem requirements document. Those poor contractors had to go through the massive subsystem requirements and manually determine what had changed. I can’t imagine the time they spent just trying to figure out which changes they needed to be concerned about.

It was in the middle of this upgrade project that the customer said enough and tasked my team with evaluating and selecting a requirements management tool. The tool we selected is not important to this particular discussion, but what we learned from this tool selection and implementation is important. Here are some lessons learned.

(1) - There is no single tool that is going to please everyone. We had users who loved our selection and users who fought us every step of the way. Without a customer supporting and enforcing the change, adoption would not have been possible on a large program like this one. One user complained about the column widths of the tool-generated traceability matrix, totally ignoring the fact that it saved him days of manual effort.

(2) - Our manual traceability was not very clean. Once we imported all of our information into the tool and linked it up we found many gaps in the traceability. What was more disturbing was that we had links that really didn’t make any sense. We had to do a lot of work to clean up our traceability matrices.
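The kind of gap check a tool automates can be sketched in a few lines. The requirement IDs below are hypothetical, and any real RM tool's data model is far richer, but the two checks shown are the ones that caught us: parents with no downstream trace, and children nothing traces to.

```python
# Trace links: system requirement -> subsystem requirements that satisfy it.
# IDs are made up for illustration, not from the actual system.
trace = {
    "SYS-001": ["SUB-010", "SUB-011"],
    "SYS-002": [],              # gap: nothing traces down from this one
    "SYS-003": ["SUB-020"],
}
subsystem_reqs = {"SUB-010", "SUB-011", "SUB-020", "SUB-030"}

# Downward gaps: system requirements with no children.
downward_gaps = [req for req, kids in trace.items() if not kids]

# Orphans: subsystem requirements no system requirement traces to.
linked = {kid for kids in trace.values() for kid in kids}
orphans = sorted(subsystem_reqs - linked)

print("No downstream trace:", downward_gaps)   # ['SYS-002']
print("Orphaned subsystem reqs:", orphans)     # ['SUB-030']
```

Running checks like these across thousands of requirements is exactly the work that is error-prone by hand and trivial for a tool.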

(3) - Just tracing requirements was valuable, but now we could apply the same approach to link requirements to test plans, and we went so far as to link subsystem requirements to design documents that we could review. This didn’t happen overnight, but it did happen. Eventually we could trace a system requirement to a subsystem requirement, to a design document, to a code module. We even used a tool to measure the complexity of code modules and used this to help determine how difficult a change would be to implement and test.

(4) - Metrics from a requirements tool are key to understanding the completeness of testing activities. We often thought we were 50% complete with testing; after all, 50% of the tests had been completed. However, what we found was that we were prone to testing the simplest and best-understood parts of the system first. So even though we were 50% complete, everything left was very high risk. We learned to prioritize our testing by looking at requirements priorities and software complexity, information we could not determine through manual traceability.
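The effect is easy to show numerically. The weighting scheme below is illustrative only, not the one the project actually used: each test is weighted by the priority of the requirement it verifies and the complexity of the code behind it, and the raw count is compared against the weighted count.

```python
# Each test: (done?, requirement priority 1-3, code complexity score).
# Values are made up for illustration.
tests = [
    (True,  1,  5), (True,  1,  4), (True,  2,  6),   # easy tests done first
    (False, 3, 25), (False, 3, 30), (False, 2, 20),   # hard ones remain
]

raw_done = sum(1 for done, _, _ in tests if done) / len(tests)

def risk_weight(priority, complexity):
    """Illustrative risk weight: higher priority and complexity count more."""
    return priority * complexity

total_risk = sum(risk_weight(p, c) for _, p, c in tests)
done_risk = sum(risk_weight(p, c) for d, p, c in tests if d)
weighted_done = done_risk / total_risk

print(f"Raw completion:      {raw_done:.0%}")       # 50%
print(f"Weighted completion: {weighted_done:.0%}")  # far lower
```

Half the tests are done, but only a small fraction of the risk has been retired, which is exactly the gap between "50% complete" and reality that we kept running into.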

(5) - It was very easy to get overwhelmed. Start simple. We had to back off our ambitious ideas and begin with a simple traceability model. As we learned and gained more experience with the tool, we added more information to our model. We were constantly assessing our process to figure out what else we could do to make it better.

(6) - Don’t skimp on training and mentoring. We trained everyone on the project and created experts who helped users get over initial hurdles. We sent our experts to our contractors for weeks at a time to help them get up to speed in using the tool. We even had our own internal users group. Be prepared for this kind of effort.

What a great learning experience this was for me. If you’re interested in embarking on a change like this to improve your requirements process, contact Visure Solutions. We will be happy to discuss your process with you.

By: Marcia Stinson

13 September 2011

What happens when you define ambiguous Requirements?

As part of my research as a rookie in Requirements Engineering, I found this interesting blog entry by Ian Chan: http://bit.ly/o4L1yE

I fully agree... mistakes are always so obvious after the disaster happens...