
Yesterday, I asked my IT administrator to install a second screen for me, along with a new graphics card that supports dual screens, so I could experience the new multi-monitor support in VS 2010. Really amazing, guys!

Multi Screen Support

Multi Screen Support on my Machine

All you have to do is undock any window in VS 2010, then drag it to the new screen and release. This gives you several benefits:

  1. It keeps you focused, because you can open the designer and the code-behind file of a form at the same time.
  2. If you are pairing with a colleague, you can easily work on one screen while he reviews code on the other.

Really, a very good and useful feature in the new IDE.


If we look at the theme of C# 3.0 – 3.5, we will see that functional programming was introduced through the LINQ features.

And if we look at the theme of the current release, we will see that this year's theme is the dynamic keyword.

The dynamic keyword gives you the ability to create objects dynamically at runtime, to call members that you know exist but that are resolved at runtime rather than at compile time, and to interact with dynamic languages such as Python.

Let’s get started with this new keyword.

Assume that we have the following Person class:

public class Person
{
     public string Title { get; set; }
     public string FirstName { get; set; }
     public string LastName { get; set; }
     public string FullName
     {
         get
         {
             return Title + " " + FirstName + " " + LastName;
         }
     }

    public Person()
    {
    }

    public Person(string title, string firstName, string lastName)
    {
         Title = title;
         FirstName = firstName;
         LastName = lastName;
    }

    public void Print()
    {
         Console.WriteLine(FullName);
    }
}

In the Main method of a console application, if I want to instantiate a new object of the Person class, I would do this:

Person person = new Person();
person.Title = "Mr.";
person.FirstName = "Ahmed";
person.LastName = "Abdul Moniem";
Console.WriteLine(person.FullName);

In the last example, the compiler knows at compile time exactly what Person is and what its type is. So, it will generate an error if you try to access a non-existent property or method on the Person class.

Let's add a couple of lines to demonstrate the dynamic type:

Person person = new Person();
person.Title = "Mr.";
person.FirstName = "Ahmed";
person.LastName = "Abdul Moniem";
Console.WriteLine(person.FullName);

dynamic dynamicPerson = person;
dynamicPerson.FirstName = "Mohamed";

Console.WriteLine(dynamicPerson.FullName);

As you can see, I have created a new variable called dynamicPerson whose type is dynamic, which tells the compiler that it will be validated at runtime. In this example, I made dynamicPerson point to the person object, then changed a property on dynamicPerson and used it to print the full name on the screen.

The results are the same as with the normal object instantiation. So, what is the benefit of the dynamic keyword?

Let’s add the following using statement on top of our file like that:

using System.Dynamic;

And in our Main method we will write something like that:

dynamic employee = new ExpandoObject();
employee.FirstName = "Ahmed";
employee.LastName = "Abdul Moniem";
employee.FullName = employee.FirstName + " " + employee.LastName;
Console.WriteLine(employee.FullName);

Here is the magic of the dynamic keyword: you can build up an object whose members don't exist anywhere at compile time. In the last example, we created a new employee of type ExpandoObject (in System.Dynamic), which supports this dynamic behavior in the language.

While I was coding I realized that an employee must have a first name so I wrote:

employee.FirstName = "Ahmed";

And again I realized that the employee must have a last name, so I wrote:

employee.LastName = "Abdul Moniem";

As you may guess, I also realized that I want to add a new property FullName, so I wrote:

employee.FullName = employee.FirstName + " " + employee.LastName;

Then I print the result, and voilà! Everything works exactly like the first Person example in this post, even though I don't have any Employee class at all! This is the magic of the dynamic keyword.

Let’s add some sugar on my employee object:

employee.Print = new Action(delegate() { Console.WriteLine(employee.FullName); });
employee.Print();

As you can see, I have added a new method that runs at runtime without any error, and at compile time everything builds fine as long as I am using the dynamic keyword.

You can also use a lambda expression to create your methods. So, the last example can be changed to:

employee.Print = new Action(() => Console.WriteLine(employee.FullName));
employee.Print();
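One caveat is worth showing with a quick sketch (the misspelled FirstNam below is intentional, purely for illustration): because the compiler skips member checking on dynamic references, a typo only shows up when the code actually runs.

dynamic employee = new ExpandoObject();
employee.FirstName = "Ahmed";

// This line compiles, but at runtime it throws
// Microsoft.CSharp.RuntimeBinder.RuntimeBinderException,
// because no member called FirstNam was ever added to the object:
// Console.WriteLine(employee.FirstNam);

Console.WriteLine(employee.FirstName); // prints "Ahmed"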

So far I have only demonstrated some of the capabilities of this new keyword, but I haven't revealed its real power yet!

Let's now consider a real case. What if I have a type that I want to instantiate at runtime and then call a method on that object, also at runtime? This means I don't know the type at compile time, which means I have to use reflection, my old friend.

// Assuming that Person type is not known until runtime
Type t = typeof(Person);
object person = Activator.CreateInstance(t, null);

The CreateInstance method returns object, not Person, which means I can't do this:

// Compile time error, no method named Print() in object!
person.Print();

So, what do I have to do? I should use InvokeMember to call the Print method at runtime. The reflection engine will know at runtime that I have a Person object on which I can call Print, and it will call it on my behalf.

t.InvokeMember("Print", System.Reflection.BindingFlags.InvokeMethod, null, person, null);

And here is the complete code:

// Assuming that Person type is not known until runtime
Type t = typeof(Person);
object person = Activator.CreateInstance(t, null);
t.InvokeMember("Print", System.Reflection.BindingFlags.InvokeMethod, null, person, null);

Using the dynamic keyword, a scenario like this becomes easier and cleaner to develop:

// Assuming that Person type is not known until runtime
Type t = typeof(Person);
dynamic person = Activator.CreateInstance(t, null);
person.Print();

Let's now delve into a more interesting topic: how can you run a dynamic language like Python from C#?

First of all, you will need to install IronPython and reference its assemblies into your project.

Now we will write a simple Python script in a file called Math.py, for example, add it to the project root, and set its Copy to Output Directory property to Copy Always:

def Add(x ,y):
    return x + y

Now we will execute this python script:

// Requires: using IronPython.Hosting; (after referencing the IronPython assemblies)
var py = Python.CreateRuntime();
dynamic test = py.UseFile("Math.py");
dynamic sum = test.Add(5, 10);
Console.WriteLine(sum);

And simply you will see the result 15 printed on the screen! Nice, right?!

Finally, I have highlighted some of the features of the dynamic keyword and how you can use it in different scenarios. I hope you can grasp all the benefits of this new keyword.



One of the new features of C# 4.0 that I like very much is named and optional parameters. This feature simplifies the old pattern of method overloads and constructor overloads.

It also makes your code cleaner and more concise, and you will not be obligated to duplicate code across many method overloads to achieve the same functionality, which in turn improves the maintainability of your application.

There is also a big enhancement for COM interop, because optional parameters no longer force you to supply every argument on a COM interface, which often takes a very large number of arguments.
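To make this concrete, here is a rough sketch of the classic Office automation case. It assumes a reference to the Word interop assembly (Microsoft.Office.Interop.Word) and an already-open Document object called doc; the long parameter list is shown only to illustrate the old pattern, not as exact API documentation.

// Sketch only: doc is assumed to be an open Microsoft.Office.Interop.Word.Document.
object fileName = @"C:\Temp\Report.docx";
object missing = Type.Missing;

// The pre-C# 4.0 way: every optional COM parameter passed explicitly by ref.
doc.SaveAs(ref fileName, ref missing, ref missing, ref missing,
           ref missing, ref missing, ref missing, ref missing,
           ref missing, ref missing, ref missing, ref missing,
           ref missing, ref missing, ref missing, ref missing);

// The C# 4.0 way: optional parameters (plus the ability to omit ref on COM calls)
// reduce the same call to a single named argument.
doc.SaveAs(FileName: @"C:\Temp\Report.docx");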

Let’s first demonstrate the old days of C#. Consider that we have a simple class Person like the following:

public class Person
{
     public string Title { get; set; }
     public string FirstName { get; set; }
     public string LastName { get; set; }

     public string FullName
     {
         get
        {
            return FirstName + " " + LastName;
        }
     }

     public Person()
     {
         Title = "Mr. ";
         FirstName = "Ahmed";
         LastName = "Abdul Moniem";
     }

     public Person(string firstName, string lastName)
     {
          Title = "Mr. ";
          FirstName = firstName;
          LastName = lastName;
     }

     public Person(string title, string firstName, string lastName)
     {
          Title = title;
          FirstName = firstName;
          LastName = lastName;
     }
}

As you can see, we have several overloads of the Person constructor. If you want the default values, you can just use the default constructor; if you want to change one of the default data members, you can use another overload.

Now let's consider a deadlock example. Assume that you want to create a new instance of the Person class and change only the default value of the LastName property. You can create a new constructor overload that satisfies this need: it takes only the last name and assigns it to the LastName property, keeping the default values in place for both Title and FirstName. It would look like this:

public Person(string lastName)
{
    Title = "Mr. ";
    FirstName = "Ahmed";
    LastName = lastName;
}

And we can create a new person object using this constructor like this:

Person person = new Person("Abdul Moniem");

Perfect! OK, here is the deadlock: what if I want to do the same thing with the first name? I would create another constructor overload like this:

public Person(string firstName)
{
     Title = "Mr. ";
     FirstName = firstName;
     LastName = "Abdul Moniem";
}

But unfortunately, this is not allowed. Overloaded methods must differ in the number and/or types of their parameters, so we can't have two constructors that each take a single string parameter!

You could fall back on the last constructor, which supplies all the data members, and fill in the default values by hand. But that is not possible if you don't know the default values (which is most of the time, especially if you are not the author of the class and are just using it). It is also not a neat solution, because you are always obligated to pass all of the constructor's arguments!

Here the new named and optional parameters feature comes to the rescue. We will modify the Person class as follows, naming it PersonEx (just for differentiation):

public class PersonEx
{
     public string Title { get; set; }
     public string FirstName { get; set; }
     public string LastName { get; set; }
     public string FullName
     {
          get
          {
             return Title + " " + FirstName + " " + LastName;
          }
     }

    public PersonEx(string title = "Mr. ", string firstName = "Ahmed", string lastName = "Abdul Moniem")
    {
        Title = title;
        FirstName = firstName;
        LastName = lastName;
    }
}

As you can see, we have only one constructor that satisfies all our needs. This is the syntax of optional parameters in any method or constructor:

For each parameter you supply the default value in the form (Type parameterName = defaultValue), as in (string title = "Mr. "). This tells the compiler that the parameter is optional, with a default value of "Mr. ".

That covers optional parameters. What about the named ones?

When you want to create a new object from PersonEx you can do the following:

PersonEx person1 = new PersonEx(lastName: "Mohamed");
PersonEx person2 = new PersonEx(firstName: "Mostafa");
PersonEx person3 = new PersonEx(firstName: "Mostafa", lastName: "Mohamed");
PersonEx person4 = new PersonEx("Miss. ", "Mona", "Mansour");

As you can see, the syntax of named parameters takes the form (parameterName: parameterValue), as in (lastName: "Mohamed").

The deadlock is solved, right? You can use any combination you want without creating many overloads, and with great flexibility when creating objects.

One remaining important point: optional parameters must come at the end of the parameter list, just like the params keyword. So the following syntax is wrong and will give you a compile-time error:

public PersonEx(string title = "Mr. ", string firstName, string lastName = "Abdul Moniem")
{
     Title = title;
     FirstName = firstName;
     LastName = lastName;
}

But this is correct, considering that firstName becomes a required parameter while the other two remain optional (you can mix required and optional parameters in one method):

public PersonEx(string firstName, string title = "Mr. ", string lastName = "Abdul Moniem")
{
     Title = title;
     FirstName = firstName;
     LastName = lastName;
}
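For completeness, here is how the hybrid constructor above can be called: firstName must always be supplied, while title and lastName can be omitted or passed by name (the values are just examples).

PersonEx person1 = new PersonEx("Mostafa");
PersonEx person2 = new PersonEx("Mostafa", lastName: "Mohamed");
PersonEx person3 = new PersonEx("Mona", title: "Miss. ");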


I was eagerly waiting for the new version, TFS 2010, because of its great features, which solve many of the drawbacks I faced in the previous version.

One of the features I was waiting for most is the ability to install TFS 2010 on client machines running client operating systems such as Windows 7. In older versions, I could only install it on machines running server operating systems like Windows Server 2003 or Windows Server 2008.

This was really annoying for me, because I use my personal machine as a personal lab and run TFS on it. So, I was obligated to install Windows Server 2003 (I know I could install Windows 7, for example, and run Windows Server 2003 in a virtual machine, but that is only a good solution on reasonably fast PCs).

And here we go: I installed TFS 2010 on my client machine, and I was really happy about that.

After two days, I decided to start learning and using the new features of this giant. But I was shocked to see that there is no support for SharePoint or Reporting Services on client machines!

In the installation guide you can read the following:

Client operating systems do not support integration with SharePoint Products or the reporting feature. If you want to use either one of these features, you must install Team Foundation Server on a server operating system.

Really, this is so bad!

I think I will try to switch to the second solution and use a virtual machine.

So, don’t ever install TFS 2010 on a client machine if you need those two features.



Many factors influence our estimates of software projects. This chapter discusses the different influences that must be taken into consideration while making estimates.

What have I learned?

Project Size

  • The largest driver in a software estimate is the size of the software being built, because there is more variation in the size than in any other factor.
  • A system consisting of 1,000,000 lines of code (LOC) requires dramatically more effort than a system consisting of only 100,000 LOC.
  • These comments about software size being the largest cost driver might seem obvious, yet organizations routinely violate this fundamental fact in two ways:
    • Costs, effort, and schedule are estimated without knowing how big the software will be.
    • Costs, effort, and schedule are not adjusted when the size of the software is consciously increased (that is, in response to change requests).
  • So we have to invest an appropriate amount of effort assessing the size of the software that will be built. The size of the software is the single most significant contributor to project effort and schedule.
  • What is the difference between economy of scale and diseconomy of scale?
    • An economy of scale is something like, “If we build a larger manufacturing plant, we’ll be able to reduce the cost per unit we produce.” An economy of scale implies that the bigger you get, the smaller the unit cost becomes.
    • A diseconomy of scale is the opposite. In software, the larger the system becomes, the greater the cost of each unit. If software exhibited economies of scale, a 100,000-LOC system would be less than 10 times as costly as a 10,000-LOC system. But the opposite is almost always the case.
  • As you can see from the next graph, in this example, the 10,000-LOC system would require 13.5 staff months. If effort increased linearly, a 100,000-LOC system would require 135 staff months, but it actually requires 170 staff months.

  • As the last graph is drawn, the effect of the diseconomy of scale doesn't look very dramatic. Indeed, within the 10,000 LOC to 100,000 LOC range, the effect is usually not all that dramatic. But two factors make the effect more dramatic. One factor is a greater difference in project size, and the other is project conditions that degrade productivity more quickly than average as project size increases.

  • In the last graph, you can see that the worst-case effort growth increases much faster than the nominal effort growth, and that the effect becomes much more pronounced at larger project sizes. Along the nominal effort growth curve, effort at 100,000 lines of code is 13 times what it is at 10,000 lines of code, rather than 10 times. At 1,000,000 LOC, effort is 160 times the 10,000-LOC effort, rather than 100 times.
  • The worst-case growth is much worse. Effort on the worst-case curve at 100,000 LOC is 17 times what it is at 10,000 LOC, and at 1,000,000 LOC it isn’t 100 times as large—it’s 300 times as large!
  • Don't assume that effort scales up linearly as project size does. Effort scales up exponentially (a minimal sketch follows this list).
  • Use software estimation tools to compute the impact of diseconomies of scale. (see Hidden Gems section).
  • When to ignore diseconomies? If you’ve completed previous projects that are about the same size as the project you’re estimating—defined as being within a factor of 3 from largest to smallest— you can safely use a ratio-based estimating approach, such as lines of code per staff month, to estimate your new project.
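To make the diseconomy of scale concrete, here is a minimal sketch. The coefficient and exponent are made-up round numbers chosen only to mimic the shape of the curve described above; they are not real Cocomo II calibration values.

// Illustrative only: coefficient and exponent are invented round numbers.
static double EstimateStaffMonths(double kloc)
{
    const double coefficient = 3.0;
    const double exponent = 1.12; // any exponent above 1.0 models a diseconomy of scale
    return coefficient * Math.Pow(kloc, exponent);
}

// EstimateStaffMonths(10)  is roughly  40 staff months
// EstimateStaffMonths(100) is roughly 520 staff months (about 13x the effort, not 10x)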

Software Kind

  • Factor the kind of software you develop into your estimate. The kind of software you’re developing is the second-most significant contributor to project effort and schedule.
  • For example, a team developing an intranet system for internal use might generate code 10 to 20 times faster than a team working on an avionics project, real-time project, or embedded systems project.

Personnel Factors

  • Personnel factors also exert significant influence on project outcomes.

  • Effect of personnel factors on project effort. Depending on the strength or weakness in each factor, the project results can vary by the amount indicated—that is, a project with the worst requirements analysts would require 42% more effort than nominal, whereas a project with the best analysts would require 29% less effort than nominal.
  • Two implications here:
    • You can’t accurately estimate a project if you don’t have some idea of who will be doing the work.
    • The most accurate estimation approach will depend on whether you know who specifically will be doing the work that’s being estimated.

Programming Language

  • First, as the last graph suggested, the project team's experience with the specific language and tools that will be used on the project has about a 40% impact on the overall productivity rate of the project.

  • Second, some languages generate more functionality per line of code than others. For example, C# or Java are more productive than C.

  • A third factor related to languages is the richness of the tool support and environment associated with the language. According to Cocomo II, the weakest tool set and environment will increase total project effort by about 50% compared to the strongest tool set and environment.
  • A final factor related to programming language is that developers working in interpreted languages tend to be more productive than those working in compiled languages, perhaps as much as a factor of 2.

Other Project Influences

Hidden Gems

Here I will introduce some excerpts that I rate as hidden gems from this chapter.

  • Gem 1:

For software estimation, the implications of diseconomies of scale are a case of good news, bad news. The bad news is that if you have large variations in the sizes of projects you estimate, you can’t just estimate a new project by applying a simple effort ratio based on the effort from previous projects. If your effort for a previous 100,000-LOC project was 170 staff months, you might figure that your productivity rate is 100,000/170, which equals 588 LOC per staff month. That might be a reasonable assumption for another project of about the same size as the old project, but if the new project is 10 times bigger, the estimate you create that way could be off by 30% to 200%.

There’s more bad news: There isn’t a simple technique in the art of estimation that will account for a significant difference in the size of two projects. If you’re estimating a project of a significantly different size than your organization has done before, you’ll need to use estimation software that applies the science of estimation to compute the estimate for the new project based on the results of past projects. My company provides a free software tool called Construx® Estimate that will do this kind of estimate. You can download a copy at www.construx.com/estimate.

  • Gem 2:
Table 5-5: Cocomo II Adjustment Factors (each entry lists the factor, its influence, and an observation)

  • Applications (Business Area) Experience (1.51): Teams that aren't familiar with the project's business area need significantly more time. This shouldn't be a surprise.
  • Architecture and Risk Resolution (1.38 [*]): The more actively the project attacks risks, the lower the effort and cost will be. This is one of the few Cocomo II factors that is controllable by the project manager.
  • Database Size (1.42): Large, complex databases require more effort project-wide. Total influence is moderate.
  • Developed for Reuse (1.31): Software that is developed with the goal of later reuse can increase costs as much as 31%. This doesn't say whether the initiative actually succeeds. Industry experience has been that forward-looking reuse programs often fail.
  • Extent of Documentation Required (1.52): Too much documentation can negatively affect the whole project. Impact is moderately high.
  • Language and Tools Experience (1.43): Teams that have experience with the programming language and/or tool set work moderately more productively than teams that are climbing a learning curve. This is not a surprise.
  • Multi-Site Development (1.56): Projects conducted by a team spread across multiple sites around the globe will take 56% more effort than projects that are conducted by a team co-located at one facility. Projects that are conducted at multiple sites, including out-sourced or offshore projects, need to take this effect seriously.
  • Personnel Continuity (turnover) (1.59): Project turnover is expensive—in the top one-third of influential factors.
  • Platform Experience (1.40): Experience with the underlying technology platform affects overall project performance moderately.
  • Platform Volatility (1.49): If the platform is unstable, development can take moderately longer. Projects should weigh this factor in their decision about when to adopt a new technology. This is one reason that systems projects tend to take longer than applications projects.
  • Precedentedness (1.33 [*]): Refers to how "precedented" (we usually say "unprecedented") the application is. Familiar systems are easier to create than unfamiliar systems.
  • Process Maturity (1.43 [*]): Projects that use more sophisticated development processes take less effort than projects that use unsophisticated processes. Cocomo II uses an adaptation of the CMM process maturity model to apply this criterion to a specific project.
  • Product Complexity (2.38): Product complexity (software complexity) is the single most significant adjustment factor in the Cocomo II model. Product complexity is largely determined by the type of software you're building.
  • Programmer Capability (general) (1.76): The skill of the programmers has an impact of a factor of almost 2 on overall project results.
  • Required Reliability (1.54): More reliable systems take longer. This is one reason (though not the only reason) that embedded systems and life-critical systems tend to take more effort than other projects of similar sizes. In most cases, your marketplace determines how reliable your software must be. You don't usually have much latitude to change this.
  • Requirements Analyst Capability (2.00): The single largest personnel factor—good requirements capability—makes a factor of 2 difference in the effort for the entire project. Competency in this area has the potential to reduce a project's overall effort from nominal more than any other factor.
  • Requirements Flexibility (1.26 [*]): Projects that allow the development team latitude in how they interpret requirements take less effort than projects that insist on rigid, literal interpretations of all requirements.
  • Storage Constraint (1.46): Working on a platform on which you're butting up against storage limitations moderately increases project effort.
  • Team Cohesion (1.29 [*]): Teams with highly cooperative interactions develop software more efficiently than teams with more contentious interactions.
  • Time Constraint (1.63): Minimizing response time increases effort across the board. This is one reason that systems projects and real-time projects tend to consume more effort than other projects of similar sizes.
  • Use of Software Tools (1.50): Advanced tool sets can reduce effort significantly.

[*] Exact effect depends on project size. Effect listed is for a project size of 100,000 LOC.
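As a rough way to read the influence numbers, here is a small sketch. It assumes the usual interpretation that each adjustment factor multiplies the nominal effort and that the influence is the ratio between the worst-case and best-case multipliers; the +42% / -29% figures come from the personnel-factor note earlier in this post.

double nominalEffort = 100.0;                 // staff months, just an assumed baseline
double worstAnalysts = nominalEffort * 1.42;  // 142 staff months (+42%)
double bestAnalysts  = nominalEffort * 0.71;  //  71 staff months (-29%)
// 1.42 / 0.71 is roughly 2.0, which matches the 2.00 influence listed for
// Requirements Analyst Capability in Table 5-5.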


Finally

We have finished the first part of this book (Part I: Critical Estimation Concepts). In subsequent posts we will discuss the different estimation techniques available. Stay with us 🙂



I think Steve McConnell has changed his career to become a software psychologist! WOW, this man is awesome! Reading this chapter gives me solid proof that Steve has a very strong background in human nature and mentality.

This chapter demonstrates the many sources of error that a person can fall into while making estimates. And to tell the truth, I haven't found any other material that discusses, the way this chapter does, what can go wrong simply because we are human. Thank you, Steve.

What have I learned?

  • Software estimation error creeps in from four generic sources:
    • Inaccurate information about the project being estimated
    • Inaccurate information about the capabilities of the organization that will perform the project
    • Too much chaos in the project to support accurate estimation (that is, trying to estimate a moving target)
    • Inaccuracies arising from the estimation process itself
  • It isn’t possible to estimate the amount of work required to build something when that “something” has not been defined.
  • I have learned about the cone of uncertainty and how it can be so useful in software estimation.

  • Consider the effect of the Cone of Uncertainty on the accuracy of your estimate. Your estimate cannot have more accuracy than is possible at your project’s current position within the Cone.
  • You have to narrow uncertainty and variability of a project if you want to estimate correctly.
  • The Cone of Uncertainty doesn't narrow itself; it narrows only when you make decisions that eliminate sources of variability in the project.
  • If the project is not well controlled, or if the estimators aren't very skilled, estimates can fail to improve. The next figure shows what happens when the project doesn't focus on reducing variability—the uncertainty isn't a Cone, but rather a Cloud that persists to the end of the project. The issue isn't really that the estimates don't converge; the issue is that the project itself doesn't converge—that is, it doesn't drive out enough variability to support more accurate estimates.
  • After making decisions that eliminate some variability from the project, the Cone will narrow like this:

  • Account for the Cone of Uncertainty by using predefined uncertainty ranges in your estimates (a small worked example follows this list).
    Scoping error by phase (possible error on the low side / possible error on the high side / range of high to low estimates):
    • Initial Concept: 0.25x (-75%) / 4.0x (+300%) / 16x
    • Approved Product Definition: 0.50x (-50%) / 2.0x (+100%) / 4x
    • Requirements Complete: 0.67x (-33%) / 1.5x (+50%) / 2.25x
    • User Interface Design Complete: 0.80x (-20%) / 1.25x (+25%) / 1.6x
    • Detailed Design Complete (for sequential projects): 0.90x (-10%) / 1.10x (+10%) / 1.2x
    Source: Adapted from Software Estimation with Cocomo II (Boehm et al. 2000).
  • Account for the Cone of Uncertainty by having one person create the “how much” part of the estimate and a different person create the “how uncertain” part of the estimate.
  • Never make a commitment in the early stages of the Cone of Uncertainty. Meaningful commitments are not possible in the early, wide part of the Cone. Effective organizations delay their commitments until they have done the work to force the Cone to narrow. Meaningful commitments in the early-middle part of the project (about 30% of the way in) are possible and appropriate.
  • How does the Cone of Uncertainty relate to iterative development? (See the Hidden Gems section.)
  • Don’t expect better estimation practices alone to provide more accurate estimates for chaotic projects. You can’t accurately estimate an out-of-control process. As a first step, fixing the chaos is more important than improving the estimates.
  • One of the most common sources of estimation error is forgetting to include necessary tasks in the project estimates.
  • Developers often estimate optimistically. So, don't reduce developer estimates—they're probably too optimistic already.
  • Avoid having “control knobs” on your estimates. While control knobs might give you a feeling of better accuracy, they usually introduce subjectivity and degrade actual accuracy.
  • COCOMO II has many control knobs, which makes the chance of estimation error quite high.
  • Don’t give off-the-cuff estimates. Even a 15-minute estimate will be more accurate.
  • Accuracy is not the same as precision; in the software estimation world they are very different. As an example, airline schedules are precise to the minute, but they are not very accurate. Measuring people's heights in whole meters might be accurate, but it would not be at all precise.
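As the small worked example promised above: suppose your nominal estimate at the Approved Product Definition milestone is 100 staff months (an assumed figure, just for illustration).

// Applying the Approved Product Definition range (0.50x to 2.0x) from the table above:
double nominalEstimate = 100.0;              // staff months
double lowBound  = nominalEstimate * 0.50;   //  50 staff months
double highBound = nominalEstimate * 2.00;   // 200 staff months
// The honest answer at this point in the Cone is "50 to 200 staff months",
// a 4x spread, not a single number.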

Hidden Gems

Here I will introduce some excerpts that I rate as hidden gems from this chapter.

  • Gem 1:

Suppose you’re developing an order-entry system and you haven’t yet pinned down the requirements for entering telephone numbers. Some of the uncertainties that could affect a software estimate from the requirements activity through release include the following:

  • When telephone numbers are entered, will the customer want a Telephone Number Checker to check whether the numbers are valid?
  • If the customer wants the Telephone Number Checker, will the customer want the cheap or expensive version of the Telephone Number Checker? (There are typically 2-hour, 2-day, and 2-week versions of any particular feature—for example, U.S.-only versus international phone numbers.)
  • If you implement the cheap version of the Telephone Number Checker, will the customer later want the expensive version after all?
  • Can you use an off-the-shelf Telephone Number Checker, or are there design constraints that require you to develop your own?
  • How will the Telephone Number Checker be designed? (Typically there is at least a factor of 10 difference in design complexity among different designs for the same feature.)
  • How long will it take to code the Telephone Number Checker? (There can be a factor of 10 difference—or more—in the time that different developers need to code the same feature.)
  • Do the Telephone Number Checker and the Address Checker interact? How long will it take to integrate the Telephone Number Checker and the Address Checker?
  • What will the quality level of the Telephone Number Checker be? (Depending on the care taken during implementation, there can be a factor of 10 difference in the number of defects contained in the original implementation.)
  • How long will it take to debug and correct mistakes made in the implementation of the Telephone Number Checker? (Individual performance among different programmers with the same level of experience varies by at least a factor of 10 in debugging and correcting the same problems.)

As you can see just from this short list of uncertainties, potential differences in how a single feature is specified, designed, and implemented can introduce cumulative differences of a hundredfold or more in implementation time for any given feature. When you combine these uncertainties across hundreds or thousands of features in a large feature set, you end up with significant uncertainty in the project itself.

  • Gem 2:

The Cone of Uncertainty and Iterative Development

Applying the Cone of Uncertainty to iterative projects is somewhat more involved than applying it to sequential projects is.

If you’re working on a project that does a full development cycle each iteration—that is, from requirements definition through release—you’ll go through a miniature Cone on each iteration. Before you do the requirements work for the iteration, you’ll be at the Approved Product Definition point in the Cone, subject to 4x variability from high to low estimates. With short iterations (less than a month), you can move from Approved Product Definition to Requirements Complete and User Interface Design Complete in a few days, reducing your variability from 4x to 1.6x. If your schedule is immovable, the 1.6x variability will apply to the specific features you can deliver in the time available, rather than to the effort or schedule. There are estimation advantages that flow from short iterations, which are discussed in Section 8.4, “Using Data from Your Current Project.”

What you give up with approaches that leave requirements undefined until the beginning of each iteration is long-range predictability about the combination of cost, schedule, and features you’ll deliver several iterations down the road. As Chapter 3, “Value of Accurate Estimates,” discussed, your business might prioritize that flexibility highly, or it might prefer that your projects provide more predictability.

The alternative to total iteration is not no iteration. That option has been found to be almost universally ineffective. The alternatives are less iteration or different iteration.

Many development teams settle on a middle ground in which a majority of requirements are defined at the front end of the project, but design, construction, test, and release are performed in short iterations. In other words, the project moves sequentially through the User Interface Design Complete milestone (about 30% of the calendar time into the project) and then shifts to a more iterative approach from that point forward. This drives down the variability arising from the Cone to about ±25%, which allows for project control that is good enough to hit a target while still tapping into major benefits of iterative development. Project teams can leave some amount of planned time for as-yet-to-be-determined requirements at the end of the project. That introduces a little bit of variability related to the feature set, which in this case is positive variability because you’ll exercise it only if you identify desirable features to implement. This middle ground supports long-range predictability of cost and schedule as well as a moderate amount of requirements flexibility.

  • Gem 3:

Project teams are sometimes trapped by off-the-cuff estimates. Your boss asks, for example, “How long would it take to implement print preview on the Gigacorp Web site?” You say, “I don’t know. I think it might take about a week. I’ll check into it.” You go off to your desk, look at the design and code for the program you were asked about, notice a few things you’d forgotten when you talked to your manager, add up the changes, and decide that it would take about five weeks. You hurry over to your manager’s office to update your first estimate, but the manager is in a meeting. Later that day, you catch up with your manager, and before you can open your mouth, your manager says, “Since it seemed like a small project, I went ahead and asked for approval for the print-preview function at the budget meeting this afternoon. The rest of the budget committee was excited about the new feature and can’t wait to see it next week. Can you start working on it today?”

I’ve found that the safest policy is not to give off-the-cuff estimates.

  • Gem 4:

In casual conversation, people tend to treat “accuracy” and “precision” as synonyms. But for estimation purposes, the distinctions between these two terms are critical.

Accuracy refers to how close to the real value a number is. Precision refers merely to how exact a number is. In software estimation, this amounts to how many significant digits an estimate has. A measurement can be precise without being accurate, and it can be accurate without being precise. The single digit 3 is an accurate representation of pi to one significant digit, but it is not precise. 3.37882 is a more precise representation of pi than 3 is, but it is not any more accurate.

Airline schedules are precise to the minute, but they are not very accurate. Measuring people’s heights in whole meters might be accurate, but it would not be at all precise.



I have just watched a very interesting, useful, and short learning video. The video highlights the new features of VS 2010.

You can watch it here. It is only 15 minutes and you will be done!

The best feature I have seen is the very powerful support for testing, quality control, and test-first development in the new IDE. Really interesting.

I also liked the new drag-and-drop data binding feature in WPF and Silverlight applications. Really, WOW.

Thank you Microsoft for this new baby!
