
Archive for April, 2010



Older versions of Visual Studio, such as VS 2003 and VS 2005, were each tied to a single version of the .NET Framework. This was annoying because you had to run multiple versions of VS side by side to build applications targeting different frameworks, which wasn't very practical at the time!

VS 2008 introduced a great feature called multi-targeting, which lets developers build applications for several framework versions from the same IDE. So I could develop one program against .NET 2.0 and use the same VS to develop another against .NET 3.5, or even upgrade the old one to .NET 3.5. Great feature, huh!

The problem was that all these framework versions ran on the same CLR; they differed only in their class libraries. So VS 2008 mainly filtered the available assemblies and project templates according to the developer's framework choice, while everything else, such as compiling and debugging, ran on the same CLR.

This wasn't perfect, because the IntelliSense in VS 2008 always showed the .NET 3.5 libraries even when you were targeting .NET 2.0! That made it easy for developers targeting .NET 2.0 to accidentally add code snippets that are supported only in .NET 3.5.

VS 2010 comes to the rescue. Now you can safely develop programs against many different frameworks, because IntelliSense has been improved to show only what your target framework supports.

Enough talking; let's see some screenshots. I will create two web applications targeting different frameworks. The first one targets .NET 2.0:

And the second one will target .NET 4.0:

Now we have two web applications in our Solution Explorer:

Now let's examine some differences. DotNet2 is the startup project, so we will run its page and open the VS 2010 integrated development server information to see the following:

But when we mark DotNet4 as the startup project and run it, the server information is different:

As you can see, the difference is a clean separation between the old CLR and the new one. Good shot!

Let's see another difference, this time in the VS Toolbox. We will open the default page of the DotNet2 web application and look at the Data tab of the Toolbox to see which controls are supported in ASP.NET 2.0:

And for the DotNet4 project, we see a different control list:

As we can see, VS 2010 now filters the Toolbox to show different controls according to the targeted .NET Framework. Amazing!

Another one is the property grid. Let's see the difference: drag a button onto the default page in both projects and see what the property grid shows. In the DotNet2 project:

And for DotNet4 project:

As we can see, VS 2010 correctly filters the properties to those supported in each framework. Nice!

The last one is IntelliSense, which we talked about at the beginning. Let's try writing a statement like (Response.Re) in the two projects and see the difference. In the DotNet2 project:

And in DotNet4 project:

As we can see, VS 2010 automatically filters the methods and properties to those supported in the targeted framework, to prevent you from accidentally writing something inappropriate.
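To make the filtering concrete, here is a minimal code-behind sketch (the page and file names are hypothetical). Response.Redirect has been available since the earliest framework versions, so both projects list it, while Response.RedirectPermanent was added in ASP.NET 4.0, so IntelliSense offers it only in the DotNet4 project:

using System;
using System.Web.UI;

public partial class _Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Available since .NET 1.0 -- IntelliSense lists it in both projects:
        // Response.Redirect("Target.aspx");

        // Added in ASP.NET 4.0 -- IntelliSense lists it only in the DotNet4
        // project; in the DotNet2 project it is hidden and would not compile:
        Response.RedirectPermanent("Target.aspx");
    }
}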

A very good addition to this new version, VS 2010.




In old editions of VS, to find all the references to a specific variable within the same file, for example, you had to do this:

Right-click the variable -> click Find All References

VS would then list all the references it found, whether inside the current file or not, in a separate Results window.

Now, with the new theme of VS 2010, which is "keep the developer in focus," you can accomplish that scenario just by placing the cursor on the variable in question and letting VS 2010 do it for you out of the box ;). See the next image:

References Highlighting

FirstName property is highlighted
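A minimal sketch of the kind of code behind that screenshot (the class here is my reconstruction, not the original): place the cursor on any occurrence of FirstName and VS 2010 highlights every other reference in the file.

public class Person
{
    public string FirstName { get; set; }   // cursor on FirstName here...
    public string LastName { get; set; }

    public string FullName
    {
        // ...also highlights the reference below:
        get { return FirstName + " " + LastName; }
    }
}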




Yesterday I asked my IT administrator to set up a second monitor for me, with a new graphics card supporting dual screens, so I could experience the new multi-monitor support in VS 2010. Really amazing, guys!

Multi Screen Support

Multi Screen Support on my Machine

All you have to do is undock any window in VS 2010 and drag it onto the other screen. This gives you many benefits:

  1. It keeps you in focus, as you can open the designer and the code-behind file of a form at the same time.
  2. If you are pairing with a colleague and you want to do something while he reviews some code on the other screen, you can do it easily.

Really, a very good and useful feature in the new IDE.




If we look at the theme of C# 3.0–3.5, we see that functional programming was introduced through the LINQ features.

And if we look at the current release, we see that this year's theme is the dynamic keyword.

The dynamic keyword gives you the ability to create objects dynamically at runtime, to call members that you know will exist but that are resolved at runtime rather than compile time, and to interact with dynamic languages such as Python.

Let’s get started with this new keyword.

Assume that we have the following Person class:

public class Person
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get
        {
            return Title + " " + FirstName + " " + LastName;
        }
    }

    public Person()
    {
    }

    public Person(string title, string firstName, string lastName)
    {
        Title = title;
        FirstName = firstName;
        LastName = lastName;
    }

    public void Print()
    {
        Console.WriteLine(FullName);
    }
}

In the Main method of a console application, if I would like to instantiate a new object from the Person class I would do this:

Person person = new Person();
person.Title = "Mr.";
person.FirstName = "Ahmed";
person.LastName = "Abdul Moniem";
Console.WriteLine(person.FullName);

In the last example, the compiler knows at compile time exactly what Person is and what its type is, so it will generate an error if you try to access a non-existent property or method of the Person class.

Let's add a couple of lines to demonstrate the dynamic type:

Person person = new Person();
person.Title = "Mr.";
person.FirstName = "Ahmed";
person.LastName = "Abdul Moniem";
Console.WriteLine(person.FullName);

dynamic dynamicPerson = person;
dynamicPerson.FirstName = "Mohamed";

Console.WriteLine(dynamicPerson.FullName);

As you can see, I have created a new variable called dynamicPerson whose type is dynamic, which tells the compiler that it will be validated at runtime. I made dynamicPerson point to the person object, then changed a property through dynamicPerson and used it to print the full name on the screen.
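The flip side of runtime validation is that the compiler no longer protects you. A minimal sketch, assuming the same Person class; the misspelled member name compiles fine but fails when the line executes:

dynamic dynamicPerson = new Person();

// Compiles, but throws Microsoft.CSharp.RuntimeBinder.RuntimeBinderException
// at runtime, because Person has no member called FirstNam:
dynamicPerson.FirstNam = "Mohamed";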

Aside from that caveat, the results so far are the same as with normal object initialization. So, what is the benefit of the dynamic keyword?

Let's add the following using statement at the top of our file:

using System.Dynamic;

And in our Main method we will write something like this:

dynamic employee = new ExpandoObject();
employee.FirstName = "Ahmed";
employee.LastName = "Abdul Moniem";
employee.FullName = employee.FirstName + " " + employee.LastName;
Console.WriteLine(employee.FullName);

Here is the magic of the dynamic keyword: you can build up an object whose members are not declared in any class at compile time. In the last example we created a new employee of type ExpandoObject (in System.Dynamic), which supports this dynamic behavior in the language.

While I was coding I realized that an employee must have a first name so I wrote:

employee.FirstName = "Ahmed";

And again I realized that the employee must have a last name, so I wrote:

employee.LastName = "Abdul Moniem";

As you may guess, I also realized that I want to add a new property FullName, so I wrote:

employee.FullName = employee.FirstName + " " + employee.LastName;

Then I printed the result, and voilà! Everything works exactly as in the first Person example in this post, even though I don't have any Employee class at all! This is the magic of the dynamic keyword.

Let's add some sugar to my employee object:

employee.Print = new Action(delegate() { Console.WriteLine(employee.FullName); });
employee.Print();

As you can see, I have added a new method that runs at runtime without any error; everything also works fine at compile time, as long as I am using the dynamic keyword.

You can also use the famous lambda expression syntax to create your methods, so the last example can be written as:

employee.Print = new Action(() => Console.WriteLine(employee.FullName));
employee.Print();
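One more useful detail: ExpandoObject also implements IDictionary<string, object>, so the members we attached at runtime can be enumerated. A short sketch, continuing the employee example above (note the extra using System.Collections.Generic; at the top of the file):

foreach (KeyValuePair<string, object> member in (IDictionary<string, object>)employee)
{
    // Prints FirstName, LastName, FullName and Print (the Action delegate).
    Console.WriteLine(member.Key + " = " + member.Value);
}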

Until now I have just been demonstrating some of the capabilities of this new keyword, but I haven't revealed its real power yet!

Let's now consider a more realistic case. What if I have a type from which I want to instantiate an object at runtime, and then call a method on that object, still at runtime? This means I don't know the type at compile time, which means I have to use reflection, my old friend.

// Assuming that Person type is not known until runtime
Type t = typeof(Person);
object person = Activator.CreateInstance(t, null);

The CreateInstance method returns object, not Person, which means I can't do this:

// Compile time error, no method named Print() in object!
person.Print();

So, what do I have to do? I have to use the InvokeMember method to call Print at runtime. The reflection engine will know at runtime that I have a Person object on which Print can be called, and it will call it on my behalf.

t.InvokeMember("Print", System.Reflection.BindingFlags.InvokeMethod, null, person, null);

And here is the complete code:

// Assuming that Person type is not known until runtime
Type t = typeof(Person);
object person = Activator.CreateInstance(t, null);
t.InvokeMember("Print", System.Reflection.BindingFlags.InvokeMethod, null, person, null);

Using the dynamic keyword, it becomes easier and cleaner to implement a scenario like this:

// Assuming that Person type is not known until runtime
Type t = typeof(Person);
dynamic person = Activator.CreateInstance(t, null);
person.Print();

Let's now delve into a more interesting topic: how can you run a dynamic language like Python from C#?

First of all, you will need to install IronPython and reference its assemblies in your project.

Now we will write a simple Python script in a file called Math.py, for example, add it to your project root, and set its Copy to Output Directory property to Copy Always:

def Add(x, y):
    return x + y

Now we will execute this python script:

using IronPython.Hosting;   // from the IronPython assemblies referenced above

var py = Python.CreateRuntime();
dynamic test = py.UseFile("Math.py");
dynamic sum = test.Add(5, 10);
Console.WriteLine(sum);

And you will simply see the result, 15, printed on the screen! Nice, right?!

Finally, I have highlighted just some of the features of the dynamic keyword and how you can use it in many different scenarios. I hope you grasp all the benefits of this new keyword.




One of the new features of C# 4.0 that I like very much is named and optional parameters. This feature simplifies the old concepts of method overloading and constructor overloads.

It also makes your code cleaner and more concise, and you will no longer be obligated to duplicate code across many method overloads to achieve the same functionality, which in turn improves the maintainability of your application.

It is also a big enhancement in COM interop, because optional parameters no longer obligate you to supply every argument of a COM interface method, which is often a very large number of arguments.

Let’s first demonstrate the old days of C#. Consider that we have a simple class Person like the following:

public class Person
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get
        {
            return FirstName + " " + LastName;
        }
    }

    public Person()
    {
        Title = "Mr. ";
        FirstName = "Ahmed";
        LastName = "Abdul Moniem";
    }

    public Person(string firstName, string lastName)
    {
        Title = "Mr. ";
        FirstName = firstName;
        LastName = lastName;
    }

    public Person(string title, string firstName, string lastName)
    {
        Title = title;
        FirstName = firstName;
        LastName = lastName;
    }
}

As you can see, we have many overloads of the Person constructor. If you want to initialize the data members with the default values, you just use the default constructor; but if you want to change one of the defaults, you use another overload.

Now let's consider a deadlock example: assume you want to create a new Person instance and change only the default value of the LastName property. You can create a new constructor overload that satisfies this need: it takes only the last name, assigns it to the LastName property, and keeps the default values of both Title and FirstName in place. It would look like this:

public Person(string lastName)
{
    Title = "Mr. ";
    FirstName = "Ahmed";
    LastName = lastName;
}

And we can create a new person object using this constructor like this:

Person person = new Person("Abdul Moniem");

Perfect! OK, here is the deadlock: what if I want to do the same thing with the first name? I would create another constructor overload like this:

public Person(string firstName)
{
     Title = "Mr. ";
     FirstName = firstName;
     LastName = "Abdul Moniem";
}

But unfortunately, this is not allowed. Overloads must differ in the number and/or types of their parameters, so we can't have two constructors that each take a single string parameter!

Even choosing the last constructor, which supplies all the data members, and passing the default values by hand is not always possible: you may not know what the default values are (which is most of the cases, especially when you are not the author of the class and are just using it). It is also not a neat solution, because you are always obligated to pass every argument to the constructor.
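To make the restriction concrete, here is a minimal sketch that does not compile:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public Person(string lastName) { LastName = lastName; }

    // error CS0111: Type 'Person' already defines a member called 'Person'
    // with the same parameter types
    public Person(string firstName) { FirstName = firstName; }
}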

Here the new named and optional parameters feature comes to the rescue. We will modify the Person class as follows, naming it PersonEx just for differentiation:

public class PersonEx
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get
        {
            return Title + " " + FirstName + " " + LastName;
        }
    }

    public PersonEx(string title = "Mr. ", string firstName = "Ahmed", string lastName = "Abdul Moniem")
    {
        Title = title;
        FirstName = firstName;
        LastName = lastName;
    }
}

As you can see, we now have only one constructor that satisfies all our needs. This is the syntax of optional parameters in any method or constructor:

For each parameter you supply a default value, in the form (Type parameterName = defaultValue), as in (string title = "Mr. "). This tells the compiler that the parameter is optional, with a default value of "Mr. ".

That is the optional parameters part. What about the named ones?

When you want to create a new object from PersonEx you can do the following:

PersonEx person1 = new PersonEx(lastName: "Mohamed");
PersonEx person2 = new PersonEx(firstName: "Mostafa");
PersonEx person3 = new PersonEx(firstName: "Mostafa", lastName: "Mohamed");
PersonEx person4 = new PersonEx("Miss. ", "Mona", "Mansour");

As you can see, the syntax of named parameters takes the form (parameterName: parameterValue), as in (lastName: "Mohamed").

The deadlock is solved, right? You can use any combination you want without creating many overloads, and with super flexibility while creating objects.

One important remaining point: you are obligated to put optional parameters at the end of the parameter list, as with the params keyword. So the following syntax is wrong and gives a compile-time error:

public PersonEx(string title = "Mr. ", string firstName, string lastName = "Abdul Moniem")
{
     Title = title;
     FirstName = firstName;
     LastName = lastName;
}

but this is correct; firstName becomes a required parameter and the other two are optional (you can make methods hybrid like this):

public PersonEx(string firstName, string title = "Mr. ", string lastName = "Abdul Moniem")
{
     Title = title;
     FirstName = firstName;
     LastName = lastName;
}
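Calls to this hybrid constructor then look like this (a short sketch):

// firstName is required; title and lastName fall back to their defaults.
PersonEx p1 = new PersonEx("Mostafa");
PersonEx p2 = new PersonEx("Mona", title: "Miss. ");
PersonEx p3 = new PersonEx("Mostafa", lastName: "Mohamed");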




I had been eagerly waiting for the new TFS 2010 because of its super features, which solve many drawbacks I faced in the previous version.

One of the features I was waiting for most is the ability to install TFS 2010 on client machines running client operating systems such as Windows 7. In old versions, I was tied to installing it only on server machines running server operating systems such as Windows Server 2003 or Windows Server 2008.

This was really annoying for me, because I use my personal machine as a personal lab and run TFS on it, so I was obligated to install Windows 2003. (I know I could install Windows 7, for example, and run Windows Server 2003 in a virtual machine, but that is only a good solution on reasonably fast PCs.)

And here we go: I installed TFS 2010 on my client machine, and I was really happy about that.

After two days, I decided to start learning and using the new features of this giant, but I was shocked to find that there is no support for SharePoint or Reporting Services on client machines!

In the install manual you can read the following:

Client operating systems do not support integration with SharePoint Products or the reporting feature. If you want to use either one of these features, you must install Team Foundation Server on a server operating system.

Really, this is so bad!

I think I will switch to the second solution of using a virtual machine.

So, don’t ever install TFS 2010 on a client machine if you need those two features.




Many parameters influence our estimates of software projects. This chapter discusses the different influences on estimates that must be taken into consideration while estimating.

What have I learned?

Project Size

  • The largest driver in a software estimate is the size of the software being built, because there is more variation in the size than in any other factor.
  • A system consisting of 1,000,000 lines of code (LOC) requires dramatically more effort than a system consisting of only 100,000 LOC.
  • These comments about software size being the largest cost driver might seem obvious, yet organizations routinely violate this fundamental fact in two ways:
    • Costs, effort, and schedule are estimated without knowing how big the software will be.
    • Costs, effort, and schedule are not adjusted when the size of the software is consciously increased (that is, in response to change requests).
  • So we have to invest an appropriate amount of effort assessing the size of the software that will be built. The size of the software is the single most significant contributor to project effort and schedule.
  • What is the difference between economy of scale and diseconomy of scale?
    • An economy of scale is something like, “If we build a larger manufacturing plant, we’ll be able to reduce the cost per unit we produce.” An economy of scale implies that the bigger you get, the smaller the unit cost becomes.
    • A diseconomy of scale is the opposite. In software, the larger the system becomes, the greater the cost of each unit. If software exhibited economies of scale, a 100,000-LOC system would be less than 10 times as costly as a 10,000-LOC system. But the opposite is almost always the case.
  • As you can see from the next graph, in this example, the 10,000-LOC system would require 13.5 staff months. If effort increased linearly, a 100,000-LOC system would require 135 staff months, but it actually requires 170 staff months.

  • As the last graph is drawn, the effect of the diseconomy of scale doesn't look very dramatic. Indeed, within the 10,000-LOC to 100,000-LOC range, the effect is usually not all that dramatic. But two factors make the effect more dramatic: one is a greater difference in project size, and the other is project conditions that degrade productivity more quickly than average as project size increases.

  • In the last graph, you can see that the worst-case effort growth increases much faster than the nominal effort growth, and that the effect becomes much more pronounced at larger project sizes. Along the nominal effort-growth curve, effort at 100,000 lines of code is 13 times what it is at 10,000 lines of code, rather than 10 times. At 1,000,000 LOC, effort is 160 times the 10,000-LOC effort, rather than 100 times (see the sketch after this list for the arithmetic).
  • The worst-case growth is much worse. Effort on the worst-case curve at 100,000 LOC is 17 times what it is at 10,000 LOC, and at 1,000,000 LOC it isn’t 100 times as large—it’s 300 times as large!
  • Don’t assume that effort scales up linearly as project size does. Effort scales up exponentially.
  • Use software estimation tools to compute the impact of diseconomies of scale. (see Hidden Gems section).
  • When to ignore diseconomies? If you’ve completed previous projects that are about the same size as the project you’re estimating—defined as being within a factor of 3 from largest to smallest— you can safely use a ratio-based estimating approach, such as lines of code per staff month, to estimate your new project.
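To make the diseconomy concrete, here is a toy calculation in C#. This is not Cocomo II itself; the coefficient and exponent are assumptions fitted to the numbers quoted above (13.5 staff months at 10,000 LOC and about 170 at 100,000 LOC imply a growth exponent near 1.1):

using System;

class DiseconomyOfScale
{
    static void Main()
    {
        // Toy model: Effort = a * KLOC^b, with b > 1 modeling the diseconomy.
        double b = 1.1;                      // assumed growth exponent
        double a = 13.5 / Math.Pow(10, b);   // calibrated so 10 KLOC = 13.5 staff months

        foreach (double kloc in new[] { 10.0, 100.0, 1000.0 })
        {
            double staffMonths = a * Math.Pow(kloc, b);
            Console.WriteLine("{0,6:N0} KLOC -> {1,6:N0} staff months", kloc, staffMonths);
        }
        // Linear scaling would predict 135 staff months at 100 KLOC;
        // the power law gives about 170, and the gap keeps widening with size.
    }
}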

Software Kind

  • Factor the kind of software you develop into your estimate. The kind of software you’re developing is the second-most significant contributor to project effort and schedule.
  • For example, a team developing an intranet system for internal use might generate code 10 to 20 times faster than a team working on an avionics project, real-time project, or embedded systems project.

Personnel Factors

  • Personnel factors also exert significant influence on project outcomes.

  • Effect of personnel factors on project effort. Depending on the strength or weakness in each factor, the project results can vary by the amount indicated—that is, a project with the worst requirements analysts would require 42% more effort than nominal, whereas a project with the best analysts would require 29% less effort than nominal.
  • Two implications here:
    • You can’t accurately estimate a project if you don’t have some idea of who will be doing the work.
    • The most accurate estimation approach will depend on whether you know who specifically will be doing the work that’s being estimated.

Programming Language

  • First, as the last graph suggested, the project team's experience with the specific language and tools that will be used on the project has about a 40% impact on the overall productivity rate of the project.

  • Second, some languages generate more functionality per line of code than others. For example, C# or Java are more productive than C.

  • A third factor related to languages is the richness of the tool support and environment associated with the language. According to Cocomo II, the weakest tool set and environment will increase total project effort by about 50% compared to the strongest tool set and environment.
  • A final factor related to programming language is that developers working in interpreted languages tend to be more productive than those working in compiled languages, perhaps as much as a factor of 2.

Other Project Influences

Hidden Gems

Here I will introduce some excerpts that I rate as hidden gems in this chapter.

  • Gem 1:

For software estimation, the implications of diseconomies of scale are a case of good news, bad news. The bad news is that if you have large variations in the sizes of projects you estimate, you can’t just estimate a new project by applying a simple effort ratio based on the effort from previous projects. If your effort for a previous 100,000-LOC project was 170 staff months, you might figure that your productivity rate is 100,000/170, which equals 588 LOC per staff month. That might be a reasonable assumption for another project of about the same size as the old project, but if the new project is 10 times bigger, the estimate you create that way could be off by 30% to 200%.

There’s more bad news: There isn’t a simple technique in the art of estimation that will account for a significant difference in the size of two projects. If you’re estimating a project of a significantly different size than your organization has done before, you’ll need to use estimation software that applies the science of estimation to compute the estimate for the new project based on the results of past projects. My company provides a free software tool called Construx® Estimate that will do this kind of estimate. You can download a copy at www.construx.com/estimate.

  • Gem 2:
Table 5-5: Cocomo II Adjustment Factors

| Cocomo II Factor | Influence | Observation |
| --- | --- | --- |
| Applications (Business Area) Experience | 1.51 | Teams that aren't familiar with the project's business area need significantly more time. This shouldn't be a surprise. |
| Architecture and Risk Resolution | 1.38 [*] | The more actively the project attacks risks, the lower the effort and cost will be. This is one of the few Cocomo II factors that is controllable by the project manager. |
| Database Size | 1.42 | Large, complex databases require more effort project-wide. Total influence is moderate. |
| Developed for Reuse | 1.31 | Software that is developed with the goal of later reuse can increase costs as much as 31%. This doesn't say whether the initiative actually succeeds. Industry experience has been that forward-looking reuse programs often fail. |
| Extent of Documentation Required | 1.52 | Too much documentation can negatively affect the whole project. Impact is moderately high. |
| Language and Tools Experience | 1.43 | Teams that have experience with the programming language and/or tool set work moderately more productively than teams that are climbing a learning curve. This is not a surprise. |
| Multi-Site Development | 1.56 | Projects conducted by a team spread across multiple sites around the globe will take 56% more effort than projects conducted by a team co-located at one facility. Projects that are conducted at multiple sites, including outsourced or offshore projects, need to take this effect seriously. |
| Personnel Continuity (turnover) | 1.59 | Project turnover is expensive—in the top one-third of influential factors. |
| Platform Experience | 1.40 | Experience with the underlying technology platform affects overall project performance moderately. |
| Platform Volatility | 1.49 | If the platform is unstable, development can take moderately longer. Projects should weigh this factor in their decision about when to adopt a new technology. This is one reason that systems projects tend to take longer than applications projects. |
| Precedentedness | 1.33 [*] | Refers to how "precedented" (we usually say "unprecedented") the application is. Familiar systems are easier to create than unfamiliar systems. |
| Process Maturity | 1.43 [*] | Projects that use more sophisticated development processes take less effort than projects that use unsophisticated processes. Cocomo II uses an adaptation of the CMM process maturity model to apply this criterion to a specific project. |
| Product Complexity | 2.38 | Product complexity (software complexity) is the single most significant adjustment factor in the Cocomo II model. Product complexity is largely determined by the type of software you're building. |
| Programmer Capability (general) | 1.76 | The skill of the programmers has an impact of a factor of almost 2 on overall project results. |
| Required Reliability | 1.54 | More reliable systems take longer. This is one reason (though not the only reason) that embedded systems and life-critical systems tend to take more effort than other projects of similar sizes. In most cases, your marketplace determines how reliable your software must be. You don't usually have much latitude to change this. |
| Requirements Analyst Capability | 2.00 | The single largest personnel factor—good requirements capability—makes a factor of 2 difference in the effort for the entire project. Competency in this area has the potential to reduce a project's overall effort from nominal more than any other factor. |
| Requirements Flexibility | 1.26 [*] | Projects that allow the development team latitude in how they interpret requirements take less effort than projects that insist on rigid, literal interpretations of all requirements. |
| Storage Constraint | 1.46 | Working on a platform on which you're butting up against storage limitations moderately increases project effort. |
| Team Cohesion | 1.29 [*] | Teams with highly cooperative interactions develop software more efficiently than teams with more contentious interactions. |
| Time Constraint | 1.63 | Minimizing response time increases effort across the board. This is one reason that systems projects and real-time projects tend to consume more effort than other projects of similar sizes. |
| Use of Software Tools | 1.50 | Advanced tool sets can reduce effort significantly. |

[*] Exact effect depends on project size. Effect listed is for a project size of 100,000 LOC.


Finally

We have finished the first part of this book, titled (Part I: Critical Estimation Concepts). In subsequent posts we will discuss the different available estimation techniques. Stay with us 🙂


