
Archive for the ‘Currently Reading’ Category



I have just finished the book “Beginning Silverlight 4 in C#” by Robert Lair, published by Apress, and I would like to share my humble opinion about it.

Really and simply, it is a great book for grasping many Silverlight 4 concepts in just 400 pages. I finished the book in four days (3-5 hours / day), and of course there are reasons for that; a couple of them are:

  1. I am not a complete beginner in this technology, as I have used WPF before, so I didn’t need much time to grasp the concepts quickly.
  2. I needed to finish this book as quickly as possible to cover most of the features and abilities of the new version of Silverlight. In fact, my team is going to develop a business application using Silverlight, and I was in real need of seeing the full picture of this technology.

Enough talking about myself, let’s talk about the book.

This book talks about:

  • Why we need Silverlight and how it fits into the world of RIAs. A good, brief introduction that shows the reader why this new technology is worth reading about.
  • The new features of VS 2010, how it supports Silverlight 4, and what you need to get started. BTW, there are some posts on this blog about the new features of VS 2010; you can find them in the “Cutting Edge” column.
  • Layout management in Silverlight. It covers some of the layout controls, such as Grid, WrapPanel, DockPanel, Canvas, and StackPanel, and how to use them to lay out your applications.
  • Some of the Silverlight controls, like TextBox, Button, and TextBlock, with basic examples for each.
  • List controls like DataGrid and ListBox, as well as the data binding mechanisms, which are extremely powerful in the world of XAML.
  • The Silverlight Toolkit and what it contains.
  • How you can access data in Silverlight (WCF is the recommended approach), and how you can use sockets for data access and network communication.
  • The navigation framework, which is quite similar to master pages and content placeholders in ASP.NET; also how to pass data between pages, map URIs, and route URIs to custom ones.
  • Isolated storage and how you can use it for caching, saving and retrieving files, etc. (see the sketch after this list).
  • How to easily access devices like the webcam and microphone from a Silverlight application.
  • An introduction to Expression Blend and how it fits into the world of Silverlight.
  • How you can style your application in a way very similar to CSS in HTML pages.
  • How to animate objects easily and how to use Expression Blend for this purpose, with some basic and nice examples.
  • How to create custom controls (although the book doesn’t cover how to create user controls). It gives a very good example of building a custom control from scratch.
  • How easy printing is in Silverlight. You can print the screen as it is, or customize the printed output to select portions of the screen or even an entirely new layout.
  • Finally, it talks briefly about important concepts and ideas you will need when deploying a Silverlight application, giving an idea about assembly caching, out-of-browser mode, and elevated trust.
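
For the isolated storage point above, here is a minimal sketch (my own, not taken from the book) of caching a string value in Silverlight isolated storage and reading it back; the SettingsCache class and its method names are hypothetical.

```csharp
using System.IO;
using System.IO.IsolatedStorage;

// Hypothetical helper: caches a string value in Silverlight isolated storage.
public static class SettingsCache
{
    public static void Save(string fileName, string value)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.OpenFile(fileName, FileMode.Create))
        using (var writer = new StreamWriter(stream))
        {
            writer.Write(value); // persisted per application, survives restarts
        }
    }

    public static string Load(string fileName)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (!store.FileExists(fileName))
                return null; // nothing cached yet

            using (var stream = store.OpenFile(fileName, FileMode.Open))
            using (var reader = new StreamReader(stream))
            {
                return reader.ReadToEnd();
            }
        }
    }
}
```

For example, SettingsCache.Save("last-search.txt", "silverlight") followed by SettingsCache.Load("last-search.txt") round-trips the value across application restarts, which is the kind of caching scenario the book describes.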

Really, I recommend this book to anyone who needs to know what Silverlight is about and what it can do.

Thank you Apress for not wasting my time 🙂




    Hello! Back again. It has been a while since our last review of this great book, Software Estimation.

    We have already reviewed the first part of the book, which discusses the critical concepts of software estimation.

    Now, we are moving on to another stage. This book now turns to a discussion of detailed estimation techniques that can be applied to specific estimation problems.

    This chapter is just an introduction that lays the groundwork for the upcoming chapters, which will discuss specific techniques.

    What have I learned?

    • There are many considerations you have to take into account before choosing the right estimation technique for your specific problem:
    1. What is being estimated?
      • Do you have features, and you want to  estimate schedule and effort?
      • Do you have budget and development time frame, and you want to estimate how many features can be delivered?
      • In this book, estimating size refers to estimating the scope of technical work of a given feature set—in units such as lines of code, function points, stories, or some other measure. Estimating features refers to estimating how many features can be delivered within schedule and budget constraints. These terms are not industry standards; I’m defining them here for the sake of clarity.
    2. Project Size
      • Small: <= 5 total technical staff. The best estimation techniques are “bottom-up” techniques based on estimates made by the individuals who will do the work.
      • Large: >= 25 people, lasting 6 to 12 months or more. In the early stages, the best estimation approaches tend to be “top-down” techniques based on algorithms and statistics. In the middle stages, a combination of top-down and bottom-up techniques based on the project’s own historical data will produce the most accurate estimates. In the later stages of large projects, bottom-up techniques will provide the most accurate estimates.
      • Medium: 5 to 25 people, lasting 3 to 12 months. Medium projects have the advantage of being able to use virtually all the estimation techniques that large projects can use, and several of the small-project techniques too.
    3. Software Development Style
      • For purposes of estimation, the two major development styles are sequential and iterative. Industry terminology surrounding iterative, Agile, and sequential projects can be confusing. For this book’s purposes, the primary difference between these kinds of projects is the percentage of requirements they define early in the project compared to the percentage they define after construction is underway.
      • Evolutionary prototyping: Iterative.
      • Extreme Programming: highly iterative.
      • Evolutionary delivery: normally practiced as iterative.
      • Staged delivery: Sequential.
      • Rational Unified Process (RUP): Sequential.
      • Scrum: Iterative from multi-sprint point of view.
    4. Development Stage
      • Early: On sequential projects, the early stage will be the period from the beginning of the project concept until requirements have been mostly defined. On iterative projects, early refers to the initial planning period.
      • Middle: It is the time between initial planning and early construction.
      • Late: Refers to the time from mid-construction through release.
    5. Accuracy Possible
      • The accuracy of a technique is a function partly of the technique, partly of whether the technique is being applied to a suitable estimation problem, and partly of when in the project the technique is applied.

        Some estimation techniques produce high accuracy but at high cost. Others produce lower accuracy, but at lower cost. Normally you’ll want to use the most accurate techniques available, but depending on the stage of the project and how much accuracy is possible at that point in the Cone of Uncertainty, a low-cost, low-accuracy approach can be appropriate.

    • Most of the remaining chapters in this book begin with tables that describe the applicability of techniques in the chapter. Here’s an example:
    Applicability of Techniques in this Chapter—SAMPLE

    Group Reviews
    • What’s estimated: Size, Effort, Schedule, Features
    • Size of project: M, L
    • Development stage: Early—Middle
    • Iterative or sequential: Both
    • Accuracy possible: Medium—High

    Calibration with Project-Specific Data
    • What’s estimated: Size, Effort, Schedule, Features
    • Size of project: S, M, L
    • Development stage: Middle—Late
    • Iterative or sequential: Both
    • Accuracy possible: High




    I have just finished a Manning book titled “jQuery in Action”, and I am here to share my opinion about it with you.

    Really, it gave me a good starting point with jQuery commands and utility functions. It also demonstrates many practical examples that exercise the core jQuery API. So, I think it is a good start for beginners who want to know what jQuery is.

    It also gave me a high-level view of the plugin capability in jQuery and introduced me to some famous plugins, like the Forms plugin, the UI plugin, the Live Query plugin, and the Dimensions plugin.

    But IMHO, it is only suitable for giving you highlights of the core jQuery API, and it doesn’t stand as a jQuery reference at all. So, if you expect to become a jQuery guru by reading this book, you will be disappointed.

    Also, I missed practical examples of how jQuery interacts with ASP.NET and how we can use them efficiently together. The book doesn’t mention ASP.NET at all, because it focuses on the core API.

    So, in my opinion it is a very basic jQuery book, suitable for learning how to write jQuery scripts and picking up the basic concepts that give you a starting point for reading more advanced material.




    Many parameters influence our estimates of software projects. This chapter discusses the different estimate influences that must be taken into consideration while making estimates.

    What have I learned?

    Project Size

    • The largest driver in a software estimate is the size of the software being built, because there is more variation in the size than in any other factor.
    • A system consisting of 1,000,000 lines of code (LOC) requires dramatically more effort than a system consisting of only 100,000 LOC.
    • These comments about software size being the largest cost driver might seem obvious, yet organizations routinely violate this fundamental fact in two ways:
      • Costs, effort, and schedule are estimated without knowing how big the software will be.
      • Costs, effort, and schedule are not adjusted when the size of the software is consciously increased (that is, in response to change requests).
    • So we have to invest an appropriate amount of effort assessing the size of the software that will be built. The size of the software is the single most significant contributor to project effort and schedule.
    • What is the difference between economy of scale and diseconomy of scale?
      • An economy of scale is something like, “If we build a larger manufacturing plant, we’ll be able to reduce the cost per unit we produce.” An economy of scale implies that the bigger you get, the smaller the unit cost becomes.
      • A diseconomy of scale is the opposite. In software, the larger the system becomes, the greater the cost of each unit. If software exhibited economies of scale, a 100,000-LOC system would be less than 10 times as costly as a 10,000-LOC system. But the opposite is almost always the case.
    • As you can see from the next graph, in this example, the 10,000-LOC system would require 13.5 staff months. If effort increased linearly, a 100,000-LOC system would require 135 staff months, but it actually requires 170 staff months.

    • As the last graph is drawn, the effect of the diseconomy of scale doesn’t look very dramatic. Indeed, within the 10,000 LOC to 100,000 LOC range, the effect is usually not all that dramatic. But two factors make the effect more dramatic. One factor is a greater difference in project size, and the other is project conditions that degrade productivity more quickly than average as project size increases.

    • In the last graph, you can see that the worst-case effort growth increases much faster than the nominal effort growth, and that the effect becomes much more pronounced at larger project sizes. Along the nominal effort growth curve, effort at 100,000 lines of code is 13 times what it is at 10,000 lines of code, rather than 10 times. At 1,000,000 LOC, effort is 160 times the 10,000-LOC effort, rather than 100 times.
    • The worst-case growth is much worse. Effort on the worst-case curve at 100,000 LOC is 17 times what it is at 10,000 LOC, and at 1,000,000 LOC it isn’t 100 times as large—it’s 300 times as large!
    • Don’t assume that effort scales up linearly as project size does. Effort scales up exponentially (see the sketch after this list).
    • Use software estimation tools to compute the impact of diseconomies of scale. (see Hidden Gems section).
    • When can you ignore diseconomies? If you’ve completed previous projects that are about the same size as the project you’re estimating—defined as being within a factor of 3 from largest to smallest—you can safely use a ratio-based estimating approach, such as lines of code per staff month, to estimate your new project.
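
    As a rough illustration of this diseconomy of scale, here is a minimal sketch (my own, not from the book, and not a calibrated Cocomo II model) that back-fits a power law effort = a * size^b to the chapter’s example numbers (about 13.5 staff months at 10,000 LOC and about 170 at 100,000 LOC); the coefficients are illustrative assumptions only.

```csharp
using System;

// Illustrative diseconomy-of-scale curve: effort grows as size^b with b > 1.
class DiseconomyOfScaleDemo
{
    // a and b are back-fitted illustrative values, not Cocomo II calibration.
    static double EstimateStaffMonths(double kloc, double a = 1.07, double b = 1.10)
        => a * Math.Pow(kloc, b);

    static void Main()
    {
        foreach (var kloc in new[] { 10.0, 100.0, 1000.0 })
        {
            Console.WriteLine($"{kloc,6:N0} KLOC -> {EstimateStaffMonths(kloc),6:N0} staff months");
        }
        // 100 KLOC comes out at roughly 13x the effort of 10 KLOC (not 10x):
        // effort grows faster than size, which is exactly the diseconomy of scale.
    }
}
```

    With b = 1.0 the growth would be linear; pushing b above 1.0 (or toward worst-case values) reproduces the much steeper curves the chapter describes.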

    Software Kind

    • Factor the kind of software you develop into your estimate. The kind of software you’re developing is the second-most significant contributor to project effort and schedule.
    • For example, a team developing an intranet system for internal use might generate code 10 to 20 times faster than a team working on an avionics project, real-time project, or embedded systems project.

    Personnel Factors

    • Personnel factors also exert significant influence on project outcomes.

    • Effect of personnel factors on project effort. Depending on the strength or weakness in each factor, the project results can vary by the amount indicated—that is, a project with the worst requirements analysts would require 42% more effort than nominal, whereas a project with the best analysts would require 29% less effort than nominal.
    • Two implications here:
      • You can’t accurately estimate a project if you don’t have some idea of who will be doing the work.
      • The most accurate estimation approach will depend on whether you know who specifically will be doing the work that’s being estimated.

    Programming Language

    • First, as the last graph suggested, the project team’s experience with the specific language and tools that will be used on the project has about a 40% impact on the overall productivity rate of the project.

    • Second, some languages generate more functionality per line of code than others. For example, C# and Java are more productive than C.

    • A third factor related to languages is the richness of the tool support and environment associated with the language. According to Cocomo II, the weakest tool set and environment will increase total project effort by about 50% compared to the strongest tool set and environment.
    • A final factor related to programming language is that developers working in interpreted languages tend to be more productive than those working in compiled languages, perhaps as much as a factor of 2.

    Other Project Influences

    Hidden Gems

    Here I will introduce some excerpts that I rate as hidden gems from this chapter.

    • Gem 1:

    For software estimation, the implications of diseconomies of scale are a case of good news, bad news. The bad news is that if you have large variations in the sizes of projects you estimate, you can’t just estimate a new project by applying a simple effort ratio based on the effort from previous projects. If your effort for a previous 100,000-LOC project was 170 staff months, you might figure that your productivity rate is 100,000/170, which equals 588 LOC per staff month. That might be a reasonable assumption for another project of about the same size as the old project, but if the new project is 10 times bigger, the estimate you create that way could be off by 30% to 200%.

    There’s more bad news: There isn’t a simple technique in the art of estimation that will account for a significant difference in the size of two projects. If you’re estimating a project of a significantly different size than your organization has done before, you’ll need to use estimation software that applies the science of estimation to compute the estimate for the new project based on the results of past projects. My company provides a free software tool called Construx® Estimate that will do this kind of estimate. You can download a copy at www.construx.com/estimate.

    • Gem 2:
    Table 5-5: Cocomo II Adjustment Factors (each entry: Factor (Influence): Observation)

    • Applications (Business Area) Experience (1.51): Teams that aren’t familiar with the project’s business area need significantly more time. This shouldn’t be a surprise.
    • Architecture and Risk Resolution (1.38 [*]): The more actively the project attacks risks, the lower the effort and cost will be. This is one of the few Cocomo II factors that is controllable by the project manager.
    • Database Size (1.42): Large, complex databases require more effort project-wide. Total influence is moderate.
    • Developed for Reuse (1.31): Software that is developed with the goal of later reuse can increase costs as much as 31%. This doesn’t say whether the initiative actually succeeds. Industry experience has been that forward-looking reuse programs often fail.
    • Extent of Documentation Required (1.52): Too much documentation can negatively affect the whole project. Impact is moderately high.
    • Language and Tools Experience (1.43): Teams that have experience with the programming language and/or tool set work moderately more productively than teams that are climbing a learning curve. This is not a surprise.
    • Multi-Site Development (1.56): Projects conducted by a team spread across multiple sites around the globe will take 56% more effort than projects that are conducted by a team co-located at one facility. Projects that are conducted at multiple sites, including out-sourced or offshore projects, need to take this effect seriously.
    • Personnel Continuity (turnover) (1.59): Project turnover is expensive—in the top one-third of influential factors.
    • Platform Experience (1.40): Experience with the underlying technology platform affects overall project performance moderately.
    • Platform Volatility (1.49): If the platform is unstable, development can take moderately longer. Projects should weigh this factor in their decision about when to adopt a new technology. This is one reason that systems projects tend to take longer than applications projects.
    • Precedentedness (1.33 [*]): Refers to how “precedented” (we usually say “unprecedented”) the application is. Familiar systems are easier to create than unfamiliar systems.
    • Process Maturity (1.43 [*]): Projects that use more sophisticated development processes take less effort than projects that use unsophisticated processes. Cocomo II uses an adaptation of the CMM process maturity model to apply this criterion to a specific project.
    • Product Complexity (2.38): Product complexity (software complexity) is the single most significant adjustment factor in the Cocomo II model. Product complexity is largely determined by the type of software you’re building.
    • Programmer Capability (general) (1.76): The skill of the programmers has an impact of a factor of almost 2 on overall project results.
    • Required Reliability (1.54): More reliable systems take longer. This is one reason (though not the only reason) that embedded systems and life-critical systems tend to take more effort than other projects of similar sizes. In most cases, your marketplace determines how reliable your software must be. You don’t usually have much latitude to change this.
    • Requirements Analyst Capability (2.00): The single largest personnel factor—good requirements capability—makes a factor of 2 difference in the effort for the entire project. Competency in this area has the potential to reduce a project’s overall effort from nominal more than any other factor.
    • Requirements Flexibility (1.26 [*]): Projects that allow the development team latitude in how they interpret requirements take less effort than projects that insist on rigid, literal interpretations of all requirements.
    • Storage Constraint (1.46): Working on a platform on which you’re butting up against storage limitations moderately increases project effort.
    • Team Cohesion (1.29 [*]): Teams with highly cooperative interactions develop software more efficiently than teams with more contentious interactions.
    • Time Constraint (1.63): Minimizing response time increases effort across the board. This is one reason that systems projects and real-time projects tend to consume more effort than other projects of similar sizes.
    • Use of Software Tools (1.50): Advanced tool sets can reduce effort significantly.

    [*] Exact effect depends on project size. Effect listed is for a project size of 100,000 LOC.
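
    To make the role of these adjustment factors concrete, here is a simplified sketch (my own, not McConnell’s or Boehm’s exact model). Real Cocomo II computes effort as a nominal power-law estimate multiplied by the product of per-factor effort multipliers; note that the “Influence” values in the table above are the swing from the best rating to the worst rating of each factor, so the multipliers below are purely hypothetical ratings chosen for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified Cocomo-style adjustment: nominal effort times a product of
// effort multipliers (>1.0 inflates effort, <1.0 reduces it).
class CocomoStyleAdjustmentDemo
{
    static void Main()
    {
        double nominalStaffMonths = 170.0; // e.g., the 100,000-LOC example earlier in this chapter

        // Hypothetical ratings for a particular project (illustration only).
        var effortMultipliers = new Dictionary<string, double>
        {
            ["Requirements Analyst Capability (strong analysts)"] = 0.85,
            ["Multi-Site Development (two sites)"] = 1.20,
            ["Language and Tools Experience (new tool chain)"] = 1.10,
        };

        double adjusted = effortMultipliers.Values
            .Aggregate(nominalStaffMonths, (effort, multiplier) => effort * multiplier);

        Console.WriteLine($"Nominal estimate : {nominalStaffMonths:N0} staff months");
        Console.WriteLine($"Adjusted estimate: {adjusted:N0} staff months");
    }
}
```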


    Finally

    We have finished the first part of this book (Part I: Critical Estimation Concepts); in subsequent posts we will discuss the different available estimation techniques. Stay with us 🙂




    I think Steve McConnell has changed his career to become a software psychologist! WOW, this man is awesome! Reading this chapter gives me full proof that Steve has a very strong and solid background in human nature and mentality.

    This chapter demonstrates the many sources of error that a person can fall into while making estimates. To tell the truth, I haven’t found any other material that discusses what may happen because of our human nature the way this chapter does. Thank you, Steve.

    What have I learned?

    • Software estimation error creeps in from four generic sources:
      • Inaccurate information about the project being estimated
      • Inaccurate information about the capabilities of the organization that will perform the project
      • Too much chaos in the project to support accurate estimation (that is, trying to estimate a moving target)
      • Inaccuracies arising from the estimation process itself
    • It isn’t possible to estimate the amount of work required to build something when that “something” has not been defined.
    • I have learned about the cone of uncertainty and how it can be so useful in software estimation.

    • Consider the effect of the Cone of Uncertainty on the accuracy of your estimate. Your estimate cannot have more accuracy than is possible at your project’s current position within the Cone.
    • You have to narrow a project’s uncertainty and variability if you want to estimate it accurately.
    • The Cone of Uncertainty doesn’t narrow itself; it narrows only when you make decisions that eliminate sources of variability in the project.
    • If the project is not well controlled, or if the estimators aren’t very skilled, estimates can fail to improve. The next figure shows what happens when the project doesn’t focus on reducing variability—the uncertainty isn’t a Cone, but rather a Cloud that persists to the end of the project. The issue isn’t really that the estimates don’t converge; the issue is that the project itself doesn’t converge—that is, it doesn’t drive out enough variability to support more accurate estimates.
    • After making decisions that eliminate some variability from the project, the Cone will narrow like this:

    • Account for the Cone of Uncertainty by using predefined uncertainty ranges in your estimates (see the sketch after this list).
    Scoping error by phase:
    • Initial Concept: possible error on the low side 0.25x (-75%); on the high side 4.0x (+300%); range of high to low estimates 16x
    • Approved Product Definition: low side 0.50x (-50%); high side 2.0x (+100%); range 4x
    • Requirements Complete: low side 0.67x (-33%); high side 1.5x (+50%); range 2.25x
    • User Interface Design Complete: low side 0.80x (-20%); high side 1.25x (+25%); range 1.6x
    • Detailed Design Complete (for sequential projects): low side 0.90x (-10%); high side 1.10x (+10%); range 1.2x
    Source: Adapted from Software Estimation with Cocomo II (Boehm et al. 2000).
    • Account for the Cone of Uncertainty by having one person create the “how much” part of the estimate and a different person create the “how uncertain” part of the estimate.
    • Never make a commitment in the early stages of the Cone of Uncertainty. Meaningful commitments are not possible in the early, wide part of the Cone. Effective organizations delay their commitments until they have done the work to force the Cone to narrow. Meaningful commitments in the early-middle part of the project (about 30% of the way in) are possible and appropriate.
    • How can you relate the Cone of Uncertainty to iterative development? (See the Hidden Gems section.)
    • Don’t expect better estimation practices alone to provide more accurate estimates for chaotic projects. You can’t accurately estimate an out-of-control process. As a first step, fixing the chaos is more important than improving the estimates.
    • One of the most common sources of estimation error is forgetting to include necessary tasks in the project estimates.
    • Developers often estimate optimistically. So, don’t reduce developer estimates—they’re probably too optimistic already.
    • Avoid having “control knobs” on your estimates. While control knobs might give you a feeling of better accuracy, they usually introduce subjectivity and degrade actual accuracy.
    • Cocomo II has many control knobs, which makes the chance of estimation error quite high.
    • Don’t give off-the-cuff estimates. Even a 15-minute estimate will be more accurate.
    • Accuracy does not equal precision; in the software estimation world they are quite different. For example, airline schedules are precise to the minute, but they are not very accurate. Measuring people’s heights in whole meters might be accurate, but it would not be at all precise.
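
    To show how those predefined ranges can be applied mechanically, here is a minimal sketch (my own, not from the book) that turns a single-point estimate into a low-to-high range using the phase multipliers from the table above; the 20-week point estimate is a hypothetical example.

```csharp
using System;
using System.Collections.Generic;

// Applies Cone of Uncertainty multipliers (adapted from Boehm et al. 2000) to a point estimate.
class ConeOfUncertaintyDemo
{
    static readonly Dictionary<string, (double Low, double High)> PhaseMultipliers =
        new Dictionary<string, (double Low, double High)>
        {
            ["Initial Concept"]                = (0.25, 4.00),
            ["Approved Product Definition"]    = (0.50, 2.00),
            ["Requirements Complete"]          = (0.67, 1.50),
            ["User Interface Design Complete"] = (0.80, 1.25),
            ["Detailed Design Complete"]       = (0.90, 1.10),
        };

    static void Main()
    {
        double pointEstimateWeeks = 20.0; // hypothetical single-point estimate

        foreach (var entry in PhaseMultipliers)
        {
            double low = pointEstimateWeeks * entry.Value.Low;
            double high = pointEstimateWeeks * entry.Value.High;
            Console.WriteLine($"{entry.Key,-32}: {low,5:N1} to {high,5:N1} weeks");
        }
    }
}
```

    Reporting the range instead of the single number is exactly the “predefined uncertainty ranges” practice the bullet above recommends.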

    Hidden Gems

    Here I will introduce some excerpts that I rate as hidden gems from this chapter.

    • Gem 1:

    Suppose you’re developing an order-entry system and you haven’t yet pinned down the requirements for entering telephone numbers. Some of the uncertainties that could affect a software estimate from the requirements activity through release include the following:

    • When telephone numbers are entered, will the customer want a Telephone Number Checker to check whether the numbers are valid?
    • If the customer wants the Telephone Number Checker, will the customer want the cheap or expensive version of the Telephone Number Checker? (There are typically 2-hour, 2-day, and 2-week versions of any particular feature—for example, U.S.-only versus international phone numbers.)
    • If you implement the cheap version of the Telephone Number Checker, will the customer later want the expensive version after all?
    • Can you use an off-the-shelf Telephone Number Checker, or are there design constraints that require you to develop your own?
    • How will the Telephone Number Checker be designed? (Typically there is at least a factor of 10 difference in design complexity among different designs for the same feature.)
    • How long will it take to code the Telephone Number Checker? (There can be a factor of 10 difference—or more—in the time that different developers need to code the same feature.)
    • Do the Telephone Number Checker and the Address Checker interact? How long will it take to integrate the Telephone Number Checker and the Address Checker?
    • What will the quality level of the Telephone Number Checker be? (Depending on the care taken during implementation, there can be a factor of 10 difference in the number of defects contained in the original implementation.)
    • How long will it take to debug and correct mistakes made in the implementation of the Telephone Number Checker? (Individual performance among different programmers with the same level of experience varies by at least a factor of 10 in debugging and correcting the same problems.)

    As you can see just from this short list of uncertainties, potential differences in how a single feature is specified, designed, and implemented can introduce cumulative differences of a hundredfold or more in implementation time for any given feature. When you combine these uncertainties across hundreds or thousands of features in a large feature set, you end up with significant uncertainty in the project itself.

    • Gem 2:

    The Cone of Uncertainty and Iterative Development

    Applying the Cone of Uncertainty to iterative projects is somewhat more involved than applying it to sequential projects is.

    If you’re working on a project that does a full development cycle each iteration—that is, from requirements definition through release—you’ll go through a miniature Cone on each iteration. Before you do the requirements work for the iteration, you’ll be at the Approved Product Definition point in the Cone, subject to 4x variability from high to low estimates. With short iterations (less than a month), you can move from Approved Product Definition to Requirements Complete and User Interface Design Complete in a few days, reducing your variability from 4x to 1.6x. If your schedule is immovable, the 1.6x variability will apply to the specific features you can deliver in the time available, rather than to the effort or schedule. There are estimation advantages that flow from short iterations, which are discussed in Section 8.4, “Using Data from Your Current Project.”

    What you give up with approaches that leave requirements undefined until the beginning of each iteration is long-range predictability about the combination of cost, schedule, and features you’ll deliver several iterations down the road. As Chapter 3, “Value of Accurate Estimates,” discussed, your business might prioritize that flexibility highly, or it might prefer that your projects provide more predictability.

    The alternative to total iteration is not no iteration. That option has been found to be almost universally ineffective. The alternatives are less iteration or different iteration.

    Many development teams settle on a middle ground in which a majority of requirements are defined at the front end of the project, but design, construction, test, and release are performed in short iterations. In other words, the project moves sequentially through the User Interface Design Complete milestone (about 30% of the calendar time into the project) and then shifts to a more iterative approach from that point forward. This drives down the variability arising from the Cone to about ±25%, which allows for project control that is good enough to hit a target while still tapping into major benefits of iterative development. Project teams can leave some amount of planned time for as-yet-to-be-determined requirements at the end of the project. That introduces a little bit of variability related to the feature set, which in this case is positive variability because you’ll exercise it only if you identify desirable features to implement. This middle ground supports long-range predictability of cost and schedule as well as a moderate amount of requirements flexibility.

    • Gem 3:

    Project teams are sometimes trapped by off-the-cuff estimates. Your boss asks, for example, “How long would it take to implement print preview on the Gigacorp Web site?” You say, “I don’t know. I think it might take about a week. I’ll check into it.” You go off to your desk, look at the design and code for the program you were asked about, notice a few things you’d forgotten when you talked to your manager, add up the changes, and decide that it would take about five weeks. You hurry over to your manager’s office to update your first estimate, but the manager is in a meeting. Later that day, you catch up with your manager, and before you can open your mouth, your manager says, “Since it seemed like a small project, I went ahead and asked for approval for the print-preview function at the budget meeting this afternoon. The rest of the budget committee was excited about the new feature and can’t wait to see it next week. Can you start working on it today?”

    I’ve found that the safest policy is not to give off-the-cuff estimates.

    • Gem 4:

    In casual conversation, people tend to treat “accuracy” and “precision” as synonyms. But for estimation purposes, the distinctions between these two terms are critical.

    Accuracy refers to how close to the real value a number is. Precision refers merely to how exact a number is. In software estimation, this amounts to how many significant digits an estimate has. A measurement can be precise without being accurate, and it can be accurate without being precise. The single digit 3 is an accurate representation of pi to one significant digit, but it is not precise. 3.37882 is a more precise representation of pi than 3 is, but it is not any more accurate.

    Airline schedules are precise to the minute, but they are not very accurate. Measuring people’s heights in whole meters might be accurate, but it would not be at all precise.




    Have you ever thought that an accurate estimate might save your project? Your career? Or even your life?! Yeah, I am just like you; I hadn’t thought it was that important to try to give an accurate estimate! So, if you want to learn what I have learned, go and read this chapter.

    What have I learned?

    • When to overestimate and when to underestimate, how to choose between them, and why.
    • Overestimation lets Parkinson’s Law kick in—the idea that work will expand to fill the available time.
    • Underestimation will create numerous problems like:
      • Reduced effectiveness of project plans.
      • Statistically reduced chance of on-time completion.
      • Poor technical foundation leads to worse-than-nominal results.
      • Destructive late-project dynamics make the project worse than nominal.
    • Don’t intentionally underestimate. The penalty for underestimation is more severe than the penalty for overestimation. Address concerns about overestimation through planning and control, not by biasing your estimates.

    Overestimation VS. Underestimation

    • What are the benefits of accurate estimates? (See the following section.)

    Hidden Gems

    Here I will introduce some excerpts that I rate as hidden gems from this chapter.

    • Gem 1:

    Benefits of Accurate Estimates

    Once your estimates become accurate enough that you get past worrying about large estimation errors on either the high or low side, truly accurate estimates produce additional benefits.

    Improved status visibility: One of the best ways to track progress is to compare planned progress with actual progress. If the planned progress is realistic (that is, based on accurate estimates), it’s possible to track progress according to plan. If the planned progress is fantasy, a project typically begins to run without paying much attention to its plan and it soon becomes meaningless to compare actual progress with planned progress. Good estimates thus provide important support for project tracking.

    Higher quality: Accurate estimates help avoid schedule-stress-related quality problems. About 40% of all software errors have been found to be caused by stress; those errors could have been avoided by scheduling appropriately and by placing less stress on the developers (Glass 1994). When schedule pressure is extreme, about four times as many defects are reported in the released software as are reported for software developed under less extreme pressure (Jones 1994). One reason is that teams implement quick-and-dirty versions of features that absolutely must be completed in time to release the software. Excessive schedule pressure has also been found to be the most significant cause of extremely costly error-prone modules (Jones 1997).

    Projects that aim from the beginning to have the lowest number of defects usually also have the shortest schedules (Jones 2000). Projects that apply pressure to create unrealistic estimates and subsequently shortchange quality are rudely awakened when they discover that they have also shortchanged cost and schedule.

    Better coordination with nonsoftware functions: Software projects usually need to coordinate with other business functions, including testing, document writing, marketing campaigns, sales staff training, financial projections, software support training, and so on. If the software schedule is not reliable, that can cause related functions to slip, which can cause the entire project schedule to slip. Better software estimates allow for tighter coordination of the whole project, including both software and nonsoftware activities.

    Better budgeting: Although it is almost too obvious to state, accurate estimates support accurate budgets. An organization that doesn’t support accurate estimates undermines its ability to forecast the costs of its projects.

    Increased credibility for the development team: One of the great ironies in software development is that after a project team creates an estimate, managers, marketers, and sales staff take the estimate and turn it into an optimistic business target—over the objections of the project team. The developers then overrun the optimistic business target, at which point, managers, marketers, and sales staff blame the developers for being poor estimators! A project team that holds its ground and insists on an accurate estimate will improve its credibility within its organization.

    Early risk information: One of the most common wasted opportunities in software development is the failure to correctly interpret the meaning of an initial mismatch between project goals and project estimates. Consider what happens when the business sponsor says, “This project needs to be done in 4 months because we have a major trade show coming up,” and the project team says, “Our best estimate is that this project will take 6 months.” The most typical interaction is for the business sponsor and the project leadership to negotiate the estimate, and for the project team eventually to be pressured into committing to try to achieve the 4-month schedule.

    Bzzzzzt! Wrong answer! The detection of a mismatch between the project goal and the project estimate should be interpreted as incredibly useful, incredibly rare, early-in-the-project risk information. The mismatch indicates a substantial chance that the project will fail to meet its business objective. Detected early, numerous corrective actions are available, and many of them are high leverage. You might redefine the scope of the project, you might increase staff, you might transfer your best staff onto the project, or you might stagger the delivery of different functionality. You might even decide the project is not worth doing after all.

    But if this mismatch is allowed to persist, the options that will be available for corrective action will be far fewer and will be much lower leverage. The options will generally consist of “overrun the schedule and budget” or “cut painful amounts of functionality.”

    Tip #9 Recognize a mismatch between a project’s business target and a project’s estimate for what it is: valuable risk information that the project might not be successful. Take corrective action early, when it can do some good.

    Finally

    Guys, keep reading this book. It is marvelous.




    Yesterday, I published this quiz, which measures your estimation skills. I am now publishing the quiz answers here so that you can measure how good an estimator you are. If you haven’t solved the quiz yet, please try to solve it before looking at the answers.

    Remember, the purpose of this quiz is not to determine whether you know when Alexander the Great was born or the latitude of Shanghai. Its purpose is to determine how well you understand your own estimation capabilities.

    • Surface temperature of the Sun: 10,000°F / 6,000°C
    • Latitude of Shanghai: 31 degrees North
    • Area of the Asian continent: 17,139,000 square miles (44,390,000 square kilometers)
    • The year of Alexander the Great’s birth: 356 BC
    • Total value of U.S. currency in circulation in 2004: $719.9 billion [*]
    • Total volume of the Great Lakes: 5,500 cubic miles (23,000 cubic kilometers; 2.4 x 10^22 cubic feet; 6.8 x 10^20 cubic meters; 1.8 x 10^23 U.S. gallons; 6.8 x 10^23 liters)
    • Worldwide box office receipts for the movie Titanic: $1.835 billion [*]
    • Total length of the coastline of the Pacific Ocean: 84,300 miles (135,663 kilometers)
    • Number of book titles published in the U.S. since 1776: 22 million
    • Heaviest blue whale ever recorded: 380,000 pounds (190 English tons; 170,000 kilograms; 170 metric tons)

    [*] Billions are U.S. billions (that is, 10^9) rather than British billions (10^12).


