Introduction
There is no such thing as a late software project. Why? Because a software project takes as long as it takes, and you don’t know that amount of time until it’s over. Guessing how long it will take at the beginning, and then calling your project early, on time, or late based on that guess, is like being forced to predict how long it will take you to drive home through rush hour tonight. Your drive will take as long as it takes. Once you’re home, you can analyze how good your guess was, but you’re not late. You’ve never driven home on that day, during that traffic, before. How could you possibly know whether there would be an accident? What if you guessed 45 minutes and got there in 25? Were you sandbagging? The next time you have to guess when you’ll be home, you’ll be expected to say 25 minutes. Why not? That’s how long it took last time. Now the next time it takes you 70 minutes. You’re late! No, you’re not; your guess was wrong. Those are two completely different things.
An estimate is basically an educated guess. When management decides to use an estimate as a deadline, they’re hurting themselves. Why? Take the rush hour analogy from above. What if you estimated that it would take 30 minutes to get home? About 20 minutes into your drive you realize there is no way you’re going to make it; you’re at least another 20 minutes away. You have two choices: you can hurry through traffic by speeding, driving on the shoulder, running red lights, and so on, or you can drive safely and get home when you get home. The first option is the development equivalent of rushing through your coding efforts, cutting corners, and working when you’re tired because you’re putting in 14-hour days. You’ll get there on time, but will that be a product you can depend on?
This is the basis for the theme of this paper. I believe that IT (Information Technology) is living with the myth that it can predict the future, that it can predict how long it will take to get home from work tonight. The intent of this paper is to make it clear that predicting how long projects will take, especially projects that take months or years, is virtually impossible. A side effect of chasing these mythical deadlines is that it’s bad for business. We’ll discuss that too.
I also intend to present the other side of the story, both through research that contradicts what I believe and through discussions with project managers who don’t agree with my beliefs.
Beliefs and Support
The nature of dealing with computers doesn’t allow for accurate estimates. For example, let’s say you give each of your three developers on your small project team a new monitor to use. The first two plug the monitor in and start using it within ten minutes. How long will it take the third person to do it? Ten minutes, right? Be careful. It actually took the third person 30 man-minutes to install his monitor. Was he incompetent? Was he lazy? Let’s examine this simple real-life story that happened to me to see what really happened...
I was given a second monitor to use with my computer at work recently. If we treated this as a project (an admittedly simple one), we would probably estimate the time at one resource and about 10 minutes. After all, what had to be done? Attach the monitor, adjust display settings and away you go. Here's what actually happened:
1. Attached the new monitor to the computer. No signal.
2. Thought for a minute; the new monitor had worked with the previous computer, same model. The dongle had worked with the previous computer too.
3. Decided that all the connections were good, so rebooted the computer. Still didn't work.
4. Discussed the issue with a co-worker who was walking by. Decided that the video card might be bad, so...
5. Opened both computers to swap video cards. Upon opening my computer, discovered that the video card wasn't seated properly.
6. Reseated the video card and put both computers back together.
7. Started the computer and the monitors were recognized.
8. Adjusted the display settings to my preference.
The original estimate was one person and 10 minutes. The actual result was two people and 30 minutes (20 minutes of my time and 10 minutes of a co-worker’s time).
Projects don't get much smaller or simpler than this, yet it went over its "deadline" by 200%. By most accounts, any project that takes 200% longer than anticipated would be considered a failure. If something this simple and “predictable” can go this far off track, imagine trying to accurately estimate projects that take months or years.
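To put a number on it, here is the overrun arithmetic as a trivial Python sketch; the figures are the ones from the monitor story, and nothing else is assumed:

def overrun_pct(estimated_minutes: float, actual_minutes: float) -> float:
    """Percentage by which actual effort exceeded the estimate."""
    return (actual_minutes - estimated_minutes) / estimated_minutes * 100

# The monitor "project": estimated at 10 person-minutes, actual 30
# (20 minutes of my time plus 10 minutes of a co-worker's time).
print(overrun_pct(10, 30))  # 200.0 -- a 200% overrun on the simplest of projects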
This is what dealing with computers is like, and it is part of the reason it's so hard to predict how long projects will take. Information systems projects, of course, involve computers, a distinct characteristic that has more effect than might initially be apparent. (Olson, 2004)
So why is IT burdened with this myth anyway? Part of the reason is that IT is still in its relative infancy compared to the history of other departments. Since other departments, such as accounting and operations, can usually predict how long it will take to get their work done, they naturally feel this belief should carry over to estimating software projects. The problem with this view is that those other departments are estimating future work that is very similar to, or almost exactly like, the work they’ve done in the past. With software engineering projects, each project is something brand new that’s never been done before. If it had been done before, we would just buy it, not build it.
I need to make something clear before going any further. A project can be completed by a certain date. Absolutely it can. If your business absolutely must have a product to market by March 31 or miss out on an opportunity entirely, then you can have your product to market by then. And you may even have the quality you wanted and be within your projected budget. Ah, but what if it’s March 15 and you realize you still have two more months of work to do? Will you be “late”? Not necessarily. You can certainly sacrifice some quality or spend some more money to help complete it on time. The point here is that if you’re rushing to meet a deadline you have to hit, you’re most likely going to lose some features, lose some quality, or go over budget. Hurrying always makes your product worse. People under time pressure don’t work better; they just work faster. (DeMarco & Lister, 1999)
From "Nestle's ERP Odyssey," from the 15-May-2002 issue of CIO Magazine: "Nestle
Estimates are good. They’re necessary because they give the business at least some idea of the timeframe that they’re dealing with. It only turns into a problem when the estimate is treated as a deadline. An estimate is just that; it’s an ESTIMATE. Software estimations would be fine, if they were actually accepted as “estimations” rather than concrete expressions of an end date. (Staddon, 2007)
Can meeting a deadline be bad, even if you didn’t have to hurry? Even if you deliver all of your features, hit your quality standard, and stay within budget? Yes, that can still be bad. Why? Because work expands to fill the time allocated for it, a phenomenon known as Parkinson’s Law. (DeMarco & Lister, 1999) This means that people will tend to use all the time they have, even if they could have finished earlier.
So what is a business supposed to do? Could they do something crazy like not mandate a deadline? Believe it or not, a study showed that projects on which the boss applied no schedule pressure whatsoever (“Just wake me up when you’re done.”) had the highest productivity of all. (DeMarco & Lister, 1999) So how do you hold your team accountable? If there’s no deadline, will they take forever and still not be done? This is where assembling the right team comes into play. If you’re afraid an employee is hiding behind the curtain surfing the net or playing Doom, there are far more severe problems than productivity issues. Without trust – mutual trust – any engineering department is in trouble. (DeMarco & Lister, 1999) And by the way, project and functional managers are still involved. They’re still monitoring progress, analyzing the amount of work done, and so on. This approach doesn’t mean throwing projects over the wall and ignoring them until they’re done.
Counter Beliefs
I had quite a few lengthy discussions about this topic with a project manager (Mike) from a $1.3-billion company.
Mike said that it is possible to accurately estimate a project. You add the appropriate slack time and state your variance. For example, you can be within 50% of your estimate for a two-year project involving a team of six. You can be on time if you time-box. (Time-boxing is a concept whereby you finish by the deadline no matter what. If features are missing, then they’re missing; at least you’re on time.)
Mike also stated that accuracy will vary depending on the type of project. For example, you might have a team that has been working on the same system for years, continually adding new functionality to it. Your estimates for this type of work will be more accurate than if you were starting a brand new project, with a brand new team, using new technology.
A web site (Green, 2006) claims to have a “proven software project estimation method that produces reasonably accurate results...” With this method, you assign low, medium, or high risk to each task. A low-risk task has an allowance of 10%, meaning it would be normal for that task to exceed its estimate by 10%. A medium-risk task is assigned an allowance of 50%, and a high-risk task an allowance of 150%. There is nothing wrong with this method for producing an estimate. It just goes to show, however, that this method treats being off by 150% as being accurate.
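To see what those allowances actually do to a schedule, here is a minimal sketch of the method as described above; the task list and hour figures are hypothetical, invented purely for illustration:

# Allowances from the method described above: a task may "normally"
# exceed its estimate by this fraction, based on its assessed risk.
ALLOWANCE = {"low": 0.10, "medium": 0.50, "high": 1.50}

def with_allowance(estimate_hours: float, risk: str) -> float:
    """Upper end of the 'accurate' range for a single task."""
    return estimate_hours * (1 + ALLOWANCE[risk])

# Hypothetical task list: (estimated hours, assessed risk).
tasks = [(8, "low"), (16, "medium"), (40, "high")]

base = sum(hours for hours, _ in tasks)
worst = sum(with_allowance(hours, risk) for hours, risk in tasks)
print(f"{base:.0f} to {worst:.0f} hours")  # 64 to 133 hours, all of it "accurate"

Notice that the single high-risk task is allowed to grow from 40 hours to 100 and still count as a success.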
Thoughts About Counter Beliefs
Regarding Mike’s points above, I feel what he says proves my point. If you can accurately predict how long a project will take, why is slack time necessary? And why state a variance? Because it’s an ESTIMATE. And 50% of a two-year project is one year, which means you’ll finish anywhere from 12 months to 36 months out. That’s not what I’d call being able to predict how long a project will take. And if you need to time-box to hit your deadline, then you haven’t really predicted how long it will take to complete all of the features.
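The window that “within 50%” actually implies is easy to compute; a one-line sketch using Mike’s own numbers:

def estimate_window(estimate_months: float, variance: float) -> tuple:
    """Earliest and latest finish implied by a stated variance."""
    return (estimate_months * (1 - variance), estimate_months * (1 + variance))

earliest, latest = estimate_window(24, 0.50)  # a two-year project, within 50%
print(earliest, latest)  # 12.0 36.0 -- "on time" spans a two-year window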
People who believe that you can predict when projects will be done, and that you should force your team to hit that deadline, can point to analyzing the risk and allocating time for the unknown. However, successful risk analysis depends on the personal experience of the analyst, as well as access to the project plan and historical data. (Olson, 2004)
Two points are important about that last statement. First, analyst experience varies. This means that successful risk analysis changes depending on who is doing it. This further means that some will be more “correct” than others. So you can analyze risk, but you can’t predict how much of that risk will actually materialize.
Second, successful risk analysis depends on historical data. Think about what that means for a second. Read that again. Successful risk analysis depends on historical data. You don’t always have historical data! Actually, many times you don’t. What if you’re developing a brand new project with new personnel and new technology? There is no historical data, and therefore nothing to compare it to.
Further Support and Advice
A decent approach that I’ve seen is to add 40% to every project, because in my experience that’s roughly the average amount of unknown in software engineering projects. You have to be careful, though; that’s an average. It doesn’t mean that your project will have 40% of unknown issues. One project could contain 10% and the next 70%. That’s the hard part for traditional managers to accept. “You’re telling me that you’re going to add 40% for the unknown?” Yep. Not only am I telling you that, but be prepared for that unknown to be even higher when all is said and done.
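A small simulation makes the point that an average buffer is not a per-project guarantee. The uniform 10%-70% spread below is an assumption, chosen only because it averages out near 40%:

import random

random.seed(7)  # fixed seed so the illustration is reproducible

BUFFER = 0.40  # the flat 40% padding discussed above

# Assumed portfolio: each project's true "unknown" varies widely
# (10% to 70%) even though the portfolio averages roughly 40%.
unknowns = [random.uniform(0.10, 0.70) for _ in range(1000)]

average = sum(unknowns) / len(unknowns)
blown = sum(1 for u in unknowns if u > BUFFER) / len(unknowns)
print(f"average unknown: {average:.0%}")                  # about 40%
print(f"projects exceeding the 40% buffer: {blown:.0%}")  # roughly half

Even with padding that is exactly right on average, about half the projects in this sketch still blow through it.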
This might sound like I’m being insensitive here. Quite the contrary; if this is reality, what could bring management and IT together more than both entities understanding reality and being on the same page?
Now how can management plan the business around such wild swings in when projects might be done? They have a couple of choices. They can hit the deadline and sacrifice budget, features, or quality. Or they can plan for a project to take up to three times longer than expected and be willing to wait it out. After all, some projects aren’t so time-critical. If you’re replacing an internal system that already works, but costs a lot of money to support and is nearing its capacity, you have the luxury of using the old system until the new one is completely ready to use.
The incompleteness and inconsistencies of our ideas become clear only during implementation. (Brooks, 1995)
Observe that for the programmer, as for the chef, the urgency of the patron may govern the scheduled completion of the task, but it can not govern the actual completion. An omelette, promised in two minutes, may appear to be progressing nicely. But when it has not set in two minutes, the customer has two choices – wait or eat it raw. Software customers have had the same choices. (Brooks, 1995)
The cook has another choice; he can turn up the heat. The result is often an omelette nothing can save – burned in one part, raw in another. (Brooks, 1995)
Why are estimates so often overly optimistic? There are many reasons: pressure from management, being optimistic by nature, not wanting to appear incompetent, and so on. An example of not wanting to appear incompetent might go something like this: if you ask a developer how long it will take to create a simple report (see Figure 1), and you’re his boss, you might hear something like “eight hours.” A typical scenario has the manager questioning this estimate and the programmer giving in and reducing it to something like four hours. Why? What changed between the first estimate and the second, besides the apparent look of disappointment on the manager’s face? Little did the manager realize that the original estimate of eight hours was already optimistic. The programmer really thought it was going to take 16 hours, but figured that would look too long in his manager’s eyes, so he said eight, and now it’s down to four!
(Figure 1 – A “Simple” Report)
So why would this take so long? First, to present the data correctly, a thorough understanding of the data structures is necessary. That takes time, and it can take a lot of time if the data structure is complex. Second, more than one approach to the solution might be possible, and sometimes only through trial and error will you know which one is best. Third, what appears to the user to be an amazingly simple report actually requires a quite complicated SQL query to produce. See Appendix A for the actual code used to produce this report.
The old success criteria of meeting outcome, cost, and schedule constraints are no longer adequate. (Cohen & Graham, 2001)
In the past the project manager was concerned mainly with the technical risk and so concentrated on creating the outcome. This resulted in the narrow orientation project managers were often given and the focus on the triple constraints of outcome, cost, and duration. Those commissioning the project saw this approach as giving them control. They were concerned with the marketing risk and adding value to the company. They apparently felt that if a business could get specific things done at a fixed cost and time, the value would be there and the market would respond. This orientation of working within constraints led to many bad practices by both project managers and upper managers. In particular many new possibilities that arose during the execution of a project were often ignored because they were not in the budget nor in the specifications nor in the schedule. (Cohen & Graham, 2001)
It’s been my experience that actual programming time is less than most people think. When I was managing a team of developers, having come into the job in the middle of the project, I was able to show that the developers were spending much less of their time on the project than management believed. Management thought the project team was spending more than 90% of its time on the project. After tracking time for weeks, the most any developer spent on the project was 75%, and one developer was only spending 25% of his time on it. Observe this excerpt from The Mythical Man-Month:
Charles Portman, manager of ICL’s Software Division, Computer Equipment Organization (Northwest) at Manchester, offers another useful personal insight.
He found his programming teams missing schedules by about one half – each job was taking approximately twice as long as estimated. The estimates were very careful, done by experienced teams estimating man-hours for several hundred subtasks on a Pert chart. When the slippage pattern appeared, he asked them to keep careful daily logs of time usage. These showed that the estimating error could be entirely accounted for by the fact that his teams were only realizing 50 percent of the working week as actual programming and debugging time. Machine downtime, higher-priority short unrelated jobs, meetings, paperwork, company business, sickness, personal time, etc. accounted for the rest. In short, the estimates made an unrealistic assumption about the number of technical work hours per man-year. My own experience quite confirms his conclusion. (Brooks, 1995)
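Portman’s 50% figure translates directly into scheduling arithmetic. A minimal sketch, assuming a 40-hour week; the 400-hour workload is a hypothetical figure:

def calendar_weeks(technical_hours: float,
                   hours_per_week: float = 40,
                   utilization: float = 0.50) -> float:
    """Calendar time required when only a fraction of each week is
    actual programming and debugging time (Portman observed about 50%)."""
    return technical_hours / (hours_per_week * utilization)

work = 400  # estimated technical hours; a hypothetical figure
print(calendar_weeks(work, utilization=1.00))  # 10.0 weeks -- the naive plan
print(calendar_weeks(work, utilization=0.50))  # 20.0 weeks -- Portman's reality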
The above is another reason estimates generally go bad: management sometimes simply does not want to believe that their team isn’t spending as much time on the project as they thought.
Another reason deadlines are bad for the business as a whole is that they cause the development team to leave behind “broken windows.” Observe this excerpt from The Pragmatic Programmer (Hunt & Thomas, 2000):
In inner cities, some buildings are beautiful and clean, while others are rotting hulks. Why? Researchers in the field of crime and urban decay discovered a fascinating trigger mechanism, one that very quickly turns a clean, intact, inhabited building into a smashed and abandoned derelict.
A broken window.
One broken window, left unrepaired for any substantial length of time, instills in the inhabitants of the building a sense of abandonment – a sense that the powers that be don’t care about the building. So another window gets broken. People start littering. Graffiti appears. Serious structural damage begins. In a relatively short space of time, the building becomes damaged beyond the owner’s desire to fix it, and the sense of abandonment becomes reality.
The “Broken Window Theory” has inspired police departments in New York and other major cities to crack down on the small stuff in order to keep out the big stuff.
Don’t leave “broken windows” (bad designs, wrong decisions, or poor code) unrepaired. Fix each one as soon as it is discovered. If there is insufficient time to fix it properly, then board it up. Perhaps you can comment out the offending code, or display a “Not Implemented” message, or substitute dummy data instead. Take some action to prevent further damage and to show that you’re on top of the situation.
We’ve seen clean, functional systems deteriorate pretty quickly once windows start breaking. There are other factors that can contribute to software rot, and we’ll touch on some of them elsewhere, but neglect accelerates the rot faster than any other factor.
You may be thinking that no one has the time to go around cleaning up all the broken glass of a project. If you continue to think like that, then you better plan on getting a dumpster, or moving to another neighborhood. Don’t let entropy win.
It’s exactly this type of doing-it-right style that loses out when deadlines are enforced.
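For what it’s worth, “boarding up” a window can be as small as this. A hypothetical Python sketch; the export_to_pdf feature and the exception name are invented for illustration:

class BoardedUpError(Exception):
    """Raised by features that are deliberately 'boarded up'."""

def export_to_pdf(report):
    # Hypothetical broken window: a half-finished PDF exporter that
    # produced corrupt files under deadline pressure. Rather than leave
    # the broken code live, board it up: fail loudly and visibly.
    raise BoardedUpError("PDF export is not implemented yet; use CSV export.")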
The Chaos Report is the first survey made by the Standish Group. This report is the landmark study of IT project failure. It is cited by everybody writing a paper or making a presentation where a reference is made of IT project failure. (IT Cortex)
The respondents to the Standish Group survey were IT executive managers. The sample includes large, medium, and small companies across major industry segments: banking, securities, manufacturing, retail, wholesale, heath care, insurance, services, and local, state, and federal organizations. The total sample size was 365 respondents representing 8,380 applications. In addition, The Standish Group conducted focus groups and personal interviews to provide qualitative context for the survey results. (IT Cortex)
On the success side, the average is only 16.2% for software projects that are completed on-time and on-budget. (IT Cortex)
Only 16.2% of 8,380 applications were on time and on budget, and yet we continue to ignore the simple, obvious truth: we cannot predict how long a software project will take.
Summary
Project estimation is good and it’s necessary. Turning estimates into deadlines is bad. Accurately predicting how long a project will take, especially a new project, with a new team and new technology, is virtually impossible.
IT is living with a myth that it can predict how long it will take to complete projects. Being burdened with this myth is bad for business because of the negative side effects of being forced to hit a deadline: hurrying, reduced code quality, broken windows left behind, working while tired from heavy overtime, Parkinson’s Law, and so on.
IT is burdened with this myth because other departments operate in a way that allows them to predict how long their tasks will take. IT is different. Once upper management realizes and accepts this, the company will be better for it.
So what is a business supposed to do? First, accept that an estimate is simply an estimate. If you have to hit a deadline, then incorporate the concept of time-boxing. Otherwise, assemble a good team with a good manager, create an estimate, and then continually update that estimate as the work progresses. As the project moves along, you’ll get an ever-clearer idea of when it will be done. Don’t force the team to hit the deadline. Anticipate that the project will be done sometime after the estimate and adjust your business plans according to that assumption.
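One simple way to keep an estimate updated is to project the remaining work at the pace actually observed so far, rather than the pace that was hoped for. A naive sketch with hypothetical numbers:

def updated_eta_weeks(tasks_done: int, tasks_total: int,
                      weeks_elapsed: float) -> float:
    """Weeks remaining if the team keeps its observed pace."""
    pace = tasks_done / weeks_elapsed           # tasks completed per week
    return (tasks_total - tasks_done) / pace    # remaining work at that pace

# Hypothetical mid-project check: 30 of 100 tasks done after 12 weeks.
print(updated_eta_weeks(30, 100, 12))  # 28.0 more weeks, whatever the plan said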
Annotated Bibliography
Brooks Jr., F. P. (1995). The Mythical Man-Month. Reading, MA: Addison-Wesley Longman, Inc.
This book focuses on different aspects of software engineering, such as adding resources to an already late project, using the right tools for development, estimating projects, and more. The author was a professor of computer science at the University of North Carolina at Chapel Hill.
Cohen, D. J., & Graham, R. J. (2001). The Project Manager's MBA, How to
Translate Project Decisions into Business Success.
This book presents the business basics that every project manager needs to understand. One author is senior vice president and managing director of the Project Management Practice at Strategic Management Group. The other author has developed a consulting practice in project management and is the author of two other project management books. The audience for this book would be those who are looking for a good grounding in project management. Mentioned in this book is the fact that the old constraints of project management no longer apply to today’s project management.
DeMarco, T., & Lister, T. (1999). Peopleware, Productive Projects and Teams.
The authors of this book discuss at length how the major issues involving software projects are sociological in nature, not technical. The authors have lectured, written, and consulted internationally since 1979 on management, estimating, productivity, and corporate culture. The book has been described as an Anti-Dilbert Manifesto. The intended audience for this book would be those interested in looking past the common management errors and discovering what approaches really allow teams to excel. If I could only recommend one book to any management member or IT member, it would be this one.
Green, A. (2006). How to Estimate a Software Project. Retrieved November 10, 2007, from http://www.bright-green.com/docs/howto_estimate.html.
This article goes into detail about a method of estimating software projects that is claimed to produce reasonably accurate results. Alan Green is a programmer with 15+ years of experience. This article is intended for those interested in a way to estimate software projects. It is relevant to this paper in that it explains that you need a wide range of estimating accuracy based on level of risk.
Hunt, A., & Thomas, D. (2000). The Pragmatic Programmer, from Journeyman to
Master.
The Pragmatic Programmer illustrates the best practices and major pitfalls of many different aspects of software development. Following the lessons in this book will help developers achieve long-term success in their profession. One author owns his own consulting business and the other founded an ISO9001-certified English software company that delivered sophisticated, custom software projects throughout the world.
IT Cortex (n.d.). Failure Rate: Statistics over IT Projects Failure Rate. Retrieved November 10, 2007, from http://www.it-cortex.com/Stat_Failure_Rate.htm.
This article displays and summarizes statistics on failure rates for IT projects, including the Chaos Report from 1995. The information contained within this web page provides great insight into the overwhelming failure rates for IT projects. The numbers within it support the idea that estimation is often incorrect.
Olson, D. L. (2004). Introduction to Information Systems Project Management. Boston, MA: McGraw-Hill/Irwin.
This book shows how good project management skills can be applied to the management of information systems. It discusses common problems and pitfalls of managing projects. The author is a professor at the University of Nebraska-Lincoln.
Staddon, J. (2007). The Myth of Software Estimation. Retrieved November 17, 2007, from http://jeffspost.wordpress.com/2007/08/26/the-myth-of-software-estimation/.
This article is similar to this research paper in that it views estimation as a myth and uses a different analogy to convey that message. Jeff Staddon is a full-time software developer.
Worthen, B. (2002). Nestle's ERP Odyssey. Retrieved November 10, 2007, from http://www.cio.com/article/print/31066.
This is from the May 2002 edition of CIO magazine. It discusses what can and did go wrong in a major project within a large corporation. The audience for this article would be those who are interested in learning from the mistakes of an implementation gone wrong.
Appendix A
Code for a “Simple” Report
-- DailyReconciliation.sql
-- NOTE: several clauses below (marked "reconstructed") were recovered from a
-- garbled source; those predicates and joins are best-effort assumptions.
DECLARE @CustomerID varchar(40)
DECLARE @Date datetime

SET @CustomerID = 'OTEAMC10'
SET @Date = '2/8/2007'

-- For each document type code, count the documents received from the
-- customer on the given day, and how many of them failed processing.
SELECT cdtc01.CustomerDocumentTypeCode,
       COUNT(cdtc01.CustomerDocumentTypeCode) AS 'Count In',
       -- Get the total number of document instances with a failed completion status
       (SELECT COUNT(didscs.DocInstcDataServiceComplStatusID)
        FROM DocInstcDataServiceComplStatus didscs
        INNER JOIN DataServiceCompletionStatus dscs
                ON didscs.DataServiceCompletionStatusID = dscs.DataServiceCompletionStatusID
        INNER JOIN DocumentInstance di
                ON didscs.DocumentInstanceID = di.DocumentInstanceID
        INNER JOIN CustomerDocumentTypes cdt
                ON di.CustomerDocumentTypesID = cdt.CustomerDocumentTypesID
        INNER JOIN CustomerDocumentTypeCode cdtc
                ON cdt.CustomerDocumentTypeCodeID = cdtc.CustomerDocumentTypeCodeID
        INNER JOIN DataFileInstance dfi
                ON di.DataFileInstanceID = dfi.DataFileInstanceID
        INNER JOIN Business b
                ON dfi.FileOwner_BusinessID = b.BusinessID
        WHERE b.BusinessCode = @CustomerID
          AND dfi.DateReceived >= @Date
          AND dfi.DateReceived < DATEADD(day, 1, @Date)                        -- reconstructed
          AND cdtc.CustomerDocumentTypeCode = cdtc01.CustomerDocumentTypeCode  -- reconstructed
          AND dscs.SucceededYN = 0                                             -- reconstructed
       ) AS 'Count Failed'
FROM CustomerDocumentTypeCode cdtc01
INNER JOIN CustomerDocumentTypes cdt                                           -- reconstructed join
        ON cdt.CustomerDocumentTypeCodeID = cdtc01.CustomerDocumentTypeCodeID
INNER JOIN DocumentInstance di                                                 -- reconstructed join
        ON di.CustomerDocumentTypesID = cdt.CustomerDocumentTypesID
INNER JOIN Customer c
        ON cdtc01.CustomerID = c.CustomerID
INNER JOIN Business b
        ON c.CustomerID = b.BusinessID
INNER JOIN DataFileInstance dfi
        ON b.BusinessID = dfi.FileOwner_BusinessID
       AND di.DataFileInstanceID = dfi.DataFileInstanceID                      -- reconstructed
WHERE b.BusinessCode = @CustomerID
  AND dfi.DateReceived >= @Date
  AND dfi.DateReceived < DATEADD(day, 1, @Date)                                -- reconstructed
GROUP BY cdtc01.CustomerDocumentTypeCode                                       -- reconstructed

UNION ALL

-- Documents that could not be loaded at all have no type code;
-- report them in an 'Unknown' bucket.
SELECT 'Unknown' AS CustomerDocumentTypeCode,
       0 AS 'Count In',
       COUNT(udi.UnloadableDocumentInstanceID) AS 'Count Failed'
FROM DataFileInstance dfi
INNER JOIN UnloadableDocumentInstance udi
        ON dfi.DataFileInstanceID = udi.DataFileInstanceID
INNER JOIN Business b
        ON dfi.FileOwner_BusinessID = b.BusinessID
WHERE b.BusinessCode = @CustomerID
  AND dfi.DateReceived >= @Date
  AND dfi.DateReceived < DATEADD(day, 1, @Date)                                -- reconstructed