Software planning involves estimating how much time, effort, money, and resources will be required to build a specific software system. After the project scope is determined and the problem is decomposed into smaller problems, software managers use historical project data (as well as personal experience and intuition) to determine estimates for each subproblem. The final estimates are typically adjusted by taking project complexity and risk into account. The resulting work product is called a project management plan.
Estimation Reliability Factors
- Project complexity
- Project size
- Degree of structural uncertainty (degree to which requirements have solidified, the ease with which functions can be compartmentalized, and the hierarchical nature of the information processed)
- Availability of historical information
Project Planning Objectives
- To provide a framework that enables the software manager to make a reasonable estimate of resources, cost, and schedule.
- Project outcomes should be bounded by ‘best case’ and ‘worst case’ scenarios.
- Estimates should be updated as the project progresses.
Software Scope
- Describes the data to be processed and produced, control parameters, function, performance, constraints, external interfaces, and reliability.
- Often functions described in the software scope statement are refined to allow for better estimates of cost and schedule.
Resources
- Human Resources (number of people required and skills needed to complete the development project)
- Reusable Software Resources (off-the-shelf components, full-experience components, partial-experience components, new components)
- Development Environment (hardware and software required to be accessible by software team during the development process)
Software Project Estimation
Effective software project estimation is one of the most challenging and important activities in software development. Proper project planning and control is not possible without a sound and reliable estimate. As a whole, the software industry doesn’t estimate projects well and doesn’t use estimates appropriately. We suffer far more than we should as a result and we need to focus some effort on improving the situation.
Under-estimating a project leads to under-staffing it (resulting in staff burnout), under-scoping the quality assurance effort (running the risk of low quality deliverables), and setting too short a schedule (resulting in loss of credibility as deadlines are missed). For those who figure on avoiding this situation by generously padding the estimate, over-estimating a project can be just about as bad for the organization! If you give a project more resources than it really needs without sufficient scope controls it will use them. The project is then likely to cost more than it should (a negative impact on the bottom line), take longer to deliver than necessary (resulting in lost opportunities), and delay the use of your resources on the next project.
The four basic steps in software project estimation are:
1) Estimate the size of the development product. This generally ends up in either Lines of Code (LOC) or Function Points (FP), but there are other possible units of measure. The pros and cons of each are discussed in some of the material referenced at the end of this report.
2) Estimate the effort in person-months or person-hours.
3) Estimate the schedule in calendar months.
4) Estimate the project cost in dollars (or local currency).
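The four steps above can be sketched as a simple chain of calculations. The conversion factors below (productivity in FP per person-month, team size, labor rate) are assumptions made up for the example, not industry constants:

```python
# Illustrative walk through the four estimation steps.
# The conversion factors (productivity, team size, labor rate) are
# assumptions for the example, not industry constants.

def estimate_project(size_fp, fp_per_person_month=10.0,
                     team_size=4, cost_per_person_month=8000.0):
    """Chain: size -> effort -> schedule -> cost."""
    effort_pm = size_fp / fp_per_person_month        # step 2: person-months
    schedule_months = effort_pm / team_size          # step 3: calendar months
    cost = effort_pm * cost_per_person_month         # step 4: currency units
    return effort_pm, schedule_months, cost

# Step 1: the size estimate itself (here, 120 FP) is an input.
effort, schedule, cost = estimate_project(size_fp=120)
```

The point of the sketch is that each later estimate inherits the error of the earlier ones, which is why the size estimate in step 1 matters so much.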
Software Project Estimation Options
- Delay estimation until late in the project.
- Base estimates on similar projects already completed.
- Use simple decomposition techniques to estimate project cost and effort.
- Use empirical models for software cost and effort estimation.
- Automated tools may assist with project decomposition and estimation.
Decomposition Techniques
- Software sizing (fuzzy logic, function point, standard component, change)
- Problem-based estimation (using LOC decomposition focuses on software functions, using FP decomposition focuses on information domain characteristics)
- Process-based estimation (decomposition based on tasks required to complete the software process framework)
Sizing measures are needed to make valid comparisons across (or within) systems. Without a software sizing measure, productivity cannot be computed. For example, the real estate industry has “square feet” and the oil industry has “barrels of oil” as standard measures. In the computer software industry, however, software developers do not have a generally accepted measure of what they produce.
While estimating is probably the most common use of a sizing measure, there are many other potentially valuable applications, including progress measurement, change management, risk identification, and earned value.
There are only two software sizing measures widely used today — Lines of Code (LOC or KLOC) and Function Points (FP). Though each is a sizing measure, they actually measure different things and have very different characteristics.
Lines of Code is a measure of the size of the system after it is built. It is very dependent on the technology used to build the system, the system design, and how the programs are coded. The major disadvantages of LOC are that systems coded in different languages cannot be easily compared and efficient code is penalized by having a smaller size. Capers Jones stated at a talk to the Chicago Quality Assurance Association on November 22, 1996 that anyone using LOC is “committing professional malpractice.” Despite these problems, LOC is still frequently used by very reputable and professional organizations.
In contrast to LOC, Function Points is a measure of delivered functionality that is relatively independent of the technology used to develop the system. FP is based on sizing the system by counting external components (inputs, outputs, external interfaces, files, and inquiries). While FP addresses many of the problems inherent in LOC and has developed a loyal following, it has its own set of advantages and disadvantages.
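As a sketch of how an unadjusted FP count is built from the five external component types, using the standard Albrecht/IFPUG average-complexity weights (the component counts themselves are invented for illustration):

```python
# Unadjusted Function Point count using the standard IFPUG
# average-complexity weights; the component counts are invented.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

counts = {
    "external_inputs": 24,
    "external_outputs": 16,
    "external_inquiries": 22,
    "internal_files": 4,
    "external_interfaces": 2,
}

unadjusted_fp = sum(counts[k] * AVG_WEIGHTS[k] for k in counts)
```

A full FP count would then weight each component by its actual complexity (simple/average/complex) and apply a value adjustment factor; the sketch stops at the unadjusted count.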
LOC and FP — Advantages and Disadvantages
Because Lines Of Code and Function Points have been the only ways to size a system, a software developer’s or project manager’s choices have been very limited. Most have opted not to measure at all.
Empirical Estimation Models
- Typically derived from regression analysis of historical software project data with estimated person-months as the dependent variable and KLOC or FP as independent variables.
- Constructive Cost Model (COCOMO) is an example of a static estimation model.
- The Software Equation is an example of a dynamic estimation model.
This type of estimation uses empirically derived formulas to predict effort. It uses either lines of code (LOC) or function points (FP) to calculate the effort in person-months. Because all software projects are different, it is impossible to give one equation that will estimate all project effort.
The structure of empirical estimation models is a formula, derived from data collected from past software projects that uses software size to estimate effort. Size, itself, is an estimate, described as either lines of code (LOC) or function points (FP). No estimation model is appropriate for all development environments, development processes, or application types. Models must be customized (values in the formula must be altered) so that results from the model agree with the data from the particular environment.
The typical formula of estimation models is:
E = a + b(S)^c
where
E represents effort, in person-months,
S is the size of the software development, in LOC or FP, and
a, b, and c are values derived from data.
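In code, the generic model is a one-liner. The coefficient values below are placeholders chosen only to make the example run; as the text notes, real values must be fit against an organization's own historical project data:

```python
# Generic empirical model E = a + b * S**c.
# The coefficient values below are placeholders for illustration only;
# real values must be fit by regression against past-project data.

def empirical_effort(size, a, b, c):
    """Effort in person-months for size S (in LOC or FP)."""
    return a + b * size ** c

effort = empirical_effort(size=33_000, a=5.2, b=0.0001, c=1.2)
```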
The relationship generally seen between development effort and software size is superlinear: the amount of effort accelerates as size increases, i.e., the value c in the typical formula above is greater than 1.
COCOMO is a model designed by Barry Boehm to give an estimate of the number of person-months it will take to develop a software product.
This “COnstructive COst MOdel” is based on a study of about sixty projects at TRW, a Californian automotive and IT company, acquired by Northrop Grumman in late 2002. COCOMO consists of a hierarchy of three increasingly detailed and accurate forms.
Basic COCOMO – is a static, single-valued model that computes software development effort (and cost) as a function of program size expressed in estimated lines of code.
Intermediate COCOMO – computes software development effort as function of program size and a set of “cost drivers” that include subjective assessment of product, hardware, personnel and project attributes.
Detailed COCOMO – incorporates all characteristics of the intermediate version with an assessment of the cost drivers’ impact on each step (analysis, design, etc.) of the software engineering process.
Basic COCOMO is the simplest form of the COCOMO model. COCOMO may be applied to three classes of software projects, which give a general characterization of the project.
- Organic projects – are relatively small, simple software projects in which small teams with good application experience work to a set of less than rigid requirements.
- Semi-detached projects – are intermediate (in size and complexity) software projects in which teams with mixed experience levels must meet a mix of rigid and less than rigid requirements.
- Embedded projects – are software projects that must be developed within a set of tight hardware, software, and operational constraints.
The basic COCOMO equations take the form
E = a_b (KLOC)^(b_b)
D = c_b (E)^(d_b)
P = E/D
where E is the effort applied in person-months, D is the development time in chronological months, KLOC is the estimated number of delivered lines of code for the project (expressed in thousands), and P is the number of people required. The coefficients a_b, b_b, c_b, and d_b are constants that depend on the project class.
Basic COCOMO is good for quick, early, rough order of magnitude estimates of software costs, but its accuracy is necessarily limited because of its lack of factors to account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and other project attributes known to have a significant influence on software costs.
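Basic COCOMO can be sketched directly from the equations above, using the coefficient values Boehm published in 1981 for the three project classes:

```python
# Basic COCOMO (coefficient values as published by Boehm, 1981):
#   project class:   (a_b,  b_b,  c_b,  d_b)
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class):
    a_b, b_b, c_b, d_b = COEFFICIENTS[project_class]
    effort = a_b * kloc ** b_b        # E, in person-months
    duration = c_b * effort ** d_b    # D, in chronological months
    people = effort / duration        # P, average staffing level
    return effort, duration, people

# Example: a 32 KLOC organic project.
e, d, p = basic_cocomo(32, "organic")
```

For the 32 KLOC organic example this yields roughly 91 person-months over about 14 months, i.e., an average staff of between 6 and 7 people.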
As an example of how the intermediate COCOMO model works, the following is a calculation of the estimated effort for a semi-detached project of 56 KLOC. The cost drivers are set as follows:
Product cost drivers (from the table) set high = 1.15 x 1.08 x 1.15
Computer cost drivers (from the table) set nominal = 1.00
Personnel cost drivers (from the table) set low = 1.19 x 1.13 x 1.17 x 1.10 x 1.07
Project cost drivers (from the table) set high = 0.91 x 0.91 x 1.04
hence, product(cost drivers) = 1.43 x 1.00 x 1.85 x 0.86 = 2.28
for a semi-detached project of 56KLOC: a = 3.0 b = 1.12 S = 56
E = a(S)^b x product(cost drivers)
E = 3.0 x (56)^1.12 x 2.28
E = 3.0 x 90.78 x 2.28
E = 620.94 person-months
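The arithmetic above can be checked mechanically. Note that the unrounded product of the cost drivers is about 2.278, so computing with full precision lands slightly below the 620.94 person-months obtained with rounded intermediates:

```python
# Re-running the intermediate COCOMO example: semi-detached, 56 KLOC.
drivers = [
    1.15, 1.08, 1.15,              # product cost drivers, set high
    1.00,                          # computer cost drivers, nominal
    1.19, 1.13, 1.17, 1.10, 1.07,  # personnel cost drivers, set low
    0.91, 0.91, 1.04,              # project cost drivers, set high
]

eaf = 1.0                          # product of the cost drivers
for d in drivers:
    eaf *= d

a, b, size = 3.0, 1.12, 56         # semi-detached: a = 3.0, b = 1.12
effort = a * size ** b * eaf       # E = a(S)^b x product(cost drivers)
```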
Description and table of values for the COCOMO cost drivers:
▫ Effort estimation is only ever an estimate. Management should treat it with caution.
▫ To make empirical models as useful as possible, as much data as possible should be collected from projects and used to customize (refine) any model used. An ongoing data collection program is essential if models are to be developed and refined, and if management wishes to make informed decisions.
▫ Many organizations are known to use ‘estimation redundancy’, i.e., to provide a check on a particular estimate they will use more than one estimating method. The result is usually a set of estimates from which an organization will choose. The underlying assumptions play a large role in the final decision an organization makes.
▫ The different estimating methods used should be documented, and all underlying assumptions should be recorded.
The Intermediate COCOMO is an extension of the Basic COCOMO model, and is used to estimate the programmer time to develop a software product. This extension considers a set of “cost driver attributes” that can be grouped into four major categories, each with a number of subcategories:
- Product attributes
Required software reliability
Size of application database
Complexity of the product
- Hardware attributes
Run-time performance constraints
Memory constraints
Volatility of the virtual machine environment
Required turnaround time
- Personnel attributes
Analyst capability
Applications experience
Software engineer capability
Virtual machine experience
Programming language experience
- Project attributes
Use of software tools
Application of software engineering methods
Required development schedule
Each of the 15 attributes is rated on a 6-point scale that ranges from “very low” to “extra high” (in importance or value).
Detailed COCOMO is defined in Barry Boehm’s 1981 book Software Engineering Economics [BOEH81]. Detailed COCOMO incorporates all characteristics of the Intermediate COCOMO version with an assessment of the cost drivers’ impact on each step (analysis, design, etc.) of the software engineering process.
Detailed COCOMO offers a means for processing all the project characteristics to construct a software estimate. The detailed model introduces two more capabilities:
1. Phase-sensitive effort multipliers: Some phases (design, programming, integration/test) are more affected than others by the factors defined by the cost drivers. The detailed model provides a set of phase-sensitive effort multipliers for each cost driver. This helps in determining the manpower allocation for each phase of the project.
2. Three-level product hierarchy: Three product levels are defined: module, subsystem, and system. The rating of the cost drivers is done at the appropriate level; that is, the level at which each is most susceptible to variation.
Barry W. Boehm is known for many contributions to software engineering. He was the first to identify software as the primary expense of future computer systems, he developed COCOMO, the spiral model, wideband delphi, and many more contributions through his involvement in industry and academia.
Barry Boehm’s book Software Engineering Economics documents his Constructive Cost Model (COCOMO).
The Make/Buy Decision
- It may be more cost effective to acquire a piece of software rather than develop it.
- Decision tree analysis provides a systematic way to sort through the make-buy decision.
- As a rule outsourcing software development requires more skillful management than does in-house development of the same product.
Automated Estimation Tool Capabilities
- Sizing of project deliverables
- Selecting project activities
- Predicting staffing levels
- Predicting software effort
- Predicting software cost
- Predicting software schedule
Software Quality Assurance
Software Quality Assurance (SQA) is defined as a planned and systematic approach to the evaluation of the quality of and adherence to software product standards, processes, and procedures. SQA includes the process of assuring that standards and procedures are established and are followed throughout the software acquisition life cycle. Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Software development and control processes should include quality assurance approval points, where an SQA evaluation of the product may be done in relation to the applicable standards.
SQA is the concern of every software engineer to reduce cost and improve product time-to-market. A Software Quality Assurance Plan is not merely another name for a test plan, though test plans are included in an SQA plan. SQA activities are performed on every software project. Use of metrics is an important part of developing a strategy to improve the quality of both software processes and work products.
The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built.
- Variation control is the heart of quality control (software engineers strive to control the process applied, resources expended, and end product quality attributes).
- Quality of design – refers to characteristics designers specify for the end product to be constructed
- Quality of conformance – degree to which design specifications are followed in manufacturing the product
- Quality control – series of inspections, reviews, and tests used to ensure conformance of a work product to its specifications
- Quality assurance – consists of the auditing and reporting procedures used to provide management with data needed to make proactive decisions
Cost of Quality
- Prevention costs – quality planning, formal technical reviews, test equipment, training
- Appraisal costs – in-process and inter-process inspection, equipment calibration and maintenance, testing
- Internal failure costs – rework, repair, failure mode analysis
- External failure costs – complaint resolution, product return and replacement, help line support, warranty work
Total Quality Management
- Kaizen – develop a process that is visible, repeatable, and measurable
- Atarimae hinshitsu – examine the intangibles that affect the process and work to optimize their impact on the process
- Kansei – examine the way the product is used by the customer with an eye to improving both the product and the development process
- Miryokuteki hinshitsu – observe product use in the market place to uncover new product applications and identify new products to develop
Software Quality Assurance
- Conformance to software requirements is the foundation from which software quality is measured.
- Specified standards are used to define the development criteria that are used to guide the manner in which software is engineered.
- Software must conform to implicit requirements (ease of use, maintainability, reliability, etc.) as well as its explicit requirements.
SQA Group Activities
- Prepare SQA plan for the project.
- Participate in the development of the project’s software process description.
- Review software engineering activities to verify compliance with the defined software process.
- Audit designated software work products to verify compliance with those defined as part of the software process.
- Ensure that any deviations in software or work products are documented and handled according to a documented procedure.
- Record any evidence of noncompliance and report it to management.
Software Reviews
- Purpose is to find defects (errors) before they are passed on to another software engineering activity or released to the customer.
- Software engineers (and others) conduct formal technical reviews (FTR) for software engineers.
- Using formal technical reviews (walkthroughs or inspections) is an effective means for improving software quality.
Formal Technical Reviews
- Involves 3 to 5 people (including reviewers)
- Advance preparation (no more than 2 hours per person) required
- Duration of review meeting should be less than 2 hours
- Focus of review is on a discrete work product
- Review leader organizes the review meeting at the producer’s request
- Reviewers ask questions that enable the producer to discover his or her own error (the product is under review not the producer)
- Producer of the work product walks the reviewers through the product
- Recorder writes down any significant issues raised during the review
- Reviewers decide to accept or reject the work product and whether to require additional reviews of product or not
Statistical Quality Assurance
- Information about software defects is collected and categorized
- Each defect is traced back to its cause
- Using the Pareto principle (80% of the defects can be traced to 20% of the causes) isolate the “vital few” defect causes
- Move to correct the problems that caused the defects
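The Pareto step above can be sketched in a few lines: tally defects by cause and keep the smallest set of causes that accounts for roughly 80% of the defects (the defect log here is invented for illustration):

```python
from collections import Counter

# Invented defect log: each entry is the cause category of one defect.
defects = (["incomplete spec"] * 40 + ["logic error"] * 25 +
           ["interface error"] * 15 + ["standards violation"] * 10 +
           ["data handling"] * 10)

counts = Counter(defects)
total = sum(counts.values())

# Walk causes from most to least common until 80% of defects are
# covered: the causes collected so far are the "vital few".
vital_few, covered = [], 0
for cause, n in counts.most_common():
    if covered >= 0.8 * total:
        break
    vital_few.append(cause)
    covered += n
```

In this invented log, three of the five causes account for 80% of the defects, so corrective effort would be aimed at those three first.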
See also: http://www.rspa.com/spi/SQA.html
Software Reliability
- Defined as the probability of failure-free operation of a computer program in a specified environment for a specified time period
- Can be measured directly and estimated using historical and developmental data (unlike many other software quality factors)
- Software reliability problems can usually be traced back to errors in design or implementation.
Software Safety
- Defined as a software quality assurance activity that focuses on identifying potential hazards that may cause a software system to fail.
- Early identification of software hazards allows developers to specify design features that can eliminate or at least control the impact of potential hazards.
- Software reliability involves determining the likelihood that a failure will occur, while software safety examines the ways in which failures may result in conditions that can lead to a mishap.
Poka-Yoke (Mistake-Proofing)
- Poka-yoke devices are mechanisms that lead to the prevention of a potential quality problem before it occurs or to the rapid detection of a quality problem if one is introduced
- Poka-yoke devices are simple, cheap, part of the engineering process, and are located near the process task where the mistakes occur
The SQA Plan
- Management section – describes the place of SQA in the structure of the organization
- Documentation section – describes each work product produced as part of the software process
- Standards, practices, and conventions section – lists all applicable standards/practices applied during the software process and any metrics to be collected as part of the software engineering work
- Reviews and audits section – provides an overview of the approach used in the reviews and audits to be conducted during the project
- Test section – references the test plan and procedure document and defines test record keeping requirements
- Problem reporting and corrective action section – defines procedures for reporting, tracking, and resolving errors or defects, identifies organizational responsibilities for these activities
- Other – tools, SQA methods, change control, record keeping, training, and risk management
ISO Quality Standards
- Quality assurance systems are defined as the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
- ISO 9000 describes the quality elements that must be present for a quality assurance system to be compliant with the standard, but it does not describe how an organization should implement these elements.
- ISO 9001 is the quality standard that contains 20 requirements that must be present in an effective software quality assurance system.
ISO 9000 is a family of ISO (the International Organization for Standardization) standards for quality management systems. ISO 9000 was developed from the British Standards Institution’s BS 5750. The ISO 9000 standards are maintained by ISO and administered by accreditation and certification bodies. ISO 9000 does not guarantee the quality of end products and services; rather, it certifies that consistent business processes are being applied.
ISO does not itself certify organizations. Many countries have formed accreditation bodies to authorize certification bodies, which audit organizations applying for ISO 9001 compliance certification. It is important to note that it is not possible to be certified to ISO 9000. Although commonly referred to as ISO 9000:2000 certification, the actual standard to which an organization’s quality management can be certified is ISO 9001:2000. Both the accreditation bodies and the certification bodies charge fees for their services. The various accreditation bodies have mutual agreements with each other to ensure that certificates issued by one of the Accredited Certification Bodies (CB) are accepted world-wide.
The applying organization is assessed based on an extensive sample of its sites, functions, products, services, and processes, and a list of problems (“action requests” or “non-compliances”) is made known to the management. If there are no major problems on this list, the certification body will issue an ISO 9001 certificate for each geographical site it has visited, once it receives a satisfactory improvement plan from the management showing how any problems will be resolved.
An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals recommended by the certification body, usually around three years. In contrast to the Capability Maturity Model there are no grades of competence within ISO 9001.
However, there are various approaches which attempt to measure quality in a way that is not simply pass or fail, as is the case with ISO 9001. One such scheme is BSI Benchmark, which evaluates the progress of an organization’s management system by measuring the degree of application of the eight management principles which underlie the ISO 9000 standards.
Two types of auditing are required to become registered to the standard: auditing by an external certification body (external audit) and audits by internal staff trained for this process (internal audits). The aim is a continual process of review and assessment, to verify that the system is working as it’s supposed to, find out where it can improve, and to correct or prevent problems identified. It is considered healthier for internal auditors to audit outside their usual management line, so as to bring a degree of independence to their judgements.
ISO 9000 document suite
ISO 9000 is composed of the following sections:
ISO 9000:2000, Quality management systems
Fundamentals and vocabulary. Covers the basics of what quality management systems are and also contains the core language of the ISO 9000 series of standards. The latest version is ISO 9000:2005.
ISO 9001 Quality management systems
Requirements. Intended for use in any organization which designs, develops, manufactures, installs and/or services any product or provides any form of service. It provides a number of requirements which an organization needs to fulfill if it is to achieve customer satisfaction through consistent products and services which meet customer expectations. This is the only implementation for which third-party auditors may grant certifications. The latest version is ISO 9001:2000.
ISO 9004 Quality management systems
Guidelines for performance improvements. Covers continual improvement, giving advice on what you could do to enhance a mature system. This standard very specifically states that it is not intended as a guide to implementation.
The ISO 9001 standard is generalized and abstract. Its parts must be carefully interpreted, to make sense within a particular organization. Developing software is not like making cheese or offering counseling services; yet the ISO 9001 guidelines, because they are business management guidelines, can be applied to each of these. Diverse organizations—police departments (US), professional soccer teams (Mexico) and city councils (UK)—have successfully implemented ISO 9001:2000 systems.