Software Engineering (SE) is the discipline of designing, creating, and maintaining software by applying technologies and practices from computer science, project management, engineering, application domains and other fields.
Software engineering is “(1) the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, that is, the application of engineering to software,” and “(2) the study of approaches as in (1).”
- Software is both a product and a vehicle for developing a product.
- Software is engineered, not manufactured.
- Software does not wear out, but it does deteriorate.
- Currently, most software is still custom-built.
Software engineering is concerned with the application of engineering principles to the conception, development and verification of a software system. This discipline deals with identifying, defining, realizing and verifying the required characteristics of the resultant software. These software characteristics may include: functionality, reliability, maintainability, availability, testability, ease-of-use, portability, and other attributes. Software engineering addresses these characteristics by preparing design and technical specifications that, if implemented properly, will result in software that can be verified to meet these requirements.
Software engineering is also concerned with the characteristics of the software development process. In this regard, it deals with characteristics such as cost of development, duration of development, and risks in development of software. Some typical characteristics are:
- Software is developed or engineered. It is not manufactured in the classical sense.
- For hardware, the manufacturing phase can introduce quality problems that do not exist for software.
- Furthermore, for hardware a separate manufacturing process has to be developed.
- Software development costs are concentrated in engineering.
- Software only suffers from design errors.
- Software does not wear out.
Hardware follows the “bathtub” curve:
Software does not wear out, so in principle its failure rate should follow a steadily decreasing curve:
But, due to changes during its lifetime, it deteriorates:
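The two curves can be sketched numerically. The toy model below is purely illustrative: none of the coefficients come from the text; they are invented only to reproduce the shapes described (a bathtub curve for hardware, and a decreasing software curve that jumps upward at each change).

```python
# Toy model, purely illustrative: all coefficients are invented and only
# reproduce the shapes described above.

import math

def hardware_failure_rate(t):
    """Bathtub curve: infant mortality, a constant floor, then wear-out."""
    infant_mortality = math.exp(-t / 2.0)     # high at the start, falls off
    wear_out = math.exp((t - 20.0) / 3.0)     # rises late in the lifetime
    return 0.1 + infant_mortality + wear_out

def software_failure_rate(t, changes=()):
    """Idealized software curve: falls as defects are fixed, but each
    change (maintenance release) pushes the rate back up temporarily."""
    rate = math.exp(-t / 2.0)
    for c in changes:
        if t >= c:
            rate += 0.6 * math.exp(-(t - c) / 2.0)
    return 0.05 + rate

# With no changes the rate only decreases; a change at t = 5 makes the
# curve jump, which is the "deterioration" described above.
print(software_failure_rate(8.0) < software_failure_rate(1.0))                # True
print(software_failure_rate(5.5, changes=(5,)) > software_failure_rate(4.9))  # True
```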
- Most software is custom-built, rather than being assembled from existing components.
A software component is a system element offering a predefined service and able to communicate with other components. Clemens Szyperski and David Messerschmitt give the following five criteria for what a software component shall be to fulfill the definition:
- Multiple-use (usable in more than one program)
- Non-context-specific
- Composable with other components
- Encapsulated, i.e., non-investigable through its interfaces
- A unit of independent deployment and versioning
Software components often take the form of objects or collections of objects (from object-oriented programming), in some binary or textual form, adhering to some interface description language (IDL) so that the component may exist autonomously from other components in a computer. The idea that software should be componentized, built from prefabricated components, was first published in Douglas McIlroy’s address at the NATO conference on software engineering in Garmisch, Germany, 1968 titled Mass Produced Software Components.
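As a hypothetical illustration of these criteria (the names SpellCheckService, SimpleSpellChecker, and Editor are invented, not taken from any real library), a component might expose a single published interface, hide its internal state, and remain swappable:

```python
# Invented sketch of component criteria: a published interface, an
# encapsulated implementation, and a client composed against the
# interface rather than the implementation.

from abc import ABC, abstractmethod

class SpellCheckService(ABC):
    """The published interface: clients see only this contract."""
    @abstractmethod
    def check(self, text: str) -> list[str]: ...

class SimpleSpellChecker(SpellCheckService):
    """Encapsulated: the word list is internal state, reachable only
    through the interface above."""
    def __init__(self, known_words):
        self._known = set(known_words)          # non-investigable internals
    def check(self, text):
        return [w for w in text.split() if w not in self._known]

class Editor:
    """Composable: the editor depends only on the interface, so any
    conforming component can be deployed and versioned independently."""
    def __init__(self, checker: SpellCheckService):
        self._checker = checker
    def misspellings(self, text):
        return self._checker.check(text)

editor = Editor(SimpleSpellChecker({"software", "is", "engineered"}))
print(editor.misspellings("software is enginered"))   # ['enginered']
```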
Broad categories of software include:
- System software
- Real-time software
- Business software
- Engineering and scientific software
- Embedded software
- Personal computer software
- Web-based software
- Artificial intelligence software
- Software failures receive a lot more publicity than software engineering success stories.
- The software crisis predicted thirty years ago has never materialized and software engineering successes outnumber the failures.
- The problems that afflict software development are associated more with how to develop and support software properly, than with simply building software that functions correctly.
During the 1960s and 1970s it became clear that:
- The construction of large programs is much more problematic than for small programs.
- Development effort grows more than linearly with program size.
- Hardware is no longer the most important factor.
- Do-it-yourself programming is not the same as the development of a product to be used by or adapted by others.
- A program is not a static entity, but it evolves in time due to changes in requirements and environment.
Software is no longer specialized problem-solving, but an industry in itself. Hence a discipline to provide a framework for building software with a higher quality is needed: Software Engineering.
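One way to quantify this more-than-linear growth of effort with size, not discussed in the text itself, is Boehm's Basic COCOMO model. The sketch below uses the standard coefficients for "organic mode" projects; treat it as an illustration of the trend, not a project estimator.

```python
# Basic COCOMO (Boehm, 1981), organic-mode coefficients: effort in
# person-months as a function of size in thousands of lines of code.
# The exponent b > 1 is what makes effort grow faster than size.

def effort_person_months(kloc, a=2.4, b=1.05):
    return a * kloc ** b

for size in (10, 100, 1000):
    print(f"{size:>5} KLOC -> {effort_person_months(size):8.1f} person-months")
```

Under these coefficients, a hundredfold increase in size yields roughly a 126-fold increase in effort.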
The Therac-25, a computerized radiation therapy machine, massively overdosed patients at least six times between June 1985 and January 1987. Each overdose was several times the normal therapeutic dose and resulted in the patient’s severe injury or even death. Overdoses, although they sometimes involved operator error, occurred primarily because of errors in the Therac-25’s software and because the manufacturer did not follow proper software engineering practices.
On 4 June 1996 the maiden flight of the Ariane 5 launcher ended in a failure, about 40 seconds after initiation of the flight sequence. The failure was caused by specification and design errors in the software of the inertial reference system. The extensive reviews and tests carried out during the Ariane 5 development program did not include adequate analysis and testing of the inertial reference system or of the complete flight control system, which could have detected the potential failure.
Problems with an automated baggage-handling system controlled by more than 100 computers severely delayed the opening of Denver’s new airport, planned for 1994. On a tested transit of 429 bags from the Continental ticket counters, 38 were lost in the system, and an additional 15% had to have their bar-coded tags scanned twice. The airport eventually opened in 1995, 16 months late, with delay costs of over $300 million, and with a mainly manual baggage system. The system suffered from problems in two basic areas: software and mechanical.
- Software standards provide software engineers with all the guidance they need. The reality is the standards may be outdated and rarely referred to.
- People with modern computers have all the software development tools they need. The reality is that CASE tools are more important than hardware for producing high-quality software, yet they are rarely used effectively.
- Adding people is a good way to catch up when a project is behind schedule. The reality is that adding people only helps the project schedule when it is done in a planned, well-coordinated manner.
- Giving software projects to outside parties to develop solves software project management problems. The reality is people who can’t manage internal software development problems will struggle to manage or control the external development of software too.
- A general statement of objectives from the customer is all that is needed to begin a software project. The reality is without constant communication between the customer and the developers it is impossible to build a software product that meets the customer’s real needs.
- Project requirements change continually and change is easy to accommodate in the software design. The reality is that every change has far-reaching and unexpected consequences. Changes to software requirements must be managed very carefully to keep a software project on time and under budget.
- Once a program is written, the software engineer’s work is finished. The reality is that work on a piece of software is never finished until the product is retired from service.
- There is no way to assess the quality of a piece of software until it is actually running on some machine. The reality is that one of the most effective quality assurance practices (formal technical reviews) can be applied to any software design product and can serve as a quality filter very early in the product life cycle.
- The only deliverable from a successful software project is the working program. The reality is the working program is only one of several deliverables that arise from a well-managed software project. The documentation is also important since it provides a basis for software support after delivery.
- Software engineering is all about the creation of large and unnecessary documentation. The reality is that software engineering is concerned with creating quality, not documents. Better quality leads to reduced rework, and reduced rework results in faster delivery times and shorter development cycles.
Brief history of software engineering
Software engineering has a long evolving history. Both the tools that are used and the applications that are written have evolved over time. It seems likely that software engineering will continue evolving for many decades to come.
60-year timeline
1940s: First computer users wrote machine code by hand.
1950s: Early tools, such as macro assemblers and interpreters, were created and widely used to improve productivity and quality. First-generation optimizing compilers appeared.
1960s: Second generation tools like optimizing compilers and inspections were being used to improve productivity and quality. The concept of software engineering was widely discussed. First really big (1000 programmer) projects. Commercial mainframes and custom software for big business. The influential 1968 NATO Conference on Software Engineering was held.
1970s: Collaborative software tools, such as Unix, code repositories, make, and so on. Minicomputers and the rise of small business software.
1980s: Personal computers and personal workstations become common. Commensurate rise of consumer software.
1990s: Object-oriented programming and agile processes like Extreme programming gain mainstream acceptance. The WWW and hand-held computers make software even more widely available.
2000s: Managed code and interpreted platforms such as .NET, PHP, Python and Java make writing software easier than ever before.
Common Process Framework
A common process framework defines, for every project:
- Software engineering work tasks
- Project milestones
- Work products
- Quality assurance points
The Capability Maturity Model
The basic idea behind this model is that the quality of the development process determines the quality of a software product. The Capability Maturity Model has been developed by the SEI (Software Engineering Institute). It defines 5 process maturity levels. Most companies are somewhere between level 1 and level 2.
Level 1: Initial.
The development process is chaotic and the results (quality, schedule, cost) are unpredictable. This does not mean that the resulting software is bad by definition, but successes rely on individuals.
The first areas to improve are:
- Project management/planning.
- Configuration management.
- Software quality assurance.
Level 2: Repeatable.
Procedures are written down, and cost, schedule, and functionality are tracked. Earlier successes are therefore repeatable for similar applications.
Areas to improve:
- Reviews, testing.
Level 3: Defined.
The actual process is defined in a model: the software process for both management and engineering activities is documented, standardized and integrated.
Areas to improve:
- Process measurement.
- Process analysis.
- Quantitative quality plans.
Level 4: Managed.
The software development process is instrumented. The measurements (metrics) are analyzed to adjust the process.
Areas to improve:
- Problem analysis.
- Problem prevention.
- Changing technology.
Level 5: Optimizing.
The effect of the process itself on the results is understood.
Continuous process improvement therefore includes changing the process itself.
Software development process – layered approach
A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.
Software Engineering processes are composed of many activities, notably the following. They are considered sequential steps in the Waterfall process, but other processes may rearrange or combine them in different ways.
Fig 2.1 Software development process
Requirements analysis
Extracting the requirements of a desired software product is the first task in creating it. While customers probably believe they know what the software is to do, it may require skill and experience in software engineering to recognize incomplete, ambiguous or contradictory requirements.
Specification
Specification is the task of precisely describing the software to be written, in a mathematically rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.
Software architecture
The architecture of a software system refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware or the host operating system.
Implementation (or Coding)
Reducing a design to code may be the most obvious part of the software engineering job, but it is not necessarily the largest portion.
Testing
Testing of parts of software, especially where code by two different engineers must work together, falls to the software engineer.
Documentation
An important (and often overlooked) task is documenting the internal design of software for the purpose of future maintenance and enhancement. Documentation is most important for external interfaces.
Software Training and Support
A large percentage of software projects fail because the developers fail to realize that it does not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are occasionally resistant to change and reluctant to venture into an unfamiliar area, so as part of the deployment phase it is very important to hold training classes for the most enthusiastic software users first (building excitement and confidence), then shift the training toward the neutral users intermixed with the avid supporters, and finally incorporate the rest of the organization into adopting the new software. Users will have many questions and software problems, which leads to the next phase of the software’s life: maintenance.
Maintenance
Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. Not only may it be necessary to add code that does not fit the original design but just determining how software works at some point after it is completed may require significant effort by a software engineer. About ⅔ of all software engineering work is maintenance, but this statistic can be misleading. A small part of that is fixing bugs. Most maintenance is extending systems to do new things, which in many ways can be considered new work. In comparison, about ⅔ of all civil engineering, architecture, and construction work is maintenance in a similar way.
Waterfall model
The waterfall model is a software development model in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The origin of the term “waterfall” is often cited to be an article published in 1970 by W. W. Royce; ironically, Royce himself advocated an iterative approach to software development and did not even use the term “waterfall”.
Fig 2.2 Waterfall model
In Royce’s original waterfall model, the following phases are followed perfectly in order:
- Requirements specification
- Design
- Construction (implementation or coding)
- Integration
- Testing and debugging
- Installation
- Maintenance
To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes “requirements specification”, setting in stone the requirements of the software. When and only when the requirements are fully completed, one proceeds to design. The software in question is designed and a “blueprint” is drawn for implementers (coders) to follow; this design should be a plan for implementing the requirements given. When and only when the design is fully completed, an implementation of that design is made by coders. Towards the later stages of this implementation phase, disparate software components produced by different teams are integrated. After the implementation and integration phases are complete, the software product is tested and debugged; any faults introduced in earlier phases are removed here. Then the software product is installed, and later maintained to introduce new functionality and remove bugs.
Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. Phases of development in the waterfall model are thus discrete, and there is no jumping back and forth or overlap between them.
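The strict phase ordering can be caricatured in a few lines of Python (all function names and data below are invented for illustration): each phase consumes only the finished output of its predecessor, and nothing flows backwards.

```python
# A caricature of waterfall sequencing (names and data invented):
# each phase consumes only the finished output of its predecessor,
# and there is no path back to an earlier phase.

def requirements():
    return ["store orders", "print invoices"]          # frozen up front

def design(reqs):
    return {r: f"module for {r}" for r in reqs}        # blueprint per requirement

def implement(blueprint):
    return {m: f"code of {m}" for m in blueprint.values()}

def verify(code):
    return all(c.startswith("code of") for c in code.values())

# Purely sequential: no iteration, no overlap between phases.
product = implement(design(requirements()))
print(verify(product))   # True
```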
Iterative and Incremental development model
Fig 2.3 Iterative and Incremental development model
Iterative and Incremental development is a software development process developed in response to the weaknesses of the more traditional waterfall model. The two most well known iterative development frameworks are the Rational Unified Process and the Dynamic Systems Development Method.
The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what was being learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and use of the system, where possible. Key steps in the process were to start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added.
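A minimal sketch of this loop, with invented requirement names, might look like the following; contrast it with the single straight-line pass of the waterfall model.

```python
# Sketch of iterative enhancement (requirement names invented): begin
# with a subset of the requirements and grow the system one deliverable
# version at a time, learning from each release.

all_requirements = ["login", "search", "checkout", "reports"]

system = []                        # the evolving system
backlog = list(all_requirements)   # not frozen: may change between iterations

while backlog:
    increment = backlog.pop(0)     # pick the next small subset
    system.append(increment)       # design, implement, and deliver it
    # feedback from using this version would reorder or extend the
    # backlog in a real project; here the backlog simply shrinks

print(system)   # ['login', 'search', 'checkout', 'reports']
```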
Prototyping model
Prototyping is defined as developing an initial “model”, providing the prototype to the intended users, gathering feedback from the users, and incorporating any revisions or refinements. If this process continues until the required system is developed, then the process is considered evolutionary development. However, when the objective of the prototype is only to determine or validate system requirements, the prototype is eventually discarded and the process is referred to as throw-away prototyping.
Fig 2.4 Prototyping Development Model
It is a method to evaluate the feasibility of technical ideas and theories that has become increasingly popular and is a widely used development model at various defense R&D centers. Developing a prototype is usually a distinct portion of the life cycle. Just as the prototype will provide insight into the design and implementation issues, the estimate and cost of producing the prototype will provide insight into the cost of the overall project.
Spiral model
The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts.
The spiral model was defined by Barry Boehm in his 1986 article A Spiral Model of Software Development and Enhancement. This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters. As originally envisioned, the iterations were typically 6 months to 2 years long.
Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.
Fig 2.5 Spiral model
The Spiral model is used most often in large projects (by companies such as IBM and Microsoft) and needs constant review to stay on target. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military has adopted the spiral model for its Future Combat Systems program.
Project Management Concepts
Project management is the discipline of organizing and managing resources in such a way that these resources deliver all the work required to complete a project within defined scope, time, and cost constraints. A project is a temporary and one-time endeavor undertaken to create a unique product or service. This property of being a temporary and one-time undertaking contrasts with processes, or operations, which are permanent or semi-permanent ongoing functional work to create the same product or service over and over again. The management of these two systems is often very different and requires varying technical skills and philosophy, hence requiring the development of project management.
The first challenge of project management is ensuring that a project is delivered within the defined constraints. The second, more ambitious, challenge is the optimized allocation and integration of the inputs needed to meet those pre-defined objectives. The project, therefore, is a carefully selected set of activities chosen to use resources (time, money, people, materials, energy, space, provisions, communication, quality, risk, etc.) to meet the pre-defined objectives.
Effective software project management focuses on:
- People (recruiting, selection, performance management, training, compensation, career development, organization, work design, team/culture development)
- Product (product objectives, scope, alternative solutions, constraint tradeoffs)
- Process (framework activities populated with tasks, milestones, work products, and QA points)
- Project (planning, monitoring, controlling)
- Players (senior managers, technical managers, practitioners, customers, end-users)
- Team leadership model (motivation, organization, skills)
- Characteristics of effective project managers (problem solving, managerial identity, achievement, influence and team building)
Software Team Organization
- Democratic decentralized (rotating task coordinators and group consensus)
- Controlled decentralized (permanent leader, group problem solving, subgroup implementation of solutions)
- Controlled centralized (top level problem solving and internal coordination managed by team leader)
Factors Affecting Team Organization
- Difficulty of problem to be solved
- Size of resulting program
- Team lifetime
- Degree to which problem can be modularized
- Required quality and reliability of the system to be built
- Rigidity of the delivery date
- Degree of communication required for the project
- Software scope (context, information objectives, function, performance)
- Problem decomposition (partitioning or problem elaboration – focus is on functionality to be delivered and the process used to deliver it)
- Process model chosen must be appropriate for the: customers and developers, characteristics of the product, and project development environment
- Project planning begins with melding the product and the process
- Each function to be engineered must pass through the set of framework activities defined for a software organization
- Work tasks may vary but the common process framework (CPF) is invariant (project size does not change the CPF)
- The job of the software engineer is to estimate the resources required to move each function through the framework activities to produce each work product
- Project decomposition begins when the project manager tries to determine how to accomplish each CPF activity
A commonsense approach to managing software projects:
- Start on the right foot
- Maintain momentum
- Track progress
- Make smart decisions
- Conduct a postmortem analysis
Boehm’s W5HH principle asks the following questions of every project:
- Why is the system being developed?
- What will be done by When?
- Who is responsible for a function?
- Where are they organizationally located?
- How will the job be done technically and managerially?
- How much of each resource is needed?
The port of Massawa, taken over by the Allies in 1942, was a chaotic mess. Access had been blocked with scuttled ships and port facilities had been wrecked. Captain Edward Ellsberg, a US Navy salvage expert, rapidly salvaged scuttled ships for service in the Allied merchant fleets. He also salvaged a large floating dry dock and returned port shops and facilities to operation. Ellsberg had very limited resources. Ellsberg’s efforts show that a project-oriented expert can accomplish a nearly insurmountable task. Ellsberg had virtually no support staff and few skilled workers. He planned and managed the entire project by himself. Ellsberg, an accomplished author, documented this case in Under the Red Sea Sun (New York: Dodd, Mead & Company, 1946).
Software process and project metrics are quantitative measures that enable software engineers to gain insight into the efficiency of the software process and the projects conducted using the process framework. In software project management, we are primarily concerned with productivity and quality metrics. There are four reasons for measuring software processes, products, and resources: to characterize, to evaluate, to predict, and to improve.
Measures, Metrics, and Indicators
- Measure – provides a quantitative indication of the size of some product or process attribute
- Measurement – is the act of obtaining a measure
- Metric – is a quantitative measure of the degree to which a system, component, or process possesses a given attribute
- Indicator – is a metric or combination of metrics that provides insight into the software process, the project, or the product itself
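A worked example (all numbers invented) makes the distinction concrete: raw counts are measures, defects per KLOC is a metric, and comparing that metric against an organizational baseline turns it into an indicator.

```python
# Worked example with invented numbers: measures -> metric -> indicator.

defects_found = 48            # measure: a raw count
lines_of_code = 12_400        # measure: a raw count

defect_density = defects_found / (lines_of_code / 1000)   # metric: defects/KLOC
print(round(defect_density, 2))   # 3.87

baseline = 5.0                # hypothetical organizational baseline, defects/KLOC
indicator = "below baseline" if defect_density < baseline else "above baseline"
print(indicator)              # below baseline
```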
Process and Project Indicators
- Metrics should be collected so that process and product indicators can be ascertained
- Process indicators enable software project managers to: assess project status, track potential risks, detect problem areas early, adjust workflow or tasks, and evaluate the team’s ability to control product quality
- Private process metrics (e.g. defect rates by individual or module) are known only to the individual or team concerned.
- Public process metrics enable organizations to make strategic changes to improve the software process.
- Metrics should not be used to evaluate the performance of individuals.
- Statistical software process improvement helps an organization discover where it is strong and where it is weak.
- Software project metrics are used by the software team to adapt project workflow and technical activities.
- Project metrics are used to avoid development schedule delays, to mitigate potential risks, and to assess product quality on an on-going basis.
- Every project should measure its inputs (resources), outputs (deliverables), and results (effectiveness of deliverables).
- Direct measures of software engineering process include cost and effort.
- Direct measures of the product include lines of code (LOC), execution speed, memory size, defects per reporting time period.
- Indirect measures examine the quality of the software product itself (e.g. functionality, complexity, efficiency, reliability, maintainability).
Software Quality Metrics
- Factors assessing software quality come from three distinct points of view (product operation, product revision, product modification).
- Software quality factors requiring measures include correctness (defects per KLOC), maintainability (mean time to change), integrity (threat and security), and usability (easy to learn, easy to use, productivity increase, user attitude).
- Defect removal efficiency (DRE) is a measure of the filtering ability of the quality assurance and control activities as they are applied throughout the process framework.
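DRE is conventionally computed as E / (E + D), where E is the number of errors found before delivery and D the number of defects found after delivery. The counts below are invented for illustration.

```python
# Standard defect removal efficiency formula: DRE = E / (E + D).

def defect_removal_efficiency(errors_before, defects_after):
    """E = errors found before delivery, D = defects found after."""
    return errors_before / (errors_before + defects_after)

# QA found 90 errors before release; customers reported 10 defects after.
print(defect_removal_efficiency(90, 10))   # 0.9, i.e. 90% filtered before release
```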
Integrating Metrics with Software Process
- Many software developers do not collect measures.
- Without measurement it is impossible to determine whether a process is improving or not.
- Baseline metrics data should be collected from a large, representative sampling of past software projects.
- Getting this historic project data is very difficult if the previous developers did not collect data in an ongoing manner.
Statistical Process Control
- It is important to determine whether the metrics collected are statistically valid and not the result of noise in the data.
- Control charts provide a means for determining whether changes in the metrics data are meaningful or not.
- Zone rules identify conditions that indicate out of control processes (expressed as distance from mean in standard deviation units).
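A minimal sketch of this idea (with invented data): derive the mean and standard deviation from baseline projects, then flag any new metric value lying more than three standard deviations from the mean, the simplest of the zone rules.

```python
# Control-chart sketch with invented data: limits come from baseline
# projects; values beyond three standard deviations signal a meaningful
# change rather than noise.

from statistics import mean, stdev

baseline = [4.2, 3.9, 4.4, 4.1, 3.8, 4.3, 4.0]   # e.g. defects/KLOC, past projects
m, s = mean(baseline), stdev(baseline)

def in_control(value, k=3):
    """True if value lies within k standard deviations of the baseline mean."""
    return abs(value - m) <= k * s

print(in_control(4.5))   # True  -> ordinary variation (noise)
print(in_control(9.5))   # False -> meaningful change worth investigating
```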
Metrics for Small Organizations
- Most software organizations have fewer than 20 software engineers.
- Best advice is to choose simple metrics that provide value to the organization and don’t require a lot of effort to collect.
- Even small groups can expect a significant return on the investment required to collect metrics, if this activity leads to process improvement.