Custom Software

Leverage our technology expertise for custom applications and APIs, as well as application rewrites, driven by emerging technologies in open source, AI, automation, IoT, and big data.

AI Solutions

Platform solutions, consulting expertise, and seasoned advisory services for defining and executing AI initiatives, whether internal operational improvements or external support and services.

Hire Talent

Quick turnaround of individuals and teams within our core disciplines. Exceptional technical vetting and retention, with global location and time-zone coverage. Flexible utilization for certain roles.


Agile Software Development Life Cycle: Case Study

Learn more about our agile software development life cycle from our Mitsubishi case study.

Any software development project, big or small, requires a great deal of planning, along with steps that divide the entire development process into smaller tasks that can be assigned to specific people, completed, measured, and evaluated. The Agile Software Development Life Cycle (SDLC) is the process for doing exactly that: planning, developing, testing, and deploying information systems. The benefit of an agile SDLC is that project managers can omit, split, or merge steps depending on the project’s scope while maintaining the efficiency of the development process and the integrity of the development life cycle.

Today, we are going to examine a software development life cycle case study from one of Intersog’s previous projects to show how agility plays a crucial role in the successful delivery of the final product. Several years back, we worked with Mitsubishi Motors, helping one of the world’s leading automotive manufacturers develop a new supply chain management system. Given the project’s large scope, its complex features, and the many stakeholders relying on its outcomes, we had to employ an agile approach to ensure a secure software development life cycle.

Business Requirements

Mitsubishi Motors works with many stakeholders and suppliers around the world, which makes its supply chain complex and data-heavy. Timely improvements are therefore crucial to the proper functioning of this huge system and of the corporation as a whole. Over the years, the old supply chain had accumulated noticeable frictions that resulted in efficiency bottlenecks, and Intersog came up with just the right set of solutions to help Mitsubishi ensure a coherent line of communication and cooperation with all the suppliers involved.


Previously, Mitsubishi used an outdated supply chain management system built around a large number of spreadsheets that required extensive manual input. Given the large number of stakeholders, synchronization was a pressing problem as well: different stakeholders would enter data at different speeds and at different times of day, which created a degree of confusion among suppliers. Though the system had been sufficient for a long time, the time had come to eliminate the redundancies and streamline data input.

The legacy system was partially automated and ran on an IBM AS/400 server, which allowed for impressive flexibility, but it no longer sufficed for Mitsubishi’s growing needs. The main requirement, thus, was to create a robust online supply chain solution that would encompass the entire logistics process, starting with auto parts and steel suppliers and ending with subcontractors and car dealerships around the world. That said, Mitsubishi did not want to replace the system completely; they opted for an overhaul, and we came up with the idea of an integrated web application that would function in conjunction with the DB2 database already in use on the AS/400 server.

IT Architecture and Agile SDLC

Mitsubishi employs a series of guidelines and rules on how to build, modify, and acquire new IT resources, which is why Intersog had to be truly agile to adapt to the client’s long-established IT architecture. Adapting to a client’s requirements, and especially to the strict architecture regulations of large corporations like Mitsubishi, requires knowledge, flexibility, and strong industry expertise. Each software development company has its own architecture standards and frameworks for building new systems, but many face difficulties when working with existing systems and adapting them to new requirements.

Intersog has no such problems. We approached Mitsubishi’s case with strong industry expertise and the flexibility to account for all of the client’s needs and the specifications of the existing system. Naturally, following the client’s architecture regulations requires a profound understanding of those regulations, which is why information gathering is an integral phase of the software development life cycle.

Requirements Gathering

The requirements gathering phase can take anywhere from just a couple of days to several weeks. Working with complex and multi-layered legacy systems like the one used by Mitsubishi requires serious analysis and information gathering. In the case of Mitsubishi, our dedicated team had to gain a clear understanding of how the legacy system functions, create new software specifications, map out the development process, gather and create all the necessary documentation, track all the issues related to the functioning of the legacy system, outline the necessary solutions, and allocate all the resources to achieve the project’s goals in the most efficient manner. 

On the Mitsubishi project, our team spent about four weeks gathering all the required information. This included a thorough examination of the legacy system, mapping out all of its flaws and specifications, bridging the gaps between the current state of the system and the requirements of the client, and outlining the development process.


Solution Design

The design stage covers all the integral decisions regarding the software architecture, its rework, and the tech frameworks to be used. During this stage, developers discuss the coding guidelines, tools, practices, and runtimes that will help the team meet the client’s requirements. When working with large corporations like Mitsubishi, a custom software development team has to work closely with the company’s own developers to better understand the specifics of the architecture and create a design that reflects all the requirements.

After all the requirements were gathered, we initiated the design stage based on the client’s specifications and came up with a number of solutions that matched Mitsubishi’s specs:

  • A convenient data model designed to minimize data duplication;
  • A permission system that differentiated users by their access levels;
  • An appealing user interface mockup to make user-system interaction more comfortable;
  • Integration with the legacy RPG system;
  • Notifications to keep partners up to date on important activities.

This set of essential solutions was discussed and approved over the course of the design stage, which lasted two months. During this stage, the Intersog and Mitsubishi development teams worked closely to come up with solutions that matched the client’s requirements to a tee. Proper functioning of the supply chain is vital for the entire corporation, which is why it was critical to get everything right. Two months might seem like a long timeline, but for this software development life cycle case study it was not that long, considering how complex Mitsubishi’s legacy system was.

Solution Development

After the solution design is approved, the team can move on to developing those solutions. This is the core of the entire project, the stage at which the team meets the goals and achieves the outcomes set during the previous stages. The success of the development stage depends heavily on how good a job the team did during the design stage: if everything was designed with laser precision, the team can expect few, if any, surprises during development.

During the development stage, the teams code their way toward the final product based on the decisions made earlier. With Mitsubishi, we followed the guidelines we had agreed on and implemented a set of essential solutions (a simplified sketch of two of them follows the list):

  • We built a convenient data model that minimizes the risk of human error by reducing redundant and repetitive data entry and duplication.
  • Improved Mitsubishi’s security system to differentiate users by their level of access and give them the corresponding level of control over the data.
  • Added notifications so that users could react to relevant changes faster.
  • Designed an appealing and comfortable user interface using AJAX to make user-system interaction more comfortable and time-efficient.
  • Deployed the platform running on the IBM AS/400 server with the integration of DB2 databases.
  • Integrated the existing RPG software into the new system.
  • Migrated the existing spreadsheets and all the essential data into the new system.

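To make the first two items concrete, here is a minimal sketch, in Python, of a normalized data model and an access-level check. It is our own illustration, not Intersog’s actual code; the names (AccessLevel, Shipment, can_edit) are invented for the example.

```python
from dataclasses import dataclass
from enum import IntEnum


class AccessLevel(IntEnum):
    VIEWER = 1    # can read supply data
    SUPPLIER = 2  # can update their own shipments
    ADMIN = 3     # can change any record


@dataclass(frozen=True)
class Shipment:
    # One authoritative record per shipment, referenced by ID, instead of
    # the same facts re-entered across many spreadsheets.
    shipment_id: str
    supplier_id: str
    part_number: str
    quantity: int


def can_edit(user_level: AccessLevel, user_supplier_id: str, record: Shipment) -> bool:
    """Admins may edit anything; suppliers may edit only their own shipments."""
    if user_level >= AccessLevel.ADMIN:
        return True
    return user_level >= AccessLevel.SUPPLIER and record.supplier_id == user_supplier_id
```

Keying every shipment to a single record is what removes the duplicated manual entry described above, and the access-level check is the kind of differentiation the permission system provided.
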
All of these solutions took us six months to implement, which is rather fast for a project of this scale. Such time efficiency was possible only thanks to the huge amount of work we had done throughout the research and design stages. The lesson to draw from these software development life cycle phases is that the speed of development depends heavily on how well you prepare.

Depending on the scale of the project, you might be looking at different timelines for the development stage. Small-scale projects can be finished in a matter of weeks, while some of the most complicated solutions might take more than a year to finish. In the case of the Mitsubishi project, it was essential for the client to get things done faster. Rushing is never a good idea, but you can always shorten your development timeline by doing all the preparation work properly and having a clear understanding of what needs to be done and in what order.

Quality Assurance

Quality assurance is as vital to your project’s success as any other stage; this is where you test your code, assess the quality of the solutions, and make sure everything runs smoothly and according to plan. Testing helps you identify bugs and defects in your code and eliminate them in a timely manner. Here at Intersog, we prefer to test our software regularly throughout the development process. This approach helps us identify issues on the go and fix them before they snowball into serious problems.

In short, quality assurance is a set of procedures aimed at eliminating bugs and optimizing how the software functions. Here at Intersog, we run both manual and automated tests so that we can be truly confident in the quality of the solutions we develop for our clients; a minimal automated check is sketched below. With Mitsubishi, we ran tests throughout the development process and after the development stage was over. It took us an additional month to test all the solutions we had developed, after which we were ready for the implementation stage.
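
As an illustration of what an automated check can look like, here is a small, self-contained Python test. The deduplicate function and the record format are hypothetical, invented for the example rather than taken from the project’s real test suite.

```python
import unittest


def deduplicate(entries):
    """Drop repeated (supplier, part, date) rows, keeping the first occurrence."""
    seen, result = set(), []
    for entry in entries:
        key = (entry["supplier"], entry["part"], entry["date"])
        if key not in seen:
            seen.add(key)
            result.append(entry)
    return result


class DeduplicateTest(unittest.TestCase):
    def test_duplicate_rows_are_removed(self):
        rows = [
            {"supplier": "S1", "part": "P-100", "date": "2020-01-01"},
            {"supplier": "S1", "part": "P-100", "date": "2020-01-01"},  # exact duplicate
        ]
        self.assertEqual(len(deduplicate(rows)), 1)


if __name__ == "__main__":
    unittest.main()
```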


Integration and Support

Following the testing, and once we are sure all the solutions work flawlessly, the development team gets to the implementation stage. Also known as the integration stage, this is where we integrate the new solution into the client’s pre-existing ecosystem. Basically, you are putting new gears into a complex mechanism that has been functioning for many years, and it is essential to make sure all of those gears fit perfectly. 

With such a complex system as the one employed by Mitsubishi and a vast amount of accumulated data, our developers had to be incredibly precise not to lose anything. We are talking about surgical precision because Mitsubishi’s suppliers amassed thousands upon thousands of spreadsheets full of critical data on supplies, material and product deliveries, accounting data, and more. All of that had to be carefully integrated with the new automated solution. 

After two months, the solutions were fully integrated with Mitsubishi’s existing ecosystem. Intersog usually backs clients up by offering support and maintenance services to ensure the system functions flawlessly over time, but in this case our client was fully capable of maintaining the new system on their own. As noted, Mitsubishi has its own development team able to take care of system maintenance, so our cooperation concluded after the integration stage.

Final Thoughts and Takeaways

A software development life cycle depends on many factors that are unique to each company. In the case of Mitsubishi, we managed to get things done in just under a year, which is rather fast for a project of such an immense scale. Different projects have different life cycles, depending on the scale, the client’s ability to explain their needs, and the development team’s ability to understand those needs, gather all the necessary information, design the appropriate set of solutions, develop them, ensure their quality, and implement them quickly.


A Case Study of the Application of the Systems Development Life Cycle (SDLC) in 21st Century Health Care: Something Old, Something New?


The systems development life cycle (SDLC), while undergoing numerous changes to its name and related components over the years, has remained a steadfast and reliable approach to software development. Although there is some debate as to the appropriate number of steps, and the naming conventions thereof, nonetheless it is a tried-and-true methodology that has withstood the test of time. This paper discusses the application of the SDLC in a 21st century health care environment. Specifically, it was utilized for the procurement of a software package designed particularly for the Home Health component of a regional hospital care facility. We found that the methodology is still as useful today as it ever was. By following the stages of the SDLC, an effective software product was identified, selected, and implemented in a real-world environment. Lessons learned from the project, and implications for practice, research, and pedagogy, are offered. Insights from this study can be applied as a pedagogical tool in a variety of classroom environments and curricula including, but not limited to, the systems analysis and design course as well as the core information systems (IS) class. It can also be used as a case study in an upper-division or graduate course describing the implementation of the SDLC in practice.

INTRODUCTION

The systems development life cycle, in its variant forms, remains one of the oldest and still most widely used methods of software development and acquisition in the information technology (IT) arena. While it has evolved over the years in response to ever-changing scenarios and paradigm shifts pertaining to the building or acquiring of software, its central tenets are as applicable today as they ever were. Life-cycle stages have gone through iterations of different names and numbers of steps, but at its core the SDLC is resilient in its tried-and-true deployment in business, industry, and government. In fact, the SDLC has been called one of the two dominant systems development methodologies today, along with prototyping (Piccoli, 2012). Thus, learning about the SDLC remains important to the students of today as well as tomorrow.

This paper describes the use of the SDLC in a real-world health care setting involving a principal component of a regional hospital care facility. The paper can be used as a pedagogical tool in a systems analysis and design course, or in an upper-division or graduate course as a case study of the implementation of the SDLC in practice. First, a review of the SDLC is provided, followed by a description of the case study environment. Next, the application of the methodology is described in detail. Following that, inferences and observations from the project are presented, along with lessons learned. Finally, the paper concludes with implications for the three areas of research, practice, and pedagogy, as well as suggestions for future research.

The SDLC has been a part of the IT community since the inception of the modern digital computer. A course in Systems Analysis and Design is requisite in most Management Information Systems programs (Topi, Valacich, Wright, Kaiser, Nunamaker, Sipior, and de Vreede, 2010). While such classes offer an overview of many different means of developing or acquiring software (e.g., prototyping, extreme programming, rapid application development (RAD), joint application development (JAD), etc.), at their heart such programs still devote a considerable amount of time to the SDLC, as they should. As this paper will show, following the steps and stages of the methodology is still a valid method of ensuring the successful deployment of software. While the SDLC, and systems analysis and design in general, has evolved over the years, at its heart it remains a robust methodology for developing software and systems.

Early treatises of the SDLC promoted the rigorous delineation of necessary steps to follow for any kind of software project. The Waterfall Model (Boehm, 1976) is one of the most well-known forms. In this classic representation, the methodology involves seven sequential steps: 1) System Requirements and Validation; 2) Software Requirements and Validation; 3) Preliminary Design and Validation; 4) Detailed Design and Validation; 5) Code, Debug, Deployment, and Test; 6) Test, Preoperations, Validation Test; and 7) Operations, Maintenance, Revalidation. In the original description of the Boehm-Waterfall software engineering methodology, there is an interactive backstep between each stage. Thus the Boehm-Waterfall is a combination of a sequential methodology with an interactive backstep (Burback, 2004).

Other early works were patterned after the Waterfall Model, with varying numbers of steps and not-markedly-different names for each stage. For example, Gore and Stubbe (1983) advocated a four-step approach consisting of the study phase, the design phase, the development phase, and the operation phase (p. 25). Martin and McClure (1988) described it as a multistep process consisting of five basic sequential phases: analysis, design, code, test, and maintain (p. 18). Another widely used text (Whitten, Bentley, and Ho, 1986) during the 1980s advocated an eight-step method. Beginning with 1) Survey the Situation, it was followed by 2) Study Current System; 3) Determine User Requirements; 4) Evaluate Alternative Solutions; 5) Design New System; 6) Select New Computer Equipment and Software; 7) Construct New System; and 8) Deliver New System.

Almost two decades later, a book by largely the same set of authors (Whitten, Bentley, and Dittman, 2004) also advocated an eight-step series of phases, although the names of the stages changed somewhat (albeit not significantly). The methodology proceeded through the steps of Scope definition, Problem analysis, Requirements analysis, Logical design, Decision analysis, Physical design and integration, Construction and testing, and ending with Installation and delivery (p. 89). It is interesting to note that nearly 20 years later, the naming conventions used in the newer text are almost synonymous with those in the older work. The Whitten and Bentley (2008) text, in its present form, still breaks the process into eight stages. While there is no consensus on the naming (or number) of stages (e.g., many systems analysis and design textbooks advocate their own nomenclature (cf. Whitten, Bentley, and Barlow (1994); O’Brien (1993); Taggart and Silbey (1986)), McMurtrey (1997) reviewed the various forms of the life cycle in his dissertation work and came up with a generic SDLC involving the phases of Analysis, Design, Coding, Testing, Implementation, and Maintenance.

Even one of the most current and popular systems analysis and design textbooks (Kendall and Kendall, 2011) does not depart from tradition, emphasizing that the SDLC is still primarily comprised of seven phases. Although not immune to criticism, Hoffer, George, and Valacich (2011) believe that the view of systems analysis and design taking place in a cycle continues to be pervasive and true (p. 24). Thus, while the SDLC has evolved over the years under the guise of different combinations of naming conventions and numbers of steps or stages, it remains true to form as a well-tested methodology for software development and acquisition. We now turn our attention to how it was utilized in a present-day health care setting.

Case Study Setting

The present investigation regards the selection of a software package by a medium-size regional hospital for use in the Home Health segment of its organization. The hospital (referred to in this monograph by the fictitious name General Hospital) is located in the central portion of a southern state in the USA, within 30 minutes of the state capital. Its constituents reside in the largest SMSA (standard metropolitan statistical area) in the state and consist of rural, suburban, and city residents. The 149-bed facility is a state-of-the-art institution, as 91% of its 23 quality measures are better than the national average (“Where to Find Care”, 2010). Services offered include Emergency Department, Hospice, Intensive Care Unit (ICU), Obstetrics, Open Heart Surgery, and Pediatrics. Additional components of General Hospital consist of an Imaging Center, a Rehabilitation Hospital, four Primary Care Clinics, a Health and Fitness Center (one of the largest in the nation with more than 70,000 square feet and 7,000 members), a Wound Healing Center, regional Therapy Centers, and Home Care (the focal point of this study).

There are more than 120 physicians on the active medical staff, over 1,400 employees, and in excess of 100 volunteers (“General Hospital”, 2010). In short, it is representative of many similar patient care facilities around the nation and the world. As such, it provides a rich environment for the investigation of using the SDLC in a 21st century health care institution.

Home Health and Study Overview

Home Health, or Home Care, is the portion of health care that is carried out at the patient’s home or residence. It is a participatory arrangement that eliminates the need for constant trips to the hospital for routine procedures. For example, patients take their own blood pressure (or heart rate, glucose level, etc.) using a device hooked up near their bed at home. The results are transmitted to the hospital (or in this case, the Home Health facility near General Hospital) electronically and are immediately processed, inspected, and monitored by attending staff.

In addition, there is a Lifeline feature available to elderly or other homebound individuals. The unit includes a button worn on a necklace or bracelet that the patient can push should they need assistance (“Home Health”, 2010). Periodically, clinicians (e.g., nurses, physical therapists, etc.) will visit the patient in their home to monitor their progress and perform routine inspections and maintenance on the technology.

The author was approached by his neighbor, a retired accounting faculty member who is a volunteer at General Hospital. He had been asked by hospital administration to investigate the acquisition, and eventual purchase, of software to facilitate and help coordinate the Home Health care portion of their business. After an initial meeting to offer help and familiarize ourselves with the task at hand, we met with staff (i.e., both management and the end-users) at the Home Health facility to begin our research.

THE SDLC IN ACTION

The author, having taught the SAD course many times, recognized from the outset that this particular project would indeed follow the stages of the traditional SDLC. While we would not be responsible for some of the steps (e.g., testing and training of staff), we would follow many of the others in lockstep fashion; thus, the task was an adaptation of the SDLC (i.e., a software acquisition project) as opposed to a software development project involving all the stages. For students, it is important to see that the core ideas of the SDLC can be adapted to fit a “buy” (rather than “make”) situation: the systematic approach carries over to a non-development context, which makes the knowledge all the more valuable. In this project, we used a modified version of the SDLC that corresponds to the form advocated by McMurtrey (1997). Consequently, we proceed in this monograph in the same fashion that the project was presented to us: step by step, in line with the SDLC.

Problem Definition

The first step in the Systems Development Life Cycle is the Problem Definition component of the Analysis phase. One would be hard-pressed to offer a solution to a problem that was not fully defined. The Home Health portion of General Hospital had been reorganized as a separate, subsidiary unit located near the main hospital in its own standalone facility. Furthermore, the software they were using was at least seven years old and could simply not keep up with all the changes in billing practices and Medicare requirements and payments. The current system was not scalable to the growing needs and transformation within the environment. Thus, in addition to specific desirable criteria of the chosen software (described in the following section), our explicit purpose in helping General was twofold: 1) to modernize their operations with current technology; and 2) to provide the best patient care available to their clients in the Home Health arena.

A precursor to the Analysis stage, often mentioned in textbooks (e.g., Valacich, George, and Hoffer, 2009) and of great importance in a practical setting, is the Feasibility Study. This preface to the beginning of the Analysis phase is oftentimes broken down into three areas of feasibility:

  • Technical (Do we have the necessary resources and infrastructure to support the software if it is acquired?)
  • Economic (Do we have the financial resources to pay for it, including support and maintenance?)
  • Operational (Do we have properly trained individuals who can operate and use the software?).

Fortunately, these questions had all been answered in the affirmative before we joined the project. The Director of Information Technology at General Hospital budgeted $250,000 for procurement (thus meeting the criteria for economic feasibility); General’s IT infrastructure was more than adequate and up to date with regard to supporting the new software (technical feasibility); and support staff and potential end users were well trained and enthusiastic about adopting the new technology (operational feasibility). Given that the Feasibility Study portion of the SDLC was complete, we endeavored forthwith into the project details.

Requirements Analysis

In the Requirements Analysis portion of the Analysis stage, great care is taken to ensure that the proposed system meets the objectives put forth by management. To that end, we met with the various stakeholders (i.e., the Director of the Home Care facility and potential end-users) to map out the requirements needed from the new system. Copious notes were taken at these meetings, and a conscientious effort was made to synthesize our recollections. Afterwards, the requirements were collated into a spreadsheet for ease of inspection (Exhibit 1). Several key requirements are described here:

MEDITECH Compatible: This was the first, and one of the most important requirements, at least from a technological viewpoint. MEDITECH (Medical Information Technology, Inc.) has been a leading software vendor in the health care informatics industry for 40 years (“About Meditech”, 2009). It is the flagship product used at General Hospital and is described as the number one health care vendor in the United States with approximately 25% market share (“International News”, 2006). All Meditech platforms are certified EMR/EHR systems (“Meditech News”, 2012). “With an Electronic Health Record, a patient's record follows her electronically. From the physician's office, to the hospital, to her home-based care, and to any other place she receives health services, and she and her doctors can access all of this information and communicate with a smartphone or computer” (“The New Meditech”, 2012). Because of its strategic importance to General, and its overall large footprint in the entire infrastructure and day-to-day operations, it was imperative that the new software would be Meditech-compatible.

Point of Care Documentation: Electronic medical record (EMR) point-of-care (POC) documentation in patients' rooms is a recent shift in technology use in hospitals (Duffy, Kharasch, Morris, and Du, 2010). POC documentation reduces inefficiencies, decreases the probability of errors, promotes information transfer, and encourages the caregiver to be at the bedside or, in the case of home care, on the receiving end of the transmission.

OASIS Analyzer: OASIS (Outcome and Assessment Information Set) is a system developed by the Centers for Medicare & Medicaid Services (CMS), an agency of the U.S. Department of Health and Human Services, as part of the required home care assessment for reimbursing health care providers. OASIS combines 20 data elements to measure case-mix across three domains: clinical severity, functional status, and utilization factors (“Medical Dictionary”, 2010). This module allows staff to work more intelligently, letting them easily analyze outcomes data in an effort to move toward improved clinical and financial results (“Butte Home Health”, 2009). Given its strategic link to Medicare and Medicaid reimbursement, OASIS Analyzer was a “must have” feature of the new software.

Physician Portal: The chosen software package must have an entryway for the attending, resident, or primary caregiver physician to interact with the system in a seamless fashion. Such a gateway will facilitate efficient patient care by enabling the physician to have immediate access to critical patient data and history.

Other “Must Haves” of the New Software: Special billing and accounts receivable modules tailored to Home Health; real-time reports and built-in digital dashboards to provide business intelligence (e.g., OASIS Analyzer); schedule optimization; and last, but certainly not least, the system must be user friendly.

Desirable, But Not Absolutely Necessary Features: Security (advanced, beyond the normal user identification and password type); trial period available (i.e., could General try it out for a limited time before fully committing to the contract?).

Other Items of Interest During the Analysis Phase: Several other issues were important in this phase:

  • Is the proposed solution a Home Health-only product, or is it part of a larger, perhaps enterprise-wide system?
  • Are there other modules available (e.g., financial, clinical, hospice; applications to synchronize the system with a PDA (Personal Digital Assistant) or smart phone)?
  • Is there a web demo available to view online; or, even better, is there an opportunity to participate in a live, hands-on demonstration of the software under real or simulated conditions?

We also made note of other observations that might be helpful in selecting final candidates to be considered for site visits. To gain insight into the experience, dependability, and professionalism of the vendors, we also kept track of information such as: experience (i.e., number of years in business); number of clients or customers; revenues; and helpfulness (return e-mails and/or phone calls within a timely manner or at all).

Finally, some anecdotal evidence was gathered to help us evaluate each vendor as a potential finalist. For instance, Vendor A had an Implementation/Installation Team to assist with that stage of the software deployment; they also maintained a Knowledge Base (database) of Use Cases/List Cases describing the most frequently occurring problems or pitfalls. Vendor C sponsored an annual User Conference where users could share experiences with using the product, as well as provide feedback to be incorporated into future releases. To that end, Vendor C also had a user representative on their Product Advisory Board. Vendor E offered a “cloud computing” choice, in that the product was hosted in their data center. (A potential buyer did not have to choose the web-enabled solution.) Vendor E’s offering was part of an enterprise solution, and could be synchronized with a PDA or smart phone.

As previously noted, for this particular case study of software selection, the researchers did not have to proceed through each step of the SDLC since the software products already existed. Thus, the Design stage of the SDLC had already been carried out by the vendors. In a similar vein, the coding, testing, and debugging of program modules had also been performed by each vendor candidate. Thus, after painstakingly analyzing all the wares, features, pros and cons, and costs and benefits associated with each product (a simple scoring sketch of this kind of comparison appears below), we were ready to make a choice: we would whittle our list of five potential vendors down to the two that we felt met our needs and showed the most interest and promise.
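
For readers who want a concrete picture of such a comparison, here is a hypothetical weighted scoring matrix in Python, in the spirit of the requirements spreadsheet (Exhibit 1). The criteria weights, vendor names, and scores are invented for illustration, not the study’s actual data.

```python
# Weights reflect the relative importance of each requirement (summing to 1.0).
CRITERIA_WEIGHTS = {
    "meditech_compatible": 0.30,
    "poc_documentation":   0.20,
    "oasis_analyzer":      0.20,
    "physician_portal":    0.15,
    "user_friendly":       0.15,
}

# Each vendor is scored 1 (poor) to 5 (excellent) on each criterion.
vendor_scores = {
    "Vendor A": {"meditech_compatible": 4, "poc_documentation": 3,
                 "oasis_analyzer": 4, "physician_portal": 2, "user_friendly": 3},
    "Vendor B": {"meditech_compatible": 2, "poc_documentation": 5,
                 "oasis_analyzer": 5, "physician_portal": 4, "user_friendly": 5},
}


def weighted_total(scores: dict) -> float:
    """Sum of criterion scores, each multiplied by its weight."""
    return sum(CRITERIA_WEIGHTS[criterion] * score for criterion, score in scores.items())


# Rank vendors from best to worst by weighted total.
for vendor, scores in sorted(vendor_scores.items(),
                             key=lambda item: weighted_total(item[1]), reverse=True):
    print(f"{vendor}: {weighted_total(scores):.2f}")
```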

The principal investigators arranged another meeting with the primary stakeholders of General Hospital’s Home Health division. After all, although we had done the research, they were the ones who would be using the system for the foreseeable future. As such, it only made sense that they be heavily involved. This is in line with what is put forth in systems analysis and design textbooks: user involvement is a key component of system success. Having carefully reviewed our research notes, in addition to the various brochures, websites, proposals, communications, and related documents from each of our shortlist of five vendors, together as a group we made our decision. We would invite Vendor B for a site visit and demonstration.

Vendor B was very professional, courteous, prompt, and conscientious during their visit. One thing that greatly supported their case was that their primary business model focused on Home Health software. It was, and still is, their core competency. In contrast, one other vendor (not on our original short list of five) came and made a very polished presentation, in the words of the Director. However, this company was a multi-billion dollar concern, of which Home Health software was only a small part. Thus the choice was made to go with Vendor B.

Ironically, this seller’s product was not Meditech compatible, which was one of the most important criteria for selection. However, through the use of a middleware company that had considerable experience in designing interfaces to be used in a Meditech environment, a suitable arrangement was made and a customized solution was developed and put into use. The middleware vendor had done business with General before and, therefore, was familiar with their needs.

Implementation

As is taught in SAD classes, the implementation stage of the SDLC usually follows one of four main forms. These are, according to Valacich, George, and Hoffer (2009): 1) Direct Installation (sometimes also referred to as Direct Cutover, Abrupt, or Cold Turkey method) where the old system is simply removed and replaced with the new software, perhaps over the weekend; 2) Parallel Installation, when the old and new systems are run side-by-side until at some point (the “go live” date) use of the former software is eliminated; 3) Single Location Installation (or the Pilot approach) involves using one site (or several sites if the software rollout is to be nationwide or international involving hundreds of locations) as beta or test installations to identify any bugs or usage problems before committing to the new software on a large scale; and 4) Phased Installation, which is the process of integrating segments of program modules into stages of implementation, ensuring that each block works before the whole software product is implemented in its entirety.

The Home Care unit of General Hospital utilized the Parallel Installation method for approximately 60 days before the “go live” date. Clinicians would “double enter” patient records and admissions data into both the old and new systems to ensure that the new database was populated, while at the same time maintaining patient care with the former product until its disposal (a sketch of the kind of cross-check a parallel run makes possible follows). The Director of the Home Care facility noted that this process took longer than anticipated but was well worth it in the long run. Once the “go live” date was reached, the new system performed quite well, with a minimal amount of disruption.
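
A chief benefit of double entry is that the two systems can be reconciled record by record before cutover. The sketch below, in Python, shows the idea; the record format and field names are hypothetical, and the actual systems involved were considerably more complex.

```python
def reconcile(old_records: dict, new_records: dict) -> list:
    """Compare records keyed by patient ID in the old and new systems,
    returning human-readable discrepancies for staff to review."""
    issues = []
    for patient_id, old in old_records.items():
        new = new_records.get(patient_id)
        if new is None:
            issues.append(f"{patient_id}: missing from new system")
        elif old != new:
            issues.append(f"{patient_id}: mismatch {old} != {new}")
    for patient_id in new_records.keys() - old_records.keys():
        issues.append(f"{patient_id}: present only in new system")
    return issues


# Example: one mismatched admission date surfaces for review before "go live".
old = {"P001": {"admit": "2010-03-01"}, "P002": {"admit": "2010-03-04"}}
new = {"P001": {"admit": "2010-03-01"}, "P002": {"admit": "2010-03-05"}}
print(reconcile(old, new))
```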

Training of staff commenced two weeks before the “go live” date. Of the approximately 25 users, half were trained the first week and the rest the next. Clinicians had to perform a live visit with one of their patients using the new system. Thus they would already have experience with it in a hands-on environment before switching to the new product and committing to it on a full-time basis.

It is again worth noting that the implementation method, Parallel Installation, follows from the SDLC and is what is taught in modern-day SAD courses. Thus, it was satisfying to the researchers that textbook concepts were being utilized in “real world” situations. It also reinforced that teaching the SDLC was in line with current curriculum guidelines and should continue.

Maintenance/Support

Software upgrades (called “code loads” by the vendor) are performed every six weeks. The Director reported that these advancements were not disruptive to everyday operations. Such upgrades are especially important in the health care industry, as changes to Medicare and billing practices are common occurrences. The Director also noted that all end users, including nurses, physical therapists, physicians, and other staff, were very happy with the new system and, collectively, had no major complaints about it. General Hospital expects to use the software for the foreseeable future, with no plans to have to embark on another project of this magnitude for quite some time.

Many inferences and observations were gleaned by both the researchers and hospital staff during the course of the investigation. First, we all learned that we must “do our homework”; that is, much research and analysis had to be performed to get up to speed on the project. For instance, while the principal investigators both had doctoral degrees in business administration, and one of them (the author) had taught the systems analysis and design course for over ten years at two different institutions, neither of us had any practical experience in the Home Health arena. Thus, we had to familiarize ourselves with the current environment as well as grasp an understanding of the criteria set forth by the stakeholders (both end-users and management). This was an important lesson learned, because we teach our students (in the SAD class) that they must not only familiarize themselves with the application at hand, but they must also interact with the users. Much research has been conducted in the area of user involvement and its relationship to system success (e.g., Ives and Olson, 1984; Baroudi, Olson, and Ives, 1986; Tait and Vessey, 1988). Therefore it was satisfying, from a pedagogical standpoint, to know that concepts taught in a classroom setting were being utilized in a real-world environment.

It was also very enlightening, from the standpoint of business school professors, to see how the core functional areas of study (e.g., marketing, management, accounting, etc., not to mention MIS) were also highly integral to the project at hand. During our research on the various vendor companies, we were subjected to a myriad of different marketing campaigns and promotional brochures, which typically touted their wares as the “best” on the market. Key, integral components (such as billing, scheduling, business intelligence, patient care, electronic medical records (EMR), etc.) that are critical success factors in almost any business were promoted, and we were made keenly aware of their strategic importance. Again, this was very rewarding from the point of view of business school professors: we were very pleased that our graduates and students are learning all of these concepts (and more) as core competencies in the curriculum.

Finally, probably the most positive outcome from the project was that patient care will be improved as a result of this endeavor. Following that, it was enlightening that an adaptation of the SDLC was applied to a healthcare setting and it achieved positive results. This showed that the SDLC, in part or in whole, is alive and well and is an important part of the MIS world in both practice and academia. In addition, key outcomes regarding each were identified and are elaborated upon in the following section.

IMPLICATIONS FOR PRACTICE, RESEARCH AND PEDAGOGY

Implications for Practice

This project, and case study, was an application of pedagogy to a real-world systems analysis project. As such, it has implications for practice. First, it showed that concepts learned in a classroom environment (such as the SDLC in the systems analysis and design course) can be effectively applied in a business (or, in our case, a health care) environment. It was very satisfying for us, as business school professors, to see instructional topics successfully employed to solve a real-world problem. For practitioners, such as any organization looking to acquire a software package, we hope that we have shown that if one applies due diligence to the research effort, positive outcomes can be achieved. Our findings might also help practitioners appreciate that tried-and-true methods, such as the SDLC, are applicable to projects of a similar nature, and not just academic exercises to fulfill curriculum requirements. We find this among the most gratifying implications.

Implications for Research

This project could be used as the beginning of a longitudinal study into the life cycle of the Home Health software product selected. It is customary to note that maintenance can consume half of the IS budget when it comes to software, especially large-scale systems (Dorfman and Thayer, 1997). It would be interesting to track this project, in real time, to see if that is indeed the case. Furthermore, an often-neglected phase of the SDLC is the stage at the very end: disposal of the system. By following the present study to the end, it would be enlightening (from all three viewpoints of research, practice, and pedagogy) to see what happens at the end of the software’s useful life. Additional future research might investigate the utilization of the SDLC in different contexts, or even in other settings within the healthcare arena.

Implications for Pedagogy

Insights for the SAD Course

After learning so much about real-world software acquisition throughout this voluntary consulting project, the author has utilized it in classroom settings. First, the obvious connection with the SAD course was made. To that end, in addition to another semester-long project they work on in a group setting, the students pick an application domain (such as a veterinary clinic, a dentist’s office, a movie rental store, etc.) and perform a research effort not unlike the one described in this monograph. Afterwards, a presentation is made to the class whereby three to five candidate vendors are shown, along with the associated criteria used, and then one is chosen. Reasons are given for the selection and additional questions are asked, if necessary. This exercise gives the students a real-world look at application software through the lens of the SDLC.

While some SAD professors are able to engage local businesses to provide more of a “real-world” application by allowing students to literally develop a system, such an endeavor was not possible at the time of this study. The benefits of such an approach are, of course, that it provides students with “real world” experience in applying concepts learned in school to practical uses. The drawback is that it requires a substantial commitment from the business, and oftentimes the proprietors pull back from the project if they get too busy with other things. Thus, the decision was made to allow students to pick an application domain, under the assumption that they had been contracted by the owners to acquire a system for them.

Such an exercise enables students to engage in what Houghton and Ruth (2010) call “deep learning”. They note that such an approach is much more appropriate when the learning material presented involves going beyond simple facts and into what lies below the surface (p. 91). Indeed, this particular exercise for the SAD students was not rote memorization of facts at a surface level; it forced them to perform critical thinking and analysis at a much greater depth of understanding. Although the students were not able to complete a “real world” project to the extent that other educators have reported (e.g., Grant, Malloy, Murphy, Foreman, and Robinson (2010)), the experience did allow students to tackle a contemporary project and simulate solving it with real-world solutions. This gave them a much greater appreciation for the task of procuring software than just reading about it in textbooks. The educational benefits of using real-world projects are well established both in the United States (Grant et al., 2010) and internationally (Magboo and Magboo, 2003).

From an IS curriculum standpoint, this form of exercise by SAD students helps bridge the well-known gap between theory and practice (Andriole, 2006). As was shown in this monograph, the SDLC is a theory that has widespread application in practice. The project performed by students in the SAD class reinforces what Parker, LeRouge, and Trimmer (2005) described in their paper on alternative instructional strategies in an IS curriculum. That is, SAD is a core component of an education in information systems, and there is a plethora of different ways to deliver a rich experience, including the one described here.

Insights for IS Courses, SAD and non-SAD

Other insights gained, by the SAD students as well as the core MIS course, have to do with what the author teaches during the requisite chapter on software. In class, I present this topic as “the software dilemma”. This description is tantamount to the recognition that when acquiring software, businesses must make one of three choices (in general). The options are “make” versus “buy” versus “outsource” when it comes to acquiring software. (There is also a hybrid approach that involves customizing purchased software.)

Briefly explained, the “make” option presupposes that the organization has an IT staff that can do their own, custom, programming. The “buy” alternative relates to what was described in this paper, in that General Hospital did not have the resources to devote to developing software for their Home Health segment, and as such enlisted the researchers to assist in that endeavor. The “outsource” choice alludes to several different options available, under this umbrella, on the modern-day IT landscape. The decision to outsource could range from an application service provider (ASP) delivering the solution over the internet (or the “cloud”) to complete transfer of the IT operation to a hosting provider or even a server co-location vendor.

Thus, a project like this one could be used in the core MIS course to further illustrate problems and potential pitfalls faced by businesses, small and large, when it comes to software acquisition. Instructors could use the features of this case study to focus on whatever portion of it they thought best: project management, budgeting, personnel requirements, marketing, etc. It could even be used in a marketing class to investigate the ways in which vendors, offering similar solutions to standard problems, differentiate themselves through various marketing channels and strategies.

Furthermore, the case study is ripe for discussion pertaining to a plethora of business school topics, from economics and accounting to customer relationship management. The case is especially rich fodder for the MIS curriculum: not only systems analysis and design, but programming and database classes can find useful, practical, real-world issues surrounding this case that can be used as “teaching tools” to the students.

Finally, a case study like this one could even be used in an operations management, or project management, setting. The discovery of issues, such as those raised in this paper, could be fruitful research for both undergraduate and graduate students alike. A team project, along with a group presentation as the finale, would also give students much-needed experience in public speaking and would help prepare them for the boardrooms of tomorrow.

Two business school professors, one an MIS scholar and the other retired from the accounting faculty, were called upon by a local hospital to assist with the procurement of software for the Home Health area. These academics were up to the challenge, and pleasantly assisted the hospital in their quest. While both researchers hold terminal degrees, each learned quite a bit from the application of principles taught in the classroom (e.g., the SDLC) to the complexities surrounding real-world utilization of them. Great insights were gained, in a variety of areas, and have since been shown as relevant to future practitioners (i.e., students) in the business world. It is hoped that others, in both academe and commerce, will benefit from the results and salient observations from this study.

  • About Meditech (2009) Retrieved on May 19, 2010 from http://www.meditech.com/AboutMeditech/homepage.htm
  • Andriole, S. (2006) Business Technology Education in the Early 21st Century: The Ongoing Quest for Relevance. Journal of Information Technology Education, 5, 1-12.
  • Baroudi, J., Olson, M., and Ives, B. (1986, March) An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction. Communications of the ACM, 29, 3, 232-238. http://dx.doi.org/10.1145/5666.5669
  • Boehm, B. W. (1976, December) Software Engineering. IEEE Transactions on Computers, C-25, 1226-1241. http://dx.doi.org/10.1109/TC.1976.1674590
  • Burback, R. L. (2004) The Boehm-Waterfall Methodology. Retrieved May 20, 2010 from http://infolab.stanford.edu/~burback/watersluice/node52.html
  • Butte Home Health & Hospice Works Smarter with CareVoyant Healthcare Intelligence (2009) Retrieved May 21, 2010 from http://www.carevoyant.com/cs_butte.html?fn=c_cs
  • Dorfman, M. and Thayer, R. M. (eds.) (1997) Software Engineering, IEEE Computer Society Press, Los Alamitos, CA.
  • Duffy, W. J., Kharasch, M. S., Morris, J., and Du, H. (2010, January/March) Point of Care Documentation Impact on the Nurse-Patient Interaction. Nursing Administration Quarterly, 34, 1, E1-E10.
  • “General Hospital” (2010) Conway Regional Health System. Retrieved on May 18, 2010 from http://www.conwayregional.org/body.cfm?id=9
  • Gore, M. and Stubbe, J. (1983) Elements of Systems Analysis, 3rd Edition, Wm. C. Brown Company Publishers, Dubuque, IA.
  • Grant, D. M., Malloy, A. D., Murphy, M. C., Foreman, J., and Robinson, R. A. (2010) Real World Project: Integrating the Classroom, External Business Partnerships and Professional Organizations. Journal of Information Technology Education, 9, IIP 168-196.
  • Hoffer, J. A., George, J. F., and Valacich, J. S. (2011) Modern Systems Analysis and Design, Prentice Hall, Boston.
  • “Home Health” (2010) Conway Regional Health System. Retrieved on May 18, 2010 from http://www.conwayregional.org/body.cfm?id=31
  • Houghton, L. and Ruth, A. (2010) Making Information Systems Less Scrugged: Reflecting on the Processes of Change in Teaching and Learning. Journal of Information Technology Education, 9, IIP 91-102.
  • “International News” (2006) Retrieved on May 19, 2010 from http://www.meditech.com/aboutmeditech/pages/newsinternational.htm
  • Ives, B. and Olson, M. (1984, May) User Involvement and MIS Success: A Review of Research. Management Science, 30, 5, 586-603. http://dx.doi.org/10.1287/mnsc.30.5.586
  • Kendall, K. and Kendall, J. E. (2011) Systems Analysis and Design, 8/E, Prentice Hall, Englewood Cliffs, NJ.
  • Magboo, S. A., and Magboo, V. P. C. (2003) Assignment of Real-World Projects: An Economical Method of Building Applications for a University and an Effective Way to Enhance Education of the Students. Journal of Information Technology Education, 2, 29-39.
  • Martin, J. and McClure, C. (1988) Structured Techniques: The Basis for CASE (Revised Edition), Prentice Hall, Englewood Cliffs, NJ.
  • McMurtrey, M. E. (1997) Determinants of Job Satisfaction Among Systems Professionals: An Empirical Study of the Impact of CASE Tool Usage and Career Orientations, Unpublished doctoral dissertation, University of South Carolina, Columbia, SC.
  • “Medical Dictionary” (2010) The Free Dictionary. Retrieved May 21, 2010 from http://medical-dictionary.thefreedictionary.com/OASIS
  • “Meditech News” (2012) Retrieved April 1, 2012 from http://www.meditech.com/AboutMeditech/pages/newscertificationupdate0111.htm
  • O’Brien, J. A. (1993) Management Information Systems: A Managerial End User Perspective, Irwin, Homewood, IL.
  • Parker, K. R., LeRouge, C., and Trimmer, K. (2005) Alternative Instructional Strategies in an IS Curriculum. Journal of Information Technology Education, 4, 43-60.
  • Piccoli, G. (2012) Information Systems for Managers: Text and Cases, John Wiley & Sons, Inc., Hoboken, NJ.
  • Taggart, W. M. and Silbey, V. (1986) Information Systems: People and Computers in Organizations, Allyn and Bacon, Inc., Boston.
  • Tait, P. and Vessey, I. (1988) The Effect of User Involvement on System Success: A Contingency Approach. MIS Quarterly, 12, 1, 91-108. http://dx.doi.org/10.2307/248809
  • “The New Meditech” (2012) Retrieved April 1, 2012 from http://www.meditech.com/newmeditech/homepage.htm
  • Topi, H., Valacich, J., Wright, R., Kaiser, K., Nunamaker Jr., J., Sipior, J., and de Vreede, G. J. (2010) IS 2010: Curriculum Guidelines for Undergraduate Degree Programs in Information Systems. Communications of the Association for Information Systems, 26, 1, 1-88.
  • Valacich, J. S., George, J. F., and Hoffer, J. A. (2009) Essentials of Systems Analysis and Design, 4th Ed., Prentice Hall, Upper Saddle River, NJ.
  • “Where to Find Care” (2010) Retrieved on May 18, 2010 from http://www.wheretofindcare.com/Hospitals/Arkansas-AR/CONWAY/040029/CONWAY-REGIONAL-MEDICAL-CENTER.aspx
  • Whitten, J. L. and Bentley, L. D. (2008) Introduction to Systems Analysis and Design, 1st Ed., McGraw-Hill, Boston.
  • Whitten, J. L., Bentley, L. D., and Barlow, V. M. (1994) Systems Analysis and Design Methods, 3rd Ed., Richard D. Irwin, Inc., Burr Ridge, IL.
  • Whitten, J. L., Bentley, L. D., and Dittman, K. C. (2004) Systems Analysis and Design Methods, 6th Ed., McGraw Hill Irwin, Boston.
  • Whitten, J. L., Bentley, L. D., and Ho, T. I. M. (1986) Systems Analysis and Design, Times Mirror/Mosby College Publishing, St. Louis.

case study system development life cycle

Understanding the SDLC: Software Development Lifecycle Explained

Learn about the software development lifecycle (SDLC) and gain valuable insights into its essential phases, methodologies, and best practices. Enhance your understanding of this crucial process to drive successful software development projects.

Building great software is a big challenge, and development teams rely on the software development lifecycle (SDLC) to help them succeed. By providing a structured approach to software development, an effective SDLC helps teams:

Clarify and understand stakeholder requirements.

Estimate project costs and timeframes.

Identify and minimize risks early in the process.

Measure progress and keep projects on track.

Enhance transparency and improve client relations.

Control costs and accelerate time to market.

What is SDLC?

The software development lifecycle (SDLC) is a step-by-step process that helps development teams efficiently build the highest quality software at the lowest cost. Teams follow the SDLC to help them plan, analyze, design, test, deploy, and maintain software. The SDLC also helps teams ensure that the software meets stakeholder requirements and adheres to their organization’s standards for quality, security, and compliance.

The SDLC includes different phases, and each phase has a specific process and deliverables. Although the meaning of SDLC might vary for each development team, the most common phases include:

Requirements gathering and analysis: Business analysts work with stakeholders to determine and document the software requirements.

System design: Software architects translate the requirements into a software solution and create a high-level design.

Coding: Developers build the software based on the system design.

Testing: The software is tested for bugs and defects and to make sure that it meets the requirements. Any issues are fixed until the software is ready for deployment.

Deployment: The software is released to the production environment where it is installed on the target systems and made available to users.

Maintenance and support: This ongoing process includes training and supporting users, enhancing the software, monitoring performance, and fixing any bugs or security issues.

SDLC phases and how they work

Each phase of the SDLC has key activities designed to drive efficiency, quality, and customer satisfaction.

Requirements gathering and analysis

Accurate, complete, and measurable user requirements are the foundation for any successful SDLC project—to ensure that the software meets user expectations and to avoid costly rework and project delays. The IT business analyst:

Gathers requirements by conducting interviews, holding workshops or focus groups, preparing surveys or questionnaires, and observing how stakeholders work.

Evaluates the requirements as they relate to system feasibility and to software design and testing.

Models the requirements and records them in a document, such as a user story, software requirements specification, use case document, or process specification.

System design

Effective system design properly accounts for all documented requirements. In this phase, software architects use tools to visualize information about the application’s behavior and structure, including:

The Unified Modeling Language (UML) to illustrate the software’s architectural blueprints in a diagram.

Data flow diagrams to visualize system requirements.

Decision trees and decision tables to help explain complex relationships.

Simulations to predict how the software will perform.

To support the distinct layers within a software application, software architects use a design principle called separation of concerns. A software program that’s designed to align with the separation of concerns principle is called a modular program.

Modular software design separates program functionality into interchangeable, independent modules, so that each module contains everything it needs to execute one aspect of the software’s functionality. This approach makes it easier to understand, test, maintain, reuse, scale, and refactor code.
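To make the modular idea concrete, here is a minimal sketch in Python. The order-report example and every name in it are hypothetical, invented for illustration rather than taken from any real codebase:

```python
# Hypothetical sketch: each section stands in for a separate module,
# and each module owns exactly one concern.

# --- pricing "module": knows only how to compute totals ---
def order_total(unit_price: float, quantity: int, tax_rate: float = 0.08) -> float:
    """Return the order total including tax, rounded to cents."""
    return round(unit_price * quantity * (1 + tax_rate), 2)

# --- presentation "module": knows only how to display results ---
def format_receipt(order_id: str, total: float) -> str:
    """Render a one-line receipt for display."""
    return f"Order {order_id}: ${total:.2f}"

# --- orchestration "module": wires the independent pieces together ---
def print_receipt(order_id: str, unit_price: float, quantity: int) -> None:
    total = order_total(unit_price, quantity)   # pricing concern
    print(format_receipt(order_id, total))      # presentation concern

print_receipt("A-1001", unit_price=19.99, quantity=3)
```

Because pricing and presentation never touch each other’s internals, either piece can be tested, reused, or refactored on its own, which is the payoff the separation of concerns principle promises.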

Coding

In the coding phase, developers translate the system design specifications into actual code. It’s critical that developers follow best practices for writing clean, maintainable, and efficient code, including the following (a short example follows the list):

Writing code that’s easy to understand and read.

Using comments to explain what the code does.

Using version control to track any changes to the codebase.

Refactoring the code if needed.

Conducting a code review when coding is completed to get a second opinion on the code.

Providing code documentation that explains how the code works.
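As a small, hypothetical illustration of several of these practices at once (descriptive naming, comments, and a docstring), compare a cryptic version of a function with one written for readers:

```python
# Hard to maintain: unexplained name, single-letter variables, magic number.
def f(x):
    return [i for i in x if i % 2 == 0 and i > 9]

# Easier to maintain: a named constant, a docstring, and descriptive names.
MIN_VALUE = 10  # smallest value the report should include

def even_numbers_at_least_min(values):
    """Return the even numbers in `values` that are >= MIN_VALUE."""
    return [value for value in values if value % 2 == 0 and value >= MIN_VALUE]

print(even_numbers_at_least_min([4, 10, 11, 12]))  # prints [10, 12]
```

Both functions behave identically; the second simply costs future readers and reviewers far less effort.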

Testing

Before it’s released to production, the software is thoroughly tested for defects and errors.

The software test plan provides critical information about the testing process, including the strategy, objectives, required resources, deliverables, and criteria for exit or suspension.

Test case design establishes the criteria for determining if the software is working correctly or not.

Test execution is the process of running the test to identify any bugs or software defects.

Developers and quality assurance teams use automated testing tools to quickly test software, prepare defect reports, and compare testing results with expected outcomes. Automated testing saves time and money, provides immediate feedback, and helps improve software quality. Automated testing can be used for the following levels (a short test sketch follows the list):

Unit testing: Developers test the individual software modules to validate that each one is working correctly.

Integration testing: Developers test how the different modules interact with each other to verify that they work together correctly.

System testing: Developers test the software to verify that it meets the requirements and works correctly in the production environment.

User acceptance testing: Stakeholders and users test the software to verify and accept it before it’s deployed to production.
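A minimal sketch of the first two levels using Python’s built-in unittest module; the discount and checkout functions are hypothetical stand-ins for real application modules:

```python
import unittest

# Hypothetical modules under test.
def apply_discount(price: float, percent: float) -> float:
    """Reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def checkout(prices: list, percent: float) -> float:
    """Total a cart by applying the discount to every item."""
    return round(sum(apply_discount(p, percent) for p in prices), 2)

class DiscountUnitTest(unittest.TestCase):
    def test_apply_discount(self):
        # Unit test: validate one module in isolation.
        self.assertEqual(apply_discount(100.0, 25), 75.0)

class CheckoutIntegrationTest(unittest.TestCase):
    def test_checkout_combines_modules(self):
        # Integration test: verify the modules work together correctly.
        self.assertEqual(checkout([100.0, 50.0], 10), 135.0)

if __name__ == "__main__":
    unittest.main()
```

Running the file executes both tests; in a CI pipeline, the same command would gate the build.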

Deployment

There are three main phases to deploying software in a production environment:

The development team commits the code to a software repository.

The deployment automation tool triggers a series of tests.

The software is deployed to production and made available to users.

Effective software installation requires a consistent deployment mechanism and a simple installation structure with minimal file distribution. The team must also make sure that the correct configuration file is copied to the production environment and that the correct network protocols are in place. Before migrating data to the new system, the team also needs to audit the source data and resolve any issues.

Release management makes software deployment smooth and stable. This process is used to plan, design, schedule, test, and deploy the release. Versioning helps ensure the integrity of the production environment when upgrades are deployed.
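One widely used convention for the versioning mentioned above is semantic versioning (MAJOR.MINOR.PATCH). This sketch assumes three-part version strings; the function names are illustrative only:

```python
def parse_version(version: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH' into integers that compare correctly."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_safe_upgrade(current: str, candidate: str) -> bool:
    """Accept only forward releases that keep the same major version."""
    cur, cand = parse_version(current), parse_version(candidate)
    return cand > cur and cand[0] == cur[0]  # block breaking-change jumps

print(is_safe_upgrade("2.4.1", "2.5.0"))  # True: newer, same major version
print(is_safe_upgrade("2.4.1", "3.0.0"))  # False: major bump needs a plan
```

Comparing tuples of integers rather than raw strings avoids the classic trap where "2.10.0" sorts before "2.9.0".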

Maintenance and support

After the software is deployed, the software maintenance lifecycle begins. Software requires ongoing maintenance to ensure that it operates at peak performance. Developers periodically issue software patches to fix bugs in the software and resolve any security issues.

Maintenance activities also include performance monitoring of both the software’s technical performance and how users perceive its performance. Providing training and documentation to users, along with addressing user issues and upgrading their systems to make sure they’re compatible with the new software, are also key components of the software maintenance lifecycle.


What are the SDLC methodologies?

In the world of software development, different methodologies serve as structured approaches to guide the process of creating and delivering software. These methodologies shape how teams plan, execute, and manage their projects, impacting factors such as flexibility, collaboration, and efficiency. Let's take a look at some of the more prominent SDLC methodologies.

Waterfall model

Introduced in 1970, the first SDLC approach to be widely used by development teams is called the waterfall model. This method divides the software development process into sequential phases. Work flows down from one phase to another, like a waterfall, with the outcome of one phase serving as the input for the next phase. The next phase can’t begin until the previous one is completed.

The waterfall model works best for small projects where the requirements are well-defined, and the development team understands the technology. Updating existing software and migrating software to a new platform are examples of scenarios that are well-suited for the waterfall model.

Waterfall model advantages

The straightforward process is easy to understand and follow.

An output is delivered at the end of each phase.

Project milestones and deadlines are clearly defined.

Waterfall model disadvantages

Lack of flexibility makes it difficult for development teams to adapt when stakeholder requirements change.

Once a phase is completed, any changes can be costly to implement and might delay the project schedule.

Testing does not take place until the end of the SDLC.

Agile methodology

The term “agile” describes an approach to software development that emphasizes incremental delivery, team collaboration, and continual planning and learning. Unlike the waterfall model’s sequential process, the agile methodology takes an iterative approach to software development.

Iterative software development speeds the SDLC by completing work in sprints, which are fixed project cycles that typically last between two and four weeks. Key terms include the following (a small planning sketch follows the list):

User stories: User stories are short descriptions of product requirements from the customer’s point of view. The most critical user stories are prioritized at the top of each sprint’s backlog of work.

Increment: The sprint’s output is called the increment. Each increment should be of potentially shippable quality, with all coding, testing, and quality verification completed.

Retrospectives: At the end of each sprint, the agile team conducts a retrospective meeting to evaluate the process and the tools, discuss what did and didn’t go well, and determine what to improve in future sprints.
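Here is the planning sketch promised above: a toy model of how a prioritized backlog might feed a sprint. The story points, capacity value, and helper names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    points: int    # estimated effort
    priority: int  # 1 = most critical

def plan_sprint(backlog: list, capacity: int) -> list:
    """Fill the sprint with the most critical stories that fit the capacity."""
    planned, used = [], 0
    for story in sorted(backlog, key=lambda s: s.priority):
        if used + story.points <= capacity:
            planned.append(story)
            used += story.points
    return planned

backlog = [
    UserStory("Password reset", points=3, priority=1),
    UserStory("Export to CSV", points=5, priority=3),
    UserStory("Audit logging", points=8, priority=2),
]
for story in plan_sprint(backlog, capacity=12):
    print(story.title)  # Password reset, Audit logging
```

Real teams negotiate scope rather than letting an algorithm decide, but the sketch captures the rule that the most critical stories sit at the top of the sprint backlog.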

The agile methodology is well-suited for projects that require flexibility and the ability to quickly adapt to changing requirements. Because it encourages collaboration, agile is also well-suited for complex projects where many teams work together.

Agile methodology advantages

Stakeholders and users can provide feedback throughout the SDLC, making it easier for developers to build software that meets their needs.

Incremental delivery helps development teams identify and fix issues early in the project before they become major problems.

Cost savings might be realized by reducing the amount of rework required to fix issues.

Retrospectives provide an opportunity for teams to continuously improve the process.

Agile methodology disadvantages

Requirements must be clearly defined in the user story. If not, the project can quickly derail.

Too much user feedback might change the scope of the project, cause delays, or make it difficult to manage.

Incremental deliverables can make it difficult to determine how long it will take to finish the entire project.

Agile frameworks

Agile methods are often called frameworks, and the most common agile framework is scrum. There are three key roles on the scrum team:

The scrum master ensures that the team follows the scrum process and is continuously looking for ways that the team can improve while resolving issues that arise during the sprint.

The product owner takes responsibility for what the team builds and why they build it, along with keeping the backlog of work in priority order and up to date.

The scrum team members build the product and are responsible for engineering and quality.

The scrum team decides how to manage its own workload for each sprint based on the backlog shown on a task board. Team members participate in a daily scrum (or daily standup) meeting where each person reports on their individual progress. At the end of the sprint, the team demonstrates their potentially shippable increment to stakeholders, conducts a retrospective, and determines actions for the next sprint.

Kanban is another agile framework. Kanban is a Japanese term that means billboard or signboard. Kanban boards visualize work items as cards in different states to provide at-a-glance insight into the status of each project and make it easy to identify any bottlenecks.

To help them work most effectively, development teams might adopt aspects of both the scrum and kanban agile frameworks.
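As a toy illustration of the kanban idea (the column names, cards, and WIP limits below are invented for the example):

```python
# A kanban board as columns of cards; a bottleneck is any column
# holding more cards than its work-in-progress (WIP) limit allows.
board = {
    "To do":       ["Design login page", "Write API docs"],
    "In progress": ["Payment service", "Search index", "Mobile nav", "Bug fix"],
    "Done":        ["User signup"],
}
wip_limits = {"In progress": 3}  # cap WIP to keep work flowing

def bottlenecks(board: dict, limits: dict) -> list:
    """Return the columns that exceed their WIP limit."""
    return [column for column, limit in limits.items()
            if len(board.get(column, [])) > limit]

print(bottlenecks(board, wip_limits))  # ['In progress']: too much WIP
```

The at-a-glance value of a physical or digital kanban board is exactly this check, performed visually.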

Other popular SDLC methodologies

The iterative model  emphasizes continuous feedback and incremental progress. It organizes the development process into small cycles where developers make frequent, incremental changes to continuously learn and avoid costly mistakes. The iterative model is well-suited for large projects that can be divided into smaller pieces, and for projects where the requirements are clearly defined from the start.

The spiral model  combines the iterative and waterfall models. It takes an evolutionary approach where developers iteratively develop, test, and refine the software in successive cycles, or spirals. Large, complex, and costly projects are well-suited for this model.

The v-shaped model  emphasizes testing and validation in a sequential process. This model is very useful for projects in industries like healthcare, where thorough testing is critical.

The lean model  focuses on increasing efficiency throughout the development process. This model takes an iterative approach and is well-suited for projects where achieving short-term goals is a priority and when there’s frequent interaction between the development team and users.

SDLC best practices and challenges

The biggest challenges to a successful SDLC often stem from inadequate communication, planning, testing, or documentation. Best practices to address these issues include:

Collaboration between the development team, IT operations, the security team, and stakeholders.

Clearly defining user requirements and project deliverables, timelines, and milestones.

Detailed documentation of resources, schedules, code, and other deliverables.

Daily scrum meetings to identify and resolve issues.

Retrospectives to drive continuous improvement across the SDLC.

SDLC security

Due to increasing cyberattacks and security breaches, development teams are under pressure to improve application security. SDLC security is a set of processes that incorporate robust security measures and testing into the SDLC. Best practices support the detection and remediation of security issues early in the lifecycle—before the software is deployed to production.

Security that empowers developers

To get ahead of security issues, some teams are using development platforms that build security analysis into their workflow. For example, the GitHub platform scans code for security issues as it’s written in the coding phase.

How does DevOps work with the SDLC?

DevOps is an approach to SDLC that combines development (dev) and operations (ops) to speed the delivery of quality software. The core principles of this approach are automation, security, and continuous integration and continuous delivery (CI/CD), which combines the SDLC into one integrated workflow.

DevOps follows the lean and agile SDLC methodologies and emphasizes collaboration. Throughout the entire SDLC, developers, IT operations staff, and security teams regularly communicate and work together to ensure successful project delivery.


A well-structured SDLC helps development teams deliver high-quality software faster and more efficiently. Although SDLC methods vary by organization, most development teams use SDLC to guide their projects.

The SDLC helps development teams build software that meets user requirements and is well-tested, highly secure, and production ready. Popular tools that support the SDLC process include:

GitHub Actions to automate SDLC workflows.

GitHub security tools to help developers ship secure applications.

GitHub Copilot to help developers write code faster with AI.

GitHub code review tools to help avoid human error.

Frequently asked questions

What are the phases of the SDLC?

The phases of the software development lifecycle (SDLC) include requirements gathering and analysis, system design, coding, testing, deployment, and maintenance and support. By taking a structured approach to software development, SDLC provides a process for building software that’s well-tested and production ready.

What is the software development lifecycle?

The software development lifecycle (SDLC) is a step-by-step process that helps development teams efficiently build the highest quality software at the lowest cost. Teams follow the SDLC to help them plan, analyze, design, test, deploy, and maintain software. The SDLC also helps teams ensure that the software meets stakeholder requirements and adheres to the organization’s standards for quality, security, and compliance.

What is the main purpose of SDLC?

The main purpose of the software development lifecycle (SDLC) is to drive successful software development projects. Building great software is a big challenge, and most software development teams rely on the SDLC to help them succeed. By taking a structured approach to software development, SDLC provides a process for building software that’s well-tested and production ready.

What are SDLC models?

Software development lifecycle (SDLC) models are workflow processes that development teams follow to plan, analyze, design, test, deploy, and maintain software. Examples of SDLC models include the waterfall model, the iterative model, the spiral model, and the v-shaped model. Another type of SDLC model is the agile method, which emphasizes incremental delivery, team collaboration, and continual planning and learning.


The Ultimate Guide to Understanding and Using a System Development Life Cycle

By Kate Eby | June 27, 2017


There is a lot of literature on specific systems development life cycle (SDLC) methodologies, tools, and applications for successful system deployment. Not just limited to purely technical activities, SDLC involves process and procedure development, change management, identifying user experiences, policy/procedure development, user impact, and proper security procedures. Books such as David Avison and Guy Fitzgerald’s Information Systems Development and Alan Daniels and Don Yeates’ Basic Systems Analysis delve into the intricacies of information systems development lifecycles. This article will provide an in-depth analysis of the history, definition, phases, benefits, and disadvantages, along with solutions that support the system development life cycle.

What Is a System Development Life Cycle?

In order to understand the concept of system development life cycle, we must first define a system. A system is any information technology component - hardware, software, or a combination of the two. Each system goes through a development life cycle from initial planning through to disposition. Various methodologies provide the framework to guide this challenging and complex process, all with the shared goal of moving physical or software-based systems through phases while avoiding costly mistakes and expediting development.

A system development life cycle is similar to a project life cycle. In fact, in many cases, SDLC is considered a phased project model that defines the organizational, personnel, policy, and budgeting constraints of a large scale systems project. The term “project” implies that there is a beginning and an end to the cycle and the methods inherent in a systems development life cycle strategy provide clear, distinct, and defined phases of work in the elements of planning, designing, testing, deploying, and maintaining information systems.

Those involved in the SDLC include the c-suite executives, but it is the project/program managers, software and systems engineers, users, and the development team who handle the multi-layered process. Each project has its own level of complexity in planning and execution, and often within an organization, project managers employ numerous SDLC methods. Even when an enterprise utilizes the same methods, different project tools and techniques can differ dramatically.

History and Origin of the System Development Lifecycle

Completely defined in 1971, the term originated in the 1960s when mainframe computers filled entire rooms and a pressing need developed to define processes and equipment centered on building large business systems. In those days, teams were small and centralized, and users were less demanding. This type of scenario meant that there was not a true need for refined methodologies to drive the life cycle of system development. However, technology has evolved, systems have become increasingly complex, and users have become accustomed to well-functioning technology. Models and frameworks have been developed to guide companies through an organized system development life cycle. Today, the traditional approaches to technology system development have been adjusted to meet the ever-changing, complex needs of each unique organization and its users. Below you will find the sequential steps of SDLC, but each company will vary in its process.

The Phases of SDLC

The SDLC framework provides a step-by-step guide through the phases of implementing both a physical and software based system. A variety of models are available, but whether utilizing the oldest method of SDLC, the waterfall method, adopting an Agile method, or employing a hybrid of several methods, all methods embrace a phased iterative structure that you can adapt to your organization’s needs. You may find phases with varying naming conventions, but these are the most common stages of SDLC. Organizations may adopt any, all, or a variation of these phases:

  • Analysis/Feasibility: For an SDLC strategy to work there should be a strong idea of what deficiencies exist in the current structure and the goals for the new approach. A feasibility study determines if you can or should accomplish the goals of the plan. Information is gathered and analyzed to identify what technical assets, personnel, and training is already in place and utilized. The study also inventories what is needed to augment or replace, and at what cost. During this phase you determine the overall project scope, including economic, operational and human factors, identify key personnel, and develop timelines. 
  • Planning/Requirements: A plan can include adapting a current system to meet new needs or developing a completely new system. This phase defines user requirements, identifies needed features, functions, and customizations, and investigates overall capabilities.
  • Design: Once you make the plan and identify costs, systems, and user requirements, a detailed system design can begin that includes features and other documentation. The architects can then build a sample framework.
  • System Development: An approved design is the catalyst for authorizing development for the new or augmented system. Some say that this is the most robust part of the life cycle. During this phase, developers write code and you construct and fine-tune technical and physical configurations. 
  • Testing: Users are brought in to test before deployment to identify areas of concern or improvement.
  • Deployment: The system is put into a production environment and used to conduct business.
  • Maintenance: The cyclical nature of SDLC recognizes that the process of change and upgrading are constant. Carry out the replacement of outdated hardware/software, security upgrades, and continuous improvement on a regular basis. 
  • Evaluation:  An often overlooked element of any large scale system roll-out is the evaluation process, which supports the continuous improvement of the system. The team continuously reviews what is working and what is in need of improvement. This can mean recommending additional training, procedures, or upgrades.
  • Disposition/Disposal/End-of-Life: A well-rounded life cycle identifies and decommissions surplus or obsolete assets at the end of their life cycle. Included in this phase is the secure retrieval of data and information for preservation, as well as the physical disposition of an asset.

Following each phase of a system development life cycle, the team and project manager may establish a baseline or milestones in the process. The baseline may include start date, end date, phase/stage duration, and budget data. These baselines assist the project manager in monitoring performance.


There is increased interest in system security at all levels of the life cycle, including the elements of confidentiality, information availability, information integrity, overall system protection, and risk mitigation. Aligning the development team and the security team is a best practice that ensures security measures are built into the various phases of the system development life cycle. For example, SAMM, the Software Assurance Maturity Model, is a framework that aids organizations in evaluating their software security practices, building security programs, demonstrating security improvements, and measuring security-related activities. In addition, governance and regulations have found their way into technology, and stringent requirements for data integrity impact the team developing technology systems. Regulations affect organizations differently, but the most common are Sarbanes-Oxley, COBIT, and HIPAA.

Each company will have their own defined best practices for the various stages of development. For example, testing may involve a defined number of end users and use case scenarios in order to be deemed successful, and maintenance may include quarterly, mandatory system upgrades.

Benefits of a Well-Defined System Development Life Cycle

There are numerous benefits for deploying a system development life cycle that include the ability to pre-plan and analyze structured phases and goals. The goal-oriented processes of SDLC are not limited to a one-size-fits-all methodology and can be adapted to meet changing needs. However, if well-defined for your business, you can:

  • Have a clear view of the entire project, the personnel involved, staffing requirements, a defined timeline, and precise objectives to close each phase.  
  • Base costs and staffing decisions on concrete information and need.  
  • Provide verification, goals, and deliverables that meet design and development standards for each step of the project, developing extensive documentation throughout. 
  • Provide developers a measure of control through the iterative, phased approach, which usually begins with an analysis of costs and timelines.  
  • Improve the quality of the final system with verification at each phase.

Disadvantages of a Structured System Development Life Cycle

In these same areas, there are some who find disadvantages when following a structured SDLC. Some of the downfalls include:

  • Many of the methods are considered inflexible, and some suffer from outdated processes. 
  • Since you base the plan on requirements and assumptions made well ahead of the project’s deployment, many practitioners identify difficulty in responding to changing circumstances in the life cycle. 
  • Some consider the structured nature of SDLC to be time and cost prohibitive.
  • Some teams find it too complex to estimate costs, are unable to define details early on in the project, and do not like rigidly defined requirements.
  • Testing at the end of the life cycle is not favorable to all development teams. Many prefer to test throughout their process.
  • The documentation involved in a structured SDLC approach can be overwhelming.
  • Teams who prefer to move between stages quickly and even move back to a previous phase find the structured phase approach challenging.

Another Form of SDLC: The Software Development Life Cycle

When the word “systems” is replaced with the word “software,” it creates another version of SDLC. The Software Development Life Cycle follows an international standard known as ISO/IEC 12207:2008. In this standard, phasing similar to the traditional systems development life cycle is outlined to include the acquisition of software, development of new software, operations, maintenance, and disposal of software products. An identified area of growing concern and increased adoption continues to revolve around the need for enhanced security functionality and data protection. Like the systems development life cycle discussed previously, there are numerous methods and frameworks that you can adopt for software development, including:

  • The Waterfall Method is a steady sequence of activity that flows in a downward direction much like its name. This traditional engineering process that closes each phase upon completion is often criticized for being too rigid.
  • The V-Shaped Model is an adaptation of Waterfall that has testing as an integral part to close each phase.
  • The Prototype Method advocates building numerous prototypes that allow different elements to be “tried out” before fully developing them. The Prototype method can increase “buy-in” by users/customers.
  • Rapid Application Development (RAD) is a hybrid of the prototype method, but works to de-emphasize initial planning to rapidly prototype and test potential solutions.
  • The Spiral Method provides more process steps, which are graphically viewed in a spiral formation and is generally credited to provide greater flexibility and process adaptation.
  • Agile Methods provide feedback through an iterative process and include Kanban, Scrum, Extreme Programming (XP), and the Dynamic Systems Development Method (DSDM).

Other models and methods include Synchronize and Stabilize, Dynamic Systems Development (DSDM), Big Bang Model, Fountain, and Evolutionary Prototyping Model, among others. Each has elements of a defined stepped process with variations to adapt for flexibility. Choosing the right SDLC method is critical for the success of your development project as well as for your business. There is not a hard and fast rule that you must choose only a single methodology for each project, but if you are to invest in a methodology and supporting tools, it is wise to utilize them as much as possible. To choose the right methodology you must first:

  • Understand the various methodologies, their advantages, and disadvantages.
  • Become familiar with the team dynamics, stakeholders involved, and the projects you will be managing.
  • Compare the methodologies to the criteria your team has defined and business facts – size of your team, type of technology projects, complexity of projects, etc. The methodology should be easy for the team to understand and learn. 
  • Share the decision and reasoning with your team and stakeholders.

Project Managing the System Development Life Cycle

The iterative and phased stages of an SDLC benefit from the leadership of a dedicated project manager. The major goal of an SDLC is to provide cost effective and appropriate enhancements or changes to the information system that meet overall corporate goals. The project manager is responsible for executing and closing all the linear steps of planning, building, and maintaining the new or improved system throughout the process. 

Other elements for the project manager involve administration of human elements including communication, change management strategies, and training, initiating and driving the planning for the project, setting and monitoring goals, providing avenues for communication and training, and keeping track of budgets and timelines. The project manager is the overall control agent for a strong SDLC process.

Software Solutions That Support the System Development Life Cycle

SDLC products from software vendors promise organizational clarity, modern process development procedures, legacy application strategies, and improved security features. Many options provide customized or integrated solutions. Vendors such as Oracle, Airbrake, and Veracode provide software development solutions in their complete enterprise software offerings. Many of these vendors also have a strong focus on identifying and de-bugging systems that may support the process of testing in software development life cycles. In many cases, SDLC teams utilize a variety of software solutions to support the varying stages. For example, requirements may be gathered, tracked and managed in one solution while testing use cases may take place in a completely different solution. 

Regardless of the process implemented and the tools used, all require the crucial element of documentation to support findings, close iterative phases, and to analyze success. Today’s increasing demand for data and information security also factor into the overall planning, training, testing, and deployment of a system. However, one of the most important elements of success of any SDLC method continues to be in the initial planning, followed by choosing the appropriate framework and method, and finally sticking to, deploying, and maintaining a robust project plan.

Start Managing Your System Development Life Cycle with a Helpful Template

Project managers in charge of SDLC need the right tools to help manage the entire process, provide visibility to key stakeholders, and create a central repository for documentation created during each phase. One such tool is Smartsheet, a work management and automation platform that enables enterprises and teams to work better. 

With its customizable spreadsheet interface and powerful collaboration features, Smartsheet allows for streamlined project and process management. Use Smartsheet’s SDLC with Gantt template to get started quickly, and help manage the planning, development, testing, and deployment stages of system development. Create a timeline with milestones and dependencies to track progress, and set up automated alerts to notify you as anything changes. Share your plan with your team and key stakeholders to provide visibility, and assign tasks to individuals to ensure nothing slips through the cracks.



Create Your SDLC Plan in Smartsheet

A Better Way to Manage System and Software Development Life Cycles

Empower your people to go above and beyond with a flexible platform designed to match the needs of your team — and adapt as those needs change. 

The Smartsheet platform makes it easy to plan, capture, manage, and report on work from anywhere, helping your team be more effective and get more done. Report on key metrics and get real-time visibility into work as it happens with roll-up reports, dashboards, and automated workflows built to keep your team connected and informed. 

When teams have clarity into the work getting done, there’s no telling how much more they can accomplish in the same amount of time.  Try Smartsheet for free, today.



What is the software development lifecycle (SDLC)? Phases and models


In this guide, we’ll provide an overview of the software development life cycle (SDLC) and its seven phases, as well as a comparison of the most popular SDLC models.

What is the software development lifecycle (SDLC)?


Every software development team needs a guiding framework. This might come from a lightweight framework such as scrum or a traditional heavyweight framework such as the software development lifecycle (SDLC).

The SDLC is a methodology that involves multiple steps (also called stages) that enable software development teams to produce low-cost and high-quality software products.

The development team often owns the SDLC. They use the SDLC alongside the engineering manager to organize their workflow. However, the SDLC is also a part of the holistic product development framework.

The product manager is typically involved in the SDLC in the same way as any other framework. Product managers:

  • Ensure with the engineering manager and the development team that the SDLC is aligned with the business objectives
  • Guard the team against any organizational obstacles
  • Define the product vision and strategy and scope the features related to them in an unambiguous manner to avoid issues during the implementation
  • Ensure that the product built during the SDLC aligns with the scope, schedule, and budget
  • Remain actively involved during the testing stage to make sure the product produced adheres to the expected quality

What are the 7 phases of the SDLC?

Corporations use the SDLC to define, build, and maintain software products. It is a detailed process that creates a comprehensive outline for the engineers’ workflow.

The SDLC comprises seven phases (stages or steps) whose names and numbers differ from company to company and book to book. However, they all serve the same purpose.

The following phases are the most common within the SDLC model:

1. Planning: The work plan is constructed. The team members are assigned, and the activities needed to build the software are defined (e.g., gather requirements, interview clients, conduct smoke tests, etc.).

2. Defining requirements: A detailed requirements document is prepared (e.g., product requirement document, product specifications document, etc.). In traditional SDLC, the requirements should be supported by different product architecture diagrams such as use case diagrams, activity diagrams, sequence diagrams, component diagrams, composite structure diagrams, and interaction overviews.

3. Prototyping: The designers use the requirements to create a very detailed prototype that covers every aspect of the user journey. The prototype should cover all possible cases, including error messages, statuses, and interactions.

4. Implementation: The engineers receive the requirements and the design from the other team members, and the actual implementation work starts.

5. Integration and testing: The backend work integrates with the front-end work, and the testers start executing their test cases to identify bugs or any potential issues.


6. Deployment: After successfully building the software, the team coordinates with the product manager to deploy the software to production.

7. Operations and maintenance: The team continuously identifies technical and functional enhancements to improve the product. This includes refactoring and bug bashing.

SDLC origins

The SDLC was initially introduced in a book called Global Business Information Technology by Geoffrey Elliott. After it was proven successful by large organizations that develop business systems, countless software development companies started adopting it, and different variations of the SDLC model evolved over time.

SDLC models

The SDLC phases or stages can be used in various ways with multiple sequences. Organizing and reorganizing the steps of the SDLC will produce so-called models or methodologies.

Each model has its own advantages and disadvantages. SDLC methodologies are divided into traditional models and contemporary models:

  • Traditional models — Frameworks or models that are distinguished by their linear nature, meaning that SDLC phases are carried out consecutively
  • Contemporary models — Frameworks or models that are based on the iterative nature throughout SDLC phases to provide more adaptability during the production flow of the software. Those models have evolved into agile models down the road

Examples of traditional SDLC models

The SDLC has more than 10 traditional models; however, the most popular models are:

  • Waterfall development
  • Spiral development

The waterfall model is one of the oldest SDLC models, known for its basic and classical structure. The stages of this model are fixed. Each phase must be completed before moving on to the next, which prohibits overlapping. The output of each stage is an input for the next stage.
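The handoff rule (each phase’s output becomes the next phase’s input, with no overlap and no going back) can be pictured as a simple one-way pipeline. This sketch and its phase functions are purely illustrative:

```python
# Each phase runs exactly once, in order, consuming the previous phase's output.
def requirements(idea: str) -> str:
    return f"requirements for '{idea}'"

def design(spec: str) -> str:
    return f"design based on {spec}"

def implementation(blueprint: str) -> str:
    return f"code built from {blueprint}"

def run_waterfall(idea: str) -> str:
    artifact = idea
    for phase in (requirements, design, implementation):  # fixed order, no revisits
        artifact = phase(artifact)  # output of one stage is input to the next
    return artifact

print(run_waterfall("invoice portal"))
```

The rigidity that makes the model easy to follow is the same property that makes late changes expensive: there is no loop in the pipeline.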

Phases of the waterfall model

The six phases of the waterfall model are as follows:

  • Requirements
  • Design
  • Implementation
  • Testing
  • Deployment
  • Maintenance

Requirements

This phase concentrates on communicating with the users/end users to gather the requirements and to capture information regarding a user’s needs. The product manager, at this stage, defines and documents the scope of the project in a document called a business case.

Design

A business analyst evaluates the business case and starts the logical design of the software by using the requirements and information collected by the product manager. Based on the high-level design created by the business analyst, a system analyst translates the high-level design into a detailed low-level design that considers software and hardware technology.

A full user interface design with the system architecture is defined at this stage. A couple of documents are also produced to help the engineers understand the end-to-end expected output.

Implementation

Here, the actual code of the software system is written. Software developers create the system according to the instructions and requirements recorded, written, and prepared in the design and requirement phases. The output of this phase is the actual product.


Testing

This stage gets its input from the implementation stage. Software testers draft test plans based on the functional specification documented in the low-level design document (LLDD). Meanwhile, software developers prepare testing plans in the form of a checklist to examine whether every function is executable as expected.

Finally, quality assurance engineers gather all documents written in all phases and conduct an overall deep test on every specific aspect of the system.

Deployment

After passing all processes of the testing phase, the product is ready for release. The software system is either released for users to install on their own machines or deployed to production servers.

Maintenance

This phase focuses on enhancements, delivering changes, or fixing any defects and issues that may arise.

Applications for the waterfall model

The waterfall model is most suitable for:

  • Small and simple projects
  • Projects with few unconfirmed or ambiguous requirements
  • A software system that requires well-documented artifacts (e.g., issuance software)

Advantages of the waterfall model

The waterfall model helps to:

  • Provide the team with the ability to detect errors early in the process
  • Define the specific starting and ending points of the project, ensuring the project deadline stays under control
  • Provide well-written and structured documents that make it easier to revise the code for future enhancements and scaling work

Disadvantages of the waterfall model

The waterfall model is limited by:

  • There’s no way to go back to a specific phase; once a phase is completed, it’s locked
  • In some cases, estimating the required time to finish a phase is tough. An incorrect assumption may result in a failure to meet the deadline
  • If changes are proposed during the execution of the project, the project has to stop and start all over again

The spiral model is a risk-driven hybrid model that features some of the traits of the waterfall model and the iterative model. Based on the identified patterns of risk, the team can adopt specific activities of different processes.

Phases of the spiral model

  • Planning
  • Risk analysis
  • Engineering/implementation
  • Evaluation

1. Planning

Requirements are collected and the overall objective is identified during this phase. A business analyst collects and generally documents those system and business requirements.

2. Risk analysis

This phase is meant to identify any potential risk by planning the risk mitigation strategy. The project manager, team members, and end user collaborate to identify potential risks that may impact the project.

3. Engineering/implementation

The system is developed along with quality assurance checks and testing processes at this stage.

4. Evaluation

The product manager/end user in this phase is responsible for evaluating the system software, which is the output of the previous phases. The evaluation is done before the project proceeds to the next planned spiral cycle.

Application of the spiral development model

The spiral development model is suitable for projects that:

  • Have a large or medium scope
  • Come with high risk
  • Are complex or unclear in requirements

Advantages of the spiral development model

  • Flexible and easy to manage
  • Monitoring process effectiveness is easy
  • Coping with late proposed changes is easy for the product manager
  • Errors are eliminated early in the project

Disadvantages of the spiral development model

  • Not easy to implement. Needs high expertise
  • Requires risk analysts paired with the development team continuously
  • High in cost
  • Meeting the scheduling and budgetary constraints is challenging with this model

Final thoughts

The SDLC is a framework that was invented around 50 years ago. Since then, it has contributed to building tons of successful software products. Many companies later adopted and adapted it to develop an effective process tailored to their needs. The SDLC, by its nature, was invented to save costs, build high-quality and complex software, and satisfy end users.

Currently, the SDLC is not as popular as before, especially with the rise of agile models and mindsets. However, having information about all those frameworks will allow product managers and product teams to build better processes that generate better results.



Guru99

Software Development Life Cycle (SDLC) Phases & Models

Matthew Martin

What is SDLC?

SDLC is a systematic process for building software that ensures the quality and correctness of the software built. The SDLC process aims to produce high-quality software that meets customer expectations. The system development should be completed within the predefined time frame and cost. SDLC consists of a detailed plan that explains how to plan, build, and maintain specific software. Every phase of the SDLC life cycle has its own process and deliverables that feed into the next phase. SDLC stands for Software Development Life Cycle and is also referred to as the application development life cycle.

Here are the prime reasons why SDLC is important for developing a software system:

  • It offers a basis for project planning, scheduling, and estimating
  • Provides a framework for a standard set of activities and deliverables
  • It is a mechanism for project tracking and control
  • Increases visibility of project planning to all involved stakeholders of the development process
  • Increases and enhances development speed
  • Improves client relations
  • Helps you to decrease project risk and project management plan overhead


SDLC Phases

The entire SDLC process is divided into the following steps:


Phase 1: Requirement collection and analysis
Phase 2: Feasibility study
Phase 3: Design
Phase 4: Coding
Phase 5: Testing
Phase 6: Installation/deployment
Phase 7: Maintenance

In this tutorial, I have explained all these Software Development Life Cycle phases.

Phase 1: Requirement Collection and Analysis

Requirement collection and analysis is the first stage in the SDLC process. It is conducted by the senior team members with inputs from all the stakeholders and domain experts in the industry. Planning for the quality assurance requirements and recognition of the risks involved is also done at this stage.

This stage gives a clearer picture of the scope of the entire project and the anticipated issues, opportunities, and directives which triggered the project.

The requirements gathering stage needs teams to get detailed and precise requirements. This helps companies finalize the necessary timeline to finish the work on that system.

Phase 2: Feasibility Study

Once the requirement analysis phase is completed, the next SDLC step is to define and document the software needs. This process is conducted with the help of the ‘Software Requirement Specification’ document, also known as the ‘SRS’ document. It includes everything that should be designed and developed during the project life cycle.

There are mainly five types of feasibility checks:

  • Economic: Can we complete the project within the budget?
  • Legal: Can we handle this project under cyber law and other regulatory frameworks/compliances?
  • Operational: Can we create the operations the client expects?
  • Technical: Can the current computer systems support the software?
  • Schedule: Can the project be completed within the given schedule?

Phase 3: Design

In this third phase, the system and software design documents are prepared as per the requirement specification document. This helps define the overall system architecture.

This design phase serves as input for the next phase of the model.

There are two kinds of design documents developed in this phase (a code-level illustration follows the lists):

High-Level Design (HLD)

  • Brief description and name of each module
  • An outline about the functionality of every module
  • Interface relationship and dependencies between modules
  • Database tables identified along with their key elements
  • Complete architecture diagrams along with technology details

Low-Level Design (LLD)

  • Functional logic of the modules
  • Database tables, which include type and size
  • Complete detail of the interface
  • Addresses all types of dependency issues
  • Listing of error messages
  • Complete input and outputs for every module
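As a hedged illustration (not taken from the tutorial), here is how a single hypothetical LLD entry, with its functional logic, inputs and outputs, and listed error messages, might translate into code:

```python
# Hypothetical LLD entry for an order-entry module:
#   Function: validate_quantity
#   Inputs:   quantity (int), max_stock (int)
#   Output:   the validated quantity (int)
#   Errors:   E001 non-integer quantity, E002 quantity out of range
def validate_quantity(quantity: int, max_stock: int) -> int:
    """Functional logic for the quantity check described in the LLD."""
    if not isinstance(quantity, int):
        raise TypeError("E001: quantity must be an integer")
    if not 1 <= quantity <= max_stock:
        raise ValueError(f"E002: quantity must be between 1 and {max_stock}")
    return quantity

print(validate_quantity(5, max_stock=10))  # prints 5
```

Writing the LLD at this level of detail is what lets different developers implement modules independently in the coding phase.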

Phase 4: Coding

Once the system design phase is over, the next phase is coding. In this phase, developers start building the entire system by writing code using the chosen programming language. In the coding phase, tasks are divided into units or modules and assigned to the various developers. It is the longest phase of the Software Development Life Cycle process.

In this phase, developers need to follow certain predefined coding guidelines. They also need to use programming tools like compilers, interpreters, and debuggers to generate and implement the code.

Phase 5: Testing

Once the software is complete, it is deployed in the testing environment. The testing team starts testing the functionality of the entire system. This is done to verify that the entire application works according to the customer requirements.

During this phase, the QA and testing team may find bugs/defects, which they communicate to the developers. The development team fixes the bug and sends it back to QA for a re-test. This process continues until the software is bug-free, stable, and working according to the business needs of that system.

Phase 6: Installation/Deployment

Once the software testing phase is over and no bugs or errors are left in the system, the final deployment process starts. Based on the feedback given by the project manager, the final software is released and checked for deployment issues, if any.

Phase 7: Maintenance

Once the system is deployed and customers start using the developed system, the following three activities occur:

  • Bug fixing – bugs are reported because of some scenarios which are not tested at all
  • Upgrade – Upgrading the application to the newer versions of the Software
  • Enhancement – Adding some new features into the existing software

The main focus of this SDLC phase is to ensure that needs continue to be met and that the system continues to perform as per the specification mentioned in the first phase.

Popular SDLC Models

Here, are some of the most important models of Software Development Life Cycle (SDLC):

Waterfall model in SDLC

The waterfall is a widely accepted SDLC model. In this approach, the whole process of software development is divided into various phases of the SDLC. In this SDLC model, the outcome of one phase acts as the input for the next phase.

This SDLC model is documentation-intensive, with earlier phases documenting what needs to be performed in the subsequent phases.

Incremental Model in SDLC

The incremental model is not a separate model. It is essentially a series of waterfall cycles. The requirements are divided into groups at the start of the project. For each group, the SDLC model is followed to develop software. The SDLC life cycle process is repeated, with each release adding more functionality until all requirements are met. In this method, every cycle acts as the maintenance phase for the previous software release. A modification to the incremental model allows development cycles to overlap, so that a subsequent cycle may begin before the previous cycle is complete.

V-Model in SDLC

In this type of SDLC model, testing and development are planned in parallel. So, there are verification phases of the SDLC on one side and the validation phase on the other side. The V-model joins the two sides at the coding phase.

Agile Model in SDLC

Agile methodology is a practice that promotes continuous interaction of development and testing throughout the SDLC process of any project. In the Agile method, the entire project is divided into small incremental builds, delivered in iterations that each last from one to three weeks.

Spiral Model

The spiral model is a risk-driven process model. It helps the team adopt elements of one or more other process models, such as waterfall or incremental.

This model adopts the best features of the prototyping model and the waterfall model. The spiral methodology is a combination of rapid prototyping and concurrency in design and development activities.

Big bang model

The big bang model concentrates all resources on software development and coding, with little or no planning. Requirements are understood and implemented as they come.

This model works best for small projects with a small development team working closely together, and it is also useful for academic software projects. It is suitable when requirements are unknown or no final release date is given.

  • The Software Development Life Cycle (SDLC) is a systematic process for building software that ensures the quality and correctness of the software built
  • SDLC stands for Software Development Life Cycle (or Systems Development Life Cycle)
  • SDLC in software engineering provides a framework for a standard set of activities and deliverables
  • The seven SDLC stages are 1) requirement collection and analysis, 2) feasibility study, 3) design, 4) coding, 5) testing, 6) installation/deployment, and 7) maintenance
  • The senior team members conduct the requirement analysis phase
  • The feasibility study stage includes everything that should be designed and developed during the project life cycle
  • In the Design phase, the system and software design documents are prepared as per the requirement specification document
  • In the coding phase, developers start building the entire system by writing code in the chosen programming language
  • Testing is the next phase, conducted to verify that the entire application works according to the customer requirements
  • The installation and deployment phase begins when the software testing phase is over and no bugs or errors are left in the system
  • Bug fixing, upgrades, and enhancements are covered in the maintenance phase
  • Waterfall, Incremental, Agile, V model, Spiral, Big Bang are some of the popular SDLC models in software engineering
  • SDLC in software testing consists of a detailed plan which explains how to plan, build, and maintain specific software


Human-System Integration in the System Development Process: A New Look (2007)

Chapter 5: Case Studies

This chapter provides three examples of specific system development that illustrate application of human-system integration (HSI) methods in the context of the incremental commitment model (ICM). The examples are drawn from the committee’s collective experience and specific application of the concepts developed during our work to these particular projects. They represent projects at three stages of development: the early stages of planning, in mid-development, and fully realized.

The first example involves the development of unmanned aerial systems and identifies numerous HSI issues in these systems that will require solution. This example provides a “notional” application of human factors methods and potential implementation of the incremental commitment model. The case study illustrates the theme of designing to accommodate changing conditions and requirements in the workplace. Specifically, it addresses the issue of adapting current unmanned aerial systems to accommodate fewer operators, with individual operators controlling multiple vehicles. The hypothetical solutions to this problem reveal the potential costs of reliance on automation, particularly prior to a full understanding of the domain, task, and operator strengths and limitations. This case study also reveals the tight interconnection between the various facets of human-system integration, such as manpower, personnel, training, and design. In other words, answering the “how many operators to vehicles” question necessarily impacts design, training, and personnel decisions.

The second example focuses on a large-scale government implementation of port security systems for protection against nuclear smuggling. The example discusses the HSI themes and incremental application of methods

during the iterative development of the system. This case is useful for illustrating application of human factors methods on a risk-driven basis, as they tend to be applied as needed over time in response to the iterative aspects of defining requirements and opportunities, developing design solutions, and evaluation of operational experience.

The third example describes development of an intravenous infusion pump by a medical device manufacturer. This example is the most detailed and “linear” of the three cases, in that it follows a sequential developmental process; the various systems engineering phases are discussed in terms of the human factors methods applied during each phase. This case study illustrates the successful implementation of well-known HSI methods, including contextual inquiry, prototyping and simulations, cognitive walkthroughs for estimating use-error-induced operational risks, iterative design, and usability evaluations that include testing and expert reviews. The importance of the incremental commitment model in phased decision making and the value of shared representations is also highlighted.

Each of these examples is presented in a somewhat different format, as appropriate to the type of development. This presentation emphasizes one broad finding from our study, which is that a “one size” system development model does not fit all. The examples illustrate tailored application of HSI methods, the various trade-offs that are made to incorporate them in the larger context of engineering development, and the overall theme of reducing the risk that operational systems will fail to meet user needs.

UNMANNED AERIAL SYSTEMS

Unmanned aerial systems (UASs) or remotely piloted vehicles (RPVs) are airplanes or helicopters operated remotely by humans on the ground or in some cases from a moving air, ground, or water vehicle. Until recently the term “unmanned aerial vehicle” (UAV) was used in the military services in reference to such vehicles as Predators, Global Hawks, Pioneers, Hunters, and Shadows. The term “unmanned aerial system” acknowledges the fact that the focus is on much more than a vehicle. The vehicle is only part of a large interconnected system that connects other humans and machines on the ground and in the air to carry out tasks ranging from UAS maintenance and operation to data interpretation and sensor operation. The recognition of the system in its full complexity is consistent with the evolution from human-machine design to human-system design, the topic of this report. It highlights an important theme of this book: the need for methods that are scalable to complex systems of systems.

Unmanned aerial systems are intended to keep humans out of harm’s way. However, humans are still on the ground performing maintenance, control, monitoring, and data collection functions, among others. Reports from the Army indicate that 22 people are required on the ground to operate, maintain, and oversee a Shadow UAS (Bruce Hunn, personal communication). In addition, there is a dearth of UAS operators relative to the current need in Iraq and Afghanistan, not to mention the U.S. borders. The growing need for UAS personnel, combined with the current shortage, points to another theme of this report: the need for human-system integration to accommodate changing conditions and requirements in the workplace.

In addition, this issue has strong ties to questions of manning. The manning questions are “How many operators does it take to operate each unmanned aerial system? Can one modify the 2:1 human to machine ratio (e.g., two humans operating one UAS) to allow for a single operator and multiple aircraft (e.g., 1:4)?” Automation is often proposed as a solution to this problem, but the problem can be much more complex. Automation is not always a solution and may, in fact, present a new set of challenges, such as loss of operator situation awareness or mode confusion. Furthermore, the manning question is a good example of how HSI design touches other aspects of human-system integration, such as manpower, personnel, and training. That is, the question of how many vehicles per operator is not merely one of automation, but also involves the number and nature of the operators in question.

A Hypothetical Case

This example is based on an ongoing debate about the manning question, which has not been fully resolved. Therefore some aspects of the case are hypothetical, yet not improbable. In this example we assume that the objective of the design is to change the operator to UAS ratio from 2:1 to 1:4. That is, instead of two operators for one UAS there will be one operator for four UASs. This operator to UAS ratio is a requirement of the type that may be promulgated by the Department of Defense with minimal HSI input. It could be too late for human-system integration, which needs to be fully integrated into the engineering life cycle before system requirements have been determined. It could be too late in the sense that up-front analysis might have revealed that an effective 1:4 ratio is beyond the capabilities of current humans and technology under the best of circumstances. If this is the case, then there is a huge risk of designing a system that is doomed to fail. Even worse, this failure may not reveal itself until the right operational events line up to produce workload that breaks the system.

In our example, we present another scenario. The design of a UAS with a 1:4 ratio of operator to system is carried through the ICM development process to illustrate the potential role of human-system integration and one of the themes of this book. The Department of Defense is one of many critical stakeholders in this scenario, all of whom are to be considered in the satisficing process that ensues.

Human-System Integration in the Context of the Incremental Commitment Model

In the earliest exploration phases of ICM development, the problem space and concept of operations are defined, and concept discovery and synthesis take place. Table 5-1 provides highlights of the entire example. It is often the case that human-system integration is not brought into the development cycle at this point, although at great risk. Up-front analyses, such as interviews of UAS operators, observations of operations of 2:1 systems, examination of mishap reports, understanding of the literature and data, an analysis of the 2:1 workload, event data analysis targeted at communications in the 2:1 UAS system, application of models of operator workload, and work flow analysis are all methods that could be used to explore the HSI issues in the current UAS system.

There is much that could come from this kind of up-front analysis. One hypothetical possibility is that the up-front HSI analyses could determine that UAS workload is not constant but peaks in target areas where photos need to be taken or in situations in which the route plan needs to change.

One of the key principles of ICM development is risk management, including risk-driven activity levels and anchor point commitment milestones. What are the risks if human-system integration is not considered early in the development life cycle? In this case, the formal requirements that are established may target workload reduction incorrectly. For example, autopilot automation might be developed to help to get multiple UASs from point A to point B and so on. This might have the effect of reducing workload when a reduction was not needed, while providing no relief from the high-workload tasks. Ultimately the neglect of up-front human-system integration could result in a system that is ineffective or prone to error. Consideration of risks like these should guide system development.

What if there is not enough time to interview UAS operators and to do a thorough job in the exploration phase? There is also risk associated with application of costly up-front techniques. The up-front methods used often during the exploration phase of the life cycle can be tailored to meet time and budget constraints—another theme of this book. For example, in this case in which the manning question is the issue and automation appears to be a promising solution, it would make sense to focus on aspects of the task that may be automated and the workload associated with each. One caveat is that decisions on how to scope and tailor the methods require some HSI expertise in order to target the aspects of human-system integration that promise the most risk reduction.

As system development progresses, other principles of ICM development come into play, including incremental growth of system development and stakeholder commitment. This part of the development life-cycle synthesis leads to construction, invention, or design that is iteratively refined as it is evaluated. HSI activities that would be useful at this point include function allocation and the development of shared representations, such as storyboards and prototypes.

Based on the previous finding of fluctuating workload, it may be decided that human intervention is needed at target areas and during route changes, but that the single operator can handle only one of these peak-workload tasks at a time. It may also be determined that, although automation could handle the routine flight task, an even more important place for automation is in the hand-off between the flight tasks and the human planning/replanning operation. The automation would therefore serve a scheduling and hand-off function, allocating complex tasks to the human operator as they arise and in order of priority (e.g., priority targets first). There could also be automation that serves as a decision aid for the targeting task.

Because only one nonroutine task can be handled at a time under the 1:4 scenario, it may also be decided that operators should be relieved of the flight functions completely but be on call for hand-offs from automation. For example, four controllers could handle the prioritized hand-offs from the automation, much as air traffic controllers handle multiple planes in a sector. Note that this new design and staffing plan are completely different in terms of operator roles and tasks from the former 2:1 operation. It is human-system integration that guided the allocation of tasks to human and machine; without it there would have been many other possibilities for automation that may not have produced the same end-state.
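A scheduling and hand-off function of this kind can be sketched as a priority queue: the automation queues nonroutine tasks and hands the highest-priority one to the operator on request. The following is a minimal illustration under assumed task names and priorities, not the actual UAS control software:

```python
# Minimal sketch (assumed design, not real UAS software) of automation that
# queues nonroutine tasks and hands them to an operator in priority order.
import heapq

class HandoffScheduler:
    def __init__(self):
        self._queue = []  # entries are (priority, sequence, task)
        self._seq = 0     # tie-breaker preserves arrival order

    def add_task(self, priority: int, task: str) -> None:
        heapq.heappush(self._queue, (priority, self._seq, task))
        self._seq += 1

    def next_handoff(self):
        # Hand off the highest-priority task (lowest number) to the operator.
        return heapq.heappop(self._queue)[2] if self._queue else None

scheduler = HandoffScheduler()
scheduler.add_task(2, "route replan, UAS-3")
scheduler.add_task(1, "priority target photo, UAS-1")
print(scheduler.next_handoff())  # -> "priority target photo, UAS-1"
```

The tie-breaking sequence number reflects the design constraint stated above: the single operator handles exactly one peak-workload task at a time, so equal-priority tasks must queue rather than interrupt each other.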

As the ICM development continues, the system engineers will go from working prototypes to product development, beta testing, product deployment, product maintenance, and product retirement. But there is continual iteration along the way. The incremental growth in the automation for scheduling, hand-offs, and targeting would occur in parallel with the next iteration’s requirements and subsystem definitions (i.e., concurrent engineering). Incremental growth will be influenced by stakeholder commitment. The HSI methods in the later stages include interviews and observations in conjunction with the newly designed system and usability testing. Some of the same methods used in up-front analysis (e.g., event data analysis, participatory analysis) can be again used and results contrasted with those of the earlier data collection.

The goal of human-system integration at this stage is to verify that the situation for the user has improved and that no new issues have cropped up in the interim. For instance, it may be determined from testing that the targeting decision aid is not trusted by the human operator (a stakeholder) and as a result is not used (a risk). Through iterations, a new design will be tested or the decision aid will be completely eliminated (i.e., stakeholder satisficing).

TABLE 5-1 Example of Human-System Integration for UASs in the Context of the Risk-Driven Spiral

Conclusion and Lessons Learned

In this example, human-system integration plays a major role throughout the design process and is critical in the early stages before requirements are established. It can be integrated throughout the design life cycle with other engineering methods. It is also clear that the HSI activities serve to reduce human factors risks along the way and make evident the human factors issues that are at stake, so that these issues can be considered as they trade off with other design issues.

This example illustrates several lessons regarding human-system integration and system design:

  • The importance and complexity of the “system” in human-system integration compared with “machine” or “vehicle.”
  • Design concerns are often linked to manpower, personnel, and training concerns.
  • Up-front analysis and HSI input in early exploration activities is critical.
  • Methods can be tailored to time and money constraints, but HSI expertise is required to do so.
  • Risks are incurred if human-system integration is not considered or if it is considered late. In this case the risk would be a system that is not usable and that ultimately leads to catastrophic failure.

PORT SECURITY

The U.S. Department of Homeland Security (DHS) is in the process of implementing a large-scale radiation screening program to protect the country from nuclear weapons or dirty bombs that might be smuggled across the border through various ports of entry. This program encompasses all land, air, and maritime ports of entry. Our example focuses on radiation screening at seaports, which have a particularly complex operational nature. Seaports are structured to facilitate the rapid offloading of cargo containers from ocean-going vessels, provide temporary storage of the containers, and provide facilities for trucks and trains to load containers for transport to their final destination. The operation involves numerous personnel, including customs and border protection (CBP) officers for customs and security inspection, terminal personnel, such as longshoremen for equipment operation, and transport personnel, such as truck drivers and railroad operators. Figure 5-1 illustrates the steps involved in the radiation screening process.

FIGURE 5-1 RPM security screening at seaports involves multiple tasks, displays, and people.

Design and deployment of radiation portal monitoring (RPM) systems for seaport operations engage the incremental commitment model to ensure commitments from the stakeholders and to meet the fundamental technical requirement of screening 100 percent of arriving international cargo containers for illicit radioactive material.

This example illustrates aspects of the ICM process with specific instances of human-system integration linked to concurrent technical activities in the RPM program. The development of RPM systems for application in the seaport environment entails an iterative process that reflects the overall set of themes developed in this book. We discuss how these themes are reflected in the engineering process.

Human-System Integration in the Context of Risk-Driven Incremental Commitments

The human factors design issues encountered in this program are very diverse, ranging from fundamental questions of alarm system effectiveness at a basic research level, to very practical and time-sensitive issues, such as the most appropriate methods of signage or traffic signaling for controlling the flow of trucks through an RPM system. HSI methods have been applied on a needs-driven basis, with risk as a driver for the nature of the application. With the issue of alarm system effectiveness, for example, it was recognized early in the program that reducing system nuisance alarms is an important issue, but one that requires a considerable amount of physics research and human factors display system modeling and design. The ICM process allowed early implementation of systems with a higher nuisance alarm rate than desirable while pursuing longer term solutions to problems involving filtering, new sensors, and threat-based displays. The nuisance alarm risk was accepted for the early implementations, while concurrent engineering was performed to reduce the alarm rate and improve the threat displays for implementation in later versions.

A contrasting example involves traffic signage and signaling. Since the flow of cargo trucks through port exits is a critical element of maintaining commercial flow, yet proper speed is necessary for RPM measurement, methods for proper staging of individual vehicles needed to be developed. Most ports involve some type of vehicle checkout procedure, but this could not be relied on to produce consistent vehicle speed through the RPM systems. Instead, the program engaged the HSI specialty to assist in developing appropriate signage and signaling that would ensure truck driver attention to RPM speed requirements.

HSI Methods Tailored to Time and Budget Constraints

Since the RPM program focus is homeland security, there has been schedule urgency from the beginning. The need for rapid deployment of RPM systems to maximize threat detection and minimize commercial impact has been the key program driver, and this has also influenced how the HSI discipline has been applied. The primary effect of program urgency and budgetary limitations has been to focus HSI efforts in work domain analysis, the modeling of human-system interactions, and theory-based analysis rather than experiment.

The work domain analysis has typically focused on gaining a rapid understanding of relatively complicated seaport operations in order to evaluate technology insertion opportunities and to better understand design requirements. In contrast to work domain analysis oriented toward cognitive decision aids, which requires time-intensive collaboration with subject matter experts, the RPM analysis worked at a coarser level to characterize staff functions and interactions, material flow, and operational tempo. Similarly, modeling of human-system interactions (such as responding to a traffic light or an intercom system) was performed at the level of detail necessary to facilitate design, rather than a comprehensive representation of operator cognitive processes—this was not required to support engineering.

Theory-based analysis of alarm system effectiveness has been conducted on a somewhat longer time scale, since the problem of human response to alarms is more complex. This work consisted of adapting traditional observer-based signal detection theory, in which the human is an active component of the detection system, to RPM systems in which the human operator evaluates the output of a sensor system that detects a threat precondition. Various threat probability analyses have been conducted in this effort, and they can be used to guide subsequent advanced RPM designs. This work has been guided by empirical studies, but it has not required an independent data collection effort.
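As a rough illustration of the signal detection arithmetic being adapted here, the classical sensitivity index d′ compares an observer's hit rate against the false-alarm rate. The sketch below uses only the Python standard library; the rates are invented for illustration and do not come from the RPM program:

```python
# Sketch of the classical signal detection computation (d') that such an
# analysis adapts. Hit and false-alarm rates below are hypothetical.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: 95% of true threats flagged, but 20% of benign cargo also flagged.
print(round(d_prime(0.95, 0.20), 2))  # -> 2.49
```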

Shared Representations Used to Communicate

The rapid-paced nature of the RPM program places a premium on effective communication between human-system integration and the engineering disciplines. In this program, fairly simple communication mechanisms that use graphics or presentation methods adapted from engineering have the best chance of successful communication. For example, it is important to evaluate the human error risks associated with new security screening systems so that mitigation approaches can be designed. One approach to describing this to the engineering community might be to simply borrow existing taxonomies from researchers in the field, such as Reason (1990). Alternatively, a more graphic and less verbose approach is to represent the analysis as a fault tree, shown in Figure 5-2. This type of representation is immediately recognizable to the engineering community and is less subject to interpretation than abstract descriptions of error typologies.

FIGURE 5-2 General model of human error analysis for security screening used as a shared representation to communicate the concept to engineering staff.
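The arithmetic such a fault tree encodes is simple to illustrate: an OR gate combines independent basic events into the probability of the top-level error. The sketch below is a minimal example with hypothetical event probabilities, not values from the program:

```python
# Minimal fault-tree arithmetic: an OR gate over independent basic events.
# All probabilities are hypothetical, for illustration only.

def or_gate(*probabilities: float) -> float:
    """P(top event) when any one of several independent basic events suffices."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# e.g., missed alarm, mis-read display, or skipped procedure step
print(round(or_gate(0.01, 0.005, 0.02), 4))  # -> 0.0347
```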


FIGURE 5-3 Graphical representation of work flow with a threat-based RPM display.

Human-system integration has used graphics to convey fairly abstract design ideas to the engineering staff, as shown in Figure 5-3. This display conveys the concept of a threat likelihood display, which informs the RPM operator about the contents of a vehicle based on processing algorithms. The graphic contrasts the eight-step process shown in Figure 5-1 with a four-step screening process, illustrating the functional utility of the display in a direct way.

Accommodation to Changing Conditions and Workplace Requirements

The RPM program started with a set of baseline designs for seaports that involved a cargo container passing through an exit gate. As the program expanded to a wider range of port operations, numerous variations in the container-processing operations became apparent. In some instances, the traffic volume is so low that the costs of installing a fixed installation are too high; alternatively, trenching limits or other physical constraints may preclude a fixed portal. Operational differences, such as moving containers direct to rail cars, also present challenges for design.


FIGURE 5-4 Standard truck exit RPM system (left), mobile RPM system (middle), and straddle carrier operation (right).

Figure 5-4 illustrates several variants of RPM operational configurations that have HSI implications. The truck exit shown in the figure is a standard design that accommodates the majority of seaport operations as they are currently configured. In order to accommodate reconfiguration and low volume, a mobile RPM system has been developed, as shown above. For ports at which straddle carriers are used to move containers directly to rail, solutions are currently being evaluated. Human-system integration has been directly responsible for operations studies of straddle carrier operation to discern technology insertion opportunities. The critical issue for seaports is that current operations do not predict future operations; the rapid expansion of imports will fundamentally alter how high-volume ports process their cargo, and HSI studies will be an important element of adapting the security screening technologies to evolving operational models.

Scalable Methods

The RPM program is large in scale—involving geographically distributed installations on a nationwide basis, multiple personnel, government agencies and private-sector stakeholders—and seaports are an element of the nation’s critical infrastructure. To make an effective contribution in this context, human-system integration has focused on problems of an aggregate nature that affect multiple installations. The methods generally employed, such as work domain analysis, probabilistic risk modeling, and timeline analysis, are applicable at an individual operator, work group, or port-wide level. Scalability is inherent in the overall goals of method application (i.e., discerning general operational constraints and potential design solutions); in the process there are requirements for “one-off” tailored solutions, but the fundamental goal is to provide generic solutions.

Principles of System Development

The development of RPM systems for application in the seaport environment has entailed an iterative process that reflects the system development principles described in this book. This section discusses how these principles are reflected in the engineering process.

Success-Critical Stakeholder Satisficing

As mentioned above, this program involves the private sector (seaport terminal management and labor), local public agencies such as port authorities, local and national transportation companies such as railroads, federal government agencies (DHS), federal contractors, and, from time to time, other federal law enforcement agencies, such as the Federal Bureau of Investigation. The issues and requirements of all need to be addressed in RPM deployments. The dual program goals of maximizing threat detection and minimizing impact on commerce define the parameters for stakeholder satisficing.

Incremental Growth of System Definition and Stakeholder Commitment

The objective of minimal disruption to ongoing seaport operations and the need to identify traffic choke points and screening opportunities require considerable up-front analysis, as well as continuing evaluation of impact as individualized deployments are designed. The general activities in this category include

  • initial site surveys to identify choke points
  • operational process analysis to identify traffic flow and screening procedures for individual seaport sites
  • adaptation of baseline screening systems to specific seaport site constraints
  • continued monitoring and evaluation of impact, including nuisance alarm rates and traffic flow, from design through deployment
  • modification of RPM system elements as required to meet security and operational missions

This process generally involves initial stakeholder meetings to establish the relationships necessary to adapt the technologies to individual operations. Based on information gathered in operational studies, conceptual designs (50-percent level) are proposed, reviewed, and revised as a more detailed understanding of requirements and impacts is obtained. This leads to more refined definitions of implementation requirements and operational impacts, which in turn lead to commitment at the 90-percent design review.

Risk Management

The involvement of multiple operational personnel in port security and seaport operations necessarily entails a variety of human factors risks when new technology is introduced. One of the major initial risks involved staffing, as customs and border protection authorities had not typically placed officers on site at seaports. A number of options for operating the security equipment were evaluated, and the decision was made that CBP would staff the seaport sites with additional schedule rotations. This reduced the risk of relying on non-law-enforcement personnel but increased the cost to the government (a trade-off). Other risks include the generally low workload associated with processing alarms (a trade-off of boredom and cost, but physical presence is guaranteed), the gradual erosion of alarm credibility based on the exclusive occurrence of nuisance alarms (a trade-off of high detection sensitivity against potential for reduced effectiveness), risks of labor disputes as more complex technology is introduced that may be seen as infringing on private-sector territory (a trade-off of the risk of a complex labor situation against the need for security screening), and transfer of training procedure incompatibilities from one location to another (i.e., procedures vary considerably from one site to another, and staff rotate among these locations—a trade-off of procedural variability against the human ability to adapt).

HSI activities tend to be deployed in this program based on continuing assessment of risks associated with individual seaport deployments. For example, HSI operational studies of straddle carrier cargo operations were undertaken midway through seaport deployments, when it was recognized that existing technology solutions could not be adapted to that type of operation. The risk of using existing technology was that seaport operations would need to fundamentally change—this would lead to an unacceptable impact on commerce. Thus operational studies were undertaken to identify potential technology insertion opportunities that would minimize the risk of commercial impact.

Concurrent System Definition and Development

The RPM program involves substantial concurrent engineering activity. The initial deployments have utilized relatively low-cost, high-sensitivity but low-resolution sensors made of polyvinyl toluene. These sensors are highly sensitive to radioactive material but tend to generate nuisance alarms because of low resolution of the type of radioactive material (naturally occurring versus threat material). While this yields high threat sensitivity, it is also nonspecific and creates a larger impact on commerce due to nuisance alarms and the need for secondary inspections.

However, development of advanced spectroscopic portals (ASPs) that utilize high-resolution sensors is taking place concurrently with the installation of lower resolution portals and will be deployed subsequently. These portals will be able to identify specific radioactive isotopes and will help to reduce nuisance alarms that create an adverse impact on commerce. Concurrent human factors research concerning threat-based displays will be used for developing appropriate end-user displays for the new systems.

“NEXT-GENERATION” INTRAVENOUS INFUSION PUMP

The next-generation infusion pump is a general-purpose intravenous infusion pump (IV pump) designed primarily for hospital use with secondary, limited-feature use by patients at home. The device is intended to deliver liquid medications, nutrients, blood, and other solutions at programmed flow rates, volumes, and time intervals via intravenous and other routes to a patient. The marketed name is the Symbiq™ IV Pump. The device will offer medication management features, including medication management safety software through a programmable drug library. The infuser will also have sufficient memory to support extensive tracking logs and the ability to communicate and integrate with hospital information systems. The infuser will be available as either a single-channel pump or a dual-channel pump. The two configurations can be linked together to form a 3- or 4-channel pump. The infuser includes a large touchscreen color display and can be powered by either A/C power or rechargeable batteries.

To ensure that the infuser has an easy-to-use user interface, the development of the product was based on a user-centered design approach. As part of this approach, the team involved potential users at each phase of the design cycle. During the first phase, the team conducted interviews with potential users and stakeholders, including nurses, anesthesiologists, doctors, managers, hospital administrators, and biomedical technicians, to gather user requirements. The team also conducted early research in the form of contextual observations and interviews in different clinical settings in hospitals as a means to understand user work flow involving infusion pumps. The information from these initial activities was used in the conceptual development phase of the next-generation infusion pump. Iterative design and evaluation took place in the development of each feature. Evaluations included interviews, usability testing in a laboratory setting, usability testing in a simulated patient environment, testing with low-fidelity paper prototypes, and testing with high-fidelity computer simulation prototypes. Computer simulations of the final user interface of each feature were used in focus groups to verify features and to obtain additional user feedback on ease of use before the final software coding began. In the final phases of development, extensive usability testing in simulated patient environments was conducted to ensure that the design intent was implemented and that ease-of-use and usability objectives were met. Throughout the development process, iterative risk analysis, evaluation, and control were conducted in compliance with the federally regulated design control process (see Figures 5-5 and 5-6).

Motivation Behind the Design

The primary motivation was to design a state-of-the-art infusion pump that would be a breakthrough in terms of ease of use and improved patient safety. Over recent decades, the quality of the user interface in many IV pump designs has fallen under scrutiny due to many human factors–related issues, such as difficulty in setting up and managing a pump’s interface through careful control and display interplay. In the past 20 years, the type, shape, and use of pumps have been, from outward appearances, very similar and not highly differentiated among the different medical device manufacturers. In fall 2002, Hospira undertook a large-scale effort to redesign the IV pump. Their mission was to create a pump that was easier to set up, easier to manage, easier to oversee patient care, and easier to use safely to help the caregiver prevent medication delivery errors. There was a clear market need for a new-generation IV pump. The Institute of Medicine in 2000 estimated 98,000 deaths a year in the United States due to medical errors (Institute of Medicine, 2000).

The User-Centered Design Process in the Context of the Incremental Commitment Model

The Symbiq™ IV Pump followed a classic user-centered design process, with multiple iterations and decision gates that are typically part of the incremental commitment model of product development. Risk management was a central theme in the development, both in terms of reducing project completion and cost risks and managing the risk of adverse events to patients connected to the device. Many of the interim project deliverables, such as fully interactive simulations of graphical user interfaces (GUI), were in the form of shared representations of the design, so that all development team members had the same understanding of the product requirements during the development cycle.

FIGURE 5-5 Two-channel IV pump with left channel illuminated. Photographs courtesy of Hospira, Inc.

FIGURE 5-6 IV tube management features. Photographs courtesy of Hospira, Inc.

Following a classic human factors approach to device design, the nurse user was the primary influence on the design of the interface and the design of the hardware. Physicians and home patient users were also included in the user profiles. Hospira embarked on a multiphase, user-centered design program that included more than 10 user studies, in-depth interviews, field observations, and numerous design reviews, each aimed at meeting the user’s expectations and improving the intelligence of the pump software aimed at preventing medication errors.

Preliminary Research

Much preliminary work needed to be done in order to kick off this development. A well-known management and marketing planning firm was hired to lead concept analysis in which the following areas were researched:

Comparison of the next-generation pump and major competitors, using traditional strengths/weaknesses/opportunities methodology, included the following features:

  • Physical specifications
  • Pump capabilities, e.g., number of channels
  • Programming options
  • Set features
  • Pressure capabilities
  • Management of air in line
  • Biomedical indicators

Competitive advantages of the next-generation pump were identified in the following areas:

  • Bar code reading capability with ergonomic reading wand
  • Small size and light weight
  • Standalone functional channels (easier work flow, flexible regarding number of pumping channels)
  • Extensive drug library (able to set hard and soft limits for the same drug for different profiles of use)
  • High-level reliability
  • Clear mapping of screen and pumping channels
  • Vertical tubing orientation that is clear and simple

An extensive competitive analysis was undertaken against the five largest market leaders. Task flows, feature lists, and capabilities were created. A prioritization of the possible competitive-advantage features and their development cost estimates was generated and analyzed.

Business risks were examined using different business case scenarios and different assumptions about design with input from the outside management consultants. Engineering consultants assisted Hospira with input on technical development issues and costs, including pump mechanisms, software platforms, and display alternatives.

Extensive market research was conducted as well to identify market windows, market segment analyses, pricing alternatives, hospital purchasing decision processes, and the influence of outside clinical practice safety groups. Key leaders in critical care were assembled in focus groups and individually to assess these marketing parameters. This process was repeated. Key outcomes were put into the product concept plan and its marketing product description document. This document also captured current and future user work needs and the related environments.

The concept team reached a decision gate with the concurrence of the management steering committee. The project plan and budget were approved and development began. Again, business risks were assessed. This step is typical in an ICM development approach.

Design Decisions

A fundamental architecture decision was reached to have an integrated design with either one or two delivery channels in a single integrated unit. Two or more integrated units could themselves be connected side by side in order to obtain up to four IV channel lines. This alternative was chosen over the competing concept of having modular pumping units that would interconnect and could be stacked onto one master unit to create multiple channels. The integrated master unit approach won out based on problems uncovered by the market research, such as a higher likelihood of lost modular units, inventory problems, and reduced battery life.

Feature Needs and Their Rationale

Based on the preliminary market research and on an analysis of medical device reports from the Food and Drug Administration (FDA) as well as complaints data from the Hospira customer service organization, the Marketing Requirements Document was completed and preliminary decisions were made to include the features described in this section. Field studies and contextual inquiry were planned as follow-on research to verify the need for these features and to collect more detail on how they would be designed.

Types of programmable therapies. Decisions were made to offer a set of complex therapies in addition to the traditional simple therapies usually offered by volumetric IV pumps. The traditional simple therapies were

  • continuous delivery for a specified period of time (often called mL/hr delivery)
  • weight-based dosing, which requires entering the patient’s weight and the ordered drug delivery rate
  • bolus delivery (delivery of a dose of medication over a relatively short period of time)
  • piggyback delivery (the delivery type that requires Channel A delivery suspension while Channel B delivers and then its resumption when Channel B completes)

The more complex therapies included

  • tapered therapy (ramping a medicine up and down on a programmed timeline; sometimes used for delivery of nutritional and hydration fluids, called total parenteral nutrition)
  • intermittent therapy (delivery of varying rates of medication at programmed time intervals)
  • variable time delivery
  • multistep delivery
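The weight-based dosing item above implies a simple conversion: the pump turns an ordered dose and the patient's weight into a flow rate. A minimal sketch of that arithmetic, with invented example values and certainly not Hospira's implementation:

```python
# Hedged sketch of the arithmetic behind weight-based dosing:
# converting an ordered dose into a pump flow rate. Values are hypothetical.

def flow_rate_ml_per_hr(dose_mg_per_kg_hr: float, weight_kg: float,
                        concentration_mg_per_ml: float) -> float:
    """mL/hr = (mg/kg/hr * kg) / (mg/mL)."""
    return dose_mg_per_kg_hr * weight_kg / concentration_mg_per_ml

# Example: 0.5 mg/kg/hr ordered for an 80 kg patient at 4 mg/mL -> 10 mL/hr
print(flow_rate_ml_per_hr(0.5, 80, 4.0))
```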

Business risks were examined to understand the sales consequences of including these features of therapy types to address the issue of stakeholder satisficing.

Medication libraries with hard and soft dosage limits. Research uncovered that several outside patient safety advocate agencies, including the Emergency Care Research Institute and the Institute for Safe Medication Practices, were recommending only IV pumps with safety software consisting of upper and lower dosage limits for different drugs as a function of the programmed clinical care area in a hospital. (Clinical care areas include the emergency room, intensive care unit, oncology, pediatrics, transplants, etc.) It became clear that it was imperative to have safety software in the form of medication libraries, programmed by each hospital, with soft limits (which could be overridden by nurses with permission codes) and hard limits (which could under no circumstances be overridden). It was decided at this time that separate software applications would need to be written for hospital pharmacy and safety committees to enter drugs in a library table with these soft and hard limits, which would vary by clinical care area in the hospital. This is an example of incremental growth and stakeholder commitment in the design process.
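A minimal sketch of how such a library-based limit check might look, assuming a hypothetical data model and invented limit values rather than the actual Symbiq™ schema:

```python
# Illustrative sketch (assumed data model, hypothetical limits) of hard- and
# soft-limit checking by clinical care area.

DRUG_LIBRARY = {
    ("icu", "heparin"): {"soft_max": 2000, "hard_max": 2500},  # units/hr, invented
}

def check_dose(care_area: str, drug: str, rate: float) -> str:
    limits = DRUG_LIBRARY[(care_area, drug)]
    if rate > limits["hard_max"]:
        return "REJECT: hard limit exceeded"   # cannot be overridden
    if rate > limits["soft_max"]:
        return "WARN: soft limit exceeded"     # override requires permission code
    return "OK"

print(check_dose("icu", "heparin", 2200))  # -> WARN: soft limit exceeded
```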

Large color touch screen. A human factors literature review was conducted to create a list of advantages and disadvantages of various input and display technologies. This research was supplemented with engineering data on the costs and reliabilities of these technologies. Again, business risks were examined, including reliability of supply of various display vendors. After much research and debate, the list of choices was narrowed to three vendors of touch-sensitive color LCD displays.

This was a breakthrough, in the sense that no current on-market IV pumps were using color touchscreen technology. A large 8.4-inch diagonal color LCD display with resistive touchscreen input was selected for further testing. A resistive touchscreen was believed to reduce errors due to poor screen response to light finger touch forces.

Another issue that required some data from use environment analysis was the required angle of view and display brightness under various use scenarios. Subsequent contextual inquiry data verified the need for viewing angles of at least +/-60 degrees horizontally and +/-30 degrees vertically. The minimum brightness, or luminance, was verified at 35 candelas per square meter. A business risk analysis examined the trade-offs between a large touchscreen display and the conflicting customer desire for small-footprint IV pumps. The larger 8.4-inch diagonal display would allow larger on-screen buttons, minimizing use errors due to inadvertent selection of adjacent on-screen buttons, as well as larger, more readable on-screen text. Again, human factors research literature and standards on display usability were included in these decisions.
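Verified values like these lend themselves to a simple acceptance check. A minimal sketch, with hypothetical measurement inputs:

```python
# Sketch of checking measured display characteristics against the verified
# requirements above (+/-60 deg horizontal, +/-30 deg vertical, 35 cd/m^2).
# The example measurements are invented.

REQUIREMENTS = {"h_angle_deg": 60, "v_angle_deg": 30, "luminance_cd_m2": 35}

def display_meets_requirements(h_angle: float, v_angle: float,
                               luminance: float) -> bool:
    return (h_angle >= REQUIREMENTS["h_angle_deg"]
            and v_angle >= REQUIREMENTS["v_angle_deg"]
            and luminance >= REQUIREMENTS["luminance_cd_m2"])

print(display_meets_requirements(70, 35, 40))  # -> True
```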

Special alarms with melodies. FDA medical device reports and customer complaint data reinforced the need for more effective visual and auditory alarms to alert IV pump users to pump fault conditions, such as air in line, occlusion in IV tubing, pending battery failure, IV bag nearly empty or unsafe dosage rates for a particular drug in a specific critical care area.

The team also decided to adopt the recommendations of the International Electrotechnical Commission (IEC) for an international standard for medical device auditory alarms to use unique melody patterns for IV pumps to distinguish these devices from other critical care devices, such as ventilators and vital sign patient monitors. These auditory alarms were later subjected to extensive lab and field studies for effectiveness and acceptability.

An early beta test in actual hospital settings with extended use subsequently showed user dissatisfaction with the harshness of some of the alarm melodies. The IEC standard had purposely recommended a discordant set of tone melodies for the highest alarm level, but clinicians, patients, and their families complained that they were too harsh and irritating. Some clinicians complained that they would not use these IV pumps at all, unless the alarms were modified. Or worse, they would permanently disable the alarms, which would create a very risky use environment.

This outcome highlights a well-known dilemma for human factors: lab studies are imperfect predictors of user behavior and attitudes in a real-world, extended-use setting. The previous lab usability studies were by their very nature short-duration exposures to these tones and showed that they were effective and alerting, but they did not capture long-term subjective preference ratings. A tone design specialist was engaged who redesigned the tones to be more acceptable, while still being alerting, attention grabbing, and still in compliance with the IEC alarm standard for melodies. Subsequent comparative usability evaluations (group demonstrations and interviews) demonstrated the acceptability of the redesigned melodies. This is a prime example of design iteration and concurrent system definition and development.

Semiautomatic cassette loading. Another early decision involved choosing between a traditional manual loading of the cassette into the IV pump or a semiautomated system, in which a motor draws a compartment into the pumping mechanism, after the clinician initially places the cassette into the loading compartment. The cassette is in line with the IV tubing and IV bag containing the medication. The volumetric pumping action is done through mechanical fingers, which activate diaphragms in the plastic cassette mechanism. Customer complaint history suggested the need for the semiautomated system to avoid use error in loading the cassette and to provide a fail-safe mechanism to close off flow in the IV line except when it was inserted properly into the IV pump.

A major problem with earlier cassette-based volumetric IV pump systems was the problem of “free flow,” in which medication could flow uncontrolled into a patient due to gravitational forces, with the possibility of severe adverse events. Early risk analysis and evaluation were done from both a business and use-error safety perspective to examine the benefit of the semiautomated loading mechanism. Later usability testing and mechanical bench testing validated the decision to select the semiautomated loading feature.

A related decision was to embed a unique LED-based lighting indication system into the cassette loading compartment that would signal with colored red, yellow, and green lights and steady versus flashing conditions the state of the IV pump in general and specifically of the cassette loading mechanism. The lights needed to be visible from at least 9 feet to indicate that the IV pump is running normally, pump is stopped, cassette is improperly loaded, cassette compartment drawer is in the process of activation, etc.

Special pole mounting hardware. Again, data from the FDA medical device reports and customer complaints indicated the need for innovative mechanisms for the mounting of the IV pump on poles. Later contextual inquiry and field shadowing exercises validated the need for special features allowing for the rapid connection and dismounting of the IV pump to the pole via quick release/activation mechanisms that employed ratchet-like slip clutches. Subsequent ergonomics-focused usability tests of hardware mechanisms validated the need and usability of these design innovations for mounting on both IV poles and special bed-mounted poles, to accommodate IV pumps while a patient’s bed is being moved from one hospital department to another.

Risk analyses for business and safety risks were updated to include these design decisions. Industrial design models were built to prototype these concepts, and these working prototypes were subjected to subsequent lab-based usability testing. Again, these actions are examples of stakeholder satisficing, incremental growth of system definition, and iterative system design.

Stacking requirements. Given the earlier conceptual design decision to have an integrated IV pump rather than using add-on pumping channel modules, decisions were needed on how integrated IV pumps could be stacked together to create additional channels. A concomitant decision was that the integrated IV pump would be offered with either one or two integrated channels. Based on risk assessment, it was decided to allow side-by-side stacking to allow the creation of a 4-channel system when desired. The 4-channel system would be electronically integrated and allow the user interface to operate as one system. Again, trade-off analyses of risks were made against the competing customer need for a smaller device size footprint. A related design decision was to have an industrial design that allowed handles for easy transportation, but would also allow stable vertical stacking while the units are stored between uses in the biomedical engineering department. Market research clearly indicated the need for vertical stacking in crowded storage areas. To facilitate safe storage of the pumps, the special pole clamps were made removable.

Tubing management. A well-known use-error problem of tangled and confusing IV tubing lines was addressed in the housing design by including several holders for storing excess tubing. Notches were also included to keep tubes organized and straight to reduce line-crossing confusion. These same holders were built as slight protrusions that protected the touchscreen from damage and inadvertent touch activation, if the pump were to be laid on its side or brushed against other medical devices.

Many other preliminary design decisions were made in these early stages that were based on both business and use-error risk analysis. In all cases, these decisions were verified and validated with subsequent data from usability tests and from field trials.

Design Process Details

The development of the Symbiq™ IV Pump followed the acknowledged best-practice iterative user-centered design process as described in medical device standards (ANSI/AAMI HE 74:2001, IEC 60601-1-6:2004, and FDA human factors guidance for medical device design controls). The following sections are brief descriptions of what was done. Table 5-2 outlines the use of these human factors techniques and some areas for methodology improvements.

Contextual Inquiry

Contextual inquiry was conducted through multiple nurse-shadowing visits to the most important clinical care areas in several representative hospitals. Several team members spent approximately a half-day shadowing nurses using IV pumps and other medical devices, observing their behaviors and problems. A checklist was used to record behaviors and, as time permitted, to ask about problem areas with IV pumps and features that needed attention during the design process. Subsequent to the field visits, one-on-one interviews with nurses were conducted to explore the contextual inquiry observations in depth. These observations and interviews were used to generate the following elements:

  • task analyses
  • use environment analyses
  • user profile analyses

Figure 5-7 shows an example of one of many task flow diagrams generated during the task analyses phases of the contextual inquiry.

Setting Usability Objectives

Quantitative usability objectives were set based on data from the contextual inquiry, user interviews, and the previous market research. Early use-error risk analysis highlighted tasks that were likely to have high risk, with particular attention to setting usability objectives to ensure that these user interface design mitigations were effective. Experience with earlier IV pump designs and user performance in usability tests also influenced the setting of these usability objectives. The objectives were primarily based on successful task performance measures and secondarily on user satisfaction measures. Examples of usability objectives were

  • 90 percent of experienced nurses would be able to insert the cassette the first time while receiving minimal training; 99 percent would be able to correct any insertion errors.
  • 90 percent of first-time users with no training would be able to power the pump off when directed.
  • 90 percent of experienced nurses would be able to clear an alarm within 1 minute as first-time users with minimal training.
  • 80 percent of patient users would rate the overall ease of use of the IV pump 3 or higher on a 5-point scale of satisfaction, with 5 being the highest value.
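Objectives stated this way can be checked mechanically against usability test results. A minimal sketch, with invented participant counts rather than data from the Symbiq™ program:

```python
# Sketch of evaluating usability test outcomes against stated objectives.
# Thresholds follow the objectives above; participant counts are invented.

OBJECTIVES = {
    "insert cassette on first try": 0.90,
    "power off when directed": 0.90,
    "clear alarm within 1 minute": 0.90,
    "rate ease of use 3+ of 5": 0.80,
}

def objective_met(task: str, successes: int, participants: int) -> bool:
    return successes / participants >= OBJECTIVES[task]

print(objective_met("clear alarm within 1 minute", 19, 20))  # 0.95 >= 0.90 -> True
```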

Early Risk Management

Many rounds of iterative risk analysis, risk evaluation, and risk control were initiated at the earliest stages of design. The risk-management process followed recognized standards in the area of medical device design (e.g., ISO 14971:2000, see International Organization for Standardization, 2000a). The risk analysis process was documented in the form of a failure modes and effects analysis (FMEA), which is described in more detail in Chapter 8. Table 5-3 presents excerpts from the early Symbiq™ FMEA. Business and project completion risks were frequently addressed at phase review and management review meetings.

TABLE 5-2 Methodology Issues and Research Needs

FIGURE 5-7 Illustrative task flow diagram from the task analysis.

The concept of risk priority number (RPN) was used in the operational risk assessment for the Symbiq™ infusion system. RPN is the product of fault probability times risk hazard severity times the probability of detecting the fault, each rated on a 5-point scale, which makes the maximum RPN value typically 125. Decision rules require careful examination of mitigation when RPN values exceed 45; RPN values between 8 and 45 require an explanation or justification of how the risk is controlled.
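As a rough illustration of these RPN decision rules, the sketch below computes an RPN on the assumed 1-to-5 scales and maps the result onto the thresholds quoted above. The function names and example ratings are hypothetical.

```python
# Minimal sketch of the RPN decision rules described above, assuming each
# factor is rated on a 1-5 scale (so the maximum product is 125).
def rpn(probability: int, severity: int, detectability: int) -> int:
    """Risk priority number: fault probability x hazard severity x detection rating."""
    for score in (probability, severity, detectability):
        if not 1 <= score <= 5:
            raise ValueError("each rating must be on a 1-5 scale")
    return probability * severity * detectability

def review_action(value: int) -> str:
    """Map an RPN value onto the decision rules quoted in the text."""
    if value > 45:
        return "careful examination of mitigation required"
    if value >= 8:
        return "explanation/justification of risk control required"
    return "below the quoted thresholds; no action specified in the text"

score = rpn(probability=3, severity=4, detectability=2)  # hypothetical failure mode
print(score, "->", review_action(score))                 # 24 -> explanation/justification ...
```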

The product requirements document (PRD) was formally created at this point to describe the details of the product design. It was considered a draft for revision as testing and other data became available. It was based on the customer needs expressed in the marketing requirements document. This document recorded the incremental growth of system definitions and stakeholder commitment and served as a shared representation of the design requirements for the development team.

Many prototypes and simulations were created for evaluation:

  • Hardware models and alternatives, including hardware industrial design mock-ups and early usability tests of those mock-ups
  • Paper prototypes for graphical user interfaces, with wireframes consisting of basic shapes, such as boxes and buttons, without finished graphic detail
  • GUI simulations using Flash™ animations
  • Early usability tests with hardware mock-ups and embedded software that delivered the Flash™ animations to a touchscreen integrated into the hardware case

The Flash animations were excellent examples of shared representations: they were used directly in the product requirements document to specify to software engineering exactly how the GUI was to be developed. All team discussions regarding GUI design focused exclusively on the Flash animation shared representations of the Symbiq™ user interface.

Integrated Hardware and Software Models with Integrated Usability Tests

As noted earlier, the usability tests performed later in the development cycle were done with integrated hardware mock-ups and software simulations. Usability test tasks were driven by tasks with high risk index values in the risk analysis, specifically the FMEA. Tasks that had formal usability objectives associated with them were also included. Although the majority of usability test tasks focused on interaction with the touchscreen-based graphical user interface, critical pump-handling tasks were included as well, such as IV pump mounting and dismounting on typical IV poles.

TABLE 5-3 Excerpts from Symbiq™ Failure Modes and Effects Analysis (FMEA)

Tests of Alarm Criticality and Alerting

The initial alarm formative usability studies, described earlier, had the goal of selecting alarms that would be alerting and attention getting, properly convey alarm priority, and communicate appropriate actions. These formative studies evaluated subjects’ abilities to identify and discriminate among different visual alarm dimensions, including colors, flash rates, and text size and contrast. For auditory alarms, subjects were tested on their ability to discriminate among various tones with and without melodies and among various cadences and tone sequences for priority levels and detectability. Subjects were asked to rate the candidate tones relative to a standard tone, which was given a value of 100; the standard was the alternating high-low European-style police siren. Subjective measures were also gathered on the tones using the PAD rating system (perceived tone pleasure, arousal, and dominance), as well as perceived criticality. Data from these studies enabled the team to make further incremental decisions on system definitions for both visual and auditory alarms and alerts.

Tests of Display Readability

Another set of early formative usability tests was conducted to validate the selection of the particular LCD touchscreen for readability and legibility. During the evaluation it was determined that the screen angle (75 degrees) and overall curvature were acceptable. The screen could be read in all tested light conditions at a 15-foot viewing distance.

Iterative Usability Tests

As noted, a series of 10 usability studies was conducted iteratively as the design progressed from early wireframes to the completed user interface with all the major features implemented in a working IV pump. In one of the intermediate formative usability tests, a patient simulator facility was used at a major teaching hospital. Users performed a variety of critical tasks in a simulated room in an intensive care unit, in which other medical devices interacted and produced noises and other distractions. The prototype IV pump delivered fluid to a mannequin connected to a patient monitor that included all vital signs. As the pump was programmed and subsequently changed (e.g., doses titrated), the software-controlled patient mannequin would respond accordingly. The patient simulator also introduced ringing telephones and other realistic conditions during the usability test. This test environment helped prove the usability of visual alarms and tones, as well as the understandability and readability of the visual displays. Final summative usability tests demonstrated that the usability objectives for the pump were achieved.

Focus Groups

Focus groups of nurses were also used as part of the usability evaluation process, complementing the task-based usability tests. Many of the focus groups had a task performance component: typically the participants would perform some tasks with new and old versions of design changes, such as time-entry widgets on the touchscreen, and then convene to discuss and rate their experiences. This added a behavioral component and addressed one of the major shortcomings of typical focus groups: that they capture only opinions and attitudes, not behaviors.

Field Studies

Field studies in the form of medical device studies were also incorporated into the design process. Thoroughly bench-tested, working beta versions of the IV pump were deployed in two hospital settings. The hospitals programmed drug libraries for at least two clinical care areas, and the devices were used for about 4 weeks. Surveys and interviews were conducted with the users to capture their real-world experiences with the pump. Data from the pump usage and interaction memory were also analyzed and compared with the original doctors’ orders. This study revealed a number of opportunities for improvement, including the perceived annoyance of the alarm melodies and the data entry methods for entering units of medication delivery time (e.g., hours or minutes).

Instructions for Use Development and Testing

Usability testing was also conducted on one of the sets of abbreviated instructions, called TIPS cards, which serve as reminders for how to complete the most critical tasks. These usability studies involved 15 experienced nurses, given minimal instructions, performing 9 tasks with the requirement that they read and use the TIPS cards. Numerous suggestions for improvement came out of this work, both for the TIPS cards themselves and for the user interface, including how to reset the air-in-line alarm, how to address the alarm, and a check of all on-screen help text for accuracy.

Validation Usability Tests

Two rounds of summative usability testing were conducted, again with experienced nurses performing critical tasks identified during the task analysis, including those with higher risk values in the risk analysis. The tasks were selected to simulate situations that the nurses may encounter while using the IV pump in a hospital setting. The tasks included selecting a clinical care area, programming simple deliveries, adding more volume at the end of an infusion, setting a “near end of infusion” alarm, titration, dose calculations, piggyback deliveries, intermittent deliveries, using standby, programming a lock, adjusting the alarm volume, and responding to messages regarding alarms.

Usability objectives were used as acceptance criteria for the summative validation usability tests, and the study objectives were met. The calculated task completion accuracy was 99.66 percent across all tasks for first-time nurse users with minimal training. The criterion that 80 percent of participants would rate overall usability 3 or higher on a 5-point scale was also met. A few minor usability problems were uncovered; these were subsequently fixed without major changes to the user interface, and none affected critical safety-related tasks.

Federal regulations on product design controls require that a product’s user interface be validated with the final working product in a simulated work environment. In this instance, the working product was used in a laboratory test, but without having the device connected to an actual patient. Bench testing is also a part of validation to ensure that all mechanical and electrical specifications and requirements have been met.

Revised Risk Analysis

As part of the incremental commitment model, the risk analysis was iterated and revised as the product development matured. FMEAs were updated for three product areas: safety-critical risks associated with the user interface, the mechanical and electrical subsystems, and the product manufacturing process. Explicit analyses of the business risks and of the costs of continued financial commitment to development funding were also updated incrementally and reviewed at various management and phase reviews.

Product Introduction

Product introduction planning included data collection from initial users to better understand remaining usage issues that can be uncovered only during prolonged usage in realistic clinical conditions. The many cycles of laboratory-based usability testing typically are never detailed enough or long enough to uncover all usability problems. The plan is to use the company complaint handling and resolution process (e.g., corrective action and preventive action) to address use issues if they arise after product introduction.

Life-Cycle Planning

The product was developed as a platform for the next generation of infusion pump products. As such, there will be continued business risk assessment during the life cycle of this first product on the new platform as well as on subsequent products and feature extensions.

Summary of Design Issues and Methods Used

This infusion pump incorporated the best practices of user-centered design in order to address the serious user interface deficiencies of previous infusion pumps. The development process took excellent advantage of the detailed data derived from an integrated HSI approach and used those data to improve and optimize the safety and usability of the design. Because of these efforts, the Symbiq™ IV Pump won the 2006 Human Factors and Ergonomics Society award for best new product design from the product design technical group.

This case study also illustrates and incorporates the central themes of this report:

  • Human-system integration must be an integral part of systems engineering.
  • Begin HSI contributions to development early and continue them throughout the development life cycle.
  • Adopt a risk-driven approach to determining needs for HSI activity (multiple applications of risk management to both business and safety risks).
  • Tailor methods to time and budget constraints (scalability).
  • Ensure communication among stakeholders of HSI outputs (shared representations).
  • Design to accommodate changing conditions and requirements in the workplace (the use of iterative design and the incremental commitment model).

This case study also demonstrates the five key principles that are integral parts of the incremental commitment model of development: (1) stakeholder satisficing, (2) incremental growth of system definition and stakeholder commitment, (3) iterative system development, (4) concurrent system definition and development, and (5) risk management—risk-driven activity levels.


In April 1991 BusinessWeek ran a cover story entitled, "I Can't Work This ?#!!@ Thing," about the difficulties many people have with consumer products, such as cell phones and VCRs. More than 15 years later, the situation is much the same—but at a very different level of scale. The disconnect between people and technology has had society-wide consequences in the large-scale system accidents from major human error, such as those at Three Mile Island and in Chernobyl.

To prevent both the individually annoying and nationally significant consequences, human capabilities and needs must be considered early and throughout system design and development. One challenge for such consideration has been providing the background and data needed for the seamless integration of humans into the design process from various perspectives: human factors engineering, manpower, personnel, training, safety and health, and, in the military, habitability and survivability. This collection of development activities has come to be called human-system integration (HSI). Human-System Integration in the System Development Process reviews in detail more than 20 categories of HSI methods to provide invaluable guidance and information for system designers and developers.


AIS Electronic Library (AISeL)


The Journal of the Southern Association for Information Systems

A Case Study of the Application of the Systems Development Life Cycle (SDLC) in 21st Century Health Care: Something Old, Something New?

Mark E. McMurtrey, University of Central Arkansas

The systems development life cycle (SDLC), while undergoing numerous changes to its name and related components over the years, has remained a steadfast and reliable approach to software development. Although there is some debate as to the appropriate number of steps, and the naming conventions thereof, nonetheless it is a tried-and-true methodology that has withstood the test of time. This paper discusses the application of the SDLC in a 21st century health care environment. Specifically, it was utilized for the procurement of a software package designed particularly for the Home Health component of a regional hospital care facility. We found that the methodology is still as useful today as it ever was. By following the stages of the SDLC, an effective software product was identified, selected, and implemented in a real-world environment. Lessons learned from the project, and implications for practice, research, and pedagogy, are offered. Insights from this study can be applied as a pedagogical tool in a variety of classroom environments and curricula including, but not limited to, the systems analysis and design course as well as the core information systems (IS) class. It can also be used as a case study in an upper-division or graduate course describing the implementation of the SDLC in practice.

DOI: 10.3998/jsais.11880084.0001.103

Recommended Citation

McMurtrey, M. E. (2013). A Case Study of the Application of the Systems Development Life Cycle (SDLC) in 21st Century Health Care: Something Old, Something New?. The Journal of the Southern Association for Information Systems, 1, 14-25. https://doi.org/10.3998/jsais.11880084.0001.103


Guide to System Development Life Cycle


What is the System Development Life Cycle?

  • 7 Stages of the System Development Life Cycle
  • Basic 6 SDLC Methodologies
  • Benefits of SDLC
  • Possible Drawbacks of SDLC

If you’re a developer or project manager, an understanding of the most up-to-date SDLC methodologies is a powerful tool. It empowers you to speed up the development process, cut costs, leverage the full creative capacity of your team, and more.

With that in mind, Intellectsoft’s best experts have created a complete guide to the system development life cycle. You’ll learn about its core meaning and phases, major software engineering methodologies, and the most important benefits it can provide during project development.

Special attention has been given to the characteristics of each of the seven SDLC phases because a thorough understanding of these different stages is required to implement both new and modified software systems.

Ready to maximize the efficiency of your systems development life cycle? Let’s dive in. 

The system development life cycle or SDLC is a project management model used to outline, design, develop, test, and deploy an information system or software product. In other words, it defines the necessary steps needed to take a project from the idea or concept stage to the actual deployment and further maintenance.

SDLC represents a multitude of complex models used in software development. On a practical level, SDLC is a general methodology that covers different step-by-step processes needed to create a high-quality software product. 

There are seven separate SDLC stages. Each of them requires different specialists and diverse skills for successful project completion. Modern SDLC processes have become increasingly complex and interdisciplinary.


That is why it’s highly recommended that project managers engage a dedicated team of professional developers. Such a team will possess enough expertise and knowledge to launch a first-class software product that perfectly corresponds to all your expectations, needs, and goals.

Let’s take a look at the core tasks associated with each of the different phases of the development life cycle.

1. Planning Stage – What Are the Existing Problems?

Planning is one of the core phases of SDLC. It acts as the foundation of the whole SDLC scheme and paves the way for the successful execution of upcoming steps and, ultimately, a successful project launch.

In this stage, the problem or pain the software targets is clearly defined. First, developers and other team members outline objectives for the system and draw a rough plan of how the system will work. Then, they may make use of predictive analysis and AI simulation tools at this stage to test the early-stage validity of an idea. This analysis helps project managers build a picture of the long-term resources required to develop a solution, potential market uptake, and which obstacles might arise. 

At its core, the planning process helps identify how a specific problem can be solved with a certain software solution. Crucially, the planning stage involves analysis of the resources and costs needed to complete the project, as well as estimating the overall price of the software developed.

Finally, the planning process clearly defines the outline of system development. The project manager will set deadlines and time frames for each phase of the software development life cycle, ensuring the product is presented to the market on time.

2. Analysis Stage – What Do We Want?

Once the planning is done, it’s time to switch to the research and analysis stage. 

In this step, you incorporate more specific data for your new system. This includes the first system prototype drafts, market research, and an evaluation of competitors. 

To successfully complete the analysis and put together all the critical information for a certain project, developers should do the following:

  • Generate the system requirements. A Software Requirement Specification (SRS) document will be created at this stage. Your DevOps team should have a high degree of input in determining the functional and network requirements of the upcoming project.
  • Evaluate existing prototypes.  Different prototypes should be evaluated to identify those with the greatest potential. 
  • Conduct market research. Market research is essential to define the pains and needs of end-consumers. In recent years, automated NLP (natural language processing) research has been undertaken to glean insights from customer reviews and feedback at scale. 
  • Set concrete goals. Goals are set and allocated to the stages of the system development life cycle. Often, these will correspond to the implementation of specific features.

Most of the information generated at this stage will be contained in the SRS. This document shapes the strict regulations for the project and specifies the exact software model you will eventually implement.

3. Design Stage – What Will the Finished Project Look Like?

The next stage of a system development project is design and prototyping. 

This process is an essential precursor to development. It is often incorrectly equated with the actual development process but is rather an extensive prototyping stage. 

This step of the system development life cycle can significantly reduce the time needed to develop the software. It involves outlining the following:

  • The system interface
  • Core software features (including architecture like microservices) 
  • User interface and usability
  • The network and its requirements

As a rule, these features help to finalize the SRS document as well as create the first prototype of the software, giving an overall idea of how it should look.

Prototyping tools, which now offer extensive automation and AI features, significantly streamline this stage. They are used for the fast creation of multiple early-stage working prototypes, which can then be evaluated. AI monitoring tools ensure that best practices are rigorously adhered to.

4. Development Stage – Let’s Create the System

In the development stage of SDLC, the system creation process produces a working solution. Developers write code and build the app according to the finalized requirements and specification documents.

This stage includes both front and back-end development. DevOps engineers are essential for allocating self-service resources to developers to streamline the process of testing and rollout, for which CI/CD is typically employed. 

This phase of the system development life cycle is often split into different sub-stages, especially if a microservice or miniservice architecture, in which development is broken into separate modules, is chosen. 

Developers will typically use multiple tools, programming environments, and languages (C++, PHP, Python, and others), all of which will comply with the project specifications and requirements outlined in the SRS document. 

5. Testing Stage – Is It the Exact One We Needed?

The testing stage ensures the application’s features work correctly and coherently and fulfill user objectives and expectations. 

This process involves detecting possible bugs, defects, and errors, searching for vulnerabilities, etc., and can sometimes take even more time than the app-building stage.

There are various approaches to testing, and you will likely adopt a mix of methods during this phase. Behavior-driven development, which uses testing outcomes based on plain language to include non-developers in the process, has become increasingly popular. 

Similarly, automated and cloud-based platforms, which simulate testing environments, take a significant amount of manual time out of this stage of the system development life cycle. Selenium, a browser testing tool, is one popular example of such a platform. 
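For instance, a minimal browser check with Selenium’s Python bindings might look like the sketch below. The target URL and assertions are placeholders, and a locally installed Chrome browser is assumed.

```python
# Minimal Selenium sketch: load a page and run a simple smoke check.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome is installed locally
try:
    driver.get("https://example.com")          # page under test (placeholder)
    assert "Example Domain" in driver.title    # smoke check on the page title
    heading = driver.find_element(By.TAG_NAME, "h1")
    print("Found heading:", heading.text)
finally:
    driver.quit()                              # always release the browser
```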

6. Integration and Implementation Stage – How Will We Use It?

Once the product is ready to go, it’s time to make it available to its end users and deploy it to the production environment. 

At this stage, the software undergoes final testing through the training or pre-production environment, after which it’s ready for presentation on the market.

It is important that you have contingencies in place when the product is first released to market should any unforeseen issues arise. Microservices architecture, for example, makes it easy to toggle features on and off. And you will likely have multiple rollback protocols. A canary release (to a limited number of users) may be utilized if necessary. 
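As a sketch of how a percentage-based canary might be wired up, the snippet below buckets users deterministically by hashing their ID, so the same user always sees the same build. The feature name and the 5 percent rollout figure are illustrative assumptions.

```python
# Sketch of a percentage-based canary rollout using a stable hash of the
# user ID, so each user consistently lands in or out of the canary group.
import hashlib

def in_canary(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user into the first `percent` of traffic."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < percent / 100

if in_canary("user-42", "new-checkout-flow", percent=5.0):
    print("serve the canary build")
else:
    print("serve the stable build")
```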

7. Maintenance Stage – Let’s Make the Improvements

The last but not least important stage of the SDLC process is the maintenance stage, where the software is already being used by end-users.

During the first couple of months, developers might face problems that weren’t detected during initial testing, so they should immediately react to the reported issues and implement the changes needed for the software’s stable and convenient usage.

This is particularly important for large systems, which usually are more difficult to test in the debugging stage.

Automated monitoring tools, which continuously evaluate performance and uptime and detect errors, can assist developers with ongoing quality assurance. This is also known as “instrumentation.”
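To give a minimal flavor of such instrumentation in a Python service: a decorator that times each call and logs failures produces exactly the kind of signal a monitoring system can aggregate. The monitored function here is a stand-in.

```python
# Minimal instrumentation sketch: record how long each call takes and log
# failures, feeding data an external monitor could scrape or aggregate.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def instrumented(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.exception("%s failed", func.__name__)  # error detection
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@instrumented
def handle_request():
    time.sleep(0.05)  # stand-in for real work

handle_request()
```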

Now that you know the basic SDLC phases and why each of them is important, it’s time to dive into the core methodologies of the system development life cycle.

These are the approaches that can help you to deliver a specific software model with unique characteristics and features. Most developers and project managers opt for one of these 6 approaches. Hybrid models are also popular.

Let’s discuss the major differences and similarities of each.

Waterfall Model


This approach implies a linear type of project phase completion, where each stage has its separate project plan and is strictly related to the previous and next steps of system development.

Typically, each stage must be completed before the next one can begin, and extensive documentation is required to ensure that all tasks are completed before moving on to the next stage. This is to ensure effective communication between teams working apart at different stages. 

While a Waterfall model allows for a high degree of structure and clarity, it can be somewhat rigid. It is difficult to go back and make changes at a later stage. 

Iterative Model


The Iterative model incorporates a series of smaller “waterfalls,” where manageable portions of code are carefully analyzed, tested, and delivered through repeating development cycles. Getting early feedback from an end user enables the elimination of issues and bugs in the early stages of software creation.

The Iterative model is often favored because it is adaptable, and changes are comparatively easier to accommodate.

Spiral Model


The Spiral model best fits large projects where the risk of issues arising is high. Changes are passed through the different SDLC phases again and again in a so-called “spiral” motion.

It enables regular incorporation of feedback, which significantly reduces the time and costs required to implement changes.

V-Model

Verification and validation methodology requires a rigorous timeline and large amounts of resources. It is similar to the Waterfall model with the addition of comprehensive parallel testing during the early stages of the SDLC process.

The verification and validation model tends to be resource-intensive and inflexible. For projects with clear requirements where testing is important, it can be useful. 

The Big Bang Model


Mostly used for creating and delivering a wide range of ideas, this model perfectly fits the clients who don’t have a clear idea or vision of what their final product should look like.

A more concrete vision of project completion is gained via delivering different system variations that may more accurately define the final output. 

While it is usually too expensive for the delivery of large projects, this SDLC methodology works perfectly for small or experimental projects.

Agile Model


The Agile model prioritizes collaboration and the implementation of small changes based on regular feedback. The Agile model accounts for shifting project requirements, which may become apparent over the course of SDLC. 

The Scrum model, which is a type of time-constrained Agile model, is popular among developers. Often developers will also use a hybrid of the Agile and Waterfall models, referred to as an “Agile-Waterfall hybrid.”

As you can see, different methodologies are used depending on the specific vision, characteristics, and requirements of individual projects. Knowing the structure and nuances of each model can help to pick the one that best fits your project.

Having covered the major SDLC methodologies offered by software development companies, let’s now review whether they are actually worth employing. 

Here are the benefits that the system development life cycle provides:

  • Comprehensive overview of system specifications, resources, timeline, and the project goals
  • Clear guidelines for developers
  • Each stage of the development process is tested and monitored
  • Control over large and complex projects
  • Detailed software testing
  • Process flexibility
  • Lower costs and strict time frames for product delivery
  • Enhanced teamwork, collaboration, and shared understanding

Just like any other software development approach, each SDLC model has its drawbacks:

  • Increased time and costs for the project development if a complex model is required
  • All details need to be specified in advance
  • SDLC models can be restrictive
  • A high volume of documentation, which can slow down projects
  • Requires many different specialists
  • Client involvement is usually high
  • Testing might be too complicated for certain development teams

While there are some drawbacks, SDLC has proven to be one of the most effective ways for successfully launching software products. 

Alternative development paradigms, such as rapid application development (RAD), may be suitable for some projects but typically carry limitations and should be considered carefully. 

The system development life cycle (SDLC) is a complex project management model that encompasses system or software creation from its initial idea to its finalized deployment and maintenance.

SDLC comprises seven different stages: planning, analysis, design, development, testing, implementation, and maintenance. All are necessary for delivering a high-quality and cost-effective product in the shortest time frame possible.

Learning about major methodologies of SDLC, along with their benefits and drawbacks, enables you to set up effective system development processes that deliver the best possible outcomes. 

At Intellectsoft, we know how important an effective project management strategy is. Our developers and specialists have a track record of building innovative software solutions that perfectly fit our clients’ business goals and requirements.

If you’re looking for a reliable software development company to turn your idea into a top-quality software product, contact our team today.

What are the 7 phases of SDLC?

The typical stages of the system development life cycle are planning and feasibility, requirements analysis, design and prototyping, software development, system testing, implementation, and maintenance.

Alternatively, the processes described above are sometimes split into 5 phases of the system development life cycle: planning, design, implementation, maintenance, and follow-up testing.

What is the most popular SDLC model?

The Agile approach is probably the most widely used SDLC model. Hybrid models are also common. At Intellectsoft, we are proficient with a wide range of models.

What are the latest SDLC innovations?

Automation and AI are transforming the way developers approach SDLC. DevOps processes have also had a significant impact. Intellectsoft works at the cutting edge of SDLC tech and can help you implement it in your organization.



Real world applications of SDLC (Software Development Life Cycle)

In this article, we will study the real-world applications of SDLC. But first, what is SDLC? The software development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. First, we will get straight to the point and discuss the real-world applications of SDLC; then we will conclude the article by summarizing its key points.

Real-World Applications of SDLC

Real-world applications of the Software Development Life Cycle (SDLC) span a wide variety of projects and industries. Here are some examples:


Data Science

A real-world example of the Software Development Life Cycle (SDLC) in data science could be the process of developing a machine learning model to predict customer churn for a telecommunications company.

  • Planning: Define project goals, data sources, and success metrics.
  • Analysis: Explore and preprocess the data, identifying relevant features and potential challenges.
  • Design: Choose a suitable machine learning algorithm and design the model architecture.
  • Implementation: Develop the model using programming languages like Python or R, and integrate it with existing systems.
  • Testing: Evaluate the model’s performance using validation techniques and adjust parameters as needed.
  • Deployment: Deploy the model into production, ensuring scalability, reliability, and security.
  • Maintenance: Monitor the model’s performance over time, retraining it with new data and making updates as necessary to adapt to changing conditions.
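To make the implementation and testing steps above concrete, here is a minimal scikit-learn sketch of such a churn model. The synthetic features stand in for real telecom usage records; the column meanings, model choice, and split sizes are illustrative assumptions.

```python
# Minimal sketch of the churn-model implementation and testing steps.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # e.g., tenure, monthly charges, support calls
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)                  # Implementation
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))  # Testing
```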

In big tech companies:

In a big tech company like Amazon, the Software Development Life Cycle (SDLC) is crucial for developing and deploying complex systems like the recommendation engine.

  • First, the planning phase involves setting goals for the recommendation system, such as increasing user engagement and sales. Then, engineers collect data from various sources, including user interactions and product metadata.
  • During development, data scientists and engineers work together to design and train machine learning models to personalize recommendations. Testing ensures that the system performs well across different user scenarios.
  • Once the system passes testing, it undergoes deployment, where it’s integrated into Amazon’s platform. Continuous monitoring ensures the system’s performance remains optimal, and regular updates are made to improve accuracy and relevance. This SDLC process enables Amazon to deliver a robust recommendation engine that enhances the shopping experience for millions of users worldwide.
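As a toy illustration of the underlying idea (a production recommender at this scale works very differently), the sketch below scores a user’s unrated items by cosine similarity to their top-rated item over a tiny rating matrix. All data and names are made up.

```python
# Toy item-similarity recommender over a small user-item rating matrix.
import numpy as np

ratings = np.array([   # rows: users, columns: items; 0 means "not rated"
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx: int) -> int:
    """Suggest the unrated item most similar to the user's top-rated item."""
    user = ratings[user_idx]
    top_item = int(np.argmax(user))
    candidates = [i for i in range(ratings.shape[1]) if user[i] == 0]
    return max(candidates, key=lambda i: cosine_sim(ratings[:, i], ratings[:, top_item]))

print("recommend item", recommend(0), "to user 0")
```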

In healthcare:

In the healthcare sector, the Software Development Life Cycle (SDLC) is essential for developing and implementing electronic health record (EHR) systems.

  • During the planning phase, healthcare providers define the requirements for the EHR system, such as patient information management, medical history tracking, and appointment scheduling.
  • Next, developers gather data from various sources, including patient records and medical databases. In the design phase, user interface designers collaborate with healthcare professionals to create an intuitive and efficient system layout.
  • Development involves building the EHR software according to specifications, ensuring compliance with regulatory standards like HIPAA. Rigorous testing is conducted to verify system functionality, data security, and interoperability with existing healthcare IT infrastructure.
  • Once testing is successful, the EHR system is deployed across healthcare facilities, with training provided to staff for seamless adoption. Continuous monitoring and maintenance ensure ongoing system performance and compliance with evolving healthcare regulations, ultimately improving patient care and operational efficiency.

Finance Industry:

In the finance industry, the Software Development Life Cycle (SDLC) plays a critical role in developing and deploying applications like online banking systems.

  • During the planning phase, financial institutions define project objectives, such as enhancing customer experience and improving transaction security.
  • In the analysis phase, developers gather requirements from stakeholders and conduct feasibility studies to assess the project’s viability.
  • The design phase involves creating system architecture and user interface mockups, ensuring a user-friendly and secure banking experience.
  • Development entails coding the online banking application according to specifications, with a focus on features like account management, fund transfers, and bill payments.
  • Extensive testing is conducted to validate system functionality, security, and compliance with regulatory standards like PCI-DSS.
  • Upon successful testing, the online banking system is deployed, with ongoing monitoring and maintenance to ensure reliability and security. This SDLC process enables financial institutions to offer robust and secure online banking services to their customers.
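As one concrete flavor of the input validation such a banking test suite might exercise, the sketch below implements the standard Luhn check used on card numbers. It is only an illustration; PCI-DSS compliance involves far more than this single check.

```python
# Illustrative input-validation helper: the standard Luhn checksum for
# card numbers, the kind of routine an online-banking test suite might cover.
def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    checksum = 0
    # Double every second digit from the right, subtracting 9 when > 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

assert luhn_valid("4539 1488 0343 6467")       # a well-known valid test number
assert not luhn_valid("4539 1488 0343 6468")   # last digit altered -> invalid
```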

In the gaming industry:

In the gaming industry, the Software Development Life Cycle (SDLC) is crucial for creating and releasing video games. For instance, consider the development of a multiplayer online game like Fortnite.

  • During the planning phase, game developers define the game’s concept, mechanics, and target audience.
  • In the design phase, designers create the game’s characters, environments, and gameplay features, ensuring they align with the intended player experience.
  • Development involves coding the game engine, user interface, and backend systems to support multiplayer functionality and online matchmaking.
  • Testing includes gameplay testing to identify bugs, balance issues, and server stability concerns.
  • Upon successful testing, the game is deployed to platforms like PC, console, and mobile devices, with regular updates and patches to address player feedback and maintain engagement.

This SDLC process enables game developers to create immersive and enjoyable gaming experiences for players worldwide.

Building IoT Devices:

In the development of Internet of Things (IoT) devices, the Software Development Life Cycle (SDLC) is pivotal for creating smart devices like smart thermostats.

  • During the planning phase, engineers define the device’s functionalities, such as temperature control and energy efficiency monitoring.
  • In the design phase, designers create the physical device’s hardware components and the software architecture for collecting and processing data.
  • Development involves coding the firmware and embedded software that control the device’s operations and communication protocols with other devices and cloud platforms.
  • Testing includes functionality testing to ensure the device operates correctly under different conditions and security testing to safeguard against potential cyber threats.
  • Upon successful testing, the IoT device is manufactured and distributed to consumers, with ongoing updates and patches to improve performance and address security vulnerabilities.

This SDLC process ensures the creation of reliable and secure IoT devices that enhance convenience and efficiency in homes and businesses.
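To give a feel for the firmware coding step in the thermostat example, here is a toy control loop: read a temperature, compare it with the setpoint using hysteresis, and switch the heater. The sensor and relay are simulated; real firmware would talk to actual hardware and loop indefinitely.

```python
# Toy firmware-style control loop for a smart thermostat (simulated I/O).
import random
import time

SETPOINT_C = 21.0
HYSTERESIS_C = 0.5

def read_temperature() -> float:
    return random.uniform(18.0, 24.0)  # stand-in for a real sensor driver

def set_heater(on: bool) -> None:
    print("heater", "ON" if on else "OFF")  # stand-in for a relay/GPIO write

heater_on = False
for _ in range(5):  # real firmware would loop forever
    temp = read_temperature()
    if temp < SETPOINT_C - HYSTERESIS_C:
        heater_on = True
    elif temp > SETPOINT_C + HYSTERESIS_C:
        heater_on = False
    print(f"{temp:.1f} °C ->", end=" ")
    set_heater(heater_on)
    time.sleep(0.1)  # sampling interval (shortened for the demo)
```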

Cloud Application:

In the development of a cloud application, such as a document collaboration platform like Google Docs, the Software Development Life Cycle (SDLC) is crucial for creating a scalable and reliable solution.

  • During the planning phase, developers define the application’s objectives, such as real-time collaboration and accessibility from any device.
  • In the design phase, architects create the application’s architecture, including the frontend interface and backend infrastructure, ensuring it can handle large volumes of users and data.
  • Development involves coding the application using web technologies like HTML, CSS, and JavaScript, and backend frameworks like Node.js or Django, with a focus on scalability and fault tolerance.
  • Testing includes performance testing to ensure the application can handle concurrent user requests and stress testing to simulate high traffic scenarios.
  • Upon successful testing, the cloud application is deployed to cloud platforms like AWS or Google Cloud, with continuous monitoring and maintenance to optimize performance and address any issues that arise.

This SDLC process enables the creation of robust and scalable cloud applications that provide seamless and reliable user experiences.
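As a rough sketch of the concurrent-request performance testing mentioned above, the snippet below fires a batch of parallel HTTP requests using only the Python standard library. The URL and request counts are placeholders; real load testing would use a dedicated tool at far higher volumes.

```python
# Rough concurrent-request sketch using only the standard library.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com"  # endpoint under test (placeholder)

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
        return resp.status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, range(50)))  # 50 requests, 10 in flight

latencies = sorted(t for _, t in results)
ok = sum(1 for status, _ in results if status == 200)
median_ms = latencies[len(latencies) // 2] * 1000
print(f"{ok}/{len(results)} OK, median latency {median_ms:.0f} ms")
```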

Conclusion: Real world applications of SDLC

In summary, the real-world applications of the Software Development Life Cycle (SDLC) demonstrate its versatility across industries like data science, healthcare, finance, gaming, IoT, and cloud computing. By planning, designing, developing, testing, deploying, and maintaining software systems, SDLC ensures efficient and effective project execution. Understanding these applications provides insights into how SDLC principles are applied to address specific challenges, driving innovation and improving software development practices. With its structured approach, SDLC remains crucial for delivering high-quality, reliable, and scalable solutions in today’s dynamic technological landscape.

FAQs: Real world applications of SDLC

What is SDLC?

SDLC is a structured way of creating and deploying applications in the software industry. It is used to design, develop, and test production-grade applications.

Why is SDLC important?

SDLC is important for creating applications properly and efficiently by studying the problem from the ground up. Following the SDLC helps projects be completed successfully with fewer errors.

Why is studying applications of SDLC important?

Studying the applications of SDLC helps us understand its real-world use cases, which in turn helps us understand SDLC better overall.


Spatial evolution and spatial production of traditional villages from “backward poverty villages” to “ecologically well-off villages”: Experiences from the hinterland of national nature reserves in China

  • Original Article
  • Published: 17 May 2024
  • Volume 21, pages 1100–1118 (2024)


  • Yiyi Zhang, ORCID: orcid.org/0009-0003-4286-2696
  • Yangbing Li, ORCID: orcid.org/0000-0002-8331-2709


With the rapid urbanization process, the space of traditional villages in China is undergoing significant changes. Studying the spatial evolution of traditional villages is significant in promoting rural spatial transformation and realizing rural revitalization and sustainable rural development. Based on the traceability analysis of spatial production theory, this paper constructed an analytical framework for the spatial production evolution of traditional villages, analyzed the spatial evolution process and characteristics of traditional villages by using buffer analysis, spatial syntax, and other research methods, and revealed the characteristics of the spatial production evolution of traditional villages and the driving mechanism. The results show that: (1) The village spatial formation and development follow the village life cycle theory and usually develop from embryonic villages to diversified and integrated villages; (2) The evolution of village spatial production is characterized by the diversity of material space, the sublimation of daily life space, and the integration of social system space and generalization of emotional space; (3) The evolution of village spatial production from backward and poor village to ecologically well-off village is influenced by a combination of factors; (4) The village has formed a spatial structure of “people-land-scape-culture-industry”, realized comprehensive reconstruction and spatial reproduction. The study results reflect the spatial evolution characteristics of traditional villages in mountainous areas in a more comprehensive way, which helps to promote the protection and development of traditional villages in mountainous areas and, to a certain extent, provides a reference for the development of rural revitalization.



Yang R, Lu J, Li W (2022) Multi-dimensional spatial evolution of typical traditional villages in the Pearl River Delta’s urban fringe is influencing mechanism. Economic Geography 42(03):190–199. (In Chinese)

Yang YY, Bao WK, Liu YS (2020) Coupling coordination analysis of rural production-living-ecological space in the Beijing-Tianjin-Hebei region 117(4):106512. https://doi.org/10.1016/j.ecolind.2020.106512

Ye C, Cai Y (2012) A case study of change in geographic thought: Harvey’s academic transformation. Journal of Geography 67(1):10. (In Chinese)

Zeng M, Wang F, Xiang S, et al. (2020) Inheritance or variation? Spatial regeneration and acculturation via implantation of cultural and creative industries in Beijing’s traditional compounds. Habitat Int 95:102071. https://doi.org/10.1016/j.habitatint.2019.102071

Zhang QY, Ye C, Duan JJ (2022) Multi-dimensional superposition: Rural collaborative governance in Liushe Village, Suzhou City. J Rur Stud 96:141–153. https://doi.org/10.1016/j.jrurstud.2022.10.002

Zhang YZ, Baimu SL, Tong J, et al. (2018) Geometric spatial structure of traditional Tibetan settlements of Degger County, China: A case study of four villages. Front Archit Res. https://doi.org/10.1016/j.foar.2018.05.005

Zhang YX, Hu YX, Zhang B, et al. (2020) Conflict between nature reserves and surrounding communities in China: an empirical study based on a social and ecological system framework. Glob Ecol Conserv 21. https://doi.org/10.1016/j.gecco.2019.e00804

Zhao XY, Ju SL, Wang WJ, et al. (2022) Intergenerational and gender differences in farmers’ satisfaction with rural public space: Insights from a traditional village in Northwest China. Appl Geogr 146:102770. https://doi.org/10.1016/j.apgeog.2022.102770

Zou LL, Liu YS, Yang JX, et al. (2020) Quantitative identification and spatial analysis of land use ecological-production-living functions in rural areas on China’s southeast coast. Habitat Int 100, 102182.10. https://doi.org/1016/j.habitatint.2020.102182

Download references

Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant No. 42061035) and the Guizhou Provincial Program on Commercialization of Scientific and Technological Achievements ([2022]010). We are very grateful to the anonymous reviewers who helped to improve the clarity and relevance of our research presentation.

Author information

Authors and affiliations.

College of Geography and Environmental Sciences, Guizhou Normal University, Guiyang, 550001, China

Yiyi Zhang & Yangbing Li

You can also search for this author in PubMed   Google Scholar

Contributions

ZHANG Yiyi: Methodology, Investigation, Writing-original draft, Visualization. LI Yangbing: Supervision, Project administration, Funding acquisition.

Corresponding author

Correspondence to Yangbing Li .

Ethics declarations

Conflict of Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Rights and permissions

Reprints and permissions

About this article

Zhang, Y., Li, Y. Spatial evolution and spatial production of traditional villages from “backward poverty villages” to “ecologically well-off villages”: Experiences from the hinterland of national nature reserves in China. J. Mt. Sci. 21 , 1100–1118 (2024). https://doi.org/10.1007/s11629-023-8349-2

Download citation

Received : 13 September 2023

Revised : 19 March 2024

Accepted : 22 March 2024

Published : 17 May 2024

Issue Date : April 2024

DOI : https://doi.org/10.1007/s11629-023-8349-2

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Traditional villages
  • Spatial production
  • Spatial evolution
  • Spatial reconstruction
  • Find a journal
  • Publish with us
  • Track your research

How capital expenditure management can drive performance

One of the quickest and most effective ways for organizations to preserve cash is to reexamine their capital investments. The past two years have offered a fascinating look into how different sectors have weathered the COVID-19 storm: from the necessarily capital expenditure–starved airport industry to the cresting wave of public-sector investments in renewable infrastructure and anticipation of the next mining supercycle. Indeed, companies that reduce spending on capital projects can both quickly release significant cash and increase ROIC, the most important metric of financial value creation (Exhibit 1).

This strategy is even more vital in competitive markets, where ROIC is perilously close to cost of capital. In our experience, organizations that focus on actions across the whole project life cycle, the capital project portfolio, and the necessary foundational enablers can reduce project costs and timelines by up to 30 percent to increase ROIC by 2 to 4 percent. Yet managing capital projects is complex, and many organizations struggle to extract cost savings. In addition, ill-considered cuts to key projects in a portfolio may actually jeopardize future operating performance and outcomes. This dynamic reinforces the age-old challenge for executives as they carefully allocate marginal dollars toward value creation.
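
To make the ROIC arithmetic concrete, here is a minimal sketch in Python with invented figures (not data from the exhibit), treating ROIC as NOPAT divided by invested capital. Trimming capital spending that would otherwise enter the denominator lifts the ratio even before any effect on profit.

```python
# Minimal ROIC illustration; all figures are hypothetical.
nopat = 1_200.0               # net operating profit after taxes, $M
invested_capital = 10_000.0   # $M
capex_avoided = 800.0         # capital spending deferred or cut, $M

roic_before = nopat / invested_capital
# Simplifying assumption: the avoided capex would have entered invested
# capital without adding near-term profit.
roic_after = nopat / (invested_capital - capex_avoided)

print(f"ROIC before: {roic_before:.1%}")   # 12.0%
print(f"ROIC after:  {roic_after:.1%}")    # 13.0%
```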

Companies can improve their odds of success by focusing on three areas of the project life cycle: capital strategy and portfolio optimization, project development and value improvement, and project delivery and construction, while investing in foundational enablers.

Cracking the code on capital expenditure management

Despite the importance of capital expenditure management in executing business strategy, preserving cash, and maximizing ROIC, most companies struggle in this area for two primary reasons. First, capital expenditure is often not a core business; instead, organizations focus on operating performance, where they have extensive institutional knowledge. When it comes to capital projects, executives rely on a select few people with experience in capital delivery. Second, capital performance is typically a black box. Executives find it difficult to understand and predict the performance of individual projects and the capital project portfolio as a whole.

Across industries, we see companies struggle to deliver projects on time and on budget (Exhibit 2). In fact, cost and schedule overruns compared with original estimates frequently exceed 50 percent. Notably, these overruns occur in both the public and private sectors.

The COVID-19 pandemic has accelerated and magnified these challenges. Governments are increasingly viewing infrastructure spending as a tool for economic stimulus, which amplifies the cyclical nature of capital expenditure deployments. At the same time, some organizations have had to make drastic cutbacks in capital projects because of difficult economic conditions. Reliance on just a few experienced people, at a time when travel restrictions necessitated a remote-operating model, further increased the complexity. As a result, only a few organizations have been able to maintain a through-cycle perspective.

In addition, current inflation could put an end to the historically low interest rates that companies have been enjoying to finance their projects. As the cost of capital rises, discipline in managing large projects will become increasingly important.

Improving capital expenditure management

In our experience, the organizational issues that impede capital expenditure management affect all stages of a project life cycle, from portfolio management to project execution and commissioning. Best-in-class capital development and delivery require companies to outperform in three main areas, supported by several foundational enablers (Exhibit 3).

Recipes for capturing value

Companies can transform the life cycle of a capital expenditure project by focusing on three areas: capital strategy and portfolio optimization, project development and value improvement, and project delivery and construction. The savings potential of each area applies on a stand-alone basis, though their impacts overlap somewhat. In our experience, companies that deploy these best practices are able to save 15 to 30 percent of a project's cost.

Capital strategy and portfolio optimization

The greatest opportunity to influence a project’s outcome comes at its start. Too often, organizations commit to projects without a proper understanding of business needs, incurring significant expense to deliver an outcome misaligned with the overall strategy. Indeed, a failure to adequately recognize, price, and manage the inherent risks of project delivery is a recurring issue in the industry. Organizations can address this challenge by following a systematic three-step approach:

Assess the current state of capital projects and portfolio. It’s essential to identify strengths, areas of improvement, and the value at stake. To do so, organizations must build a transparent and rigorously tested baseline and capital budget, which should provide a clear understanding of the overall capital expenditure budget for the coming years as well as accurate cost and time forecasts for an organization’s portfolio of capital projects.

Ensure capital allocation is linked to overall company strategy. Companies must set an enterprise-wide strategy, assess the current portfolio against the relevant market with forward-looking assessments and cash flow simulation, and review sources and uses of cash to determine the amount of capital available. Particular focus should be given to environmental, social, and governance (ESG) considerations, both by proactively managing risks and by capturing the full upside opportunity of new projects, because sustainability is becoming a real source of shareholder value (Exhibit 4). With this knowledge, organizations can identify internal and external opportunities to strengthen their portfolio based on affordability and strategic objectives.

Optimize the capital portfolio to increase company-wide ROIC. Executives should distinguish between projects that are existing or committed, planned and necessary (for legal, regulatory, or strategic requirements), and discretionary. They can do so by challenging each project's justification, classification, benefit estimates, and assumptions to ensure they are realistic. This analysis helps companies define and calibrate their portfolios by prioritizing projects based on KPIs and discussing critical projects not yet in the portfolio. Executives can then verify that the portfolio is aligned with the business strategy, risk profile, and funding constraints; a minimal sketch of this kind of prioritization follows.
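
The article stops short of prescribing a mechanism, so the following is only a sketch under stated assumptions: a hypothetical Project record and a greedy calibration that funds committed and must-have work first, then ranks discretionary projects by risk-adjusted value per dollar of capital until the budget is exhausted.

```python
from dataclasses import dataclass

@dataclass
class Project:                # hypothetical structure for illustration
    name: str
    capex: float              # required capital, $M
    risk_adj_npv: float       # risk-adjusted NPV, $M
    must_have: bool           # committed, legal, or regulatory

def calibrate_portfolio(projects: list[Project], budget: float) -> list[Project]:
    """Fund must-haves first, then discretionary projects ranked by
    value created per dollar of capital, within the budget."""
    selected: list[Project] = []
    spent = 0.0
    must_haves = [p for p in projects if p.must_have]
    discretionary = sorted(
        (p for p in projects if not p.must_have),
        key=lambda p: p.risk_adj_npv / p.capex,
        reverse=True,
    )
    for p in must_haves + discretionary:
        # Must-haves are funded regardless; discretionary only if affordable.
        if p.must_have or spent + p.capex <= budget:
            selected.append(p)
            spent += p.capex
    return selected
```

A real calibration would layer in interdependencies, risk appetite, and strategic weightings, but the ranking-by-value-per-dollar core stays the same.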

For example, a commercial vehicle manufacturer recently undertook a rigorous review of its project portfolio. After establishing a detailed baseline covering several hundred planned projects in one data set, the manufacturer classified the projects into two categories: must-have and discretionary. It also considered strategic realignment in light of the shift to e-mobility and its implications for investments in internal-combustion-engine vehicles. Last, it scrutinized individual maintenance projects to reduce their scope. Overall, the manufacturer uncovered opportunities to decrease its capital expenditure budget by as much as 20 percent, and this strict review process became part of its annual routine.

Project development and value improvement

While value-engineering exercises are common, we find that 5 to 15 percent of additional value is typically left on the table. Too often, organizations focus on technical systems and incremental improvements. Instead, executives should consider the full life cycle cost across several areas:

Sourcing the right projects with the right partners. Companies must ensure they are sourcing the right projects by aligning on prioritization criteria and identifying the sectors to play in based on their strategy. Once these selections are made, organizations can use benchmarking and advanced-analytics tools to accelerate project timelines and improve planning. Building the right consortium of contractors and partners at the outset and establishing governance and reporting can have a huge impact. Best-in-class teams secure the optimal financing, which can include public and private sources, by assessing the economic, legal, and operational implications for each option.

A critical success factor is a strong tendering office, which focuses on choosing better projects. It can increase the likelihood of winning through better partnerships and customer insights and enhance the profitability of bids with creative solutions for reducing cost and risk. Best-in-class tendering offices identify projects aligned with the company’s strategy, have a clear understanding of success factors, develop effective partnerships across the value chain, and implement a risk-adjusted approach to pricing.

Achieving the full potential of preconstruction project value. Companies can take a range of actions to strengthen capital effectiveness. For example, they should consider the project holistically, including technical systems, management systems, and mindsets and behaviors. To ensure they create value across all stages of the project life cycle, organizations should design contract and procurement interventions early in the project. An emphasis on existing ideas and proven solutions can help companies avoid getting bogged down in developing new ones. For instance, a minimum-technical-solution approach can identify the highest-value projects by challenging technical requirements once macro-elements are confirmed.

Companies should also seek to formalize dedicated systems and processes to support decision making and combat bias. We have identified five types of biases to which organizations should pay close attention (Exhibit 5). For instance, interest biases should be addressed by increasing transparency in decision making and aligning on explicit decision criteria before assessing the project. Stability biases can also be harmful. We have seen it too many times: companies carry a number of underperforming projects that just won't die and that tie up scarce resources. Organizations should invest in quickly determining when to halt projects—and actually stop them.

Setting up a system to take action in an unbiased way is a crucial element of best-in-class portfolio optimization. Changing the burden of proof can also help. One energy company counterbalanced executives' natural desire to hang on to underperforming assets with a systematic process for continually upgrading the company's portfolio. Every year, the CEO asked the corporate-planning team to identify 3 to 5 percent of the company's assets that could be divested. The divisions could retain any assets placed in this group, but only if they could demonstrate a compelling turnaround program for them. The burden of proof was on the business units to show that an asset should be retained, rather than assuming it should be. A toy version of this annual screen appears below.
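
Here, as a sketch (the function and data shapes are our invention, not the energy company's system), the bottom few percent of assets by return are flagged mechanically, leaving the turnaround case to the business units:

```python
def divestment_candidates(asset_roic: dict[str, float],
                          fraction: float = 0.04) -> list[str]:
    """Flag the lowest-returning ~3-5% of assets (by count) for the
    annual divestment review. Keys are asset names, values are ROIC."""
    ranked = sorted(asset_roic, key=asset_roic.get)   # worst return first
    n = max(1, round(len(ranked) * fraction))
    return ranked[:n]

# With 50 assets and fraction=0.04, two assets land on the review list.
```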

An effective governance system ensures that all ideas generated from project value improvements are subject to robust tracking and follow-up. Further, the adoption of innovative digital and technological solutions can enhance standardization, modularization, transparency, and efficiency. A power company recently explored options to phase out coal-powered energy using a project value improvement methodology and a minimum technical solution. The process helped to articulate options that maximize ROI and minimize greenhouse-gas emissions. An analysis of each option, using an idea bank of more than 2,000 detailed ideas, let the company find solutions to reduce investment in features with little value added, reallocate spending to more efficient technologies, and better match capacity configurations to business needs. Ultimately, the company reduced capital costs by 30 percent while increasing CO2 abatement by the same amount.

Designing the right project organization. An open, collaborative, and result-focused environment enabled by stringent performance management processes is critical for success, regardless of the contractual arrangement between owners and contractors. Improving capital project practices is possible only if companies measure those practices and understand where they stand compared with their peers. The organization should be designed with a five-year capital portfolio in mind and built by developing structures for project archetypes and modeling the resources required to deliver the capital plan. A rigorous stage-gate process of formal reviews should also be implemented to verify the quality of projects moving forward. Too many projects are rushed through phases with no formal review of their deliverables, leading to a highly risky execution phase, which usually results in delays and cost overruns.

As successful organizations demonstrate, addressing organizational health in project teams is as important as performance initiatives. McKinsey research has found that the healthiest organizations generate three times higher returns than companies in the bottom quartile and more than 60 percent higher returns than companies in the middle two quartiles.[1]

[1] Scott Keller and Bill Schaninger, Beyond Performance 2.0: A Proven Approach to Leading Large-Scale Change, second edition, Hoboken, NJ: Wiley, 2019.

Project delivery and construction

Since the root causes of poor performance—project complexity, data quality, execution capabilities, and incentives and mindsets—can be difficult to identify and act on, organizations can benefit from taking the following actions across project delivery and construction dimensions.

Optimize the project execution plan. Organizations should embrace principles of operations science to develop an optimized configuration for the production system and to set a competitive, realistic baseline for the project. This execution plan identifies the execution options that could be deployed on the project and the key decisions that need to be made. Companies should also break the execution plan into its microproduction systems and visualize the complicated schedule. Approaching capital projects as production systems allows companies to apply operations science across process design, capacity, inventory, and variability, as the sketch below illustrates.
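
One staple of operations science that could anchor such a plan is Little's law, which ties work in process to throughput and cycle time (WIP = TH × CT). A minimal sketch with invented numbers:

```python
# Little's law: WIP = throughput * cycle time. All numbers are invented.
target_throughput = 12.0   # work packages completed per week
cycle_time = 3.0           # weeks each package spends in the system

wip_required = target_throughput * cycle_time
print(f"Plan implies ~{wip_required:.0f} packages in process at once")

# If site constraints cap work in process at 24 packages, the
# achievable completion rate drops accordingly:
wip_cap = 24.0
print(f"Capacity-limited throughput: {wip_cap / cycle_time:.0f} per week")
```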

Manage contracts, claims, and change orders. While claims are quite common on capital projects, proactive management can keep them under control and allow owners to retain significant value. Focusing on claims avoidance when drafting terms and conditions can head off many claims before they arise. In addition, partnering with contractors creates a more collaborative environment, making them less inclined to pursue an aggressive claims strategy. To manage change orders on a project, companies should address their contract management capability, project execution change management, and project closeout negotiation support. A European chemical company planning to build greenfield infrastructure in a new Asian geography recently employed this approach. It reduced risk on the project by bringing together bottom-up, integrated planning and performance management with targeted lean-construction interventions. By doing so, the company reduced the project's duration by a year, achieved on-time delivery, and stayed within its €1 billion budget.

Enablers of the capital transformation

These three value capture areas must be supported by a capable organization with the right tools and processes—what we call the “transformational chassis.” To establish this infrastructure, organizations should focus on several activities.

Performance management

The best organizations institute a performance management system to implement a cascading set of project review meetings focused on assessing the progress of value-creation initiatives. Building on a foundation of quality data, the right performance conversations must take place at all levels of the organization.

Companies should also be prepared to reexamine their stage-gate governance system to shift from an assurance mindset (often drowning in bureaucracy and needless reporting) to an investor mindset. Critical value-enabling activities should be defined at each stage of the project life cycle, supported by a playbook of best practices for execution and implemented by a project review board. While governance processes exist, they often involve reporting without decision making or are not focused on the right outcomes—for example, ensuring that the investment decision and thesis remain valid through a project’s life. Quite often, companies provide incentives for project managers to execute an outdated project plan rather than deliver against the organization’s needs and goals.

Creating project transparency is also critical. Companies should establish a digital nerve center—or control tower—that collects field-level data to establish a single source of truth and implement predictive analytics. Equally important, companies must address capability building to ensure that the team has a solid understanding of the baseline and embraces data-based decision making.

Companies should stand up delivery teams that integrate owner and contractor groups across disciplines and institute a consistent and effective project management rhythm that can identify risks and opportunities over a project’s duration. Once delivery teams prioritize the biggest opportunities, dedicated capacity should be allocated to solve a project’s most challenging problems. Finally, companies should build and deploy comprehensive programs that improve culture and workforce capabilities throughout the organization, including the front line.

Capital analytics

Many organizations struggle to get a clear view of how projects are performing, which limits the possibility for timely interventions, decision making, and resource planning. By digitalizing the performance management of construction projects using timely and transparent project data, companies can track value capture and leading indicators while making data available across the enterprise. Using a single source of truth can reduce delivery risk, increase responsiveness, and enable a more proactive approach to the identification of issues and the capture of opportunities. The most advanced projects build automated, real-time control towers that consolidate information across systems, engineering disciplines, project sites, contractors, and broader stakeholders. The ability to integrate data sets speeds decision making, unlocks further insights, and promotes collaborative problem solving between the company that owns the capital project and the engineering, procurement, and construction company.
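
The article does not say which leading indicators a control tower should compute; one common choice (our assumption, not the source's) is earned-value indices, which turn field data into early cost and schedule warnings:

```python
def evm_indices(ev: float, ac: float, pv: float) -> tuple[float, float]:
    """Earned-value management: CPI = EV/AC (cost efficiency) and
    SPI = EV/PV (schedule efficiency); values below 1.0 signal drift."""
    return ev / ac, ev / pv

# Invented field data for one project, $M:
cpi, spi = evm_indices(ev=42.0, ac=50.0, pv=45.0)
if cpi < 1.0 or spi < 1.0:
    print(f"Flag for review: CPI={cpi:.2f}, SPI={spi:.2f}")
```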

Ways of working

In many cases, executives are unwilling to engage in comprehensive capital reviews because they lack a sufficient understanding of capital management processes, and project managers can be reluctant to expose gaps in proficiency. Agile practices can facilitate rapid and effective decision making by bringing together cross-functional project teams. Under this approach, organizations establish daily stand-ups, weekly showcases, and fortnightly sprints to help eliminate silos and maintain a focus on top priorities. Agility must be supported by an organizational structure, well-developed team capabilities, and an investment mindset. Organizations should also build skills and establish a culture of cooperation to optimize their capital investments.

We recognize that getting capital expenditure management right can feel like a lot to do well. And although many companies perform some of these tasks capably, pockets of organizational excellence can be undermined instantly (and sometimes existentially) by one big project that goes wrong or by a strategic misfire that pushes an organization from leader to laggard in the investment cycle. In some ways, capital expenditure management leaders face challenges similar to those in other functions that have already undergone major productivity improvements: often these challenges are not technical problems but instead relate to how people work together toward a common goal.

Yet we believe organizations have a significant opportunity to fundamentally improve project outcomes by rethinking traditional approaches to project delivery. Sustainable improvements can be achieved by resizing the project portfolio, optimizing the cash flows for individual projects, and improving and reducing individual project delivery risk.
