
Six Sigma is a method that provides organizations with tools to improve the capability of their business processes.

This increase in performance and decrease in process variation leads to defect reduction and improvements in profits, employee morale, and the quality of products or services.

Lean Six Sigma DMAIC Case Study

The quality team at a commercial lending company is approached by the director of sales. He is very concerned about meeting his sales numbers this year.

The people on his sales team have been telling him that they are losing deals to competitors because of customer satisfaction issues: they cannot meet the customers’ expectations in terms of delivering funds on time.

Consequently, the customers are getting their loans from the firm’s competitors.

The sales director wants to know how the company can use Lean Six Sigma to decrease its response time to the customer.

Define Phase Deliverables

Step 1: Problem Overview

The company's response time to loan applicants is too long, and deals are being lost to competitors as a result. The sales director wants to know how the company can use Lean Six Sigma to decrease its response time to the customer.

Who Is the Customer?

While the internal customer for this project is the sales director, the ultimate customer for whom the process has to improve is the individual or business requesting the loan.

What Is the Project CTQ?

Based on VOC collected by the sales director, loan applicants want a faster response from the lending company.

Step 2: Outline the Business Case

Based on data collection and interviews with key executives, the Lean Six Sigma team has developed the preliminary business case shown in the figure below:

Step 3: Develop a High-Level Process Map

Using the SIPOC tool, the Lean Six Sigma team was able to better define the boundaries of the project, as shown in the figure below:


Step 5: Define the CTQ Characteristics (Project Y)

With the approval from the Lean Six Sigma Steering Committee, the team can now proceed to the Measure phase.

A detailed process map is created to ensure that everyone has a common understanding of the flow, key inputs, and deliverables.

A segment of the map is shown in the figure below:

The map helps the team identify the process steps that have historically caused issues in approving loan applications.

Using the fishbone tool (Figure below), the team begins discussing the possible root causes of the issues that are leading to unacceptable response times.

Armed with this information, the team now focuses its data collection and analysis efforts on a few key drivers.

Step 6: Outline Performance Standard

The team has to determine the best way to measure the problem and also agree on a definition of a defect.

Using interviews with customers and with key staff members in sales and operations, the team is able to define an upper specification, a lower specification, a target, and a defect definition (see the figure below).

Steps 7 and 8: Develop a Data Collection Plan and Validate the Measurement System

The Lean Six Sigma team wanted to ensure that the complaints received by the sales team could be validated by data—that is, were the results really as bad as the customers were saying?

However, to do this, the team needed to ensure that the data collection and measurement systems were adequate.

Loan applications were either sent in electronically, in which case the system generated an automatic time stamp, or they were mailed in.

The applications with system time stamps did not require further investigation—the second the loan officer pressed “submit application,” the system would generate a time stamp. However, for the 30 percent of the applications that were received via mail, the application was scanned by the administrator and the application information was manually entered into the system.

While it was impossible to determine the length of time that the application waited to be scanned, the team could determine the cycle time between scanning and data entry (which then would make the application available for operations to begin the review process).

The team agreed that it should only take four minutes to enter data, and that anything beyond that could indicate a measurement system issue. Based on 250 data points, there were only four instances in which the system time stamp and the file time stamp did not meet the four-minute timeline (see Table 2.9).

Therefore, the measurement system accuracy is (1 − 4/250) × 100% ≈ 98.4 percent.
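The accuracy figure is simple arithmetic, and can be sketched as a quick check (counts taken from the study):

```python
# Measurement system accuracy: share of observations whose system and
# file time stamps agreed within the four-minute data-entry window.
total_points = 250   # observations collected
mismatches = 4       # instances exceeding the four-minute timeline

accuracy = (1 - mismatches / total_points) * 100
print(f"Measurement system accuracy: {accuracy:.1f}%")  # 98.4%
```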

The team could, with confidence, rely on the historical data to provide a true depiction of process performance.


Step 9: Baseline the Process’s Current Capability

The members of the project team calculated the process capability to ensure that they had properly documented the current performance level before any improvements were implemented.

Using data for the previous year, they calculated the difference between the time when the borrower information packet was received and the time when a final decision was communicated to the borrower.

Using statistical software (or Excel), they calculated the values for the average and standard deviation.

Since they knew that the customer expects an answer within 3 days (72 hours), a z score was calculated as z = (USL − x̄) / s, where USL is the 72-hour upper specification limit, x̄ is the mean response time, and s is the standard deviation:

The sigma value, or z score, of this process is 0.3, well below 1.

This is a very poorly performing process.
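The capability calculation can be sketched as follows; the mean and standard deviation are hypothetical values chosen to reproduce a z score near 0.3, since the source reports only the result:

```python
# Process capability (z score) against the 72-hour upper spec limit.
# Mean and standard deviation are illustrative; the case study
# reports only the resulting z score of about 0.3.
usl = 72.0       # customer expects an answer within 72 hours
mean_ct = 64.0   # hypothetical mean response time (hours)
std_ct = 26.7    # hypothetical standard deviation (hours)

z = (usl - mean_ct) / std_ct
print(f"z score: {z:.1f}")  # ~0.3
```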

The histogram below compares the current performance of the process with customer expectation (upper specification level) and the performance level required (target) in order to consistently meet customers’ needs.

Step 10: Define the Performance Objectives for the Process

For this project, since the customer requirements were defined very clearly, benchmarking to refine the project goals was not required.

The team just moved on to Step 11, trying to identify the root causes of poor performance.

Step 11: Identify Sources of Variation

To confirm that there is a relationship between response time and lost deals, the team members conducted a correlation study and also plotted the data using a scatter plot.

Using data from two quarters, they plotted the average response time for each month and the number of lost deals (see below).

Using statistical software, the correlation coefficient r was calculated to be 0.979.

Since this value is very close to 1, there is a strong statistical relationship between response time and lost deals; that is, the longer it takes to respond to the customer, the higher the chances that the borrower will choose another lender.
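A correlation check like this one can be sketched with NumPy; the monthly figures below are hypothetical stand-ins for the two quarters of data:

```python
import numpy as np

# Hypothetical monthly averages for six months (two quarters):
# average response time (hours) and deals lost that month.
response_time = np.array([40, 48, 55, 60, 68, 75])
lost_deals = np.array([2, 3, 5, 6, 8, 10])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(response_time, lost_deals)[0, 1]
print(f"correlation coefficient r = {r:.3f}")
```

An r close to 1, as here, indicates that lost deals rise almost linearly with response time.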

Now that the team members understood the relationship between cycle time and lost deals, they needed to determine whether the competitor was really much better at responding to customers.

To find the answer, they conducted a hypothesis test.

They compared their response time to that of their competitors.

Using a t test, they set up the following test:

Null hypothesis: average cycle time for the company = average cycle time for competitors.

Alternative hypothesis: average cycle time for the company ≠ average cycle time for competitors.

Using statistical software, the results of the t test are:

The company’s competitors are responding to a borrower within 36 hours (1.5 days) of having received his information, whereas the company is responding after 64 hours (2.7 days).

The p value of less than 0.05 confirms that there is a statistical difference between the performance of the company and its competitors (when the p value is less than 0.05, the null hypothesis is rejected).
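The two-sample t test can be sketched with SciPy; the cycle-time samples below are hypothetical, constructed around the reported averages of roughly 64 and 36 hours:

```python
from scipy import stats

# Hypothetical response-time samples (hours), centered on the
# reported averages: ~64 h for the company, ~36 h for competitors.
company = [60, 62, 65, 68, 63, 66, 61, 67]
competitor = [34, 36, 38, 35, 37, 33, 39, 36]

t_stat, p_value = stats.ttest_ind(company, competitor)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the means differ.")
```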

Armed with the statistical confirmation that the company is facing better-performing competitors, the team moves into the Improve phase.


Step 12: Identify the Vital Xs and Implementable Solutions

In the Measure phase, using a fishbone, the team agreed that the manual data entry process had the largest impact on the overall process cycle time. In the Improve phase, the team had to understand why.

Using the same tool, the fishbone, the team conducted another brainstorming session, and the results pointed to one central reason: missing critical information.

As outlined in the fishbone diagram below , some of the main reasons for missing critical customer information were:

  1. The method by which information was received (mail, e-mail, or fax)
  2. Whether a list of required documents was provided to the customer (borrower)
  3. The varying types of documentation requested by the operations team from the borrower

To validate this theory, the team used regression to determine the mathematical relationship between missing information and process cycle time. And sure enough, with a high adjusted R² value of close to 90 percent, there is a strong relationship between missing data and cycle time.

The regression equation is:
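The specific coefficients come from the team's data; a hypothetical sketch of how such a fit is produced with NumPy (the observations below are illustrative, not the study's):

```python
import numpy as np

# Hypothetical observations: missing data points in a packet
# vs. the resulting process cycle time (hours).
missing = np.array([0, 1, 2, 3, 4, 5, 6, 7])
cycle_time = np.array([30, 38, 44, 53, 60, 65, 74, 80])

# Ordinary least-squares line: cycle_time = intercept + slope * missing.
slope, intercept = np.polyfit(missing, cycle_time, 1)

# Coefficient of determination (R^2) for the fitted line.
predicted = slope * missing + intercept
ss_res = np.sum((cycle_time - predicted) ** 2)
ss_tot = np.sum((cycle_time - cycle_time.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"cycle_time ~= {intercept:.1f} + {slope:.1f} * missing "
      f"(R^2 = {r_squared:.3f})")
```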

Now the team has to determine what elements in the process of customer data collection it needs to change in order to never exceed three missing data points.

To find the right answer, it has to set up an experiment, a process known as design of experiment (DOE).

For this case study, the team knew that the critical drivers of missing data were:

  1. The method by which information was received (e-mail vs. mail)
  2. Whether or not a list of required items was provided to the borrower
  3. Whether the company had asked for one or two years of financial data

The team set up an experiment in which one, two, or all three variables were changed and the documentation cycle time was measured.

The objective was to test the process under all possible conditions. The results of the experiment were studied using statistical software.
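With three two-level factors, a full factorial design has 2³ = 8 runs. The enumeration of conditions can be sketched as follows (factor names paraphrased from the list above):

```python
from itertools import product

# Three two-level factors from the case study's experiment.
factors = {
    "delivery": ["e-mail", "mail"],
    "checklist": ["provided", "not provided"],
    "financials": ["one year", "two years"],
}

# Full factorial: every combination of factor levels (2^3 = 8 runs).
runs = list(product(*factors.values()))
for i, run in enumerate(runs, start=1):
    print(f"Run {i}: {dict(zip(factors, run))}")

print(f"Total runs: {len(runs)}")  # 8
```

Each run would then be executed and its documentation cycle time recorded for analysis.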

The team concluded that the process cycle time was significantly reduced when a list of required items was provided to the borrower and she was asked to e-mail all information.

Asking for one vs. two years of financial information had minimal impact on overall cycle time.

The team must now pilot these solutions to confirm the results of the experiment and develop control plans for long-term sustainability.


After the process change, the team knew that in order to meet customer expectations, it had to make sure that the operations team completed the review of each packet within 36 hours.

To attain this operational goal, the incoming packet had to be received via e-mail, and no packet could have more than three critical customer data points missing from it.

Prior to implementing a control plan, the team had to ensure that they had an accurate and reliable process for collecting data on the critical Xs (missing data points and receipt of information via e-mail).

Since the data collection process will be 100 percent manual, they trained the future data collectors on how to identify and record a defect.

After running a repeatability and reproducibility test (discussed in the Measure phase), the team was comfortable that they had a reliable data collection process.

Prior to the improvements, the team members had collected data on the number of missing data points in each packet (they used these data to determine whether there was a relationship between missing data and cycle time).

To calculate the new process capability, and also to monitor the process, they implemented an SPC chart.

On a daily basis, 40 random files were collected, and the number of missing data points was counted. This process was repeated for 20 days (similar to the data collection process prior to the improvements).

The data were plotted using a C chart, and they indicated that the average number of missing data points for the 40 files on a daily basis was about 1.15 (see Figure below).
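For a C chart, the control limits follow from the Poisson assumption: UCL = c̄ + 3√c̄ and LCL = max(0, c̄ − 3√c̄). With the reported average of 1.15 missing data points per daily sample:

```python
import math

c_bar = 1.15  # average missing data points per daily sample of 40 files

# C-chart control limits under the Poisson assumption.
ucl = c_bar + 3 * math.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))

print(f"CL = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# The single day with 4 missing data points falls below the UCL (~4.37),
# so that observation is still within control limits.
```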

However, there was one instance in which four data points were missing.

So is the process better?

To answer this question, the team compared the before and after process capability for receiving customer data.

The team was able to gain substantial improvements in terms of the reduction in error rates and improved process capability (z score).

With a good control plan for one of the process’s critical Xs (missing data points), the team’s final step will be to develop a control chart for the process Y: response time to the customer.