
in today's business environment, organizations are always looking for ways to improve their processes, reduce costs and increase customer satisfaction. Six Sigma is a data-driven approach that has proven to be effective in achieving these goals

and if you are interested in learning about Six Sigma, then this boot camp

is the perfect place for you in this

boot camp we will be taking you through

the basics of Six Sigma what it is how

it works and what you can expect to

learn from this boot camp so whether you

are a business owner or a manager or

someone who wants to improve their

problem solving skills then stick around

and let's dive into the world of Six

Sigma. lean Six Sigma consists of four levels: yellow belt, green belt, black belt and master black belt. yellow belts have basic knowledge and contribute to improvement projects, while green belts lead smaller-scale initiatives. black belts

are project leaders responsible for

significant Improvement and master black

belts have the highest expertise leading

multiple projects and providing

strategic guidance today we have brought

you simply learned certified lean Six

Sigma Green Belt certification training

course which provides comprehensive

training in lean Six Sigma principles

and techniques this course aims to equip

participants with the Knowledge and

Skills to lead smaller scale Improvement

projects within their organization it

covers key topics such as dmaic

methodology statistical analysis process

mapping, root cause analysis and project

management the training includes real

life case studies Hands-On exercises

Interactive Learning sessions to ensure

practical understanding and applications

of lean Six Sigma this course is

designed for professionals looking to

enhance their problem solving and

process Improvement capabilities and

earn a recognized Green Belt

certification to learn more about this

course you can click the link in the

description box below but don't believe

us, check out what our learners have to say. even after working for 18 years, I

believe you are never too old to learn

new skills and acquire knowledge to

excel further and keep up the growth

that's why I decided to upskill myself

to hone my skills to improve my

performance in my current organization

the course not only helped me to acquire

skills and get certified but also gave

me a decent salary hike hey I am Aditya

Canaria I live in Pune with my family

I am currently working as a quality

manager at the source system Global

Services I recently got certified in the

postgraduate program in lean Six Sigma

in collaboration with UMass Amherst. I am a curious learner, and continuous improvement is my motto. this isn't the

first time I chose Simplilearn to upskill. I even took the project management course earlier, which motivated me to go for this one. I have been working in the quality management domain for 18 years now. when I was

assigned the position of quality manager, I decided to take up the course in lean Six Sigma. I wanted to make sure that I am

fully updated with all the recent case

studies to master myself in the field of

quality management. the course boosted my knowledge with case studies from Harvard Business Publishing and a capstone project from KPMG in India that provide real-world lean Six Sigma exposure. one of the best attractions of the certification course is its well-structured course content that

consists of all the industry relevant

models and projects the concepts were

easier to learn thanks to the pedagogy

of the faculty everything they taught

was practical and experience driven even

the support team made the learning

smoother because of their instant

response and great problem solving the

certification has already made me stand

out from the crowd and has brought me

closer to my goal of excelling in the

field of quality management in my

leisure time I spent my time cooking

delicious food and trying my hands at

new recipes I also love photography and

enjoy clicking pictures with my DSLR. learning gives me something to strive for; it keeps me growing professionally

because growth is the only constant that

leads to success now let's check out

what we have in store in this Six Sigma

boot camp. first we have an introduction to Six Sigma, then we will discuss what is lean Six Sigma, then we will go over Six Sigma in detail

then we will discuss 5S methodology

after that we will go into the green

belt training post that we will draw out

some comparisons between Six Sigma and

lean Six Sigma then we have Six Sigma

tools finally we will wrap up this

session with benefits of Six Sigma

imagine you've been tasked with a really

important project at work the company

you're working for produces luxury cars

the production numbers are going down

and a lesser number of cars are getting

manufactured each day there also seems

to be an issue with the quality of the

windshield wipers that go on these cars

the question you are faced with: is there a way for the company to step up the production per day from one thousand to two thousand? also, is there a way to find

out what's causing the drop in the wiper

quality? there is: Six Sigma. Six Sigma gives you the tools and techniques to determine why your manufacturing process slowed down, how you can eliminate the delays, improve the process and fix further issues along the way. the concept was introduced in 1986

by Bill Smith while working for Motorola

since then Six Sigma has seen worldwide

adoption Six Sigma aims to reduce the

time defects and variability experienced

by processes in an organization thanks

to Six Sigma you can produce defect-free output 99.99966% of the time, allowing only 3.4 errors per 1 million opportunities. Six Sigma also increases customer loyalty towards the brand and improves employee morale, leading to higher productivity
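the 3.4-errors-per-million figure corresponds to a sigma level of 6 once the conventional 1.5-sigma long-term shift is applied. here is a quick sketch of that conversion using only Python's standard library; the function names are illustrative, not from any standard Six Sigma toolkit:

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """defects per million opportunities"""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """convert DPMO to a sigma level, including the
    conventional 1.5-sigma long-term shift"""
    long_term_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(long_term_yield) + 1.5

print(round(sigma_level(3.4), 2))  # → 6.0
```

the same functions give the sigma level for any measured defect count, so a team can track where a process sits between, say, 3 sigma and 6 sigma as improvements land.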

Six Sigma has two major methodologies

dmaic and dmadv let's look at the first

dmaic is an acronym for Define measure

analyze, improve and control. let's have a look at each phase individually and how it relates to your earlier problem. in the Define phase you

determine what issues you're facing what

your opportunities for improvement are

and what the customer requires of you

here you look at the process as a whole

and determine the issues with the

manufacturing process in this case

finding out why the cars had varying

windshield wiper quality and how to

manufacture more cars in the measure

phase you determine how the process is

performing currently in its unaltered

state you determine the current number

of cars that are manufactured in a day

in the current scenario 1000 cars are

manufactured in a day and each of these

cars are outfitted with a pair of

windshield wipers by one of 30 machines

used. some of the metrics measured are: how many cars are produced in a day, time taken to assemble a car, how many windshield wipers were attached in a day, time taken to do so, defects

detected from each machine on assembly

completion and so on following this in

the analyze phase you determine what

caused the defect or variation on

analyzing previous data you find out

that one of the machines that installed

the windshield wiper was not performing

as well as it was supposed to production

was taking longer since the car chassis

was being moved across the different

locations slower as cranes had to

individually pick and drop the frame

this was because the wheels were

attached to the car only in the last

stage. next, in the improve phase, you make

changes to the manufacturing process and

ensure the defects are addressed. you replace the faulty machine that

installed the windshield wiper with

another one you also find a way to save

time by attaching Wheels on the frame in

the initial stages of the manufacturing

process unlike how it was done earlier

now the car can be moved across the

assembly area faster. and finally, in the control phase, you make regular adjustments to control new processes and future performance. based on the changes

made the company was able to reduce

production time and manufacture about 2,000 cars a day with a higher quality of

output dmaic is one of the most commonly

used methodologies in the world it

focuses on improving the existing

products of the organization the second

methodology is dmadv which is short for

Define measure analyze design and verify

it is used when the company has to

create a new product or service from

scratch it is also called dfss or design

for Six Sigma let's take the scenario

where the company decides to build a new

model a sports car in the Define phase

you define the requirements of the

customer based on inputs from customers

historical data industry research you

determine what you need to ensure your

car becomes a success the data collected

indicates customers are drawn to cars

which can achieve more than 150 miles

per hour customers are also more

inclined towards cars which have V6

engines and an aerodynamic frame then in

the measure phase you use the customer's

requirements to create a specification

this specification helps Define the

product in a measurable method so that

data can be collected and compared with

specific requirements some of the major

specifications that you focus on are the

top speed engine type and type of frame

in the analyze phase you analyze the

product to determine whether there are

better ways to achieve the desired

results areas of Improvement are

determined and tested based on the

analysis of the Prototype created in

this phase. you find that the product

satisfies just about all of the customer

requirements except the top speed

so research begins on an aluminum alloy

that could possibly meet the speed

requirements of the customer following

this the design phase based on the

learnings from the analysis phase the

new process or product is designed

revisions are made to the model and the

car is manufactured with the new

material the analysis phase is repeated

based on the new design. you also bring in a focus group and see how they receive it

based on their feedback further changes

are made and finally in the verify phase

you check whether the end result meets

or exceeds customer requirements once

you launch your brand new sports car you

collect customer feedback and

incorporate it into future designs and

guess what your customers are loving the

new design. and that is dmadv for you
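the verify phase boils down to checking measured outcomes against the customer specification set in the measure phase. here is a toy sketch in Python; the spec values (top speed above 150 mph, a V6 engine, an aerodynamic frame) come from the example above, while the data structure and function names are invented for illustration:

```python
# hypothetical customer specification from the measure phase:
# top speed above 150 mph, a V6 engine, an aerodynamic frame
SPEC = {
    "top_speed_mph": lambda v: v is not None and v > 150,
    "engine": lambda v: v == "V6",
    "frame": lambda v: v == "aerodynamic",
}

def verify(measurements: dict) -> list:
    """return the names of requirements the product fails to meet"""
    return [name for name, meets in SPEC.items()
            if not meets(measurements.get(name))]

prototype = {"top_speed_mph": 162, "engine": "V6", "frame": "aerodynamic"}
print(verify(prototype))  # → []
```

an empty list means every requirement is met; a prototype that only reaches 140 mph would come back as `["top_speed_mph"]`, pointing straight at the spec that still needs work.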

Six Sigma has also found success in a

number of different Industries the

petrochemical healthcare banking

government and software are some of the

industries that have utilized the

concepts of Six Sigma to achieve their

business goals another commonly used

methodology adopted by companies around

the world is lean lean is a methodology

that aims to remove any part of the process that does not bring value to the

customer it means doing more with less

while doing it better the philosophy

behind lean comes from the Japanese manufacturing industry, pioneered by Bob Hartman, who at the time was part of Toyota. since then, across the world, services and manufacturing organizations have used lean to improve their businesses. but what if you could have the best of both worlds, a combination of

both Six Sigma and lean? that's lean Six

Sigma imagine you're the manager of a

supermarket chain you've noticed that

two things need your immediate attention

the first issue is how to handle the

different kinds of waste that you encounter. the next one requires you to address the

supply chain issues at the supermarket

which are causing delays to the morning deliveries, leading to customer dissatisfaction and

attrition these problems can be solved

by incorporating two of the most popular

quality management methodologies in the

world lean and Six Sigma one famous for

its ability to handle waste, and the other for removing defects. but what if there was a methodology that

combined the concepts of both Six Sigma

and lean, one that could solve all your problems? well, there is: lean Six Sigma. before we

dive into lean Six Sigma let's take a

closer look at its parent methodologies

first off lean is a methodology that

focuses on providing value to the customer by eliminating waste, continuous improvement and reducing cycle time. lean and Six Sigma

both aim to handle waste but what is

this waste waste is any step or action

in the process that a user does not gain

any value from; in short, things that the customer doesn't want to pay for. why would a consumer want to pay extra for the additional truck that was required to deliver milk to the supermarket just because the other one broke down? this waste can be divided into eight

categories let's have a look at each of

them. 1. transportation: this waste refers to the excess movement of people,

tools inventory equipment and other

components of a process beyond what is required. 2. inventory: this waste occurs when you hold more products or materials than required. this can cause

damage and defects to products or

materials greater time for completion

inefficient allocation of capital, and so on. 3. motion: this refers to the wasted time and movement of people, equipment or machinery. this could be sifting through inventory, double data entry and so on.

4. waiting this can be time wasted

waiting on information, instructions or equipment. 5. overproduction: this is the waste created due to producing more products than required. 6. over-processing: it refers to more

work more components or more steps in a

product or service than required

7. defects this is the waste originating

from a product or service that fails to

meet customer expectations 8. skills

this waste refers to the waste of human

potential: underutilizing capabilities and delegating tasks to people with inadequate training. for years now, many systems have emerged

that use the lean methodology to

identify and handle the different kinds

of waste some of the more popular and

effective ones are jit or just in time

the jit methodology focuses on reducing

the amount of time the production system

takes to provide an output and the

response time from suppliers to

customers 5S is another methodology that

focuses on cleanliness and organization

while improving profits and efficiency

kanban is also another popular

methodology to achieve lean it is a

visual method to manage tasks and workflows. kanban enables users to visualize the workflow to identify issues in the process. these methodologies help in reducing waste and are often used

together to maximize results so that's

the first problem solved now let's have

a look at how you can improve the

supermarket supply chain efficiency for

that, let's have a look at the other parent methodology: Six Sigma. Six Sigma is a set of tools and

techniques that are used for process

Improvement and removing defects let's

see how Six Sigma makes that possible

Six Sigma has two major methodologies, dmaic and dmadv. you can learn more about these two

methodologies by checking out our Six

Sigma in nine minutes video by clicking

on the top right corner let's have a

closer look at dmaic since lean Six

Sigma uses the dmaic methodology of Six

Sigma dmaic is an acronym for Define

measure analyze improve control it is

used to improve existing products and

processes so that they can meet the customers' requirements. in the Define phase you determine what

the goals of the project are in this

case you want to reduce the amount of

time taken to deliver milk from the

warehouse to the supermarket so that it

is stocked on the supermarket shelves before the morning rush. in the measure phase you measure the

performance of the current unaltered

process. the milk truck leaves at 7:30 AM in the morning and can take one of three routes: A, B and C. Route A is currently the preferred one as it takes only 60 minutes to reach the supermarket, compared to routes B and C which take 70 and 80 minutes respectively. in

the analyze phase you find out why the routes take the time they do. since routes B and C were school bus routes, reducing the starting time by one hour, to 6:30 instead of 7:30, meant avoiding the traffic. routes B and C now take 40 to 45 minutes to reach the supermarket. Route A still takes the milk truck one hour to get to the supermarket, even when the truck leaves at 6:30 AM
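the measure and analyze numbers can be laid side by side to pick the best combination of departure time and route. a small illustrative sketch, with times in minutes taken from the example; assigning B=40 and C=45 is one reading of the "40 to 45 minutes" figure, and the data structure is ours, not part of the methodology:

```python
# measured travel times in minutes per (departure time, route)
travel_minutes = {
    ("7:30", "A"): 60, ("7:30", "B"): 70, ("7:30", "C"): 80,
    ("6:30", "A"): 60, ("6:30", "B"): 40, ("6:30", "C"): 45,
}

# analyze: pick the fastest departure/route combination
best = min(travel_minutes, key=travel_minutes.get)
print(best, travel_minutes[best])  # → ('6:30', 'B') 40
```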

in the improve phase performance can be

improved by addressing and eliminating the root causes. now that you've realized that advancing

the milk pickup by an hour and changing

the route to Route B can save time you

change the process accordingly providing

your workers with ample time to stock

the milk into the shelves before the

morning rush and finally in the control

phase you make regular adjustments to

control new processes and future performance. you continue to monitor the delivery

times and try out alternate routes to

continually improve the process. this process change led to reduced man-hours and costs, enhanced sales and improved customer satisfaction. the lean Six Sigma methodology offers

many such benefits to businesses let's

take a look at some of them: one, increase in profits; two, standardized and simplified processes; three, reduced errors; five, value to customers. and that is lean Six Sigma for you. Six Sigma is a set of

tools and techniques that have helped

several companies around the world

achieve business success hi guys I'm

proud from Simply learn and let's get

started with our introduction to Six

Sigma now let's understand this better

with an example here let's talk about

how things were before Six Sigma was

introduced here Jenny and James are

having a conversation with each other

Jenny is James's manager and she's not

happy at all she says James is in a lot

of trouble this is because she found out

that the customers were unhappy with

the organization's service and the

operational costs were way too high and

as manager James had to make sure that

this did not happen now let's have a

look at the same scenario in present day

here we have Jenny congratulating James

she's very impressed with his work but

James says it's all thanks to Six Sigma

methodology so Jenny asks she wants to

know more about Six Sigma so to

understand Six Sigma here's what you

need to know firstly we'll have to

understand what is Six Sigma what its

advantages are, some of its methodologies, what are the different roles in a Six Sigma team, what is lean, what is a lean process and what is lean

Six Sigma so now let's get started with

understanding what exactly is Six Sigma

the Six Sigma methodology makes sure to

find as well as eliminate any sort of

defect or variation that could be

affecting your product service or

process now this methodology is

statistics based is data driven and

focused on continuous Improvement now

this means that there's no end goal on the horizon; there is always another goal to reach. there are three core ideologies

behind Six Sigma the first one states

that for any business to be successful

continuous efforts are required so that you can achieve stable

as well as predictable process results

the second ideology states that in any

business or manufacturing process there

are certain characteristics that can be

defined measured analyzed and controlled

the final ideology says that along with

the rest of the organization the top

level management plays a very important role in making sure that quality is sustained. now let's talk about the

advantages of Six Sigma Six Sigma can

help produce a road map or a path

through which you can easily find and

reduce any sort of organizational risk

and reduce the operational costs another

Advantage is that it helps improve the

efficiency of the process, making

sure that it works in a timely manner it

decreases defects improves the overall

tracking and monitoring process and

ensures that the products are aligned

with the company's policies it is also

reported that it greatly helps improve

customer as well as vendor satisfaction

it helps improve the cash flow and

ensures that the products are complying

with the regulations of the organization

now let me tell you about the process of

Six Sigma. now, Six Sigma projects basically follow two methodologies: the dmaic and the dmadv. now let's talk about

dmaic in detail that's short for Define

measure analyze improve and control this

is one of the most commonly used

methodologies in the world this is

commonly used by companies when they

have to fix or improve an already

existing product or process that does

not meet the company's standards now

let's have a look at the process first

phase is the Define phase in this phase

you define the problem that the

customers are facing, you find out where the issues lie, and understand what the customers require of

you the second phase is the measure

phase now in this phase you actually

identify how well the process is doing

in its current unaltered state in the

analyze phase you process the data that

you get from the measure phase and

determine what exactly is the cause of

the delay or variation in the improve

phase you start by making small changes

to the business process and make sure

that the problem you identified earlier

is being taken care of and finally in

the control phase you control the new

process so that it doesn't go wrong and

use the same knowledge for future

processes now let's have a look at dmadv

this is short for Define measure analyze

design and verify now this is also

commonly known as dfss or the design for

Six Sigma now this is commonly used by

companies around the world when they

have a new product that needs to be

created all the way from scratch in the

first phase which is the defined phase

you define what the goal of the project

is and what the customers require of you

in the measure phase you measure as well

as determine what the customer needs and

how they respond to your products in the

analyze phase you perform analyses to

determine how you can improve your

product or service so they can better

serve your customers in the design phase

you set up process details and make

optimizations to the design to make sure

your customer is satisfied and finally

in the verify phase you check how well

the design is working out and how well

it meets the customer's needs now before

we go on let's talk about how Six Sigma

was used in reference to the earlier

example the situation that James was

facing a survey conducted by the

organization James was working for

indicated that the customers weren't

very happy with the organization so they

decided to fix that with the help of Six

Sigma so they decided that the dmaic

methodology would be best suited to

solve their problem so let's have a look

at what they did firstly in the Define

phase they used a tool called the voice

of the customer this tool represented

the needs as well as requirements of the

customer this showed that the customers

expected prompt delivery the correct

product selection and a knowledgeable

distribution team from the company and

now on to the measure phase the company

wanted to know why the customers didn't

like them so they performed some data

collection from there they found out

that they took 56 percent longer than

other companies to deliver their product
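that 56-percent gap comes from comparing lead times, the elapsed time between order entry and delivery. measuring it is just timestamp arithmetic; here is a minimal sketch using Python's standard library, with order timestamps invented purely for illustration:

```python
from datetime import datetime
from statistics import mean

# hypothetical (order_entry, delivery) timestamp pairs
orders = [
    (datetime(2023, 1, 2), datetime(2023, 1, 9)),
    (datetime(2023, 1, 3), datetime(2023, 1, 12)),
    (datetime(2023, 1, 5), datetime(2023, 1, 10)),
]

# lead time per order, in whole days
lead_times_days = [(delivered - entered).days for entered, delivered in orders]
print(mean(lead_times_days))  # → 7
```

comparing this average against a competitor's benchmark gives exactly the kind of "we take X percent longer" finding the measure phase produced here.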

so they decided to reduce the amount of

time it takes between order entry and

the delivery of the product and now in

the analyze phase here they knew what

the issue was but they wanted to know

what exactly made their products

delivery so slow why were the customers

receiving the products late? then they performed some analysis. their analysis

showed the possible causes it could have

been inaccurate sales plans issues with

their safety stock issues with their

vendors delivery performance and falling

behind on the manufacturing schedule

further analysis also indicated that

most of their sales almost 80 percent

came from 30 percent of their products
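that 80-percent-of-sales-from-30-percent-of-products finding is a classic Pareto analysis: sort products by sales and see how few of them cover the bulk of revenue. a sketch with made-up sales figures (the function name and data are ours, for illustration only):

```python
def pareto_core(sales: dict, threshold: float = 0.80) -> list:
    """smallest set of top-selling products covering `threshold`
    of total sales (a classic 80/20-style Pareto cut)"""
    total = sum(sales.values())
    covered, core = 0, []
    for product, amount in sorted(sales.items(), key=lambda kv: -kv[1]):
        core.append(product)
        covered += amount
        if covered / total >= threshold:
            break
    return core

# invented sales figures for ten products
sales = {f"P{i}": s for i, s in enumerate(
    [400, 350, 120, 30, 25, 20, 20, 15, 10, 10])}
print(pareto_core(sales))  # → ['P0', 'P1', 'P2'], 3 of 10 products
```

with these invented numbers, 30% of the products cover 87% of sales, mirroring the 80/30 split found in the example: those are the products whose safety stock deserves the monthly reviews.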

the issue was that they didn't have

enough Safety stock to satisfy the

customers who required that 30 percent

of products and now on to the improve

phase so now that they knew what was

causing their problem they wanted to

solve it they began to have monthly

reviews and try to make sure that their

in-demand products stayed in demand

another thing that they wanted to focus

on was to make sure that they could

order and provide the customer with the

products that they wanted and finally

onto the control phase they began to set

up plans so that they could monitor the

sales of that 30 percent of products that were

selling the most each year they would

review how well a product was selling

and replace it if it had fallen out of

favor now let me tell you what Six Sigma

team consists of let's talk about the

roles in a Six Sigma team first up is

level seven now these are individuals

who are at the novice level now these

individuals don't know in great detail

about what the project is but they have

a basic understanding of the

principles and the methodology behind

the program now they usually support

with smaller projects and with smaller

issues. but these individuals form the foundation for the people who decide

where the program is going and now we're

at level 6. now these are individuals

who have a yellow Belt certification now

they're core members of the Six Sigma

team who have an understanding of how

the basic metrics work and how they can

perform some sort of improvement now

they have their own areas of expertise

and they're required to determine

certain processes that need to improve

at the same time they're also in charge

of smaller Improvement projects now

level five these are people who have a

Green Belt certification now these

individuals are usually part-time

professionals who have a number of

different duties to fulfill they focus

on smaller Six Sigma projects they are

usually involved with Gathering data

performing some sort of experiment and

analyzing information they also assist

with black belt projects and now we're

at level 4 these are individuals who

have a Black Belt certification. they are usually team leaders of a Six Sigma

project they complete four to six

projects a year and are experts in the

principles methodologies and lean

Concepts thanks to their understanding

of statistical experimental design they

can also understand the hidden reasons

behind why a particular product failed

and now we're at level 3 these are

individuals who have a master Black Belt

certification now these are individuals

who are experts when it comes to

methodologies that are employed in Six

Sigma their main emphasis is to coach

train and certify black belts they also

are involved with other Six Sigma

leaders to ensure a company's goals are

met now level two these individuals are

called Champions so they work really

closely with the executives and usually

have a role like a senior or a middle

executive level role they also have a

clear understanding of what exactly is

the company's vision and Mission they

also understand metrics so that they can

set up a Six Sigma project that lines up

with the company's goals they're

responsible for removing any sort of

roadblock that could hamper the success

of a project and finally we are at level

1. these are the executives now these

individuals represent the highest level

when it comes to a Six Sigma team now

they have training as well as experience

through which they can set up Six Sigma

projects that clearly line up with the

company's goals their main emphasis is

to ensure that the project is able to

add value to the organization and at the

end of the day is successful now this is

when Jenny interjects she wants to know

about lean. James tells her that lean, just like Six Sigma, is another methodology. so

what exactly is lean now lean is a

methodology that has a very important

ideology: to make sure that there is continuous optimization of the processes

and there's an elimination of waste so

what's waste so waste is basically any

part of the process that the customer

doesn't want to pay for; it is a process that does not add any value to the

customer now coming back to lean here

are some of its characteristics whenever

decisions are being made in a lean team

the main emphasis is to understand how

it exactly adds value to the customer

every member in a lean team has a clear

understanding of what exactly are the

goals of the organization it also

encourages employees to push for further

success even if the organization is in a

good place or is already doing well. it promotes cross-functional collaboration and communication. lean focuses on answering

the difficult question or the complex

ones rather than employing short-term

fixes and with lean you can easily

prepare for issues that can come up in

the future or improvise in unexpected

circumstances so let's talk about how

lean and Six Sigma are different from

one another the lean methodology aims to

reduce waste it does so by analyzing the

workflow it also emphasizes on

minimizing resource usage and improving

customer value now let's talk about Six

Sigma the aim of Six Sigma is to provide

near perfect results it wants to reduce

costs and improve customer satisfaction

basically both of them are moving

towards the same goal to reduce the

amount of waste and to create efficient

processes now let's talk about the

process of lean now there are five

different steps let's start with the

first one identifying value you need to

identify value by determining what

exactly is the problem you're trying to

solve for the customer the second step

is to map your value stream you need to

map the workflow of your company you

need to focus on the different actions

and the people that are involved with

the process you need to be able to

identify which parts of the process are

able to add value and the ones that

don't the third step is to create a flow

you need to break up your work into

smaller silos and visualize the workflow

so that you can easily identify problems

that might show up later the next step

is to establish pull you need to set up

a system through which products are

created only when there is a demand or a

requirement for it through this you can

optimize resource capacity and finally

we're at the fifth step which is

continuous Improvement you need to

ensure all your employees at all levels

are involved in the continuous

Improvement of the process so what

exactly is lean Six Sigma what if you

could combine The Best of Both Worlds

the combination of Six Sigma and lean

methodology led to the creation of lean

Six Sigma lean Six Sigma is a

methodology that aims to solve problems, remove any form of waste or inefficiency, and improve the working

conditions of employees to make sure

that they can serve the customers better

now this is a combination of the tools

methods and principles that are employed

in lean and Six Sigma let's talk about

some of its advantages it aims to

provide customers with a better

experience by streamlining the process

with efficient workflows. it aims to drive higher results: it can reduce costs, remove waste and prevent defects. it can

help the organization handle day-to-day

problems the decreased lead times help

increase capacity and profitability and

finally it helps with people development

and improving the morale of the

organization so here we have Jim

carrying a stack of sheets in his very

messy cubicle suddenly Jim realizes a

few things he's lost a really important

file but that's not all the bad news

just keeps on coming he remembers that

his messy desk and area are the talk of

the entire office and thanks to this he

doesn't even want to get any work done a

little angry and annoyed with himself

Jim sits down in resignation. that's when John walks in. John asks him what's wrong.

Jim tells him that he's in a mess and

doesn't know what to do. that's when John tells him about his solution: the 5S

methodology now let me tell you

everything I'm going to teach you we'll

be talking about what is the 5S

methodology benefits of the 5S

methodology and the process of 5S which

consists of five steps so now

let's have a look at what exactly is the

5S methodology the 5S methodology

is a popular workplace organization

methodology that was introduced in Japan

and was first implemented by Toyota

Motor Corporation the primary reason it

was developed was to make just-in-time

manufacturing possible so what is just

in time it's a form of manufacturing

that aims to produce only the amount of

product needed when it's needed the

5S methodology basically focuses on

cleanliness and organization while at

the same time focusing on maximizing

efficiency and profit so you could say

that 5S provides a framework whose main

focus is using visual management which

is a way to visually communicate a

number of things like performance

standards or warnings in a way that

requires little to no training to

interpret so that you can emphasize

using a mindset as well as tools to

ensure efficiency and value being

created so what you're doing in this

methodology is observing analyzing

collaborating searching for waste and

then removing it so what makes this

methodology so special let's find out

first off we have optimized organization

thanks to 5S every component that you

need for work is kept in a way that's

easily accessible and easy to use so no

more time is wasted on looking for items

deciding how they can be used or even

returning them then we have improved

efficiency the 5S methodology enables

companies to focus on ways to eliminate

waste while enhancing the company's

Bottom Line This is made possible by

improving the company's products and

services and by extension this is able

to lower costs next we have bigger

storage density now since 5S is mainly

focused on removing the unnecessary

items from the work area there's a lot

of free space open for efficient usage

then we have increased safety so now that

all the unnecessary clutter and waste is

removed from the work area it's much

safer for the employee to work in and

finally we have improved workspace

morale now since the workspace is a lot

cleaner safer and more organized

the morale among employees is also

greatly improved now let's have a look

at the process of the 5S methodology

now the process behind the 5S

methodology consists of five Japanese

terms and their translations each of them

starts with the letter s hence the name

5S the steps are sort set in order shine

standardize and sustain and now for the

first step sort sort or seiri in

Japanese can be translated to tidiness

now this step involves sorting through

materials keeping only the essential

items needed to complete tasks the aim

here is to remove clutter and clear the

workspace of things that don't belong

there or aren't critical to the work

in this step you clean the work area by

carefully analyzing the workspace you

need to remove any items that you don't need

from these removed items you need to

decide which ones need to be removed and

which ones need to be recycled other

items may need to be returned from where

they were taken or there might be some

items you're not sure about these items

need to be red tagged the items whose

ownership isn't clear or which cannot be

identified are red tagged by red tagging

what you're doing is attaching visible

information like where and when the item

was found the red tagged items are

arranged in a particular location it's

likely that these red tag items could

stay in lost and found for a long time

here are a few things you can do with

them after 30 days of staying in Lost

and Found supervisors from other

departments can claim the items for

themselves if they stay undisturbed for

10 more days they can be thrown away

sold or recycled if these items are

expected to be useful at some

point they can be stored in Lost and

Found with a specific plan for the time
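
The disposition timeline described above can be sketched as a small helper (an illustration only; the function and argument names are mine, while the 30-day and 10-day windows come from the narration):

```python
def red_tag_disposition(days_in_lost_and_found: int,
                        claimed: bool,
                        future_use_planned: bool) -> str:
    """Suggest what to do with a red-tagged item, per the 5S rules above."""
    if future_use_planned:
        # items expected to be useful at some point stay, with a specific plan
        return "store in lost-and-found with a specific plan"
    if days_in_lost_and_found < 30:
        return "hold in lost-and-found"
    if claimed:
        # after 30 days, supervisors from other departments can claim items
        return "transfer to claiming department"
    if days_in_lost_and_found < 40:  # 30 days plus 10 more undisturbed days
        return "available for other departments to claim"
    return "throw away, sell, or recycle"
```

For instance, an unclaimed item sitting for 45 days with no planned use would fall through to the final branch.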

now let's have a look at the next step

set in order or seiton translates to

orderliness here the aim is to organize

which means items need to be easy to

find and use

so first off you need to create a 5S map

the 5S map is a floor plan or a diagram

that can provide an overview of the work

area process or station it also shows

the location of different components you

might be needing for work with travel

paths and details on how they're related

it can also include a description of the

work that's done in that particular area

it also needs to be updated periodically

then the plan needs to be communicated

once storage locations are assigned they

are labeled with this employees will be

able to easily identify what's inside

these storage locations floor marking

tapes could also be used to mark work

areas movement lanes and storage

supplies now let's have a look at the

next step shine shine or seiso

translates to cleanliness here the aim

is to remove the dust that accumulates

under the Clutter while ensuring it

doesn't return here you need to perform

routine cleaning every week every area

within the work area needs to be cleaned

employees need to be responsible for the

cleanliness of their workspace and the

equipment they use with this they'll be

able to quickly recognize problems that

might arise difficult situations can be

understood easily and items that are out

of place can be recognized quickly now

let's have a look at our next step

standardize standardize or seiketsu

translates to standardization in this

step long-term changes are Incorporated

what's being done and by whom is being

written down new practices are also

being incorporated into the work

procedure first things are written down

decisions once written down can be

included as part of the standards

related to a particular area for example

the 5S map created and the red tagging

of items can all be incorporated as

standards based on changing business

needs these standards can be changed as

well then you need to use tools for

standardizing communication is really

important in this step decisions made

about the work practices need to be

communicated with the employees this can

be done with 5S checklists job cycle

charts and procedure labels and signs

and now let's have a look at our last

step sustain sustain or shitsuke

translates to discipline here the focus

is on continuous Improvement the

decisions that were made in the

previous step need to be repeated to

form a continuous cycle in this step you

need to ensure that 5S is applied

repeatedly as part of routine work some

ways to sustain the program are

management support department tours

performance evaluations and so on and

that's it that's the 5S methodology Jim

decides to incorporate it as soon as

possible and after applying 5S Jim's

workspace is a lot cleaner and

better organized Jim thanks John for

helping with that hey there Learners

check out our certified lean Six Sigma

Green Belt certification training course

and earn a Green Belt certification to

learn more about this course you can

click the course Link in the description

box below what do you see on the screen

well it depends on your perception

if you are a pessimist you might think the glass is half empty

if you're an optimist you might think the glass is half full

if you are a realist you might think

that the glass is full with water and air

however the Six Sigma practitioner sees

a glass that is bigger than it needs to

be assuming the current water level is

what the customer desires as we begin

our journey to learn more about Six

Sigma it is important to know that

applying Six Sigma is the way of seeing

and analyzing the processes around you

we will talk more about what Six Sigma

means and how the Viewpoint impacts

organizations in the screens to come

let's begin this lesson by defining

quality quality is defined as meeting

the requirements of the customer well

what features do you look for when

what facilities do you want in your

house or apartment when buying one what

do you expect from a premium chocolate

answers to these questions will tell

what quality means to you for each of

here's a snapshot of the quality Journey

with a few key milestones in the 1930s

the idea of statistical process control

was conceived by Walter Shewhart to

Monitor and control a process using

statistical methods this was used

extensively during World War II to

quickly expand industrial capabilities

in the 1960s quality circles were

formulated a quality circle is a

self-improvement workers group that

performs similar work meets regularly to

identify analyze and solve work-related

in 1987 the International Organization

for standardization designed ISO 9000

this is a set of international standards

on quality management and quality

assurance for organizations to implement

in 1987 the Baldrige award criteria was

developed by U.S Congress to raise

awareness of quality management systems

and recognize U.S companies that have

successfully implemented Quality

in 1988 the concept of benchmarking was

introduced benchmarking is an

improvement process where an organization

measures its performance against the

best organizations in its field

determines how such performance levels

were achieved and uses the information

to improve its own performance

during the 1990s the balanced scorecard

or BSC was introduced it is a management

tool that helps managers at all levels

to align activities to the strategy of

their organization and to monitor the

multiple results obtained in their key

in 1996 the concept of re-engineering was introduced

re-engineering is also known as business

process re-engineering which involves

restructuring an entire organization

the term Six Sigma has different

meanings or implications depending on

Sigma is a Greek letter used in the

statistical world to represent a measure of variation

the Six Sigma process is an important

method of quality principles and

also Six Sigma is a business strategy to

change company culture with top

the sigma level is a measure of

performance for a business process or

So when you say Six Sigma there are

several definitions that are all correct

quality is defined as the degree of

Excellence of a product or service and

conformance to customer requirements

taking a process to Six Sigma level

ensures the quality of the product with an

increase in profits as the primary goal

in other words Six Sigma signifies

in 1986 Bill Smith and Mikel Harry at

Motorola started the Six Sigma

initiative to improve performance in

1995 Jack Welch initiated Six Sigma at

General Electric to improve the entire

business system and became the global

in 1998 AlliedSignal saved a half

billion dollars with the use of Six

in 2000 General Electric saved 2 billion

annually with the use of Six Sigma and

in 2001 Motorola saved 16 billion

cumulatively with the use of Six Sigma

Six Sigma is a business methodology that

employs a customer-centric fact-based

approach to reduce process variation in

this helps us dramatically improve

customer satisfaction increase

shareholder value and strengthen

the methodology is designed to make

companies rethink the way they do

business and generate improvements and

it eliminates the root cause of problems

creates robust products and services

reduces process variation and waste

ensures customer satisfaction achieves

process standardization reduces rework

by getting it right the first time

addresses key business requirements

helps gain competitive advantage and

helps achieve organizational goals

following are the three reasons why

organizations are successful with Six

Sigma it is a proven systematic

problem-solving methodology that follows

a tried and effective process known as

DMAIC which improves productivity and

efficiency by eliminating defects

Six Sigma is customer focused this

methodology ensures businesses align

their projects to customers needs it

allows organizations to produce better

products and services and improves

lastly Six Sigma achieves long-term

improvements based on data-driven

statistical analysis to prioritize

the Six Sigma process is known as

DMAIC DMAIC comprises five phases

these phases are the roadmap to problem

solving and improving our processes

the effectiveness of Six Sigma method is

derived from its structure each phase

has an overarching objective and

specific deliverables that need to be

completed which helps us achieve the

objectives the purpose of the Define

phase is to document the problem the

desired outcome goals and deliverables

the purpose of the measure phase is to

obtain Baseline process performance

levels and quantify the problem the

focus of the analyze phase is to

identify the key root causes for process

variation and defects the purpose of the

Improve phase is to develop test and

implement solutions

the goal of the control phase is to

monitor the key factors and maintain the

gains you learned the aspects of the

DMAIC process now we'll look at the

tools used in each phase the list of

tools corresponds to the DMAIC phase

the use or application of these tools

gives the expected deliverables in each

DMAIC phase for a green belt some of

the tools listed are not required in

every Six Sigma Greenbelt project

these tools give us an insight into the

problem and lead us toward the real

issues in our processes that is with

more experience you are likely to know

the tools you need for your projects
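
As a quick summary before the tool lists, the DMAIC roadmap described earlier can be captured in a small lookup (a sketch; each objective paraphrases the narration):

```python
# DMAIC phases and the objective of each, as described in the lesson
DMAIC_PHASES = {
    "Define":  "document the problem, desired outcome, goals, and deliverables",
    "Measure": "obtain baseline process performance levels and quantify the problem",
    "Analyze": "identify the key root causes of process variation and defects",
    "Improve": "develop, test, and implement solutions",
    "Control": "monitor the key factors and maintain the gains",
}

for phase, objective in DMAIC_PHASES.items():
    print(f"{phase}: {objective}")
```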

in the Define phase we use SIPOC voice

of the customer or VOC critical to

quality or CTQ the quality function

deployment or QFD failure modes and

effects analysis known as FMEA and

the cause and effect or C&E Matrix

in the measure phase we use measurement

system analysis or MSA control charts

process capability and normality plots

in the analyze phase we use Simple

linear regression or SLR Pareto charts

the fishbone diagram and

multi-vari charts and hypothesis testing

in the Improve phase we use

brainstorming piloting and also the

failure modes effects analysis and

in the last phase control we will use

control charts a control plan and

measurement system analysis this lesson

provides an overview of the Certified

Six Sigma Green Belt or CSSGB course

a process is a series of steps designed

to produce a product and or service

according to the requirement of the

a process mainly consists of four parts

input process steps output and feedback

input is something put into a process or

expended in its operation to achieve an

for example man material machine and method

output is the final product delivered to

an internal or external customer for

it is important to understand that if

the output of a process is an input for

another process the latter process is the customer of the former

each input can be classified as

controllable represented as C

non-controllable represented as NC noise

represented as n and critical

the most important aspect of the process

as can be inferred from the image any

change in the inputs causes change in

the output therefore y equals f of x
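
The y = f(x) relationship can be illustrated with a toy process function (purely hypothetical coefficients; x1 stands for a controllable input, x2 for a noise input):

```python
def process_output(x1: float, x2: float) -> float:
    """Toy model of y = f(x): the output is a function of the inputs."""
    return 2.0 * x1 + 0.5 * x2  # hypothetical relationship

baseline = process_output(10, 0)  # y = 20.0
shifted = process_output(12, 0)   # changing an input changes the output: y = 24.0
```

Any change in x1 or x2 moves y, which is exactly why feedback on the output is used to adjust the inputs.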

feedback helps in process control

because it suggests changes to the

let us learn about the process of Six

let us understand how Six Sigma Works in

Six Sigma is successful because of the

following reasons Six Sigma is a

management strategy it creates an

environment where the management

supports Six Sigma as a business

strategy and not as a standalone

approach or a program to satisfy some

Six Sigma mainly emphasizes the DMAIC

Focus teams are assigned well-defined

projects that directly influence the

organization's bottom line with customer

satisfaction and increased quality being

Six Sigma also requires extensive use of

the next screen will focus on some key

let us look at the sigma level chart

as discussed earlier the Six Sigma

quality means 3.4 defects in one million

opportunities or process with a

the sigma level chart given on the

screen shows the values for other Sigma

levels please take a look at the values

carefully let us understand the benefits

of Six Sigma in the next screen
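
For reference, the DPMO values usually shown on such a chart (assuming the conventional 1.5-sigma shift) can be tabulated, together with the DPMO calculation itself (a sketch; the chart values are the standard published ones, not read from the screen):

```python
# Commonly published defects-per-million-opportunities per sigma level
# (these figures assume the conventional 1.5-sigma shift)
SIGMA_TO_DPMO = {1: 691_462, 2: 308_538, 3: 66_807, 4: 6_210, 5: 233, 6: 3.4}

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000
```

For example, 34 defects over 100,000 units with 100 opportunities each gives 3.4 DPMO, i.e. roughly Six Sigma performance.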

the organizational benefits of Six Sigma

are as follows a Six Sigma process

eliminates the root cause of problems

sometimes the solution is creating

robust products and services that

mitigate the impact of a variable input

or output on a customer's experience for

example many Electrical Utility Systems

have voltage variability up to and

sometimes exceeding a 10 percent

deviation from nominal value thus most

electrical products are built to

tolerate the variability drawing more

amperage without damage to any

using Six Sigma reduces variation in a

process and thereby reduces waste in a

it ensures customer satisfaction and

provides process standardization rework

is substantially reduced because one

gets it right the very first time

further Six Sigma addresses the key

Six Sigma can also be used by

organizations to gain advantage and

become world leaders in their respective

fields ultimately the whole Six Sigma

process is to satisfy customers and

Achieve organizational goals in the next

screen let us understand Six Sigma and quality

taking a process to Six Sigma level

ensures that the quality of the product

is maintained the primary goal of

improved quality is increased profits

for the organization in very simple

terms quality is defined as the degree

of Excellence of a product or a service

and its conformance to customer requirements if the customer is satisfied

with the product or service then the

product or service is of the required

let us look at the history of quality in

the next screen in the mid-1930s

statistical process control SPC was

developed by Walter Shewhart and used

extensively during World War II to

quickly expand the US's industrial capabilities

SPC is the application of statistical

techniques to control any process

Walter Shewhart's work on the common

cause of variation and special cause of

variation or assignable cause has been used

proactively in all Six Sigma projects

the approach to Quality has varied from

time to time in the 1960s there were

quality circles which originated in

Japan it was started by Kaoru Ishikawa

quality circles were self-improvement

groups composed of small number of

employees belonging to a single

quality circles brought in improvements

with little or no help from the top

in 1987 ISO 9000 was introduced ISO

stands for International Organization for Standardization

ISO 9000 is a set of international

standards on quality management and

quality assurance to help organizations

Implement Quality Management Systems the

Baldrige award now known as the Malcolm

Baldrige National Quality award was

developed by the U.S Congress in 1987 to

raise awareness of Quality Management

Systems as well as recognize and award

U.S companies that have successfully

implemented Quality Management Systems

in 1988 another quality approach was

developed known as benchmarking in this

approach an organization measures its

organizations in its field determines

how such performance levels were

achieved and the information is used by

the organization to improve itself

then in the 1990s there was the balanced

scorecard approach it is a management

tool that helps managers of all levels

to monitor their results in their key

areas so that one metric is not

optimized while another is ignored

during the year 1996 through 1997 an

approach known as re-engineering was

this approach involved the restructuring

of an entire organization and its

integrating various functional tasks

into cross-functional processes is one

of the examples of re-engineering in the

next screen let us find out about the

quality gurus and their contribution to

let us focus on Six Sigma and the

business system in this screen business

systems are designed to implement a

a business system ensures that process

inputs are at the right place and at the

right time so that each step of the

process has the resource it needs a

business system design should be

responsible for collecting and analyzing

data so that continual Improvement of

its processes products and services is

insured a business system has processes

sub-processes and steps as its subsets

Human Resources manufacturing and

marketing are some examples of processes

in a business system Six Sigma improves

a business system by continuously

removing the defects in its processes

and also by sustaining the changes a

defective item is any product or service

that a customer would reject a customer

can be the user of the ultimate product

or service or can be the next process

Downstream in the business system let us

learn about Six Sigma projects and

organizational goals in the following

let us understand the structure of the

there are in total five levels in the Six

Sigma structure the first level consists of

the top Executives of the organization

these people lead change and provide

direction as they own the vision of the

for any Improvement initiative to work

it is important that top management of

the organization be actively involved in

its propagation the top Executives own

the Six Sigma initiatives next in the

level are Six Sigma Champions they

identify and scope projects develop

deployment and strategy and support

cultural change they also identify and

Coach master black belts three to four

Master black belts work under every

Six Sigma Master black belts train and

Coach black belts green belts and

various functional leaders of the

organization they usually have at least

three to four black belts under them

the fourth level in Six Sigma structure

is Six Sigma Black belts they apply

strategies to specific projects and lead

in direct teams to execute projects

finally there are Six Sigma Green belts

they support the black belt employees by

participating in Project teams

green belts play a dual role they work

on the project and perform day-to-day

jobs related to their work area

in the next screen we will understand

while financial accounting is useful to

track physical assets the balanced

scorecard or BSC offers a more holistic

approach to strategy implementation and

performance measurement by taking into

account perspectives other than the

financial one for an organization

traditional strategic activities that

concentrate only on financial metrics

are not sufficient to predict future

performance they are not sufficient to

implement and control the Strategic plan

BSC translates the organizational

strategy into actionable objectives that

can be met on an everyday basis and

provides a framework for performance

the balanced scorecard helps clarify the

organizational vision and Mission to

workable action items to be carried out

and measured it also provides feedback

on both internal business processes and

external outcomes by doing so it enables

continuous Improvement in strategic

organizational goals the balanced

scorecard works by integrating the organizational strategy

with a limited number of key metrics

from four major areas of performance

Finance customer relations internal

processes and learning and growth
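
A minimal sketch of a scorecard keyed by these four perspectives (the objectives, measures, and targets below are hypothetical examples of mine, not values from the course):

```python
# One illustrative objective/measure/target per perspective
scorecard = {
    "financial":           {"objective": "grow revenue",         "measure": "revenue growth %",    "target": 10},
    "customer":            {"objective": "improve satisfaction", "measure": "net promoter score",  "target": 50},
    "internal processes":  {"objective": "reduce defects",       "measure": "DPMO",                "target": 3.4},
    "learning and growth": {"objective": "build skills",         "measure": "training hours/year", "target": 40},
}

for perspective, entry in scorecard.items():
    print(f"{perspective}: {entry['measure']} -> target {entry['target']}")
```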

many organizations in the world use

balanced scorecard approaches and the

number is increasing every day in the

next screen we will describe the

balanced scorecard framework we will

learn about developing a balanced scorecard

while applying the balanced scorecard in

an organization care must be taken to

account for interactions between

different perspectives or strategic

business units and avoid optimizing the

results of one at the expense of another

to outline the strategy a top-down

approach is followed by determining the

Strategic objectives measures targets

and initiatives for each perspective

the Strategic objectives refer to the

strategy to be achieved in that

perspective three or four leading

objectives are agreed upon the progress

towards strategic objectives is assessed

using specific measures these measures

should be closely related to the actual

performance drivers this enables

effectively evaluating progress

high-level metrics are linked to lower

level operational measures the target

values for each measure are set the

initiatives required to achieve the targets are identified

as already mentioned this exercise is

carried out for all the perspectives

finally the scorecard is integrated into

in the next screen let us understand the

change in the approach to the balanced

scorecard from the four box model of BSC

in earlier approaches to the balanced

scorecard the perspectives were

presented in a four Box model this kind

of scorecard was more of a comprehensive

glance at the key performance indicators

or metrics in different perspectives

however the key performance indicators

or metrics of different perspectives were

reviewed independently of each other which

led to a silo based approach and lack of

however modern scorecards place the

focus on the interrelations between the

objectives and metrics of different

perspectives and how they support each other

a well-designed balanced scorecard

recognizes the influence of one

perspective on another and the effect of

these interactions on organizational

strategy to achieve the objectives in

one perspective it is necessary to

achieve the objectives in another the

perspectives form a chain of cause and

effect

a map of interlinked objectives from

each perspective is created these

objectives represent the performance

effectiveness of strategy implementation

this is called a strategy map

the function of a strategy map is to

outline what the organization wants to

accomplish and how it plans to

accomplish it the strategy map is a one

page view of how the organization can

create value for example financial

success is dependent on giving customers

what they want which in turn depends on

the internal processes and learning and

growth at an individual level in the

next screen we will look at the impact

of the balance scorecard on the

the balanced scorecard and strategy map

Force managers to consider cause and

effect relationships which leads to

better identification of key drivers and

a more rounded approach to strategic

planning enabling the organization to improve in the following

being a one-page document a strategy map

facilitates understanding at all levels

an organization is successful in meeting

its objectives only when everyone

the balanced scorecard also forces an

organization to measure what really

matters and manage information better so

that quality of decision making is improved

creating performance reports against a

balanced scorecard allows for a

structured approach to reporting

progress it also enables organizations

to create reports and dashboards to

communicate performance transparently

as expected a balanced scorecard helps

an organization to better align itself

and its processes to the Strategic goals

objectives of the BSC can be cascaded

into each business unit to enable that

unit to work toward the common

organizational goal all the activities

of the organization such as budgeting or

risk management are automatically

aligned to the Strategic objectives to

conclude the balanced scorecard is a

simple and Powerful tool that when

implemented correctly equips an organization to achieve its strategic goals

let us proceed to the next topic of this

in this topic we will look at what lean

is and how lean is applied to a process

let us start with the lean Concepts in

the next screen let us look at the

process issues in this screen Lean

focuses on three major issues in a

process known by their Japanese names

muda refers to non-value adding work

mura represents unevenness and muri

represents overburden together they represent the key aspects

in lean let us look at the types of

there are seven types of muda or waste

as per lean principles let us understand

them overproduction this refers to producing more

than is required for example a customer

needed 10 products and 12 were delivered

inventory in simple words this refers to excess stock

the term inventory includes finished

goods semi-finished Goods raw materials

supplies kept in waiting and some of the

work in progress for example test

scripts waiting to be executed by the

testing team defects repairs rejects any

product or service deemed unusable by

the customer or any effort to make it

usable to the original customer or a new

customer for example errors found in the

source code of a payroll module by

quality control team motion a waste due

to poor ergonomics of the workplace for

example finance and account team sit on

the first floor but invoices to

customers are printed on the ground

floor causing unnecessary personnel movement

over processing additional process on a

product or service to remove unnecessary

attribute or feature is over processing

for example a customer needs a bottle

and you deliver a bottle with extra

plastic casing a customer needs an ABEC 3

bearing and your process is tuned to

produce more precise ABEC 7 bearings

taking more time for something the customer did not ask for

waiting when a part waits for processing

or the operator waits for work the

wastage of waiting occurs for example

improper scheduling of Staff members

transport when the product moves

unnecessarily in the process without adding value

for example a product is finished and

yet it travels 10 kilometers to the

warehouse before it gets shipped to the

customer another example an electronic

form is transferred to 12 people some of

them seeing the form more than once that

is the form is traveling over the same path multiple times

next we will look at lean waste other

than the seven types of waste discussed

some lean experts talk about additional

areas of waste underutilized skills

skills are underutilized when the

workforce has capabilities that are not

being fully used toward productive

efforts people are assigned to jobs in

underperforming processes automation of

poorly performing processes improving a

process that should be eliminated if

possible for example the product returns

department or product discounts process

asymmetry in processes that should be

eliminated for example two signatures to

approve a cost reduction and six

signatures to reverse a cost reduction

that created higher costs in other areas

in the next screen we will look at an

exercise on identifying the waste type

we will cover each step of the lean

process in the next few screens in this

screen we will learn about the first

step identify value to implement lean to

a process it is important to find out

what the customer wants once this is

done the process should be evaluated to

identify what it needs to possess to

the next screen will focus on the next

step of the lean process value stream mapping

in this screen we will discuss the

differences between push and pull

processes an organization can adopt

either of these processes depending on

contrary to a pull process in a push

process the first step is to forecast

the demand for a product or service the

production line then begins to fill this

demand and produced parts are stocked in

anticipation of customer demand for

example a garments manufacturer produces

200 shirts based on expected demand and

then waits for customer orders for them

note that the demand is expected and not

actual discounts offered to customers by

big retailers are examples of the push process

if the Garment company adopts a pull

process instead it would start making

the shirts only after receiving a

confirmed demand from customers note

that although the pull approach seems

better it is not applicable to all

situations for example a pharmacy uses a

in the next screen we will learn about

let us look at an example for the TOC

the three sub-processes in the packing

process are coding or printing filling

and sealing the data for the three

sub-processes are observed and collected

as number of units produced in an hour

coding or printing is 900 units per hour

filling is 720 units per hour and

how can you implement the TOC

let us build the TOC map for this

example the first step in the TOC is to identify the

constraint looking at the data the

output per hour from The Filling process

is 720. this is the constraint in the

in the Second Step the constraint is

exploited by analyzing the performance

using data. To break the constraint, repair and maintenance personnel can be assigned to maintain the filling machine.

in the third step the other fixes in the

repair and maintenance function are made

as subordinate decisions to the one

taken in step two in this example carry

out the maintenance of the filling

machine in the fourth step the

constraint is elevated by implementing

the decisions in this example remove the

damages from the filling machine

the next step is to go back to step one

and identify the next system constraint

as per the data collected after

implementation of the first cycle of the TOC, sealing can be identified as the next constraint.

let us now analyze the data before and

after Toc implementation in this example

the number of units produced per hour

before implementing the TOC, the coding or printing process was at 900 units and the filling process at 720 units.

after implementing the TOC the number of

units produced per hour for the filling

process increased to 840 from 720 units
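The constraint-identification step described above amounts to finding the sub-process with the minimum throughput. A minimal sketch in Python, using only the rates stated in the transcript (the sealing figure is not given there, so it is left out):

```python
def identify_constraint(rates):
    """Return the sub-process with the lowest throughput (units/hour).

    In the theory of constraints, the slowest step limits the whole
    process, so TOC's first step is to find the minimum rate.
    """
    name = min(rates, key=rates.get)
    return name, rates[name]

# Throughput observed for the packing example; the sealing rate is
# not stated in the transcript, so it is omitted here.
before = {"coding/printing": 900, "filling": 720}

step, rate = identify_constraint(before)
print(step, rate)  # filling 720: the filling process is the constraint
```

After the first TOC cycle raises filling to 840, re-running the same function on the updated rates (including the sealing figure) would surface the next constraint.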


let us proceed to the next topic of this

lesson in this topic we will discuss the

concepts in design for Six Sigma or dfss

let us first understand DFSS in the next screen. DFSS or design for Six Sigma is a business process methodology that ensures that any new product or service meets customer requirements and that the process for that product or service is capable of delivering it.

DFSS uses tools such as quality function deployment or QFD and failure mode and effects analysis or FMEA. DFSS can help a business system to introduce an entirely new product or service.

it can also be used to introduce a new

category of product or service for the organization. For example, an FMCG company plans to

make a new brand of hair oil a type of

product already in the market

dfss also improves the product or

service and adds to the current product

or service lines. To implement DFSS, a business system has to know its customers. DFSS can be used to design a new product or service, a new process for a new product or service, or a redesign of an existing product or service to meet customer requirements.

Let us learn about processes for DFSS in the next screen. The two major processes for DFSS are IDOV and DMADV. IDOV stands for identify, design, optimize, and verify. DMADV stands for define, measure, analyze, design, and verify.

in the idov process the first step

involves identifying specific customer

needs based on which the new product and

business process will be designed the

next step involves design which involves

identifying functional requirements

developing alternate Concepts evaluating

the Alternatives selecting a best fit

concept and predicting Sigma capability

tools such as FMEA are used here the

third step, optimize, uses a statistical approach to tolerance calculation. When IDOV is implemented to design a process, the process is expected to work at Six Sigma level, and this is checked in the optimize

phase if the process does not meet

expectations the optimize phase helps in

developing detailed design elements

predicting performances and optimizing

The last stage of IDOV is to verify, that is, to test and validate the design and finally to check conformance to Six Sigma standards.

the other process dmadv has five stages

the first stage is to define the

customer requirements and goals for the

process product or service next measure

and match performance to customer requirements.

the third stage involves analysis and

assessment of the design for the process, product, or service.

the next step is to design and implement

the array of latest processes required

for the new process product or service

The final stage is to verify the results and design performance. In the next screen we will look at the differences between IDOV and DMADV.

the primary difference between IDOV and DMADV is that while IDOV is used only to design a new product or service, DMADV can be used either to design a new product or service or to redesign an existing one. IDOV involves design of a new process, while DMADV involves redesigning an existing process. In IDOV, no analysis or

measurement of existing process is done

and the whole development is new the

design step immediately follows the

identification of customer requirements

In contrast, in DMADV the existing product, service, or process is examined thoroughly before moving to the design stage.

the design stage comes only after

defining requirements and analyzing the

existing product service or process

in the following screen we will learn

about the tool quality function deployment or QFD, which is one of the DFSS tools.

qfd also called voice of customer or VOC

or House of quality is a predefined

method of identifying customer

requirements it is a systematic process

to understand the needs of the customer

and convert them into a set of design requirements.

QFD motivates a business to focus on its customers and design products that are competitive in lesser time and at lesser cost.

the primary learning from qfd includes

which customer requirements are most

important what the organization's

strengths and weaknesses are, where an organization should focus its efforts, and where most of the work needs to be done.

to learn from qfd the organization

should ask relevant questions to

customers and tabulate them to bring out

a set of parameters critical to the customer.

apart from understanding customer

requirements it is also important to

know what would happen if a particular

product or service fails when being used

it is necessary to understand the

effects of failure on the customer to

ensure preventive actions are taken and to be able to answer the customers in case of failure. In the next screen we will look at another DFSS tool, failure modes and effects analysis.

failure modes and effects analysis or

FMEA is a preemptive tool that helps any

system to identify potential pitfalls at

all levels of a business system it helps

the organization to identify and

prioritize the different failure modes

of its product or service and what effect the failure would have on the customer. It helps in identifying the critical areas in a system on which the organization's efforts can be focused. However, FMEA is limited to the identification of critical areas; it does not offer solutions to the identified problems. We will look at the varieties of FMEA, such as DFMEA and PFMEA, in the next screen.

pfmea stands for process failure mode

and effects analysis and dfmea stands

for design failure mode and effects analysis.

pfmea is used on a new or existing

process to uncover potential failures it

is done in the quality planning phase to

act as an aid during production a

process FMEA can involve fabrication

assembly transactions or services

dfmea is used in the design of a new

product service or process to uncover

potential failures the purpose is to

find out how failure modes affect the

system and to reduce the effect of

failure on the system. This is done before manufacturing. All design deficiencies

are sorted out at the end of this

process in the following screen we will

understand FMEA risk priority number

FMEA risk priority number or RPN is a measure used to quantify or assess risk associated with the design or process. Assessing risk helps identify critical failure modes; the higher the RPN, the higher the priority. The RPN is a product of three numbers:

severity of a failure, occurrence of a failure, and the detectability of a failure. All these numbers are given a value on a scale of 1 to 10. The minimum value of RPN is 1 and the maximum value is 1000.
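The RPN arithmetic described above, three ratings each on a 1 to 10 scale multiplied together, can be sketched as follows; the example ratings are illustrative, not taken from the course:

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: the product of three 1-10 ratings."""
    for value in (severity, occurrence, detection):
        if not 1 <= value <= 10:
            raise ValueError("each rating must be on a 1 to 10 scale")
    return severity * occurrence * detection

print(rpn(1, 1, 1))     # minimum possible RPN: 1
print(rpn(10, 10, 10))  # maximum possible RPN: 1000
print(rpn(9, 3, 5))     # an illustrative failure mode: 135
```

Because detection is rated inversely (an easily detected failure gets a low number), a high RPN always points at a failure that is severe, frequent, and hard to catch.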

A failure mode with a high occurrence rating means the failure mode occurs frequently. A mode with a high severity rating means that the mode is critical to operations. A mode with a high detection rating means that the current controls are not capable of detecting the failure. In the next screen we will look at the FMEA table.

the FMEA table helps plan Improvement

initiatives by underlining why and how

failure modes occur and helps

organizations plan for their prevention

typically FMEA is applied on the output

of root cause analysis and is a better

tool for Focus or prioritization as

compared to multi-voting one important

aspect of FMEA is that it does not need data; experts in a particular area can form the FMEA table without having to look at data. In functions such as human resources, the FMEA table is very useful as there might not be much data available to the team.

the sample FMEA table is given on the

screen please go through the contents

in the following screen we will discuss

severity of risk priority number and scale criteria.

severity refers to the seriousness of

the effect of the failure mode, or how critical the failure mode is to the customer. The severity of a failure mode is rated on a scale of 1 to 10 using a severity table.

different Industries follow different

structures for the severity table

a high severity rating indicates a mode

is critical to operational safety

For example, a team working on FMEA of a radioactive plant may insert fatal as the highest severity criterion. Another example is the severity table for a sports team. The team manager wants to rate the severity of failure of the team in an important game. She might rate it at 9, given that the team would lose a big sponsorship should they face defeat, which could in turn be hazardous to the team's future. Shown here is a generalized table of severity ratings.

the severity rating can never be changed

for example if a mode has a rating of 9

before Improvement it will continue to

have a rating of nine after Improvement

Let us look at occurrence of RPN and scale criteria. Occurrence is the probability that a specific cause will result in the failure mode.

as with severity occurrence is rated on

a scale of 1 to 10 based on a table

Like the severity table, the higher the occurrence of a failure, the higher is its rating. Again, this table might vary

depending on the industry and scenario

sometimes the project team can use data

here if available based on past data the

probability of occurrence of a failure

can easily be rated shown here is a

generalized table let us next look at

detection of rpn and scale criteria

detection is the probability that a

particular failure will be detected the

table shown here is again a generalized table. The rating here is a bit different from severity and occurrence: the

higher the detectability of a failure

lower is its rating this is because if

the failure can easily be detected then

everyone would know of it and therefore

there would be less or no damage for

example if detection is impossible the

failure is given a rating of 10. please

note that at the start of a Six Sigma

project the failure mode is given a

relatively High detection rating let us

look at an example of FMEA and RPN in the next screen. In this example, a bank wants to recognize and prioritize the risks involved in the process of withdrawing money.

it can be observed from the table that

not having a control in place for

network issues has the highest RPN; this is due to the poor detectability of a network failure.

the next set of information in the table

shows the action taken by the bank's

management to address the failure modes

Following the implementation, the new RPN is calculated, retaining the severity level at nine. This is because the actions were not directed at reducing the severity but at the causes of failure. It can be seen that the new RPN is much lower and the risk for both failure modes has reduced.
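Prioritizing failure modes by RPN, as the bank does in this example, is simply a descending sort on the severity, occurrence, and detection product. A minimal sketch with hypothetical failure modes and ratings (the actual table values from the course are not reproduced in this transcript):

```python
def prioritize(failure_modes):
    """Sort failure modes by descending RPN (severity x occurrence x detection)."""
    return sorted(
        failure_modes,
        key=lambda fm: fm["severity"] * fm["occurrence"] * fm["detection"],
        reverse=True,
    )

# Hypothetical ratings for illustration only.
modes = [
    {"name": "network issue", "severity": 9, "occurrence": 4, "detection": 8},
    {"name": "cash shortage", "severity": 9, "occurrence": 2, "detection": 3},
]

for fm in prioritize(modes):
    value = fm["severity"] * fm["occurrence"] * fm["detection"]
    print(fm["name"], value)
```

With these assumed numbers, the network issue (RPN 288) outranks the cash shortage (RPN 54), mirroring how the bank's hardest-to-detect failure tops the list.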

This lesson will cover the details of Six Sigma project selection. Six Sigma can be applied to almost everything around us. It can be applied across almost

70 different sectors however it cannot

be applied to all problems the first

step is to check if the project

qualifies to be a Six Sigma project the

questions that need to be asked are as follows.

is there an existing process

To implement the DMAIC methodology of problem solving, a process needs to exist; the process should be in operation for the development of the product or service.

is there a problem in the process

Ideally, the process should not have any problems. If there is a problem in the process performance, the process needs to be improved.

the problem has to be measurable in

order to assess the root cause and the

impact of the problem on the process

Does the problem impact customer satisfaction? If the problem affects customer satisfaction, an action needs to be taken immediately, else the customer may start finding alternate products or switch to a competitor.

Does working on the problem impact business profits?

it is very essential to assess the

impact of the project on the profits of

the company if the project affects the

profits of the company adversely then

such a project is not feasible

is the root cause of the problem unknown

if the root cause of the problem is

visible then a Six Sigma project is not

required other problem-solving

techniques can be used in this case

if the solution to the problem is

already known then there is no need for

any project; the company can directly implement the solution. Now that the Six Sigma project selection process is known, the define phase of DMAIC will be discussed. Define is the first phase in the Six Sigma DMAIC methodology.

in the Define phase the problem is

defined and the Six Sigma team is formed

The objectives of the define phase are to clearly define the problem statement; understand customer requirements and ensure that the Six Sigma project goals are aligned to these requirements; define the objectives of the Six Sigma project; plan the project in terms of time, budget, and resources; and define the team structure for the project and establish roles and responsibilities.

In the next screen let us learn about benchmarking. Benchmarking is the process of comparing an organization's business processes, practices, and performance metrics with those of other organizations or industry bests.

there are various types of benchmarking

let us briefly look at each type

process benchmarking entails comparing

specific processes to a leading company

this is useful to obtain a simplified

view of business operations and enables a focused review of major business processes. It includes comparisons of production processes, data collection processes, performance indicators, and productivity.

Financial benchmarking is performed to

assess overall competitiveness and productivity. It is done by running a detailed financial analysis and comparing the results.

performance benchmarking involves

comparison of products and services with

those of competitors with the intention

of evaluating the organization's competitive position. Product benchmarking involves designing new products or services or upgrading existing ones.

this can involve reverse engineering a

competitor's products to study the

strengths and weaknesses and modeling

the new product on these findings

strategic benchmarking refers to

studying strategies and problem-solving

approaches in other Industries

functional benchmarking is the focused

analysis of a single function with the aim of improving that function. Complex functions may need to be divided into processes before benchmarking is carried out.

competitive benchmarking includes

standardizing organizational strategies

process products services and procedures

against the competitors in the same industry.

collaborative benchmarking is a type of

benchmarking where the standardization

of various business parameters is

carried out by a group of companies. If the subsidiary units of a company or its various branches carry out the benchmarking, it is called collaborative benchmarking.

let us take a look at Best Practices for

benchmarking in the next screen

best practice is a method that ensures

continuous Improvement leading to

exceptional performance it is also a

method to sustain and develop the

process continuously some of the best

practices in benchmarking are as follows

increase the objectives or scope of benchmarking; set the standards and path to be followed; reduce unnecessary effort and comply with the standards; recognize the best in the industry to learn from; and share the information derived from benchmarking.

in the next screen we will discuss

let us understand process business

process, and business system in this screen.

a process is a series of steps designed

to produce a product or service to meet customer needs. A process mainly consists of three elements: input, process, and output.

a business process is a systematic

organization of objects such as people

machinery and materials into work

activities designed to produce a required end result. As shown on the screen, a process is a part of a business process, which in turn is a part of a business system. A business system is a value-added chain of various business processes.

for example payroll calculation is a

process in the HR business process of an

I.T company which is a business system

In the next screen we will look at the challenges to business process improvement. Let us discuss the challenges to business process improvement in this screen.

the Improvement to a business process of

an organization faces challenges due to

the traditional business system

structure because it is generally

grouped around the functional aspect

the main problem in a functionally

grouped organization is the movement or

flow of a product or service a product

or service has to go through various

functions and their functional elements

to reach the customer or end user

the other problem is management of the

flow of products or services across functions. This is difficult as usually there is no single owner of the entire flow. These business process improvement problems can be solved using the project management approach to produce the desired results. In the next screen we will learn about process owners and stakeholders.

the representation of where the process

owner and stakeholders are placed in the

organizational hierarchy is on the

screen the process owner is a person at

a senior level in the hierarchy

he is the one who takes responsibility

for the performance and execution of a

process and also has the authority and

the ability to make necessary changes

on the other hand a stakeholder is a

person group or organization which is

affected by or can affect an organization's actions.

businesses have many stakeholders like

stockholders customers suppliers company

management employees of an organization

and their families the society Etc

let us discuss the effects of process

failure on various stakeholders in the next screen. While it is an absolute business necessity to keep every stakeholder satisfied at all times, failure to meet

one or more process objectives may

result in negative effects on them

in such situations for the stockholders

the perceived value for the company gets

reduced customers May seek other

competitors for their deals while

imposing penalties and finding recourse

in legal action against the company

suppliers may be on the losing front

with delayed payments or not being paid

at all. Company management may require cost cuts, and employees will receive

diminishing wages the community and

Society will be affected due to

pollution created by the organization

in the next screen we will understand

the relationship between business and stakeholders.

in the diagram shown on the screen each

stakeholder is both a supplier as well

as a customer forming many closed-loop

processes that must be managed

controlled, balanced, and optimized for the business to succeed. Communication is the key in such situations and is facilitated through feedback.

the next screen covers the importance

and relevance of stakeholder analysis

stakeholder analysis is an important

task to be completed before doing a Six Sigma project.

a business has many stakeholders and any

change to a business process affects

some or all of them when a process does

not meet its objectives it results in

the stakeholders being negatively

affected, which in turn affects the business. The Six Sigma team must factor in the reasons why a stakeholder may oppose the project.

Let us proceed to the next topic of this lesson. In this topic we will discuss voice of the customer. Let us start with how to identify the customer in the following screen. Customers are the most important part of any business.

a customer is someone who decides to

purchase pays consumes and gets affected

by a particular product or service

It is important to understand the customer requirements so that the products or services can be designed according to these requirements. Consequently, the company is able to provide products or services the customers are willing to purchase.

there are two types of customers

internal and external customers

in the next screen we will learn about

an internal customer can be defined as

anyone within the business system who is

affected by the product or the service

Most often, internal customers are the employees of the organization. For example, let us assume that there is a series of processes in a particular business system.

in such a scenario the second process is

the internal customer for the first

the third process is an internal

customer for the second process, and so on.

the basic needs of an internal customer

are to be provided proper tools and

necessary equipment imparted proper

training and given Specific Instructions

to carry out their responsibilities

However, the needs are not limited to these. Other needs include the provision of storyboards to display information, team meetings to share business news and announcements, staff meetings to share information, and quality awards from the management.

An internal customer is important for several reasons. First of all, the activities of an internal customer directly affect the final or external customer. Secondly, the activities of an internal customer affect the next process in the series.

finally an internal customer also

affects the quality of the product

developed or service provided

When the needs of the internal customers, who in most cases are employees, are met, they are more likely to have higher perceptions of quality and also contribute to improving it.

the satisfaction levels of the internal

customers can be improved in various

ways. These include a higher amount of internal communication through company newsletters, recognition for work through quality awards, etc., and constant training on how to stay ahead. A good work environment is very essential too. In the next screen we will learn about external customers.

this screen focuses on the positive

effects of a project on the customers

the most important aspect of any process

Improvement project is the customers

internal customers are the ones who

drive the project hence the effect of

the project on internal customers is a

critical factor that needs to be considered.

the positive impact of a project on the

internal customers is as follows

the project is driven by highly

motivated individuals or internal

customers who are aware of the project goals.

individuals belonging to a credible

project understand the project

deliverables and display high levels of

job satisfaction these individuals go

the extra mile to take up tasks beyond their responsibilities.

such individuals make a highly motivated

team focused on delivering their

responsibilities in order to meet the

customer requirements working together

in a positive environment also improves teamwork.

the positive impact of a project on the

external customers is as follows

process Improvement projects analyze the

problems and come up with an effective

solution, consequently ensuring a better product or service. A successful process improvement project assists the organization in effectively meeting customer expectations or requirements. There is visible improvement in customer satisfaction, and good quality products and services ensure customer loyalty.

let us learn about different methods of

customer data collection in the following screen.

once you begin to identify the customer

types, you need to move on to collecting customer data. Collecting data

from customers is very essential as it

helps consider the levels at which these

customers affect the business. Begin by collecting feedback from both internal and external customers. Customer feedback helps fill the gaps and improve the various business processes.

it helps Define a good quality product

as perceived by the customer and

identify qualities that make the

competitors products or service better

it also helps identify factors which

provide a competitive edge to the organization.

there are various methods to collect

feedback from the customers many of you

might be involved in a similar activity

popular and common methods are surveys

conducted through questionnaires focus

groups, and individual interviews with the customers. Customer complaints received via call centers, emails, and feedback forms are other common sources; feedback received in this form comes directly from the customers. In the next screen we will learn about questionnaires.

let us discuss the advantages and

disadvantages of questionnaires in this

the advantages of a questionnaire are

that it costs less the phone response

rate is high anywhere from around 70 to

90 percent, and it produces faster results. Also, analysis of mail questionnaires requires few trained resources. Although the questionnaire is a widely used method to gather data, there are disadvantages associated with it.

there may be incomplete results and

unanswered questions, leading to a lack of information. The response rate of mail surveys is low. At times, phone surveys can produce undesirable results as the interviewer can influence the person being surveyed.

we will differentiate between telephone

survey and web survey in the next screen

there are different methods to collect

data for a survey. The method needs to be chosen

based on the requirements and needs of

the organization the popular methods of

survey are the telephone survey and web

both have their own drawbacks and

benefits which are given on the screen

the organization needs to choose a

method of collecting data according to its requirements.

it is recommended to go through the

content for a better understanding

in the next screen we will learn about

let us now discuss the advantages and

disadvantages of using a focus group for

data collection the interaction in a

focus group generates information

provides in-depth responses and can

address more complex questions and gather qualitative data. It is an excellent platform to get critical to quality or CTQ inputs.

on the other hand the disadvantages of

focus groups are that the learning only

applies to those within the group and it cannot be generalized. The information collected is more qualitative than quantitative, which is difficult to analyze.

additionally they can also generate a

lot of information from anecdotes and

incidents experienced by the individuals

in the next screen we will discuss the interview technique.

this screen discusses advantages and

disadvantages of using the interview

technique for data collection

interviews have a capability to handle

complex questions and a large amount of information. They also allow the use of visual aids.

it is a better method to be employed

when people do not respond willingly and

or accurately by phone or email however

there are some shortcomings as well

interviews are time consuming and the

resources or interviewer needs to be

trained and experienced to carry out the

task. Let us discuss the importance and urgency of these inputs in the next screen.

the table shows the importance and

urgency of different kinds of input

to understand the kind of input to be

chosen different kinds of methods for

collecting data are identified

Telephone survey, web survey, and interview are the data collection methods identified. To select the best method, the criteria or the factors which are important to the organization are listed. The criteria are the factors based on

which an organization is going to make

decisions the list of factors is then

given weightage based on the importance

of each factor in decision making as

seen, cost is the most important criterion, with a weightage of 20. Response rate of the customer is the next important factor, and the list follows.

visualizing feature and compiling and

analyzing data are the factors which

have the lowest impact on the decision

of selecting the methods for data

each of the data collecting methods is

rated between 1 and 10 based on its

impact on the listed factors with 10

being highly favorable to the

organization and one being least favorable.

after rating all the methods with the

factors listed the sum or total is

calculated. The calculation of the total involves multiplying each method's rating with the factor weightage and adding all the multiplied values of the column. That is, for telephone survey: 8×12 + 8×6 + 3×20 + 5×5 + 3×5 + 7×15 + 1×10 + 7×3 + 0×2 + 3×2 + 1×10 + 7×5 + 8×5 = 471.

in a similar way calculate the total

value for the remaining two methods. The totals of the other two methods are 744 and 522 respectively. Looking at the overall totals of the methods, 744 is the highest; hence web survey is the best method for the organization to use for data collection.
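The weighted total described above is simply a sum of rating times weightage products for each factor. A minimal sketch using the telephone survey ratings and factor weightages read out in the transcript:

```python
def weighted_total(ratings, weights):
    """Score a data collection method: sum of rating x factor weightage."""
    return sum(r * w for r, w in zip(ratings, weights))

# Factor weightages (they sum to 100) and the telephone survey's
# ratings against each factor, as read out in the transcript.
weights = [12, 6, 20, 5, 5, 15, 10, 3, 2, 2, 10, 5, 5]
telephone = [8, 8, 3, 5, 3, 7, 1, 7, 0, 3, 1, 7, 8]

print(weighted_total(telephone, weights))  # 471
```

Running the same function over the web survey and interview columns would reproduce the 744 and 522 totals, confirming web survey as the highest-scoring method.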

let us look at the pros and cons of

customer complaints data in the next screen.

there are pros and cons in gathering

information from customer complaints

advantages include availability of

specific feedback directly from the

customer and ease in responding

appropriately to every customer on the

contrary feedback in this method does

not provide an adequate sample size and

may lead to process changes based on one

or two inputs from the customer the next

screen will discuss the difference

between product complaint and expedited service request.

product complaints and expedited service

requests can act as inputs to the

company for improving their process

these details address the needs of the customers.

a product complaint means that the

customer is not happy with the product

that he has purchased from the company

an expedited service request means a

service request is being rushed if the

customer requires the items immediately

then an expedited service request is

raised from the customer and the

organization tries to fulfill it at the earliest.

product complaint implies that a product

is not meeting customer specification

expedited service request implies that

service timelines are not meeting customer requirements and hence service has to be expedited.

product complaint also implies that the

customer needs for product are not

completely identified whereas expedited

service request implies that the

customer timings need to be recalculated

let us discuss the importance and

urgency of these inputs in the next screen.

the table shows the importance and

urgency of different kinds of input

to select the best methods the criteria

or the factors which are important to the organization are listed.

the criteria are the factors based on

which the organization is going to make decisions. These factors are then given weightage based on the importance of each factor in decision making.

as seen cost involved and identification

of customer need are the most important

criteria, for which the weightage given is 15 each, and the list follows. Time consumption and compiling and analyzing

data are the factors which have the

least impact on the decision of

selecting the methods for data

each method is rated between 1 and 10

based on its impact on the listed

factors with 10 being highly favorable

to the organization and one being least

favorable to the organization

After rating all the methods against the factors listed, the sum or total is calculated. The calculation of the total is derived by

multiplying each method's rating with

the factor weightage and adding all the

multiplied values. That is, for product complaint: 8×15 + 4×15 + 3×2 + 1×10 + 1×10 + 1×10 + 1×8 + 1×10 + 4×2 + 1×8 + 1×10 = 260. In a similar way, calculate the total value for expedited service request. The total of expedited service request is 817; hence it is the more effective input for the organization.

let us discuss the key elements of data

collection tools in the next screen

data collection tools will be selected

based on the type of data to be

collected the key elements that make

these tools effective are as follows

data is collected directly from the

primary source or customer hence there

is no scope for miscommunication or loss of information.

data is collected exclusively for the

stated purpose hence data is highly

reliable. The data is captured after understanding the organizational purpose; this makes the data exclusively relevant and serves the purpose of the organization.

data is collected instantaneously when

there is a requirement this ensures that

the data is up to date

the tools accurately Define customer

requirements the customer requirements

could be current needs or Improvement to

the product or service that they are using

the tools help to get enough information

about customer requirement through which

the process for improving or creating

the product or service that the customer needs can be defined

in the next screen we will discuss how

the collected data can be reviewed

collated data must be reviewed to

eliminate vagueness ambiguity and any bias

Nutri Worldwide buys laptops for its

employees from a company that is into

manufacture and sales of laptops the

company also provides servicing and

repairs for their products to the customers

to understand the level of customer

satisfaction in Nutri Worldwide and to

improve its process the laptop company

is conducting a survey

the questionnaire before review had questions that led to ambiguity

let us look at each item on the survey

to understand the level of usage of the

laptop and to know their customer better

the survey is raising a question related

to the occupation of the customer it

gives the option of student or

professional but with this low amount of

information the company is neither able

to gather the information nor will the

given option cover the entire possible

occupation in the market including an

option of other please specify would

help the customer to choose and provide

the information if he does not belong to

one of the two given groups hence the

same is added in the review so that the

customer will not be in any ambiguity

while filling the questionnaire

the question whether the sales executive

was supportive with an option of yes or

no is a question which leads again to

ambiguity and unintended bias the

customer might be partially happy or

partially not happy but the choice does

not let them inform their exact feeling

if the customer selects no as the option

then the company does not get enough

information to understand where the problem lies

hence in the reviewed questionnaire the

customer is asked to rate the qualities

of their sales executive which will

provide better data to the company

next we will discuss a technique named voice of the customer

the voice of customer is a technique to

organize analyze and profile the

customer's requirements voice of the

customer is an expression for listening to the customer's needs

consider customers with different requirements while purchasing an air

conditioner in all cases the customer is

purchasing for his or her domestic usage

each customer is further categorized

according to his needs and requirements

when the customer says that he needs a

silent air conditioner he needs sound

sleep at night in the bedroom this is

primarily to remain fresh the next

morning and to get rid of the noisy

ceiling fan being used currently

in case the customer says that he needs

an efficient AC he needs a machine which

provides good cooling at night in the

bedroom this is mainly because it gets

extremely hot in summer also he

currently uses a ceiling fan which is

not so effective in Summers on the other

hand when the customer wants to buy an

AC which is not too costly he has

limited cash for the purchase he wants

let us discuss the importance of

translating customer requirements in the next screen

customer requirement is the data

collected from customers that gives

information about what they need or want

from the process customer requirements

are often high level vague and nonspecific

some customers may give you a set of specific requirements for the business

but broadly customer requirements tend to be vague

customer requirements when translated

into critical process requirements that

are specific and measurable are called

critical to Quality ctq factors a fully

developed ctq has four major elements

output characteristic y metric Target

and specification or tolerance limits

we will discuss the meaning of ctq in detail

let us understand what Quality Function Deployment is

qfd is a process to ensure that the customer's wants and needs are heard and met

it is also known as the voice of the customer

qfd is a process to understand the

customer's needs and translate them into

a set of design and Manufacturing

requirements while motivating businesses

to focus on their customers it also

helps companies to design and build more

competitive products in less time and at a lower cost

qfd helps in prioritizing customer

requirements recognizing strengths and

weaknesses of an organization and

recognizing areas that need to be worked

on and areas that need immediate focus

qfd is carried out by asking relevant

questions to the customers and

tabulating them to bring out a set of

parameters critical to the product

design let us discuss phases of qfd in

quality function deployment involves four phases

phase one product planning

in this phase the qfd team translates

the customer requirements into product technical requirements

phase two product design in this phase

the qfd team translates the identified

technical requirements into key part characteristics

phase three process planning in this

phase the qfd team identifies the key

process operations necessary to achieve

the identified key part characteristics

phase 4 production planning or process control

in this phase the qfd team establishes

process control plans maintenance plans

and training plans to control operations

next we will understand the structure of the house of quality hoq

let us see what happens after completing one hoq Matrix

completing one hoq Matrix is not the end

the output of the first hoq Matrix can

be the first stage of the second qfd

phase as shown in the image the

translation process is continued using

linked hoq type matrices until the production planning targets are defined

let us proceed to the next topic of this

lesson in the following screen hey there

Learners check out our certified lean

Six Sigma Green Belt certification

training course and earn a Green Belt

certification to learn more about this

course you can click the course Link in the description box below

in this topic we will discuss the basics

of project management let us start with

a discussion on problem statement

every Six Sigma project targets a

problem that needs to be resolved the

first step of project initiation is

defining the problem statement a problem

statement needs to describe the problem

in a clear and concise manner a problem

statement needs to identify and specify the problem

it should indicate the current

performance state of a process and the

required performance State completely

derived from customer requirements a

problem statement should be quantifiable

this means it should have specified

metrics including the respective units

please note that the problem statement

cannot contain Solutions or causes for

the problem in the next screen we will

discuss the is or is not template

the is or is not technique was first

popularized by Kepner-Tregoe Incorporated

in the 1970s it is a powerful tool that

helps Define the problem and gather

required information an example of a

problem statement of paper cup leaks is shown

the Six Sigma team has to answer what is

the problem what isn't the problem where

is it where isn't it when is it when

isn't it the problem to what extent is

it a problem and to what extent isn't it

the information is then used to fill the

question areas in the is and is not

issue template in the analysis phase if

a cause cannot describe the is and the

is not data then it's not likely the

main cause in the next screen we will

list the criteria for the project objectives

the project objectives must meet the characteristics desired in project

objectives specific measurable attainable relevant time-based and stretch

the project deliverables should be

specific example hospitals maintain

records of all patients often a few

forms are rejected or missed due to

errors in recording the ID numbers

in this case setting the objective as

reduce form rejection is very vague

instead reduce patient ID errors in

recording lab results is specific and

effectively targets solving the problem

the project objectives should be

quantifiable example setting the

objective as fewer form rejections is

very vague instead reduce patient ID

errors by 30 percent sets a specific quantifiable objective

the project objectives should be

achievable and practical the project

objectives should be relevant to the

problem the project objectives must

specify a time frame within which they

should be delivered the project

objectives must not be easily achievable

example most problems and errors can be

reduced by creating awareness hence the

objective must stretch beyond the easily achievable

in the next screen we will understand project documentation

project documentation refers to creating

documents to provide details about the project

such documents are used to gain a better

understanding of the project prevent and

resolve conflict among stakeholders and

Share Plans and status for the project

documentation of a project is critical

throughout the project some of the

benefits achieved through project

documentation are mentioned below

documentation serves as written proof

for execution of the project it helps

teams achieve a common understanding of

the requirements and the status of the project

it removes personal bias as there is a

documented history of discussions and

decisions made for the project

depending on the nature of the project

each project produces a number of

different documents some of these

documents are the project Charter

project plan and its subsidiary plans

other examples of project documentation

include project status reports including

key Milestones report risk items and

pending action items the frequency of

these reports is determined by the need

and complexity of the project these

reports are sent to All stakeholders to

keep them abreast of the status of the

project another example of project

documentation is the final project

report this report is prepared at the

end of the project and includes a

summary of the complete project

project storyboard inputs generated from

spreadsheets checklists and other

miscellaneous documents are also

classified as project documents

in the next screen we will understand

we will list the project Charter

sections in this screen the major

sections of a project Charter are

project name and description business

requirements name of the project manager

project purpose or justification

including Roi stakeholder and

stakeholder requirements broad timelines

major deliverables constraints and

assumptions and the budget summary of

the charter in the next screen we will discuss the project plan

a project plan is the final approved

document which is used to manage and

control the various processes within the

project and ensure its seamless execution

the project manager uses the project

Charter as an input to create a detailed project plan

a project plan comprises various

sections prominent among them being the

project management approach the scope

statement the work breakdown structure

the cost estimates scheduling defining

performance baselines marking major

Milestones to be achieved and the key

members and required staff personnel for the project

it also includes the various open and

pending decisions related to the project

and the key risks involved additionally

it also contains references to other

subsidiary plans for managing risk scope schedule and cost

in the next screen we will learn about

we will look at different techniques

used for interpreting the project scope

in this screen project scope can be

interpreted from the problem statement

and project Charter using various tools

like the Pareto chart and the SIPOC map

the principle behind the Pareto chart or the 80 20 principle as we know it is

that 80 percent of the effects come from 20 percent of the causes

the Pareto chart helps the teams to

trim the scope of the project by

identifying the causes which have a

major impact on the outcome of the

the SIPOC map is a high level process

map which helps all team members in

understanding the process functions in

terms of addressing questions like who

are the suppliers what are the inputs

they provide what are the outputs that

can be obtained and who are the customers

as discussed earlier SIPOC stands for

suppliers inputs process outputs and customers

in the subsequent screen we will learn

SIPOC is a macro level map that provides an overview of the business process

whereas a process map is a micro level flowchart that provides an in-depth

view of the process

the process map covers details at all

levels and provides a walk-through of the process

the SIPOC map is used as a basis while creating the process map

a level 1 process map provides in-depth

information but the final process map

drills further into detail in the

following screen we will understand consequential metrics

let us discuss consequential metrics in this screen

consequential metrics measure any

negative consequences these can be

business metrics process metrics or both

they measure the negative effects of

improving the primary or key metrics

they are used to measure the indemnity

triggered by any damage in the project

the inconsistent use of consequential

metrics can lead to loss of opportunity

and rework after a project ends

consequential metrics help to understand

the cause and effect relationship

between the primary and the secondary

metrics and the impact they have on the project

let us take a look at an example for

consequential metrics in the next screen

we will discuss the best practices in consequential metrics

the following are some of the best

practices of consequential metrics

setting consequential metrics during the

measure phase and monitoring these

metrics after finalizing the project

will help to analyze whether the link

between previous primary and secondary metrics holds

also linking consequential metrics with

primary metrics and finally linking them

with secondary metrics provides Clarity

on the impact of these metrics

assessing and evaluating the cause and

effect relationship between these

metrics is helpful to the organization

as a whole in the next screen we will

list some project planning tools

the project manager uses various tools

to plan and control a project

one of the tools which he uses is the

Pareto chart other prominent tools

include the network diagram the critical

path method also called CPM the program

evaluation and review technique which is

also known as pert Gantt charts and the

work breakdown structure also known as

WBS in the next screen we will discuss the Pareto chart

Pareto chart is a histogram ordered by

the frequency of occurrence of events it

is also known as the 80 20 rule or the vital few rule

it helps project teams to focus on the

issues which cause the highest number of defects

to explain further the given chart plots

all the causes for defects in a product

or service the values are represented in

descending order by bars and the

cumulative total is represented by the line

Pareto chart emphasizes that 80 percent

of the effects come from 20 percent of the causes

thus a Pareto chart Narrows the scope of

the project or problem solving by

identifying the major causes affecting quality

Pareto charts are useful only

when required data is available

if data is not available then other

tools such as brainstorming and

multi-voting should be used to find the causes

in the following screen we will continue to discuss the Pareto chart with an example

a hotel receives plenty of complaints

from its customers and the hotel manager

wishes to identify the key areas of

complaints complaints were received in

the following areas cleaning check-in

pool timings minibar room service and others

cleaning and check-in can be noted as key areas of concern with 35 and 19

complaints respectively

the percentage is calculated for each cause of complaint and the cumulative is

derived and the Pareto chart is plotted using this data
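the Pareto calculation above can be sketched as follows this is an illustrative example not course material only the cleaning 35 and check-in 19 counts come from the transcript the other counts and the helper name pareto are our own

```python
def pareto(counts):
    """Return (cause, count, cumulative %) rows, sorted by descending count."""
    total = sum(counts.values())
    rows, running = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        running += n
        rows.append((cause, n, round(100 * running / total, 1)))
    return rows

# Hypothetical hotel complaint counts; cleaning and check-in dominate.
complaints = {"cleaning": 35, "check-in": 19, "pool timings": 6,
              "minibar": 5, "room service": 4, "other": 3}
for row in pareto(complaints):
    print(row)
```

the bars of the Pareto chart are the counts in descending order and the cumulative percentage column is the line plotted over them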

in the next screen we will discuss

Network diagrams are one of the tools

used by the project manager for project

planning they are also sometimes

referred to as Arrow diagrams because

they use arrows to connect activities and represent the interdependencies

between activities of the project

there are some assumptions that need to

be made while forming the network

diagram the first assumption is that

before a new activity begins all pending

activities have been completed

the second assumption is that all arrows

indicate logical precedence this means

that the direction of the arrow

represents the sequence that activities follow

the last assumption is that a network

diagram must start from a single event

and end with a single event there cannot

be multiple start and endpoints to the

network diagram in the next screen let

us discuss some terms related to network

for the network diagram to calculate the

total duration of the project the

project manager needs to Define four

dates for each task the first two dates

relate to the date by when the task can

be started the first date is early start

this is the earliest date by when the task can be started

the second date is late start this is

the last date by when the task should be started

the second two dates relate to the dates

when the task should be complete

early finish is the earliest date by

when the task can be completed late

finish is the last date by when the task should be completed

the duration of the task is calculated

as the difference between the early

start and early finish of the task

the difference between the early start

and late start of the task is called the

slack time available for the task

slack can also be calculated as the

difference between the early finish and

late finish dates of the task

slack time or float time for a task is

the amount of time the task can be

delayed before it causes a delay in the project
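the four dates and the slack arithmetic above can be sketched in a few lines this is a minimal illustration the day numbers are hypothetical and the function name slack is our own

```python
def slack(early_start, late_start, early_finish, late_finish):
    """Slack (float) = late start - early start = late finish - early finish."""
    assert late_start - early_start == late_finish - early_finish
    return late_start - early_start

# Hypothetical task: can start day 3 at the earliest, day 5 at the latest,
# and finish day 7 / day 9 respectively, so it can slip 2 days without
# delaying the project. Duration = early finish - early start = 4 days.
print(slack(3, 5, 7, 9))  # 2
```

a task with zero slack cannot slip at all which is what characterizes the critical path discussed next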

in the next screen we will discuss

critical path method also known as CPM

is an important tool used by project

managers to monitor the progress of the

project and to ensure that the project stays on track

the critical path for a project is the

longest sequence of tasks on the network diagram

the critical path in the given Network

diagram is highlighted in Orange

critical path is characterized by zero

slack for all tasks on the sequence this

means that the smallest delay in any of the tasks on the critical path will

cause a delay in the overall timeline of

this makes it very important for the

project manager to closely monitor the

tasks on the critical path and ensure

that the tasks go smoothly if needed the

project manager can divert resources

from other tasks that are not on the

critical path to task on the critical

path to ensure that the project is not delayed

when a project manager removes resources

from such tasks he needs to ensure that

the task does not become a critical path

task because of the reduced number of

resources during the execution of the

project the critical path can easily

shift because of multiple factors and

hence needs to be constantly monitored
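the forward and backward passes that find the critical path can be sketched as follows this is an illustrative implementation not course material the five-task network and the function name cpm are our own

```python
def cpm(tasks):
    """tasks maps name -> (duration, [predecessors]); entries must be listed
    with predecessors before successors. Returns (critical task set, length)."""
    es, ef = {}, {}
    # Forward pass: early start = latest early finish among predecessors.
    for name, (dur, preds) in tasks.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_length = max(ef.values())
    ls, lf = {}, {}
    # Backward pass: late finish = earliest late start among successors.
    for name in reversed(list(tasks)):
        dur, _ = tasks[name]
        successors = [s for s, (_, ps) in tasks.items() if name in ps]
        lf[name] = min((ls[s] for s in successors), default=project_length)
        ls[name] = lf[name] - dur
    critical = {n for n in tasks if ls[n] == es[n]}  # zero-slack tasks
    return critical, project_length

network = {
    "A": (3, []), "B": (2, []),
    "C": (4, ["A"]), "D": (1, ["B"]),
    "E": (2, ["C", "D"]),
}
print(cpm(network))  # critical tasks are A, C and E; project length is 9
```

every task whose late start equals its early start has zero slack and together those tasks form the critical path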

a complex project can also have multiple

critical paths in the next screen we

will discuss the program evaluation and review technique

we will understand the concept of risk

risk is an uncertain event or a consequence probable of occurring during the project

the main objectives of any project are scope time cost and quality

risk affects at least one of these four objectives

it is important to understand that risk

can be both positive as well as negative

a positive risk enhances the success of

the project whereas a negative risk is a

threat to a Project's success some of

the terms used in Risk analysis and

management are risk probability issue

and risk consequences the likelihood

that a risk will occur is called risk probability

to assess any risk is to assess the

probability and impact of the risk

issue is the occurrence of a risk risk

consequences are the effects on Project

objectives if there is an occurrence of the risk

in the subsequent screen we will

understand the process of risk analysis

we will list and understand some of the

elements of risk analysis in this screen

qualitative methods like interviews checklists and brainstorming are used to identify risks

quantitative methods are data based and a computer is

are data based and a computer is

required to calculate and analyze these

methods are used to evaluate the cost

time and probabilistic combination of

feasibility is the study of the project

risk this is usually carried out in the

beginning of the project when the

project is most flexible and risks can

be reduced at a relatively low cost

it helps in deciding different

implementing options for the projects

potential impact once the potential risks are identified the impact of these

risks is calculated

using this data possible solutions for handling the risks are identified

the rpn or risk priority number of a failure is the product of its

probability of occurrence severity and detectability

a failure is prioritized based on its rpn

a high rpn indicates high risk
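the rpn product can be sketched directly the ratings below are hypothetical and the function name rpn is our own the 1 to 10 scale is the convention commonly used in fmea

```python
def rpn(occurrence, severity, detection):
    """Risk priority number = occurrence x severity x detection,
    each rated on a 1-10 scale; a higher RPN means higher risk."""
    for v in (occurrence, severity, detection):
        if not 1 <= v <= 10:
            raise ValueError("ratings are expected on a 1-10 scale")
    return occurrence * severity * detection

print(rpn(7, 8, 5))  # 280
```

failures are then worked on in descending order of rpn so the highest-risk failure modes are addressed first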

when potential risks are identified

their impact in terms of cost time

resources and objective perspective is

calculated if the impact is huge then

avoiding the risk is the best option

mitigating risk mitigating is the second

option when dealing with risks the loss

that arises from mitigating a risk is much less than the loss that arises

from the occurrence of the risk

accepting the risk if a risk cannot be

avoided or mitigated then it has to be

accepted the risk will be accepted if it

doesn't greatly impact the cost time and scope of the project

in the following screen we will discuss the benefits of risk analysis

benefits of risk analysis are as follows

once the risk has been identified it can

be either mitigated transferred or accepted

when risk is identified in a task slack

time is provided as a buffer identifying

risks also helps in setting up an actual

slack time for an activity in a project

could be the result of a risk identified

identifying risks helps in setting

realistic expectations from the project

by communicating the risk probability and impact

risk analysis also helps to identify and

plan contingency activities if the risk occurs

the project team is then well prepared

to work on the issue thereby reducing

the impact of the risk in the following

screen we will take a look at the risk assessment matrix

the potential risks of a project are

assessed using the risk assessment

Matrix it covers potential risk areas

like project scope team Personnel

material facility and equipment and

each of these areas is assessed in terms

of risk of loss of money productivity

resources and customer confidence

in the subsequent screen we will discuss project closure

by definition a project has a beginning

and an end but without a formal closure

process project teams can fail to

recognize the end and then the project

can drag on sometimes at Great expense

every project requires closure for

larger complex projects it's a good idea

to close each major project phase for

project closure ensures that outcomes

match the stated goals of the project

customers and stakeholders are happy

critical knowledge is captured

the team feels a sense of completion and

project resources are released for new projects

in the next screen we will list the

goals of a project closure report

the project closure report is created to

accomplish the following goals

review and validate the success of the project

confirm outstanding issues limitations

accomplished to complete the activity

highlight the best practices for future projects

provide the project report or summary

provide a project background overview

summarize the planned activities of a project

evaluate project performance

provide a synopsis of the process

generate discussions and recommendations

generate project closure recommendations

in the following screen we will list and

understand project closure activities

during project closure the project

manager needs to take care of the following activities

finalize the project documents

much of a Project's documentation is

created during the life of the project

document collection and update

procedures are well established during the project

capture the project knowledge

project documents are helpful for future

projects in troubleshooting the product

ideally the project library is set up at

the beginning of the project and team

members add documents as they produce them

document the project learnings

project learnings can be captured

through team meetings meetings with

stakeholders and sponsor and through

feedback from consultants and vendors

the project manager needs to provide a

summary of the project results to team

members either as a presentation at a

meeting or as a formal document

Consultants should not be relieved from

their position until they have

transferred all the important product

maintenance knowledge to the team

schedule a meeting with the project

sponsor and key stakeholders to get

their final sign off on the project

if the project team used a project

management office or a dedicated work

area Arrangements need to be made to

return that space for General use

the project manager has the best

understanding of which of the team

members have worked the best have

transformed themselves with new skills

and who might be ready for a new level

of responsibility the project manager

needs to report to the team's superiors

what each team member has brought to the project

after completion of every project the

team needs and deserves a celebration a

team dinner a team outing gift

certificates or other rewards are minor

costs that generate a large return in

terms of morale and job satisfaction

an announcement to the organization is a

good way to highlight the success of the

project and its benefits to the company

formal project closure ensures that the

team has met its objectives satisfied

the customer captured important

knowledge and been rewarded for their efforts

let us proceed to the next topic of this lesson


in this topic we will discuss management and planning tools

let us start with the discussion on

Affinity diagram in the next screen

the Affinity diagram method is employed

by an individual or team to solve

unfamiliar problems it is an effective

medium where the consensus of the group is reached

the given Affinity diagram is based on

an organization where the employees are

to begin with each member writes down

ideas and opinions on sticky notes each

note can have a single idea

the points raised during the brainstorming session are that the

workers are unkind pay is low and it is

difficult to survive on the pay

structure working hours are too long Etc

in The Next Step all the sticky notes

are pasted on a table or wall

the sticky papers are arranged according

to categories or thought patterns

members then arrange their ideas

based on Affinity in case a particular

idea is good to go into more than one

category it is duplicated and added to both

after the arrangement is done each

category is named with a header card

the header card captures the central

idea of all the cards in that category

and draws a boundary around them

poor compensation combines ideas like low pay

poor work environment encompasses issues

like poor lighting uncomfortable rooms

similarly poor relationships prevail in the workspace as the workers are

unkind and there is mutual dislike

lack of motivation is due to repetitive

work and no work related challenges

you can see in the diagram on that slide

that once all the ideas are grouped to

the respective header cards a diagram is

drawn and borders are placed around the

group of ideas thus Affinity diagram

helps in grouping ideas with a common theme

in the next screen we will discuss the interrelationship diagram

the interrelationship diagram technique helps in identifying the relationship

between problems and ideas in complex situations

if the problem is really complex it may not be easy to determine the exact relationships

the given interrelationship diagram is

the result of a team brainstorming

session which identified 10 major issues

involved in developing an organization's

initially the problem is defined and all

the members put down their ideas on

sticky notes each note contains only one idea

all the sticky notes are put on a table

in the next step the causes or areas of concern are identified and a

cause-effect arrangement of cards is

constructed by drawing an arrow between

the causes and effects of the cause

this is done until all the ideas on the

sticky notes are accounted for and made

a part of the interrelationship diagram

take a large sheet of paper and

replicate the cause effect Arrangement

on it as depicted in the image a large number of outgoing arrows indicates a

root cause whereas a higher number of incoming arrows indicates a key effect

there are as many as six arrows

originating from lack of quality

this leads us to understand that it is a root cause

on the other hand there are three arrows

ending with the idea lack of tqm

commitment by managers making it an effect

in the next screen we will understand the tree diagram

the tree diagram is a systematic

approach to outline all the details

needed to complete a given objective

in other words it is a method used to

identify the tasks and methods needed to

solve a problem and reach a predefined

goal it is mostly used while developing

actions to execute a solution while

analyzing processes in detail during the

evaluation of implementation issues for

several potential Solutions and also as

a communication tool to explain the

details of a process to others

the given tree diagram shows the plan of

a coffee shop trying to set standards

first the objective is noted on a note

card and placed on the far left side of

the board the basic goal of the coffee

shop is to provide a delightful

in The Next Step the coffee shop needs

to determine the means required to

achieve the goal and furnish three

in other words the answers to the how or

why questions of the objectives

in this case the cappuccino needs to be

at a comfortable temperature and it

should have strong and pleasing coffee

Aroma with the right amount of sweetness

in The Next Step the three issues

mentioned in the second stage are addressed

each issue is answered by maintaining

temperature the cappuccino can be served

strong flavored cappuccino can be

prepared using a good amount of finely ground coffee

and a good quality sweetener used in the

right amount makes a great cappuccino

thus the tree diagram can be used to

achieve a goal or Define a process

in the following screen we will discuss the matrix diagram

let us learn about the matrix diagram in this screen

Matrix diagrams show the relationship

between objectives and methods results

their objective is to provide

information about the relationship

they provide importance of tasks and

Method elements of the subject

they also help determine the strength of

relationships between a grid of rows and

they help in organizing a large amount

of inter-process related activities

let us discuss various types of matrices in the next screen

let us learn about a process decision

program chart in this screen

process decision program chart or the

pdpc method is used to chart the course

of events from the beginning of a process to its end

while emphasizing the ability to

identify the failure of important issues

on activity plans the pdpc helps create

appropriate contingency plans to limit

the number of risks involved the pdpc is

used before implementing a plan

especially when the plan is large and

complex if the plan must be completed on

schedule or if the price of failure is high

the given process decision program chart

shows the process which can help in

the process starts when the seller

receives an order request from a

potential buyer this can lead to fixing

an appointment with the buyer confirming

the appointment date and meeting the buyer

if a date is not fixed then buyers

should be contacted till the meeting is

confirmed without a meeting there is a risk of losing the order

considering an optimistic scenario where

a meeting is fixed with a buyer the

seller describes the price of the product

if the price is competitive the order is secured

if the price is not competitive the

seller may have to repeat the bid until

the buyer agrees and the order is secure

however the buyer may not agree to a

revised bid either in which case the order may be lost

in such a scenario the seller can

justify the pricing and pursue the buyer

it might work and the seller might secure the order

in the next screen we will discuss the activity network diagram

an activity network diagram is used to

show the time required for solving a

problem and to identify items that can be done in parallel

it is used in scheduling and monitoring

tasks within a complex project or

process with interrelated tasks and

resources moreover it is also used when

you know the steps of the project or

process their sequence and the time

taken by each of the steps involved the

original Japanese name for this tool is

the given activity Network diagram shows

a house construction plan and identifies

the factors involved separately like the

amount of time for each operation in one

situation and the relationship of work

without time for each operation in another

the number of days is denoted by D so

the time taken for an activity path like

foundation to scaffolding is around

five days plus four days which is nine days

the line joining electrical work and

interior walls is dotted this shows the

relation between them but without any time value

basically it means that electrical work

has to be done before interior walls but

the time is either not important or not known

let us proceed to the next topic of this

lesson in the following screen

in this topic we will introduce business

results for projects let us start with

the discussion on defect per unit

we will learn about throughput yield in

throughput yield or tpy is the number of

acceptable pieces at the end of a

process divided by the number of

starting pieces excluding scrap and

throughput yield is used to measure a

if the DPU is known TPY can be easily

calculated as e to the power of the

negative of DPU where e is the

mathematical constant with a value of

approximately 2.7183 the expression can also be stated

as DPU equals the negative of the natural log of TPY
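as a quick sketch the TPY and DPU relation described above can be checked in a few lines of Python note that the defect and unit counts below are illustrative and not from the course

```python
import math

# Illustrative counts (not from the course): 50 defects across 1,000 units.
defects = 50
units = 1000

dpu = defects / units        # defects per unit
tpy = math.exp(-dpu)         # throughput yield: TPY = e^(-DPU)

# Inverting the relation recovers DPU: DPU = -ln(TPY)
dpu_back = -math.log(tpy)
```

with these numbers DPU is 0.05 and TPY is roughly 0.95 meaning about 95 percent of starting pieces pass without defects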

in the next screen we will discuss rolled throughput yield

rolled throughput yield or rty is the

probability of the entire process

producing zero defects rty is the true

measure of process efficiency and is

considered across multiple processes

it is important as a metric when a

tdpu is total defects per unit and is

defined for a set of processes when the

total defects per unit is known rolled

throughput yield is calculated using the

expression e to the power of negative of

tdpu the expression can also be written

as tdpu is equal to negative of natural

when the defectives are known rolled

throughput yield can be calculated as

the product of each process's first pass yield

first pass yield is the number of

products which pass without any rework

over the total number of units

first pass yield is calculated as the total

number of quality products over the total

number of units where the total number of quality products is

the total number of units minus the total number of defects
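the FPY and RTY calculation can be sketched in Python the three process steps and their counts below are made up purely for illustration

```python
# Illustrative three-step process; the counts are invented for the sketch.
# FPY per step = (units in - defects) / units in; RTY is the product of the FPYs.
steps = [
    {"units": 100, "defects": 5},   # step 1: FPY = 0.95
    {"units": 100, "defects": 2},   # step 2: FPY = 0.98
    {"units": 100, "defects": 10},  # step 3: FPY = 0.90
]

fpys = [(s["units"] - s["defects"]) / s["units"] for s in steps]

rty = 1.0
for fpy in fpys:
    rty *= fpy   # rolled throughput yield across the whole process
```

here RTY is about 0.84 even though each step looks healthy on its own which is why RTY is called the true measure of efficiency across multiple processes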

in the following screen we will

understand FPY and RTY with an example

we will discuss process capability in

process capability or CP is defined as

the inherent variability of a

characteristic of a process or a product

in other words it might also mean how

well a process meets customer

CP is an indicator of capability of a

process and is expressed as the difference

of USL and LSL divided by the product of six and sigma

USL stands for upper specification limit

LSL is lower specification limit and

sigma is the standard deviation of a

process the difference between USL and

LSL is also called the specification spread
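the Cp formula above can be sketched directly the specification limits and sigma below are illustrative values not taken from the course

```python
# Cp = (USL - LSL) / (6 * sigma); the numbers below are illustrative.
usl = 10.5     # upper specification limit
lsl = 9.5      # lower specification limit
sigma = 0.125  # process standard deviation

cp = (usl - lsl) / (6 * sigma)
```

with these values Cp is about 1.33 which by convention suggests a capable process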

in the following screen we will discuss

the process capability index or CPK was

developed to objectively measure the

degree to which a process meets or does

not meet customer requirements

it was developed to account for the

position of mean with respect to USL and

to calculate CPK the first step is to

determine if the process mean is closer

to the LSL or the USL if the process

mean is closer to LSL cpkl is calculated

cpkl is mean minus LSL divided by

product of 3 and sigma if the process

mean is closer to USL cpku is calculated

cpku is USL minus mean divided by

product of 3 and sigma here mean is the

process average and sigma represents the standard deviation of the process

if the process mean is equidistant

from both the specification limits

either of the specification limits can be

chosen CPK takes up the value of cpku

and cpkl depending on whichever is the lower value
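the Cpk logic described above can be sketched as follows the limits mean and sigma are illustrative numbers chosen so the process mean is off center

```python
# Cpk for an off-center process (illustrative numbers, not from the course).
usl, lsl = 10.5, 9.5
mean = 10.2    # process mean sits closer to the USL
sigma = 0.1

cpku = (usl - mean) / (3 * sigma)   # capability against the upper limit
cpkl = (mean - lsl) / (3 * sigma)   # capability against the lower limit
cpk = min(cpku, cpkl)               # Cpk takes the lower of the two
```

here Cpku is 1.0 and Cpkl is about 2.33 so Cpk is 1.0 the limit the mean sits closest to dominates the result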

in the next screen we will understand

process capability indices with an

in this screen we will discuss CPK and

a CP value of less than one indicates

that the process is not capable

even if CP is greater than one CPK must be checked to

ascertain if the process really is capable

a CPK value of less than one indicates

that the process is definitely not

capable but might be if CP is greater

than one and the process mean is at or

near the midpoint of the tolerance range

the CPK value will always be less than

CP as long as the process

mean is not at the center of the specification range

non-centering can happen when the

customer expectations have not been

clearly understood or when the process is

considered complete as soon as the output reaches a

threshold value

for example a shirt size of 40 has a

Target chest diameter of 40 inches but

the process consistently delivers shirts

with a mean of 41 inches as the chest

measurement or a machine stops removing material as

soon as the measured dimension is within tolerance

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss Team

Dynamics and performance let us start

with a discussion on team stages

there are five typical stages in the

team building process each team passes

through these stages as they start and

proceed through the project the five

stages in the team building process are

as follows forming storming norming performing and adjourning

in the next screen we will discuss the

the first stage in the team building

process is called the forming stage in

this stage the team comes together and

the team leader is identified and he or

she starts directing the team and

assigning responsibilities to other team

members most team members at this stage

are generally enthusiastic and motivated

by a desire to be accepted within the team

the leader employs a directive style of

management which includes delegating

responsibility within the team providing

a structure to the team and determining

processes needed for the smooth

functioning of the team toward the end

of this phase the team should achieve a

commitment to the project and an

in the next screen we will discuss the

the second phase in the team building

process is called the storming stage as

suggested by the name itself in this

stage conflicts start to arise within

the team team members often struggle

over responsibilities and control within the team

it is the responsibility of the team

leader to coach and conciliate the team

the leader employs a coaching style of

management which is reflected through

facilitating change managing conflict

and mediating understanding between team members

towards the end of this phase team

members need to learn to voice

disagreement openly and constructively

while staying focused on common

objectives and areas of agreement

in the next screen we will discuss the

the third stage in the team building

process is called the norming stage in

this stage people get along and the team

develops a unified commitment toward the

the team leader promotes the team and

participates in the team activities team

members look to the leader to clarify

their understanding as some leadership

roles begin to shift within the team

the leader employs a participatory

style of management through facilitating

change working to build consensus and

toward the end of this phase team

members need to accept individual

responsibilities and work out agreements

about team procedures in the next screen

we will discuss the fourth stage

the next stage in the team building

process is called the Performing stage

this is the most productive stage for

the project team in this stage team

members manage complex tasks and work

toward the common goals of the project

the leader employs a supervisory style

of management by overseeing progress

rewarding achievement and supervising

the team leader leads the project on

more or less an automated mode

when the project has completed

successfully or when the end is in sight

the team moves into the final stage

in the next screen we will discuss the

the last stage of team building is

called the adjourning stage in this

stage the project is winding down and

the goals are within reach the team

members are dealing with their impending

separation from the team the team leader

provides feedback to the team the leader

employs a supportive style of management

by giving feedback celebrating

accomplishments and providing closure

the team leader needs to adopt a

different style of leadership at every

stage it is therefore important for a

leader to understand these stages and

identify the current stage that a team is in

the success of the team depends on how

well the leader can guide them through these stages

in the next screen we will learn about negative team behaviors

team members can exhibit negative

behavior in more than one way during the

this Behavior has a negative effect on

the first kind of negative participants

fall in the category of overbearing participants

these participants use their influence

or expertise to take on a position of

authority discounting contributions from

other team members to cope with such

participants team leaders must establish

ground rules for participation and

reinforce that the group has the right

to explore any area pertinent to team

goals and objectives another kind of

negative participant is often referred

to as the dominant participant

these participants take up an excessive

amount of group Time by talking too much

focusing on trivial concerns and

otherwise preventing participation by

team leaders need to be able to control

dominant participants without inhibiting the participation of other members

some other participants are reluctant

participants who feel intimidated and

are not happy with the team process

owing to their reluctance they miss

opportunities to bring up data that is

valuable to the project this can often

lead to hostility within the team one

way to deal with reluctant participants

is to respond positively and with

encouragement to any contribution from

teamwork is more than a natural

consequence of working together

team management is more than building a

relationship with individual team members

all teams face group challenges that

need group-based diagnosis and problem

solving to ensure that negative

participants are able to contribute and

in the next screen we will learn about

Six Sigma roles the various roles and

responsibilities are described here

various roles assist the smooth

execution of a Six Sigma project

these roles are required to support the

project by providing the information and

resources that are needed to execute the

the first important member of the Six

Sigma team is the executive sponsor

sponsors are the source or conduit for

project resources and they are usually

the recipients of the benefits the project delivers

the sponsor is responsible for setting

the direction and priorities for the

the sponsor may be a functional manager

the next important role is that of the

process owners they work with the black

belts to improve their respective

processes they provide functional

expertise about the process to the

project usually this role is played by

the functional managers in charge of the process

the next role in the project is that of

the Champions they are typically upper

level managers who control and allocate

resources they ensure that the

organization is providing necessary

resources to the project and the project

is fitting into the strategic plans of the organization

the first role related to the execution

of the project is the role of the master black belt

this role acts as a consultant to team

leaders and offers expertise in the use

of Six Sigma tools and methodologies

Master black belts are experts in Six

Sigma statistical tools and are

qualified to teach high-level Six Sigma

methodologies and applications

each Master black belt will have

multiple black belts under him

black belts are the leaders of

individual Six Sigma projects

they lead project teams and conduct the

detailed analysis required in Six Sigma

black belts act as instructors and

mentors for green belts and educate them

in Six Sigma tools and methods they also

protect the interests of the project by

coordinating with functional managers

green belts are trained in Six Sigma but

typically lead project teams working in

their own areas of expertise

they are focused on the basic Six Sigma

tools for acceleration of projects

greenbelts work on projects on a

part-time basis dividing time between

project and functional responsibilities

an executive is the person who manages

and leads the team to ensure smooth

working of tasks and has the power to

a coach takes on a number of roles he or

she is the person who trains mentors

teaches and guides the team when

coach also motivates and builds

a facilitator is a guide for the team or

group also known as a discussion leader

facilitators help the group or team to

understand their common objective and

a sponsor is a person who supports the

event or the project by providing all

a team member is an individual who

belongs to a particular project team

a team member contributes to the

performance of the team and actively

participates for fulfillment of the

the progress achievements and the

details of the project have to be

effectively communicated to the team

management customers and stakeholders

we will learn about modes of

communication in the next screen

let us understand communication within the team

the purpose of communication within the

team and the modes of communication used

are as follows meetings and emails are

suitable to communicate the roles and

responsibilities of the team members

meetings memos and emails are used by

the team to understand the project

workshops and meetings are conducted to

identify the outstanding tasks risks and

team meetings assist decision making

and emails ensure coordination and

the next screen will focus on

communication with stakeholders

the purpose of communication with

stakeholders and the modes of

communication used are as follows

meetings emails and events are suitable

to convey project objectives and goals

meetings emails and newsletters assist

stakeholders in understanding project

workshops meetings and events help

stakeholders to identify the adverse

meetings with stakeholders assist

in the next screen we will discuss the

communication techniques can be grouped

in various ways the first grouping of

communication techniques is based on the

direction in which communication flows

vertical communication consists of two

subtypes namely downward flow of

communication and upward flow of communication

in the downward flow of communication

the managers must pass information and

give orders and directives to the lower levels

on the contrary upward communication

consists of information relayed from the

bottom or grassroots levels to the higher levels

horizontal communication refers to the

sharing of information across the same

levels of the organization this can be

in the form of formal and informal communication

formal Communications are official

company sanctioned methods of

communicating to the employees

the Grapevine Rumor Mill Etc are some of

the means of informal communication in

the second grouping of communication

techniques is based on the usage of words

verbal communication includes use of

words for communication via telephone

non-verbal communication conveys

messages without the use of words

through body language facial expressions

the last grouping of communication

techniques is based on participation of

the people involved in communication

one-way communication happens when

information is relayed from the sender

to the receiver without the expectation

two-way communication is a method in

which both parties are involved in the

team tools are a part of the team

dynamics and performance the various

team tools are brainstorming nominal group technique

and multi-voting

this lesson will cover the details of

the measure phase the key objective of

the measure phase is to gather as much

information as possible on the current process

this involves three key tasks that is

creating a detailed process map

Gathering Baseline data and summarizing

let us understand process modeling in

process modeling refers to the

visualization of a proposed system

layout or other change in the process

process modeling and simulation can

determine the effectiveness or

ineffectiveness of a new design or

they can be done using process mapping

and flowcharts we will learn about these

let us understand process mapping in

process mapping refers to a workflow

diagram that gives a clear

understanding of the process or a series of processes

it is also known as process charting or

flow charting process mapping can be

done either in the measure phase or the analyze phase

the features of process mapping are as follows

process mapping is usually the first

step in process improvement process

mapping gives a wider perspective of the

problems and opportunities for process improvement

it is a systematic way of recording all the activities in a process

process mapping can be done by using any

of the methods like flowcharts written

procedures and work instructions

let us learn about flowcharts in this

screen a flowchart is a graphical

representation of all the steps of a

process in consecutive order it is used

to plan a project document processes and

communicate the process methodology with

others there are many symbols used in a

flowchart and the common symbols are

it is recommended you take a look at the

symbols and their description for better

click the button to view an example of a flowchart

the given flowchart shows the processes

involved in software development

the flowchart starts with the start box

which connects to the design box in a

software project a software design is

followed by coding which is Then

in The Next Step there is a check for

errors in case of Errors it is evaluated

for the error type if it is a design

error it goes back to the beginning of

the design stage if it is not a design

error it is then routed to the beginning

of the coding stage

on the contrary if there are no errors the process ends

let us learn about written procedures in

this screen a written procedure is a

step-by-step guide to direct The Reader

through a task it is used when the

process of a routine task is lengthy and

complex and it is essential for everyone

to strictly follow the rules

written procedures can also be used when

you want to know what is going on during

product or process development phases

there are a number of benefits of

written procedures they help you avoid

mistakes and ensure consistency

they streamline the process and help

your employees take relevant decisions

and save a lot of time written

procedures help in improving quality

they are simple to understand as they

tend to describe the processes at a

in the next screen we will discuss how

work instructions are helpful in

understanding the process in detail

work instructions Define how one or more

activities involved in a procedure

should be written in a detailed manner

with the aid of technology or other

resources like flowcharts they provide

step-by-step details for a sequence of

activities organized in a logical format

so that an employee can follow it easily

for example in the internal audit

procedure how to fill out the audit

results report comes under work instructions

selection of the three process mapping

tools is based on the amount of detail required

for a less detailed process you can

select flowchart and for a detailed

process with lots of instructions you can select work instructions

click the button to view an example of work instructions

this example shows the work instructions

for shipping electronic instruments the

company name is Nutri worldwide Inc the

instructions are written by Brianna

Scott and approved by Andrew Murphy it

the work instructions are documented for

the shipping of electronic instruments

by the shipping Department the scope of

the project states that it is applicable

the procedure is divided into three

as a first step the order for the

in this step the shipping person

receives an order number from the sales

department through an automatic order

the quantity of the instrument and its

card number are looked up from the

system file and the packaging is done as

per the instructions on the card

special packing instructions must be

the instruments are then marked as per

the instructions on the card and packed

in a special or standard container as

per the requirement the order number is

written in the shipping system and the

packing list and shipping documentation

finally the quantity of instruments and

let us understand process input and

output variables in this screen

any Improvement of a process has a few

prerequisites to improve a process the

key process output variables kpov and

key process input variables kpiv should be identified

metrics for key process variables

include percent defective operation cost

elapsed time backlog quantity and

critical variables are best identified by process owners

process owners know and understand each

step of a process and are in a better

position to identify the critical

once identified the relationship between

the variables is depicted using tools

such as SIPOC and cause and effect diagrams

the process input variables results are

compared to determine which input

variables have the greatest effect on the output

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss

probability and statistics in detail let

us learn about probability in the

probability refers to the chance of

something occurring or happening an

outcome is the result of a single trial

Suppose there are n possible outcomes

that are equally likely the probability

that a specific type of event or outcome

say F can occur is the number of

specific outcomes divided by the total number of possible outcomes

click the button to view an example of

in the event of tossing a coin what is

the probability of the occurrence of heads

a single trial of tossing a coin has two

outcomes heads and tails hence the

probability of heads occurring is one

divided by two the total number of outcomes
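the coin example above amounts to counting favourable outcomes over total outcomes which can be made explicit in a short sketch

```python
# Probability of a specific outcome = favourable outcomes / total outcomes.
outcomes = ["heads", "tails"]               # sample space of one coin toss
favourable = [o for o in outcomes if o == "heads"]

p_heads = len(favourable) / len(outcomes)   # 1 / 2
```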

let us look at some basic properties of

probability in this screen there are

three basic properties of probability

click each property to know more

property 1 states that the probability

of an event is always between zero and one

according to Property 2 the probability

of an event that cannot occur is zero in

other words an event that cannot occur

is called an impossible event

property 3 states that the probability

of an event that must occur is one in

other words an event that must occur is called a certain event

if e is an event then the probability of

its occurrence is given by P of e it is

also read as the probability of event e

in this screen let us look at some

common terms used in probability along

with an example the commonly used terms

in probability are sample space Venn

diagram and event

sample space is the collection of all

possible outcomes for a given experiment

in the coin example discussed earlier

the sample space consists of one

instance each of heads and tails if two

coins are tossed the sample space would

have four outcomes in total a Venn diagram shows

all hypothetically possible logical

relations between a finite collection of

an event is a collection of outcomes for

an experiment which is any subset of the sample space

click the button to view an example of

what is the probability of getting a 3

followed by a 2 when a die is thrown twice

when the die is thrown twice the first

throw can have any number from one to six

similarly the second throw can also have

any number from one to six

so the total sample space is six times

six that is 36 the event in this case

is getting a 3 followed by a 2

this can happen in only one way so the

probability in the question is 1 divided by 36
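the sample space for two throws of a die can be enumerated directly which makes the 1 in 36 answer easy to verify

```python
from itertools import product

# Sample space of two throws of a die: 6 x 6 = 36 ordered pairs.
sample_space = list(product(range(1, 7), repeat=2))

# The event "a 3 followed by a 2" happens in exactly one way.
event = [pair for pair in sample_space if pair == (3, 2)]

p = len(event) / len(sample_space)   # 1 / 36
```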

let us discuss the basic concepts of

some basic concepts of probability are

independent event dependent event

mutually exclusive and mutually inclusive events

click each concept to know more

when the probability of occurrence of an

event does not affect the probability of

occurrence of another event the two

events are said to be independent

suppose you roll a die and flip a coin

the probability of getting any number on

the dice in no way influences the

probability of getting heads or tails on the coin

when the probability of one event

occurring influences the likelihood of

the other event the events are said to be dependent

events are said to be mutually exclusive

if the occurrence of any one of them

prevents the occurrence of all the

others in other words only one event can occur at a time

consider an example of flipping a coin

when you flip a coin you will either get

heads or tails

you can add the probabilities of these

two events because they are mutually exclusive

any two events wherein one event cannot

occur without the other are said to be mutually inclusive

in this screen let us learn about the

multiplication rules also known as and

rules the multiplication rules or and

rules depend on the event dependency

for independent events that is if two

events are independent of each other the

special multiplication rule applies for

mutually independent events the special

multiplication rule is as follows

if the events a b c and so on are

independent of each other then the

probability of A and B and C and so on

is equal to the product of their individual probabilities

click the button to view an example of

Suppose there are three events which are

independent of each other such as the

event of flipping a coin and getting

heads drawing a card and getting an Ace

and throwing a dice and getting a one

what is the probability of occurrence of all three events

the answer is the probability of A and B

and C is equal to the product of their

individual probabilities

which is one half multiplied by one thirteenth

multiplied by one sixth the result is

0.0064 hence there is a 0.64 percent probability

of all of the events occurring
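the special multiplication rule for the coin card and die example can be checked with exact fractions

```python
from fractions import Fraction

# The three independent events from the example, as exact fractions.
p_heads = Fraction(1, 2)    # flipping a coin and getting heads
p_ace = Fraction(1, 13)     # drawing an ace (4 aces in 52 cards)
p_one = Fraction(1, 6)      # throwing a die and getting a one

p_all = p_heads * p_ace * p_one   # special multiplication rule
```

p_all comes out to 1/156 which is roughly 0.0064 matching the 0.64 percent figure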

we will continue the discussion on

multiplication rules in this screen

the rule for non-independent or conditional events

which is also the general multiplication

rule is as follows if a and b are two

events then the probability of A and B

is equal to the product of probability

of a and the probability of B given a

alternatively we can say that for any

two events their joint probability is

equal to the probability that one of

these events occurs multiplied with the

conditional probability of the other

event given the first event click the

button to view an example of this rule

a bag contains six golden coins and four

silver coins two coins are drawn without replacement

what is the probability that both of the

coins are silver let a be the event that

the first coin is silver and B be the

event that the second coin is silver

there are 10 coins in the bag four of

which are silver therefore P of a equals

four divided by ten

after the first selection there are nine

coins in the bag three of which are

silver therefore P of B given a equals

three divided by nine

therefore based on the rule of

multiplication probability of a

intersection b equals four divided by

ten multiplied by three divided by nine

the answer is twelve divided by ninety or

0.1333 hence there is a 13.33 percent probability

that both the coins are silver
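the general multiplication rule for the two silver coins can also be verified with exact fractions

```python
from fractions import Fraction

# Drawing two coins without replacement from 6 golden and 4 silver coins.
p_a = Fraction(4, 10)          # first coin drawn is silver
p_b_given_a = Fraction(3, 9)   # second is silver, one silver already removed

p_both = p_a * p_b_given_a     # general rule: P(A and B) = P(A) * P(B|A)
```

p_both reduces to 2/15 which is about 0.1333 the 13.33 percent answer above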

in this screen we will look at the

definitions and formulas of permutation and combination

permutation is the total number of ways

in which a set group or number of things

can be arranged

the order matters to a great extent in permutation

the manner in which the objects or

numbers are arranged will be considered

the formula for permutation is NPR

equals P of N and R equals n factorial

divided by n minus r factorial where n

is the number of objects and R is the

number of objects taken at a time

the unordered arrangement of set group

or number of things is known as

combination the order does not matter in

combination the formula for combination

is NCR equals c of N and R equals n

factorial divided by R factorial

multiplied by n minus r factorial where

n is the number of objects and R is the

number of objects taken at a time

calculating permutation and combination

from a group of 10 employees a company

has to select four for a particular task

in how many ways can this selection

happen given the following conditions

when the arrangement of employees needs

to be considered and

when the arrangement of employees need not be considered

click the button to know the answer

in the given example the values of N and R are 10 and 4 respectively

let us consider the first condition from

a group of 10 employees four employees

need to be selected the arrangement

using the permutation formula NPR equals

P of N and R equals n factorial divided

by n minus r factorial

10p4 equals P of 10 and 4 equals 10

factorial divided by 10 minus 4

factorial which equals 5040

therefore the four employees can be

selected in 5040 ways

let us now consider the second condition

from a group of 10 employees four

employees need to be selected the

arrangement of employees need not be considered

using the combination formula NCR equals

c of N and R equals n factorial divided

by R factorial multiplied by n minus r factorial

10 C 4 equals c of 10 and 4 equals 10

factorial divided by 4 factorial

multiplied by 10 minus 4 factorial

which equals 210

therefore the four employees can be

selected from a group of 10 employees in 210 ways
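both formulas for the 10 choose 4 example can be computed and cross-checked in Python

```python
from math import comb, factorial, perm

n, r = 10, 4

# Permutation: order matters. nPr = n! / (n - r)!
npr = factorial(n) // factorial(n - r)

# Combination: order does not matter. nCr = n! / (r! * (n - r)!)
ncr = factorial(n) // (factorial(r) * factorial(n - r))

# Python 3.8+ exposes both directly as math.perm and math.comb.
assert npr == perm(n, r)
assert ncr == comb(n, r)
```

npr comes out to 5040 and ncr to 210 matching the worked example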

let us understand the two types of

statistics refers to the science of

collection analysis interpretation and

presentation of data in Six Sigma

statistical methods and principles are

used to measure and analyze the process

there are two major types of Statistics

descriptive statistics and inferential

descriptive statistics is also known as

enumerative statistics and inferential

statistics is also known as analytical

descriptive statistics include

organizing summarizing and presenting

the data in a meaningful way whereas

inferential statistics includes making

inferences and drawing conclusions from

the data descriptive statistics

describes what's going on in the data

the main objective of inferential

statistics is to make inferences from

the data to more General conditions

histograms pie charts box plots

frequency distributions and measures of

central tendency mean median and mode

are all examples of descriptive

statistics on the other hand examples of

inferential statistics are hypothesis

the main objective of statistical

inference is to draw conclusions on

population characteristics based on the

information available in the sample

collecting data from a population is not

always easy especially if the size of

the population is big the easier way is

to collect a sample from the population

and from the sample statistic collected

make an assessment about the population

click the button to see an example of

the management team of a cricket council

wants to know if the team's performance

has improved after recruiting a new coach

the management conducts a test to prove this

let us consider YA and YB where YA

stands for efficiency of Coach A and YB

stands for efficiency of Coach B

to conduct the test the basic assumption

is coach a and Coach B are both

effective this basic assumption is known

as the null hypothesis here the

status quo is the null hypothesis denoted H0

the management team also challenges

their basic assumption by assuming the

coaches are not equally effective this is the alternate hypothesis

the alternate hypothesis states that the

efficiencies of the two coaches differ

if the null hypothesis is proven wrong

the alternate hypothesis must be right

hence alternate hypothesis H1 can be accepted

these hypothesis statements are used in

a hypothesis test which will be

discussed in the later part of the

in this screen we will learn about the

types of errors when collecting data

from a population as a sample and

forming a conclusion on the population

based on the sample you run into the

risk of committing errors there are two

possible errors that can happen type 1

error and type 2 error the type 1 error

occurs when the null hypothesis is

rejected when it is in fact true

type 1 error is also known as producer's risk

the chance of committing a type 1 error

is known as alpha or the significance

level and

is typically chosen to be five percent

this means the maximum amount of risk

you have for committing a type 1 error is five percent

let us consider the previous example

arriving at a conclusion that Coach B is

better than coach a when in fact they

are at the same level is a type 1 error

the risk you have of committing this

error is five percent which means there

is a five percent chance your experiment

can give wrong results the type 2 error

occurs when the null hypothesis is

accepted when it is in fact false also

when you reject the alternate hypothesis

when it is actually true you commit a

type 2 error which is also referred to as

consumer's risk in comparing the two

coaches the coaches were actually

different in their efficiencies but the

conclusion was that they are the same

the chance of committing a type 2 error

is known as beta the maximum chance of

committing a type 2 error is 20 percent
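A minimal simulation can make the meaning of alpha concrete. This is a sketch only, with hypothetical efficiency scores: both "coaches" are drawn from the same distribution, so the null hypothesis is actually true and every rejection is a type 1 error.

```python
import random
from statistics import NormalDist, mean

# Hypothetical setup: both coaches are equally effective (H0 is true),
# so any rejection of H0 is a type 1 error. With alpha at 5 percent,
# roughly 5 percent of repeated experiments should reject H0.
random.seed(1)
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
sigma, n = 10.0, 40                           # assumed known spread, sample size

rejections = 0
trials = 2000
for _ in range(trials):
    a = [random.gauss(50, sigma) for _ in range(n)]
    b = [random.gauss(50, sigma) for _ in range(n)]
    # z statistic for the difference of two sample means
    z = (mean(a) - mean(b)) / (sigma * (2 / n) ** 0.5)
    if abs(z) > z_crit:
        rejections += 1        # H0 rejected although it is true

type1_rate = rejections / trials
print(round(type1_rate, 3))    # close to 0.05, the chosen alpha
```

The observed rejection rate hovers around the chosen significance level, which is exactly what "five percent risk of a type 1 error" means.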

in the next screen we will learn about

Central limit theorem CLT states that

for a sample size greater than 30 the

sample mean is very close to the

population mean in simple words the

distribution of the sample mean approaches the normal distribution

for example if you have sample one and

its mean is mean one sample two and its

mean is mean two and so on take the means

of mean one mean two etc and you will

find that their average is the same as

the population mean

in such cases the standard error of mean

also known as SEM represents the

variability between the sample means

the SEM is often used to represent the

standard deviation of the sample means

the formula for SEM is population

standard deviation divided by the square root of the sample size
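The SEM formula can be checked with a quick sketch. The population and numbers below are hypothetical: a clearly non-normal (uniform) population is sampled repeatedly, and the spread of the sample means is compared with sigma divided by the square root of n.

```python
import random
from statistics import mean, stdev, pstdev

# Hypothetical non-normal population of 10000 uniform values
random.seed(2)
population = [random.uniform(0, 100) for _ in range(10000)]
sigma = pstdev(population)        # population standard deviation
n = 36                            # sample size greater than 30

# Draw many samples and record each sample mean
sample_means = [mean(random.sample(population, n)) for _ in range(1000)]

sem_formula = sigma / n ** 0.5    # SEM from the formula
sem_observed = stdev(sample_means)  # variability between sample means
print(round(sem_formula, 2), round(sem_observed, 2))
```

The two values agree closely, and the mean of the sample means sits near the population mean, as the central limit theorem predicts.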

selecting a sample size also depends on

the concept called Power also known as

we will cover this concept in detail in a later lesson

let us look at the graphical

representation of the central limit

theorem in the following screen

the plot of the three numbers two three

and four looks as shown in the graph it

is interesting to note that the total

number of times each digit is chosen is

six when the plot of the sample mean of

nine samples of size 2 each is drawn it

looks like the red line which is plotted

in the figure the x-axis shows the values

of the mean which are 2 2.5 3 3.5 and 4 on

the y-axis the frequency is plotted the

point at which arrows from number two

and three converge is the mean of two

and three similarly the point at which

arrows from two and four converge is the

mean of the numbers two and four

let us discuss the concluding points of

the central limit theorem in the next screen

the central limit theorem concludes that

the sampling distributions are helpful

in dealing with non-normal data if you

take the sample data points from a

population and plot the distribution of

the means of the sample you get the

sampling distribution of the means

the mean of the sampling distribution

also known as the mean of means will be equal to the population mean

also the sampling distribution

approaches normality as the sample size increases

note that CLT enables you to draw

inferences from the sample statistics

about the population parameters this is

irrespective of the distribution of the population

CLT also becomes the basis for

calculating confidence interval for

hypothesis tests as it allows the use of the normal distribution

let us proceed to the next topic of this

lesson in the following screen

in this topic we will cover the concept of probability distributions

let us start with discrete probability

distribution in the following screen

discrete probability distribution is

characterized by the probability Mass

function it is important to be familiar

with discrete distributions while

dealing with discrete data some of the

examples of discrete probability

distribution are binomial distribution

poisson distribution negative binomial

distribution geometric distribution and hypergeometric distribution

we will focus only on the two most

useful discrete distributions binomial

distribution and poisson distribution

like most probability distributions

these distributions also help in

predicting the sample behavior that has

been observed in a population

let us learn about binomial distribution

binomial distribution is a probability

distribution for discrete data named

after the Swiss mathematician Jacob

Bernoulli it is an application of

population knowledge to predict the sample behavior

binomial distribution also describes the

discrete data as a result of a

particular process like the tossing of a

coin for a fixed number of times and the

success or failure in an interview

a process is known as Bernoulli's

process when the process output has only

two possible values like defective or OK

binomial distribution is used to deal

with defective items defect is any

non-compliance with a specification

defective is a product or service with one or more defects

binomial distribution is most suitable

when the sample size is less than 30 and

less than 10 percent of the population

it is the percentage of non-defective

items provided the probability of

creating a defective item remains the same

the probability of exactly r successes

out of a sample size of n is denoted by

P of R which is equal to NCR whole

multiplied by P to the power of R and 1

minus P whole to the power of n minus r

in the equation p is the probability of

success R is the number of successes

desired and N is the sample size to

continue discussing the binomial

distribution let us look at some of its

key calculations in the following screen

the mean of a binomial distribution is

denoted by mu and is given by n multiplied by p

the standard deviation of a binomial

distribution is denoted by sigma which

is equal to the square root of n multiplied by p multiplied by 1 minus p

the method of calculating factorials say

a factorial of 5 is the product of five

four three two and one which is equal to 120

similarly factorial of 4 is the product

of 4 3 2 and 1 which is equal to 24.
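The binomial formulas above can be sketched with Python's standard library. The helper names below are illustrative, not part of any course material.

```python
from math import comb, factorial

def binomial_pmf(r, n, p):
    """Probability of exactly r successes in n trials: nCr * p^r * (1-p)^(n-r)."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

def binomial_mean(n, p):
    """Mean of a binomial distribution: n multiplied by p."""
    return n * p

def binomial_sd(n, p):
    """Standard deviation: square root of n * p * (1 - p)."""
    return (n * p * (1 - p)) ** 0.5

print(factorial(5))                        # 120, as in the factorial example
print(round(binomial_pmf(5, 8, 0.5), 4))   # probability of 5 successes in 8 trials
```

For instance, with n = 8 and p = 0.5 the mean is 4 and the probability of exactly five successes works out to about 0.2188, matching the coin-toss example discussed next.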

let us look at an example of calculating

binomial distribution in the next screen

suppose you wish to know the probability

of getting heads five times in eight

coin tosses you can use the binomial

click the answer button to see how this is done

the tossing of a coin has only two

outcomes heads and tails it means that

the probability of each outcome is 0.5

and it remains fixed over a period of

time additionally the outcomes are independent

in this case the probability of success

denoted by P is 0.5 the number of

successes desired is denoted by R which

is 5 and the sample size is denoted by n

which is 8. therefore the probability of

five heads is equal to factorial of 8

divided by the product of factorial of 5

and factorial of 8 minus 5 whole

multiplied by 0.5 to the power of 5

multiplied by 1 minus 0.5 whole to the power of 3

this calculation gives a result of

21.87 percent let us learn about poisson distribution

poisson distribution is named after

Simeon Denis Poisson and is also used for discrete data

poisson distribution is an application

of the population knowledge to predict

the sample Behavior it is generally used

for describing the probability

distribution of an event with respect to time

some of the characteristics of poisson

distribution are as follows

poisson distribution describes the

discrete data resulting from a process

like the number of calls received by a

call center agent or the number of

unlike binomial distribution which deals

with binary discrete data the poisson

distribution deals with integers which

can take any value poisson distribution

is suitable for analyzing situations

wherein the number of Trials similar to

the sample size in binomial distribution

is large and tends towards Infinity

additionally it is used in situations

where the probability of success in each

trial is very small almost tending

towards zero this is the reason why

poisson distribution is applicable for

predicting the occurrence of rare events

like plane crashes car accidents Etc and

is therefore widely used in the

insurance sector poisson distribution

can be used for predicting the number of

defects as well given a low defect rate

let us look at the formula for

calculating poisson distribution in the next screen

the poisson distribution for a

probability of exactly X occurrences is

given by P of x equals to Lambda to the

power of X multiplied by e to the

power of minus Lambda whole divided by

factorial of X in this equation Lambda

is the mean number of occurrences during

the interval X is the number of

occurrences desired and E is the base of

natural logarithm which is equal to 2.71828

the mean of the poisson distribution is

given by the Lambda and the standard

deviation of a poisson distribution is

given by sigma which is the square root of lambda

let us look at an example to calculate

poisson distribution in the next screen

the past records of a road junction which

is accident prone show a mean number of

five accidents per week at this Junction

assume that the number of accidents

follows a poisson distribution and

calculate the probability of any number

of accidents happening in a week

click the button to know the answer

given the situation you know that the

value of Lambda or mean is 5. so P of 0

that is the probability of zero

accidents per week is calculated as 5 to

the power of zero multiplied by e to the

power of minus five whole divided by a

factorial of zero the answer is

0.006 applying the same formula the

probability of one accident per week is

0.03 the probability of more than two

accidents per week is one minus the sum

of probabilities of zero one and two

0.884 in other words the probability is 88.4 percent
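The accident example can be reproduced as a sketch. Note the 0.884 figure above comes from rounding each term first; keeping full precision, as below, gives approximately 0.875.

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(x) = lambda^x * e^(-lambda) / x!"""
    return lam ** x * exp(-lam) / factorial(x)

lam = 5  # mean number of accidents per week
p0 = poisson_pmf(0, lam)
p1 = poisson_pmf(1, lam)
p2 = poisson_pmf(2, lam)
p_more_than_2 = 1 - (p0 + p1 + p2)   # complement of zero, one or two accidents

print(round(p0, 4))             # about 0.0067
print(round(p1, 4))             # about 0.0337
print(round(p_more_than_2, 4))  # about 0.8753
```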

let us learn about normal distribution

the normal or gaussian distribution is a

continuous probability distribution the

normal distribution is represented as n

and depends on two factors Miu which

stands for mean and sigma which gives

the standard deviation of the data

points normal distribution normally has

a higher frequency of values around the

mean and lesser occurrences away from it

it is often used as a first approximation to describe real valued

random variables that tend to cluster around a single mean value

the distribution is bell-shaped and

symmetrical the total area under the

normal curve is one that is the total probability

various types of data such as body

weight height the output of a

manufacturing device Etc follow the

normal distribution additionally normal

distribution is continuous and

symmetrical with the tails asymptotic to

the x-axis which means they touch the

x-axis at Infinity let us continue to

discuss normal distribution in the following screen

in a normal distribution to standardize

comparisons of dispersion across the

different measurement units like inches

meters grams Etc a standard Z variable

is used the uses of Z value are as

follows while the value of Z or the

number of standard deviations is unique

for each probability within the normal

distribution it helps in finding

probabilities of data points anywhere

within the distribution it is

dimensionless as well that is it has no

units such as millimeters liters etc

there are different formulas to arrive

at the normal distribution we will focus

on one commonly used formula for

calculating normal distribution which is

z equals y minus mu whole divided by

Sigma here Z is the number of standard

deviations between Y and the mean

denoted by mu Y is the value of the data

point in concern mu is mean of the

population or data points and sigma is

the standard deviation of the population

or data points let us look at an example

for calculating normal distribution in the next screen

suppose the time taken to resolve

customer problems follows a normal

distribution with a mean of 250 hours

and standard deviation of 23 hours find

the probability of a problem resolution

taking more than 300 hours click the answer button

in this case Y is equal to 300 mu equals

250 and sigma equals 23. applying the

normal distribution formula Z is equal

to 300 minus 250 whole divided by 23.

the result is 2.17 when you look at the

normal distribution table the Z value of 2.17 corresponds to a probability of 0.985

this means the probability of a problem

taking zero to three hundred hours to be

resolved is 98.5 percent and therefore

the chances of a problem resolution

taking more than 300 hours is 1.5 percent
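The worked example above can be sketched directly with the standard library, which replaces the Z table lookup with a cumulative distribution call:

```python
from statistics import NormalDist

# Resolution times assumed normal with mean 250 hours and sd 23 hours
mu, sigma = 250, 23
y = 300

z = (y - mu) / sigma                 # number of standard deviations
p_within = NormalDist().cdf(z)       # P(0 to 300 hours), the Z table value
p_beyond = 1 - p_within              # P(more than 300 hours)

print(round(z, 2))                   # 2.17
print(round(p_beyond, 3))            # about 0.015, i.e. 1.5 percent
```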

let us understand the usage of Z table

the graphical representation of the Z table shows

the probability of areas under the curve

for the actual value one can identify

the z-score by using the Z table

as shown this probability is the area

under the curve to the left of point z

using the actual data when you calculate

mean and standard deviation and the

values are 25 and 5 respectively it is a normal distribution

if the same data is standardized to a

mean value of zero and standard

deviation value of one it is the

standard normal distribution

in the next screen we will take a look at the Z table

the Z table gives the probability that Z

is between 0 and a positive number

there are different forms of normal

distribution Z tables followed globally

the most common form of Z table with

positive z-scores is shown here

the value of a called the percentage

point is given along the borders of the

table in bold and is to two decimal places

the values in the main table are the

probabilities that Z is between 0 and a

note that the values running down the

table are to one decimal place

the numbers along the columns change only in the second decimal place

let us look at some examples and how to

use a z table in the following screen

let us find the value of P of Z less than or equal to zero

the table is not needed to find the

answer once we know that the variable Z

takes a value less than or equal to zero

first the area under the curve is one

and second the curve is symmetrical

about Z equals zero hence there is a 0.5

or 50 percent chance of Z above zero

and a 0.5 or 50 percent chance of Z below zero

let us find the value of P of Z greater than 1.12

in this case the chance of Z is greater

than a number in this case 1.12

you can find this by using the following rule

the opposite or complement of an event

of a is the event of not a that is the

opposite or complement of event a

occurring is the event a not occurring

its probability is given by P of not A

equals 1 minus P of A

in other words P of Z greater than 1.12

is 1 minus the opposite which is P of Z less than 1.12

using the table P of Z less than 1.12

equals 0.5 plus P of 0 less than Z less

than 1.12 which is 0.8686

hence the answer is P of Z greater than

1.12 equals 1 minus 0.8686 which is

0.1314 note the answer is less than 0.5
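All three Z table lookups in this screen can be reproduced as a sketch with the standard normal distribution from the standard library:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal: mean 0, standard deviation 1

p_below_zero = Z.cdf(0)                # 0.5, by symmetry
p_above_112 = 1 - Z.cdf(1.12)          # complement rule for P(Z > 1.12)
p_between = Z.cdf(1.12) - Z.cdf(0)     # area between 0 and 1.12

print(round(p_below_zero, 4))   # 0.5
print(round(p_above_112, 4))    # about 0.1314
print(round(p_between, 4))      # about 0.3686
```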

let us find the value of P of Z lying between 0 and 1.12

in this case where Z Falls within an

interval the probability can be read directly from the table

P of Z lies between 0 and 1.12 equals

0.3686 we will learn about chi-square

the chi-square distribution with K minus 1

degrees of freedom is the distribution of a sum of

the squares of K independent standard

normal random variables

the chi-square distribution is one of

the most widely used probability

distributions in inferential statistics

it is also used in hypothesis testing

and the distribution is used in

hypothesis tests when used in hypothesis

tests it only needs one sample for the test

conventionally degree of freedom is K

minus one where K is the sample size

for example if w x y and z are four

random variables with standard normal

distributions then the random variable F

which is the sum of w Square x square y

square and z square has a chi-square distribution

the degrees of freedom of the

distribution DF equals the number of

normally distributed variables used

in this case DF is equal to 4.
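The construction just described can be checked with a small simulation. This is a sketch with hypothetical sample counts: summing the squares of four independent standard normal draws gives a chi-square variable whose mean equals its degrees of freedom.

```python
import random
from statistics import mean

random.seed(3)

def chi_square_sample(df):
    """Sum of squares of df independent standard normal variables."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(df))

# Many draws of w^2 + x^2 + y^2 + z^2, i.e. chi-square with df = 4
samples = [chi_square_sample(4) for _ in range(20000)]
print(round(mean(samples), 1))   # close to 4, the degrees of freedom
```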

let us look at the formula to calculate

chi-square distribution in the following screen

chi-square calculated or the chi-square

index equals the sum of F of O minus F

of E whole squared divided by F of E

here F of O stands for an observed

frequency and F of e stands for an

expected frequency determined through a

contingency table let us understand T

distribution in the next screen

the T distribution method is the most

appropriate method to be used in the

following situations when you have a

sample size of less than 30 when the

population standard deviation is not known

unlike the normal distribution a t

distribution is lower at the mean and

higher at the Tails as seen in the image

T distribution is used for hypothesis

testing also as seen in the image the

t-distribution is symmetrical in shape

but flatter than the normal distribution

as the sample size increases the T

distribution approaches normality

for every possible sample size or

degrees of freedom there is a different t distribution

let us learn about F distribution in the next screen

the F distribution is a ratio of two

chi-squared distributions a specific F

distribution is denoted by the ratio of

the degrees of freedom for the numerator

chi-square and the degrees of freedom

for the denominator chi-square

the f-test is performed to calculate and

observe if the standard deviations or

variances of two processes are significantly different

the project teams are usually concerned

about reducing the process variance as

per the formula f calculated equals S1

Square divided by S2 Square where S1 and

S2 are the standard deviations of the two processes

if the F calculated is one it implies

there is no difference in the variance

if S1 is greater than S2 then the

numerator must be greater than the

denominator in other words df1 equals N1

minus 1 and df2 equals N2 minus 1.
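The F calculation can be sketched as follows. The two data sets are hypothetical process measurements invented for illustration, with the more variable process placed in the numerator.

```python
from statistics import stdev

# Hypothetical measurements from two processes
process_1 = [10.2, 10.8, 9.6, 10.5, 11.1, 9.9, 10.4]   # more spread
process_2 = [10.1, 10.3, 10.0, 10.2, 10.4, 10.1, 10.2] # less spread

s1, s2 = stdev(process_1), stdev(process_2)
f_calculated = s1 ** 2 / s2 ** 2   # F = S1 squared divided by S2 squared
df1 = len(process_1) - 1           # degrees of freedom, numerator
df2 = len(process_2) - 1           # degrees of freedom, denominator

print(round(f_calculated, 2), df1, df2)
```

An F value well above one, compared against the F table at the chosen alpha with df1 and df2, suggests the variances differ.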

from the F distribution table you can

find the critical value of the F

distribution at Alpha and the degrees of

freedom of the samples of two different

processes df1 and df2 let us proceed to

the next topic of this lesson in the following screen

in this topic we will discuss collecting

and summarizing data in detail let us

learn about types of data in the

data is objective information which

everyone can agree on it is a collection

of facts from which conclusions may be

drawn the two types of data are

attribute data and variable data click each type to know more

discrete data is data that can be

counted and only includes whole numbers

attribute data is commonly called pass or fail data

attribute or discrete data cannot be

broken down into a smaller unit

meaningfully it answers questions such

as how many how often or what type some

examples of attribute data are number of

defective products percentage of

defective products frequency at which a

machine is repaired or the type of award

any data that can be measured on a

continuous scale is continuous or

variable data this type of data answers

questions such as how long what volume

examples of continuous data include

height weight time taken to complete a task

let us understand the importance of

selecting the data type in this screen

deciding the data type facilitates data collection and analysis

therefore the first step in the measure

phase is to determine what type of data

should be collected this can be done by

the first consideration is to identify the key variables

for this the variables already identified are used

these include critical to Quality

parameters or ctqs key process output

variables or kpovs and the key process

input variable or kpivs next to

understand how to proceed with the data

gathered it is necessary to determine

the data type that fits the metrics for

the key variables identified

the question now arises why should the data type be decided first

this is important as it enables the

right set of data to be collected

analyzed and used to draw inferences

it is not advisable to convert one type

of data into another converting

attribute data to variable data is

difficult and requires assumptions to be

made about the process it may also

require additional data Gathering

let us look at measurement scales in the next screen

there are four measurement scales

arranged in the table in increasing

order of their statistical desirability

in the nominal scale the data consists

of only names or categories and there is no particular order to them

an example of this type of measurement

can be a bag of colored balls which

contains 10 green balls five black balls

eight yellow balls and nine white balls

this is the least informative of all

scales the most appropriate measure of

central tendency for this scale is mode

in the ordinal or ranking scale data is

arranged in order and values can be compared

an example of this scale can be the

ratings given to different restaurants

three for A five for B two for C and so on

the central tendency for this scale is

median or mode the interval scale is

used for ranking items in Step order

along a scale of equidistant points for

example the temperatures of three metal

rods are 100 degrees 200 degrees and 600

degrees Fahrenheit respectively note

that 3 times 200 degrees is not the same

as 600 Degrees as a temperature

the central tendency here is mean median or mode

mean is used if the data does not have outliers

the ratio scale represents variable data

and is measured against a known standard

however this scale also has an absolute

zero that is no numbers exist below zero

an example of the ratio scale are

physical measures where height weight

and electric charge represent ratio

note that negative length is not

possible again here you would use mean

median or mode as the central tendency

in the next screen we will learn about sampling techniques

to ensure data is accurate sampling

techniques are used sampling is the

process act or technique of selecting an

appropriate test group or sample from a population

it is preferable to survey a sample of 100 people instead of an entire population as

sampling saves the time money and effort

the three types of sampling techniques

described here are random sampling

sequential sampling and stratified

random sampling is the technique where a

group of subjects or a sample for study

is selected from a larger group or population

sequential sampling is similar to

multiple sampling plans except that it

can in theory continue indefinitely

in other words it is a non-probability

sampling technique wherein the

researcher picks a single subject or a

group of subjects in a given time

interval conducts the study analyzes the

results and then picks another group of

subjects if needed and so on

in stratified sampling the idea is to

take samples from subgroups of a population

this technique gives an accurate

estimate of the population parameter

in this screen we will compare simple

random sampling with stratified sampling

simple random sampling is easy to do

while stratified sampling takes a lot of

time the possibility of simple random

sampling giving erroneous results is high

while stratified sampling minimizes the

chances of error simple random sampling

doesn't have the power to show possible

causes of variation while stratified

sampling if done correctly will show them

in the next screen we will look at the

check sheet method of collecting data

the process of collecting data is

expensive wrongly collected data leading

to wrong analysis and inferences results in further losses

a check sheet is a structured form

prepared to collect and analyze data it

is a generic tool that is relatively

simple to use and can be adapted for a variety of purposes

check sheets are used when the data can

be observed and collected repeatedly by

the same person or at the same location

they are also used while collecting data

from a production process a common

example is calculating the number of absentees

the table shows absentee data collected

we will discuss data coding and its

advantages in the following screen

data coding is a process of converting

and condensing raw data into categories

and sets so that the data can be used

for further analysis the benefits of

data coding are listed here

data coding simplifies the large

quantity of data that is collected from

sources the large amount of data makes

analysis and drawing conclusions difficult

it leads to chaos and ambiguity

data coding simplifies the data by

coding it into variables and then

categorizing these variables raw data

cannot be easily entered into computers

for analysis data coding is used to

convert raw data into process data that

can be easily fed into Computing systems

coding of data makes it easy to analyze

the data converted data can either be

analyzed directly or fed into computers

the analyst can easily draw conclusions

when all the data is categorized and coded

data coding also enables organized

representation of data division of data

into categories helps organize large

chunks of information thus making

analysis and interpretation easier data

coding also ensures that data repetition

does not occur and duplicate entries are

eliminated so that the final result is

not affected in the following screen we

will discuss measures of central

tendency of the descriptive statistics

a measure of central tendency is a

single value that indicates the central

point in a set of data and helps in

identifying data trends the three most

commonly used measures of the central

tendency are mean median and mode

click each measure to know more

mean is the most common measure of

central tendency it is the sum of all

the data values divided by the number of data values

also called arithmetic mean or average

it is the most widely used measure of central tendency

also known as positional mean median is

the number present in the middle of the

data set when the numbers are arranged

in ascending or descending order

if the data set has an even number of

entries then the median is the mean of the two middle values

median can also be calculated by the

formula n plus 1 divided by two which gives the position of the median where n is the number of data points

mode also known as frequency mean is the

value that occurs most frequently in a data set

data sets that have more than one mode are called multimodal

let us look at an example for

determining mean median and mode in this

the data set has the numbers 1 2 3 4 5 5

6 7 and 8. click the button to know the answer

as previously defined mean is the sum of

all the data items divided by the number

of items therefore the mean is equal to

41 divided by 9 which is equal to 4.56

the number in the middle of the data set

is five therefore the median is five

mode is the most frequently occurring value which in this case is five
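The three measures for this data set can be checked with a quick sketch using the standard library:

```python
from statistics import mean, median, mode

data = [1, 2, 3, 4, 5, 5, 6, 7, 8]

print(round(mean(data), 2))   # 41 / 9 = 4.56
print(median(data))           # middle value of the sorted set: 5
print(mode(data))             # most frequent value: 5
```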

in this screen we will understand the

effect of outliers on the data set

let us consider a minor change to the

data set a new number 100 is added to it

on using the same formula to calculate

mean the new mean is 14.1 ideally 50

percent of values should lie on either side of the mean

however in this example it can be seen

that almost 90 percent of values lie

below the mean value of 14.1 and only one value lies above it

the data point 100 is called an outlier

an outlier is an extreme value in the

data set that skews the mean value to one side

note that the median remains unchanged

at five therefore mean is not an

appropriate measure of central tendency

if the data has outliers median is a better measure
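The outlier effect can be verified directly; computing the new mean of the ten values gives 14.1, while the median is untouched:

```python
from statistics import mean, median

data = [1, 2, 3, 4, 5, 5, 6, 7, 8]
with_outlier = data + [100]           # add the extreme value 100

print(round(mean(with_outlier), 1))   # 141 / 10 = 14.1, pulled up sharply
print(median(with_outlier))           # still 5, unaffected by the outlier
```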

in the next screen we will look at

measures of dispersion of the descriptive statistics

apart from central tendency another

important parameter to describe a data

set is spread or dispersion contrary to

the measures of central tendency such as

mean median and mode measures of

dispersion Express the spread of values

the higher the variation of data points

the higher the spread of the data

the three main measures of dispersion

are range variance and standard

deviation we will discuss each of these

let us start with the first measure of dispersion the range

the range of a particular set of data is

defined as the difference between the

largest and smallest values of the data

in the example the largest value of the

data is nine and the smallest value is

one therefore the range is nine minus

one which is eight in calculating range all the

data points are not needed and only the

maximum and minimum values are required

let us understand the next measure of

dispersion variance in the following screen

the variance denoted as Sigma square or

S square is defined as the average of

squared mean differences and shows the spread of the data around the mean

to calculate the variance for a sample

data set of 10 numbers type the numbers

in an Excel sheet calculate the variance

using the formula equals varp or vars

the varp formula gives the population

variance which is 7.24 for this example

the vars formula gives the sample variance

population variance is calculated when

the data set is for the entire

population and Sample variance is

calculated when data is available only

for a sample of the population

population variance is preferred over

sample variance as the latter is only an estimate

sample variance allows for a broader

range of possible answers for the true variance of the population

that is the confidence levels are higher

note that variance is a measure of

variation and cannot be considered as

the variation in a data set

in the following screen we will

understand the next measure of dispersion standard deviation

standard deviation denoted by Sigma or S

is given by the square root of variance

the statistical notation of this is sigma equals the square root of sigma squared

standard deviation is the most important measure of dispersion

standard deviation is always relative to the mean

for the same data set the population

standard deviation is 2.69 and Sample

standard deviation is 2.83 as in

variance calculation if the data set is

measured for every unit in a population the population standard deviation is used

the population standard deviation and

Sample standard deviation can be

calculated in Excel using the formula

the steps to manually calculate the standard deviation are as follows

first calculate the mean then calculate

the difference between each data point

and the mean and square that answer

next calculate the sum of the squares

next divide the sum of the squares by n

or n minus 1 to find the variance lastly

find the square root of variance which is the standard deviation
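The manual steps above can be sketched and checked against the library result. The data set below is hypothetical, and the division by n gives the population figure; divide by n minus 1 instead for the sample figure.

```python
from statistics import pstdev

def manual_pstdev(data):
    """Follow the manual steps: mean, squared differences, variance, root."""
    m = sum(data) / len(data)                 # step 1: calculate the mean
    squares = [(x - m) ** 2 for x in data]    # step 2: squared differences
    variance = sum(squares) / len(data)       # step 3: divide by n (population)
    return variance ** 0.5                    # step 4: square root of variance

data = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical set of readings
print(manual_pstdev(data))         # 2.0
print(pstdev(data))                # matches the library calculation
```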

in the next screen we will look at

frequency distribution of the data

frequency distribution is a method of

grouping data into mutually exclusive

categories showing the number of observations in each category

an example is presented to demonstrate

frequency distribution a survey was

conducted among the residents of a

particular area to collect data on cars owned per household

a total of 20 homes were surveyed

to create a frequency table for the

results collected in the survey the

first step is to divide the results into

intervals and count the number of results in each interval

for instance in this example the

intervals would be the number of

households with no car one car two cars and so on

next a table is created with separate

columns for the intervals the tallied

results for each interval and the number

of occurrences or frequency of results

each result for a given interval is

recorded with a tally mark in the second

column the tally marks for each interval

are added and the sum is entered in the third column

the frequency table allows viewing

distribution of data across a set of intervals

in the following screen we will look at

cumulative frequency distribution

a cumulative frequency distribution

table is similar to the frequency

distribution table only more detailed

there are additional columns for

cumulative frequency percentage and cumulative percentage

in the cumulative frequency column the

cumulative frequency of the previous row

or rows is added to the current row

the percentage is calculated by dividing

the frequency by the total number of

results and multiplying by 100. the

cumulative percentage is calculated

similar to the cumulative frequency
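The column calculations just described can be sketched for a hypothetical frequency table:

```python
# Hypothetical frequency table: number of occurrences per interval
frequencies = [1, 2, 3, 3, 1]          # one entry per interval
total = sum(frequencies)               # total number of results: 10

running = 0
for f in frequencies:
    running += f                       # cumulative frequency: add previous rows
    pct = f / total * 100              # percentage: frequency / total * 100
    cum_pct = running / total * 100    # cumulative percentage of this row
    print(f, running, pct, cum_pct)
```

The last cumulative percentage always comes out to 100, which is the consistency check mentioned in the chess tournament example that follows.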

let us look at an example for cumulative frequency distribution

the ages of all the participants in a

chess tournament are recorded the lowest

age is 37 and the highest is 91.

keeping intervals of 10 the lowest

interval starts with the lower limit as

35 and the upper limit as 44.

similar intervals are created until an interval covering the highest age is reached

in the frequency column the number of

times a result appears in a particular

interval is recorded in the cumulative

frequency column the cumulative

frequency of the previous row is added

to the frequency of the current row

for the first row the cumulative

frequency is the same as the frequency

in the second row the cumulative

frequency is one plus two which is 3 and so on

in the percentage column the percentage

of the frequency is listed by dividing

the frequency by the total number of

results which is 10 and multiplying the

value by 100. for instance in the first

row the frequency is 1 and the number of

results is 10. therefore the percentage

is 10. the final column is the

cumulative percentage column in this

column the cumulative frequency is

divided by the total number of results

which is 10 and the value is multiplied by 100

note that the last number in this column

should be equal to 100. in this example

the cumulative frequency is one and the

total number of results is 10. therefore

the cumulative percentage of the first

row is 10. let us look at the stem and

leaf plots which is one of the graphical

methods of understanding distribution

graphical methods are extremely useful

tools to understand how data is

distributed sometimes merely by looking

at the data distribution errors in a data set can be spotted

the stem and leaf method is a convenient

method of manually plotting data sets it

is used for presenting data in a

graphical format to assist visualizing

the shape of a given distribution
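
a stem and leaf plot of the kind described can be built with a short sketch; the temperature list is assumed, keeping only the three readings quoted in the example (51, 58 and 59):

```python
# group two-digit readings into stems (tens digit) and leaves (units digit)
from collections import defaultdict

temps = [51, 58, 59, 62, 64, 64, 67, 71, 73, 78, 81]  # assumed May data

plot = defaultdict(list)
for t in sorted(temps):          # start from the lowest value
    stem, leaf = divmod(t, 10)
    plot[stem].append(leaf)

for stem in sorted(plot):
    print(stem, "|", " ".join(str(leaf) for leaf in plot[stem]))
# 5 | 1 8 9
# 6 | 2 4 4 7
# 7 | 1 3 8
# 8 | 1
```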

in the example on the screen the

temperatures in Fahrenheit for the month

of May are given to collate this

information in a stem and leaf plot all

the tens digits are entered in the stem

column and all the units digits against

each tens digit are entered in the leaf

column to start with the lowest value is

considered in this case the lowest

temperature is 51. in the first row five

is entered in the stem column and one

in the leaf column the next lowest

temperature is 58. 8 is entered in the

leaf column corresponding to 5 in the stem column

the next number is 59. all the

temperatures falling in the 50s are entered in this row

in the next row the same process is

repeated for temperatures in the 60s

this is continued till all the

temperature values are entered in the plot

let us understand another graphical

method in the next screen box and whisker plots

a box and whisker graph based on medians

or quartiles is used to display a data

set in a way that allows viewing the

distribution of the data points easily

consider the following example the

lengths of 13 fish caught in a lake were

measured and recorded the data set is shown on the screen

the first step to draw a box and whisker

plot is therefore to arrange the numbers in ascending order

next find the median as there is an odd

number of data entries the median is the

number in the middle of the data set

which in this case is 12. the next step

is to find the lower median or quartile

this is the median of the lower six

numbers the middle of these numbers is

halfway between eight and nine which is 8.5

similarly the upper median or quartile

is located for the upper six numbers to

the right of the median the upper median

is halfway between the two values 14 and

14. therefore the upper median is 14.
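
the median and quartile steps above can be checked in code; the 13 lengths below are a dataset consistent with the quoted values (minimum 5, maximum 20, median 12, quartiles 8.5 and 14), since the actual measurements are only on the slide:

```python
# median and quartiles for an odd-sized data set, as in the fish example
lengths = [12, 5, 14, 8, 16, 11, 14, 9, 20, 13, 7, 14, 10]  # assumed data

data = sorted(lengths)            # step 1: arrange in ascending order
n = len(data)
median = data[n // 2]             # middle value of the 13 numbers

lower, upper = data[:n // 2], data[n // 2 + 1:]
q1 = (lower[2] + lower[3]) / 2    # halfway between the middle pair of the lower six
q3 = (upper[2] + upper[3]) / 2    # halfway between the middle pair of the upper six

print(median, q1, q3)             # 12 8.5 14.0
```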

let us now understand how the box and

whisker chart is drawn using the values

of the median and upper and lower

quartiles next a number line

is drawn extending far enough to include all the data values

then a vertical line is drawn from the

median point 12. the lower and upper

quartiles 8.5 and 14 respectively are

marked with vertical lines and these are

joined with the median line to form two

boxes as shown on the screen

next two whiskers are extended from

either ends of the boxes as shown to the

smallest and largest numbers in the data set

the box and whiskers graph is now

complete the following inferences can be

drawn from the box and whisker plot the

lengths of the fish range from 5 to 20.

the range is therefore 15. the quartiles

split the data into four equal parts in

other words one quarter of the data

numbers is less than 8.5 one quarter

between 8.5 and 12 next quarter of the

data numbers are between 12 and 14. and

another quarter has data numbers greater than 14

in this screen we will learn about

another graphical method scatter diagrams

a scatter diagram or scatter plot is a

tool used to analyze the relationship or

correlation between two sets of

variables X and Y with X as the

independent variable and Y as the dependent variable

a scatter diagram is also useful when

cause effect relationships have to be

examined or root causes have to be identified

there are five different types of

correlation that can be used in a scatter diagram

let us learn about them in the next screen

the five types of correlation are

perfect positive correlation moderate

positive correlation no relation or no

correlation moderate negative

correlation and perfect negative correlation

click each type to learn more

in perfect positive correlation the

value of dependent variable y increases

proportionally with any increase in the

value of independent variable X

this is said to be one-to-one that is

any change in one variable results in an

equal amount of change in the other

the following example is presented to

demonstrate perfect positive correlation

the consumption of milk is found to

increase proportionally with an increase

in the consumption of coffee

the data is presented in the table on the screen

the scatter diagram for the data is also shown

it can be observed from the graph that

as X increases y also increases

proportionally hence the points are linear

in moderate positive correlation as the value

of the X variable increases the value of

y also increases but not in the same proportion

to demonstrate this the following example is considered

the increase in savings for increase in

salary is shown in the table

as you can notice in the scatter diagram

the points are not linear although the

value of y increases with increase in

the value of x the increase is not proportional

when a change in one variable has no

impact on the other there is no relation

let us consider the following example to

study the relation between the number of

fresh graduates in the city and the job openings

data for both was collected over a few

months and tabulated as shown

the scatter diagram for the same is also shown

it can be observed that the data points

are scattered and there is no Trend

therefore there is no correlation

between the number of fresh graduates

and the number of job openings in the city

in moderate negative correlation an

increase in one variable results in a

decrease in the other variable however

this change is not proportional to the

change in the first variable to

demonstrate moderate negative correlation

the prices of different products are

listed along with the number of units sold

the data is shown in the table

from the scatter diagram shown it can be

observed that higher the price of a

product lesser are the number of units sold

however the decrease in the number of

units with increasing price is not proportional

in perfect negative correlation an

increase in one variable results in a

proportional decrease of the other variable

this is also an example of one-to-one

correlation as an example the effect of

an increase in the project time

extension on the success of project is

considered the data is shown in the

table the scatter diagram for the data

shows a proportional decrease in the

probability of the Project's success

with each extension of the project time

hence the points are linear
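
the five types can be seen numerically through Pearson's correlation coefficient r, which is +1 for perfect positive, near 0 for no correlation and -1 for perfect negative; the three small datasets below are illustrative, not the ones on the slides:

```python
import math

def pearson_r(xs, ys):
    # covariance of x and y divided by the product of their spreads
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
print(round(pearson_r(x, [2, 4, 6, 8, 10]), 2))   # 1.0  perfect positive
print(round(pearson_r(x, [10, 8, 6, 4, 2]), 2))   # -1.0 perfect negative
print(round(pearson_r(x, [2, 5, 1, 3, 4]), 2))    # 0.2  little correlation
```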

perfect correlations are rare in the

real world when encountered they should be investigated

in this screen we will look at another graphical method histograms

histograms are similar to bar graphs

except that the data in histograms is

grouped into intervals they are used to

represent category wise data graphically

a histogram is best suited for

the following example illustrates how a

histogram is used to represent data

data on the number of hours spent by a

group of 15 people on a special project

in one week is collected this data is

then divided into intervals of two and

the frequency table for the data is created

the histogram for the same data is also shown

looking at the histogram it can be

observed at a glance that most of the

team members spent between two to four hours on the project
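
the interval grouping used in this example can be sketched as follows; the 15 weekly hour values are assumed, since only the summary is quoted:

```python
# group 15 readings into intervals of width two and count frequencies
hours = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 10]  # assumed data

width = 2
bins = {}
for h in hours:
    lo = (h // width) * width      # interval lower bound: 0-2, 2-4, ...
    bins[lo] = bins.get(lo, 0) + 1

for lo in sorted(bins):
    print(f"{lo}-{lo + width}: {'#' * bins[lo]}")  # crude text histogram
# the 2-4 interval has the tallest bar, mirroring the observation above
```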

in the following screen we will look at

the next graphical method normal probability plots

normal probability plots are used to

identify if a sample has been taken from

a normally distributed population

when sample data from a normally

distributed population is represented

as a normal probability plot it forms a straight line

the following example is presented to

illustrate normal probability plots

a sampling of diameters from a drilling

operation is done and the data is

recorded the data set is given

to create a normal probability plot the

first step is to construct a cumulative frequency table

this is followed by calculating the mean

rank probability by dividing the

cumulative frequency by the number of

samples plus one and multiplying the value by 100

the fully populated table for mean rank

probability estimation is shown on the

screen please take a look at the same
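
the mean rank probability calculation just described is simple to sketch; the diameter readings are hypothetical stand-ins for the drilling data on the slide:

```python
# mean rank probability: cumulative frequency / (samples + 1) * 100
diameters = sorted([9.94, 9.97, 9.99, 10.00, 10.02, 10.04, 10.07])  # assumed

n = len(diameters)
mean_rank = [rank / (n + 1) * 100 for rank in range(1, n + 1)]

for d, p in zip(diameters, mean_rank):
    print(d, round(p, 1))   # plot d against p to build the probability plot
# 9.94 12.5
# ...
# 10.07 87.5
```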

in The Next Step a graph is plotted on

log paper or with minitab using this

data minitab is a statistical software

used in Six Sigma minitab normal

probability plot instructions are also given

the completed graph is shown on the screen

from the graph it can be seen that the

random sample forms a straight line and

therefore the data is taken from a

normally distributed population

let us proceed to the next

topic of this lesson in this topic we

will discuss measurement system analysis

let us understand what MSA is in the

following screen throughout the DMAIC

process the output of the measurement

system MS is used for metrics analysis

an error-prone measurement system will

only lead to incorrect data incorrect

data leads to incorrect conclusions

it is important to set right the MS

measurement system analysis or MSA is a

technique that identifies measurement

error or variation and sources of that

error in order to reduce the variation

it evaluates the measuring system to

ensure the integrity of data used for analysis

MSA is therefore one of the first

activities in the measure phase

the measurement system's capability is

calculated analyzed and interpreted

using gauge repeatability and

reproducibility to determine measurement

correlation bias linearity percent

agreement and precision or tolerance

let us discuss the objectives of MSA in the next screen

a primary objective of MSA is to obtain

information about the type of

measurement variation associated with

the measurement system it is also used

to establish criteria to accept and

release new measuring equipment

MSA also compares one measuring method

against another it helps to form a basis

for evaluating a method which is

suspected of being deficient

the measurement system variations should

be resolved to arrive at the correct

baselines for the project objectives

as baselines contain crucial data based

on which decisions are taken it is

extremely important that the measurement

system be free of error as far as possible

let us look at measurement analysis in the next screen

in measurement analysis The observed

value is equal to the sum of the true

value and the measurement error the

measurement error can be a negative or a positive value

measurement error refers to the net

effect of all sources of measurement

variability that cause an observed

value to deviate from the True Value

total variability is the sum of the

process variability and the measurement variability

process variability and measurement

variability must be evaluated and

measurement variability should be

addressed before looking at process variability

if process variability is

corrected before resolving measurement

variability then any improvements to the

process cannot be trusted to have taken

place owing to a faulty measurement system

in the following screen we will identify

the types of measurement errors

the two types of measurement errors are

measurement system bias and measurement system variation

click each type to know more

measurement system bias involves

calibration study in the calibration

study the total mean is given by the sum

of the process mean and the measurement

mean the statistical notation is shown on the screen

measurement system variation involves

gauge repeatability and reproducibility

or grr study in the grr study the total

variance is calculated by adding the

process variance with the measurement variance

the statistical notation is shown on the screen

in this screen we will discuss the

sources of variation the chart on the

screen lists the different sources of

variation observed process variation is

divided into two actual process

variation and measurement variation

actual process variation can be divided

into long-term and short-term process variation

in a gauge RR study process variation is

often called part variation measurement

variation can be divided into variations

caused by operators and variations due

the variation due to operators is owing

to reproducibility and the variation due to gauges is owing to repeatability

both actual process variation and

measurement variation have a common

factor that is variation within a sample

let us understand gauge repeatability

and reproducibility or grr in the next

gauge repeatability and reproducibility

or grr is a statistical technique to

assess if a gauge or gauging system will

obtain the same reading each time a

particular characteristic or parameter is measured

gauge repeatability is the variation in

measurement when one operator uses the

same gauge to measure identical

characteristics of the same part

repeatedly gauge reproducibility is the

variation in the average of measurements

when different operators use the same

gauge to measure identical characteristics of the same part

the figures on the screen illustrate

gauge repeatability and reproducibility

in the next screen we will discuss the difference between the two

the figure on the screen illustrates the

difference between gauge repeatability

and reproducibility the figure shows the

repeatability and reproducibility for

six different parts represented by the

numbers one to six for two different

trial readings by three different operators

as can be observed a difference in

reading for part one indicated by the

color green by three different operators

is known as reproducibility error a

difference in reading of part 4

indicated by Red by the same operator in

two different trials is known as the

repeatability error in the following

screen we will look at some guidelines for grr studies

the following should be kept in mind

while carrying out gauge repeatability

and reproducibility or grr studies

grr studies should be performed over the

range of expected observations

care should be taken to use actual

equipment for grr studies written

procedures and approved practices should

be followed as would have been in actual practice

the measurement variability should be

represented as it exists not the way it was intended to be

after grr the measurement variability is

separated into causal components sorted

according to priority and then targeted

in the following screen let us look at

some more Concepts associated with grr

bias is the distance between the sample

mean value and the sample True Value it

is also called accuracy bias is equal to

mean minus reference value process

variation is equal to six times the

standard deviation the bias percentage

is calculated as bias divided by the process variation

the next term is linearity linearity

refers to the consistency of bias over

the range of the gauge linearity is

given by the product of slope and process variation

precision is the degree of repeatability

the smaller the dispersion in the data set the better the precision

the variation in the gauge is the sum of

variation due to repeatability and the

variation due to reproducibility

in the following screen we will

understand measurement resolution

measurement resolution is the smallest

detectable increment that an instrument can measure

the number of increments in the

measurement system should extend over

the full range for a given parameter

some examples of wrong gauges or

incorrect measurement resolution are

a truck weighing scale is used for

measuring the weight of a tea packet

a caliper capable of measuring

differences of 0.1 millimeters is used

to show compliance when the tolerance

limits are plus or minus 0.07 millimeters

thus the measurement system that matches

the range of the data should only be used

an important prerequisite for grr

studies is that the gauge has an adequate resolution

in the next screen we will look at

examples for repeatability and reproducibility

repeatability is also called equipment

variation or EV it occurs when the same

technician or operator repeatedly

measures the same part or process under

identical conditions with the same equipment

the following example illustrates this

a 36 kilometer per hour Pace mechanism

is timed by a single operator over a

distance of 100 meters on a stopwatch

and three readings are taken

trial 1 takes 9 seconds trial two takes

10 seconds and trial 3 takes 11 seconds

the process is measured with the same

equipment in identical conditions by the

same operator assuming no operator error

the variation in the three readings is

known as repeatability or equipment variation

reproducibility is also called appraiser

variation or AV it occurs when different

technicians or operators measure the

same part or process under identical

conditions using the same measurement equipment

let us extend the example for

repeatability to include data measured

by a second operator the readings are displayed on the slide

the difference in the readings of both

operators is called reproducibility or appraiser variation

it is important to resolve equipment

variation before appraiser variation

if appraiser variation is resolved first

the results will still not be identical

due to variation in the equipment itself
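
the two error types can be sketched with the stopwatch data; operator A's trials (9, 10 and 11 seconds) are from the example, while operator B's readings are hypothetical, added only to show appraiser variation:

```python
import statistics

operator_a = [9, 10, 11]    # three trials, same operator, same stopwatch
operator_b = [10, 11, 12]   # assumed readings from a second operator

# repeatability (equipment variation): spread within one operator's trials
ev = statistics.stdev(operator_a)

# reproducibility (appraiser variation): gap between the operator averages
av = abs(statistics.mean(operator_a) - statistics.mean(operator_b))

print(ev)   # 1.0
print(av)   # 1.0
```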

in this screen we will learn about data

collection in grr there are some

important considerations for data

collection in grr studies there are

usually three operators and around 10

units to measure General sampling

techniques must be used to represent the

population and each unit must be

measured two to three times by each

operator it is important that the gauge

be calibrated accurately it should also

be ensured that the gauge has an adequate resolution

another practice is that the first

operator measures all the units in

random order then this order is

maintained by all other operators as well

in the next screen we will discuss the

Anova method of analyzing grr studies

the Anova method is considered to be the

best method for analyzing grr studies

this is because of two reasons the first

being Anova not only separates equipment

and operator variation but also provides

Insight on the combined effect of the

two second Anova uses standard deviation

instead of range as a measure of

variation and therefore gives a better

estimate of the measurement system

the one drawback of using Anova is the

considerations of time resources and cost

in the next screen we will understand the results of an MSA

two results are possible for an MSA

in the first case the reproducibility

error is larger than the repeatability

error this occurs when the operators are

not trained and calibrations on the gauge are not clear

the other possibility is that the

repeatability error is larger than the

reproducibility error this is clearly a

maintenance issue and can be resolved by

calibrating the equipment or performing

maintenance on the equipment

this indicates that the gauge needs to be

redesigned to be more rigid and the

location needs to be improved it also

occurs when there is ambiguity in Sops

MSA is an experiment which seeks to

identify the components of variation in the measurement system

in the following screen we will look at

a template used for grr studies

a sample gauge RR sheet is given on this

screen The Operators here are Andrew

Murphy and Lucy Wang who are the inspectors

they have measured and rated the

performance of three employees Abraham

Glassoff Brianna Scott and Jason Schmidt

this is a sample template for a gauge

RR study the parts are shown across the top

in this case the measurement system is

being evaluated using three parts the

employees Abraham Glassoff Brianna Scott

and Jason Schmidt The Operators measure

from this data the averages X bar and ranges

R are calculated for each inspector and for each part

the grand average for each inspector and

in this example a control limit UCL in

the sheet was compared with the

difference in averages of the two

inspectors to identify if there is a

significant difference in their

measurements the difference of 0.111 is outside the UCL of

0.108 given the r average of 0.042

in the next screen we will look at the

results page for this grr study

the sheet on the screen displays the

results for the data entered in the

template in the previous screen please

spend some time to go through the data

for a better understanding of the results

in the following screen we will look at

the interpretation of this results page

the percentage grr value is highlighted

in the center right of the table in the

there are three important observations

to be made here about the gauge RR study

first this study also shows the

interaction between operators and parts

If the percentage grr value is less than

30 percent then the gauge is acceptable and the

measurement system does not require any

change if the value is greater than 30 percent

then the gauge needs correction
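
that acceptance rule can be captured in a tiny helper; the 30 percent threshold is the one quoted in this course, and organizations may apply stricter cut-offs:

```python
def gauge_acceptable(percent_grr: float) -> bool:
    """True when the measurement system needs no change."""
    return percent_grr < 30.0

print(gauge_acceptable(18.5))   # True  -- gauge is acceptable
print(gauge_acceptable(42.0))   # False -- gauge needs correction
```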

the equipment variation is checked and

resolved first followed by the appraiser

second if EV equals zero it means the MS

is reliable the equipment is perfect and

the variation in the gauge is

contributed by different operators

if the AV is equal to zero the MS variation is due to the equipment alone

third if EV is equal to zero and there

is Av The Operators have to be trained

to ensure all operators follow identical

steps during measurement and the AV is minimized

the interaction between operators and

parts can also be studied under grr

using part variation the trueness and

precision cannot be determined in a grr study

if only one gauge or measurement method

is evaluated as it may have an inherent

bias that would go undetected merely by such a study

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss process

and performance capability in detail

in the following screen we will look at

the differences between natural process

limits and specification limits

natural process limits or control limits

are derived from the process data and

are the voice of the process

the data consists of real-time values

from past process performance therefore

these values represent the actual

process limits and indicate variation in the process

the two control limits are upper control

limit UCL and lower control limit LCL

specification limits are provided by

customers based on their requirements or

the voice of the customer and cannot be changed

these limits act as targets for the

organization and processes are designed to meet them

the product or service has to meet

customer requirements and has to be well

within the specification limits

If the product or service does not meet

customer requirements it is considered

as a defect therefore specification

limits are the intended results or

requirements from the product or service

that are defined by the customer

the two specification limits are upper

specification limit or USL and lower specification limit or LSL

the difference between the two is called the tolerance

an important point to note is that for a

process if the control limits lie within

the specification limits the process is capable

conversely if specification limits lie

within the control limits the process

will not meet customer requirements
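
the comparison described above can be sketched as a containment check; the limit values are illustrative only:

```python
# a process is capable when its control limits (voice of the process)
# sit inside the specification limits (voice of the customer)
def meets_specs(lcl, ucl, lsl, usl):
    return lsl <= lcl and ucl <= usl

print(meets_specs(lcl=9.2, ucl=10.8, lsl=9.0, usl=11.0))  # True: capable
print(meets_specs(lcl=8.5, ucl=11.5, lsl=9.0, usl=11.0))  # False: defects likely
```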

in the following screen we will look at

process performance metrics and how they are calculated

the two major metrics used to measure

process performance are defects per unit

or dpu and defects per million opportunities or dpmo

dpu is calculated by dividing the number

of defects by the total number of units

dpmo is calculated by multiplying the

defects per opportunity with 1 million

in the following screen we will look at

an example for calculating process performance

in this example the quality control

Department checks the quality of

finished goods by sampling a batch of 10

items from the produced lot every hour

the data is collected over 24 hours

the table displays the data for the

number of defectives for the sampling

if items are consistently found to be

outside the control limits on any given

day the production process is stopped

let us now interpret the results of the

sampling in this example as the sample

size is constant dpu or P bar is used to

calculate the process capability the

total number of defectives is 34 and the

subgroup size is 10. the total number of

units is 10 multiplied by 24 which is

240. the defects per unit is 0.1417

the defects per million opportunities is

obtained by multiplying the defects per

unit with 1 million which is 141 666.66
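
the arithmetic of this example, with one defect opportunity per unit assumed, works out as:

```python
defectives = 34
units = 10 * 24              # 24 hourly samples of 10 items each

dpu = defectives / units     # defects per unit
dpmo = dpu * 1_000_000       # defects per million opportunities

print(round(dpu, 4))         # 0.1417
print(round(dpmo, 2))        # 141666.67
```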

therefore by looking at the dpmo table

it can be said that the process is

currently working at 2.6 Sigma or 86.4 percent yield

we will learn about process stability

the activities carried out in the

measure phase are MSA collection of data

statistical calculations and checking

this is followed by a test for stability

as changes cannot be made to an unstable

process with a set of data believed to

be accurate the process is checked for

stability this is important because if a

process is unstable no changes can be made

why does a process become unstable

a process can become unstable due to

special causes of variation multiple

special causes of variation lead to an unstable process

a single special cause leads to an out of control condition

run charts in minitab can be used to

check for process stability let us look

at the steps to plot a run chart in

minitab in the following screen

to plot a run chart in minitab first

enter the sample data collected to check for stability

next click stat on the minitab window

next click run charts select the column

and choose the subgroup size as two

the graph shown on the screen is

interpreted by looking at the last four

values if any of the P values is less

than 0.05 the presence of special causes

of variation can be validated
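
that decision rule can be sketched as follows; minitab's run chart reports p values for clustering, mixtures, trends and oscillation, and the values below are made up for illustration:

```python
ALPHA = 0.05   # significance level quoted above

p_values = {"clustering": 0.62, "mixtures": 0.48,
            "trends": 0.03, "oscillation": 0.71}   # hypothetical output

special_causes = [name for name, p in p_values.items() if p < ALPHA]
print(special_causes)   # ['trends'] -> investigate before changing the process
```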

this means there is a good chance that

the process will become unstable

in the following screen we will look at

process stability studies causes of

variation can be due to two types of

causes common causes of variation and special causes of variation

click each type to learn more

common causes of variation are the many

sources of variation within a process

which have a stable and repeatable

distribution over a period they

contribute to a state of statistical

control where the output is predictable

some other factors which do not always

act on the process can also cause

variation these are special causes of

variation these are external to the

process and are irregular in nature when

present the process distribution changes

and the process output is not stable

over a period special causes may result

in defects and need to be eliminated to

bring the process under control

run charts indicate the presence of

special causes of variation in the

process if special causes are detected

the process has to be brought to a stop

and a root cause analysis has to be

carried out if the root cause analysis

reveals the special cause to be

undesirable corrective actions are taken

to remove the special cause

we will learn about verifying process

stability and normality in this screen

based on the type of variation a

process exhibits it can be verified if the process is stable

if there are special causes of variation

the process output is not stable over

time the process cannot be said to be in control

conversely if there are only common

causes of variation in a process the

output forms a distribution that is

stable and predictable over time a

process being in control means the

process does not have any special causes of variation

once a process is understood to be

stable the control chart data can be

used to calculate the process capability

in the following screen we will discuss

process capability studies process

capability is the actual variation in

the process relative to the specification to carry out a

process capability study first plan for

data collection next collect the data

finally plot and analyze the results

obtaining the appropriate sampling plan

for the process capability study depends

on the purpose and whether there are any

customer or standard requirements for the study

for new processes or a project proposal

the process capability can be estimated

by a pilot run let us look at the

objectives of process capability studies

in the next screen the objectives of a

process capability study are to

establish a state of control over a

manufacturing process and then maintain

the state of control over a period of time

on comparing the natural process limits

or the control limits with the

specification limits any of the

following outcomes is possible

first the process limits are found to

fall between the specification limits

this shows the process is running well

the second possibility is that the

process spread and the specification

spread are approximately the same in

this case the process is centered by

making an adjustment to the centering of the process

this would bring the batch of products within the specification limits

the third possibility is that the

process limits fall outside the

specification limits in this case reduce

the variability by partitioning the

pieces or batches to locate and target the source of variation

a designed experiment can be used to

identify the primary source of variation

in the following screen we will learn

about identifying characteristics in the next screen

process capability deals with the

ability of the process to meet customer

requirements therefore it is crucial

that the characteristics selected for a

process capability study indicate a key

factor in the quality of the product or

also it should be possible to influence

the value of the characteristic by adjusting the process

the operating conditions that affect the

characteristic should also be defined

apart from these requirements other

factors determining the characteristics

to be measured are customer purchase

order requirements or industry standards

in the following screen we will look at

identifying specifications or tolerances

the process specification or tolerances

are defined either by industry standards

based on customer requirements or by the

organization's engineering department in

consultation with the customer a

comprehensive capability study also

helps in identifying if the process mean

meets the Target or the customer mean

the process capability study indicates

whether the process is capable it is

used to determine if the output

consistently meets specifications and

the probability of a defect or defective

this information is used to evaluate and

improve the process to meet the specifications

in the following screen we will learn

about process performance indices

process performance is defined as a

statistical measurement of the outcome

of a process characteristic which may or

may not have been demonstrated to be in

a state of statistical control

in other words it is an estimate of the

process capability of a process during

its initial setup before it has been

brought into a state of statistical control

it differs from the process capability

in that for process performance a state

of statistical control is not required

the three basic process performance

indices are process performance or PP

process performance index or PPK and

process capability index mean denoted as CPM

click each index to know more

PP stands for process performance it is

computed by subtracting the lower

specification limit from the upper

specification limit the whole divided by

natural process variation or six times the standard deviation

PPK is the process performance index and is

the minimum of the values of the upper and

lower process capability indices the

upper and lower process capability

indices are calculated as shown on the

screen PPU or upper process capability

index is given by the formula USL minus x bar divided by 3s

PPL or lower process capability index is

given by x minus LSL divided by 3s here

x is process average better known as X

bar and S is sample standard deviation
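
the three formulas can be tried with illustrative numbers (the specification limits, x bar and s below are assumed, not from the course):

```python
usl, lsl = 16.0, 4.0       # hypothetical specification limits
x_bar, s = 11.0, 1.5       # hypothetical process average and sample std dev

pp = (usl - lsl) / (6 * s)        # process performance
ppu = (usl - x_bar) / (3 * s)     # upper process capability index
ppl = (x_bar - lsl) / (3 * s)     # lower process capability index
ppk = min(ppu, ppl)               # process performance index

print(round(pp, 3), round(ppu, 3), round(ppl, 3), round(ppk, 3))
# 1.333 1.111 1.556 1.111
```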

CPM denotes the process capability index

mean which accounts for the location of

the process average relative to a Target

value it can be calculated as shown on

the screen here mu stands for process

average Sigma symbol denotes the process

standard deviation USL is the upper

specification limit and LSL is the lower

specification limit T is the target

value which is typically the center of

the tolerance x i is the sample reading

and N is the number of sample readings
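
the Cpm calculation, which penalizes distance of the average from the target T, can be sketched with the same assumed numbers:

```python
import math

usl, lsl = 16.0, 4.0
target = (usl + lsl) / 2      # T taken as the center of the tolerance
x_bar, s = 11.0, 1.5          # assumed sample average and std dev

# Cpm = (USL - LSL) / (6 * sqrt(s^2 + (x_bar - T)^2))
cpm = (usl - lsl) / (6 * math.sqrt(s ** 2 + (x_bar - target) ** 2))
print(round(cpm, 3))          # 1.109
```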

we will look at the key terms in process capability

zst or short-term capability is the

potential performance of the process in

control at any given point of time it is

based on the sample collected in the short term

the long-term performance is denoted by

zlt it is the actual performance of the

process over a given period of time

subgroups are several small samples

collected consecutively each sample

forms a subgroup the subgroups are

chosen so that the data points are

likely to be identical within the

subgroup but different between two subgroups

the process shift is calculated by

subtracting the long-term capability

from the short-term capability

the process shift also reflects how well

a process is controlled it is usually a

factor of 1.5 let us look at short-term

and long-term process capability in the

the concept of short-term and long-term

process shift is explained graphically

there are three different samples taken

at time one time two and time three the

smaller waveforms represent the

short-term capability and they are

joined with their means to show the

shift in long-term performance

the long-term performance curve is shown

below with the target value marked in

the center it is important to note that

over a period of time or subgroups a

typical process will shift by

approximately 1.5 times the standard

deviation also long-term variation is

more than short-term variation this

difference is known as the sigma shift

and is an indicator of the process control

the reasons for a process shift include

changes in operators raw material used

wear and tear and time periods we will

discuss the assumptions and conventions

of process variations in the following screen

long-term variation is always greater

than short-term variation click each

short-term variations are due to the

common causes the variance is inherent

in the process and known as the natural variation

short-term variations show variation

within subgroup and are therefore called within-group variations

they are usually a small number of

samples collected at Short intervals in

short-term variation the variation due

to common causes are captured however

common causes are difficult to

identify and correct the process may

have to be redesigned to remove common causes

long-term variations are due to common

as well as special causes the added

variation or abnormal variation is due

to factors external to the usual process

long-term variation is also known as the

overall variation and is a sample

standard deviation for all the samples

long-term variation shows variations

within the subgroup and between subgroups

special causes increasing variation

include changes in operators raw

material and wear and tear the special

causes need to be identified and

corrected for process Improvement

this screen explains how the factors of

stability capability spread and defect

summary are used to interpret the

process condition this table gives the

process condition for different levels

or types of variation with reference to

common causes and special causes

in the first scenario the process has

lesser common causes of variation or CCV

and no special causes of variation or

SCV in this case the variability is less

the capability is high the possibility

of defects is less and the process is

said to be capable and in control next

if the process has lesser CCV and some

SCV are present then it has high

variability low capability and a high

possibility of defects the process is

said to be out of control and incapable

the third possibility is that the

process has high CCV and no SCV in this

case the variability is moderate to high

the capability is very low and

possibility of defects is very high

although the process is in control it is incapable

finally at the other extreme is the

situation where the process has high CCV

and SCV is also present here the process

has high variability low capability high

possibility of defects and is out of control

this table is a quick reference to

understand process conditions

in the next screen we will compare the CP and CPK values

when CPK and CP values are compared

three outcomes are possible when CPK is

lesser than CP it can be inferred that

the mean is not centered when CPK is

equal to CP the inference is that the

process is accurate the process is

considered capable if CPK is greater

than one this will happen only if

CPK can never be greater than CP if this

situation occurs the calculations have

to be rechecked we will look at an

example problem for calculating process

variation in the following screen

the table on this screen shows data for

customer complaint resolution time over

a period of three weeks each week's data

forms a subgroup for example the

resolution time is 48 hours for a

particular case in week one in week two

the case takes up 50 hours and in week 3

the subgroup size is 10. let us

understand how to calculate long-term

and short-term standard deviations for this data

the average for each week is calculated

by dividing the total number of

complaints resolved by the subgroup size

a grand average is also calculated for all the weeks

the variations within subgroups and

between subgroups for each week are

calculated this is followed by

calculating the total variations within and between subgroups

overall variation is given by the sum of

total variation within subgroups and

total variation between subgroups

finally the standard deviations for the

short term and the long term are

calculated using the formula given on the screen

the results for the process variation

calculations are as follows the grand

average for all three weeks is 47.5

the total variation within subgroups is

the total variation between subgroups is

both these variations are added to give

the overall variation of 1185.5

the short-term standard deviation is 6.2

and the long-term standard deviation is

note that the overall variation can also

be calculated with the usual sample standard deviation formula

let us discuss the effect of mean shift

on the process capability in this screen

the table given here shows the defect

level at different Sigma multiple values

and different mean shifts from the table

it can be seen that when the mean is

centered within the specification limits

and the process capability is one that

is plus or minus 3s fits within the

specification limits the dpmo is 2700

and the probability of a good

result is 99.73 percent if the mean

shifts by 1.5 Sigma then a tail moves

outside the specification limit to a

greater extent now the dpmo increases to

over 66 000. this is almost a twenty

five hundred percent increase in defects

if the process has a process capability

of two that is plus or minus 6s fits

within the specification limits and the

mean shifts by 1.5 Sigma then the

probability of a good result is 99.99966 percent

this is the same as a process with a

capability of 1.5 that is plus or minus

4.5 s fitting within the specification

limits and no shift in the mean
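the dpmo figures quoted above can be reproduced from the standard normal cumulative distribution using only the Python standard library this sketch treats the specification limits as symmetric at plus or minus the stated sigma level

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def dpmo(sigma_level, shift=0.0):
    """Defects per million for symmetric limits at +/- sigma_level
    when the mean shifts by `shift` standard deviations."""
    tail_upper = 1 - phi(sigma_level - shift)
    tail_lower = phi(-sigma_level - shift)
    return (tail_upper + tail_lower) * 1_000_000

centered_3s = dpmo(3)        # about 2700 dpmo
shifted_3s = dpmo(3, 1.5)    # over 66000 dpmo
shifted_6s = dpmo(6, 1.5)    # about 3.4 dpmo, the classic Six Sigma figure
```

this matches the table a capability of one gives 2700 dpmo when centred and over 66000 dpmo after a 1.5 sigma shift while a capability of two barely feels the shift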

the long-term and short-term capability

table shows the variations in

capabilities for the purposes of Six

Sigma the assumption is that the

long-term variability will have a 1.5 s

difference from the short-term

as seen in statistical process control

this assumption can be challenged if

control charts are used and these kinds

of shifts are detected quickly

in the chart it can be seen that the

mean shift is negligible as the process

capability increases therefore for a Six

Sigma process the long-term variation

does not have much effect in the next

screen we will look at Key Concepts in

process capability for attribute data

the customary procedure for defining

process capability for attribute data is

based on non-conformities defects and defectives

are examples of non-conformity

defects per million opportunities or

dpmo is the measure of process

capability for attribute data for this

the mean and the standard deviation for

attribute data have to be defined

for defectives p bar is used for

checking process capability for both

constant and variable sample sizes for

defects c bar and u-bar are used for

constant and variable sample sizes

the P Bar C bar and u-bar are the

equivalent of the standard deviation

denoted by Sigma for continuous data in

this topic we will learn about the

patterns of variation in detail let us

start with the classes of distributions

when data obtained from the measurement

phase is plotted on a chart it is

observed that it exhibits a variety of

distributions depending on the data type

these distribution patterns will help

you understand the data better

probability statistics and inferential

statistics are the methods used to

describe the parameters for the classes

click each method to know more

probability is based on the assumed

model of distribution and it is used to

find the chances of a certain outcome or

statistics uses the measured data to

determine a model to describe the data

inferential statistics describe the

population parameters based on the

sample data using a particular model

in this screen we will discuss the types

of distributions there are two types of distributions

discrete distribution and continuous distribution

discrete distribution includes binomial

distribution and poisson distribution

continuous distribution includes normal

distribution chi-square distribution T

distribution and F distribution let us

learn about discrete probability

distribution in the following screen

discrete probability distribution is

characterized by the probability Mass

function it is important to be familiar

with discrete distributions while

dealing with discrete data some of the

examples of discrete probability

distribution are binomial distribution

poisson distribution negative binomial

distribution geometric distribution and

we will focus only on the two most

useful discrete distributions binomial

distribution and poisson distribution

like most probability distributions

these distributions also help in

predicting the sample behavior that has

let us learn about binomial distribution

binomial distribution is a probability

distribution for discrete data named

after the Swiss mathematician Jacob

Bernoulli it is an application of

population knowledge to predict the sample

binomial distribution also describes the

discrete data as a result of a

particular process like the tossing of a

coin for a fixed number of times and the

success or failure in an interview

a process is known as Bernoulli's

process when the process output has only

two possible values like defective or OK

binomial distribution is used to deal

with defective items defect is any

non-compliance with a specification

defective is a product or service with one or more defects

binomial distribution is most suitable

when the sample size is less than 30 and

less than 10 percent of the population

it is used to find the percentage of non-defective

items provided the probability of

creating a defective item remains the same

the probability of exactly r successes

out of a sample size of n is denoted by

P of R which is equal to NCR whole

multiplied by P to the power of R and 1

minus P whole to the power of n minus r

in the equation P is the probability of

success R is the number of successes

desired and N is the sample size to

continue discussing the binomial

distribution let us look at some of its

key calculations in the following screen

the mean of a binomial distribution is

denoted by mu and is given by n multiplied by p

the standard deviation of a binomial

distribution is denoted by Sigma which

is equal to the square root of n multiplied

by P multiplied by 1 minus P

the method of calculating factorials say

a factorial of 5 is the product of five

four three two and one which is equal to 120

similarly factorial of 4 is the product

of four three two and one which is equal to 24

let us look at an example of calculating

binomial distribution in the next screen

suppose you wish to know the probability

of getting heads five times in eight

coin tosses you can use the binomial distribution

click the answer button to see how this

the tossing of a coin has only two

outcomes heads and tails it means that

the probability of each outcome is 0.5

and it remains fixed over a period of

time Additionally the outcomes are

statistically independent in this case

the probability of success denoted by P

is 0.5 the number of successes desired

is denoted by R which is 5 and the

sample size is denoted by n which is 8.

therefore the probability of five heads

is equal to 8 C 5 which is

factorial of eight divided by factorial of 5 and

factorial of eight minus five whole

multiplied by 0.5 to the power of 5

multiplied by one minus 0.5 whole to the power of eight minus five

this calculation gives a result of about 0.219
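the coin toss calculation can be checked with a few lines of Python using the binomial formula from the earlier screen

```python
from math import comb

def binomial_pmf(r, n, p):
    """P(exactly r successes in n trials) = nCr * p^r * (1-p)^(n-r)."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

# probability of exactly five heads in eight fair coin tosses
p_five_heads = binomial_pmf(5, 8, 0.5)   # 56 / 256 = 0.21875
```

the same function with different r values gives the full distribution over zero to eight heads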

Poisson distribution is named after Simeon

Denis Poisson and is also used for discrete data

poisson distribution is an application

of the population knowledge to predict

the sample Behavior it is generally used

for describing the probability

distribution of an event with respect to time

some of the characteristics of poisson

distribution are as follows

poisson distribution describes the

discrete data resulting from a process

like the number of calls received by a

call center agent or the number of

unlike binomial distribution which deals

with binary discrete data poisson

distribution deals with integers which

can take any value poisson distribution

is suitable for analyzing situations

wherein the number of Trials similar to

the sample size in binomial distribution

is large and tends towards Infinity

additionally it is used in situations

where the probability of success in each

trial is very small almost tending

towards zero this is the reason why

poisson distribution is applicable for

predicting the occurrence of rare events

like plane crashes car accidents Etc and

is therefore widely used in the

insurance sector poisson distribution

can be used for predicting the number of

defects as well given a low defect

let us look at the formula for

calculating poisson distribution in the

next screen the poisson distribution for

a probability of exactly X occurrences

is given by P of x equals to Lambda to

the power of X multiplied with e to

the power of minus Lambda whole divided

by factorial of X in this equation

Lambda is the mean number of occurrences

during the interval X is the number of

occurrences desired and E is the base of

natural logarithm which is equal to 2.71828

the mean of the poisson distribution is

given by the Lambda and the standard

deviation of a poisson distribution is

given by Sigma which is the square root of Lambda

let us look at an example to calculate

poisson distribution in the next screen

the past records of a road Junction

which is accident prone show a mean

number of five accidents per week at

this Junction assume that the number of

accidents follows a poisson distribution

and calculate the probability of any

number of accidents happening in a week

click the button to know the answer

given the situation you know that the

value of Lambda or mean is 5. so P of 0

that is the probability of zero

accidents per week is calculated as 5 to

the power of zero multiplied by e to the

power of minus 5 whole divided by a

factorial of zero the answer is

0.006 applying the same formula the

probability of one accident per week is

0.03 the probability of more than two

accidents per week is one minus the sum

of probabilities of zero one and two

which is 0.884 in other words the probability is 88.4 percent
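the accident example can be verified with a short Python sketch note that using unrounded terms gives roughly 0.875 the course arrives at 0.884 by rounding each probability to two or three decimal places before adding

```python
import math

def poisson_pmf(x, lam):
    """P(exactly x occurrences) = lambda^x * e^(-lambda) / x!"""
    return lam ** x * math.exp(-lam) / math.factorial(x)

lam = 5  # mean number of accidents per week
p0 = poisson_pmf(0, lam)   # ~0.0067
p1 = poisson_pmf(1, lam)   # ~0.0337
p2 = poisson_pmf(2, lam)   # ~0.0842

# probability of more than two accidents in a week
p_more_than_2 = 1 - (p0 + p1 + p2)
```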

let us learn about continuous

probability distribution in this screen

continuous probability distribution is

characterized by the probability density

a variable is said to be continuous if

the range of possible values Falls along

a continuum for example loudness of

cheering at a ball game weight of

cookies in a package length of a pen or

the time required to assemble a car

continuous probability distributions

help in predicting the sample Behavior

let us learn about normal distribution

the normal or gaussian distribution is a

continuous probability distribution the

normal distribution is represented as n

and depends on two factors Miu which

stands for mean and sigma which gives

the standard deviation of the data

points normal distribution normally has

a higher frequency of values around the

mean and lesser occurrences away from it

it is used as a first approximation to describe real valued

random variables that tend to Cluster

the distribution is bell-shaped and

symmetrical the total area under the

normal curve is one which is p of X

various types of data such as body

weight height the output of a

manufacturing device Etc follow the

normal distribution additionally normal

distribution is continuous and

symmetrical with the tails asymptotic to

the x-axis which means they touch the

x-axis at Infinity let us continue to

discuss normal distribution in the

in a normal distribution to standardize

comparisons of dispersion for the

different measurement units like inches

meters grams Etc a standard Z variable

is used the uses of Z value are as

follows while the value of Z or the

number of standard deviations is unique

for each probability within the normal

distribution it helps in finding

probabilities of data points anywhere within the distribution

it is dimensionless as well that is it

has no units such as millimeters liters

there are different formulas to arrive

at the normal distribution we will focus

on one commonly used formula for

calculating normal distribution which is

z equals y minus mu whole divided by

Sigma here Z is the number of standard

deviations between Y and the mean

denoted by mu Y is the value of the data

point in concern mu is mean of the

population or data points and sigma is

the standard deviation of the population

or data points let us look at an example

for calculating normal distribution in

suppose the time taken to resolve

customer problems follows a normal

distribution with a mean of 250 hours

and standard deviation of 23 hours find

the probability of a problem resolution

taking more than 300 hours click the

in this case Y is equal to 300 mu equals

250 and sigma equals 23. applying the

normal distribution formula Z is equal

to 300 minus 250 whole divided by 23.

the result is 2.17 when you look at the

normal distribution table the Z value of 2.17 corresponds to a probability of 0.985

this means the probability of a problem

taking zero to three hundred hours to be

resolved is 98.5 percent and therefore

the chances of a problem resolution

taking more than 300 hours is 1.5

percent Learners check out our certified

lean six Sigma Green Belt certification

training course and earn a Green Belt

certification to learn more about this

course you can click the course Link in
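returning to the resolution-time example the z score and tail probability can be computed with the standard library the erf based formula below is a standard way to get the normal cumulative distribution without a Z table

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mean, sd, y = 250, 23, 300
z = (y - mean) / sd                # about 2.17
p_more_than_300 = 1 - phi(z)       # about 0.015, i.e. 1.5 percent
```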

let us understand the usage of Z table

the graphical representation of the Z table

shows the probability of areas under the curve

for the actual value one can identify

the z-score by using the Z table

as shown this probability is the area

under the curve to the left of point

using the actual data when you calculate

mean and standard deviation and the

values are 25 and 5 respectively it is a normal distribution

if the same data is standardized to a

mean value of zero and standard

deviation value of one it is the standard normal distribution

in the next screen we will take a look

the Z table gives the probability that Z

is between 0 and a positive number there

are different forms of normal

distribution Z tables followed globally

the most common form of Z table with

positive z-scores is shown here

the value of a called the percentage

point is given along the borders of the

table in bold and is to two decimal places

the values in the main table are the

probabilities that Z is between 0 and a

note that the values running down the

table are to one decimal place

the numbers along the column change only in the second decimal place

let us look at some examples on how to

use a z table in the following screen

let us find the value of P of Z less than or equal to zero

the table is not needed to find the

answer once we know that the variable Z

takes a value less than or equal to zero

first the area under the curve is one

and second the curve is symmetrical

about Z equals zero hence there is a 0.5

or 50 percent chance of Z falling above zero and a 0.5

or 50 percent chance of Z falling below zero so the answer is 0.5

let us find the value of P of Z greater than 1.12

in this case we need the chance of Z being greater

than a number in this case 1.12

you can find this by using the following

the opposite or complement of an event

of a is the event of not a that is the

opposite or complement of event a

occurring is the event a not occurring

its probability is given by P of not a

in other words P of Z greater than 1.12

is 1 minus the opposite which is P of Z less than 1.12

using the table P of Z less than 1.12

equals 0.5 plus P of 0 less than Z less than 1.12

which is 0.5 plus 0.3686 or 0.8686

hence the answer is P of Z greater than 1.12

equals 1 minus 0.8686 which is 0.1314

note the answer is less than 0.5

let us find the value of P of Z lies between 0 and 1.12

in this case where Z falls within an

interval the probability can be read directly from the table

P of Z lies between 0 and 1.12 equals 0.3686
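all three Z table lookups above can be reproduced in Python the cumulative probability is computed with the error function and the table entry is that value minus 0.5

```python
import math

def phi(z):
    """Standard normal CDF; the Z table tabulates phi(z) - 0.5."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p_between_0_and_z = phi(1.12) - 0.5   # table entry, ~0.3686
p_less = 0.5 + p_between_0_and_z      # P(Z < 1.12), ~0.8686
p_greater = 1 - p_less                # P(Z > 1.12), ~0.1314
```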

we will learn about chi-square

chi-squared distribution is also known

as Chi Squared or chi-square

chi squared with K degrees of freedom

is the distribution of a sum of the

squares of K independent standard normal random variables

the chi-square distribution is one of

the most widely used probability

distributions in inferential statistics

the distribution is widely used in

hypothesis tests when used in hypothesis

tests it only needs one sample for the

conventionally the degree of freedom is k minus 1

click the button to view the chi-squared

chi-square calculated equals the sum or Sigma of

F of O minus F

of e the whole squared divided by F of e

here F of O stands for an observed

frequency and F of e stands for an

expected frequency determined through a
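the chi-square statistic is a straightforward sum the observed and expected counts below are made-up illustration values

```python
def chi_square(observed, expected):
    """Sum of (fo - fe)^2 / fe over all categories."""
    return sum((fo - fe) ** 2 / fe for fo, fe in zip(observed, expected))

# hypothetical observed counts against expected counts
stat = chi_square([10, 20, 30], [15, 15, 30])   # 25/15 + 25/15 + 0
```

the statistic is then compared against a chi-square critical value at the chosen significance level and degrees of freedom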

we will learn about the chi-square

distribution in detail in the later part

of this lesson let us proceed to the

next screen to discuss T distribution

the T distribution method is the most

appropriate method to be used in the following cases

when you have a sample size of less than

30 when the population standard

deviation is not known when the

population is approximately normal

unlike the normal distribution a t

distribution is lower at the mean and

higher at the Tails as seen in the image

T distribution is used for hypothesis

testing also as seen in the image the T

distribution is symmetrical in shape but

flatter than the normal distribution

as the sample size increases the T

distribution approaches normality

for every possible sample size or

degrees of freedom there is a different

let us learn about F distribution in the

the F distribution is a ratio of two chi-square distributions

a specific F distribution is denoted by

the ratio of the degrees of freedom for

the numerator chi-square and the degrees

of freedom for the denominator

the f-test is performed to calculate and

observe if the standard deviations or

variances of two processes are significantly different

the project teams are usually concerned

about reducing the process variance as

per the formula f calculated equals S1

Square divided by S2 Square where S1 and

S2 are the standard deviations of the two samples

if the F calculated is one it implies

there is no difference in the variance

if S1 is greater than S2 then the

numerator must be greater than the

denominator in other words df1 equals N1

minus 1 and df2 equals N2 minus 1.
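the F calculated value and its degrees of freedom can be sketched as follows the two sample lists are hypothetical process measurements

```python
def f_statistic(sample1, sample2):
    """F calculated = s1^2 / s2^2 with df1 = n1 - 1 and df2 = n2 - 1."""
    def var(xs):
        # sample variance with n - 1 in the denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return var(sample1) / var(sample2), len(sample1) - 1, len(sample2) - 1

# hypothetical measurements from two processes
f, df1, df2 = f_statistic([5, 7, 9, 7, 7], [6, 7, 8, 7, 7])
```

the computed f is compared against the critical F value at alpha and the two degrees of freedom from the F distribution table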

from the F distribution table you can

easily find out the critical F

distribution at Alpha and the degrees of

freedom of the samples of two different processes

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss

exploratory data analysis in detail let

us learn about multivariate studies in

multivari studies or multi-variable

studies are used to analyze variation in a process and in

investigating the stability of a process

more stable the process less is the variation

multivari studies also help in

identifying areas to be investigated

finally they help in breaking down the

variation into components to make the

multivari studies classify variation

sources into three major types

positional cyclical and temporal click

positional variation occurs within a

single piece or a product variation in

pieces of a batch is also an example of

positional variation in positional

variation measurements at different

locations of a piece would produce different readings

suppose a company is manufacturing a

metal plate of thickness one inch and

the plate thickness is different at many

points it is an example of positional variation

some of the other examples can be pallet

stacking in a truck temperature gradient

in an oven variation observed from

cavity to cavity within a mold region of

a country and line on invoice

cyclical variation occurs when

measurement differs from piece to piece

or product to product but over a short period of time

measurements may change if a product

such as a hot metal sheet is measured

if the measurement at the same location

in a piece varies with different pieces

it is an example of cyclical variation

other examples of cyclical variations

are batch to batch variation lot to lot

variation and account activity week to week

temporal variation occurs over a longer

period of time such as machine wear and

tear and changes in efficiency of an

operator before and after lunch

temporal variations may also be seasonal

if the range of positional variation in

a piece is more in Winter than in summer

it is an example of temporal variation

the variation may occur because of

unfavorable working conditions in winter

process drift performance before and

after breaks seasonal and shift-based

differences month-to-month closings and

quarterly returns can be examples of temporal variation

we will learn about creating a multivari chart

the outcome of multivari studies is the multivari chart

it depicts the type of variation in the

product and helps in identifying the source of variation

there are five major steps involved in

creating a multivari chart

select the process and characteristics decide sample size

and frequency create a tabulation sheet

plot the chart and link the observed values

the first step is to select the process

and the relevant characteristics to be measured

for example selecting the process where

the plate of one inch thickness is being manufactured

in this process four equipment numbered

one to four produce the one inch plates

the characteristic to be measured is the

thickness of the plate ranging from 0.95 to 1.05 inches

any plate thickness outside this range is a defect

the second step is to decide the sample

size and the frequency of data

in this example the sample size is five

pieces per equipment and the frequency

of collecting data is every two hours

starting from eight in the morning to

then the tabulation sheet is created

where the data will be recorded

so one should measure the thickness of

the plate being produced by the four

equipment and a data collection

the third step is to create a tabulation

sheet in this example the tabulation

sheet with data records contains the

columns with time equipment number and plate thickness

the fourth step is to plot the chart

in this example a chart can be plotted

with time on the x-axis and plate thickness on the y-axis

the last step is to link the observed values

in this Example The observed values can

be linked by appropriate lines

we will continue to learn about creating

a multivari chart in this screen

the path to create a multivari chart in

minitab is by selecting stat then

quality tools followed by multivari chart

the multivari chart created from the data is shown on the screen

the upper specification limit of 1.05

inches and the lower specification limit

of 0.95 inches has been marked by green lines

data outside these lines are defects

the blue dots show the positional variation

the dots are the measurements of pieces

in a batch of any single equipment

the black lines join the mean of the

data recorded from the equipment

the mean of the data recorded from the

products of equipment number three is

much below the similar mean of other equipment

this shows that equipment number three

is producing more defects than the other equipment

the red line is the mean of the data

the red line Rises toward the right

which means the data points shift up over time

this may be because of the change in

operator efficiency after a lunch break

multivari chart helps us visually depict

the variations and establish the root causes

in the next screen we will learn about correlation

correlation means association between

variables simple linear regression and

multiple regression techniques are very

important as they help in validating the root causes

the coefficient of correlation shows the

strength of the relationship between Y and X

to associate y with a single X and

statistically validate the relationship

correlation is used in Excel use the equals

CORREL open bracket close bracket

function to calculate correlation

the dependent variable y may depend on

many independent variables X

but correlation is used to find the

behavior of Y as one of the X's changes

correlation helps us to predict the

direction of movement and values in y

statistical significance of this

movement is denoted by correlation

coefficient R it is also known as

Pearson's coefficient of correlation in

any correlation the value of the

correlation coefficient is always between minus one and plus one

a positive value of R denotes the

direction of movement in both variables

as X increases y also increases and vice

versa negative value of R denotes that

the direction of movement in both

variables is in inverse fashion

as X increases y decreases and as X

decreases y increases when the value of

R is zero it means that there is no

correlation between the two variables

higher the absolute value of R stronger

the correlation between Y and X

absolute value of a number is its value without the sign

plus 4 has an absolute value of 4 and

minus four again has an absolute value of 4

an R value of greater than plus 0.85 or

lesser than minus 0.85 indicates a

strong correlation hence r value of

minus 0.95 shows a stronger correlation
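Pearson's coefficient of correlation can be computed directly without Excel the data below is a made-up illustration of a perfect inverse relationship which gives r equal to minus one

```python
import math

def pearson_r(xs, ys):
    """Pearson's coefficient of correlation for two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# a perfectly linear inverse relationship: as x rises, y falls
r = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```

real process data will produce an r somewhere between minus one and plus one with absolute values above 0.85 indicating a strong correlation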

the next screen will elaborate on

correlation with the help of an example

and illustrations through Scatter Plots

the four graphs on the screen are

Scatter Plots displaying four different

correlation measures the linear

association between the dependent

variable or output variable Y and one

independent or input variable X

as can be deduced from the graphs a

definite pattern emerges as the absolute

value of correlation coefficient R

it is easy to see a pattern in r value

of 0.9 and above than to see a pattern at lower values

it is difficult to find a pattern below a

correlation coefficient of 0.5 click the button

to understand how correlation helps let

a correlation test was performed on the

scores of a set of students from their

the undergraduation score was the

dependent variable and first grade score

and high school score were the independent variables

the value of correlation coefficient R was higher between

undergraduation scores and high school scores

this means the high school scores have a

higher correlation compared to the first grade scores

this states that the performance of students

in high school is a better indicator of

their performance in undergraduation

than their performance in the first

grade although the correlation exists as

both the values of R are less than 0.85

it will be difficult to draw a straight line through the data points

in this screen we will learn about

regression although correlation gives

the direction of movement of the

dependent variable y as independent

variable X changes it does not provide

the extent of the movement of Y as X changes

this degree of movement can be measured using regression

if a high percentage of variability in y

is explained By changes in x one can use

the model to write a transfer equation Y

is equal to f x and use the same

equation to predict future values of Y

the output of regression on Y and X is a

transfer function equation that can

predict values of Y for any other value

transfer function is generally denoted

by F and the equation is written as y

y can be regressed on one or more X's

simple linear regression is for 1X and

multiple linear regression is for more than one X

the next screen will focus on key

there are two key concepts of regression

transfer function to control Y and vital X

click each concept to learn more

the output of regression is a transfer function

although the transfer function f of x

gives the degree of movement in y as X

changes it is not the correct transfer

function to control y as there may be a

low level of correlation between the two

the main thrust of regression is to

discover whether a significant

statistical relationship exists between

Y and a particular X that is by looking

at P values based on regression one can

infer the vital X and eliminate the trivial ones

the analyze phase helps in understanding

if there is statistical relevance

between Y and X if the relevance is

established using metrics from

regression analysis one can move forward

with the tests the simple linear

regression or SLR should be used as a

statistical validation tool in the analyze phase

in this screen we will understand the

concept of simple linear regression

a simple linear regression equation is a

represented by the equation shown here

in this equation Y is the dependent

variable and X is the independent

a is The Intercept of the fitted line on

the y-axis which is equal to the value of Y when X is zero

B is the regression coefficient or the

slope of the line and C is the error in

the regression model which has a mean of zero
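As a sketch, the intercept a and slope b of the fitted line can be computed directly with the least-squares formulas; the data here is hypothetical, purely to illustrate the calculation:

```python
# Least-squares estimates for the simple linear regression y = a + b*x.
# The data is hypothetical, chosen so the true slope is close to 2.
def fit_slr(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x  # the fitted line passes through the point of means
    return a, b

a, b = fit_slr([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
print(a, b)  # intercept near 0, slope near 2
```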

the next screen will focus on the least

squares method in simple linear

with reference to the error mentioned

earlier if correlation coefficient of Y

and X is not equal to 1 meaning the

relation is not perfectly linear there

could be several lines that could fit in

notice the two graphs displayed for the

same set of Five Points two different

types of lines are drawn and both of

error refers to the points on the

scatter plot that do not fall on the

straight line drawn the second graph

statistical software like minitab fits

the line which has the least value of

as is clear from the graph error is the

distance of the point from the fitted

typically the data lies off the line

in perfect linear relation All Points

would lie on the line an error would be

zero the distance from the point to the

line is the error distance used in the least squares method

let's understand SLR with the help of an

consider the following example

suppose a farmer wishes to predict the

relationship between the amount spent on

fertilizers and the annual sales of his

he collects the data shown here for the

last few years and determines his

expected Revenue if he spends eight

dollars annually on fertilizers

he has targeted sales of thirty one

the steps to perform simple linear

regression in Ms Excel are as follows

copy the data table on an Excel

select all the data from B1 to C6 this

is assuming the years table appears in

click insert and choose the plain scatter chart

it is titled scatter with only markers

the basic scatter chart will appear as

right click on the data points in the

scatter chart and choose the option add

then choose the option linear and select

the boxes titled display r squared value

a linear line will appear which is

called the best fit line or the least

to use the data for regression analysis

the interpretation of the scatter chart

the R square value or the coefficient of

determination conveys if the model is a good fit

since R square is 0.3797 it means 38 percent of

variability in y is explained by X

the remaining 62 percent variation is

unexplained or due to residual factors

other factors like rain amount and

variability Sunshine temperatures seed

type and Seed quality could be tested

the low value of R square statistically

validates poor relationship between Y

the equation presented cannot be used

in a similar situation one should refer

to the cause and effect Matrix and study

the relationship between Y and a

we will discuss multiple linear

if a new variable X2 is added to the

r-square model the impact of X1 and X2

this is known as multiple linear

the value of R square changes due to the

introduction of the new variable

the resulting value of R square which

can be used in cases of multiple

regression is known as R square adjusted

the model can be used if R square

adjusted value is greater than 70 percent
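A minimal sketch of the adjustment, with hypothetical values for R squared, the number of observations, and the number of predictors:

```python
# Adjusted R-squared penalizes R-squared for every added predictor,
# so a new X must genuinely explain variation for the value to rise.
def adjusted_r_squared(r_sq, n, k):
    """n = number of observations, k = number of predictors (X's)."""
    return 1 - (1 - r_sq) * (n - 1) / (n - k - 1)

# Hypothetical: R-squared of 0.90 from 20 observations and 2 predictors
print(round(adjusted_r_squared(0.90, 20, 2), 3))  # 0.888
```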

we will look at the key Concepts in the

the key Concepts in multiple linear

the residuals or the differences between

the actual value and the predicted value

given indication of how good the model

if the errors or residuals are small and

predictions use X's that are within the

range of the collected data the

the sum of squares total can be

calculated as follows sum of squares

total or SST equals the sum of squares

of regression or SSR plus sum of squares

to arrive at sum of squares of

regression SSR use the formula SSR

equals sum of squares total or SST minus

sum of squares of error or SSE

since SSR is SSE subtracted from SST

value of SSE should be less than SST

r squared is sum of squares of

regression or SSR divided by sum of squares total or SST

calculating SST and SSE helps in arriving at SSR and R squared
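The decomposition SST = SSR + SSE and the ratio R squared = SSR / SST can be checked numerically on a small hypothetical data set:

```python
# Sum-of-squares decomposition for a least-squares fit:
# SST = SSR + SSE and R^2 = SSR / SST. All data is hypothetical.
xs = [1, 2, 3, 4]
ys = [3, 5, 6, 9]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = y_bar - b * x_bar
y_hat = [a + b * x for x in xs]                      # fitted values

sst = sum((y - y_bar) ** 2 for y in ys)              # total variation
sse = sum((y - f) ** 2 for y, f in zip(ys, y_hat))   # residual (error) variation
ssr = sst - sse                                      # explained variation
print(round(ssr / sst, 3))  # R squared, about 0.963 here
```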

to get a sense of the error in the

fitted model calculate the value of y

for a given data using the fitted line

to check for error take two observations

of Y at the same X the most important

thing to remember in regression analysis

is that the obtained fitted line

equation cannot be used to predict y for

for example it would not be possible to

predict the amount spent on fertilizers

for a forecasted sales of fifteen

both data points lie outside the data

set on which regression analysis is

if Y is dependent on many X's then

simple linear regression analysis can be

used to prioritize X but it requires

running separate regressions on y with

if an X does not explain variation in y

then it should not be explored any

these were the interpretations of the

simple linear regression equation

in the next screen we will learn that

despite a relationship being established

between two variables the change in one

may not cause a change in the other

let us discuss the difference between

correlation and causation in the

following screen a regression equation

denotes only a relationship between the

this does not mean that a change in one

variable will cause a change in the

if number of schools and incidents of

crime in a city rise together there may

be a relationship but no causation

the increase in both the factors could

be due to a third factor that is

in other words both of them may be

dependent variables to an independent

consider the graphs shown on the screen

the graphs on the left show the

relations between number of sneezes and

incidence of death with respect to

both have a positive correlation

finding a positive correlation between

incidents of deaths and number of

sneezes does not mean we assume sneezing

is the cause of somebody's death despite

the correlation being very strong as

depicted in the graph on the right

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss hypothesis

let us learn about statistical and

practical significance of hypothesis

tests in the following screen

the differences between a variable and

its hypothesized value may be

statistically significant but may not be

practical or economically meaningful

for example based on a hypothesis test

neutral worldwide Inc wants to implement

a trading strategy which is proven to

provide statistically significant

however it does not guarantee trading on

economically meaningful positive returns

when the logical reasons are examined

before implementation the returns are

the returns may not be significant when

statistically proven strategy is

the returns may not be economically

significant after accounting for taxes

transaction costs and risks inherent in

thus there should be a practical or

economic significant study before

implementing any statistically

the next screen will briefly focus on

the conceptual differences between a

null and an alternate hypothesis

the conceptual differences between a

null and an alternate hypothesis are as

assume the specification of the current

process is itself the null hypothesis

null hypothesis denoted as the basic

assumption for any activity or

experiment is represented as H0

hypothesis cannot be proved it can only be disproved

it is important to note that if null

hypothesis is rejected alternative

hypothesis must be right for example

assuming that a movie is good one plans

to watch it therefore the null

hypothesis in this scenario will be that the movie is good

alternative hypothesis or Ha challenges

the null hypothesis or is the converse

in this scenario the alternate hypothesis is that the movie is not good

in the following screen we will discuss

rejecting a null hypothesis when it is

true is called type 1 error it is also known as producer's risk

for example the rejection of a product

by the QA team when it is not defective

will cause loss to the producer suppose

when a movie is good it is reviewed to

be not good this reflects type 1 error

in this case the null hypothesis is

rejected when it is actually true

the two important points to be noted are

significance level or Alpha is the

chance of committing a type 1 error

the value of alpha is 0.05 or 5 percent

accepting a null hypothesis when it is

false is called type 2 error it is also known as consumer's risk

for example the acceptance of a

defective product by the quality analyst

of an organization will cause loss to the consumer

minimizing type 2 error requires

acceptance criteria to be very strict

suppose when a movie is not good it is

reviewed to be good this reflects type 2

error in this case the alternate

hypothesis is rejected when it was

the two important points to be noted are

beta is the chance of committing a type

2 error the value of beta is 0.2 or 20 percent

any experiment should have as low a beta error as possible

the next screen will cover the key

points to remember about type 1 and type

as you start dealing with the two types

of Errors keep the following points in

the probability of making one type of

error can be reduced when one is willing

to accept a higher probability of making

suppose the management of a company

producing pacemakers wants to ensure no

defective pacemaker reaches the consumer

so the quality assurance team makes

stringent guidelines to inspect the

this would invariably decrease the beta

error or type 2 error but this will also

increase the chance that a non-defective

pacemaker is declared defective by the

thus Alpha error or type 1 error

if all null hypotheses are accepted to

avoid rejecting true null hypothesis it

will lead to type 2 error typically

Alpha is set at 0.05 which means that

the risk of committing a type 1 error is

in case of any product the teams must

decide what type of error should be less

and set the value of Alpha and beta

in the next screen we will discuss the

the power of a hypothesis test or the

power of test is the probability of

correctly rejecting the null hypothesis

power of a test is represented by 1

minus beta where beta is the type 2 error

the probability of not committing a type

2 error is called the power of a

the power of a test helps in improving

the advantage of hypothesis testing

the higher the power of a test the

better it is for purposes of hypothesis

testing given a choice of tests the one

with the highest power should be

the only way to decrease the probability

of a type 2 error given the significance

level or probability of type 1 error is to increase the sample size

it is important to note that quality

inspection is done on Sample pieces and

so beta error is a function of the

if the sample size is not appropriate

the defects in a product line could

easily be missed out giving a wrong

perception of the quality of the product

this will increase the type 2 error to

decrease this error the quality

assurance team has to increase the sample size

in hypothesis testing Alpha is called

the significance level and one minus

Alpha is called the confidence level of

the test in the next screen we will

focus on the determinants of sample size

the sample size can be calculated by

answering three simple questions

how much variation is present in the

at what interval does the true

population mean need to be estimated

and how much representation error is acceptable

continuous data is data which can be measured

the sample size for continuous data can

be determined by the formula shown on

we will learn about the standard sample

size formula for continuous data in the

representation error or Alpha error is

generally assumed to be five percent or 0.05

hence the expression of 1 minus Alpha by 2 is

0.975 or 97.5 percent looking up the

value of Z 97.5 from the Z table gives 1.96

the expression reduces to the one shown

when Alpha is five percent Z is 1.96

to detect a change that is half the

standard deviation one needs to get at

least 16 data points for the sample

click the example tab to view an example

of continuous data calculation using

the population standard deviation for

the time to resolve customer problems is

what should be the size of a sample that

can estimate the average problem

resolution time within plus or minus 5

hours tolerance with 99 confidence

to know with 99 confidence that the time

to resolve a customer problem ranges

the value of Z for 99.5 must be

2.575 a good result should fall outside

the range of 0.5 percent which is one in 200

it is expected that 199 out of 200

trials will confirm a proper conclusion

the calculation gives a result of 238.70

one cannot have 0.70 of a sample so one

needs to round up to the nearest integer

with only 238 samples the significance level is greater than 0.01

which indicates the confidence is less than 99 percent

using 239 reduces Alpha and increases

the rounded up value 239 means the

expectations are being met for the
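Both results above come from the same formula, n = (Z * sigma / delta) squared. The transcript cuts off before stating the population standard deviation for the resolution-time example; sigma = 30 hours is assumed below only because it reproduces the quoted 238.70:

```python
import math

def sample_size_continuous(z, sigma, delta):
    """Required sample size n = (Z * sigma / delta)^2 for continuous data."""
    return (z * sigma / delta) ** 2

# Detecting a shift of half a standard deviation at alpha = 5% (Z = 1.96):
print(math.ceil(sample_size_continuous(1.96, 1.0, 0.5)))  # 16

# Resolution-time example: Z = 2.575 (99% confidence), tolerance 5 hours;
# sigma of 30 hours is an assumption, not stated in the transcript.
n = sample_size_continuous(2.575, 30, 5)
print(n, math.ceil(n))  # about 238.70, rounded up to 239
```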

we will learn about the standard sample

size formula for discrete data in this

screen like continuous data one can find

out the sample size required while

dealing with discrete population

if the average population proportion

non-defective is p then population

standard deviation can be calculated by

using the expression shown on the screen

the expression for sample size is

presented it is important to note that in

this expression the interval or

click the example tab to view an example

of discrete data calculation using

the non-defective population proportion

for pen manufacturing is 80 percent

what should be the sample size to draw a

sample that can estimate the proportion

of compliant pens within plus or minus

five percent with an alpha of five

percent consider calculating the sample

size for discrete data for which the

population proportion non-defective is

80 percent and the tolerance limit is

within plus or minus five percent

substituting the values it is found the

sample size should be 246. in this

example to know if the population

proportion for good pens is still within

75 to 85 percent and to have 95 percent

confidence that the sample will allow a

good conclusion one needs to inspect 245.86 pens

0.86 of a pen cannot be inspected so the

value is rounded up to maintain the

inspecting 245 or fewer pens reduces the confidence level

this means the Z value would be lower

than 1.96 and Alpha would be greater

suppose one is willing to accept a

greater range in the estimate the

proportion is within 20 percent of the

past results and approximately within

one standard deviation of the proportion

Delta changes to 0.20 and the number of

needed samples is 15.4 which is approximately 16
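A sketch of the discrete sample-size formula n = Z^2 * p * (1 - p) / delta^2, using the values from the pen example:

```python
import math

def sample_size_discrete(z, p, delta):
    """Required sample size n = Z^2 * p * (1 - p) / delta^2 for a proportion."""
    return z ** 2 * p * (1 - p) / delta ** 2

# Pen example: p = 0.80 non-defective, tolerance +/- 5%, Z = 1.96 (alpha = 5%)
n = sample_size_discrete(1.96, 0.80, 0.05)
print(n, math.ceil(n))  # about 245.86, rounded up to 246

# Widening the tolerance to +/- 20% shrinks the required sample drastically
n_wide = sample_size_discrete(1.96, 0.80, 0.20)
print(n_wide, math.ceil(n_wide))  # about 15.37, rounded up to 16
```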

this screen will focus on the hypothesis

though the basic determinants of

accepting or rejecting a hypothesis

remain the same various tests are used

depending on the type of data from the

figure shown on the screen You can

conclude the type of test to be

performed based on the kind of data and

for discrete data if mean and standard

deviation are both known the z-test is

used and if mean is known but standard

deviation is unknown the t-test is used

if the standard deviation is unknown and

if the sample size is less than 30 it is

preferable to use the t-test if variance

is known one should go for chi-squared

test if mean and standard deviation are

known for a set of continuous data it is

recommended to go for the z-test

for mean comparison of two with

standard deviation unknown go for t-test

and for mean comparison of many with

standard deviation unknown go for f test

also if the variance is known for

continuous data go for f-test the next

few screens we'll discuss in detail the

tests for mean variance and proportions

let us understand hypothesis test for

means through an example in

the examples of hypothesis testing based

on the types of data and values

available are discussed here

the value of alpha can be assumed to be

five percent or 0.05 suppose you want to

check for the average height of a

population North American males are

selected as the population here

117 men are gathered as the sample and

the readings of their height are taken

the null hypothesis is that the average

height of North American males is 165

centimeters and the alternate hypothesis

is that the height is lesser or greater

than 165 centimeters consider the sample

size n as 117 for z-test and sample size 25 for t-test

sample average or X bar is 164.5

using the data given let us calculate

the Z calc value and T calc value

the population height is 165 centimeters

with a standard deviation of 5.2

centimeters and the average height of

the sample group is 164.5 centimeters

the test for significant difference

first let us compute Z calc value using

the formula given on the screen

hence the Z calc is 1.04 which is

less than 1.96 or Z critical

therefore the null hypothesis cannot be

rejected since Z 0.05 equals 1.96 the

null hypothesis is not rejected at five

percent level of significance

the statistical notation is shown on the

thus a conclusion based on the sample

collected is that the average height of

North American males is 165 centimeters
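The Z statistic for this example can be reproduced as follows, using the values quoted above:

```python
import math

# One-sample z-test for the mean (population sigma known).
mu = 165      # hypothesized mean height in cm
sigma = 5.2   # population standard deviation
n = 117       # sample size
x_bar = 164.5 # sample average

z_calc = (x_bar - mu) / (sigma / math.sqrt(n))
print(round(abs(z_calc), 2))  # 1.04, below the critical 1.96, so H0 is not rejected
```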

if the population standard deviation is

not known a t-test is used it is similar

to the z-test instead of using the

population parameter or Sigma the sample

statistic standard deviation or S is

in this example the S value is 5.0 let

us now compute T value using the formula

the statistical notation to reject null

hypothesis is shown on the screen

the T critical value is 2.064 and we

know the T calc value is 0.5 which is less

therefore the null hypothesis cannot be

rejected at five percent level of

thus a conclusion based on the sample

collected is that the average height of

North American males is 165 centimeters

the conclusion of not rejecting the null

hypothesis is based on the assumption

that the 25 males are randomly selected

from all males in North America
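The t statistic works the same way with the sample standard deviation in place of sigma; a sketch using the values above (n = 25, s = 5.0):

```python
import math

# One-sample t-test: same logic as the z-test, but the sample
# standard deviation s replaces the unknown population sigma.
mu, s, n, x_bar = 165, 5.0, 25, 164.5

t_calc = (x_bar - mu) / (s / math.sqrt(n))
print(abs(t_calc))  # 0.5, below the critical 2.064, so H0 is not rejected
```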

null and alternative hypotheses are the same

for both z-test and t-test in both the

examples the null hypothesis is not

in the next screen we will understand

the hypothesis test for variance with an

in hypothesis test for variance

chi-square test is used in the case of a

chi-square test the null and Alternate

hypotheses are defined and the values of

chi-square critical and chi-square calculated are computed

to understand this concept with an

example click the button given on the

the null hypothesis is that the

proportion of wins in Australia or

abroad is independent of the country

the alternate hypothesis is that the

proportion of wins in Australia or

abroad is dependent on the country

chi-square critical is 6.251 and

chi-square calculated is 1.36

since the calculated value is less than

the critical value the proportion of

wins of the Australia hockey team is

independent of the country played or

in this screen we will discuss

hypothesis tests for proportions with an

the hypothesis test on population

proportion can be performed to

understand this with an example click

the button given on the screen

let us perform hypothesis tests on population proportion

the null hypothesis is that the

proportion of smokers among males in a region is 0.10

the alternative hypothesis is the

proportion is different than 0.10

in notation it is represented as null

hypothesis is p equals P0 against

alternative hypothesis is p different

a sample of 150 adult males are

interviewed and it is found that 23 of

them are smokers thus the sample

proportion is 23 divided by 150 which is

substituting this value in the

expression of Z given on the screen

you can reject the null hypothesis at

level of significance Alpha if Z is

greater than z alpha for five percent

level of confidence the Z value should

be 1.96 since the calculated Z value is

more than what is required for five

percent level of confidence the null

hence it can be concluded that the

proportion of smokers in our population is greater than 0.10
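A sketch of the calculation with the values above (23 smokers out of 150 sampled, hypothesized proportion 0.10):

```python
import math

# One-sample z-test for a proportion: H0 is p = 0.10.
p0 = 0.10
n = 150
smokers = 23
p_hat = smokers / n  # about 0.1533

z_calc = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(round(z_calc, 2))  # 2.18, above 1.96, so H0 is rejected at alpha = 5%
```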

in this screen we will focus on

comparison of means of two processes

means of two processes are compared to

understand whether the outcomes of the

two processes are significantly

this test is helpful in understanding

whether a new process is better than an

this test can also determine whether the

two samples belong to the same

population or different populations

it is especially required for

benchmarking to compare an existing

process with another benchmarked process

let us proceed to the next screen to

learn about the paired comparison

the example of two mean t-test with

unequal variances is discussed here

null and Alternate hypotheses are

the average heights of men in two

different sets of people are compared to

see if the means are significantly

for this test the sample sizes means and

variances are required to calculate the

two samples of sizes N1 of 125 and N2 of

110 are taken from the two populations

the mean value of sample size 1 is

167.3 and sample size 2 is 165.8

the standard deviation for sample sizes

1 and 2 are 4.2 and 5.0 respectively

using the formula given on the screen

the T value is derived as 2.47

the null hypothesis is rejected if the

calculated value of T is more than the critical value

in other words reject null hypothesis at

level of significance a if computed T

value is greater than T of df alpha divided by two

with a t-test we're comparing two means

and the population parameter Sigma is

therefore we're pooling the sample

standard deviations in order to

the variances are weighted by the number

of data points in each sample group

since t 233, 0.025 equals 1.96 the

null hypothesis is rejected at five

percent level of significance
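The t value can be reproduced from the stated sample sizes, means, and standard deviations:

```python
import math

# Two-sample t-test (unequal variances): comparing two mean heights.
n1, x1, s1 = 125, 167.3, 4.2
n2, x2, s2 = 110, 165.8, 5.0

t_calc = (x1 - x2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
print(round(t_calc, 2))  # 2.47, above the critical 1.96, so H0 is rejected
```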

the test used Here is known as the

pooled t-test and is considered a very

powerful test in the next screen we will

look into the example of the paired

comparison hypothesis test for variance

it is important to understand the

different types of tests through an

Susan is trying to compare the standard

deviation of two companies according to

her the earnings of company a are more

volatile than those of Company B

she has been obtaining earnings data for

the past 31 years for company a and for

the past 41 years for Company B

she finds that the sample standard

deviation of company A's earnings is

four dollars forty cents and that of company B's earnings

is three dollars ninety cents

determine whether the earnings of

company a have a greater standard

deviation than those of Company B at

five percent level of significance click

the button given on the screen to know

Susan has the data of the earnings of

the companies distributions rarely have

when processes are improved one of the

strategies is to reduce the variation

it is important to be able to compare variances

a null hypothesis would indicate no difference

if it can be rejected and the variance

is lower one can claim success

the statistical notation for this

example is given on the screen

suppose one has to compare two sets of

company data Susan has looked at the

she has been studying the effects of

strategy management styles and

Leadership profiles on the earnings of

these companies there are significant

she wants to know if they have an effect

on the variance in the earnings

she has sample data over several decades

for each company by the given data it

can be concluded that earnings of

company a have a greater standard

deviation than those of Company B

in calculating the f-test statistic

always put the greater variance in the numerator

let us look at the f-test example of

hypothesis test for equality of variance

the degrees of freedom for company a and

Company B are 30 and 40 respectively

the critical value from F table equals 1.74

the null hypothesis is rejected if the

f-test statistic is greater than 1.74

the calculated value of f test statistic

is 1.273 and therefore at the 5 percent

significance level the null hypothesis is not rejected
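A sketch of the calculation, with the larger variance placed in the numerator:

```python
# F-test for equality of variances: larger variance goes in the numerator.
s_a, s_b = 4.40, 3.90  # sample standard deviations of the two companies
f_calc = s_a ** 2 / s_b ** 2

print(round(f_calc, 3))  # 1.273, below the critical 1.74, so H0 is not rejected
```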

the next screen will focus on hypothesis

tests f-test for independent groups

a restaurant which wants to explore the

recent overuse of avocados suspects

there is a difference between Two Chefs

and the number of avocados used to

prepare the salads the data shown in the

table is the measure of avocados in ounces

the weight of avocado slices used in

salads prepared by two different chefs

the goal is to determine if one chef is using more

perhaps the restaurants expenditures on

avocados is greater this month than the

average of the past 12 months this is

assuming there is no change in avocado

prices or the amount of avocados being

click the tab to learn to conduct an

the f-test is conducted in Ms Excel

through the following steps open MS

click data analysis please follow the

facilitator instruction on how to enable the data analysis toolpak

select f-test two-sample for variances in

variable 1 range select the data set for

group a and in variable 2 range select data set for Group B

the screenshot of the f-test window is

in this screen we will discuss the

before interpreting the f-test the

assumptions to be considered are null

hypothesis there is no significant

statistical difference between the

variances of the two groups thus

concluding any variation could be

because of chance this is common cause of variation alternate hypothesis

there is a significant statistical

difference between the variances of the

two groups thus concluding that

variations could be because of assignable causes

this is special cause of variation

the following screen will focus on

the interpretations for the conducted

f-test are from the Excel result sheet

if p-value is low or below 0.05 the null

must be rejected thus null hypothesis

with 97 confidence is rejected

also the fact that variation could only

be due to common cause of variation is rejected

it is inferred from the test that there

could be assignable causes of variation

or special causes of variation

Excel provides the descriptive

statistics for each variable it also

gives the degrees of freedom for each variable

f is the calculated F statistic F

critical is a reference number found in

a statistics Book Table P of f less than

or equal to F is the probability that F

really is less than F critical or that

the null hypothesis would be falsely rejected

since the p-value is less than the alpha

the null hypothesis can be confidently

rejected alongside conducting a

hypothesis test a meaningful conclusion

from the test has been drawn the

following screen will focus on

hypothesis test t-test for independent groups

as discussed earlier the table shows the

measure of avocados in ounces and the

significant difference in their means

if a significant amount of difference is

found it can be concluded that there is

a possibility of special cause of

the next screen will demonstrate how to

the two sample independent t-test

inspects two groups of data for a

significant difference in their means

the idea is to conclude if there is a

significant amount of difference

if there is a statistical evidence of

variation one can conclude a possibility

of special cause of variation

the steps for conducting a two-sample

open MS Excel click data and click data

select two-sample independent t-test

in variable 1 range select the data set

for group a and select the data set for

keep the hypothesized mean difference as zero

in the following screen we will focus on

the assumptions for a two-sample

independent t-test are null hypothesis

there is no significant statistical

difference between the means of the two

groups thus concluding any variation

could be because of chance this is common cause of variation alternate hypothesis

there is a significant statistical

difference between the means of the two

groups thus concluding that variations

could be because of assignable causes

this is special cause of variation

the null hypothesis States the mean of

group a is equal to the mean of Group B

the alternate hypothesis states that the

mean of group a is not equal to the mean

note that alternate hypothesis tests two directions

mean of a is less than mean of B and

mean of a is greater than mean of B

thus a two-tailed probability needs to be used

before we interpret the t-test results


let us compare the two-tailed and

one-tailed probability in the next

two-tailed probability and one-tailed

probability are used depending on the

direction of the alternate hypothesis

if the alternate hypothesis tests more

than one direction either less or more

use a two-tailed probability value from

the test example if mean of a is not

equal to mean of B then it is two-tailed

if the alternate hypothesis tests Only

One Direction use a one-tailed

probability value from the test example

if mean of a is greater than mean of B

then it is one-tailed probability in the

next screen let us look at the two

sample independent t-test results and

the results are shown in the table on the screen

the inference is as the two-tailed

probability is being tested the p-value

of two-tailed probability testing is

0.24 which is greater than 0.05

if p-value is greater than 0.05 the null

hypothesis is not rejected this means

one cannot reject the fact that there is

no significant statistical difference

between the two means similar to the

f-test Excel provides the descriptive

statistics for each group or variable

the T stat is shown Excel also shows

one-tailed or two-tailed data for the

one-tailed test the alpha is 0.05 the

error is expected to be in One Direction

for the two-tailed test the error is divided between two directions

in this example T Stat or t calculated

is less than either T criticals

therefore the null hypothesis cannot be

thus it can be inferred that both the

groups are statistically same

we will discuss the paired t-test in the

paired t-test is another hypothesis test

from the family of t-tests the following

points will help in understanding the paired t-test

the paired t-test is one of the most

powerful tests from the t-test family

the paired t-test is conducted before

and after the process to be measured for

example a group of students score X in

CSSGB before taking the training program

post the training program the scores are

taken again one needs to find out if

there is a statistical difference

between the two sets of scores

if there is a significant difference the

inference could be that the training was effective

it is important to note that the paired

t-test interpretation shows the

effectiveness of the Improvement

this is the main reason why paired

t-tests are often used in the improve

stage let us learn about sample variance

sample variance is defined as the

average of the squared differences from

the mean the sample variance that is s

Square can be used to calculate and

understand the degree of variation of a data set

it can also be used in statistics

however it cannot be used or explained

directly because its value is not in the same unit as the data

to use the value you have to first

convert it into standard deviation and

then combine it with the mean

click the button to know the steps for

calculating the sample variance

Step 1 calculate the mean or average of

the sample Step 2 subtract each of the values from the mean

step 3 calculate the square value of the differences

step 4 take the average of the squared differences
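The four steps can be sketched in Python. The weights below are hypothetical, chosen to reproduce the numbers used in the following example (mean 140, variance 1936, standard deviation 44); note that the n - 1 sample divisor is used in the last step, which is what yields 1936 here:

```python
import math

weights = [96, 140, 184]                      # hypothetical sample of weights in pounds

mean = sum(weights) / len(weights)            # step 1: mean of the sample
diffs = [w - mean for w in weights]           # step 2: subtract the mean from each value
squares = [d ** 2 for d in diffs]             # step 3: square each difference
variance = sum(squares) / (len(weights) - 1)  # step 4: average them (n - 1 divisor)

std_dev = math.sqrt(variance)                 # convert back to the data's unit
print(mean, variance, std_dev)  # 140.0 1936.0 44.0
```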

let us understand how to calculate the

sample variance with the help of an

consider the sample of weights the mean of which is 140

When you subtract the individual values

from the mean take the Square value of

the results and then take the average of

the squared differences you will get 1936

this number is not useful as it is in

order to get the standard deviation take

the square root of the sample variance

square root of 1936 equals 44.

the standard deviation in combination

with the mean will tell you how much the

in this example if your mean is 140 and

your standard deviation is 44 you can conclude

that the majority of people weigh

between 96 pounds mean minus 44 and 184

pounds mean plus 44. let us proceed to

the next screen that focuses on the

analysis of variance or Anova which is

the comparison of more than two means

a t-test is used for one sample, and two-sample tests are used for comparing two means. To compare the means of more than two samples, use the ANOVA method. ANOVA stands for analysis of variance.

ANOVA does not tell which mean is better; it helps in understanding whether all the sample means are equal. The samples shortlisted based on the ANOVA output can be tested further. One important aspect of ANOVA is that it

generalizes the t-test to include more than two samples. Performing multiple two-sample t-tests would increase the chance of committing a Type I error; hence, ANOVA is useful in comparing two or more means. The next screen will help in understanding this concept through an example. As an example,

consider the takeaway food delivery time

of three different outlets. Is there any evidence that the averages for the three outlets differ? In other words, can the delivery times be considered equal? The null hypothesis will assume that the means are equal. If the null hypothesis is rejected, it

would mean that there are at least two

outlets that are different in their mean delivery times.

In Minitab, one can perform a one-way ANOVA. Ensure that the data of the table is stacked in two columns. In the main menu, go to Stat, then ANOVA, and then One-Way.

the left column of the table will have

the outlets and the right column will

have the time in minutes this is similar

to the table shown on the screen

in the one-way analysis of variance

window select the response as delivery

time and Factor as outlet and click ok

the output of this process is shown here

notice the p-value, which is much higher than 0.05

The steps to perform ANOVA in Excel are as follows. After entering the data into a spreadsheet, select the Anova: Single Factor test from the Data Analysis menu. Select the array for analysis, designate that the data is in columns, and select an output range.

Excel shows the descriptive statistics

for each column in the top table

in the second table the Anova analysis

shows whether the variation is greater

between the groups or within the groups

it shows the sum of squares (SS); the degrees of freedom (DF); the mean squares (MS), that is, the sum of squares divided by the degrees of freedom, which is a variance; the F statistic (MS between divided by MS within); the p-value; and F critical from a reference table

the F and P are calculated for the

variation that occurs within each of the

groups and between the groups

if the differences between the groups are significant, it would be expected that the between-groups SS is much higher than the within-groups SS

Let us now interpret the Minitab ANOVA output. Since the p-value is more than 0.05, the null hypothesis cannot be rejected. This means there is no significant difference between the means of delivery time for the three outlets. Based on the confidence intervals, it is found that the intervals overlap, which means there is little that separates the means of the three outlets.

This was one-way ANOVA, where there was only one factor to be benchmarked, that is, the outlet of delivery. If there are two factors, you may use the two-way ANOVA.
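As a minimal sketch, the one-way ANOVA arithmetic (sums of squares, mean squares, and the F statistic) can be written out in plain Python. The three groups below are made-up delivery times; the real outlet data is only shown on screen.

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)
    # between-groups sum of squares: group size times squared gap of
    # each group mean from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-groups sum of squares: squared deviations inside each group
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between / ms_within, df_between, df_within

# hypothetical delivery times (minutes) for three outlets
f_stat, df_b, df_w = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(f_stat, df_b, df_w)  # 3.0 2 6
```

The F statistic would then be compared against an F table (or a p-value) at the chosen alpha.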

in this screen we will learn in detail

about chi-square distribution

the chi-square distribution is one of

the most widely used probability

distributions in inferential statistics

it is widely used in hypothesis testing, and when used in a hypothesis test it needs only one sample for the test to be conducted

The chi-square distribution has k degrees of freedom and is the distribution of a sum of the squares of k independent standard normal random variables. Suppose a field has nine player positions. Player one comes in and can choose amongst all nine positions available; player two can choose amongst the remaining eight, and so on. After the first eight players have chosen their positions, the last player gets no choice. Since eight players are free to choose in a playing field of nine, eight is the degrees of freedom for this example. Conventionally, the degrees of freedom is n minus 1.

for example if w x y and z are four

random variables with standard normal

distributions the random variable F

which is the sum of w Square x square y

square and z-square has a chi-square

the degrees of freedom of the

distribution or the DF equals the number

of normally distributed variables used

in this case DF equals four the formula

for the chi-square distribution is shown on the screen

it is important to note that F of O

stands for an observed frequency and F

of e stands for an expected frequency

the next screen will explain the chi-square test with an example

suppose the Australian hockey team

wishes to analyze its wins at home and abroad against four different countries. The data has two classifications, and the table is known as a two-by-four contingency table, with two rows and four columns. The expected frequencies can be calculated assuming there is no association between the two classifications. Thus, the expected frequency corresponding to each observed frequency is equal to the product of the row total and the column total divided by the grand total. One has to find out how to calculate the expected frequencies.

if the observed frequency is three wins

against South Africa in Australia then

it would convert to Total wins at home

which is 21 divided by the total number

of wins or 31 and is the result

similarly the expected population

parameters for all cases are found
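The expected-frequency rule and the chi-square index can be sketched in Python. The 2-by-2 table below is hypothetical, since the full hockey contingency table is only shown on screen.

```python
def chi_square_statistic(observed):
    """observed: list of rows. Expected cell = row total * column total
    divided by the grand total; chi-square sums (f_o - f_e)^2 / f_e."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, f_o in enumerate(row):
            f_e = row_totals[i] * col_totals[j] / grand  # expected frequency
            chi2 += (f_o - f_e) ** 2 / f_e
    return chi2

# hypothetical 2x2 table of wins by location and opponent
chi2 = chi_square_statistic([[10, 20], [20, 10]])
print(round(chi2, 3))  # 6.667
```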

in this step all the information of the

previous screen is combined and the

table is populated the estimated

population parameters are calculated and

added. The formula compares the observed and expected frequencies to calculate the final chi-square index, which in this case is 1.36.

it is important to note that there is a

different chi-square distribution for each of the different numbers of degrees of freedom. For the chi-square distribution, the degrees of freedom are calculated from the number of rows and columns in the contingency table: degrees of freedom equals (rows minus 1) multiplied by (columns minus 1), which here is (2 minus 1) times (4 minus 1), or 3. Assuming an alpha of 10 percent, the

chi-squared distribution in the

chi-square table is noticed and a

critical chi-square index of 6.251 is

arrived at. The calculated chi-square value is 1.36. Both values of the

chi-square index should be plotted the

critical chi-square distribution divides

the whole region into acceptance and

rejection while the calculated

chi-square distribution is based on data

and conveys whether the data falls into

an acceptance or rejection region

Therefore, as the calculated value is less than the critical value and falls in the acceptance region, the proportion of wins of the Aussie team at home or abroad has nothing to do with the opponent country.

let us proceed to the next topic of this

lesson in the next screen in this topic

we will learn in detail about hypothesis

testing with non-normal data. Let us begin with the Mann-Whitney test. The Mann-Whitney test, also known as the Mann-Whitney U test, is a non-parametric test which is used to compare two independent samples. In this test, the value of alpha is by default set at 0.05, and the rejection and acceptance condition remains the same for different cases: if p is less than alpha, reject the null hypothesis; if p is greater than alpha, fail to reject the null hypothesis. The aim of this test is to rank the entire data available for each condition and then compare the total of the ranks for each group.

click the button to know the steps to perform the test

To perform the Mann-Whitney test, first rank all the values from low to high without paying any attention to the group to which each value belongs. The smallest number gets a rank of one, and the largest number gets a rank of n, where n is the total number of values in the two groups. If there are ties, continue to rank the values anyway, pretending they are not tied. Then find the average of the ranks for all the identical values and assign that average rank to each of them; continue this till all the whole-number ranks have been used. Next, sort the values back into their two groups; these can now be used for the Mann-Whitney U test. Sum the ranks for the observations from sample one, and then sum the ranks for sample two.
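The ranking and U computation just described can be sketched in Python; the G1 and G2 values here are taken from the worked example that follows.

```python
def mann_whitney_u(g1, g2):
    """Rank the pooled values (average ranks for ties), then compute U."""
    pooled = sorted(g1 + g2)
    ranks = {}  # average 1-based rank for each distinct value
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in g1)
    r2 = sum(ranks[v] for v in g2)
    n1, n2 = len(g1), len(g2)
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u2 = n1 * n2 + n2 * (n2 + 1) / 2 - r2
    return min(u1, u2), r1, r2

# data from the example on the screen
u, r1, r2 = mann_whitney_u([14, 2, 5, 16, 9], [4, 2, 18, 14, 8])
print(u, r1, r2)  # 12.0 28.0 27.0
```

The resulting U of 12 would then be compared against the critical value from a Mann-Whitney table, as the example goes on to show.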

Let us look at an example of the Mann-Whitney test. Suppose you have two sets of data, G1 and G2. The G1 values are 14, 2, 5, 16, and 9, and the G2 values are 4, 2, 18, 14, and 8. Now

combine the G1 and G2 values sort them

in ascending order and mention the group

next rank the groups from 1 to 10 and

check if any values are identical

take an average of the ranks of the

identical values and place it against

the identical values in the final rank column. Hence, the average final rank is 1.5 for ranks 1 and 2; similarly, the average final rank is 7.5 for ranks 7 and 8. Next, calculate R1 and

R2 by adding the ranks of the groups 1

and 2 respectively in this example the

R1 value is 28 and the R2 value is 27. From the given data, 5 is the value of both N1 and N2. The formula for the Mann-Whitney U test is U1 equals N1 multiplied by N2, plus N1 multiplied by N1 plus 1 the whole divided by 2, minus R1; similarly, U2 equals N1 multiplied by N2, plus N2 multiplied by N2 plus 1 the whole divided by 2, minus R2. In this example, the value of U1 is 12

and U2 is 13. now the U value can be

calculated by taking the minimum value

among 12 and 13, which is 12. Look up the Mann-Whitney U test table for N1 equals 5 and N2 equals 5; you will get the critical value of U as 2. To be statistically significant, the obtained U has to be equal to or less than this critical value. Our calculated value of U is 12, which is not less than 2; that means there is no statistically significant difference between the two groups.

In this screen, we will learn

about the Kruskal-Wallis test. The Kruskal-Wallis test is named after William Kruskal and W. Allen Wallis. It is also a non-parametric test, used for testing the source of origin of samples, for example, whether the samples originate from the same distribution. The characteristics of the Kruskal-Wallis test are as follows. It is a one-way analysis of variance by ranks.

this test compares the medians of two or

more samples to find out if the samples

are from different populations

since this test is a non-parametric

method it does not assume the normal

distribution of the residuals unlike the

analogous one-way analysis of variance

for this test the null hypothesis is

that the medians of all groups are equal

and the alternate hypothesis is that at

least one population median of one group

is different from the population median

Let us learn about the Mood's median test in this screen. It is a non-parametric test that is used to test the equality of medians from two or more populations. This test works when the output Y variable is continuous, discrete-ordinal, or discrete-count, while the input X variable is discrete with two or more levels.

click the button to view the steps involved in the Mood's median test

Following are the steps in the Mood's median test. First, find the median of the combined data set. Next, find the number of values in each sample that are greater than the median and form a contingency table. Then find the expected value for each cell, and finally find the chi-square statistic.
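Those steps can be sketched in Python with two tiny made-up samples; the contingency table counts values above versus at-or-below the pooled median, and the chi-square arithmetic is the usual observed-versus-expected sum.

```python
from statistics import median

def moods_median_chi2(samples):
    """Mood's median test sketch: build the above/at-or-below table
    against the pooled median, then compute chi-square on it."""
    pooled = [v for s in samples for v in s]
    m = median(pooled)
    above = [sum(1 for v in s if v > m) for s in samples]
    below = [len(s) - a for s, a in zip(samples, above)]
    table = [above, below]
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand  # expected count
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# hypothetical samples with clearly different medians
chi2 = moods_median_chi2([[1, 2, 3], [4, 5, 6]])
print(chi2)  # 6.0
```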

we will learn about the Friedman test in

this screen the Friedman test is another

form of a non-parametric test it does

not make any assumptions about the

specific shape of the population from

which this sample is drawn and therefore

allows smaller sample data sets to be

analyzed unlike Anova the Friedman test

does not require the data set to be

randomly sampled from normally

distributed populations with equal variances

this test uses a two-tailed hypothesis

test where the null hypothesis is that

the population medians of each treatment

are statistically identical to the rest

In the next screen, we will learn about the one-sample sign test. The one-sample sign test is the simplest of all the non-parametric tests and can be used instead of a one-sample t-test. It is similar to the concept of testing if a coin is fair in showing heads or tails. Here, the null hypothesis, represented as H0, is the hypothesized or assumed median of the population from which the sample is drawn.

click the button to view the steps

involved in the one sample sign test

Following are the steps in a one-sample sign test. First, count the number of positive values; these are the values that are larger than the hypothesized median. Next, count the number of negative values; these are the values that are smaller than the hypothesized median. Finally, test the counts to check if there are significantly more positive values or negative values than expected.

This screen will focus on the one-sample Wilcoxon test. The one-sample Wilcoxon test, also known as the Wilcoxon signed-rank test, is another form of non-parametric test. This test is equivalent to the parametric one-sample t-test and more powerful than the non-parametric one-sample sign test. Let us discuss the characteristics of this test in the next screen. Some of the characteristics of this test are as follows. This test assumes that the sample is randomly taken from a population with a symmetric frequency distribution around the median. The symmetry can be observed with a histogram or by checking if the median and mean are approximately equal. The conclusion of this test is that if the value is at the midpoint, you can accept the null hypothesis; if not, you need to reject the null hypothesis.

Click the button given on the screen to view an example. Let us consider an example: the median customer satisfaction score of an organization has always been 3.7, and the management wants to see if this has changed. They conduct a survey and get the results grouped by the customer type. The conclusion will be as follows: if the median value is 3.7, the null hypothesis H0 can be accepted; if not, the null hypothesis is rejected in favor of the alternate hypothesis. The alpha value will be 0.05.
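A scenario like this can be checked with the one-sample sign test described earlier; the sketch below uses hypothetical survey scores against the 3.7 median, with a two-sided binomial p-value compared against alpha.

```python
from math import comb

def sign_test_p(values, hypothesized_median):
    """Two-sided one-sample sign test: count values above and below the
    hypothesized median (ties dropped) and use Binomial(n, 0.5)."""
    pos = sum(1 for v in values if v > hypothesized_median)
    neg = sum(1 for v in values if v < hypothesized_median)
    n = pos + neg
    k = min(pos, neg)
    # P(X <= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# hypothetical satisfaction scores tested against a median of 3.7
scores = [4.1, 3.9, 4.0, 3.8, 4.2, 3.5, 4.3, 3.9, 4.0, 4.1]
p = sign_test_p(scores, 3.7)
print(p)  # 0.021484375, below alpha of 0.05
```

Here nine of the ten scores sit above 3.7, so the p-value falls below 0.05 and the null hypothesis of a 3.7 median would be rejected.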

this lesson

will focus on the improve phase of the DMAIC methodology. The improve phase comes after the analyze phase. In the analyze phase, the data was analyzed and some patterns were found to identify where the problem lies.

design of experiments or doe consists of

a series of planned and scientific

experiments that test various input

variables and their eventual impact on the output. Design of experiments can be used as a one-stop alternative for analyzing all influencing factors to arrive at a conclusion. DOE is applicable where multiple input variables, known as factors, affect a single response variable; an output variable is the variable which may get affected due to multiple input variables. DOE is preferred over one-factor-at-a-time, or OFAT, experiments because it can also detect interactions between factors. With techniques like blocking, experimental error can be reduced; the trials should be randomized to avoid concluding that a factor is significant when actually the time at which it was measured influenced the response's result. An example of blocking is highlighted on the screen. With techniques like replication, many experiments can be conducted to ensure a reliable estimate of experimental error.

we will understand the concept of design

experiments through an example in the

to understand doe and the main effects

consider the following example

Suppose the objective of the experiment is to achieve uniform part dimensions at a particular target value to reduce variation. The inputs X, or factors, that affect the output are cycle time, mold temperature, holding pressure, holding time, and material type.

the process is the molding process and

the output or the response of the

experiment is the part hardness

the components of the doe in this

example will be described in the next

output response factors levels and

interactions are the components of the

doe in the given example click each

The response variable is the part hardness; it is measured as a result of the experiment and is used to judge the outcome. The factors of this experimental setup are cycle time, mold temperature, holding pressure, holding time, and material type. The settings at which factors can be varied are called levels: the molding temperature can be set at 600 degrees Fahrenheit or 700 degrees Fahrenheit, and the plastic type can be with fillers or without fillers, so the material type has two levels.

interactions refer to the degree to

which factors depend on one another

some experiments evaluate the effect of interactions

in the molding example the interaction

between cycle time and molding

temperature is critical the best level

for time depends on what temperature is set. If the temperature level is higher, the cycle time may have to be decreased to achieve the same response from the process.

let us understand full factorial

experiments through an example

A full factorial experimental design contains all combinations of all levels of all factors. This experimental design ensures no possible treatment combinations get omitted; hence, full factorial designs are often preferred over other designs. The table shown here is for a two-way heat treatment experiment. There are two factors: oven time, X2, and the temperature, X1, at which the material is drawn out of the oven. The output Y of the experiment is the part hardness. Each of the factors has two levels.

this example illustrates the concepts of

main factor and interaction effects

from the table it is clear that without

repetition the experiment will have four

different outcomes based on the changes

each experimental trial here is repeated

to give a total of eight values

Let us now analyze the main effect. An analysis of the means helps in understanding whether the temperature at which the material is drawn creates a difference in the average part hardness; this effect on the output is called the main effect. Analysis of means also tells how a change in oven time creates a difference in the average part hardness; this, too, is a main effect. Analysis of means also explains how the interaction between temperature and time affects the average part hardness; this is known as the interaction effect.

Let us next understand how the main effect is calculated. For calculating the main effect, the means have to be calculated; hence, to calculate the main effect of draw temperature, the mean of the hardness values at each draw temperature is found. The values are populated in the corresponding columns of draw temperatures, and the columns have been labeled accordingly. The value of the mean of A1 is 91, and the mean of A2 is calculated similarly.

Plotting the data on a graph shows that changing draw temperatures changes the average hardness. Similarly, we calculate the mean of the hardness values in B1 and B2; the values are 87 and 86, which are approximately equal. It can be seen that changing the oven time does not noticeably affect the average

now let us understand how the

interaction between temperature and time

affects the average part hardness

To check how draw temperature and oven time interact, the mean values are calculated by averaging the repeated responses; hence, the cell A1 B1 has the mean of the values 90 and 87, and the cell A2 B1 likewise has the mean of its two repeated values, 84 being one of them. After the mean values are calculated,

the graph shows that to reduce

interactions low temperature and high

oven time should be selected to have the

desired output of high hardness also

if low hardness is the desired output

the experimental setup should have high

draw temperature and high oven time

the ideal case is represented by the

parallel lines which give the desired

output based on the main effect without

being affected by the interaction

The parallel lines are shown as dotted lines. The means of the factors are also calculated and shown in the small table. In this screen, we will introduce the concept of the number of experiments in a DOE.

A full factorial experiment without replication on five factors and two levels is two raised to the power of five, which equals 32 trials. A full factorial experiment with one replication on five factors and two levels is 32 plus 32, which equals 64. A half fractional factorial experiment without replication on five factors and two levels is two raised to the power of five minus one, which equals 16. A half fractional factorial experiment with one replication on five factors and two levels is 16 plus 16, which equals 32.

the number of combinations can be

determined using the formula L to the

power of f where L is the number of

levels and F is the number of factors

A half fractional factorial is calculated using the formula L to the power of F minus 1. At three levels and five factors, a full factorial experiment would amount to 243 trials, while a half fractional factorial experiment would amount to 81 trials.
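The trial-count formulas can be checked with a few lines of Python; the `fraction` and `replications` parameters are just names chosen here for the exponent reduction and the number of repeats.

```python
def num_trials(levels, factors, fraction=0, replications=0):
    """L**(f - fraction) base runs, times (1 + replications)."""
    base = levels ** (factors - fraction)
    return base * (1 + replications)

print(num_trials(2, 5))                  # full factorial: 32
print(num_trials(2, 5, replications=1))  # with one replication: 64
print(num_trials(2, 5, fraction=1))      # half fraction: 16
print(num_trials(3, 5))                  # three levels: 243
print(num_trials(3, 5, fraction=1))      # half fraction at three levels: 81
```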

The difference between full factorial and half fractional factorial experiments can be seen from the number of trials. Let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss root cause

we will learn about residuals analysis

while performing the regression analysis

of a linear or non-linear model you will

get a model with the predicted values

some of the data might fit within that

model whereas others may be scattered

the modeled equation predicts one value

for Y at level X however the actual

value for y observed at that level of X

is different from the predicted value

this difference between the observed

value of the dependent variable Y and

the predicted value is called residual

the formula to calculate residual is

observed value minus predicted value
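That formula is one line of code; the sketch below uses hypothetical observed and predicted values to show that well-behaved residuals scatter around zero.

```python
def residuals(observed, predicted):
    """residual = observed value minus predicted value, one per data point"""
    return [o - p for o, p in zip(observed, predicted)]

# hypothetical observed y values and a model's predictions at the same x
obs = [2.0, 4.1, 5.9, 8.2]
pred = [2.1, 4.0, 6.0, 8.1]
res = residuals(obs, pred)
print(res)  # four small values alternating around zero
```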

Residuals are considered to be errors, and each data point has one residual. You can validate the assumptions on random errors: they are independent, exhibit a normal distribution, have a constant variance sigma squared for all the settings of the independent variables, and finally have a mean of zero.

in the next slide we will continue the discussion on residuals analysis

as discussed in the previous screen

while performing any regression analysis

you will observe that not all the data

fits into the linear model as the linear

regression model is not always

appropriate for the data therefore you

should assess the appropriateness of the

model by defining residuals and examining residual plots

if all assumptions are satisfied the

residuals should randomly vary around

zero and the spread of the residuals

should be the same throughout the plot; that is, no systematic patterns should be present. Remember, in residuals analysis both the sum and the mean of the residuals are equal to zero.

Residuals and diagnostic statistics allow you to identify points that either fit the model poorly, have a strong influence on the estimated parameters, or have a high leverage.

it is helpful to interpret these

Diagnostics together to understand any

potential problems with the model

in the next screen we will learn about

data transformation using the Box Cox

method the available data must be

transformed when it does not exhibit the

normal distribution box and Cox in the

year 1964 developed a procedure for

estimating the best transformation to normality within the family of power transformations. It works by taking the current Y data and raising it to a power known as lambda. The formula for the transformation of Y is represented as y-asterisk equals y to the power of lambda, minus 1, the whole divided by lambda. This formula is used where the value of lambda is not zero; if the value of lambda is zero, you can use the natural logarithm to transform Y.
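The two-case formula is a few lines of Python. Note the table shown later on the screen uses the simpler y-to-the-lambda form, which omits the minus-1-over-lambda scaling but orders the data the same way.

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform: (y**lam - 1) / lam, or ln(y) when lam is 0.
    Defined only for y > 0."""
    if y <= 0:
        raise ValueError("Box-Cox requires positive data")
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1) / lam

print(box_cox(4, 2))       # (16 - 1) / 2 = 7.5
print(box_cox(math.e, 0))  # natural log case, close to 1.0
```

In practice, lambda is chosen by searching for the value that makes the transformed data most normal.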

the family of power Transformations can

be used for the following for converting

a data set so that parametric statistics

can be used. Here, lambda is a parameter to be estimated from the data. The transformation works for any continuous data greater than zero; it will not work when the values are less than or equal to zero. Note that when the data are transformed, the specification limits must be transformed as well.

in the next screen we will continue the

discussion on data transformation using

the table on the screen shows how the

data can be transformed using Lambda the

First Column lists down values of Lambda

and the second column shows the transformed values

If the value of lambda is negative 2, after the transformation it becomes y to the power of negative 2, which is 1 divided by y squared. Similarly, if the value of lambda is negative 1, after the transformation it becomes y to the power of negative 1, which is 1 divided by y, and so on. Note that you will use a different formula when the value of lambda is zero, wherein you will take the natural log of y.

similarly transform values are also

shown on the screen. Click the example button.

let us look at an example of how data

transformation is done using box Cox

The difference between the original data and the data transformed using Box-Cox is shown. Figure 1 shows the original data plotted on a histogram; here you can see that the data is not normally distributed. In Figure 2, the Box-Cox procedure is applied to the original data and it is transformed; you can see that the data in the second figure is more normal than the original.

let us learn about process input and

output variables in the following screen

process Improvement has a few

prerequisites before a process can be

improved it must first be measured to

assess the level of improvement required

the first step is to know the input

variables and output variables and check

the SIPOC map and the cause and effect matrix

there are many ways to measure the key

process variables metrics such as the

percent defective operation costs

elapsed time backlog quantity and

documentation errors can be used

Once the critical variables are identified, cause and effect tools are used to establish the relationship between the variables.

a cause and effect Matrix is shown on

the screen the key process input

variables have been listed vertically

and the key process output variables horizontally

for each of the output variables a

prioritization number is assigned

numbers which reflect the effect of each

input variable on the output variable

The process output priority is multiplied with each input variable's effect rating to arrive at the result for each cell, and the values are added to determine the result for each input variable.

for process input variable 1 the output

variables are 3 4 and 7 with a

prioritization value of 4 7 and 11

respectively therefore multiplying the

output variables with their

corresponding prioritization numbers and

adding those gives 117 which is around

33 percent of the total effect
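That weighted-score arithmetic for one input variable can be sketched in a couple of lines of Python, using the effect ratings and priorities from the example.

```python
def input_variable_score(effects, priorities):
    """Multiply each effect rating by the output's priority and sum."""
    return sum(e * p for e, p in zip(effects, priorities))

# from the example: effect ratings 3, 4, 7 with priorities 4, 7, 11
score = input_variable_score([3, 4, 7], [4, 7, 11])
print(score)  # 117
```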

the process input variables results are

compared to each other to determine

which input variable has the greatest

effect on the output variables

click the cause and effect Matrix

template button to view another template

a sample of the cause effect Matrix or

the CE Matrix gives the correlation

between input and output variables

in this screen we will discuss the steps

The steps for updating the cause and effect matrix are as follows. List the input variables vertically under the column Process Inputs. List the output variables horizontally under the numbers 1 to 15; these output variables are important from the customer's perspective. One can refer to either the QFD or the CTQ tree to know the key output variables. Rank the output variables based on customer priority; these numbers can also be obtained from the customer. The input variables with the highest scores become the point of focus in the improvement effort.

another method to establish the cause

effect relation is the cause and effect

diagram this is explained in detail in

the following screen the cause and

effect diagram is used to find the root

cause and the potential solutions to a problem

A cause and effect diagram breaks down a problem into bite-sized pieces and displays the possible causes in a graphical manner. It is also known as the fishbone diagram or the 4M diagram. It is commonly used to examine effects or problems, to find out the possible causes, and to indicate possible areas for data collection. The steps involved in the cause and effect analysis are as follows. All the possible causes of the problem or effect selected for analysis are brainstormed. The major causes are classified under headings such as materials, methods, manpower, and machines. The cause and effect diagram is drawn with the problem at the point of the central axis line and the causes on the branches.

the next screen illustrates the cause

and effect diagram with the help of an

the diagram shows the cause and effect

diagram for the possible causes of

solder defects on a reflow soldering line

this diagram helps in collecting data

and discovering the root cause

during brainstorming the group looked at

all the major causes and then grouped

them under the main headings

under materials causes like types of

solder paste components and the

components packaging used are considered. The major causes under methods are technology and preventive maintenance. Similarly, operator and schedule are grouped under manpower, while tools and oven are grouped under machines.

the next screen will discuss another

root cause analysis tool in detail

The 5 Whys is one of the tools used to analyze the root cause of a problem. The responsibility of the root cause analysis lies with the 5 Whys analysis team. The technical experts have a great responsibility, as the conclusion will be drawn from the way the drill-down of the problem is carried out.

The 5 Whys is a very simple tool, as it poses the why question to every answer till the root cause is obtained. It is important to know that the 5 Whys tool does not restrict the interrogation to five questions; why can be asked as many times as required till the root cause of the problem is reached. It can be used along with the cause and effect diagram.

the following screen will explain the

The process for the 5 Whys technique is as follows. Identify the problem and emphasize the problem statement. Arrange for a brainstorming session with the team, including subject matter experts, process owners, and team members. Explain the purpose and the problem. Analyze scenarios, working backwards from the problem. Ask why for the answers obtained until the root cause is reached; normally, reasons like insufficient resources and time become the root causes. If the drill-down in brainstorming is carried out in the right direction, it is often found that the root cause is related to the process; therefore, the occurrence of a problem is often due to the process and not an individual or a person.


In the next screen, we will understand the concept of the 5 Whys technique with the

help of an example. In this topic, we will discuss

lean tools in detail. Let us learn about lean techniques in the following screen. The eight lean techniques are kaizen, poka-yoke, 5S, just in time, kanban, jidoka, takt time, and heijunka. Click each to learn more.

Kaizen or continuous Improvement is the

building block of all lean production

methods. The kaizen philosophy implies that small incremental changes, routinely applied and sustained over a long period of time, result in significant improvements. The second technique is poka-yoke. It is

also known as mistake proofing it is

good to do it right the first time and

even better to make it impossible to do

it wrong the first time. The prompt received to save a Word document before closing it without saving is an example of mistake proofing. 5S is a set of five Japanese words which translate to sort, set in order, shine, standardize, and sustain. This is a simple and yet powerful lean tool. The sort principle refers to sorting items according to a rule; the rule could be frequency of use or time of use.

After sorting, the objects are set in order: a place for everything is defined, and everything is placed accordingly. Cleaning of the area refers to the shine step. The fourth step, standardize, requires the formation and circulation of a set of written standards. The last step refers to sustaining the process by following the standards set. 5S is useful as a framework to create an organized workplace.

Just in time, or JIT, is another lean technique. Its philosophy is producing the necessary units in the necessary quantity at the necessary time. As an item is removed from a shelf of a supermarket, the system confirms it and automatically sends a note for replenishment. This kind of technique can be used in an organization to prevent the accumulation of inventory.

the fifth technique is known as kanban

which means signboard in Japanese

Kanban utilizes visual display cards to signal the movement of material between the steps of a process. This is one of the examples of visual management. The next technique is jidoka. It means automation with a human touch and is sometimes known as autonomation. Jidoka

implements supervisory function in the

production line and stops the process as

soon as a defect is encountered the

process does not start again till the

root cause of the defect is eliminated

Takt time is the maximum time in which the customer demand needs to be met. For example, a customer needs 100 products and the company has 420 minutes of available time. Takt time equals time available divided by demand; in this case the company has a maximum of 4.2 minutes per product. This will be the target for the production line.
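The takt time arithmetic above is a single division; a quick check with the same numbers from the example:

```python
# Takt time = available production time / customer demand.
available_minutes = 420   # minutes of production time available
demand_units = 100        # products the customer needs

takt_time = available_minutes / demand_units
print(takt_time)  # 4.2 minutes per product, the target pace for the line
```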

The final technique is heijunka, which means production leveling and smoothing. It is a technique to reduce the waste occurring due to fluctuating customer demand.

Let us understand the concept of cycle time reduction in this screen. Cycle time reduction refers to the reduction in the time taken for a process to complete. Implementing lean techniques reduces cycle time and releases resources faster. Low cycle time increases productivity: lean techniques release resources early, achieving more production with the same resources. Internal and external waste is reduced and the operational process is simplified, with a decrease in product cost. All these factors help in satisfying the customer and staying ahead in the competition.
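To make "low cycle time increases productivity" concrete, here is a rough back-of-the-envelope calculation; the cycle times are illustrative numbers, not figures from the course:

```python
# With the same available time, a shorter cycle time yields more units.
available_minutes = 420                  # one shift of production time

units_before = available_minutes // 10   # 10-minute cycle time
units_after = available_minutes // 6     # 6-minute cycle time after lean

print(units_before, units_after)         # 42 vs 70 units from the same shift
```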

The following screen describes the concept of cycle time reduction through an example. The changes brought by implementing lean techniques on an existing process are illustrated in the given diagram. Things to be noticed are: the number of operators used, work allocation to the operators, the path or movement in the process, and the flow of the process. Notice the changes brought about by implementing lean techniques on the old process. First, the path followed by the material in between the processes is considerably reduced; this decreases the cycle time. Second, the number of operators is reduced to three when compared to five. Operator 1 can now work on process 1; similarly, operator 2 can work on process 2 and process 3. Hence there is increased productivity of the operators, and the remaining skilled operators can be used in some other process or system.

the next screen will introduce the

concept of Kaizen and Kaizen Blitz

Kaizen means good change in Japanese. Kaizen is a continuous improvement method to improve the functions of an organization. The improvements could be in process, productivity, quality, technology, and safety; it brings in small incremental changes. Kaizen Blitz is known as a Kaizen event or Kaizen workshop. If the event is tightly defined and the scope is evident for implementation, processes can be easily changed and improved, and teams could improve problem-solving methods in structured workshops over a short time scale.

The next screen will provide the differences between Kaizen and Kaizen Blitz. The differences between Kaizen and Kaizen Blitz are as follows. Kaizen is a method that brings continuous improvement in the organization, while Kaizen Blitz is a workshop or an event that brings in rapid improvement. Kaizen brings in small incremental changes in the organization; there are no major changes. Kaizen Blitz is applied when a rapid solution is required. The Kaizen method follows a step-by-step process: it standardizes, measures, and compares the process with the requirement before improving it. Kaizen Blitz plans for the event, executes it, and arrives at a solution. All the people of the organization are involved in Kaizen, whereas Kaizen Blitz is led by the top management and others are invited to participate; the decision-making lies with the upper management. In Kaizen the process is standardized, and measurements are regularly collected and compared before the decision is taken; this relatively delays the process. In Kaizen Blitz, decisions are taken soon and the process change is wrapped up in a short time, typically a few days. Kaizen is a continuous improvement method, whereas Kaizen Blitz is part of that method. Kaizen follows PDCA, in essence plan, do, check, and act, for the improvement. Kaizen Blitz uses PDCA for execution, where the events are planned, conducted, decided, implemented, and followed up.

The following screen will elaborate on the concepts of Kaizen and Kaizen Blitz with examples. Kaizen and Kaizen Blitz are practiced in many organizations across the world. The examples of the Kaizen and Kaizen Blitz methods are shown here; click each tab to learn more. The Toyota production system is known for Kaizen practices. In Toyota, if any issue arises in the production line, the line personnel cease the production until the issue is resolved; once the solution is implemented, the team resumes production. A wood window company in the state of Iowa, U.S., uses the Kaizen Blitz method to redesign their shop floor and replace expensive non-flexible automation with low-cost, highly flexible cellular manufacturing. Eliminating scrap, reorganizing work areas, and reducing inventory are some of the examples of quick implementation through Kaizen Blitz. The term lean refers to creating more value for customers with fewer resources.

It means reducing unwanted activities or processes that do not add value to the product or service from the customer's point of view. The lean philosophy is to provide perfect value to the customer through a perfect value creation process that has zero waste. While the ultimate goal is to achieve zero waste, you may not always get that in the first couple of tries; however, you will achieve minimum waste and continue to move towards zero waste eventually. Hence lean is the path towards zero waste. Lean is about optimizing the process from beginning to end, eliminating non-value-adding activities, or NVAs, and increasing flow to ensure that parts and services are provided to customers more quickly.

If quality is the word to describe Six Sigma, then speed is the word to describe lean. Let's understand the importance of lean. There are many benefits of lean, and some of them are reduced cost, reduced cycle time, more throughput, and increased customer satisfaction. Despite all of these benefits, lean is not implemented by most organizations due to the misconception that it is only suited to manufacturing. The reason for this misconception is the beginning of lean: it began and grew in popularity in the manufacturing areas. In recent years one can notice more applications of lean in other areas such as healthcare and the transactional domain. However, the truth is that lean concepts can be applied in any business and in any process.

On the next screen, let's discuss how lean and Six Sigma work together. Lean and Six Sigma are two different principles or methodologies that combine to form one powerful continuous improvement methodology. They have various overlapping goals toward the improvement, with the aim of creating the most efficient system. Though the approaches are different, the methods complement each other. Lean Six Sigma takes the power and rigor of the Six Sigma methodology and combines it with lean concepts, leading to faster results, better quality, and improved efficiency.

Let's look at the differences between lean and Six Sigma. Lean focuses on efficiency by identifying value from the customer's point of view, removing unnecessary steps in the process, and improving process speed or velocity. On the other hand, Six Sigma focuses on effectiveness with the help of breakthrough processes, identifying root causes, and reduction in variation. Therefore, when Six Sigma is combined with lean, it is possible to achieve business excellence. So remember: lean is about speed with a focus on efficiency, Six Sigma is about quality with a focus on effectiveness, and lean Six Sigma brings the best of both. To yield a better result, first implement lean to streamline the process; this helps to understand the chronic problems and the ways to handle them quickly. Once the problem is identified, use the Six Sigma methodology to analyze the issues and provide business improvement.

In other words, lean is used to reduce the waste and Six Sigma is used to reduce the variation. Hey there, learners! Check out our certified lean Six Sigma Green Belt certification training course and earn a Green Belt certification; to learn more about this course, you can click the course link in the description box below. The Six Sigma process is known as DMAIC, which comprises five phases: define, measure, analyze, improve, and control.

These phases are the roadmap to problem solving and improving our processes. The effectiveness of the Six Sigma method is derived from its structure: each phase has an overarching objective and specific deliverables that need to be completed, which helps us achieve the objectives. The purpose of the define phase is to document the problem, the desired outcome, goals, and deliverables. The purpose of the measure phase is to obtain baseline process performance levels and quantify the problem. The focus of the analyze phase is to identify the key root causes of process variation and defects. The purpose of the improve phase is to develop, test, and implement solutions. The goal of the control phase is to monitor the key factors and maintain the gains. You learned the aspects of the DMAIC process; now we'll look at the tools used in each phase. The list of tools corresponds to the DMAIC phase, and the use or application of these tools gives the expected deliverables in each DMAIC phase. For a Green Belt, some of the tools listed are not required in every Six Sigma Green Belt project.

These tools give us an insight into the problem and lead us toward the real issues in our processes; with more experience, you are likely to know the tools you need for your projects.

In the define phase we use SIPOC, voice of the customer or VOC, critical to quality or CTQ, quality function deployment or QFD, failure modes and effects analysis, known as FMEA, and the cause and effect (C&E) matrix. In the measure phase we use measurement system analysis or MSA, control charts, process capability, and normality plots. In the analyze phase we use simple linear regression or SLR, Pareto charts, the fishbone diagram, failure modes and effects analysis (FMEA), multivariate charts, and hypothesis testing. In the improve phase we use brainstorming, piloting, and also failure modes and effects analysis. In the last phase, control, we use control charts, a control plan, and measurement system analysis.

A Pareto chart is a histogram ordered by the frequency of occurrence of events. It is also known as the 80/20 rule or the vital few. It helps project teams to focus on the issues which cause the highest number of defects or complaints. To explain further, the given chart plots all the causes for defects in a product or service; the values are represented in descending order by bars, and the cumulative total by a line. The Pareto chart emphasizes that 80 percent of the effects come from 20 percent of the causes. Thus a Pareto chart narrows the scope of the project or problem solving by identifying the major causes affecting quality. Pareto charts are useful only when the required data is available; if data is not available, then other tools such as brainstorming and multi-voting should be used to find the causes.
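The vital-few selection that a Pareto chart supports can be sketched as: sort the causes by count, accumulate, and stop once roughly 80 percent of the defects are covered. The defect categories and counts below are made up for illustration.

```python
defects = {"scratches": 48, "misalignment": 27, "wrong color": 12,
           "missing screw": 8, "dents": 5}

total = sum(defects.values())            # 100 defects in all
cumulative = 0
vital_few = []
# Bars of a Pareto chart: causes in descending order of frequency.
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.80:       # cumulative line crosses 80%
        break

print(vital_few)  # ['scratches', 'misalignment', 'wrong color']
```

Here three of the five causes account for 87 percent of the defects, so an improvement project would focus on those first.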

Network diagrams are one of the tools used by the project manager for project planning. They are also sometimes referred to as arrow diagrams because they use arrows to connect activities and represent the interdependencies between activities of the project. There are some assumptions that need to be made while forming the network diagram. The first assumption is that before a new activity begins, all pending activities have been completed. The second assumption is that all arrows point in one direction; this means that the direction of the arrow represents the sequence that activities need to follow. The last assumption is that a network diagram must start from a single event and end with a single event; there cannot be multiple start and end points in the network diagram.

The critical path method, also known as CPM, is an important tool used by project managers to monitor the progress of the project and to ensure that the project is completed on time. The critical path for a project is the longest sequence of tasks on the network diagram; the critical path in the given network diagram is highlighted in orange. The critical path is characterized by zero slack for all tasks on the sequence.
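Finding the critical path is a longest-path computation over the activity network. A minimal sketch on a hypothetical five-activity network follows; the durations and precedences are invented for illustration.

```python
durations = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 3}   # task durations (days)
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

memo = {}  # cache of results per node

def longest_path(node):
    """Return (total duration, task chain) of the longest path from node."""
    if node not in memo:
        best = max((longest_path(n) for n in successors[node]),
                   default=(0, []))
        memo[node] = (durations[node] + best[0], [node] + best[1])
    return memo[node]

total, path = longest_path("A")
print(total, path)  # 12 ['A', 'C', 'D', 'E']
```

Every task on the returned chain has zero slack: delaying any of A, C, D, or E delays the whole project, while B has two days of slack (its chain totals 7 against C's 9).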

This means that the smallest delay in any of the tasks on the critical path will cause a delay in the overall project. This makes it very important for the project manager to closely monitor the tasks on the critical path and ensure that the tasks go smoothly. If needed, the project manager can divert resources from other tasks that are not on the critical path to tasks on the critical path to ensure that the project is not delayed. When a project manager removes resources from such tasks, he needs to ensure that the task does not become a critical path task because of the reduced number of resources. During the execution of the project, the critical path can easily shift because of multiple factors and hence needs to be constantly monitored by the project manager. A complex project can also have more than one critical path.

The organizational benefits of Six Sigma are as follows. A Six Sigma process eliminates the root cause of problems. Sometimes the solution is creating robust products and services that mitigate the impact of a variable input or output on a customer's experience. For example, many electrical utility systems have voltage variability up to, and sometimes exceeding, a 10 percent deviation from the nominal value; thus most electrical products are built to tolerate the variability, drawing more amperage without damage to any components or the product. Using Six Sigma reduces variation in a process and thereby reduces waste in the process. It ensures customer satisfaction and provides process standardization. Rework is substantially reduced because one gets it right the very first time. Further, Six Sigma addresses the key issues that enable organizations to gain advantage and become world leaders in their respective fields. Ultimately, the whole Six Sigma process is to satisfy customers and achieve organizational goals.

Let us understand how Six Sigma works in an organization. Six Sigma is successful because of the following reasons. Six Sigma is a management strategy: it creates an environment where the management supports Six Sigma as a business strategy and not as a standalone approach or a program to satisfy some requirement. Six Sigma mainly emphasizes the DMAIC method of problem solving. The focused teams are assigned well-defined projects that impact the organization's bottom line, with customer satisfaction and increased quality being the outcomes.

Six Sigma also requires extensive use of statistical methods. And that's it for this Six Sigma boot camp. If you liked this session, then like, share, and subscribe. If you have any questions, you can drop them in the comment section below. Until next time, stay safe and keep learning.

Hi there! If you liked this video, subscribe to the Simplilearn YouTube channel and click here to watch similar videos; to nerd up and get certified, click here.