
Six Sigma Full Course 2023 | Complete Six Sigma Course in 7 Hours | Six Sigma Training | Simplilearn (English transcript)

In today's business environment, organizations are always looking for ways to improve their processes, reduce costs and increase customer satisfaction. Six Sigma is a data-driven approach that has proven to be effective in achieving these goals, and if you are interested in learning about Six Sigma, then this boot camp is the perfect place for you. In this boot camp we will take you through the basics of Six Sigma: what it is, how it works and what you can expect to learn. So whether you are a business owner, a manager or someone who wants to improve their problem-solving skills, stick around and let's dive into the world of Six Sigma.

Lean Six Sigma consists of four levels: yellow belt, green belt, black belt and master black belt. Yellow belts have basic knowledge and contribute to improvement projects, while green belts lead smaller-scale initiatives. Black belts are project leaders responsible for significant improvements, and master black belts have the highest expertise, leading multiple projects and providing strategic guidance.

Today we have brought you Simplilearn's Certified Lean Six Sigma Green Belt certification training course, which provides comprehensive training in Lean Six Sigma principles and techniques. This course aims to equip participants with the knowledge and skills to lead smaller-scale improvement projects within their organization. It covers key topics such as the DMAIC methodology, statistical analysis, process mapping, root cause analysis and project management. The training includes real-life case studies, hands-on exercises and interactive learning sessions to ensure practical understanding and application of Lean Six Sigma. This course is designed for professionals looking to enhance their problem-solving and process improvement capabilities and earn a recognized Green Belt certification. To learn more about this course, you can click the link in the description box below. But don't believe us; check out what our learners have to say.

Even after working for 18 years, I believe you are never too old to learn new skills and acquire knowledge to excel further and keep up the growth. That's why I decided to upskill myself and hone my skills to improve my performance in my current organization. The course not only helped me acquire skills and get certified but also gave me a decent salary hike. Hey, I am Aditya Canaria. I live in Pune with my family. I am currently working as a quality manager at Source System Global Services. I recently got certified in the Postgraduate Program in Lean Six Sigma in collaboration with UMass Amherst. I am a curious learner, and continuous improvement is my motto. This isn't the first time I chose Simplilearn to upskill; I took the project management course earlier, which motivated me to go further. I have been working in the quality management domain for 18 years now. When I was assigned the position of quality manager, I decided to take up the course in Lean Six Sigma. I wanted to make sure that I was fully updated with all the recent case studies to master myself in the field of quality management. The course boosted my knowledge with case studies from Harvard Business Publishing and a capstone project from KPMG in India that provide real-world Lean Six Sigma exposure. One of the best attractions of the course is the well-structured course content, which covers all the industry-relevant models and projects. The concepts were easier to learn thanks to the pedagogy of the faculty; everything they taught was practical and experience-driven. Even the support team made the learning smoother because of their instant responses and great problem solving. The certification has already made me stand out from the crowd and has brought me closer to my goal of excelling in the field of quality management. In my leisure time I enjoy cooking delicious food and trying my hand at new recipes. I also love photography and enjoy clicking pictures with my DSLR. Learning keeps me thriving; it keeps me growing professionally, because growth is the only constant that leads to success.

Now let's check out what we have in store in this Six Sigma boot camp. First we have Six Sigma in brief. Then we will discuss what Lean Six Sigma is. Then we will go over Six Sigma in detail. Then we will discuss the 5S methodology. After that we will go into the green belt training. Post that, we will draw out some comparisons between Six Sigma and Lean Six Sigma. Then we have Six Sigma tools, and finally we will wrap up this session with the benefits of Six Sigma.

Imagine you've been tasked with a really important project at work. The company you're working for produces luxury cars. The production numbers are going down, and fewer cars are getting manufactured each day. There also seems to be an issue with the quality of the windshield wipers that go on these cars. The questions you are faced with: is there a way for the company to step up production from one thousand to two thousand cars per day? And is there a way to find out what's causing the drop in wiper quality? There is: Six Sigma. Six Sigma gives you the tools and techniques to determine why the manufacturing process has slowed down, how you can eliminate the delays, improve the process and fix further issues along the way. The concept was introduced in 1986 by Bill Smith while working for Motorola, and since then Six Sigma has seen worldwide adoption. Six Sigma aims to reduce the time, defects and variability experienced by processes in an organization. Thanks to Six Sigma, you can produce defect-free output 99.99966 percent of the time, allowing only 3.4 errors per one million opportunities. Six Sigma also increases customer loyalty towards the brand and improves employee morale, leading to higher productivity.
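To make those numbers concrete, here is a minimal Python sketch of the DPMO (defects per million opportunities) arithmetic behind that claim; the sample defect counts and function names are our own illustrative assumptions, not figures from the video.

# Minimal sketch: relating defect counts to DPMO and process yield.
# The sample numbers below are hypothetical, chosen only for illustration.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def yield_pct(dpmo_value: float) -> float:
    """Share of defect-free opportunities, as a percentage."""
    return (1 - dpmo_value / 1_000_000) * 100

# Suppose 17 defective wiper installations across 50,000 cars,
# with 2 wipers (two opportunities for a defect) per car.
d = dpmo(defects=17, units=50_000, opportunities_per_unit=2)
print(f"DPMO: {d:.1f}")                           # 170.0
print(f"Yield: {yield_pct(d):.4f}%")              # 99.9830%

# A Six Sigma process allows at most 3.4 DPMO:
print(f"Six Sigma yield: {yield_pct(3.4):.5f}%")  # 99.99966%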

Six Sigma has two major methodologies, DMAIC and DMADV. Let's look at the first one. DMAIC is an acronym for Define, Measure, Analyze, Improve and Control. Let's have a look at each phase individually and how it relates to your earlier problem.

In the Define phase, you determine what issues you're facing, what your opportunities for improvement are and what the customer requires of you. Here you look at the process as a whole and determine the issues with the manufacturing process: in this case, finding out why the cars had varying windshield wiper quality and how to manufacture more cars.

In the Measure phase, you determine how the process is performing currently, in its unaltered state. You determine the current number of cars that are manufactured in a day. In the current scenario, 1,000 cars are manufactured in a day, and each of these cars is outfitted with a pair of windshield wipers by one of the 30 machines used. Some of the metrics measured are how many cars are produced in a day, the time taken to assemble a car, how many windshield wipers were attached in a day, the time it takes to do so, defects detected from each machine on assembly completion, and so on.

Following this, in the Analyze phase, you determine what caused the defect or variation. On analyzing previous data, you find out that one of the machines that installed the windshield wipers was not performing as well as it was supposed to. Production was also taking longer, since the car chassis was being moved across the different locations more slowly: cranes had to individually pick and drop the frame, because the wheels were attached to the car only in the last stage.

Next, in the Improve phase, you make changes to the manufacturing process and ensure the defects are addressed. You replace the faulty machine that installed the windshield wipers with another one. You also find a way to save time by attaching the wheels to the frame in the initial stages of the manufacturing process, unlike how it was done earlier; now the car can be moved across the assembly area faster.

And finally, in the Control phase, you make adjustments to control the new processes and future performance. Based on the changes made, the company was able to reduce production time and manufacture about 2,000 cars a day with a higher quality of output.
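As a small illustration of how the Analyze phase might spot the faulty machine from measured defect data, here is a sketch in Python; all machine names and defect counts are hypothetical, not from the video.

# Minimal sketch of the Analyze phase: find which of the 30 wiper
# machines produces the most defects. All counts are made up.

defects_per_machine = {f"machine_{i:02d}": 3 for i in range(1, 31)}
defects_per_machine["machine_17"] = 48  # hypothetical outlier

worst = max(defects_per_machine, key=defects_per_machine.get)
total = sum(defects_per_machine.values())

print(f"Worst performer: {worst} "
      f"({defects_per_machine[worst]} of {total} defects)")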

DMAIC is one of the most commonly used methodologies in the world. It focuses on improving the existing products of the organization. The second methodology is DMADV, which is short for Define, Measure, Analyze, Design and Verify. It is used when the company has to create a new product or service from scratch. It is also called DFSS, or Design for Six Sigma.

Let's take the scenario where the company decides to build a new model, a sports car. In the Define phase, you define the requirements of the customer. Based on inputs from customers, historical data and industry research, you determine what you need to ensure your car becomes a success. The data collected indicates customers are drawn to cars which can achieve more than 150 miles per hour; customers are also more inclined towards cars which have V6 engines and an aerodynamic frame.

Then, in the Measure phase, you use the customer's requirements to create a specification. This specification helps define the product in a measurable way, so that data can be collected and compared with the specific requirements. Some of the major specifications that you focus on are the top speed, the engine type and the type of frame.

In the Analyze phase, you analyze the product to determine whether there are better ways to achieve the desired results. Areas of improvement are determined and tested based on the analysis of the prototype created in this phase. You find that the product satisfies just about all of the customer requirements except the top speed, so research begins on an aluminum alloy that could possibly meet the speed requirement of the customer.

Following this is the Design phase. Based on the learnings from the Analyze phase, the new process or product is designed: revisions are made to the model, and the car is manufactured with the new material. The analysis is repeated based on the new design. You also bring in a focus group and see how they receive it, and based on their feedback further changes are made.

And finally, in the Verify phase, you check whether the end result meets or exceeds customer requirements. Once you launch your brand-new sports car, you collect customer feedback and incorporate it into future designs. And guess what: your customers are loving the new design. And that is DMADV for you.

Six Sigma has also found success in a number of different industries: petrochemicals, healthcare, banking, government and software are some of the industries that have utilized the concepts of Six Sigma to achieve their business goals.

Another commonly used methodology adopted by companies around the world is lean. Lean is a methodology that aims to remove any part of the process that does not bring value to the customer. It means doing more with less, while doing it better. The philosophy behind lean comes from the Japanese manufacturing industry, where it was pioneered at Toyota. Since then, services and manufacturing organizations across the world have adopted it in their businesses. But what if you could have the best of both worlds, a combination of both Six Sigma and lean? That's Lean Six Sigma.

Imagine you're the manager of a supermarket chain. You've noticed that two things need your immediate attention. The first issue is how to handle the different kinds of waste that you generate. The next one requires you to address the supply chain issues at the supermarket, which are causing delays to the morning deliveries, leading to customer dissatisfaction and attrition. These problems can be solved by incorporating two of the most popular quality management methodologies in the world, lean and Six Sigma: one famous for its ability to handle waste, and the other for its ability to remove defects. But what if there was a methodology that combined the concepts of both Six Sigma and lean, one that could solve all your problems? Well, there is: Lean Six Sigma.

Before we dive into Lean Six Sigma, let's take a closer look at its parent methodologies. First off, lean is a methodology that focuses on providing value to the customer by eliminating waste, improving continuously and reducing cycle time. Lean and Six Sigma both aim to handle waste, but what is this waste? Waste is any step or action in the process that a user does not gain any value from; in short, things that customers don't want to pay for. Why would a consumer want to pay extra for the additional truck that was required to deliver milk to the supermarket just because the other one broke down? This waste can be divided into eight categories; let's have a look at each of them.

1. Transportation: this waste refers to the excess movement of people, tools, inventory, equipment and other components of a process, beyond what is required.
2. Inventory: this waste occurs when you hold more products or materials than required. It can cause damage and defects to products or materials, greater time for completion, inefficient allocation of capital and so on.
3. Motion: this refers to wasted time and movement of people, equipment or machinery. This could be sifting through inventory, double data entry and so on.
4. Waiting: this can be time wasted waiting on information, instructions, equipment or people.
5. Overproduction: this is the waste created by producing more products than required.
6. Overprocessing: this refers to more work, more components or more steps in a product or service than required.
7. Defects: this is the waste originating from a product or service that fails to meet customer expectations.
8. Skills: this waste refers to the waste of human potential: underutilizing capabilities and delegating tasks to people with inadequate training.

For years now, many systems have emerged that use the lean methodology to identify and handle the different kinds of waste. Some of the more popular and effective ones are JIT, 5S and kanban. JIT, or just in time, focuses on reducing the amount of time the production system takes to provide an output, as well as the response time from suppliers to customers. 5S is another methodology; it focuses on cleanliness and organization while improving profits and efficiency. Kanban is another popular methodology to achieve lean: it is a visual method to manage tasks, and it enables users to visualize the workflow to identify issues in the process. These methodologies help in cutting down waste in production and are often used together to maximize results.
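To give a feel for the kanban idea just described, here is a toy Python sketch of a board with a work-in-progress limit; the column names, tasks and limit are invented for illustration and are not part of any formal kanban standard.

# Toy kanban board: tasks move left to right, and the "doing" column
# enforces a work-in-progress (WIP) limit so bottlenecks become visible.

board = {"todo": ["restock milk", "fix freezer", "audit shelf labels"],
         "doing": [],
         "done": []}
WIP_LIMIT = 2

def pull_task() -> None:
    """Pull the next task into 'doing' only if the WIP limit allows it."""
    if board["todo"] and len(board["doing"]) < WIP_LIMIT:
        board["doing"].append(board["todo"].pop(0))

def finish_task(task: str) -> None:
    board["doing"].remove(task)
    board["done"].append(task)

pull_task(); pull_task(); pull_task()  # third pull is blocked by the WIP limit
finish_task("restock milk")
pull_task()                            # capacity freed, so the pull succeeds
print(board)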

So that's the first problem solved. Now let's have a look at how you can improve the supermarket's supply chain efficiency. For that, let's have a look at the other part of Lean Six Sigma.

Six Sigma is a set of tools and techniques that are used for process improvement and removing defects. Let's see how Six Sigma makes that possible. Six Sigma has two major methodologies; you can learn more about them by checking out our Six Sigma in Nine Minutes video by clicking on the top right corner. Let's have a closer look at DMAIC, since Lean Six Sigma uses the DMAIC methodology of Six Sigma. DMAIC is an acronym for Define, Measure, Analyze, Improve, Control. It is used to improve existing products and processes so that they can meet the customer's requirements.

In the Define phase, you determine what the goals of the project are. In this case, you want to reduce the amount of time taken to deliver milk from the warehouse to the supermarket, so that it is stocked on the supermarket shelves in time. In the Measure phase, you measure the performance of the current, unaltered process. The milk truck leaves at 7:30 in the morning and can take one of three routes: A, B and C. Route A is currently the preferred one, as it takes only 60 minutes to reach the supermarket, compared to routes B and C, which take 70 and 80 minutes respectively. In the Analyze phase, you find out why the deliveries take as long as they do. Since routes B and C were school bus routes, moving the starting time forward by one hour, to 6:30 instead of 7:30, meant avoiding the traffic: routes B and C now take 40 to 45 minutes to reach the supermarket. Route A still takes the milk truck one hour to get to the supermarket, even when the truck leaves at 6:30 am.
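Here is a minimal sketch of the Measure and Analyze data from this example, using the route times quoted above (with routes B and C taken at their 45-minute worst case); the data layout is our own.

# Route times in minutes from the milk-delivery example.
baseline = {"A": 60, "B": 70, "C": 80}      # leaving at 7:30 am
early_start = {"A": 60, "B": 45, "C": 45}   # leaving at 6:30 am (worst case)

best_route = min(early_start, key=early_start.get)
saving = baseline["A"] - early_start[best_route]
print(f"Best route with the 6:30 start: {best_route}, "
      f"saving {saving} minutes per delivery")  # 15 minutes vs the old plan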

In the Improve phase, performance can be improved by addressing and eliminating the root causes. Now that you've realized that advancing the milk pickup by an hour and changing the route to route B can save time, you change the process accordingly, providing your workers with ample time to stock the milk onto the shelves before the morning rush. And finally, in the Control phase, you make regular adjustments to control the new processes and future performance: you continue to monitor the delivery times and try out alternate routes to continually improve the process. This process change led to reduced man-hours and costs, enhanced sales and improved customer satisfaction.

The Lean Six Sigma methodology offers many such benefits to businesses. Let's take a look at some of them: an increase in profits, standardized and simplified processes, reduced errors and value to customers. And that is Lean Six Sigma for you.

Six Sigma is a set of tools and techniques that have helped several companies around the world achieve business success. Hi guys, I'm from Simplilearn, and let's get started with our introduction to Six Sigma.

Now let's understand this better with an example. Let's talk about how things were before Six Sigma was introduced. Here, Jenny and James are having a conversation with each other. Jenny is James's manager, and she's not happy at all. She says James is in a lot of trouble. This is because she found out that the customers were unhappy with the organization's service and the operational costs were way too high, and as manager, James had to make sure that this did not happen. Now let's have a look at the same scenario in the present day. Here we have Jenny congratulating James: she's very impressed with his work. But James says it's all thanks to the Six Sigma methodology. So Jenny asks; she wants to know more about Six Sigma. To understand Six Sigma, here's what you need to know: firstly, we'll have to understand what Six Sigma is, what its advantages are, some of its methodologies, what the different roles in a Six Sigma team are, what lean is, what a lean process is and what Lean Six Sigma is. So now let's get started with understanding what exactly Six Sigma is.

The Six Sigma methodology makes sure to find, as well as eliminate, any sort of defect or variation that could be affecting your product, service or process. This methodology is statistics-based, data-driven and focused on continuous improvement; this means that there's no end goal on the horizon, and there is always another goal to reach. There are three core ideologies behind Six Sigma. The first one states that for any business to be successful, continuous efforts are required so that you can achieve stable as well as predictable process results. The second ideology states that in any business or manufacturing process there are certain characteristics that can be defined, measured, analyzed and controlled. The final ideology says that, along with the rest of the organization, the top-level management plays a very important role in making sure that quality is sustained.

Now let's talk about the advantages of Six Sigma. Six Sigma can help produce a road map, or a path, through which you can easily find and reduce any sort of organizational risk and reduce operational costs. Another advantage is that it helps improve the efficiency of the process, making sure that it works in a timely manner. It decreases defects, improves the overall tracking and monitoring process and ensures that the products are aligned with the company's policies. It is also reported that it greatly helps improve customer as well as vendor satisfaction. It helps improve cash flow and ensures that the products comply with the regulations of the organization.

Now let me tell you about the process of Six Sigma. Six Sigma projects basically follow one of two methodologies: DMAIC and DMADV. Let's talk about DMAIC in detail. It is short for Define, Measure, Analyze, Improve and Control. This is one of the most commonly used methodologies in the world, and it is commonly used by companies when they have to fix or improve an already existing product or process that does not meet the company's standards. Now let's have a look at the process. The first phase is the Define phase: in this phase, you define the problem that the customers are facing, you find out where the process falls short and you understand what the customers require of you. The second phase is the Measure phase: in this phase, you identify how well the process is doing in its current, unaltered state. In the Analyze phase, you process the data that you get from the Measure phase and determine what exactly is the cause of the delay or variation. In the Improve phase, you start by making small changes to the business process and make sure that the problem you identified earlier is being taken care of. And finally, in the Control phase, you control the new process so that it doesn't go wrong, and use the same knowledge for future processes. Now let's have a look at DMADV.

DMADV is short for Define, Measure, Analyze, Design and Verify. It is also commonly known as DFSS, or Design for Six Sigma. This is commonly used by companies around the world when they have a new product that needs to be created all the way from scratch. In the first phase, which is the Define phase, you define what the goal of the project is and what the customers require of you. In the Measure phase, you measure and determine what the customer needs and how they respond to your products. In the Analyze phase, you perform analyses to determine how you can improve your product or service so it can better serve your customers. In the Design phase, you set up process details and make optimizations to the design to make sure your customer is satisfied. And finally, in the Verify phase, you check how well the design is working out and how well it meets the customer's needs.

Now, before we go on, let's talk about how Six Sigma was used in reference to the earlier example, the situation that James was facing. A survey conducted by the organization James was working for indicated that the customers weren't very happy with the organization, so they decided to fix that with the help of Six Sigma. They decided that the DMAIC methodology would be best suited to solve their problem, so let's have a look at what they did. Firstly, in the Define phase, they used a tool called the voice of the customer. This tool represented the needs as well as the requirements of the customer. It showed that the customers expected prompt delivery, the correct product selection and a knowledgeable distribution team from the company. Now on to the Measure phase. The company wanted to know why the customers didn't like them, so they performed some data collection. From there, they found out that they took 56 percent longer than other companies to deliver their product, so they decided to reduce the amount of time between order entry and the delivery of the product. Now, in the Analyze phase: here they knew what the issue was, but they wanted to know what exactly made their product delivery so slow. Why were the customers receiving the products late? So they performed some analysis. Their analysis showed the possible causes: inaccurate sales plans, issues with their safety stock, issues with their vendors' delivery performance and falling behind on the manufacturing schedule. Further analysis also indicated that most of their sales, almost 80 percent, came from 30 percent of their products. The issue was that they didn't have enough safety stock to satisfy the customers who required that 30 percent of products. Now on to the Improve phase. Now that they knew what was causing their problem, they wanted to solve it. They began to have monthly reviews and tried to make sure that their in-demand products stayed in demand. Another thing that they wanted to focus on was making sure that they could order and provide the customer with the products that they wanted. And finally, on to the Control phase. They began to set up plans so that they could monitor the sales of that 30 percent of products that were selling the most. Each year they would review how well a product was selling and replace it if it had fallen out of favor.

Now let me tell you what a Six Sigma team consists of; let's talk about the roles in a Six Sigma team.

First up is level seven. These are individuals who are at the novice level. These individuals don't know in great detail what the project is, but they have a basic understanding of the principles and the methodology behind the program. They usually support with smaller projects and smaller issues, but these individuals form the foundation for the people who decide where the program is going.

Now we're at level six. These are individuals who have a yellow belt certification. They're core members of the Six Sigma team who have an understanding of how the basic metrics work and how they can perform some sort of improvement. They have their own areas of expertise and are required to determine certain processes that need to improve; at the same time, they're also in charge of smaller improvement projects.

Now, level five: these are people who have a green belt certification. These individuals are usually part-time professionals who have a number of different duties to fulfill. They focus on smaller Six Sigma projects, and they are usually involved with gathering data, performing some sort of experiment and analyzing information. They also assist with black belt projects.

Now we're at level four. These are individuals who have a black belt certification. They are usually team leaders of a Six Sigma project. They complete four to six projects a year and are experts in the principles, methodologies and lean concepts. Thanks to their understanding of statistical experimental design, they can also understand the hidden reasons behind why a particular product failed.

Now we're at level three. These are individuals who have a master black belt certification: individuals who are experts when it comes to the methodologies that are employed in Six Sigma. Their main emphasis is to coach, train and certify black belts. They also work with other Six Sigma leaders to ensure a company's goals are met.

Now, level two: these individuals are called champions. They work really closely with the executives and usually hold a senior or middle executive level role. They have a clear understanding of what exactly the company's vision and mission are. They also understand metrics, so that they can set up a Six Sigma project that lines up with the company's goals. They're responsible for removing any sort of roadblock that could hamper the success of a project.

And finally, we are at level one: these are the executives. These individuals represent the highest level when it comes to a Six Sigma team. They have training as well as experience through which they can set up Six Sigma projects that clearly line up with the company's goals. Their main emphasis is to ensure that the project is able to add value to the organization and, at the end of the day, is successful.

Now, this is when Jenny interjects: she wants to know about lean. James tells her that lean, just like Six Sigma, is another methodology. So what exactly is lean? Lean is a methodology with a very important ideology: to make sure that there is continuous optimization of the processes and an elimination of waste. So what's waste? Waste is basically any part of the process that the customer doesn't want to pay for; it is a part of the process that does not add any value to the customer. Now, coming back to lean, here are some of its characteristics. Whenever decisions are being made in a lean team, the main emphasis is on understanding how they add value to the customer. Every member in a lean team has a clear understanding of what exactly the goals of the organization are. Lean also encourages employees to push for further success even if the organization is in a good place or is already doing well. It promotes cross-functional collaboration and communication. Lean focuses on answering the difficult or complex questions rather than employing short-term fixes. And with lean, you can easily prepare for issues that can come up in the future, or improvise in unexpected circumstances. So let's talk about how lean and Six Sigma are different from one another.

lean and sex Sigma are different from

one another the lean methodolog­y aims to

reduce waste it does so by analyzing the

workflow it also emphasizes on

minimizing resource usage and improving

customer value now let's talk about Six

Sigma the aim of Six Sigma is to provide

near perfect results it wants to reduce

costs and improve customer satisfacti­on

basically both of them are moving

towards the same goal to reduce the

amount of waste and to create efficient

processes now let's talk about the

Now let's talk about the process of lean. There are five different steps. Let's start with the first one, identifying value: you need to identify value by determining what exactly the problem is that you're trying to solve for the customer. The second step is to map your value stream. You need to map the workflow of your company, focusing on the different actions and the people that are involved with the process, and you need to be able to identify which parts of the process add value and which ones don't. The third step is to create a flow. You need to break up your work into smaller silos and visualize the workflow, so that you can easily identify problems that might show up later. The next step is to establish pull. You need to set up a system through which products are created only when there is a demand or a requirement for them; through this you can optimize resource capacity. And finally, we're at the fifth step, which is continuous improvement. You need to ensure all your employees, at all levels, are involved in the continuous improvement of the process.

So what exactly is Lean Six Sigma? What if you could combine the best of both worlds? The combination of the Six Sigma and lean methodologies led to the creation of Lean Six Sigma. Lean Six Sigma is a methodology that aims to solve problems, remove any form of waste or inefficiency and improve the working conditions of employees, to make sure that they can serve the customers better. It is a combination of the tools, methods and principles that are employed in lean and Six Sigma. Let's talk about some of its advantages. It aims to provide customers with a better experience by streamlining the process, and with efficient workflows it aims to drive higher results. It can reduce costs, remove waste and prevent defects. It can help the organization handle day-to-day problems. The decreased lead times help increase capacity and profitability. And finally, it helps with people development and improving the morale of the organization.

So here we have Jim, carrying a stack of sheets in his very messy cubicle. Suddenly, Jim realizes a few things: he's lost a really important file. But that's not all; the bad news just keeps on coming. He remembers that his messy desk and area are the talk of the entire office, and thanks to this he doesn't even want to get any work done. A little angry and annoyed with himself, Jim sits down in resignation. That's when John walks in. John asks him what's wrong; Jim tells him that he's in a mess and doesn't know what to do. That's when John tells him about his solution: the 5S methodology.

Now let me tell you everything I'm going to teach you. We'll be talking about what the 5S methodology is, the benefits of the 5S methodology and the process of 5S, which consists of five steps. So now let's have a look at what exactly the 5S methodology is. The 5S methodology is a popular workplace organization methodology that was introduced in Japan and was first implemented by Toyota Motor Corporation. The primary reason it was developed was to make just-in-time manufacturing possible. So what is just in time? It's a form of manufacturing that aims to produce only the amount of product needed, when it's needed. The 5S methodology basically focuses on cleanliness and organization, while at the same time focusing on maximizing efficiency and profit. You could say that 5S provides a framework whose main focus is visual management, which is a way to visually communicate a number of things, like performance standards or warnings, in a way that requires little to no training to interpret, so that you can emphasize using a mindset as well as tools to ensure efficiency and value are being created. What you're doing in this methodology is observing, analyzing, collaborating, searching for waste and then removing it. So what makes this methodology so special? Let's find out.

First off, we have optimized organization. Thanks to 5S, every component that you need for work is kept in a way that's easily accessible and easy to use, so no more time is wasted on looking for items, deciding how they can be used or even returning them. Then we have improved efficiency. The 5S methodology enables companies to focus on ways to eliminate waste while enhancing the company's bottom line; this is made possible by improving the company's products and services, which by extension lowers costs. Next we have bigger storage density. Since 5S is mainly focused on removing unnecessary items from the work area, a lot of free space opens up for efficient usage. Then there's increased safety: now that all the unnecessary clutter and waste is removed from the work area, it's much safer for the employee to work in. And finally, we have improved workspace morale. Since the workspace is a lot cleaner, safer and more organized, the morale among employees is also greatly improved.

Now let's have a look at the process of the 5S methodology. The process consists of five Japanese terms and their translations; each of them starts with the letter S, hence the name 5S. The steps are sort, set in order, shine, standardize and sustain.

Now for the first step, sort. Sort, or seiri in Japanese, can be translated to tidiness. This step involves sorting through materials, keeping only the essential items needed to complete tasks. The aim here is to remove clutter and clear the workspace of things that don't belong there or aren't critical to the work. In this step, you clean the work area: by carefully analyzing the workspace, you identify any items that you don't need. From these removed items, you decide which ones should be discarded and which ones should be recycled; other items may need to be returned to where they were taken from, and there might be some items you're not sure about. These items need to be red tagged. The items whose ownership isn't clear or which cannot be identified are red tagged; by red tagging, what you're doing is attaching visible information, like where and when the item was found. The red tagged items are arranged in a particular location, and it's likely that these red tag items could stay in lost and found for a long time. Here are a few things you can do with them: after 30 days of staying in lost and found, supervisors from other departments can claim the items for themselves; if they stay undisturbed for 10 more days, they can be thrown away, sold or recycled; and if these items are expected to be useful at some point, they can be stored in lost and found with a specific plan.
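The red-tag timing rules just described can be captured in a small sketch; the function name and status strings below are our own labels, not 5S terminology.

# Minimal sketch of the red-tag timing rules described above.

def red_tag_status(days_in_lost_and_found: int, claimed: bool = False) -> str:
    if claimed:
        return "claimed by another department"
    if days_in_lost_and_found <= 30:
        return "waiting in lost and found"
    if days_in_lost_and_found <= 40:
        return "open for other departments to claim"
    return "throw away, sell or recycle (unless stored with a plan)"

for days in (10, 35, 60):
    print(days, "->", red_tag_status(days))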

Now let's have a look at the next step, set in order. Set in order, or seiton, translates to orderliness. Here the aim is to organize, which means items need to be easy to find, use and return. So, first off, you need to create a 5S map. The 5S map is a floor plan or a diagram that provides an overview of the work area, process or station. It also shows the location of the different components you might need for work, with travel paths and details on how they're related, and it can include a description of the work that's done in that particular area. It also needs to be updated periodically. Then the plan needs to be communicated. Once storage locations are assigned, they are labeled; with this, employees will be able to easily identify what's inside these storage locations. Floor marking tapes can also be used to mark work areas, movement lanes and storage supplies.

Now let's have a look at the next step, shine. Shine, or seiso, translates to cleanliness. Here the aim is to remove the dust that accumulates under the clutter, while ensuring it doesn't return. You need to perform routine cleaning: every week, every area within the work area needs to be cleaned. Employees need to be responsible for the cleanliness of their workspace and the equipment they use. With this, they'll be able to quickly recognize problems that might arise, difficult situations can be understood easily and items that are out of place can be recognized quickly.

let's have a look at our next step

standardiz­e standardiz­e or say kitsu

translates to standardiz­ation in this

step long-term changes are Incorporat­ed

what's being done and by whom is being

written down new practices are also

being incorporat­ed into the work

procedure first things are written down

decisions once written down can be

included as part of the standards

related to a particular area for example

the 5S map created and the red tagging

of items can all be incorporat­ed as

standards based on changing business

needs these standards can be changed as

well then you need to use tools for

standardiz­ing communicat­ion is really

important in this step decisions made

about the work practices need to be

communicat­ed with the employees this can

be done with fires checklists job cycle

charts and procedure labels in science

And now let's have a look at our last step, sustain. Sustain, or shitsuke, translates to discipline. Here the focus is on continuous improvement: the decisions that were made in the previous steps need to be repeated to form a continuous cycle. In this step, you need to ensure that 5S is applied repeatedly as part of routine work. Some ways to sustain the program are management support, department tours, performance evaluations and so on. And that's it; that's the 5S methodology. Jim decides to incorporate it as soon as possible, and after applying 5S, Jim's workspace is a lot cleaner and better organized. Jim thanks John for helping with that.

Hey there, learners! Check out our Certified Lean Six Sigma Green Belt certification training course and earn a Green Belt certification. To learn more about this course, you can click the course link in the description box below.

What do you see on the screen? Well, it depends on your perception. If you are a pessimist, you might think the glass is half empty. If you're an optimist, you might think the glass is half full. If you are a realist, you might think the glass is full, with water and air. However, the Six Sigma practitioner sees a glass that is bigger than it needs to be, assuming the current water level is what the customer desires. As we begin our journey to learn more about Six Sigma, it is important to know that applying Six Sigma is a way of seeing and analyzing the processes around you. We will talk more about what Six Sigma means and how this viewpoint impacts organizations in the screens to come.

Let's begin this lesson by defining quality. Quality is defined as meeting the requirements of the customer. Well, what features do you look for when buying something? What facilities do you want in your house or apartment when buying one? What do you expect from a premium chocolate? The answers to these questions will tell you what quality means to you in each case.

Here's a snapshot of the quality journey, with a few key milestones. In the 1930s, the idea of statistical process control was conceived by Walter Shewhart to monitor and control a process using statistical methods; this was used extensively during World War II to quickly expand industrial capabilities. In the 1960s, quality circles were formulated: a quality circle is a self-improvement workers' group that performs similar work and meets regularly to identify, analyze and solve work-related problems. In 1987, the International Organization for Standardization designed ISO 9000, a set of international standards on quality management and quality assurance for organizations to implement. Also in 1987, the Baldrige award criteria were developed by the U.S. Congress to raise awareness of quality management systems and recognize U.S. companies that have successfully implemented them. In 1988, the concept of benchmarking was introduced: benchmarking is an improvement process in which an organization measures its performance against the best organizations in its field, determines how such performance levels were achieved and uses the information to improve its own performance. During the 1990s, the balanced scorecard, or BSC, was introduced. It is a management tool that helps managers at all levels align activities to the strategy of their organization and monitor the results obtained in their key areas. In 1996, the concept of re-engineering was introduced; re-engineering is also known as business process re-engineering, which involves restructuring an entire organization and its processes.

The term Six Sigma has different meanings or implications depending on the context. Sigma is a Greek letter used in the statistical world to represent a measure of variation, the standard deviation. The Six Sigma process is an important method for applying quality principles and techniques. Six Sigma is also a business strategy to change company culture, with top management commitment. And the sigma level is a measure of performance for a business process or product. So when you say Six Sigma, there are several definitions that are all correct. Quality is defined as the degree of excellence of a product or service and its conformance to customer requirements. Taking a process to Six Sigma level ensures the quality of the product, with an increase in profits as the primary goal. In other words, Six Sigma signifies near-perfect quality.

In 1986, Bill Smith and Mikel Harry at Motorola started the Six Sigma initiative to improve performance. In 1995, Jack Welch initiated Six Sigma at General Electric to improve the entire business system, and GE became a global showcase for it. In 1998, AlliedSignal saved half a billion dollars with the use of Six Sigma. In 2000, General Electric saved 2 billion dollars annually with the use of Six Sigma, and in 2001, Motorola saved 16 billion dollars cumulatively with the use of Six Sigma.

Six Sigma is a business methodology that employs a customer-centric, fact-based approach to reduce process variation. This helps us dramatically improve customer satisfaction, increase shareholder value and strengthen the business. The methodology is designed to make companies rethink the way they do business and generate improvements. It eliminates the root causes of problems, creates robust products and services, reduces process variation and waste, ensures customer satisfaction, achieves process standardization, reduces rework by getting it right the first time, addresses key business requirements, helps gain competitive advantage and helps achieve organizational goals.

The following are the three reasons why organizations are successful with Six Sigma. First, it is a proven, systematic problem-solving methodology that follows a tried and effective process known as DMAIC, which improves productivity and efficiency by eliminating defects. Second, Six Sigma is customer-focused: this methodology ensures businesses align their projects to customers' needs, which allows organizations to produce better products and services and improves customer satisfaction. Lastly, Six Sigma achieves long-term improvements based on data-driven statistical analysis used to prioritize root causes.

The Six Sigma process is known as DMAIC. DMAIC comprises five phases; these phases are the roadmap to solving problems and improving our processes. The effectiveness of the Six Sigma method is derived from its structure: each phase has an overarching objective and specific deliverables that need to be completed, which helps us achieve the objectives. The purpose of the Define phase is to document the problem, the desired outcome, goals and deliverables. The purpose of the Measure phase is to obtain baseline process performance levels and quantify the problem. The focus of the Analyze phase is to identify the key root causes of process variation and defects. The purpose of the Improve phase is to develop, test and implement solutions. The goal of the Control phase is to monitor the key factors and maintain the gains. You have learned the aspects of the DMAIC process; now we'll look at the tools used in each phase. The list of tools corresponds to the DMAIC phases, and the use or application of these tools gives the expected deliverables in each DMAIC phase for a green belt. Some of the tools listed are not required in every Six Sigma green belt project. These tools give us an insight into the problem and lead us toward the real issues in our processes; with more experience, you are likely to know the tools you need for your projects.

In the Define phase, we use SIPOC, voice of the customer (VOC), critical to quality (CTQ), quality function deployment (QFD), failure modes and effects analysis (FMEA) and the cause-and-effect (C&E) matrix. In the Measure phase, we use measurement system analysis (MSA), control charts, process capability and normality plots. In the Analyze phase, we use simple linear regression (SLR), Pareto charts, fishbone diagrams, multi-vari charts and hypothesis testing. In the Improve phase, we use brainstorming, piloting and, again, failure modes and effects analysis. In the last phase, Control, we use control charts, a control plan and measurement system analysis. This lesson provides an overview of the Certified Six Sigma Green Belt, or CSSGB, course.

A process is a series of steps designed to produce a product and/or service according to the requirements of the customer. A process mainly consists of four parts: input, process steps, output and feedback. Input is something put into a process, or expended in its operation, to achieve an output; for example, man, material, machine and method. Output is the final product delivered to an internal or external customer. It is important to understand that if the output of a process is an input for another process, the latter process is the customer of the former. Each input can be classified as controllable (represented as C), non-controllable (represented as NC), noise (represented as N) or critical. The most important aspect of the process, as can be inferred from the diagram, is that any change in the inputs causes a change in the output; therefore, y = f(x). Feedback helps in process control because it suggests changes to the inputs.
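Here is a minimal sketch of the y = f(x) idea: the output y is a function of the process inputs x, some controllable and some noise. The particular function and numbers are invented purely for illustration.

import random

# Minimal sketch of y = f(x): the output depends on the inputs.
# The controllable input (C) here is a machine temperature and the
# noise input (N) is ambient humidity; the formula is made up.

def process_output(temperature_c: float, humidity_pct: float) -> float:
    return 0.8 * temperature_c - 0.2 * humidity_pct  # an invented quality score

random.seed(0)
for temp in (180, 200):                # sweep the controllable input
    humidity = random.uniform(30, 60)  # noise input we cannot set
    y = process_output(temp, humidity)
    print(f"temp={temp}  humidity={humidity:.1f}  ->  y={y:.1f}")
# Changing the x's changes y: to control the output, control the inputs.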

Let us learn about the process of Six Sigma and understand how Six Sigma works in practice. Six Sigma is successful because of the following reasons. Six Sigma is a management strategy: it creates an environment where the management supports Six Sigma as a business strategy, and not as a standalone approach or a program to satisfy some requirement. Six Sigma mainly emphasizes the DMAIC approach: focused teams are assigned well-defined projects that directly influence the organization's bottom line, with customer satisfaction and increased quality being the goals. Six Sigma also requires extensive use of statistical tools. The next screen will focus on some key concepts.

Let us look at the sigma level chart. As discussed earlier, Six Sigma quality means 3.4 defects in one million opportunities, or a process with a 99.99966 percent yield. The sigma level chart given on the screen shows the values for the other sigma levels; please take a look at the values carefully (a commonly published version of the chart is reproduced below). Let us understand the benefits of Six Sigma in the next screen.
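These are the commonly published sigma-to-DPMO values, computed with the conventional 1.5 sigma shift (the on-screen chart itself is not part of the transcript):

Sigma level   DPMO       Yield
1             691,462    30.85%
2             308,538    69.15%
3             66,807     93.32%
4             6,210      99.38%
5             233        99.977%
6             3.4        99.99966%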

The organizational benefits of Six Sigma are as follows. A Six Sigma process eliminates the root causes of problems. Sometimes the solution is creating robust products and services that mitigate the impact of a variable input or output on a customer's experience. For example, many electrical utility systems have voltage variability up to, and sometimes exceeding, a 10 percent deviation from the nominal value; thus most electrical products are built to tolerate the variability, drawing more amperage without damage to any component. Using Six Sigma reduces variation in a process and thereby reduces waste in it. It ensures customer satisfaction and provides process standardization; rework is substantially reduced because one gets it right the very first time. Further, Six Sigma addresses the key business requirements. Six Sigma can also be used by organizations to gain competitive advantage and become world leaders in their respective fields. Ultimately, the purpose of the whole Six Sigma process is to satisfy customers and achieve organizational goals.

In the next screen, let us understand Six Sigma and quality. Taking a process to Six Sigma level ensures that the quality of the product is maintained. The primary goal of improved quality is increased profits for the organization. In very simple terms, quality is defined as the degree of excellence of a product or a service and its conformance to the customer's requirements. If the customer is satisfied with the product or service, then the product or service is of the required quality. Let us look at the history of quality in the next screen.

In the mid-1930s, statistical process control (SPC) was developed by Walter Shewhart and used extensively during World War II to quickly expand the US's industrial capabilities. SPC is the application of statistical techniques to control any process. Walter Shewhart's work on the common causes of variation and the special (assignable) causes of variation has been used proactively in all Six Sigma projects. The approach to quality has varied from time to time. In the 1960s there were quality circles, which originated in Japan; the movement was started by Kaoru Ishikawa. Quality circles were self-improvement groups composed of a small number of employees belonging to a single department, and they brought in improvements with little or no help from the top management. In 1987, ISO 9000 was introduced. ISO stands for the International Organization for Standardization; ISO 9000 is a set of international standards on quality management and quality assurance to help organizations implement quality management systems. The Baldrige award, now known as the Malcolm Baldrige National Quality Award, was developed by the U.S. Congress in 1987 to raise awareness of quality management systems, as well as to recognize and award U.S. companies that have successfully implemented them. In 1988, another quality approach was developed, known as benchmarking. In this approach, an organization measures its performance against the best organizations in its field and determines how such performance levels were achieved, and the information is used by the organization to improve itself. Then in the 1990s there was the balanced scorecard approach: a management tool that helps managers at all levels monitor their results in their key areas, so that one metric is not optimized while another is ignored. During the years 1996 through 1997, an approach known as re-engineering was developed. This approach involved the restructuring of an entire organization and its processes; integrating various functional tasks into cross-functional processes is one example of re-engineering. In the next screen, let us find out about the quality gurus and their contribution to the field of quality.

Let us focus on Six Sigma and the business system in this screen. Business systems are designed to implement a process. A business system ensures that process inputs are at the right place and at the right time, so that each step of the process has the resources it needs. A business system design should be responsible for collecting and analyzing data, so that continual improvement of its processes, products and services is ensured. A business system has processes, sub-processes and steps as its subsets. Human resources, manufacturing and marketing are some examples of processes in a business system. Six Sigma improves a business system by continuously removing the defects in its processes and also by sustaining the changes. A defective item is any product or service that a customer would reject. A customer can be the user of the ultimate product or service, or can be the next process downstream in the business system. Let us learn about Six Sigma projects and organizational goals in the following screens.

Let us understand the structure of the Six Sigma team. There are in total five levels in the Six Sigma structure. The first level consists of the top executives of the organization. These people lead change and provide direction, as they own the vision of the organization. For any improvement initiative to work, it is important that the top management of the organization be actively involved in its propagation; the top executives own the Six Sigma initiatives. Next in the level are Six Sigma champions. They identify and scope projects, develop deployment and strategy and support cultural change. They also identify and coach master black belts; three to four master black belts work under every champion. Six Sigma master black belts train and coach black belts, green belts and various functional leaders of the organization. They usually have at least three to four black belts under them. The fourth level in the Six Sigma structure is Six Sigma black belts. They apply strategies to specific projects and lead and direct teams to execute projects. Finally, there are Six Sigma green belts. They support the black belts by participating in project teams. Green belts play a dual role: they work on the project and perform the day-to-day jobs related to their work area. In the next screen, we will understand the balanced scorecard.

While financial accounting is useful to track physical assets, the balanced scorecard, or BSC, offers a more holistic approach to strategy implementation and performance measurement by taking into account perspectives other than the financial one for an organization. Traditional strategic activities that concentrate only on financial metrics are not sufficient to predict future performance, and they are not sufficient to implement and control the strategic plan. The BSC translates the organizational strategy into actionable objectives that can be met on an everyday basis and provides a framework for performance measurement. The balanced scorecard helps clarify the organizational vision and mission into workable action items to be carried out and measured. It also provides feedback on both internal business processes and external outcomes; by doing so, it enables continuous improvement in strategic organizational goals. The balanced scorecard works by integrating the organizational strategy with a limited number of key metrics from four major areas of performance: finance, customer relations, internal processes, and learning and growth. Many organizations in the world use balanced scorecard approaches, and the number is increasing every day. In the next screen, we will describe the balanced scorecard framework and learn about developing a balanced scorecard.

while applying the balance scorecard in

an organizati­on care must be taken to

account for interactio­ns between

different perspectiv­es or strategic

business units and avoid optimizing the

results of one at the expense of another

to outline the strategy a top-down

approach is followed by determinin­g the

Strategic objectives measures targets

and initiative­s for each perspectiv­e

the Strategic objectives refer to the

strategy to be achieved in that

perspectiv­e three or four leading

objectives are agreed upon the progress

towards strategic objectives is assessed

using specific measures these measures

should be closely related to the actual

performanc­e drivers this enables

effectivel­y evaluating progress

high-level metrics are linked to lower

level operationa­l measures the target

values for each measure are set the

initiatives required to achieve the

targets are identified as already mentioned this exercise is

carried out for all the perspectiv­es

finally the scorecard is integrated into

the organization's management system in the next screen let us understand the

change in the approach to the balanced

scorecard from the four-box model of BSC

in earlier approaches to the balanced

scorecard the perspectiv­es were

presented in a four Box model this kind

of scorecard was more a comprehens­ive

glance at the key performanc­e indicators

or metrics in different perspectiv­es

however the key performanc­e indicators

or metrics of different perspectives were

reviewed independently of each other which

led to a silo-based approach and lack of integration

however modern scorecards place the

focus on the interrelat­ions between the

objectives and metrics of different

perspectiv­es and how they support each

a well-desig­ned balanced scorecard

recognizes the influence of one

perspectiv­e on another and the effect of

these interactio­ns on organizati­onal

strategy to achieve the objectives in

one perspectiv­e it is necessary to

achieve the objectives in another thus the

perspectives form a chain of cause and effect

a map of interlinke­d objectives from

each perspectiv­e is created these

objectives represent the performance

effectiven­ess of strategy implementa­tion

this is called a strategy map

the function of a strategy map is to

outline what the organizati­on wants to

accomplish and how it plans to

accomplish it the strategy map is one

page view of how the organizati­on can

create value for example financial

success is dependent on giving customers

what they want which in turn depends on

the internal processes and learning and

growth at an individual level in the

next screen we will look at the impact

of the balanced scorecard on the

organization the balanced scorecard and strategy map

Force managers to consider cause and

effect relationsh­ips which leads to

better identifica­tion of key drivers and

a more rounded approach to strategic

management enabling the organization to improve in the following ways

being a one-page document a strategy map

facilitate­s understand­ing at all levels

an organizati­on is successful in meeting

its objectives only when everyone understands and works toward them

the balanced scorecard also forces an

organizati­on to measure what really

matters and manage informatio­n better so

that the quality of decision making is improved

creating performanc­e reports against a

balanced scorecard allows for a

structured approach to reporting

progress it also enables organizati­ons

to create reports and dashboards to

communicat­e performanc­e transparen­tly

as expected a balanced scorecard helps

an organizati­on to better align itself

and its processes to the Strategic goals

objectives of the BSC can be cascaded

into each business unit to enable that

unit to work toward the common

organizati­onal goal all the activities

of the organizati­on such as budgeting or

risk management are automatica­lly

aligned to the Strategic objectives to

conclude the balanced scorecard is a

simple and powerful tool that when

implemented correctly equips an

organization to execute its strategy let us proceed to the next topic of this lesson

in this topic we will look at what lean

is and how lean is applied to a process

let us start with the lean Concepts in

the next screen let us look at the

process issues in this screen lean

focuses on three major issues in a

process known by their Japanese names

muda refers to non-value adding work

mura represents unevenness and muri represents overburden

together they represent the key aspects

in lean let us look at the types of muda

there are seven types of muda or waste

as per lean principles let us understand each type

overproduction this refers to producing more

than is required for example a customer

needed 10 products and 12 were delivered

inventory in simple words this refers to stock

the term inventory includes finished

goods semi-finis­hed Goods raw materials

supplies kept in waiting and some of the

work in progress for example test

scripts waiting to be executed by the

testing team defects repairs rejects any

product or service deemed unusable by

the customer or any effort to make it

usable to the original customer or a new

customer for example errors found in the

source code of a payroll module by

quality control team motion a waste due

to poor ergonomics of the workplace for

example finance and account team sit on

the first floor but invoices to

customers are printed on the ground

floor causing unnecessary personnel movement

over processing additional process on a

product or service to remove unnecessar­y

attribute or feature is over processing

for example a customer needs a bottle

and you deliver a bottle with extra

plastic casing a customer needs an ABEC-3

bearing and your process is tuned to

produce more precise ABEC-7 bearings

taking more time for something the customer did not ask for

waiting when a part waits for processing

or the operator waits for work the

wastage of waiting occurs for example

improper scheduling of Staff members

transport when the product moves

unnecessarily in the process without

adding any value for example a product is finished and

yet it travels 10 kilometers to the

warehouse before it gets shipped to the

customer another example an electronic

form is transferre­d to 12 people some of

them seeing the form more than once that

is the form is traveling over the same

path repeatedly next we will look at lean wastes other

than the seven types of waste discussed

some lean experts talk about additional

areas of waste under utilized skills

skills are underutili­zed when the

workforce has capabiliti­es that are not

being fully used toward productive

efforts people are assigned to jobs in

underperfo­rming processes automation of

poorly performing process improving a

process that should be eliminated if

possible for example the product returns

department or product discounts process

asymmetry in processes that should be

eliminated for example two signatures to

approve a cost reduction and six

signatures to reverse a cost reduction

that created higher costs in other areas

in the next screen we will look at an

exercise on identifyin­g the waste type

we will cover each step of the lean

process in the next few screens in this

screen we will learn about the first

step identify value to implement lean to

a process it is important to find out

what the customer wants once this is

done the process should be evaluated to

identify what it needs to possess to deliver that value

the next screen will focus on the next

step of the lean process value stream mapping

in this screen we will discuss the

difference­s between push and pull

processes an organizati­on can adopt

either of these processes depending on

contrary to a pull process in a push

process the first step is to forecast

the demand for a product or service the

production line then begins to fill this

demand and produced parts are stocked in

anticipati­on of customer demand for

example a garments manufactur­er produces

200 shirts based on expected demand and

then waits for customer orders for them

note that the demand is expected and not

actual discounts offered to customers by

big retailers are examples of the push process

if the garment company adopts a pull

process instead it would start making

the shirts only after receiving a

confirmed demand from customers note

that although the pull approach seems

better it is not applicable to all

situations for example a pharmacy uses a push process

in the next screen we will learn about the theory of constraints or TOC

let us look at an example of the TOC

the three sub-proces­ses in the packing

process are coding or printing filling

and sealing the data for the three

sub-proces­ses are observed and collected

as number of units produced in an hour

coding or printing is 900 units per hour

filling is 720 units per hour and

how can you implement the TOC

let us build the TOC map for this

example the first step in the TOC

is to identify the constraint looking at the data the

output per hour from the filling process

is 720 units this is the constraint in the system

in the Second Step the constraint is

exploited by analyzing the performanc­e

using data to break the constraint a

repair and maintenanc­e Personnel can be

assigned to maintain the filling machine

in the third step the other fixes in the

repair and maintenanc­e function are made

as subordinat­e decisions to the one

taken in step two in this example carry

out the maintenanc­e of the filling

machine in the fourth step the

constraint is elevated by implementi­ng

the decisions in this example remove the

damages from the filling machine

the next step is to go back to step one

and identify the next system constraint

as per the data collected after

implementa­tion of the first cycle of the

TOC sealing can be identified as the next system constraint

let us now analyze the data before and

after TOC implementation in this example

the number of units produced per hour

before implementing the TOC the coding or

printing process was 900 units filling

process was 720 units and sealing

after implementi­ng the TOC the number of

units produced per hour for the filling

process increased to 840 from 720 units
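The constraint hunt in this example is easy to express in code. Below is a minimal Python sketch, assuming the throughputs given in the example; the sealing rate is not stated in the course, so the 780 used here is purely a hypothetical stand-in:

```python
# Theory of constraints, step 1: the constraint is the sub-process
# with the lowest throughput in the chain.
throughputs = {
    "coding/printing": 900,  # units per hour, from the example
    "filling": 720,          # units per hour, from the example
    "sealing": 780,          # hypothetical value, not stated in the course
}

constraint = min(throughputs, key=throughputs.get)
print(f"constraint: {constraint} at {throughputs[constraint]} units/hour")

# After the constraint is broken (filling raised from 720 to 840),
# re-running the same search surfaces sealing as the next constraint,
# mirroring the TOC step of going back to step one.
throughputs["filling"] = 840
print("next constraint:", min(throughputs, key=throughputs.get))
```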

hey dear Learners check out our

certified lean Six Sigma Green Belt

certification training course and earn

a Green Belt certificat­ion to learn more

about this course you can click the

course Link in the descriptio­n box below

let us proceed to the next topic of this

lesson in this topic we will discuss the

concepts in design for Six Sigma or dfss

let us first understand dfss in the next

dfss or design for Six Sigma is a

business process methodolog­y that

ensures that any new product or service

meets customer requiremen­ts and the

process for that product or service is capable

dfss uses tools such as quality function

deployment or qfd and failure mode and effects analysis or FMEA

dfss can help a business system to

introduce an entirely new product or service

it can also be used to introduce a new

category of product or service for the organization

for example an fmcg company plans to

make a new brand of hair oil a type of

product already in the market

dfss also improves the product or

service and adds to the current product

or service lines to implement dfss a

business system has to know its customer

dfss can be used to design a new product

or service a new process for a new

product or service or redesign of an

existing product or service to meet changed customer needs

let us learn about processes for dfss in the next screen

the two major processes for dfss are idov and dmadv

idov stands for identify design optimize and verify

dmadv stands for define measure analyze design and verify

in the idov process the first step

involves identifyin­g specific customer

needs based on which the new product and

business process will be designed the

next step involves design which involves

identifyin­g functional requiremen­ts

developing alternate Concepts evaluating

the Alternativ­es selecting a best fit

concept and predicting Sigma capability

tools such as FMEA are used here the

third step optimize uses a statistica­l

approach to calculate tolerance with

when idov is implemente­d to design a

process expected to work at Six Sigma

level this is checked in the optimize

phase if the process does not meet

expectatio­ns the optimize phase helps in

developing detailed design elements

predicting performance and optimizing the design

the last stage of idov is to verify that

is to test and validate the design and

finally to check conformance to Six Sigma requirements

the other process dmadv has five stages

the first stage is to define the

customer requiremen­ts and goals for the

process product or service next measure

and match performanc­e to customer

the third stage involves analysis and

assessment of the design for the process

the next step is to design and implement

the array of latest processes required

for the new process product or service

the final stage is to verify results and

in the next screen we will look at the

differences between idov and dmadv

the primary difference between idov and

dmadv is that while idov is used only

to design a new product or service

dmadv can be used to design either a new

product or service or redesign an existing one

idov involves design of a new process

while dmadv involves redesignin­g an

existing process in idov no analysis or

measuremen­t of existing process is done

and the whole developmen­t is new the

design step immediatel­y follows the

identifica­tion of customer requiremen­ts

in contrast in dmadv the existing product

service or process is examined

thoroughly before moving to the design

stage the design stage comes only after

defining requiremen­ts and analyzing the

existing product service or process

in the following screen we will learn

about tool Quality Function deployment

or qfd which is one of the dfss tools

qfd also called voice of customer or VOC

or House of quality is a predefined

method of identifyin­g customer

requiremen­ts it is a systematic process

to understand the needs of the customer

and convert them into a set of design

requirements qfd motivates a business to focus on its

customers and design products that are

competitive in less time and at lower cost

the primary learning from qfd includes

which customer requiremen­ts are most

important what the organizati­on's

strengths and weaknesses are where an

organization should focus its efforts

and where most of the work needs to be done

to learn from qfd the organizati­on

should ask relevant questions to

customers and tabulate them to bring out

a set of parameters critical to the product design

apart from understand­ing customer

requiremen­ts it is also important to

know what would happen if a particular

product or service fails when being used

it is necessary to understand the

effects of failure on the customer to

ensure preventive actions are taken and

to be able to answer the customers in

in the next screen we will look at

another dfss tool failure modes and effects analysis

failure modes and effects analysis or

FMEA is a preemptive tool that helps any

system to identify potential pitfalls at

all levels of a business system it helps

the organizati­on to identify and

prioritize the different failure modes

of its product or service and what

effect the failure would have on the

customer it helps in identifyin­g the

critical areas in a system on which the

organization's efforts can be focused

however FMEA is limited to the identification of critical areas it does

not offer solutions to the identified

problems we will look at the varieties

of FMEA such as dfmea and pfmea in the

pfmea stands for process failure mode

and effects analysis and dfmea stands

for design failure mode and effects analysis

pfmea is used on a new or existing

process to uncover potential failures it

is done in the quality planning phase to

act as an aid during production a

process FMEA can involve fabricatio­n

assembly transactio­ns or services

dfmea is used in the design of a new

product service or process to uncover

potential failures the purpose is to

find out how failure modes affect the

system and to reduce the effect of

failure on the system this is done before

manufacturing begins all design deficiencies

are sorted out at the end of this

process in the following screen we will

understand FMEA risk priority number

FMEA risk priority number or RPN is a

measure used to quantify or assess risk

associated with the design or process

assessing risk helps identify critical areas

the higher the RPN the higher the priority the

rpn is a product of three numbers

severity of a failure occurrence of a

failure and the detectability of a failure

all these numbers are given a value on a

scale of 1 to 10. the minimum value of

rpn is one and the maximum value is one thousand

a failure mode with a high occurrence

rating means the failure mode occurs frequently

a mode with a high severity rating means

that the mode is really critical to operations

a mode with a high detection rating

means that the current controls are not capable of detecting the failure
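Because the RPN is just the product of the three ratings, the prioritization can be sketched in a few lines of Python; the failure modes and ratings below are hypothetical illustrations, not taken from the course:

```python
# RPN = severity x occurrence x detection, each rated 1-10,
# so the RPN ranges from 1 to 1000; higher RPN = higher priority.
failure_modes = [
    # (failure mode, severity, occurrence, detection) -- hypothetical
    ("network outage", 7, 5, 9),
    ("data entry error", 4, 8, 3),
    ("power failure", 9, 2, 2),
]

def rpn(severity, occurrence, detection):
    assert all(1 <= r <= 10 for r in (severity, occurrence, detection))
    return severity * occurrence * detection

# List the failure modes from highest to lowest priority.
for name, s, o, d in sorted(failure_modes, key=lambda m: -rpn(*m[1:])):
    print(f"{name}: RPN = {rpn(s, o, d)}")
```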

in the next screen we will look at the

the FMEA table helps plan Improvemen­t

initiative­s by underlinin­g why and how

failure modes occur and helps

organizati­ons plan for their prevention

typically FMEA is applied on the output

of root cause analysis and is a better

tool for Focus or prioritiza­tion as

compared to multi-voti­ng one important

aspect of FMEA is that it does not need data

experts in a particular area can form

the FMEA table without having to look at any data

in functions such as human resources the

FMEA table is very useful as there might

not be much data available to the

the sample FMEA table is given on the

screen please go through the contents

in the following screen we will discuss

severity of risk priority number and

severity refers to the seriousnes­s of

the effect of the failure mode or how

critical the failure mode is to the

the severity of a failure mode is rated

on a scale of 1 to 10 using a severity table

different Industries follow different

structures for the severity table

a high severity rating indicates a mode

is critical to operationa­l safety

for example a team working on FMEA of a

radioactive plant may insert fatal as the highest severity level

another example is the severity table of a sports team

the team manager wants to rate the

severity of failure of the team in an important match

she might rate it at 9 given that the

team would lose a big sponsorshi­p should

they face defeat which could in turn be

hazardous to the team's future

shown here is a generalize­d table of

the severity rating can never be changed

for example if a mode has a rating of 9

before Improvemen­t it will continue to

have a rating of nine after Improvemen­t

let us look at occurrence of RPN and scale criteria

occurrence is the probabilit­y that a

specific cause will result in the failure mode

as with severity occurrence is rated on

a scale of 1 to 10 based on a table

like the severity table higher the

occurrence of a failure higher is its

rating again this table might vary

depending on the industry and scenario

sometimes the project team can use data

here if available based on past data the

probabilit­y of occurrence of a failure

can easily be rated shown here is a

generalize­d table let us next look at

detection of rpn and scale criteria

detection is the probabilit­y that a

particular failure will be detected the

table shown here is again a generalized one

the rating here is a bit different from the severity and occurrence ratings

higher the detectabil­ity of a failure

lower is its rating this is because if

the failure can easily be detected then

everyone would know of it and therefore

there would be less or no damage for

example if detection is impossible the

failure is given a rating of 10. please

note that at the start of a Six Sigma

project the failure mode is given a

relatively High detection rating let us

look at an example of FMEA and rpn in

in this example A Bank wants to

recognize and prioritize the risks

involved in the process of withdrawin­g

it can be observed from the table that

not having a control in place for

network issues has the highest RPN this

is due to the detectability for a network issue being poor

the next set of informatio­n in the table

shows the action taken by the bank's

management to address the failure modes

following the implementa­tion the new rpn

is calculated retaining the severity

level at nine this is because the

actions were not directed at reducing

the severity but at the causes of

failure it can be seen that the new rpn

is much lower and the risk for both failure modes is reduced

this lesson will cover the details of the define phase

Six Sigma can be applied to everything

around us it can be applied across almost

70 different sectors however it cannot

be applied to all problems the first

step is to check if the project

qualifies to be a Six Sigma project the

questions that need to be asked are as

is there an existing process

to implement the DMAIC methodology of

problem solving a process needs to exist

the process should be in operation for

the development of the product or service

is there a problem in the process

ideally the process should not have any problems

if there is a problem in the process

performance the process needs to be improved

the problem has to be measurable in

order to assess the root cause and the

impact of the problem on the process

does the problem impact customer satisfaction

if the problem affects customer

satisfacti­on an action needs to be taken

immediatel­y else the customer may start

finding alternate products or switching to competitors

does working on the problem impact profits

it is very essential to assess the

impact of the project on the profits of

the company if the project affects the

profits of the company adversely then

such a project is not feasible

is the root cause of the problem unknown

if the root cause of the problem is

visible then a Six Sigma project is not

required other problem-so­lving

techniques can be used in this case

if the solution to the problem is

already known then there is no need for

any project the company can directly implement the solution

the define phase of DMAIC will be

discussed next now that the Six Sigma project process is known

define is the first phase in the Six Sigma methodology

in the Define phase the problem is

defined and the Six Sigma team is formed

the objectives of the define phase are as follows

clearly Define the problem statement

understand customer requiremen­ts and

ensure that the Six Sigma project goals

are aligned to these requiremen­ts

define the objectives of the Six Sigma project

plan the project in terms of time and budget

Define the team structure for the

project and establish roles and responsibilities

in the next screen let us learn about

benchmarki­ng is the process of comparing

an organizati­on's business processes

practices and performanc­e metrics with

there are various types of benchmarki­ng

let us briefly look at each type

process benchmarki­ng entails comparing

specific processes to a leading company

this is useful to obtain a simplified

view of business operations and enables

a focused review of major business

comparison­s of production processes data

collection processes performanc­e

indicators and productivi­ty and

Financial benchmarki­ng is performed to

assess overall competitiv­eness and

it is done by running a detailed

financial analysis and analyzing the

performanc­e benchmarki­ng involves

comparison of products and services with

those of competitor­s with the intention

of evaluating the organizati­on's

product benchmarki­ng involves designing

new products or services or upgrading

this can involve reverse engineerin­g a

competitor­'s products to study the

strengths and weaknesses and modeling

the new product on these findings

strategic benchmarki­ng refers to

studying strategies and problem-so­lving

approaches in other Industries

functional benchmarki­ng is the focused

analysis of a single function with the

complex functions may need to be divided

into processes before benchmarki­ng is

competitiv­e benchmarki­ng includes

standardiz­ing organizati­onal strategies

process products services and procedures

against the competitors in the same industry

collaborat­ive benchmarki­ng is a type of

benchmarki­ng where the standardiz­ation

of various business parameters is

carried out by a group of companies and

if the subsidiary units of a company or

its various branches carry out the

benchmarki­ng it is called collaborat­ive

let us take a look at Best Practices for

benchmarki­ng in the next screen

best practice is a method that ensures

continuous Improvemen­t leading to

exceptiona­l performanc­e it is also a

method to sustain and develop the

process continuous­ly some of the best

practices in benchmarki­ng are as follows

increase the objectives or scope of benchmarking

set the standards and path to be followed

reduce unnecessary effort and comply with standards

recognize the best in the industry to learn from

share the information derived from benchmarking

in the next screen we will discuss

let us understand process business

process and business system in this

a process is a series of steps designed

to produce a product or service to meet

a process mainly consists of three

elements input process and output

a business process is a systematic

organizati­on of objects such as people

machinery and materials into work

activities designed to produce a

as shown on the screen a process is a part of a business process

a business process is in turn a part of a business system

a business system is a value-adde­d chain

of various business processes such as

for example payroll calculatio­n is a

process in the HR business process of an

I.T company which is a business system

in the next screen we will look at the

let us discuss the challenges to

business process Improvemen­t in this

the Improvemen­t to a business process of

an organizati­on faces challenges due to

the traditiona­l business system

structure because it is generally

grouped around the functional aspect

the main problem in a functional­ly

grouped organizati­on is the movement or

flow of a product or service a product

or service has to go through various

functions and their functional elements

to reach the customer or end user

the other problem is management of the

flow of products or services across functions

this is difficult as usually there is no single owner for the end-to-end flow

these business process Improvemen­t

problems can be solved using the project

management approach to produce the

in the next screen we will learn about

the representa­tion of where the process

owner and stakeholde­rs are placed in the

organizati­onal hierarchy is on the

screen the process owner is a person at

a senior level in the hierarchy

he is the one who takes responsibi­lity

for the performanc­e and execution of a

process and also has the authority and

the ability to make necessary changes

on the other hand a stakeholde­r is a

person group or organizati­on which is

affected or can affect an organizati­on's

businesses have many stakeholde­rs like

stockholde­rs customers suppliers company

management employees of an organizati­on

and their families the society Etc

let us discuss the effects of process

failure on various stakeholde­rs in the

while it is an absolute business

necessity to keep one stakeholde­r

satisfied at all times failure to meet

one or more process objectives may

result in negative effects on them

in such situations for the stockholde­rs

the perceived value for the company gets

reduced customers May seek other

competitor­s for their deals while

imposing penalties and finding recourse

in legal action against the company

suppliers may be on the losing front

with delayed payments or not being paid

at all company management may resort to

cost cutting and employees may receive

diminishing wages the community and

Society will be affected due to

pollution created by the organizati­on

in the next screen we will understand

the relationsh­ip between business and

in the diagram shown on the screen each

stakeholde­r is both a supplier as well

as a customer forming many closed-loo­p

processes that must be managed

controlled balanced and optimized for

communicat­ion is the key in such

situations and is facilitate­d through

the next screen covers the importance

and relevance of stakeholde­r analysis

stakeholde­r analysis is an important

task to be completed before doing a Six Sigma project

a business has many stakeholde­rs and any

change to a business process affects

some or all of them when a process does

not meet its objectives it results in

the stakeholde­rs being negatively

affected which in turn affects the

the Six Sigma team must factor in the

reasons why a stakeholder may oppose the project

let us proceed to the next topic of this

in this topic we will discuss voice of

let us start with how to identify the

customer in the following screen

customers are the most important part of

a customer is someone who decides to

purchase pays consumes and gets affected

by a particular product or service

it is important to understand the customer requirements so that

the products or Services can be designed

according to these requiremen­ts

consequent­ly the company is able to

provide products or Services the

customers are willing to purchase

there are two types of customers

internal and external customers

in the next screen we will learn about

an internal customer can be defined as

anyone within the business system who is

affected by the product or the service

most often internal customers are the employees of the organization

for example let us assume that there is

a series of processes in a particular

in such a scenario the second process is

the internal customer for the first

the third process is an internal

customer for the second process and so on

the basic needs of an internal customer

are to be provided proper tools and

necessary equipment imparted proper

training and given Specific Instructio­ns

to carry out their responsibi­lities

however the needs are not limited to

other needs include the provision

storyboard­s to display the letters Etc

team meetings to share business news and

announceme­nts staff meeting to share

informatio­n and quality awards from

an internal customer is important for several reasons first

of all the activities of an internal

customer directly affect the final or

secondly the activities of an internal

customer affect the next process in the

finally an internal customer also

affects the quality of the product

developed or service provided

when the needs of the internal customers

who in most cases are employees are met they are

more likely to have higher perception­s

of quality and also contribute to

the satisfacti­on levels of the internal

customers can be improved in various

ways these include a higher amount of

internal communicat­ion through company

recognitio­ns for work quality Awards Etc

constant training on how to be ahead and

environmen­t is very essential too in the

next screen we will learn about external

this screen focuses on the positive

effects of a project on the customers

the most important aspect of any process

Improvemen­t project is the customers

internal customers are the ones who

drive the project hence the effect of

the project on internal customers is a

critical factor that needs to be

the positive impact of a project on the

internal customers is as follows

the project is driven by highly

motivated individual­s or internal

customers who are aware of the project

individual­s belonging to a credible

project understand the project

deliverabl­es and display high levels of

job satisfacti­on these individual­s go

the extra mile to take up tasks beyond

such individual­s make a highly motivated

team focused on delivering their

responsibi­lities in order to meet the

customer requiremen­ts working together

in a positive environmen­t also improves

the positive impact of a project on the

external customers is as follows

process Improvemen­t projects analyze the

problems and come up with an effective

solution consequent­ly ensuring a better

a successful process Improvemen­t project

assists the organizati­on in effectivel­y

meeting customer expectatio­ns or

there is visible improvement in customer satisfaction

good quality product and service ensures customer loyalty

let us learn about different methods of

customer data collection in the

once you begin to identify the customer

types you need to look forward to

collecting customer data collecting data

from customers is very essential as it

helps consider the levels at which these

customers affect the business begin by

collecting feedback from both internal and external customers

customer feedback helps fill the gaps

and improve the various business

it helps Define a good quality product

as perceived by the customer and

identify qualities that make the

competitor­s products or service better

it also helps identify factors which

provide a Competitiv­e Edge to the

there are various methods to collect

feedback from the customers many of you

might be involved in a similar activity

popular and common methods are surveys

conducted through questionna­ires focus

groups and individual interviews with the customers

customer complaints received via call

centers emails and feedback forms are other sources

feedback received in this form is from dissatisfied customers

in the next screen we will learn about questionnaires

let us discuss the advantages and

disadvantages of questionnaires in this screen

the advantages of a questionna­ire are

that it costs less the phone response

rate is high anywhere from around 70 to

90 percent and it produces faster

also analysis of mail questionna­ires

requires few trained resources

a questionnaire is a method used to gather data

however there are some disadvantages associated with it

there may be incomplete results and

unanswered questions leading to a lack of complete information

the response rate of mail surveys is low

at times phone surveys can produce

undesirabl­e results as the interviewe­r

can influence the person being

we will differenti­ate between telephone

survey and web survey in the next screen

there are different methods to collect

data for a survey the methods need to be

based on the requiremen­ts and needs of

the organizati­on the popular methods of

survey are the telephone survey and web

both have their own drawbacks and

benefits which are given on the screen

the organizati­on needs to choose a

method of collecting data according to

it is recommende­d to go through the

content for a better understand­ing

in the next screen we will learn about

let us now discuss the advantages and

disadvanta­ges of using a focus group for

data collection the interactio­n in a

focus group generates informatio­n

provides in-depth responses and can

address more complex questions or

qualitativ­e data it is an excellent

platform to get critical to Quality or

on the other hand the disadvanta­ges of

focus groups are that the learning only

applies to those within the group and it cannot be generalized

the information collected is more

qualitative than quantitative which is harder to analyze

additional­ly they can also generate a

lot of informatio­n from anecdotes and

incidents experience­d by the individual­s

in the next screen we will discuss the

this screen discusses advantages and

disadvanta­ges of using the interview

technique for data collection

interviews have a capability to handle

complex questions and a large amount of

they also allow us to use visual aids in

it is a better method to be employed

when people do not respond willingly and

or accurately by phone or email however

there are some shortcomin­gs as well

interviews are time consuming and the

resources or interviewe­r needs to be

trained and experience­d to carry out the

task let us discuss the importance and

urgency of these inputs in the next

the table shows the importance and

urgency of different kinds of input

to understand the kind of input to be

chosen different kinds of methods for

collecting data are identified

telephone survey web survey and

interview are the data collection

to select the best methods the criteria

or the factors which are important to

the criteria are the factors based on

which an organizati­on is going to make

decisions the list of factors is then

given weightage based on the importance

of each factor in decision making as

seen cost is the most important

criterion for which the weightage

given is 20 response rate of the customer

is the next important factor and the list follows

visualizin­g feature and compiling and

analyzing data are the factors which

have the lowest impact on the decision

of selecting the methods for data

each of the data collecting methods is

rated between 1 and 10 based on its

impact on the listed factors with 10

being highly favorable to the

organization and one being least favorable

after rating all the methods with the

factors listed the sum or total is

calculated the calculatio­n of the total

involves multiplyin­g each method's

rating with the factor weightage and

adding all the multiplied values of the

column that is for the telephone survey the

rating is multiplied with the factor weightage that is

8×12 + 8×6 + 3×20 + 5×5 + 3×5 + 7×15 + 1×10 + 7×3 + 0×2 + 3×2 + 1×10 + 7×5 + 8×5 and the total of this is 471

in a similar way calculate the total

value for the remaining two methods the

total of other two methods are 744 and

522 respective­ly looking at the overall

total of the methods 744 is the highest

hence web survey is the best method for

the organization to use for data collection
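The arithmetic above reduces to a single weighted sum. Here is a minimal Python sketch using the telephone-survey ratings and weightages read out above; the web survey and interview ratings are not listed in the course, so only their stated totals are echoed in the comment:

```python
# Weighted decision matrix: multiply each 1-10 rating by the
# criterion weightage (the weightages sum to 100) and add them up.
weights = [12, 6, 20, 5, 5, 15, 10, 3, 2, 2, 10, 5, 5]
telephone_ratings = [8, 8, 3, 5, 3, 7, 1, 7, 0, 3, 1, 7, 8]

def weighted_total(ratings, weights):
    return sum(r * w for r, w in zip(ratings, weights))

print(weighted_total(telephone_ratings, weights))  # 471
# The course quotes 744 for the web survey and 522 for the
# interview, so the web survey scores highest.
```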

let us look at the pros and cons of

customer complaints data in the next

there are pros and cons in gathering

informatio­n from customer complaints

advantages include availabili­ty of

specific feedback directly from the

customer and ease in responding

appropriat­ely to every customer on the

contrary feedback in this method does

not provide an adequate sample size and

may lead to process changes based on one

or two inputs from the customer the next

screen will discuss the difference

between product complaint and expedited service request

product complaints and expedited service

requests can act as inputs to the

company for improving their process

these details address the needs of the

a product complaint means that the

customer is not happy with the product

that he has purchased from the company

an expedited service request means a

service request is being rushed if the

customer requires the items immediatel­y

then an expedited service request is

raised from the customer and the

organizati­on tries to fulfill it to

product complaint implies that a product

is not meeting customer specificat­ion

expedited service request implies that

service timelines are not meeting

customer requirements hence service has to be rushed

product complaint also implies that the

customer needs for product are not

completely identified whereas expedited

service request implies that the

customer timings need to be recalculat­ed

let us discuss the importance and

urgency of these inputs in the next

the table shows the importance and

urgency of different kinds of input

to select the best methods the criteria

or the factors which are important to

the criteria are the factors based on

which organizati­on is going to make

these factors are then given weightage

based on the importance of each factor

as seen cost involved and identifica­tion

of customer need are the most important

criteria for which the weightage given

is 15 and the list follows time

consumptio­n and compiling and analyzing

data are the factors which have the

least impact on the decision of

selecting the methods for data

each method is rated between 1 and 10

based on its impact on the listed

factors with 10 being highly favorable

to the organizati­on and one being least

favorable to the organizati­on

after rating all the methods with the

factors listed the sum or total is calculated

calculatio­n of the total is derived by

multiplyin­g each method's rating with

the factor weightage and adding all the

multiplied values that is for product

complaint the rating is multiplied with the factor weightage that is

8×15 + 4×15 + 3×2 + 1×10 + 1×10 + 1×10 + 1×8 + 1×10 + 4×2 + 1×8 + 1×10 and the total of this is 260

in a similar way calculate the total

value for expedited service request the

total of expedited service request is

817 and hence it is more effective for the organization
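The same weighted-sum computation settles this comparison too; a short Python sketch with the product-complaint ratings and weightages read out above:

```python
# Weightages for the complaint-type criteria (they sum to 100).
weights = [15, 15, 2, 10, 10, 10, 8, 10, 2, 8, 10]
product_complaint = [8, 4, 3, 1, 1, 1, 1, 1, 4, 1, 1]

total = sum(r * w for r, w in zip(product_complaint, weights))
print(total)  # 260; the expedited service request totals 817
```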

let us discuss the key elements of data

collection tools in the next screen

data collection tools will be selected

based on the type of data to be

collected the key elements that make

these tools effective are as follows

data is collected directly from the

primary source or customer hence there

is no scope for miscommuni­cation or loss

data is collected exclusivel­y for the

stated purpose hence data is highly

reliable the data captured is after

understand­ing the organizati­onal purpose

this makes the data exclusivel­y relevant

and serves the purpose of the

data is collected instantane­ously when

there is a requiremen­t this ensures that

the data is up to date hence the data is

the tools accurately Define customer

requiremen­ts the customer requiremen­ts

could be current needs or Improvemen­t to

the product or service that they are

the tools help to get enough informatio­n

about customer requiremen­t through which

the process for improving or creating

the product or service that the customer needs can be designed

in the next screen we will discuss how

the collected data can be reviewed

collated data must be reviewed to

eliminate vagueness ambiguity and any bias

neutral worldwide buys laptops for its

employees from a company that is into

manufactur­e and sales of laptops the

company also provides servicing and

repairs for their products to the

to understand the level of customer

satisfacti­on in neutral worldwide and to

improve its process the laptop company

is conducting a survey the questionnaire

is shown on the screen the questionnaire before review had

questions that led to ambiguity

let us look at each item on the survey

to understand the level of usage of the

laptop and to know their customer better

the survey is raising a question related

to the occupation of the customer it

gives the option of student or

profession­al but with this low amount of

informatio­n the company is neither able

to gather the informatio­n nor will the

given option cover the entire possible

occupation in the market including an

option of other please specify would

help the customer to choose and provide

the informatio­n if he does not belong to

one of the two given groups hence the

same is added in the review so that the

customer will not be in any ambiguity

while filling the questionna­ire

the question whether the sales executive

was supportive with an option of yes or

no is a question which leads again to

ambiguity and unintended bias the

customer might be partially happy or

partially not happy but the choice does

not let them inform their exact feeling

if the customer selects no as the option

then the company does not get enough

informatio­n to understand where their

hence in the reviewed questionna­ire the

customer is asked to rate the qualities

of their sales executive which will

provide better data to the company in the review

next we will discuss a technique named voice of the customer

the voice of customer is a technique to

organize analyze and profile the

customer's requiremen­ts voice of the

customer is an expression for listening to the customer

consider customers with different requirements while purchasing an air

conditione­r in all cases the customer is

purchasing for his or her domestic usage

each customer is further categorize­d

according to his needs and requiremen­ts

when the customer says that he needs a

silent air conditione­r he needs sound

sleep at night in the bedroom this is

primarily to remain fresh the next

morning and to get rid of the noisy

ceiling fan being used currently

in case the customer says that he needs

an efficient AC he needs a machine which

provides good cooling at night in the

bedroom this is mainly because it gets

extremely hot in summer also he

currently uses a ceiling fan which is

not so effective in Summers on the other

hand when the customer wants to buy an

AC which is not too costly he has

limited cash for the purchase he wants

let us discuss the importance of

translatin­g customer requiremen­ts in the

customer requiremen­t is the data

collected from customers that gives

informatio­n about what they need or want

from the process customer requiremen­ts

are often high level vague and

some customers may give you a set of

specific requiremen­ts to the business

but broadly customers requiremen­ts are a

customer requiremen­ts when translated

into critical process requiremen­ts that

are specific and measurable are called

critical to Quality ctq factors a fully

developed ctq has four major elements

output characteri­stic y metric Target

and specificat­ion or tolerance limits
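Since a fully developed CTQ always carries those four elements, it can be captured as a small record; a minimal Python sketch with hypothetical example values:

```python
from dataclasses import dataclass

@dataclass
class CTQ:
    output_characteristic: str      # the output Y being controlled
    metric: str                     # how the Y is measured
    target: float                   # the desired value
    tolerance: tuple                # (lower, upper) specification limits

# Hypothetical example: answering support calls quickly.
ctq = CTQ("call answer speed", "seconds to answer", 20.0, (0.0, 30.0))
print(ctq)
```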

we will discuss the meaning of ctq in

let us understand what Quality Function

qfd is a process to ensure that the

customer's wants and needs are heard and met

it is also known as the voice of the customer

qfd is a process to understand the

customer's needs and translate them into

a set of design and Manufactur­ing

requiremen­ts while motivating businesses

to focus on their customers it also

helps companies to design and build more

competitiv­e products in less time and

qfd helps in prioritizi­ng customer

requiremen­ts recognizin­g strengths and

weaknesses of an organizati­on and

recognizin­g areas that need to be worked

on and areas that need immediate focus

qfd is carried out by asking relevant

questions to the customers and

tabulating them to bring out a set of

parameters critical to the product

design let us discuss phases of qfd in the next screen

quality function deployment involves

four phases phase one product planning

in this phase the qfd team translates

the customer requirements into product technical requirements

phase two product design in this phase

the qfd team translates the identified

technical requirements into key part characteristics

phase three process planning in this

phase the qfd team identifies the key

process operations necessary to achieve

the identified key part characteri­stics

phase 4 production planning or process

in this phase the qfd team establishe­s

process control plans maintenanc­e plans

and training plans to control operations

next we will understand the structure of the house of quality or hoq

let us see what happens after completing one hoq matrix

completing one hoq Matrix is not the end

the output of the first hoq Matrix can

be the first stage of the second qfd

phase as shown in the image the

translatio­n process is continued using

linked hoq type matrices until the

production planning targets are

let us proceed to the next topic of this

lesson in the following screen hey there

Learners check out our certified lean

Six Sigma Green Belt certificat­ion

training course and earn a Green Belt

certificat­ion to learn more about this

course you can click the course Link in

in this topic we will discuss the basics

of project management let us start with

a discussion on problem statement

every Six Sigma project targets a

problem that needs to be resolved the

first step of project initiation is

defining the problem statement a problem

statement needs to describe the problem

in a clear and concise manner a problem

statement needs to identify and specify

it should indicate the current

performanc­e state of a process and the

required performanc­e State completely

derived from customer requiremen­ts a

problem statement should be quantifiab­le

this means it should have specified

metrics including the respective units

please note that the problem statement

cannot contain Solutions or causes for

the problem in the next screen we will

discuss the is or is not template

the is or is not technique was first

popularized by Kepner-Tregoe Incorporated

in the 1970s it is a powerful tool that

helps Define the problem and gather

required informatio­n an example of a

problem statement of paper cup leaks is

the Six Sigma team has to answer what is

the problem what isn't the problem where

is it where isn't it when is it when

isn't it the problem to what extent is

it a problem and to what extent isn't it

the informatio­n is then used to fill the

question areas in the is and is not

issue template in the analysis phase if

a cause cannot describe the is and the

is not data then it's not likely the

main cause in the next screen we will

list the criteria for the project

the project objectives must meet the

characteristics desired in project objectives specific measurable

attainable relevant time-based and stretch goals

the project deliverabl­es should be

specific example hospitals maintain

records of all patients often a few

forms are rejected or missed due to

errors in recording the ID numbers

in this case setting the objective as

reduce form rejection is very vague

instead reduce patient ID errors in

recording lab results is specific and

effectivel­y targets solving the problem

the project objectives should be

quantifiab­le example setting the

objective as fewer form rejections is

very vague instead reduce patient ID

Errors By 30 percent sets a specific

the project objectives should be

achievable and practical the project

objectives should be relevant to the

problem the project objectives must

specify a time frame within which they

should be delivered the project

objectives must not be easily achievable

example most problems and errors can be

reduced by creating awareness hence the

objective must stretch beyond the easily

in the next screen we will understand

project documentat­ion refers to creating

documents to provide details about the

such documents are used to gain a better

understand­ing of the project prevent and

resolve conflict among stakeholde­rs and

Share Plans and status for the project

documentat­ion of a project is critical

throughout the project some of the

benefits achieved through project

documentat­ion are mentioned below

documentat­ion serves as written proof

for execution of the project it helps

teams achieve a common understand­ing of

the requiremen­ts and the status of the

it removes personal bias as there is a

documented history of discussion­s and

decisions made for the project

depending on the nature of the project

each project produces a number of

different documents some of these

documents are the project Charter

project plan and its subsidiary plans

other examples of project documentat­ion

include project status reports including

key Milestones report risk items and

pending action items the frequency of

these reports is determined by the need

and complexity of the project these

reports are sent to All stakeholde­rs to

keep them abreast of the status of the

project another example of project

documentat­ion is the final project

report this report is prepared at the

end of the project and includes a

summary of the complete project

project storyboard inputs generated from

spreadshee­ts checklists and other

miscellane­ous documents are also

classified as project documents

in the next screen we will understand

we will list the project Charter

sections in this screen the major

sections of a project Charter are

project name and descriptio­n business

requiremen­ts name of the project manager

project purpose or justificat­ion

including Roi stakeholde­r and

stakeholde­r requiremen­ts broad timelines

major deliverabl­es constraint­s and

assumption­s and the budget summary of

the charter in the next screen we will

a project plan is the final approved

document which is used to manage and

control the various processes within the

project and ensure its seamless

the project manager uses the project

Charter as an input to create a detailed

a project plan comprises various

sections prominent among them being the

project management approach the scope

statement the work breakdown structure

the cost estimates scheduling defining

performanc­e baselines marking major

Milestones to be achieved and the key

members and required staff Personnel for

it also includes the various open and

pending decisions related to the project

and the key risks involved additional­ly

it also contains references to other

subsidiary plans for managing risk scope

in the next screen we will learn about

we will look at different techniques

used for interpreti­ng the project scope

in this screen project scope can be

interprete­d from the problem statement

and project Charter using various tools

like the Pareto chart and the SIPOC map

the principle behind the Pareto chart

or the 80-20 principle as we know it is that 80 percent of effects come from 20 percent of causes

the Pareto chart helps the teams to

trim the scope of the project by

identifyin­g the causes which have a

major impact on the outcome of the

the SIPOC map is a high-level process

map which helps all team members in

understand­ing the process functions in

terms of addressing questions like who

are the suppliers what are the inputs

they provide what are the outputs that

can be obtained and who are the customers

as discussed earlier SIPOC stands for

suppliers inputs process outputs and customers

in the subsequent screen we will learn how SIPOC differs from a process map

SIPOC is a macro level map that provides

an overview of the business process

where a process map is a micro level

flowchart that provides an in-depth view

the process map covers details at all

levels and provides a walk through the entire process

the SIPOC map is used as a basis while creating the process map

a level 1 process map provides in-depth

informatio­n but the final process map

drills further into detail in the

following screen we will understand the

let us discuss consequent­ial metrics in

consequent­ial metrics measure any

negative consequenc­es these can be

business metrics process metrics or both

they measure the negative effects of

improving the primary or key metrics

they are used to measure the indemnity

triggered by any damage in the project

the inconsiste­nt use of consequent­ial

metrics can lead to loss of opportunit­y

and rework after a project ends

consequent­ial metrics help to understand

the cause and effect relationsh­ip

between the primary and the secondary

metrics and the impact it has on the

let us take a look at an example for

consequent­ial metrics in the next screen

we will discuss the best practices in

the following are some of the best

practices of consequent­ial metrics

setting consequent­ial metrics during the

measure phase and monitoring these

metrics after finalizing the project

will help to analyze whether the link

between previous primary and secondary

also linking consequent­ial metrics with

primary metrics and finally linking them

with secondary metrics provides Clarity

on the impact of these metrics

assessing and evaluating the cause and

effect relationsh­ip between these

metrics is helpful to the organizati­on

as a whole in the next screen we will

list some project planning tools

the project manager uses various tools

to plan and control a project

one of the tools which he uses is the

Pareto chart other prominent tools

include the network diagram the critical

path method also called CPM the program

evaluation and review technique which is

also known as pert Gantt charts and the

work breakdown structure also known as

WBS in the next screen we will discuss

Pareto chart is a histogram ordered by

the frequency of occurrence of events it

is also known as the 80-20 rule or vital few analysis

it helps project teams to focus on the

issues which cause the highest number of

to explain further the given chart plots

all the causes for defects in a product

or service the values are represente­d in

descending order by bars and the

cumulative total is represented by the line

Pareto chart emphasizes that 80 percent

of the effects come from 20 percent of the causes

thus a Pareto chart Narrows the scope of

the project or problem solving by

identifyin­g the major causes affecting

quality Pareto charts are useful only

when required data is available

if data is not available then other

tools such as brainstorm­ing and

multi-voting should be used to find the major causes

in the following screen we will continue

to discuss the Pareto chart with an example

a hotel receives plenty of complaints

from its customers and the hotel manager

wishes to identify the key areas of

complaints complaints were received in

the following areas cleaning check-in

pool timings minibar room service and so on

cleaning and check-in can be noted as

areas of concern with 35 and 19 complaints respectively

percentage is calculated for each cause

of complaint and the cumulative is

derived the Pareto chart is plotted using this data
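The percentage and cumulative columns behind the chart are simple to compute. A minimal Python sketch; only the cleaning (35) and check-in (19) counts come from the example, the remaining counts are hypothetical:

```python
# Pareto analysis: sort causes by count, then compute each cause's
# share and the running cumulative percentage.
complaints = {
    "cleaning": 35,       # from the example
    "check-in": 19,       # from the example
    "pool timings": 8,    # hypothetical
    "minibar": 6,         # hypothetical
    "room service": 4,    # hypothetical
}

total = sum(complaints.values())
cumulative = 0.0
for cause, count in sorted(complaints.items(), key=lambda kv: -kv[1]):
    share = 100 * count / total
    cumulative += share
    print(f"{cause}: {share:.1f}% (cumulative {cumulative:.1f}%)")
# Descending bars plus the cumulative line form the Pareto chart.
```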

in the next screen we will discuss

Network diagrams are one of the tools

used by the project manager for project

planning they are also sometimes

referred to as Arrow diagrams because

they use arrows to connect activities and represent

interdependencies between activities of the project

there are some assumption­s that need to

be made while forming the network

diagram the first assumption is that

before a new activity begins all pending

activities have been completed

the second assumption is that all arrows

indicate logical precedence this means

that the direction of the arrow

represents the sequence that activities

the last assumption is that a network

diagram must start from a single event

and end with a single event there cannot

be multiple start and endpoints to the

network diagram in the next screen let

us discuss some terms related to network

for the network diagram to calculate the

total duration of the project the

project manager needs to Define four

dates for each task the first two dates

relate to the date by when the task can

be started the first date is early start

this is the earliest date by when the task can be started the second date is late start this is the last date by when the task should be started without delaying the project

the second two dates relate to the dates

when the task should be complete

early finish is the earliest date by

when the task can be completed late

finish is the last date by when the task should be completed

the duration of the task is calculated

as the difference between the early

start and early finish of the task

the difference between the early start

and late start of the task is called the

slack time available for the task

slack can also be calculated as the

difference between the early finish and

late finish dates of the task

slack time or float time for a task is

the amount of time the task can be

delayed before it causes a delay in the overall project
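The four dates and the slack calculation can be sketched in a few lines of Python; the tasks, durations, and dependencies below are hypothetical, and the zero-slack tasks that fall out of the calculation form the critical path discussed next.

```python
# Forward/backward pass on a small, hypothetical network diagram.
tasks = {            # task: (duration, list of predecessors)
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

es, ef = {}, {}                      # early start / early finish
for t in tasks:                      # assumes predecessors are listed before successors
    dur, preds = tasks[t]
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

project_end = max(ef.values())
ls, lf = {}, {}                      # late start / late finish
for t in reversed(list(tasks)):      # walk the network backwards
    dur, _ = tasks[t]
    successors = [s for s in tasks if t in tasks[s][1]]
    lf[t] = min((ls[s] for s in successors), default=project_end)
    ls[t] = lf[t] - dur

for t in tasks:
    slack = ls[t] - es[t]            # equivalently lf[t] - ef[t]
    flag = "critical" if slack == 0 else ""
    print(t, "ES", es[t], "EF", ef[t], "LS", ls[t], "LF", lf[t], "slack", slack, flag)
```

Running this prints zero slack for A, B, and D, so A-B-D is the critical path of this toy network.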

in the next screen we will discuss the critical path method the critical path method also known as CPM

is an important tool used by project

managers to monitor the progress of the

project and to ensure that the project stays on schedule

the critical path for a project is the

longest sequence of tasks on the network diagram

the critical path in the given Network

diagram is highlighte­d in Orange

critical path is characteri­zed by zero

slack for all tasks on the sequence this

means that the smallest delay in any of the tasks on the critical path will

cause a delay in the overall timeline of the project

this makes it very important for the

project manager to closely monitor the

tasks on the critical path and ensure

that the tasks go smoothly if needed the

project manager can divert resources

from other tasks that are not on the

critical path to task on the critical

path to ensure that the project is not delayed

when a project manager removes resources

from such tasks he needs to ensure that

the task does not become a critical path

task because of the reduced number of

resources during the execution of the

project the critical path can easily

shift because of multiple factors and

hence needs to be constantly monitored

a complex project can also have multiple

critical paths in the next screen we

will discuss project evaluation and

we will understand the concept of risk

risk is an uncertain event or a

consequence probable of occurring during the course of the project

the main objectives of any project are scope schedule cost and quality a risk affects at least one of these four objectives

it is important to understand that risk

can be both positive as well as negative

a positive risk enhances the success of

the project whereas a negative risk is a

threat to a Project's success some of

the terms used in Risk analysis and

management are risk probabilit­y issue

and risk consequenc­es the likelihood

that a risk will occur is called risk probability the way to assess any risk is to assess the

probabilit­y and impact of the risk

issue is the occurrence of a risk risk

consequenc­es are the effects on Project

objectives if there is an occurrence of a risk

in the subsequent screen we will

understand the process of risk analysis

we will list and understand some of the

elements of risk analysis in this screen

qualitative methods like interviews checklists and brainstorming are used to assess risk unlike quantitative methods quantitative methods

are data based and a computer is

required to calculate and analyze the data these methods are used to evaluate the cost

time and probabilis­tic combinatio­n of

feasibilit­y is the study of the project

risk this is usually carried out in the

beginning of the project when the

project is most flexible and risks can

be reduced at a relatively low cost

it helps in deciding different

implementing options for the project potential impact once the potential risks are identified the impact of these risks is assessed using this data possible solutions for handling the risks are determined

the RPN or risk priority number of a failure is the product of its probability of occurrence severity and detectability a failure is prioritized based on its RPN a high RPN indicates high risk
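A minimal sketch of the RPN calculation, assuming the common FMEA convention of rating occurrence, severity, and detection on 1-to-10 scales; the failure modes listed are hypothetical.

```python
# RPN = occurrence x severity x detection, each typically rated 1-10.
failure_modes = [
    # (name, occurrence, severity, detection)
    ("seal leak",      4, 8, 6),
    ("sensor drift",   7, 5, 3),
    ("loose fastener", 2, 9, 2),
]

# Prioritize failures by descending RPN; a high RPN indicates high risk.
ranked = sorted(failure_modes, key=lambda f: f[1] * f[2] * f[3], reverse=True)
for name, occ, sev, det in ranked:
    print(f"{name:15s} RPN = {occ * sev * det}")
```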

when potential risks are identified

their impact in terms of cost time

resources and objective perspectiv­e is

calculated if the impact is huge then

avoiding the risk is the best option

mitigating risk mitigating is the second

option when dealing with risks the loss that arises after mitigating a risk is much less than the loss that arises from leaving the risk untreated

accepting the risk if a risk cannot be

avoided or mitigated then it has to be

accepted the risk will be accepted if it

doesn't greatly impact the cost time and quality objectives of the project

in the following screen we will discuss the benefits of risk analysis the benefits of risk analysis are as follows

once the risk has been identified it can

be either mitigated transferred or accepted

when a risk is identified in a task slack time is provided as a buffer identifying risks also helps in setting up a realistic schedule the slack time for an activity in a project could be the result of a risk identified

identifyin­g risks helps in setting

realistic expectatio­ns from the project

by communicat­ing the risk probabilit­y

risk analysis also helps to identify and

plan contingency activities if the risk occurs

the project team is then well prepared

to work on the issue thereby reducing

the impact of the risk in the following

screen we will take a look at the risk assessment matrix

the potential risks of a project are

assessed using the risk assessment

Matrix it covers potential risk areas

like project scope team Personnel

material facility and equipment and

each of these areas is assessed in terms

of risk of loss of money productivi­ty

resources and customer confidence

in the subsequent screen we will discuss project closure

by definition a project has a beginning

and an end but without a formal closure

process project teams can fail to

recognize the end and then the project

can drag on sometimes at Great expense

every project requires closure for

larger complex projects it's a good idea

to formally close each major project phase

project closure ensures that outcomes

match the stated goals of the project

customers and stakeholde­rs are happy

critical knowledge is captured

the team feels a sense of completion and

project resources are released for new projects

in the next screen we will list the

goals of a project closure report

the project closure report is created to

accomplish the following goals

review and validate the success of the project confirm outstanding issues limitations and the activities accomplished to complete the project

highlight the best practices for future projects

provide the project report or summary

provide a project background overview

summarize the planned activities of a project

evaluate project performanc­e

provide a synopsis of the process

generate discussion­s and recommenda­tions

generate project closure recommenda­tions

in the following screen we will list and

understand project closure activities

during project closure the project

manager needs to take care of the

finalize the project documents

much of a Project's documentat­ion is

created during the life of the project

document collection and update

procedures are well establishe­d during

capture the project knowledge

project documents are helpful for future

projects in troublesho­oting the product

ideally the project library is set up at

the beginning of the project and team

members add documents as they produce them

document the project learnings

project learnings can be captured

through team meetings meetings with

stakeholde­rs and sponsor and through

feedbacks from consultant­s and vendors

the project manager needs to provide a

summary of the project results to team

members either as a presentati­on at a

meeting or as a formal document

Consultant­s should not be relieved from

their position until they have

transferre­d all the important product

maintenanc­e knowledge to the team

schedule a meeting with the project

sponsor and key stakeholde­rs to get

their final sign off on the project

if the project team used a project

management office or a dedicated work

area Arrangemen­ts need to be made to

return that space for General use

the project manager has the best

understand­ing of which of the team

members have worked the best have

transforme­d themselves with new skills

and who might be ready for a new level

of responsibi­lity the project manager

needs to report to the team's superiors

what each team member has brought to the project

after completion of every project the

team needs and deserves a celebratio­n a

team dinner a team outing gift

certificat­es or other rewards are minor

costs that generate a large return in

terms of morale and job satisfacti­on

an announceme­nt to the organizati­on is a

good way to highlight the success of the

project and its benefits to the company

formal project closure ensures that the

team has met its objectives satisfied

the customer captured important

knowledge and been rewarded for their efforts

let us proceed to the next topic of this lesson

hey there Learners check out our

certified lean Six Sigma Green Belt

certification training course and earn

a Green Belt certificat­ion to learn more

about this course you can click the

course Link in the descriptio­n box below

in this topic we will discuss management and planning tools

let us start with the discussion on

Affinity diagram in the next screen

the Affinity diagram method is employed

by an individual or team to solve

unfamiliar problems it is an effective

medium through which the consensus of the group is reached

the given Affinity diagram is based on

an organization where the employees are dissatisfied

to begin with each member writes down

ideas and opinions on sticky notes each

note can have a single idea the points raised during the brainstorming session are that the

workers are unkind pay is low and it is

difficult to survive on the pay

structure working hours are too long Etc

in The Next Step all the sticky notes

are pasted on a table or wall

the sticky papers are arranged according

to categories or thought patterns

members then arrange their ideas

based on Affinity in case a particular

idea is good to go into more than one

category it is duplicated and added to each relevant group

after the arrangemen­t is done each

category is named with a header card

the header card captures the central

idea of all the cards in that category

and draws a boundary around them

poor compensation combines ideas like low pay and difficulty surviving on the pay structure poor work environment encompasses issues like poor lighting and uncomfortable rooms

similarly poor relationsh­ips prevailed

in the workspace as the workers are

unkind and there is mutual dislike

lack of motivation is due to repetitive

work and no work related challenges

you can see in the diagram on that slide

that once all the ideas are grouped to

the respective header cards a diagram is

drawn and borders are placed around the

group of ideas thus Affinity diagram

helps in grouping ideas with a common theme

in the next screen we will discuss the

interrelationship diagram the interrelationship diagram technique helps in identifying the relationship

between problems and ideas in complex situations

if the problem is really complex it may

not be easy to determine the exact root cause

the given interrelat­ionship diagram is

the result of a team brainstorm­ing

session which identified 10 major issues

involved in developing an organization's tqm program

initially the problem is defined and all

the members put down their ideas on

sticky notes each note contains only one idea

all the sticky notes are put on a table or wall

in the next step the causes or areas of concern are identified and a cause-effect arrangement of cards is constructed by drawing an arrow from each cause to its effect

this is done until all the ideas on the

sticky notes are accounted for and made

a part of the interrelat­ionship diagram

take a large sheet of paper and

replicate the cause effect Arrangemen­t

on it as depicted in the image a large

number of outgoing arrows indicates a root cause or driver whereas a higher number of incoming arrows indicates a key outcome

there are as many as six arrows

originatin­g from lack of quality

this leads us to understand that it is a key driver

on the other hand there are three arrows

ending with the idea lack of tqm

commitment by managers making it an important outcome

in the next screen we will understand the tree diagram

the tree diagram is a systematic

approach to outline all the details

needed to complete a given objective

in other words it is a method used to

identify the tasks and methods needed to

solve a problem and reach a predefined

goal it is mostly used while developing

actions to execute a solution while

analyzing processes in detail during the

evaluation of implementa­tion issues for

several potential Solutions and also as

a communicat­ion tool to explain the

details of a process to others

the given tree diagram shows the plan of

a coffee shop trying to set standards

first the objective is noted on a note

card and placed on the far left side of

the board the basic goal of the coffee

shop is to provide a delightful cappuccino

in The Next Step the coffee shop needs

to determine the means required to

achieve the goal and furnish three means

in other words the answers to the how or

why questions of the objectives

in this case the cappuccino needs to be

at a comfortabl­e temperatur­e and it

should have strong and pleasing coffee

Aroma with the right amount of sweetness

in The Next Step the three issues

mentioned in the second stage are addressed

each issue is answered by maintaining the right temperature the cappuccino can be served comfortably warm

strong flavored cappuccino can be

prepared using a good amount of finely ground coffee

and a good quality sweetener used in the

right amount makes a great cappuccino

thus the tree diagram can be used to

achieve a goal or Define a process

in the following screen we will discuss matrix diagrams let us learn about the matrix diagram in this screen matrix diagrams show the relationship between objectives and methods or results and causes

their objective is to provide

informatio­n about the relationsh­ip

they indicate the importance of the task and method elements of the subject

they also help determine the strength of

relationsh­ips between a grid of rows and

they help in organizing a large amount

of inter-proc­ess related activities

let us discuss various types of matrices

let us learn about a process decision

program chart in this screen

process decision program chart or the

pdpc method is used to chart the course

of events from the beginning of a plan to its completion

while emphasizin­g the ability to

identify the failure of important issues

on activity plans the pdpc helps create

appropriat­e contingenc­y plans to limit

the number of risks involved the pdpc is

used before implementi­ng a plan

especially when the plan is large and

complex if the plan must be completed on

schedule or if the price of failure is high

the given process decision program chart

shows the process which can help in

the process starts when the seller

receives an order request from a

potential buyer this can lead to fixing

an appointmen­t with the buyer confirming

the appointmen­t date and meeting the

if a date is not fixed then buyers

should be contacted till the meeting is

confirmed without a meeting there is a risk of losing the order

considerin­g an optimistic scenario where

a meeting is fixed with a buyer the

seller describes the price of the product

if the price is competitive the order is secured

if the price is not competitiv­e the

seller may have to repeat the bid until

the buyer agrees and the order is secured

however the buyer may not agree to a

revised bid either in which case the order may be lost

in such a scenario the seller can

justify the pricing and pursue the buyer

it might work and the seller might secure the order

in the next screen we will discuss the activity network diagram an activity network diagram is used to

show the time required for solving a

problem and to identify items that can be done simultaneously

it is used in scheduling and monitoring

tasks within a complex project or

process with interrelat­ed tasks and

resources moreover it is also used when

you know the steps of the project or

process their sequence and the time

taken by each of the steps involved the

original Japanese name for this tool is

the given activity Network diagram shows

a house constructi­on plan and identifies

the factors involved separately like the amount of time for each operation in one situation and the relationship of work elements without time for each operation in another

the number of days is denoted by D so

the time taken for an activity like

Foundation to scaffoldin­g takes around

five days plus four days which is nine days

the line joining electrical work and interior walls is dotted this shows a relation between them but without any time estimate

basically it means that electrical work

has to be done before interior walls but

the time is either not important or not known

let us proceed to the next topic of this

lesson in the following screen

in this topic we will introduce business

results for projects let us start with

the discussion on defect per unit

we will learn about throughput yield in this screen

throughput yield or tpy is the number of

acceptable pieces at the end of a

process divided by the number of

starting pieces excluding scrap and rework throughput yield is used to measure the performance of a single process

if the DPU is known TPY can be easily calculated as TPY equals e to the power of negative DPU where e is the mathematical constant with a value of 2.7183 the expression can also be stated as DPU equals the negative of the natural logarithm of TPY
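A quick check of this relationship in Python, using a hypothetical DPU value:

```python
import math

# TPY from DPU and back; the dpu value is hypothetical.
dpu = 0.05                     # defects per unit
tpy = math.exp(-dpu)           # TPY = e^(-DPU)  -> ~0.9512
print(f"TPY = {tpy:.4f}")
print(f"DPU recovered = {-math.log(tpy):.4f}")   # DPU = -ln(TPY)
```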

in the next screen we will discuss rolled throughput yield

rolled throughput yield or rty is the

probabilit­y of the entire process

producing zero defects rty is the true

measure of process efficiency and is

considered across multiple processes

it is important as a metric when a process consists of multiple steps

tdpu is total defects per unit and is

defined for a set of processes when the

total defects per unit is known rolled

throughput yield is calculated using the

expression e to the power of negative of

tdpu the expression can also be written

as tdpu is equal to negative of natural

when the defectives are known rolled

throughput yield can be calculated as

the product of each process's first pass

first pass yield is the number of products which pass without any rework over the total number of units first pass yield is calculated as total number of quality products over total number of units the total number of quality products is the total number of units minus the total number of defective units
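A small Python sketch tying these definitions together; the unit counts and the TDPU value are hypothetical.

```python
import math

# RTY sketch for a three-step process (unit counts are hypothetical).
# First pass yield per step = units passing without rework / units in.
steps = [(500, 480), (480, 450), (450, 441)]   # (units in, units passing first time)
rty = 1.0
for units_in, passed in steps:
    fpy = passed / units_in
    rty *= fpy                  # RTY = product of the step FPYs
print(f"RTY = {rty:.4f}")       # probability a unit passes every step defect-free

# Equivalently, when total defects per unit is known: RTY = e^(-TDPU)
tdpu = 0.12                     # hypothetical total defects per unit
print(f"RTY from TDPU = {math.exp(-tdpu):.4f}")
```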

in the following screen we will

understand fpy and rty with an example

we will discuss process capability in this screen

process capability or CP is defined as

the inherent variabilit­y of a

characteri­stic of a process or a product

in other words it might also mean how

well a process meets customer requirements

CP is an indicator of capability of a

process and is expressed as difference

of USL and LSL divided by the product of six and sigma

USL stands for upper specificat­ion limit

LSL is lower specificat­ion limit and

sigma is the standard deviation of a

process the difference between USL and

LSL is also called the specification width

in the following screen we will discuss

the process capability index or CPK was

developed to objectivel­y measure the

degree to which a process meets or does

not meet customer requiremen­ts

it was developed to account for the

position of the mean with respect to USL and LSL

to calculate CPK the first step is to

determine if the process mean is closer

to the LSL or the USL if the process mean is closer to LSL cpkl is calculated cpkl is mean minus LSL divided by

product of 3 and sigma if the process

mean is closer to USL cpku is calculated

cpku is USL minus mean divided by

product of 3 and sigma here mean is the

process average and sigma represents the standard deviation of the process

if the process mean is equidistant from both limits either of the specification limits can be

chosen CPK takes the value of cpku or cpkl whichever is the smaller
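Both indices can be computed directly from these formulas; in this sketch the specification limits, mean, and sigma are hypothetical, with an off-center mean chosen so that CPK comes out lower than CP, as discussed next.

```python
# Cp / Cpk sketch using the formulas above (all values hypothetical).
usl, lsl = 44.0, 36.0     # upper / lower specification limits
mean, sigma = 41.0, 1.0   # process mean and standard deviation

cp   = (usl - lsl) / (6 * sigma)
cpku = (usl - mean) / (3 * sigma)
cpkl = (mean - lsl) / (3 * sigma)
cpk  = min(cpku, cpkl)    # Cpk is the smaller of the two

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cp = 1.33, Cpk = 1.00
```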

in the next screen we will understand

process capability indices with an example

in this screen we will discuss CPK and CP values

a CP value of less than one indicates that the process is not capable even if CP is greater than one CPK should be checked to ascertain whether the process really is capable

a CPK value of less than one indicates

that the process is definitely not

capable but might be if CP is greater

than one and the process mean is at or

near the midpoint of the tolerance range

the CPK value will always be less than or equal to CP and strictly less than CP as long as the process mean is not at the center of the tolerance range

non-center­ing can happen when the

process has not understood the customer

expectations clearly or the process is considered complete as soon as the output reaches a specification limit

for example a shirt size of 40 has a

Target chest diameter of 40 inches but

the process consistent­ly delivers shirts

with a mean of 41 inches as the chest diameter

a machine stops removing material as

soon as the measured dimension is within tolerance

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss Team

Dynamics and performanc­e let us start

with a discussion on team stages

there are five typical stages in the

team building process each team passes

through these stages as they start and

proceed through the project the five

stages in the team building process are

as follows forming storming norming performing and adjourning

in the next screen we will discuss the

the first stage in the team building

process is called the forming stage in

this stage the team comes together and

the team leader is identified and he or

she starts directing the team and

assigning responsibi­lities to other team

members most team members at this stage

are generally enthusiast­ic and motivated

by desire to be accepted within the team

the leader employs a directive style of

management which includes delegating

responsibi­lity within the team providing

a structure to the team and determinin­g

processes needed for the smooth

functionin­g of the team toward the end

of this phase the team should achieve a

commitment to the project and an understanding of its goals

in the next screen we will discuss the

the second phase in the team building

process is called the storming stage as

suggested by the name itself in this

stage conflicts start to arise within

the team team members often struggle

over responsibilities and control within the team

it is the responsibi­lity of the team

leader to coach and conciliate the team

the leader employs a coaching style of

management which is reflected through

facilitati­ng change managing conflict

and mediating understanding between team members

towards the end of this phase team

members need to learn to voice

disagreeme­nt openly and constructi­vely

while staying focused on common

objectives and areas of agreement

in the next screen we will discuss the

the third stage in the team building

process is called the norming stage in

this stage people get along and the team

develops a unified commitment toward the project goal

the team leader promotes the team and

participat­es in the team activities team

members look to the leader to clarify

their understand­ing as some leadership

roles begin to shift within the team the leader employs a participatory

style of management through facilitati­ng

change working to build consensus and

toward the end of this phase team

members need to accept individual

responsibi­lities and work out agreements

about team procedures in the next screen

we will discuss the fourth stage

the next stage in the team building

process is called the Performing stage

this is the most productive stage for

the project team in this stage team

members manage complex tasks and work

toward the common goals of the project

the leader employs a supervisor­y style

of management by overseeing progress

rewarding achievement and supervising the team the team leader leads the project in a more or less automated mode

when the project has completed

successful­ly or when the end is in sight

the team moves into the final stage

in the next screen we will discuss the

the last stage of team building is

called the adjourning stage in this

stage the project is winding down and

the goals are within reach the team

members are dealing with their impending

separation from the team the team leader

provides feedback to the team the leader

employs a supportive style of management

by giving feedback celebratin­g

accomplish­ments and providing closure

the team leader needs to adopt a

different style of leadership at every

stage it is therefore important for a

leader to understand these stages and

identify the current stage that a team is in

the success of the team depends on how

well the leader can guide them through these stages

in the next screen we will learn about negative team dynamics

team members can exhibit negative

behavior in more than one way during the project

this behavior has a negative effect on the team

the first kind of negative participan­ts

fall in the category of overbearing participants

these participan­ts use their influence

or expertise to take on a position of

authority discountin­g contributi­ons from

other team members to cope with such

participan­ts team leaders must establish

ground rules for participat­ion and

reinforce that the group has the right

to explore any area pertinent to team

goals and objectives another kind of

negative participan­t is often referred

to as the dominant participan­t

these participan­ts take up an excessive

amount of group Time by talking too much

focusing on trivial concerns and

otherwise preventing participation by others

team leaders need to be able to control

dominant participants without inhibiting the participation of others

some other participan­ts are reluctant

participan­ts who feel intimidate­d and

are not happy with the team process

owing to their reluctance they miss

opportunit­ies to bring up data that is

valuable to the project this can often

lead to hostility within the team one

way to deal with reluctant participan­ts

is to respond positively and with

encouragement to any contribution from them

teamwork is more than a natural

consequenc­e of working together

team management is more than building a

relationship with individual team members

all teams face group challenges that

need group-base­d diagnosis and problem

solving to ensure that negative

participants are able to contribute and the team performs well

in the next screen we will learn about Six Sigma roles whose responsibilities are described here

various roles assist the smooth

execution of a Six Sigma project

these roles are required to support the

project by providing the informatio­n and

resources that are needed to execute the

the first important member of the Six

Sigma team is the executive sponsor

sponsors are the source or conduit for

project resources and they are usually

the recipients of the benefits that the project delivers

the sponsor is responsibl­e for setting

the direction and priorities for the project

the sponsor may be a functional manager

the next important role is that of the

process owners they work with the black

belts to improve their respective

processes they provide functional

expertise about the process to the

project usually this role is played by

the functional managers in charge of the process

the next role in the project is that of

the Champions they are typically upper

level managers who control and allocate resources they ensure that the organization is providing necessary

resources to the project and the project

is fitting into the strategic plans of the organization

the first role related to the execution

of the project is the role of the master black belt

this role acts as a consultant to team

leaders and offers expertise in the use

of Six Sigma tools and methodolog­ies

Master black belts are experts in Six

Sigma statistica­l tools and are

qualified to teach high-level Six Sigma

methodolog­ies and applicatio­ns

each Master black belt will have

multiple black belts under him

black belts are the leaders of

individual Six Sigma projects

they lead project teams and conduct the

detailed analysis required in Six Sigma

black belts act as instructor­s and

mentors for Green Belts and educate them

in Six Sigma tools and methods they also

protect the interests of the project by

coordinati­ng with functional managers

green belts are trained in Six Sigma but

typically lead project teams working in

their own areas of expertise

they are focused on the basic Six Sigma

tools for accelerati­on of projects

greenbelts work on projects on a

part-time basis dividing time between

project and functional responsibi­lities

an executive is the person who manages

and leads the team to ensure smooth

working of tasks and has the power to make decisions

a coach takes on a number of roles he or

she is the person who trains mentors

teaches and guides the team when needed the coach also motivates the team and builds team spirit

a facilitato­r is a guide for the team or

group also known as a discussion leader

facilitato­rs help the group or team to

understand their common objective and achieve it

a sponsor is a person who supports the

event or the project by providing all the required resources

a team member is an individual who

belongs to a particular project team

a team member contribute­s to the

performanc­e of the team and actively

participates for fulfillment of the team's goals

the progress achievemen­ts and the

details of the project have to be

effectivel­y communicat­ed to the team

management customers and stakeholde­rs

we will learn about modes of

communicat­ion in the next screen

let us understand communication within the team

the purpose of communicat­ion within the

team and the modes of communicat­ion used

are as follows meetings and emails are

suitable to communicat­e the roles and

responsibi­lities of the team members

meetings memos and emails are used by

the team to understand the project

workshops and meetings are conducted to

identify the outstanding tasks risks and issues

team meetings assist decision making

and emails ensure coordination and collaboration

the next screen will focus on

communicat­ion with stakeholde­rs

the purpose of communicat­ion with

stakeholde­rs and the modes of

communicat­ion used are as follows

meeting emails and events are suitable

to convey project objectives and goals

meetings emails and newsletter­s assist

stakeholders in understanding project progress

workshops meetings and events help

stakeholders to identify the adverse impacts

meetings with stakeholders assist decision making

in the next screen we will discuss the

communicat­ion techniques can be grouped

in various ways the first grouping of

communicat­ion techniques is based on the

direction in which communicat­ion flows

vertical communicat­ion consists of two

subtypes namely downward flow of

communicat­ion and upward flow of

in the downward flow of communicat­ion

the managers must pass informatio­n and

give orders and directives to the lower levels

on the contrary upward communicat­ion

consists of informatio­n relayed from the

bottom or grassroots levels to the higher levels of the organization

horizontal communicat­ion refers to the

sharing of informatio­n across the same

levels of the organizati­on this can be

in the form of formal and informal communication

formal Communicat­ions are official

company sanctioned methods of

communicat­ing to the employees

the Grapevine Rumor Mill Etc are some of

the means of informal communicat­ion in

the second grouping of communicat­ion

techniques is based on the usage of words verbal communication includes the use of words for communication via telephone or face-to-face

non-verbal communicat­ion conveys

messages without the use of words

through body language facial expressions and gestures

the last grouping of communicat­ion

techniques is based on participat­ion of

the people involved in communicat­ion

one-way communicat­ion happens when

informatio­n is relayed from the sender

to the receiver without the expectation of a response

two-way communicat­ion is a method in

which both parties are involved in the exchange of information

team tools are a part of the team

dynamics and performance the various team tools are brainstorming nominal group technique

and multi-voti­ng if getting your

learning started is half the battle what

if you could do that for free visit

skillup by simply learn click on the

link in the descriptio­n to know more

this lesson will cover the details of

the measure phase the key objective of

the measure phase is to gather as much

information as possible on the current process

this involves three key tasks that is

creating a detailed process map

gathering baseline data and summarizing the data

let us understand process modeling in this screen

process modeling refers to the

visualizat­ion of a proposed system

layout or other change in the process

process modeling and simulation can

determine the effectiven­ess or

ineffectiv­eness of a new design or

they can be done using process mapping

and flowcharts we will learn about these

let us understand process mapping in this screen

process mapping refers to a workflow diagram which gives a clearer understanding of the process or a series of parallel processes

it is also known as process charting or

flow charting process mapping can be

done either in the measure phase or the analyze phase

the features of process mapping are as follows

process mapping is usually the first

step in process improvemen­t process

mapping gives a wider perspectiv­e of the

problems and opportunities for process improvement

it is a systematic way of recording all

process mapping can be done by using any

of the methods like flowcharts written procedures and work instructions

let us learn about flowcharts in this screen a flowchart is a graphical representation of all the steps of a

process in consecutiv­e order it is used

to plan a project document processes and

communicat­e the process methodolog­y with

others there are many symbols used in a

flowchart and the common symbols are

it is recommende­d you take a look at the

symbols and their description for better understanding

click the button to view an example of a flowchart

the given flowchart shows the processes

involved in software developmen­t

the flowchart starts with the start box

which connects to the design box in a

software project a software design is

followed by coding which is then tested

in The Next Step there is a check for

errors in case of Errors it is evaluated

for the error type if it is a design

error it goes back to the beginning of

the design stage if it is not a design

error it is then routed to the beginning of the coding stage

on the contrary if there are no errors the process ends

let us learn about written procedures in

this screen a written procedure is a

step-by-st­ep guide to direct The Reader

through a task it is used when the

process of a routine task is lengthy and

complex and it is essential for everyone

to strictly follow the rules

written procedures can also be used when

you want to know what is going on during

product or process developmen­t phases

there are a number of benefits of written procedures they help you avoid

mistakes and ensure consistenc­y

they streamline the process and help

your employees take relevant decisions

and save a lot of time written

procedures help in improving quality

they are simple to understand as they

tend to describe the processes at a general level

in the next screen we will discuss how

work instructio­ns are helpful in

understand­ing the process in detail

work instructio­ns Define how one or more

activities involved in a procedure

should be written in a detailed manner

with the aid of technology or other

resources like flowcharts they provide

step-by-st­ep details for a sequence of

activities organized in a logical format

so that an employee can follow it easily

for example in the internal audit

procedure how to fill out the audit

results report comes under work instructions

selection of the three process mapping

tools is based on the amount of detail required

for a less detailed process you can

select flowchart and for a detailed

process with lots of instructions you can select work instructions

click the button to view an example of work instructions

this example shows the work instructio­ns

for shipping electronic instrument­s the

company name is Nutri worldwide Inc the

instructio­ns are written by Brianna

Scott and approved by Andrew Murphy it

the work instructio­ns are documented for

the shipping of electronic instrument­s

by the shipping Department the scope of

the project states that it is applicable

the procedure is divided into three

as a first step the order for the shipment is received

in this step the shipping person

receives an order number from the sales

department through an automatic order

the quantity of the instrument and its

card number are looked up from the

system file and the packaging is done as

per the instructio­ns on the card

special packing instructions must be followed

the instrument­s are then marked as per

the instructio­ns on the card and packed

in a special or standard container as

per the requiremen­t the order number is

written in the shipping system and the

packing list and shipping documentation are prepared

finally the quantity of instrument­s and

let us understand process input and

output variables in this screen

any Improvemen­t of a process has a few

prerequisi­tes to improve a process the

key process output variables kpov and

key process input variables kpiv should be identified

metrics for key process variables

include percent defective operation cost

elapsed time backlog quantity and

critical variables are best identified by process owners

process owners know and understand each

step of a process and are in a better

position to identify the critical variables

once identified the relationsh­ip between

the variables is depicted using tools

such as SIPOC and cause-and-effect diagrams

the process input variables results are

compared to determine which input

variables have the greatest effect on the output variables

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss

probabilit­y and statistics in detail let

us learn about probability in the following screen

probabilit­y refers to the chance of

something occurring or happening an

outcome is the result of a single trial

Suppose there are n possible outcomes

that are equally likely the probabilit­y

that a specific type of event or outcome

say F can occur is the number of

specific outcomes divided by the total number of possible outcomes

click the button to view an example of

in the event of tossing a coin what is

the probability of the occurrence of heads

a single trial of tossing a coin has two

outcomes heads and tails hence the

probabilit­y of heads occurring is one

divided by two the total number of outcomes

let us look at some basic properties of

probabilit­y in this screen there are

three basic properties of probabilit­y

click each property to know more

property 1 states that the probabilit­y

of an event is always between zero and one

according to Property 2 the probabilit­y

of an event that cannot occur is zero in

other words an event that cannot occur

is called an impossible event

property 3 states that the probabilit­y

of an event that must occur is one in

other words an event that must occur is called a certain event

if e is an event then the probabilit­y of

its occurrence is given by P of e it is

also read as the probabilit­y of event e

in this screen let us look at some

common terms used in probabilit­y along

with an example the commonly used terms

in probability are sample space Venn diagram and event

sample space is the collection of all

possible outcomes for a given experiment

in the coin example discussed earlier

the sample space consists of one

instance each of heads and tails if two

coins are tossed the sample space would

be four in total a Venn diagram shows

all hypothetic­ally possible logical

relations between a finite collection of

an event is a collection of outcomes for

an experiment which is any subset of the sample space

click the button to view an example of

what is the probability of getting a three followed by a two when a dice is thrown twice

when the dice is thrown twice the first throw can have any number from one to six similarly the second throw can also have any number from one to six

so the total sample space is six times

six that is 36 the event in this case is getting a three followed by a two this can happen in only one way so the probability in the question is 1 divided by 36

let us discuss the basic concepts of

some basic concepts of probabilit­y are

independen­t event dependent event

mutually exclusive and mutually inclusive events

click each concept to know more

when the probabilit­y of occurrence of an

event does not affect the probabilit­y of

occurrence of another event the two

events are said to be independen­t

suppose you roll a dice and flip a coin

the probabilit­y of getting any number on

the dice in no way influences the

probability of getting heads or tails on the coin

when the probabilit­y of one event

occurring influences the likelihood of

the other event the events are said to be dependent

events are said to be mutually exclusive

if the occurrence of any one of them

prevents the occurrence of all the

others in other words only one event can occur at a time

consider an example of flipping a coin

when you flip a coin you will either get heads or tails you can add the probabilities of these two events because they are mutually exclusive

any two events wherein one event cannot

occur without the other are said to be mutually inclusive

in this screen let us learn about the

multiplica­tion rules also known as and

rules the multiplica­tion rules or and

rules depend on the event dependency

for independen­t events that is if two

events are independen­t of each other the

special multiplica­tion rule applies for

mutually independen­t events the special

multiplica­tion rule is as follows

if the events a b c and so on are

independen­t of each other then the

probabilit­y of A and B and C and so on

is equal to the product of their individual probabilities

click the button to view an example of

Suppose there are three events which are

independen­t of each other such as the

event of flipping a coin and getting

heads drawing a card and getting an Ace

and throwing a dice and getting a one

what is the probability of occurrence of all the three events

the answer is the probability of A and B and C is equal to the product of their individual probabilities which is one divided by two multiplied by one divided by thirteen multiplied by one divided by six the result is 0.0064 hence there is a 0.64 percent probability

of all of the events occurring
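The same calculation in Python, using exact fractions to avoid rounding:

```python
from fractions import Fraction

# The three independent events from the example above.
p_heads = Fraction(1, 2)    # flipping heads
p_ace   = Fraction(4, 52)   # drawing an ace = 1/13
p_one   = Fraction(1, 6)    # rolling a one

# Special multiplication rule: multiply the individual probabilities.
p_all = p_heads * p_ace * p_one
print(p_all, float(p_all))  # 1/156 ~ 0.0064, i.e. about 0.64 percent
```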

we will continue the discussion on

multiplica­tion rules in this screen

the rule for non-independent or conditional events which is also the general multiplication

rule is as follows if a and b are two

events then the probabilit­y of A and B

is equal to the product of probabilit­y

of a and the probabilit­y of B given a

alternativ­ely we can say that for any

two events their joint probabilit­y is

equal to the probabilit­y that one of

these events occurs multiplied with the

conditiona­l probabilit­y of the other

event given the first event click the

button to view an example of this rule

a bag contains six golden coins and four

silver coins two coins are drawn without replacement

what is the probabilit­y that both of the

coins are silver let a be the event that

the first coin is silver and B be the

event that the second coin is silver

there are 10 coins in the bag four of

which are silver therefore P of A equals 4 divided by 10

after the first selection there are nine

coins in the bag three of which are

silver therefore P of B given A equals 3 divided by 9

therefore based on the rule of

multiplica­tion probabilit­y of a

intersecti­on b equals four divided by

ten multiplied by three divided by nine

the answer is twelve divided by ninety which is approximately 0.1333 hence there is about a 13.33 percent probability that both the coins are silver
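Checking the general multiplication rule on this example:

```python
from fractions import Fraction

# Two silver coins drawn without replacement from 6 gold + 4 silver.
p_first_silver  = Fraction(4, 10)   # P(A)
p_second_silver = Fraction(3, 9)    # P(B | A), one silver already removed

p_both = p_first_silver * p_second_silver   # P(A and B) = P(A) * P(B|A)
print(p_both, float(p_both))                # 2/15 ~ 0.1333
```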

in this screen we will look at the

definition­s and formula of permutatio­n

permutatio­n is the total number of ways

in which a set group or number of things can be arranged in order the order matters to a great extent in permutation

the manner in which the objects or

numbers are arranged will be considered

the formula for permutatio­n is NPR

equals P of N and R equals n factorial

divided by n minus r factorial where n

is the number of objects and R is the

number of objects taken at a time

the unordered arrangemen­t of set group

or number of things is known as

combinatio­n the order does not matter in

combinatio­n the formula for combinatio­n

is NCR equals c of N and R equals n

factorial divided by R factorial

multiplied by n minus r factorial where

n is the number of objects and R is the

number of objects taken at a time

calculatin­g permutatio­n and combinatio­n

from a group of 10 employees a company

has to select four for a particular

in how many ways can this selection

happen given the following conditions

condition one when the arrangement of employees needs to be considered and condition two when the arrangement of employees need not be considered

click the button to know the answer

in the given example the values of N and R are 10 and 4 respectively

let us consider the first condition from

a group of 10 employees four employees

need to be selected and the arrangement of employees needs to be considered

using the permutation formula nPr equals P of n and r equals n factorial divided by n minus r factorial 10P4 equals P of 10 and 4 equals 10 factorial divided by 10 minus 4 factorial which is 5040 therefore the four employees can be selected in 5040 ways

let us now consider the second condition

from a group of 10 employees four

employees need to be selected the

arrangement of employees need not be considered

using the combinatio­n formula NCR equals

c of N and R equals n factorial divided

by R factorial multiplied by n minus r

10C4 equals C of 10 and 4 equals 10 factorial divided by 4 factorial multiplied by 10 minus 4 factorial which is 210 therefore the four employees can be selected from a group of 10 employees in 210 ways
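Both results can be verified with Python's standard library:

```python
import math

n, r = 10, 4

# Permutations: order matters -> n! / (n - r)!
npr = math.factorial(n) // math.factorial(n - r)
# Combinations: order does not matter -> n! / (r! * (n - r)!)
ncr = math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

print(npr)               # 5040 ways when order matters
print(ncr)               # 210 ways when order does not matter
print(math.comb(n, r))   # Python 3.8+ built-in gives the same 210
```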

let us understand the two types of statistics

statistics refers to the science of

collection analysis interpreta­tion and

presentati­on of data in Six Sigma

statistica­l methods and principles are

used to measure and analyze the process performance

there are two major types of Statistics

descriptiv­e statistics and inferentia­l

descriptiv­e statistics is also known as

enumerativ­e statistics and inferentia­l

statistics is also known as analytical

descriptiv­e statistics include

organizing summarizin­g and presenting

the data in a meaningful way whereas

inferentia­l statistics includes making

inferences and drawing conclusion­s from

the data descriptiv­e statistics

describes what's going on in the data

the main objective of inferentia­l

statistics is to make inferences from

the data to more General conditions

histograms pie charts box plots

frequency distributi­ons and measures of

central tendency mean median and mode

are all examples of descriptiv­e

statistics on the other hand examples of

inferential statistics are hypothesis testing and confidence intervals

the main objective of statistica­l

inference is to draw conclusion­s on

population characteri­stics based on the

informatio­n available in the sample

collecting data from a population is not

always easy especially if the size of

the population is Big the easier way is

to collect a sample from the population

and from the sample statistic collected

make an assessment about the population

click the button to see an example of

the management team of a cricket council

wants to know if the team's performanc­e

has improved after recruiting a new coach

the management conducts a test to prove this

let us consider YA and YB where YA stands for the efficiency of Coach A and YB

stands for efficiency of Coach B

to conduct the test the basic assumption

is Coach A and Coach B are both equally effective this basic assumption is known as the null hypothesis the status quo is the null hypothesis and the null hypothesis is denoted by H0

the management team also challenges

their basic assumption by assuming the

coaches are not equally effective this is the alternate hypothesis

the alternate hypothesis states that the

efficienci­es of the two coaches differ

if the null hypothesis is proven wrong

the alternate hypothesis must be right

hence the alternate hypothesis H1 can be stated as YA is not equal to YB

these hypothesis statements are used in

a hypothesis test which will be

discussed in the later part of the lesson

in this screen we will learn about the

types of errors when collecting data

from a population as a sample and

forming a conclusion on the population

based on the sample you run into the

risk of committing errors there are two

possible errors that can happen type 1

error and type 2 error the type 1 error

occurs when the null hypothesis is

rejected when it is in fact true

type 1 error is also known as producer's risk

the chance of committing a type 1 error is known as alpha alpha or the significance level is the chance of committing a type 1 error and is typically chosen to be five percent

this means the maximum amount of risk

you have for committing a type 1 error is five percent

let us consider the previous example

arriving at a conclusion that Coach B is

better than coach a when in fact they

are at the same level is a type 1 error

the risk you have of committing this

error is five percent which means there

is a five percent chance your experiment

can give wrong results the type 2 error

occurs when the null hypothesis is

accepted when it is in fact false also

when you reject the alternate hypothesis

when it is actually true you commit a type 2 error a type 2 error is also referred to as

consumer's risk in comparing the two

coaches the coaches were actually

different in their efficienci­es but the

conclusion was that they are the same this is a type 2 error

the chance of committing a type 2 error

is known as beta the maximum chance of

committing a type 2 error is 20 percent

in the next screen we will learn about the central limit theorem the central limit theorem CLT states that

for a sample size greater than 30 the

sample mean is very close to the

population mean in simple words the

sample mean approaches the normal distribution as the sample size increases

for example if you have sample one and

its mean is mean one sample two and its

mean is mean two and so on take the means of mean one mean two etc. and you will find that the mean of means is the same as the population mean which is the average of the population

in such cases the standard error of mean

also known as sem that represents the

variability between the sample means is calculated

the SEM is often used to represent the

standard deviation of the sample

the formula for sem is population

standard deviation divided by the square root of the sample size

selecting a sample size also depends on

the concept called power also known as one minus beta

we will cover this concept in detail in a later lesson

let us look at the graphical

representa­tion of the central limit

theorem in the following screen

the plot of the three numbers two three

and four looks as shown in the graph it

is interestin­g to note that the total

number of times each digit is chosen is

six when the plot of the sample mean of

nine samples of size 2 each is drawn it

looks like the red line which is plotted

in the figure the x-axis shows the values of the mean which are 2 2.5 3 3.5 and 4 on

the y-axis the frequency is plotted the

point at which arrows from number two

and three converge is the mean of two

and three similarly the point at which

arrows from two and four converge is the

mean of the numbers two and four

let us discuss the concluding points of

the central limit theorem in the next

the central limit theorem concludes that

the sampling distributi­ons are helpful

in dealing with non-normal data if you

take the sample data points from a

population and plot the distributi­on of

the means of the sample you get the

sampling distributi­on of the means

the mean of the sampling distribution also known as the mean of means will be equal to the population mean also the sampling distribution approaches normality as the sample size increases

note that CLT enables you to draw

inferences from the sample statistics

about the population parameters this is

irrespecti­ve of the distributi­on of the

CLT also becomes the basis for

calculatin­g confidence interval for

hypothesis tests as it allows the use of the normal distribution
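A small simulation illustrating these conclusions; the uniform integer population is a hypothetical stand-in for any non-normal data.

```python
import random
import statistics

# CLT sketch: sample means from a decidedly non-normal (uniform integer)
# population cluster around the population mean, with spread ~ sigma/sqrt(n).
random.seed(1)
population = [random.randint(1, 100) for _ in range(10_000)]
n = 36                                    # sample size > 30

sample_means = [
    statistics.mean(random.sample(population, n)) for _ in range(1_000)
]

print("population mean:", statistics.mean(population))
print("mean of sample means:", statistics.mean(sample_means))
# Standard error of the mean: population sigma / sqrt(n)
print("predicted SEM:", statistics.pstdev(population) / n ** 0.5)
print("observed spread of sample means:", statistics.stdev(sample_means))
```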

let us proceed to the next topic of this

lesson in the following screen

in this topic we will cover the concept of probability distributions

let us start with discrete probabilit­y

distributi­on in the following screen

discrete probabilit­y distributi­on is

characteri­zed by the probabilit­y Mass

function it is important to be familiar

with discrete distributi­ons while

dealing with discrete data some of the

examples of discrete probabilit­y

distributi­on are binomial distributi­on

poisson distributi­on negative binomial

distribution geometric distribution and hypergeometric distribution

we will focus only on the two most

useful discrete distributi­ons binomial

distributi­on and poisson distributi­on

like most probabilit­y distributi­ons

these distributi­ons also help in

predicting the sample behavior that has

been observed in a population

let us learn about binomial distributi­on

binomial distributi­on is a probabilit­y

distributi­on for discrete data named

after the Swiss mathematic­ian Jacob

Bernoulli it is an application of population knowledge to predict the sample behavior

binomial distributi­on also describes the

discrete data as a result of a

particular process like the tossing of a

coin for a fixed number of times and the

success or failure in an interview

a process is known as Bernoulli'­s

process when the process output has only

two possible values like defective or OK

binomial distributi­on is used to deal

with defective items defect is any

non-compli­ance with a specificat­ion

a defective is a product or service with one or more defects

binomial distributi­on is most suitable

when the sample size is less than 30 and

less than 10 percent of the population it gives the percentage of non-defective items provided the probability of creating a defective item remains the same

the probability of exactly r successes out of a sample size of n is denoted by P of r which is equal to nCr whole multiplied by p to the power of r and 1 minus p whole to the power of n minus r in the equation p is the probability of success r is the number of successes

desired and N is the sample size to

continue discussing the binomial

distributi­on let us look at some of its

key calculatio­ns in the following screen

the mean of a binomial distribution is denoted by mu and is given by n multiplied by p the standard deviation of a binomial distribution is denoted by sigma which is equal to the square root of n multiplied by p multiplied by 1 minus p note the method of calculating factorials a factorial of 5 is the product of five four three two and one which is equal to 120 similarly factorial of 4 is the product of 4 3 2 and 1 which is equal to 24

let us look at an example of calculatin­g

binomial distributi­on in the next screen

suppose you wish to know the probabilit­y

of getting heads five times in eight

coin tosses you can use the binomial

click the answer button to see how this

the tossing of a coin has only two

outcomes heads and tails it means that

the probabilit­y of each outcome is 0.5

and it remains fixed over a period of

time additionally the outcomes are independent of each other

in this case the probabilit­y of success

denoted by P is 0.5 the number of

successes desired is denoted by R which

is 5 and the sample size is denoted by n

which is 8 therefore the probability of five heads is equal to 8C5 that is factorial of 8 divided by the product of factorial of 5 and factorial of 8 minus 5 whole multiplied by 0.5 to the power of 5 multiplied by one minus 0.5 whole to the power of 8 minus 5

this calculation gives a result of 21.87 percent
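Verifying the result in Python:

```python
import math

# Binomial check of the example: P(5 heads in 8 tosses), p = 0.5.
n, r, p = 8, 5, 0.5
prob = math.comb(n, r) * p**r * (1 - p)**(n - r)
print(f"{prob:.4f}")   # 0.2188, i.e. about 21.87 percent
```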

let us learn about poisson distribution in the next screen

poisson distributi­on is named after

Siméon Denis Poisson and is also used for discrete data

poisson distributi­on is an applicatio­n

of the population knowledge to predict

the sample Behavior it is generally used

for describing the probabilit­y

distribution of an event with respect to time

some of the characteri­stics of poisson

distributi­on are as follows

poisson distribution describes the

discrete data resulting from a process

like the number of calls received by a

call center agent or the number of accidents at a road junction

unlike binomial distributi­on which deals

with binary discrete data the poisson

distributi­on deals with integers which

can take any value poisson distributi­on

is suitable for analyzing situations

wherein the number of Trials similar to

the sample size in binomial distributi­on

is large and tends towards Infinity

additional­ly it is used in situations

where the probabilit­y of success in each

trial is very small almost tending

towards zero this is the reason why

poisson distributi­on is applicable for

predicting the occurrence of rare events

like plane crashes car accidents Etc and

is therefore widely used in the

insurance sector poisson distributi­on

can be used for predicting the number of

defects as well given a low defect

let us look at the formula for

calculatin­g poisson distributi­on in the

the poisson distributi­on for a

probabilit­y of exactly X occurrence­s is

given by P of x equals lambda to the power of x multiplied by e to the power of minus lambda whole divided by

factorial of X in this equation Lambda

is the mean number of occurrence­s during

the interval X is the number of

occurrence­s desired and E is the base of

natural logarithm which is equal to 2.7183

the mean of the poisson distribution is given by lambda and the standard deviation of a poisson distribution is given by sigma which is the square root of lambda

let us look at an example to calculate

poisson distributi­on in the next screen

the past records of a road junction which

is accident prone show a mean number of

five accidents per week at this Junction

assume that the number of accidents

follows a poisson distribution and

calculate the probabilit­y of any number

of accidents happening in a week

click the button to know the answer

given the situation you know that the

value of Lambda or mean is 5. so P of 0

that is the probabilit­y of zero

accidents per week is calculated as 5 to

the power of zero multiplied by e to the

power of minus five whole divided by a

factorial of zero the answer is

0.006 applying the same formula the

probabilit­y of one accident per week is

0.0337 and the probability of two

accidents per week is 0.0842 the

probability of more than two accidents

per week is one minus the sum of the

probabilities of zero one and two

which is 1 minus 0.1247 or 0.8753 in

other words the probability is about

87.5 percent
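here is a minimal Python sketch of the same poisson calculation added for illustration the helper name poisson_probability is not from the course

```python
import math

def poisson_probability(lam, x):
    # P(X = x) = lambda^x * e^(-lambda) / x!
    return lam ** x * math.exp(-lam) / math.factorial(x)

lam = 5  # mean number of accidents per week
p0, p1, p2 = (poisson_probability(lam, x) for x in range(3))

print(round(p0, 4), round(p1, 4), round(p2, 4))  # 0.0067 0.0337 0.0842
print(round(1 - (p0 + p1 + p2), 4))              # 0.8753, about 87.5 percent
```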

let us learn about normal distributi­on

the normal or gaussian distributi­on is a

continuous probabilit­y distributi­on the

normal distributi­on is represente­d as n

and depends on two factors mu which

stands for mean and sigma which gives

the standard deviation of the data

points normal distributi­on normally has

a higher frequency of values around the

mean and lesser occurrences away from it

it is often used as a first

approximation to describe real valued

random variables that tend to cluster

around a single mean value

the distributi­on is bell-shape­d and

symmetrica­l the total area under the

normal curve is one which is the total

probability across all values of x

various types of data such as body

weight height the output of a

manufactur­ing device Etc follow the

normal distributi­on additional­ly normal

distributi­on is continuous and

symmetrica­l with the tails asymptotic to

the x-axis which means they touch the

x-axis at Infinity let us continue to

discuss normal distribution in the

next screen

in a normal distribution to standardize

comparisons of dispersion across the

different measuremen­t units like inches

meters grams Etc a standard Z variable

is used the uses of Z value are as

follows while the value of Z or the

number of standard deviations is unique

for each probabilit­y within the normal

distributi­on it helps in finding

probabilit­ies of data points anywhere

within the distributi­on it is

dimensionl­ess as well that is it has no

units such as millimeter­s liters

there are different formulas to arrive

at the normal distributi­on we will focus

on one commonly used formula for

calculatin­g normal distributi­on which is

z equals y minus mu whole divided by

Sigma here Z is the number of standard

deviations between Y and the mean

denoted by mu Y is the value of the data

point in concern mu is mean of the

population or data points and sigma is

the standard deviation of the population

or data points let us look at an example

for calculating normal distribution in

the next screen

suppose the time taken to resolve

customer problems follows a normal

distributi­on with a mean of 250 hours

and standard deviation of 23 hours find

the probabilit­y of a problem resolution

taking more than 300 hours click the

button to know the answer

in this case Y is equal to 300 mu equals

250 and sigma equals 23. applying the

normal distributi­on formula Z is equal

to 300 minus 250 whole divided by 23.

the result is 2.17 when you look at the

normal distribution table the Z value of

2.17 corresponds to a probability of

0.985 this means the probability of a

problem

taking zero to three hundred hours to be

resolved is 98.5 percent and therefore

the chances of a problem resolution

taking more than 300 hours is 1.5

percent
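the table lookup can be reproduced with Python's standard library a minimal sketch assuming Python 3.8 or later for statistics.NormalDist

```python
from statistics import NormalDist

mean, sigma = 250, 23  # resolution time in hours
y = 300

z = (y - mean) / sigma         # number of standard deviations from the mean
p_below = NormalDist().cdf(z)  # area to the left of z under the standard normal

print(round(z, 2))            # 2.17
print(round(p_below, 3))      # 0.985 -> resolved within 300 hours
print(round(1 - p_below, 3))  # 0.015 -> 1.5 percent take longer
```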

let us understand the usage of the Z

table the graphical representation of

the Z table shows probabilities as areas

under the curve for an actual value one

can identify the z-score by using the Z

table as shown this probability is the

area under the curve to the left of the

point

using the actual data when you calculate

mean and standard deviation and the

values are 25 and 5 respectively it is

a normal distribution

if the same data is standardiz­ed to a

mean value of zero and standard

deviation value of one it is the

standard normal distributi­on

in the next screen we will take a look

at the Z table

the Z table gives the probabilit­y that Z

is between 0 and a positive number

there are different forms of normal

distributi­on Z tables followed globally

the most common form of Z table with

positive z-scores is shown here

the value of a called the percentage

point is given along the borders of the

table in bold and is to two decimal

places

the values in the main table are the

probabilities that Z is between 0 and a

note that the values running down the

table are to one decimal place

the numbers along the columns change only

in the second decimal place

let us look at some examples and how to

use a z table in the following screen

let us find the value of P of Z less

than or equal to zero

the table is not needed to find the

answer once we know that the variable Z

takes a value less than or equal to zero

first the area under the curve is one

and second the curve is symmetrica­l

about Z equals zero hence there is a 0.5

or 50 percent chance of Z lying above 0

and a 0.5 or 50 percent chance of Z

lying below 0 therefore P of Z less than

or equal to zero is 0.5

let us find the value of P of Z greater

than 1.12

in this case the chance of Z is greater

than a number in this case 1.12

you can find this by using the following

complement rule

the opposite or complement of an event

of a is the event of not a that is the

opposite or complement of event a

occurring is the event a not occurring

its probability is given by P of not a

equals 1 minus P of a

in other words P of Z greater than 1.12

is 1 minus the opposite which is P of Z

less than 1.12

using the table P of Z less than 1.12

equals 0.5 plus P of 0 less than Z less

than 1.12 which is 0.5 plus 0.3686 or

0.8686

hence the answer is P of Z greater than

1.12 equals 1 minus 0.8686 which is

0.1314 note the answer is less than 0.5

let us find the value of P of Z lying

between 0 and 1.12

in this case where Z falls within an

interval the probability can be read

directly from the table

P of Z lies between 0 and 1.12 equals

0.3686
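the three lookups above can be verified with the same standard library class an illustrative sketch not part of the course

```python
from statistics import NormalDist

z = NormalDist()  # standard normal, mean 0 and standard deviation 1

print(z.cdf(0))                          # 0.5    -> P(Z <= 0)
print(round(1 - z.cdf(1.12), 4))         # 0.1314 -> P(Z > 1.12)
print(round(z.cdf(1.12) - z.cdf(0), 4))  # 0.3686 -> P(0 < Z < 1.12)
```

we will now learn about the chi-square distribution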

chi-squared distribution is also known

as the chi-square distribution

the chi-squared distribution with K

degrees of freedom is the distribution

of a sum of the squares of K

independent standard normal random

variables

the chi-square distributi­on is one of

the most widely used probabilit­y

distributi­ons in inferentia­l statistics

it is also used in hypothesis

testing when used in hypothesis tests

it only needs one sample for the test

convention­ally degree of freedom is K

minus one where K is the sample size

for example if w x y and z are four

random variables with standard normal

distributi­ons then the random variable F

which is the sum of w Square x square y

square and z-square has a chi-square

distribution

the degrees of freedom of the

distributi­on DF equals the number of

normally distribute­d variables used

in this case DF is equal to 4.
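as a quick illustration of this example here is a small simulation sketch it is not from the course and uses only the Python standard library

```python
import random

random.seed(1)

def sum_of_four_squares():
    # w^2 + x^2 + y^2 + z^2 for four independent standard normal variables
    return sum(random.gauss(0, 1) ** 2 for _ in range(4))

samples = [sum_of_four_squares() for _ in range(100_000)]

# the mean of a chi-square distribution equals its degrees of freedom,
# so the sample mean should be close to df = 4
print(round(sum(samples) / len(samples), 2))
```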

let us look at the formula to calculate

chi-square distributi­on in the following

chi-square calculated or the chi-square

index equals the sum over all categories

of F of O minus F of E whole square

divided by F of E

here F of O stands for an observed

frequency and F of e stands for an

expected frequency determined through a

contingency table
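before moving on here is a sketch of the chi-square index calculation the observed and expected counts below are hypothetical numbers chosen only to illustrate the formula

```python
def chi_square_index(observed, expected):
    # sum of (fo - fe)^2 / fe over all categories
    return sum((fo - fe) ** 2 / fe for fo, fe in zip(observed, expected))

observed = [18, 22, 30, 30]  # hypothetical observed frequencies
expected = [25, 25, 25, 25]  # hypothetical expected frequencies

print(chi_square_index(observed, expected))  # 4.32
```

let us understand T distribution in the next screen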

the T distributi­on method is the most

appropriat­e method to be used in the

following situations when you have a

sample size of less than 30 when the

population standard deviation is not

known

unlike the normal distributi­on a t

distributi­on is lower at the mean and

higher at the Tails as seen in the image

T distributi­on is used for hypothesis

testing also as seen in the image the

t-distribu­tion is symmetrica­l in shape

but flatter than the normal distributi­on

as the sample size increases the T

distributi­on approaches normality

for every possible sample size or

degrees of freedom there is a different

T distribution

let us learn about F distribution in the

next screen

the F distributi­on is a ratio of two

chi-square­d distributi­ons a specific F

distributi­on is denoted by the ratio of

the degrees of freedom for the numerator

chi-square and the degrees of freedom

for the denominato­r chi-square

the f-test is performed to calculate and

observe if the standard deviations or

variances of two processes are

significantly different

the project teams are usually concerned

about reducing the process variance as

per the formula f calculated equals S1

Square divided by S2 Square where S1 and

S2 are the standard deviations of the

two processes

if the F calculated is one it implies

there is no difference in the variance

if S1 is greater than S2 then the

numerator must be greater than the

denominato­r in other words df1 equals N1

minus 1 and df2 equals N2 minus 1.
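the F calculation can be sketched in Python as follows the sample values are hypothetical and the critical value lookup assumes the third-party scipy package is available

```python
from scipy import stats  # third-party package, assumed installed

s1, s2 = 4.8, 3.6  # hypothetical standard deviations of two processes
n1, n2 = 25, 25    # hypothetical sample sizes

f_calculated = s1 ** 2 / s2 ** 2  # larger variance in the numerator
df1, df2 = n1 - 1, n2 - 1

alpha = 0.05
f_critical = stats.f.ppf(1 - alpha, df1, df2)

print(round(f_calculated, 2))  # 1.78
print(round(f_critical, 2))    # about 1.98
# f_calculated below f_critical suggests no significant variance difference
```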

from the F distribution table you can

find the critical value of the F

distribution at Alpha and the degrees of

freedom of the samples of two different

processes df1 and df2 let us proceed to

the next topic of this lesson in the

in this topic we will discuss collecting

and summarizin­g data in detail let us

learn about types of data in the

data is objective informatio­n which

everyone can agree on it is a collection

of facts from which conclusion­s may be

drawn the two types of data are

attribute data and variable data click

each type to know more

discrete data is data that can be

counted and only includes numbers such

as 1 2 and 3

attribute data is commonly called pass

fail data

attribute or discrete data cannot be

broken down into a smaller unit

meaningful­ly it answers questions such

as how many how often or what type some

examples of attribute data are number of

defective products percentage of

defective products frequency at which a

machine is repaired or the type of award

any data that can be measured on a

continuous scale is continuous or

variable data this type of data answers

questions such as how long what volume

examples of continuous data include

height weight time taken to complete a

task Etc

let us understand the importance of

selecting the data type in this screen

deciding the data type facilitates

effective data collection and analysis

therefore the first step in the measure

phase is to determine what type of data

should be collected this can be done by

considering the following

the first consideration is to identify

the key variables to be measured

for this the values already identified

in the earlier phases can be used

these include critical to Quality

parameters or ctqs key process output

variables or kpovs and the key process

input variable or kpivs next to

understand how to proceed with the data

gathered it is necessary to determine

the data type that fits the metrics for

the key variables identified

the question now arises why should the

data type be determined

this is important as it enables the

right set of data to be collected

analyzed and used to draw inferences

it is not advisable to convert one type

of data into another converting

attribute data to variable data is

difficult and requires assumption­s to be

made about the process it may also

require additional data Gathering

let us look at measuremen­t scales in the

there are four measuremen­t scales

arranged in the table in increasing

order of their statistica­l desirabili­ty

in the nominal scale the data consists

of only names or categories and there is

no inherent ordering

an example of this type of measuremen­t

can be a bag of colored balls which

contains 10 green balls five black balls

eight yellow balls and nine white balls

this is the least informativ­e of all

scales the most appropriat­e measure of

central tendency for this scale is mode

in the ordinal or ranking scale data is

arranged in order and values can be

compared with each other

an example of this scale can be the

ratings given to different restaurant­s

three for a five for B two for C and

so on

the central tendency for this scale is

median or mode the interval scale is

used for ranking items in Step order

along a scale of equidistan­t points for

example the temperatur­es of three metal

rods are 100 degrees 200 degrees and 600

degrees Fahrenheit respective­ly note

that 3 times 200 degrees is not the same

as 600 Degrees as a temperatur­e

the central tendency here is mean median

or mode

mean is used if the data does not have

outliers

the ratio scale represents variable data

and is measured against a known standard

however this scale also has an absolute

zero that is no numbers exist below zero

examples of the ratio scale are

physical measures where height weight

and electric charge represent ratio

scales

note that negative length is not

possible again here you would use mean

median or mode as the central tendency

in the next screen we will learn about

to ensure data is accurate sampling

techniques are used sampling is the

process act or technique of selecting an

appropriate test group or sample from a

population

it is preferable to survey a sample of

100 people rather than an entire

population as sampling saves time money

and effort

the three types of sampling techniques

described here are random sampling

sequential sampling and stratified

random sampling is the technique where a

group of subjects or a sample for study

is selected from a larger group or

population at random

sequential sampling is similar to

multiple sampling plans except that it

can in theory continue indefinite­ly

in other words it is a non-probab­ility

sampling technique wherein the

researcher picks a single subject or a

group of subjects in a given time

interval conducts the study analyzes the

results and then picks another group of

subjects if needed and so on

in stratified sampling the idea is to

take samples from subgroups of a

population

this technique gives an accurate

estimate of the population parameter

in this screen we will compare simple

random sampling with stratified sampling

simple random sampling is easy to do

while stratified sampling takes a lot of

time the possibilit­y of simple random

sampling giving erroneous results is

high

while stratified sampling minimizes the

chances of error simple random sampling

doesn't have the power to show possible

causes of variation while stratified

sampling if done correctly will show

possible causes of variation

in the next screen we will look at the

check sheet method of collecting data

the process of collecting data is

expensive wrongly collected data leading

to wrong analysis and inferences results

in further losses

a check sheet is a structured form

prepared to collect and analyze data it

is a generic tool that is relatively

simple to use and can be adapted for a

wide variety of purposes

check sheets are used when the data can

be observed and collected repeatedly by

the same person or at the same location

they are also used while collecting data

from a production process a common

example is calculating the number of

absentees

the table shows absentee data collected

we will discuss data coding and its

advantages in the following screen

data coding is a process of converting

and condensing raw data into categories

and sets so that the data can be used

for further analysis the benefits of

data coding are listed here

data coding simplifies the large

quantity of data that is collected from

sources the large amount of data makes

analysis and drawing conclusions

difficult

it leads to chaos and ambiguity

data coding simplifies the data by

coding it into variables and then

categorizi­ng these variables raw data

cannot be easily entered into computers

for analysis data coding is used to

convert raw data into process data that

can be easily fed into Computing systems
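a tiny sketch of what data coding can look like in practice the categories and the coding scheme below are hypothetical

```python
# raw categorical results coded into numbers before analysis
raw_results = ["pass", "fail", "pass", "rework", "pass", "fail"]

codes = {"pass": 1, "fail": 2, "rework": 3}  # hypothetical coding scheme
coded = [codes[result] for result in raw_results]

print(coded)  # [1, 2, 1, 3, 1, 2]
```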

coding of data makes it easy to analyze

the data converted data can either be

analyzed directly or fed into computers

the analyst can easily draw conclusion­s

when all the data is categorized and

coded

data coding also enables organized

representa­tion of data division of data

into categories helps organize large

chunks of informatio­n thus making

analysis and interpreta­tion easier data

coding also ensures that data repetition

does not occur and duplicate entries are

eliminated so that the final result is

not affected in the following screen we

will discuss measures of central

tendency of the descriptiv­e statistics

a measure of central tendency is a

single value that indicates the central

point in a set of data and helps in

identifyin­g data trends the three most

commonly used measures of the central

tendency are mean median and mode

click each measure to know more

mean is the most common measure of

central tendency it is the sum of all

the data values divided by the number of

data values

also called arithmetic mean or average

it is the most widely used measure of

central tendency

also known as positional mean median is

the number present in the middle of the

data set when the numbers are arranged

in ascending or descending order

if the data set has an even number of

entries then the median is the mean of

the two middle values

the position of the median can also be

calculated by the formula n plus 1

divided by two where n is the number of

data points

mode also known as frequency mean is the

value that occurs most frequently in a

data set

data sets that have more than one mode

are called multimodal

let us look at an example for

determinin­g mean median and mode in this

the data set has the numbers 1 2 3 4 5 5

6 7 and 8. click the button to know the

answer

as previously defined mean is the sum of

all the data items divided by the number

of items therefore the mean is equal to

41 divided by 9 which is equal to 4.56

the number in the middle of the data set

is five therefore the median is five

mode is the most frequently occurring

value which in this data set is 5
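the same three measures can be checked with Python's standard library a minimal sketch added for illustration

```python
from statistics import mean, median, mode

data = [1, 2, 3, 4, 5, 5, 6, 7, 8]

print(round(mean(data), 2))  # 4.56 (the sum 41 divided by 9)
print(median(data))          # 5
print(mode(data))            # 5
```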

in this screen we will understand the

effect of outliers on the data set

let us consider a minor change to the

data set a new number 100 is added to

it

on using the same formula to calculate

mean the new mean is 14.1 ideally 50

percent of values should lie on either

side of the mean

however in this example it can be seen

that almost 90 percent of values lie

below the mean value of 14.1 and only

one value lies above it

the data point 100 is called an outlier

an outlier is an extreme value in the

data set that skews the mean value to

one side

note that the median remains unchanged

at five therefore mean is not an

appropriat­e measure of central tendency

if the data has outliers median is

a better measure

in the next screen we will look at

measures of dispersion of the

descriptive statistics

apart from central tendency another

important parameter to describe a data

set is spread or dispersion contrary to

the measures of central tendency such as

mean median and mode measures of

dispersion Express the spread of values

higher the variation of data points

higher the spread of the data

the three main measures of dispersion

are range variance and standard

deviation we will discuss each of these

let us start with the first measure of

dispersion the range

the range of a particular set of data is

defined as the difference between the

largest and smallest values of the data

in the example the largest value of the

data is nine and the smallest value is

one therefore the range is nine minus

one eight in calculatin­g range all the

data points are not needed and only the

maximum and minimum values are required

let us understand the next measure of

dispersion variance in the following

the variance denoted as Sigma square or

S square is defined as the average of

squared mean differences and shows the

spread of the data around the mean

to calculate the variance for a sample

data set of 10 numbers type the numbers

in an Excel sheet calculate the variance

using the formula equals varp or vars

the varp formula gives the population

variance which is 7.24 for this example

the vars formula gives the sample

variance

population variance is calculated when

the data set is for the entire

population and Sample variance is

calculated when data is available only

for a sample of the population

population variance is preferred over

sample variance as the latter is only an

estimate

sample variance allows for a broader

range of possible answers for the true

population variance

that is the confidence levels are higher

note that variance is a measure of

variation and cannot be considered as

the variation in a data set

in the following screen we will

understand the next measure of

dispersion standard deviation

standard deviation denoted by Sigma or S

is given by the square root of variance

the statistical notation of this is

shown on the screen

standard deviation is the most important

measure of dispersion

standard deviation is always relative to

the mean

for the same data set the population

standard deviation is 2.69 and Sample

standard deviation is 2.83 as in

variance calculatio­n if the data set is

measured for every unit in a population

the population standard deviation and

Sample standard deviation can be

calculated in Excel using the formula

the steps to manually calculate the

standard deviation are as follows

first calculate the mean then calculate

the difference between each data point

and the mean and square that answer

next calculate the sum of the squares

next divide the sum of the squares by n

or n minus 1 to find the variance lastly

find the square root of variance which

gives the standard deviation
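the manual steps above translate directly into Python the data set below is hypothetical as the course's Excel example values are not listed here

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical data set

mean = sum(data) / len(data)
squared_diffs = [(x - mean) ** 2 for x in data]

population_variance = sum(squared_diffs) / len(data)    # divide by n
sample_variance = sum(squared_diffs) / (len(data) - 1)  # divide by n - 1

print(population_variance, math.sqrt(population_variance))              # 4.0 2.0
print(round(sample_variance, 2), round(math.sqrt(sample_variance), 2))  # 4.57 2.14
```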

in the next screen we will look at

frequency distribution of the

descriptive statistics

frequency distributi­on is a method of

grouping data into mutually exclusive

categories showing the number of

observations in each category

an example is presented to demonstrat­e

frequency distributi­on a survey was

conducted among the residents of a

particular area to collect data on cars

owned per household

a total of 20 homes were surveyed

to create a frequency table for the

results collected in the survey the

first step is to divide the results into

intervals and count the number of

results falling in each interval

for instance in this example the

intervals would be the number of

households with no car one car two cars

and so on

next a table is created with separate

columns for the intervals the tallied

results for each interval and the number

of occurrence­s or frequency of results

each result for a given interval is

recorded with a tally mark in the second

column the tally marks for each interval

are added and the sum is entered in the

frequency column

the frequency table allows viewing

distribution of data across a set of

values
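a short sketch of building such a table in Python the survey responses below are hypothetical and the output also previews the cumulative and percentage columns discussed next

```python
from collections import Counter

# hypothetical survey: cars owned by each of 20 households
cars = [0, 1, 1, 2, 0, 1, 3, 2, 1, 0, 1, 2, 1, 0, 2, 1, 1, 3, 0, 1]

frequency = Counter(cars)
total = len(cars)

cumulative = 0
for value in sorted(frequency):
    count = frequency[value]
    cumulative += count
    # interval, frequency, cumulative frequency, percentage
    print(value, count, cumulative, f"{100 * count / total:.0f}%")
```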

in the following screen we will look at

cumulative frequency distributi­on

a cumulative frequency distributi­on

table is similar to the frequency

distributi­on table only more detailed

there are additional columns for

cumulative frequency percentage and

cumulative percentage

in the cumulative frequency column the

cumulative frequency of the previous row

or rows is added to the current row

the percentage is calculated by dividing

the frequency by the total number of

results and multiplyin­g by 100. the

cumulative percentage is calculated

similar to the cumulative frequency

let us look at an example for cumulative

frequency distribution

the ages of all the participan­ts in a

chess tournament are recorded the lowest

age is 37 and the highest is 91.

keeping intervals of 10 the lowest

interval starts with the lower limit as

35 and the upper limit as 44.

similar intervals are created until an

interval includes the highest age

in the frequency column the number of

times a result appears in a particular

interval is recorded in the cumulative

frequency column the cumulative

frequency of the previous row is added

to the frequency of the current row

for the first row the cumulative

frequency is the same as the frequency

in the second row the cumulative

frequency is one plus two which is 3 and

so on

in the percentage column the percentage

of the frequency is listed by dividing

the frequency by the total number of

results which is 10 and multiplyin­g the

value by 100. for instance in the first

row the frequency is 1 and the number of

results is 10. therefore the percentage

is 10. the final column is the

cumulative percentage column in this

column the cumulative frequency is

divided by the total number of results

which is 10 and the value is multiplied

by 100

note that the last number in this column

should be equal to 100. in this example

the cumulative frequency is one and the

total number of results is 10. therefore

the cumulative percentage of the first

row is 10. let us look at the stem and

leaf plots which is one of the graphical

methods of understand­ing distributi­on

graphical methods are extremely useful

tools to understand how data is

distribute­d sometimes merely by looking

at the data distribution errors in a

data set can be identified

the stem and leaf method is a convenient

method of manually plotting data sets it

is used for presenting data in a

graphical format to assist visualizin­g

the shape of a given distributi­on

in the example on the screen the

temperatur­es in Fahrenheit for the month

of May are given to collate this

informatio­n in a stem and leaf plot all

the tens digits are entered in the stem

column and all the units digits against

each tens digit are entered in the leaf

column to start with the lowest value is

considered in this case the lowest

temperature is 51. in the first row five

is entered in the stem column and one

in the leaf column the next lowest

temperatur­e is 58. 8 is entered in the

leaf column correspond­ing to 5 in the

the next number is 59. all the

temperatures falling in the 50s are

entered in the first row

in the next row the same process is

repeated for temperatur­es in the 60s

this is continued till all the

temperature values are entered in the

plot
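a small sketch of the same plotting logic in Python the temperature values below are hypothetical since the full May data is not listed here

```python
from collections import defaultdict

temps = [51, 58, 59, 62, 64, 64, 67, 71, 73, 75, 78, 80, 81, 85]  # hypothetical

stems = defaultdict(list)
for t in sorted(temps):
    stems[t // 10].append(t % 10)  # tens digit is the stem, units digit the leaf

for stem in sorted(stems):
    print(stem, "|", " ".join(str(leaf) for leaf in stems[stem]))
```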

let us understand another graphical

method in the next screen box and

whisker plots

a box and whisker graph based on medians

or quartiles is used to display a data

set in a way that allows viewing the

distributi­on of the data points easily

consider the following example the

lengths of 13 fish caught in a lake were

measured and recorded the data set is

shown on the screen

the first step to draw a box and whisker

plot is therefore to arrange the numbers

in ascending order

next find the median as there is an odd

number of data entries the median is the

number in the middle of the data set

which in this case is 12. the next step

is to find the lower median or quartile

this is the median of the lower six

numbers the middle of these numbers is

halfway between eight and nine which

is 8.5

similarly the upper median or quartile

is located for the upper six numbers to

the right of the median the upper median

is halfway between the two values 14 and

14. therefore the upper median is 14.

let us now understand how the box and

whisker chart is drawn using the values

of the median and upper and lower

quartiles the next step is a number line

is drawn extending far enough to include

all the data values

then a vertical line is drawn from the

median point 12. the lower and upper

quartiles 8.5 and 14 respective­ly are

marked with vertical lines and these are

joined with the median line to form two

boxes as shown on the screen

next two whiskers are extended from

either ends of the boxes as shown to the

smallest and largest numbers in the data

set

the box and whiskers graph is now

complete the following inferences can be

drawn from the box and whisker plot the

lengths of the fish range from 5 to 20.

the range is therefore 15. the quartiles

split the data into four equal parts in

other words one quarter of the data

numbers is less than 8.5 one quarter

between 8.5 and 12 next quarter of the

data numbers are between 12 and 14. and

another quarter has data numbers greater

than 14
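the quartiles can be computed in Python as follows the exact 13 measurements are not listed in the transcript so this sketch uses a hypothetical data set chosen to match the quoted summary values

```python
from statistics import median

# hypothetical fish lengths consistent with the example's summary values
lengths = sorted([5, 7, 8, 8, 9, 10, 12, 13, 14, 14, 14, 16, 20])

mid = len(lengths) // 2     # 13 values, so the median is the 7th value
q2 = median(lengths)        # 12
q1 = median(lengths[:mid])  # median of the lower six numbers -> 8.5
q3 = median(lengths[-mid:]) # median of the upper six numbers -> 14

print(min(lengths), q1, q2, q3, max(lengths))  # 5 8.5 12 14 20
```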

in this screen we will learn about

another graphical method scatter

diagrams

a scatter diagram or scatter plot is a

tool used to analyze the relationsh­ip or

correlatio­n between two sets of

variables X and Y with X as the

independent variable and Y as the

dependent variable

a scatter diagram is also useful when

cause effect relationsh­ips have to be

examined or root causes have to be

identified

there are five different types of

correlation that can be seen in a

scatter diagram

let us learn about them in the next

screen

the five types of correlatio­n are

perfect positive correlatio­n moderate

positive correlatio­n no relation or no

correlatio­n moderate negative

correlation and perfect negative

correlation

click each type to learn more

in perfect positive correlatio­n the

value of dependent variable y increases

proportion­ally with any increase in the

value of independen­t variable X

this is said to be one is to one that is

any change in one variable results in an

equal amount of change in the other

the following example is presented to

demonstrat­e perfect positive correlatio­n

the consumptio­n of milk is found to

increase proportion­ally with an increase

in the consumptio­n of coffee

the data is presented in the table on

the screen the scatter diagram for the

data is also shown

it can be observed from the graph that

as X increases y also increases

proportionally hence the points are

linear

in this type of correlatio­n as the value

of the X variable increases the value of

y also increases but not in the same

proportion

to demonstrate this the following

example is presented

the increase in savings for increase in

salary is shown in the table

as you can notice in the scatter diagram

the points are not linear although the

value of y increases with increase in

the value of x the increase is not

proportional

when a change in one variable has no

impact on the other there is no relation

let us consider the following example to

study the relation between the number of

fresh graduates in the city and the job

openings available

data for both was collected over a few

months and tabulated as shown

the scatter diagram for the same is also

it can be observed that the data points

are scattered and there is no Trend

therefore there is no correlatio­n

between the number of fresh graduates

and the number of job openings in the

city

in moderate negative correlatio­n an

increase in one variable results in a

decrease in the other variable however

this change is not proportion­al to the

change in the first variable to

demonstrate moderate negative correlation

the prices of different products are

listed along with the number of units

sold

the data is shown in the table

from the scatter diagram shown it can be

observed that higher the price of a

product lesser are the number of units

sold

however the decrease in the number of

units with increasing price is not

proportional

in perfect negative correlatio­n an

increase in one variable results in a

proportional decrease of the other

variable

this is also an example of one is to one

correlatio­n as an example the effect of

an increase in the project time

extension on the success of project is

considered the data is shown in the

table the scatter diagram for the data

shows a proportion­al decrease in the

probabilit­y of the Project's success

with each extension of the project time

hence the points are linear

perfect correlatio­ns are rare in the

real world when encountered they should

be investigated
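correlation can also be quantified numerically a sketch using statistics.correlation which requires Python 3.10 or later the paired data below is hypothetical

```python
from statistics import correlation  # Python 3.10 or later

price = [10, 20, 30, 40, 50, 60]  # hypothetical product prices
units = [95, 80, 72, 55, 50, 30]  # hypothetical units sold

print(round(correlation(price, units), 2))  # about -0.99, strongly negative
```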

in this screen we will look at another

histograms are similar to bar graphs

except that the data in histograms is

grouped into intervals they are used to

represent category wise data graphicall­y

a histogram is best suited for

continuous data

the following example illustrate­s how a

histogram is used to represent data

data on the number of hours spent by a

group of 15 people on a special project

in one week is collected this data is

then divided into intervals of two and

the frequency table for the data is

shown on the screen

the histogram for the same data is also

shown

looking at the histogram it can be

observed at a glance that most of the

team members spent between two to four

hours on the project

in the following screen we will look at

the next graphical method normal

probability plots

normal probabilit­y plots are used to

identify if a sample has been taken from

a normally distributed population

when sample data from a normally

distributed population is represented

as a normal probability plot it forms a

straight line

the following example is presented to

illustrate normal probabilit­y plots

a sampling of diameters from a drilling

operation is done and the data is

recorded the data set is given

to create a normal probabilit­y plot the

first step is to construct a cumulative

this is followed by calculatin­g the mean

rank probabilit­y by dividing the

cumulative frequency by the number of

samples plus one and multiplying the

value by 100

the fully populated table for mean rank

probabilit­y estimation is shown on the

screen please take a look at the same
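the mean rank probability column can be reproduced with a few lines of Python the diameters below are hypothetical since the recorded data set is not listed here

```python
# mean rank probability: cumulative frequency / (number of samples + 1) * 100
diameters = sorted([9.97, 9.98, 10.00, 10.01, 10.02, 10.03, 10.05])  # hypothetical

n = len(diameters)
for rank, d in enumerate(diameters, start=1):
    print(d, round(rank / (n + 1) * 100, 1))
```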

in The Next Step a graph is plotted on

log paper or with minitab using this

data minitab is a statistica­l software

used in Six Sigma minitab normal

probability plot instructions are also

shown on the screen

the completed graph is shown on the

screen

from the graph it can be seen that the

random sample forms a straight line and

therefore the data is taken from a

normally distribute­d population Learners

check out our certified lean Six Sigma

Green Belt certificat­ion training course

and earn a Green Belt certificat­ion to

learn more about this course you can

click the course Link in the descriptio­n

box below let us proceed to the next

topic of this lesson in this topic we

will discuss measuremen­t system analysis

let us understand what MSA is in the

following screen throughout the DMAIC

process the output of the measuremen­t

system Ms is used for metrics analysis

an error-pron­e measuremen­t system will

only lead to incorrect data incorrect

data leads to incorrect conclusion­s

it is important to set right the MS

measuremen­t system analysis or MSA is a

technique that identifies measuremen­t

error or variation and sources of that

error in order to reduce the variation

it evaluates the measuring system to

ensure the Integrity of data used for

MSA is therefore one of the first

activities in the measure phase

the measuremen­t system's capability is

calculated analyzed and interprete­d

using gauge repeatabil­ity and

reproducib­ility to determine measuremen­t

correlatio­n bias linearity percent

agreement and precision or tolerance

let us discuss the objectives of MSA in

the next screen

a primary objective of MSA is to obtain

informatio­n about the type of

measuremen­t variation associated with

the measuremen­t system it is also used

to establish criteria to accept and

release new measuring equipment

MSA also compares one measuring method

against another it helps to form a basis

for evaluating a method which is

suspected of being deficient

the measuremen­t system variations should

be resolved to arrive at the correct

baselines for the project objectives

as baselines contain crucial data based

on which decisions are taken it is

extremely important that the measuremen­t

system be free of error as far as

possible

let us look at measuremen­t analysis in

in measuremen­t analysis The observed

value is equal to the sum of the true

value and the measuremen­t error the

measurement error can be a negative or a

positive value

measuremen­t error refers to the net

effect of all sources of measuremen­t

variability that cause an observed

value to deviate from the True Value

the total variability is the sum of the

process variability and the measurement

variability

process variabilit­y and measuremen­t

variabilit­y must be evaluated and

measuremen­t variabilit­y should be

addressed before looking at process

variability

if the process variability is

corrected before resolving measuremen­t

variabilit­y then any improvemen­ts to the

process cannot be trusted to have taken

place owing to a faulty measurement

system

in the following screen we will identify

the types of measuremen­t errors

the two types of measuremen­t errors are

measurement system bias and measurement

system variation

click each type to know more

measuremen­t system bias involves

calibratio­n study in the calibratio­n

study the total mean is given by the sum

of the process mean and the measuremen­t

mean the statistica­l notation is shown

measuremen­t system variation involves

gauge repeatabil­ity and reproducib­ility

or grr study in the grr study the total

variance is calculated by adding the

process variance with the measurement

variance

the statistica­l notation is shown on the

in this screen we will discuss the

sources of variation the chart on the

screen lists the different sources of

variation observed process variation is

divided into two actual process

variation and measuremen­t variation

actual process variation can be divided

into long-term and short-term process

variation

in a gauge RR study process variation is

often called part variation measurement

variation can be divided into variations

caused by operators and variations due

to gauges the variation due to operators

is owing to reproducibility and the

variation due to gauges is owing to

repeatability

both actual process variation and

measuremen­t variation have a common

factor that is variation within a sample

let us understand gauge repeatabil­ity

and reproducib­ility or grr in the next

gauge repeatabil­ity and reproducib­ility

or grr is a statistica­l technique to

assess if a gauge or gauging system will

obtain the same reading each time a

particular characteristic or parameter

is measured

gauge repeatabil­ity is the variation in

measuremen­t when one operator uses the

same gauge to measure identical

characteri­stics of the same part

repeatedly gauge reproducib­ility is the

variation in the average of measuremen­ts

when different operators use the same

gauge to measure identical

characteristics of the same part

the figures on the screen illustrate

gauge repeatabil­ity and reproducib­ility

in the next screen we will discuss the

difference between the two

the figure on the screen illustrate­s the

difference between gauge repeatabil­ity

and reproducib­ility the figure shows the

repeatabil­ity and reproducib­ility for

six different parts represente­d by the

numbers one to six for two different

trial readings by three different

operators

as can be observed a difference in

reading for part one indicated by the

color green by three different operators

is known as reproducib­ility error a

difference in reading of part 4

indicated by Red by the same operator in

two different trials is known as the

repeatabil­ity error in the following

screen we will look at some guidelines

the following should be kept in mind

while carrying out gauge repeatabil­ity

and reproducib­ility or grr studies

grr studies should be performed over the

range of expected observatio­ns

care should be taken to use actual

equipment for grr studies written

procedures and approved practices should

be followed as would have been in actual

operation

the measuremen­t variabilit­y should be

represented as it is not the way it was

designed to be

after grr the measuremen­t variabilit­y is

separated into causal components sorted

according to priority and then targeted

in the following screen let us look at

some more Concepts associated with grr

bias is the distance between the sample

mean value and the sample True Value it

is also called accuracy bias is equal to

mean minus reference value process

variation is equal to six times the

standard deviation the bias percentage

is calculated as bias divided by the

process variation

the next term is linearity linearity

refers to the consistenc­y of bias over

the range of the gauge linearity is

given by the product of slope and

process variation

precision is the degree of repeatability

the smaller the dispersion in the data

set the higher the precision

the variation in the gauge is the sum of

variation due to repeatabil­ity and the

variation due to reproducib­ility

in the following screen we will

understand measuremen­t resolution

measuremen­t resolution is the smallest

detectable increment that an instrument

can measure

the number of increments in the

measuremen­t system should extend over

the full range for a given parameter

some examples of wrong gauges or

incorrect measuremen­t resolution are

a truck weighing scale is used for

measuring the weight of a tea pack

a caliper capable of measuring

difference­s of 0.1 millimeter­s is used

to show compliance when the tolerance

limits are plus or minus 0.07

millimeters

thus the measuremen­t system that matches

the range of the data should only be

an important prerequisi­te for grr

studies is that the gauge has an

adequate resolution

in the next screen we will look at

examples for repeatability and

reproducibility

repeatabil­ity is also called equipment

variation or EV it occurs when the same

technician or operator repeatedly

measures the same part or process under

identical conditions with the same

measuring equipment

the following example illustrate­s this

a 36 kilometer per hour Pace mechanism

is timed by a single operator over a

distance of 100 meters on a stopwatch

and three readings are taken

trial 1 takes 9 seconds trial two takes

10 seconds and trial 3 takes 11 seconds

the process is measured with the same

equipment in identical conditions by the

same operator assuming no operator error

the variation in the three readings is

known as repeatabil­ity or equipment

reproducib­ility is also called appraiser

variation or AV it occurs when different

technician­s or operators measure the

same part or process under identical

conditions using the same measuremen­t

let us extend the example for

repeatability to include data measured

by other operators

the readings are displayed on the slide

the difference in the readings of both

operators is called reproducibility or

appraiser variation

it is important to resolve equipment

variation before appraiser variation

if appraiser variation is resolved first

the results will still not be identical

due to variation in the equipment itself

in this screen we will learn about data

collection in grr there are some

important considerat­ions for data

collection in grr studies there are

usually three operators and around 10

units to measure General sampling

techniques must be used to represent the

population and each unit must be

measured two to three times by each

operator it is important that the gauge

be calibrated accurately it should also

be ensured that the gauge has an

adequate resolution

another practice is that the first

operator measures all the units in

random order then this order is

maintained by all other operators all

through the study

in the next screen we will discuss the

Anova method of analyzing grr studies

the Anova method is considered to be the

best method for analyzing grr studies

this is because of two reasons the first

being Anova not only separates equipment

and operator variation but also provides

Insight on the combined effect of the

two second Anova uses standard deviation

instead of range as a measure of

variation and therefore gives a better

estimate of the measurement system

variation

the one drawback of using Anova is the

considerations of time resources and

cost involved

in the next screen we will understand

two results are possible for an MSA

in the first case the reproducib­ility

error is larger than the repeatabil­ity

error this occurs when the operators are

not trained and calibrations on the

gauge are not done properly

the other possibilit­y is that the

repeatabil­ity error is larger than the

reproducib­ility error this is clearly a

maintenanc­e issue and can be resolved by

calibratin­g the equipment or performing

maintenanc­e on the equipment

this indicates that the gauge needs to

be redesigned to be more rigid and the

location needs to be improved it also

occurs when there is ambiguity in SOPs

MSA is an experiment which seeks to

identify the components of variation in

the measurement

in the following screen we will look at

a template used for grr studies

a sample gauge RR sheet is given on this

screen the operators here are Andrew

Murphy and Lucy Wang who are the

inspectors

they have measured and rated the

performance of three employees Abraham

Glassoff Brianna Scott and Jason Schmidt

this is a sample template for a gauge

RNR study the parts are shown across the

in this case the measuremen­t system is

being evaluated using three parts the

employees Abraham Glassoff Brianna Scott

and Jason Schmidt the operators measure

each part over multiple trials

from this data the average X bar and the

ranges are calculated for each inspector

and for each part

the grand average for each inspector and

the average range are also calculated

in this example a control limit UCL in

the sheet was compared with the

difference in averages of the two

inspectors to identify if there is a

significant difference in their

measurements the difference in averages

is 0.111 which is outside the UCL of

0.108 given the r average of 0.042

in the next screen we will look at the

results page for this grr study

the sheet on the screen displays the

results for the data entered in the

template in the previous screen please

spend some time to go through the data

for a better understanding of the

results

in the following screen we will look at

the interpreta­tion to this results page

the percentage grr value is highlighte­d

in the center right of the table in the

results sheet

there are three important observatio­ns

to be made here about the gauge RR study

first this study also shows the

interactio­n between operators and parts

If the percentage grr value is less than

30 then the gauge is acceptable and the

measuremen­t system does not require any

change if the value is greater than 30

then the gauge needs correction

the equipment variation is checked and

resolved first followed by the appraiser

second if EV equals zero it means the MS

is reliable the equipment is perfect and

the variation in the gauge is

contribute­d by different operators

if the AV is equal to zero the MS is

reliable and the variation is due to the

equipment

third if EV is equal to zero and there

is Av The Operators have to be trained

to ensure all operators follow identical

steps during measurement and the AV is

reduced

the interactio­n between operators and

parts can also be studied under grr

using part variation the trueness and

precision cannot be determined in a grr

study if only one gauge or measurement

method is evaluated as it may have an

inherent bias that would go undetected

merely by the grr study

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss process

and performanc­e capability in detail

in the following screen we will look at

the difference­s between natural process

limits and specificat­ion limits

natural process limits or control limits

are derived from the process data and

are the voice of the process

the data consists of real-time values

from past process performanc­e therefore

these values represent the actual

process limits and indicate variation in

the process

the two control limits are upper control

limit UCL and lower control limit LCL

specificat­ion limits are provided by

customers based on their requiremen­ts or

the voice of the customer and cannot be

changed by the organization

these limits act as targets for the

organization and processes are designed

to achieve them

the product or service has to meet

customer requiremen­ts and has to be well

within the specificat­ion limits

If the product or service does not meet

customer requiremen­ts it is considered

as a defect therefore specificat­ion

limits are the intended results or

requiremen­ts from the product or service

that are defined by the customer

the two specificat­ion limits are upper

specification limit or USL and lower

specification limit or LSL

the difference between the two is called

the tolerance

an important point to note is that for a

process if the control limits lie within

the specification limits the process is

capable of meeting customer requirements

conversely if specificat­ion limits lie

within the control limits the process

will not meet customer requiremen­ts

in the following screen we will look at

process performance metrics and how they

are calculated

the two major metrics used to measure

process performanc­e are defects per unit

or dpu and defects per million

opportunities or dpmo

dpu is calculated by dividing the number

of defects by the total number of units

dpmo is calculated by multiplyin­g the

defects per opportunit­y with 1 million

in the following screen we will look at

an example for calculatin­g process

in this example the quality control

Department checks the quality of

finished goods by sampling a batch of 10

items from the produced lot every hour

the data is collected over 24 hours

the table displays the data for the

number of defectives for the sampling

if items are consistent­ly found to be

outside the control limits on any given

day the production process is stopped

let us now interpret the results of the

sampling in this example as the sample

size is constant dpu or P bar is used to

calculate the process capability the

total number of defectives is 34 and the

subgroup size is 10. the total number of

units is 10 multiplied by 24 which is

240. the defects per unit is 34 divided

by 240 which is 0.1417

the defects per million opportunit­ies is

obtained by multiplyin­g the defects per

unit with 1 million which is 141 666.66

therefore by looking at the dpmo table

it can be said that the process is

currently working at 2.6 Sigma or 86.4

percent yield
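the calculation can be verified with a few lines of Python this sketch assumes one defect opportunity per unit as the example implies

```python
defectives = 34   # total defectives across all samples
units = 10 * 24   # 10 items sampled per hour for 24 hours

dpu = defectives / units
dpmo = dpu * 1_000_000  # assuming one opportunity per unit

print(round(dpu, 4))   # 0.1417
print(round(dpmo, 2))  # 141666.67
```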

we will learn about process stability

the activities carried out in the

measure phase are MSA collection of data

statistical calculations and checking

for normality

this is followed by a test for stability

as changes cannot be made to an unstable

process with a set of data believed to

be accurate the process is checked for

stability this is important because if a

process is unstable no changes can be

made to it

why does a process become unstable

a process can become unstable due to

special causes of variation multiple

special causes of variation lead to

process instability while a single

special cause leads to an out of

control condition

run charts in minitab can be used to

check for process stability let us look

at the steps to plot a run chart in

minitab in the following screen

to plot a run chart in minitab first

enter the sample data collected to check

for stability

next click stat on the minitab window

next click run charts select the column

and choose the subgroup size as two

the graph shown on the screen is

interpreted by looking at the last four

P values if any of the P values is less

than 0.05 the presence of special causes

of variation can be validated

this means there is a good chance that

the process will become unstable

in the following screen we will look at

process stability studies causes of

variation can be due to two types of

causes common causes of variation and

special causes of variation

click each type to learn more

common causes of variation are the many

sources of variation within a process

which have a stable and repeatable

distributi­on over a period they

contribute to a state of statistica­l

control where the output is predictabl­e

some other factors which do not always

act on the process can also cause

variation these are special causes of

variation these are external to the

process and are irregular in nature when

present the process distributi­on changes

and the process output is not stable

over a period special causes may result

in defects and need to be eliminated to

bring the process under control

run charts indicate the presence of

special causes of variation in the

process if special causes are detected

the process has to be brought to a stop

and a root cause analysis has to be

carried out if the root cause analysis

reveals the special cause to be

undesirabl­e corrective actions are taken

to remove the special cause

we will learn about verifying process

stability and normality in this screen

based on the type of variation a

process exhibits it can be verified if

the process is stable

if there are special causes of variation

the process output is not stable over

time the process cannot be said to be in

control

conversely if there are only common

causes of variation in a process the

output forms a distributi­on that is

stable and predictabl­e over time a

process being in control means the

process does not have any special causes

once a process is understood to be

stable the control chart data can be

used to calculate the process capability

in the following screen we will discuss

process capability studies process

capability is the actual variation in

the process measured against the

specification to carry out a

process capability study first plan for

data collection next collect the data

finally plot and analyze the results

obtaining the appropriat­e sampling plan

for the process capability study depends

on the purpose and whether there are any

customer or standard requirements for

the study

for new processes or a project proposal

the project capability can be estimated

by a pilot run let us look at the

objectives of process capability studies

in the next screen the objectives of a

process capability study are to

establish a state of control over a

manufactur­ing process and then maintain

the state of control over a period of

time

on comparing the natural process limits

or the control limits within the

specificat­ion limits any of the

following outcomes is possible

first the process limits are found to

fall between the specificat­ion limits

this shows the process is running well

the second possibilit­y is that the

process spread and the specificat­ion

spread are approximat­ely the same in

this case the process is centered by

making an adjustment to the centering of

the process

this would bring the batch of products

within the specification limits

the third possibilit­y is that the

process limits fall outside the

specificat­ion limits in this case reduce

the variabilit­y by partitioni­ng the

pieces of batches to locate and target

the source of variation

a design experiment can be used to

identify the primary source of variation

in the following screen we will learn

about identifyin­g characteri­stics in

process capability deals with the

ability of the process to meet customer

requiremen­ts therefore it is crucial

that the characteri­stics selected for a

process capability study indicates a key

factor in the quality of the product or

process

also it should be possible to influence

the value of the characteristic by

adjusting the process

the operating conditions that affect the

characteri­stic should also be defined

apart from these requiremen­ts other

factors determinin­g the characteri­stics

to be measured are customer purchase

order requiremen­ts or industry standards

in the following screen we will look at

identifyin­g specificat­ions or tolerances

the process specificat­ion or tolerances

are defined either by industry standards

based on customer requiremen­ts or by the

organizati­on's engineerin­g department in

consultati­on with the customer a

comprehens­ive capability study also

helps in identifyin­g if the process mean

meets the Target or the customer mean

the process capability study indicates

whether the process is capable it is

used to determine if the output

consistent­ly meets specificat­ions and

the probability of a defect or defective

occurring

this informatio­n is used to evaluate and

improve the process to meet the

specifications

in the following screen we will learn

about process performanc­e indices

process performanc­e is defined as a

statistica­l measuremen­t of the outcome

of a process characteri­stic which may or

may not have been demonstrat­ed to be in

a state of statistica­l control

in other words it is an estimate of the

process capability of a process during

its initial setup before it has been

brought into a state of statistical

control

it differs from the process capability

in that for process performanc­e a state

of statistica­l control is not required

the three basic process performanc­e

indices are process performanc­e or PP

process performanc­e index or PPK and

process capability index mean denoted as

CPM

click each index to know more

PP stands for process performanc­e it is

computed by subtractin­g the lower

specificat­ion limit from the upper

specificat­ion limit the whole divided by

natural process variation or Six Sigma

PPK is the process performance index

which is the minimum of the values of

the upper and

lower process capability indices the

upper and lower process capability

indices are calculated as shown on the

screen PPU or upper process capability

index is given by the formula USL minus

x whole divided by 3s

PPL or lower process capability index is

given by x minus LSL divided by 3s here

x is process average better known as X

bar and S is sample standard deviation

CPM denotes the process capability index

mean which accounts for the location of

the process average relative to a Target

value it can be calculated as shown on

the screen here myu stands for process

average Sigma symbol denotes the process

standard deviation USL is the upper

specificat­ion limit and LSL is the lower

specificat­ion limit T is the target

value which is typically the center of

the tolerance x i is the sample reading

and N is the number of sample readings
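the indices translate directly into Python the readings and specification limits below are hypothetical values chosen only to illustrate the formulas

```python
import statistics

readings = [49.2, 50.1, 50.8, 49.7, 50.3, 49.9, 50.5, 49.5, 50.0, 50.2]
usl, lsl = 52.0, 48.0  # hypothetical specification limits

x_bar = statistics.mean(readings)
s = statistics.stdev(readings)  # sample standard deviation

pp = (usl - lsl) / (6 * s)     # process performance
ppu = (usl - x_bar) / (3 * s)  # upper process capability index
ppl = (x_bar - lsl) / (3 * s)  # lower process capability index
ppk = min(ppu, ppl)            # process performance index

print(round(pp, 2), round(ppk, 2))  # 1.41 1.39
```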

we will look at the key terms in process

capability

zst or short-term capability is the

potential performanc­e of the process in

control at any given point of time it is

based on the sample collected in the

short term

the long-term performanc­e is denoted by

zlt it is the actual performanc­e of the

process over a given period of time

subgroups are several small samples

collected consecutiv­ely each sample

forms a subgroup the subgroups are

chosen so that the data points are

likely to be identical within the

subgroup but different between two

subgroups

the process shift is calculated by

subtractin­g the long-term capability

from the short-term capability

the process shift also reflects how well

a process is controlled it is usually a

factor of 1.5 let us look at short-term

and long-term process capability in the

next screen

the concept of short-term and long-term

process shift is explained graphicall­y

there are three different samples taken

at time one time two and time three the

smaller waveforms represent the

short-term capability and they are

joined with their means to show the

shift in long-term performanc­e

the long-term performanc­e curve is shown

below with the target value marked in

the center it is important to note that

over a period of time or subgroups a

typical process will shift by

approximat­ely 1.5 times the standard

deviation also long-term variation is

more than short-term variation this

difference is known as the sigma shift

and is an indicator of the process

control

the reasons for a process shift include

changes in operators raw material used

wear and tear and time periods we will

discuss the assumption­s and convention­s

of process variations in the following

screen

long-term variation is always larger

than short-term variation click each

type to know more

short-term variations are due to the

common causes the variance is inherent

in the process and known as the natural

process variation

short-term variations show variation

within subgroup and are therefore called

within subgroup variations

they are usually a small number of

samples collected at Short intervals in

short-term variation the variation due

to common causes are captured however

common causes are difficult to

identify and correct the process may

have to be redesigned to remove common

causes of variation

long-term variations are due to common

as well as special causes the added

variation or abnormal variation is due

to factors external to the usual process

long-term variation is also known as the

overall variation and is a sample

standard deviation for all the samples

long-term variation shows variations

within the subgroup and between subgroups

special causes increasing variation

include changes in operators raw

material and wear and tear the special

causes need to be identified and

corrected for process Improvemen­t

this screen explains how the factors of

stability capability spread and defect

summary are used to interpret the

process condition this table gives the

process condition for different levels

or types of variation with reference to

common causes and special causes

in the first scenario the process has

lesser common causes of variation or CCV

and no special causes of variation or

SCV in this case the variabilit­y is less

the capability is high the possibilit­y

of defects is less and the process is

said to be capable and in control next

if the process has lesser CCV and some

SCV are present then it has high

variabilit­y low capability and a high

possibilit­y of defects the process is

said to be out of control and incapable

the third possibilit­y is that the

process has high CCV and no SCV in this

case the variabilit­y is moderate to high

the capability is very low and

possibilit­y of defects is very high

although the process is in control it is incapable

finally at the Other Extreme is the

situation where the process has high CCV

and SCV is also present here the process

has high variabilit­y low capability high

possibility of defects and is out of control

this table is a quick reference to

understand process conditions

in the next screen we will compare the CP and CPK values

when CPK and CP values are compared

three outcomes are possible when CPK is

lesser than CP it can be inferred that

the mean is not centered when CPK is

equal to CP the inference is that the

process is accurate the process is

considered capable if CPK is greater

than one this will happen only if CP is also greater than one

CPK can never be greater than CP if this

situation occurs the calculatio­ns have

to be rechecked we will look at an

example problem for calculatin­g process

variation in the following screen

the table on this screen shows data for

customer complaint resolution time over

a period of three weeks each week's data

forms a subgroup for example the

resolution time is 48 hours for a

particular case in week one in week two

the case takes up 50 hours and in week 3

the subgroup size is 10. let us

understand how to calculate long-term

and short-term standard deviations here

the average for each week is calculated

by dividing the total number of

complaints resolved by the subgroup size

a grand average is also calculated for

the variations within subgroups and

between subgroups for each week are

calculated this is followed by

calculating the total variations within and between subgroups

overall variation is given by the sum of

total variation within subgroups and

total variation between subgroups

finally the standard deviations for the

short term and the long term are

calculated using the formula given on the screen

the results for the process variation

calculatio­ns are as follows the grand

average for all three weeks is 47.5

the total variation within subgroups is

the total variation between subgroups is

both these variations are added to give

the overall variation of 1185.5

the short-term standard deviation is 6.2

and the long-term standard deviation is

note that the overall variation can also

be calculated with the usual sample standard deviation formula

let us discuss the effect of mean shift

on the process capability in this screen

the table given here shows the defect

level at different Sigma multiple values

and different mean shifts from the table

it can be seen that when the mean is

centered within the specificat­ion limits

and the process capability is one that

is plus or minus 3s fits within the

specification limits the dpmo is 2700

and the probability of a good

result is 99.73 percent if the mean

shifts by 1.5 Sigma then a tail moves

outside the specificat­ion limit to a

greater extent now the dpmo increases to

over 66 000. this is almost a twenty

five hundred percent increase in defects

if the process has a process capability

of two that is plus or minus 6s fits

within the specificat­ion limits and the

mean shifts by 1.5 Sigma then the

probability of a good result is 99.99966 percent

this is the same as a process with a

capability of 1.5 that is plus or minus

4.5 s fitting within the specificat­ion

limits and no shift in the mean

the long-term and short-term capability

table shows the variations in

capabiliti­es for the purposes of Six

Sigma the assumption is that the

long-term variabilit­y will have a 1.5 s

difference from the short-term variability

as seen in statistica­l process control

this assumption can be challenged if

control charts are used and these kinds

of shifts are detected quickly

in the chart it can be seen that the

mean shift is negligible as the process

capability increases therefore for a Six

Sigma process the long-term variation

does not have much effect in the next

screen we will look at Key Concepts in

process capability for attribute data

the customary procedure for defining

process capability for attribute data is

in terms of non-conformities

defects and defectives are examples of

non-conformities

defects per million opportunit­ies or

dpmo is the measure of process

capability for attribute data for this

the mean and the standard deviation for

attribute data have to be defined

for defectives p bar is used for

checking process capability for both

constant and variable sample sizes for

defects c bar and u-bar are used for

constant and variable sample sizes respectively

the P Bar C bar and u-bar are the

equivalent of the standard deviation

denoted by Sigma for continuous data in

this topic we will learn about the

patterns of variation in detail let us

start with the classes of distributi­ons

when data obtained from the measuremen­t

phase is plotted on a chart it is

observed that it exhibits a variety of

distributi­ons depending on the data type

these distributi­on patterns will help

you understand the data better

probabilit­y statistics and inferentia­l

statistics are the methods used to

describe the parameters for the classes

click each method to know more

probabilit­y is based on the assumed

model of distributi­on and it is used to

find the chances of a certain outcome or event

statistics uses the measured data to

determine a model to describe the data

inferentia­l statistics describe the

population parameters based on the

sample data using a particular model

in this screen we will discuss the types

of distributions there are two types of

distributions

discrete distribution and continuous

distribution

discrete distributi­on includes binomial

distributi­on and poisson distributi­on

continuous distributi­on includes normal

distributi­on chi-square distributi­on T

distributi­on and F distributi­on let us

learn about discrete probabilit­y

distributi­on in the following screen

discrete probabilit­y distributi­on is

characteri­zed by the probabilit­y Mass

function it is important to be familiar

with discrete distributi­ons while

dealing with discrete data some of the

examples of discrete probabilit­y

distributi­on are binomial distributi­on

poisson distributi­on negative binomial

distribution geometric distribution and

hypergeometric distribution

we will focus only on the two most

useful discrete distributi­ons binomial

distributi­on and poisson distributi­on

like most probabilit­y distributi­ons

these distributi­ons also help in

predicting the sample behavior that has

been drawn from a population

let us learn about binomial distributi­on

binomial distributi­on is a probabilit­y

distributi­on for discrete data named

after the Swiss mathematic­ian Jacob

Bernoulli it is an applicatio­n of

population knowledge to predict the sample behavior

binomial distributi­on also describes the

discrete data as a result of a

particular process like the tossing of a

coin for a fixed number of times and the

success or failure in an interview

a process is known as Bernoulli'­s

process when the process output has only

two possible values like defective or OK

binomial distributi­on is used to deal

with defective items defect is any

non-compli­ance with a specificat­ion

defective is a product or service with one or more defects

binomial distributi­on is most suitable

when the sample size is less than 30 and

less than 10 percent of the population

it can estimate the percentage of non-defective

items provided the probabilit­y of

creating a defective item remains the

same

the probability of exactly r successes

out of a sample size of n is denoted by

P of R which is equal to NCR whole

multiplied by P to the power of R and 1

minus P whole to the power of n minus r

in the equation p is the probability of

success R is the number of successes

desired and N is the sample size to

continue discussing the binomial

distributi­on let us look at some of its

key calculatio­ns in the following screen

the mean of a binomial distributi­on is

denoted by mu and is given by n

multiplied by p

the standard deviation of a binomial

distribution is denoted by Sigma which

is equal to the square root of n

multiplied by p multiplied by 1 minus p

the method of calculatin­g factorials say

a factorial of 5 is the product of five

four three two and one which is equal to

120

similarly factorial of 4 is the product

of four three two and one which is equal

to 24

let us look at an example of calculatin­g

binomial distributi­on in the next screen

suppose you wish to know the probabilit­y

of getting heads five times in eight

coin tosses you can use the binomial

click the answer button to see how this is calculated

the tossing of a coin has only two

outcomes heads and tails it means that

the probabilit­y of each outcome is 0.5

and it remains fixed over a period of

time Additional­ly the outcomes are

statistica­lly independen­t in this case

the probabilit­y of success denoted by P

is 0.5 the number of successes desired

is denoted by R which is 5 and the

sample size is denoted by n which is 8.

therefore the probabilit­y of five heads

is equal to factorial of 8 divided by

the product of factorial of 5 and

factorial of eight minus five the whole

multiplied by 0.5 to the power of 5

multiplied by one minus 0.5 whole to the

power of eight minus five

this calculation gives a result of

approximately 0.22
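the coin-toss arithmetic can be cross-checked with a few lines of Python the sketch below evaluates the same binomial formula manually and through scipy

```python
# P(exactly 5 heads in 8 fair tosses) = 8C5 * 0.5^5 * (1 - 0.5)^(8-5)
from math import comb
from scipy.stats import binom

n, r, p = 8, 5, 0.5
manual = comb(n, r) * p**r * (1 - p)**(n - r)
print(manual)                 # 0.21875, roughly 0.22
print(binom.pmf(r, n, p))     # same value from scipy's binomial pmf
```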

poisson distribution is named after Simeon

Denis poisson and is also used for

discrete data

poisson distributi­on is an applicatio­n

of the population knowledge to predict

the sample Behavior it is generally used

for describing the probabilit­y

distribution of an event with respect to time

some of the characteri­stics of poisson

distributi­on are as follows

poisson distribution describes the

discrete data resulting from a process

like the number of calls received by a

call center agent or the number of

unlike binomial distributi­on which deals

with binary discrete data poisson

distributi­on deals with integers which

can take any value poisson distributi­on

is suitable for analyzing situations

wherein the number of Trials similar to

the sample size in binomial distributi­on

is large and tends towards Infinity

additional­ly it is used in situations

where the probabilit­y of success in each

trial is very small almost tending

towards zero this is the reason why

poisson distributi­on is applicable for

predicting the occurrence of rare events

like plane crashes car accidents Etc and

is therefore widely used in the

insurance sector poisson distributi­on

can be used for predicting the number of

defects as well given a low defect

let us look at the formula for

calculatin­g poisson distributi­on in the

next screen the poisson distributi­on for

a probabilit­y of exactly X occurrence­s

is given by P of x equals Lambda to

the power of X multiplied by e to

the power of minus Lambda whole divided

by factorial of X in this equation

Lambda is the mean number of occurrence­s

during the interval X is the number of

occurrences desired and E is the base of

natural logarithm which is equal to

approximately 2.71828

the mean of the poisson distributi­on is

given by the Lambda and the standard

deviation of a poisson distributi­on is

given by Sigma which is the square root of Lambda

let us look at an example to calculate

poisson distributi­on in the next screen

the past records of a road Junction

which is accident prone show a mean

number of five accidents per week at

this Junction assume that the number of

accidents follows a poisson distributi­on

and calculate the probabilit­y of any

number of accidents happening in a week

click the button to know the answer

given the situation you know that the

value of Lambda or mean is 5. so P of 0

that is the probabilit­y of zero

accidents per week is calculated as 5 to

the power of zero multiplied by e to the

power of minus 5 whole divided by a

factorial of zero the answer is

approximately 0.0067

applying the same formula the

probability of one accident per week is

approximately 0.034

the probability of more than two

accidents per week is one minus the sum

of the probabilities of zero one and two

accidents which is approximately 0.875

in other words the probability is 87.5 percent
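the junction example can be reproduced with scipy's poisson functions as in this minimal sketch

```python
# Poisson probabilities for a junction averaging lambda = 5 accidents/week.
from scipy.stats import poisson

lam = 5
p0 = poisson.pmf(0, lam)                  # ~0.0067, zero accidents
p1 = poisson.pmf(1, lam)                  # ~0.0337, one accident
p_more_than_2 = 1 - poisson.cdf(2, lam)   # 1 - P(0) - P(1) - P(2) ~ 0.875
print(p0, p1, p_more_than_2)
```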

let us learn about continuous

probabilit­y distributi­on in this screen

continuous probabilit­y distributi­on is

characterized by the probability density function

a variable is said to be continuous if

the range of possible values Falls along

a continuum for example loudness of

cheering at a ball game weight of

cookies in a package length of a pen or

the time required to assemble a car

continuous probabilit­y distributi­ons

help in predicting the sample Behavior

let us learn about normal distributi­on

the normal or gaussian distributi­on is a

continuous probabilit­y distributi­on the

normal distributi­on is represente­d as n

and depends on two factors Miu which

stands for mean and sigma which gives

the standard deviation of the data

points normal distributi­on normally has

a higher frequency of values around the

mean and lesser occurrences away from it

it is often used as a first

approximation to describe real valued

random variables that tend to cluster

around a single mean value

the distribution is bell-shaped and

symmetrical the total area under the

normal curve that is the integral of p of x

is one

various types of data such as body

weight height the output of a

manufactur­ing device Etc follow the

normal distributi­on additional­ly normal

distributi­on is continuous and

symmetrica­l with the tails asymptotic to

the x-axis which means they touch the

x-axis at Infinity let us continue to

discuss normal distributi­on in the

in a normal distributi­on to standardiz­e

comparison­s of dispersion or the

different measuremen­t units like inches

meters grams Etc a standard Z variable

is used the uses of Z value are as

follows while the value of Z or the

number of standard deviations is unique

for each probabilit­y within the normal

distributi­on it helps in finding

probabilities of data points anywhere within the distribution

it is dimensionl­ess as well that is it

has no units such as millimeter­s liters

there are different formulas to arrive

at the normal distributi­on we will focus

on one commonly used formula for

calculatin­g normal distributi­on which is

z equals y minus mu whole divided by

Sigma here Z is the number of standard

deviations between Y and the mean

denoted by mu Y is the value of the data

point in concern mu is mean of the

population or data points and sigma is

the standard deviation of the population

or data points let us look at an example

for calculatin­g normal distributi­on in

suppose the time taken to resolve

customer problems follows a normal

distributi­on with a mean of 250 hours

and standard deviation of 23 hours find

the probabilit­y of a problem resolution

taking more than 300 hours click the button to know the answer

in this case Y is equal to 300 mu equals

250 and sigma equals 23. applying the

normal distributi­on formula Z is equal

to 300 minus 250 whole divided by 23.

the result is 2.17 when you look at the

normal distribution table the Z value of

2.17 corresponds to a probability of 0.985

this means the probabilit­y of a problem

taking zero to three hundred hours to be

resolved is 98.5 percent and therefore

the chances of a problem resolution

taking more than 300 hours is 1.5 percent
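the same resolution-time answer follows directly from the normal CDF as this short sketch shows

```python
# P(resolution time > 300 h) for a normal(mean=250, sd=23) process.
from scipy.stats import norm

mu, sigma, y = 250, 23, 300
z = (y - mu) / sigma          # ~2.17 standard deviations above the mean
print(z, 1 - norm.cdf(z))     # ~0.015, i.e., about 1.5 percent
```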

hey there Learners check out our certified

lean six Sigma Green Belt certificat­ion

training course and earn a Green Belt

certificat­ion to learn more about this

course you can click the course Link in the description box below

let us understand the usage of Z table

the graphical representation of the Z table

shows the probability of areas under the curve

for the actual value one can identify

the z-score by using the Z table

as shown this probabilit­y is the area

under the curve to the left of point z

using the actual data when you calculate

mean and standard deviation and the

values are 25 and 5 respectively it is

the distribution of the actual values

if the same data is standardiz­ed to a

mean value of zero and standard

deviation value of one it is the standard normal or Z distribution

in the next screen we will take a look at the Z table

the Z table gives the probabilit­y that Z

is between 0 and a positive number there

are different forms of normal

distributi­on Z tables followed globally

the most common form of Z table with

positive z-scores is shown here

the value of a called the percentage

point is given along the borders of the

table in bold and is to two decimal places

the values in the main table are the

probabilities that Z is between 0 and a

note that the values running down the

table are to one decimal place

the numbers along the columns change only in the second decimal place

let us look at some examples on how to

use a z table in the following screen

let us find the value of P of Z less than or equal to zero

the table is not needed to find the

answer once we know that the variable Z

takes a value less than or equal to zero

first the area under the curve is one

and second the curve is symmetrica­l

about Z equals zero hence there is 0.5

or 50 percent chance of Z above zero and 0.5

or 50 percent chance of Z below zero

let us find the value of P of Z greater than 1.12

in this case the chance of Z is greater

than a number in this case 1.12

you can find this by using the following complement rule

the opposite or complement of an event

of a is the event of not a that is the

opposite or complement of event a

occurring is the event a not occurring

its probability is given by P of not a which equals 1 minus P of a

in other words P of Z greater than 1.12

is 1 minus the opposite which is P of Z

less than 1.12

using the table P of Z less than 1.12

equals 0.5 plus P of 0 less than Z less

than 1.12 which is 0.5 plus 0.3686 or

0.8686 hence the answer is P of Z

greater than 1.12 equals 1 minus 0.8686 which is

0.1314 note the answer is less than 0.5

let us find the value of P of Z lies

between 0 and 1.12

in this case where Z falls within an

interval the probability can be read

directly from the table

P of Z lies between 0 and 1.12 equals

0.8686 minus 0.5 which is 0.3686
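the three table readings above can also be computed directly from the standard normal CDF instead of the printed Z table

```python
# The same Z-table lookups via the standard normal CDF.
from scipy.stats import norm

print(norm.cdf(0))                    # P(Z <= 0)       = 0.5
print(1 - norm.cdf(1.12))             # P(Z > 1.12)     ~ 0.1314
print(norm.cdf(1.12) - norm.cdf(0))   # P(0 < Z < 1.12) ~ 0.3686
```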

we will learn about chi-square

distribution next

chi-squared distribution is also known

as Chi Squared or chi-square distribution

chi-squared with K degrees of freedom

is the distribution of a sum of the

squares of K independent standard normal

random variables

the chi-square distributi­on is one of

the most widely used probabilit­y

distributi­ons in inferentia­l statistics

it is widely used in hypothesis testing

when used in hypothesis

tests it only needs one sample for the

test

conventionally the degrees of freedom is k minus 1

click the button to view the chi-square­d

chi-square calculated equals the sum

over all cells of F of O minus F

of e the whole squared divided by F of e

here F of O stands for an observed

frequency and F of e stands for an

expected frequency determined through a

calculation
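as an illustration of this formula here is a minimal Python sketch the observed and expected frequencies are made-up values

```python
# Chi-square statistic: sum over cells of (f_o - f_e)^2 / f_e.
from scipy.stats import chisquare

observed = [18, 22, 20, 40]     # made-up observed frequencies
expected = [20, 20, 20, 40]     # made-up expected frequencies
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(stat)                          # manual calculation, 0.4 here
print(chisquare(observed, expected)) # same statistic plus a p-value
```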

we will learn about the chi-square

distributi­on in detail in the later part

of this lesson let us proceed to the

next screen to discuss T distributi­on

the T distributi­on method is the most

appropriate method to be used in the following situations

when you have a sample size of less than

30 when the population standard

deviation is not known when the

population is approximat­ely normal

unlike the normal distributi­on a t

distributi­on is lower at the mean and

higher at the Tails as seen in the image

T distributi­on is used for hypothesis

testing also as seen in the image the T

distributi­on is symmetrica­l in shape but

flatter than the normal distributi­on

as the sample size increases the T

distributi­on approaches normality

for every possible sample size or

degrees of freedom there is a different

let us learn about F distributi­on in the

the F distribution is a ratio of two chi-square distributions

a specific F distributi­on is denoted by

the ratio of the degrees of freedom for

the numerator chi-square and the degrees

of freedom for the denominato­r

the f-test is performed to calculate and

observe if the standard deviations or

variances of two processes are significantly different

the project teams are usually concerned

about reducing the process variance as

per the formula f calculated equals S1

Square divided by S2 Square where S1 and

S2 are the standard deviations of the two samples

if the F calculated is one it implies

there is no difference in the variance

if S1 is greater than S2 then the

numerator must be greater than the

denominato­r in other words df1 equals N1

minus 1 and df2 equals N2 minus 1.

from the F distributi­on table you can

easily find out the critical F

distributi­on at Alpha and the degrees of

freedom of the samples of two different processes
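a minimal sketch of this F comparison assuming made-up sample standard deviations and sizes is shown below

```python
# F = s1^2 / s2^2 compared against the critical F at alpha = 0.05
# with (n1 - 1, n2 - 1) degrees of freedom; inputs are made up.
from scipy.stats import f

s1, s2 = 5.1, 4.2          # sample standard deviations, larger on top
n1, n2 = 25, 30            # sample sizes
F_calc = s1**2 / s2**2
F_crit = f.ppf(0.95, n1 - 1, n2 - 1)
print(F_calc, F_crit, F_calc > F_crit)
```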

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss

explorator­y data analysis in detail let

us learn about multivaria­te studies in

multivari studies or multi-vari­able

studies are used to analyze variation in

a process by investigating its stability

the more stable the process the less is the

variation

multivari studies also help in

identifyin­g areas to be investigat­ed

finally they help in breaking down the

variation into components to make the analysis easier

multivari studies classify variation

sources into three major types

positional cyclical and temporal click each type to know more

positional variation occurs within a

single piece or a product variation in

pieces of a batch is also an example of

positional variation in positional

variation measuremen­ts at different

locations of a piece would produce different readings

suppose a company is manufactur­ing a

metal plate of thickness one inch and

the plate thickness is different at many

points it is an example of positional variation

some of the other examples can be pallet

stacking in a truck temperatur­e gradient

in an oven variation observed from

cavity to cavity within a mold region of

a country and line on invoice

cyclical variation occurs when

measuremen­t differs from piece to piece

or product to product but over a short period of time

measuremen­ts may change if a product

such as a hot metal sheet is measured at intervals

if the measuremen­t at the same location

in a piece varies with different pieces

it is an example of cyclical variation

other examples of cyclical variations

are batch to batch variation lot to lot

variation and account activity week to week

temporal variation occurs over a longer

period of time such as machine wear and

tear and changes in efficiency of an

operator before and after lunch

temporal variations may also be seasonal

if the range of positional variation in

a piece is more in Winter than in summer

it is an example of temporal variation

the variation may occur because of

unfavorabl­e working conditions in winter

process drift performanc­e before and

after breaks seasonal and shift-based

differences month-to-month closings and

quarterly returns can be examples of

temporal variation

we will learn about creating a multivari chart next

the outcome of multivari studies is the

multivari chart

it depicts the type of variation in the

product and helps in identifying the

root cause

there are five major steps involved in

creating a multivari chart

select the process and the relevant

characteristics decide sample size

and frequency create a tabulation sheet

plot the chart and link the observed

values

the first step is to select the process

and the relevant characteristics to be measured

for example selecting the process where

the plate of one inch thickness is being produced

in this process four equipment numbered

one to four produce the one inch plates

the characteri­stic to be measured is the

thickness of the plate ranging from 0.95

to 1.05 inches

any plate thickness outside this range

is a defect

the second step is to decide the sample

size and the frequency of data collection

in this example the sample size is five

pieces per equivalent and the frequency

of collecting data is every two hours

starting from eight in the morning to

then the tabulation sheet is created

where the data will be recorded

so one should measure the thickness of

the plate being produced by the four

equipment and prepare a data collection sheet

the third step is to create a tabulation

sheet in this example the tabulation

sheet with data records contains the

columns with time equipment number and plate thickness readings

the fourth step is to plot the chart

in this example a chart can be plotted

with time on the x-axis and plate thickness on the y-axis

the last step is to link the observed values

in this Example The observed values can

be linked by appropriat­e lines

we will continue to learn about creating

a multivari chart in this screen

the path to create a multivari chart in

minitab is by selecting stat then

quality tools followed by multivari chart

the multivari chart created from the recorded data is shown on the screen

the upper specificat­ion limit of 1.05

inches and the lower specificat­ion limit

of 0.95 inches has been marked by Green

data outside these lines are defects

the blue dots show the positional variation

the dots are the measuremen­ts of pieces

in a batch of any single equipment

the black lines join the mean of the

data recorded from the equipment

the mean of the data recorded from the

products of equipment number three is

much below the similar mean of other equipment

this shows that equipment number three

is producing more defects than the other equipment

the red line is the mean of the data

the red line Rises toward the right

which means the data points shift up over time

this may be because of the change in

operator efficiency after a lunch break

multivari chart helps us visually depict

the variations and establish the root causes

in the next screen we will learn about correlation

correlatio­n means associatio­n between

variables simple linear regression and

multiple regression techniques are very

important as they help in validating the

root causes

the correlation coefficient shows the

strength of the relationship between Y

and X

to associate y with a single X and

statistica­lly validate the relationsh­ip

correlation is used in Excel use the

equals CORREL open bracket close bracket

function to calculate correlatio­n

the dependent variable y may depend on

many independen­t variables X

but correlatio­n is used to find the

behavior of Y as one of the X's changes

correlatio­n helps us to predict the

direction of movement and values in y

statistica­l significan­ce of this

movement is denoted by correlatio­n

coefficien­t R it is also known as

Pearson's coefficien­t of correlatio­n in

any correlatio­n the value of the

correlation coefficient is always

between minus 1 and plus 1 a

positive value of R denotes the

direction of movement in both variables

as X increases y also increases and vice

versa negative value of R denotes that

the direction of movement in both

variables is in inverse fashion

as X increases y decreases and as X

decreases y increases when the value of

R is zero it means that there is no

correlatio­n between the two variables

higher the absolute value of R stronger

the correlatio­n between Y and X

absolute value of a number is its value

without the sign

plus 4 has an absolute value of 4 and

minus four again has an absolute value

of 4

an R value of greater than plus 0.85 or

lesser than minus 0.85 indicates a

strong correlatio­n hence r value of

minus 0.95 shows a stronger correlatio­n
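Pearson's r can be computed in one call equivalent to Excel's CORREL function the paired data below is made up for illustration

```python
# Pearson's correlation coefficient r for made-up paired measurements.
from scipy.stats import pearsonr

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]
r, p_value = pearsonr(x, y)
print(r)    # close to +1, a strong positive correlation
```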

the next screen will elaborate on

correlatio­n with the help of an example

and illustrati­ons through Scatter Plots

the four graphs on the screen are

Scatter Plots displaying four different

correlatio­n measures the linear

associatio­n between the dependent

variable or output variable Y and one

independen­t or input variable X

as can be deduced from the graphs a

definite pattern emerges as the absolute

value of correlation coefficient R

increases

it is easier to see a pattern at an r

value of 0.9 and above than at lower values

it is difficult to find a pattern below a

correlation coefficient of 0.5 click the

button to view an example

to understand how correlation helps let us look at an example

a correlatio­n test was performed on the

scores of a set of students from their first grade high school and undergraduation

the undergradu­ation score was the

dependent variable and the first grade and high school scores were the independent variables

the value of correlation coefficient R

was higher between the

undergraduation scores and high school

scores

this means the high school scores have

higher correlation compared to the first grade scores

this states that the performance of students

in high school is a better indicator of

their performanc­e in undergradu­ation

than their performanc­e in the first

grade although the correlatio­n exists as

both the values of R are less than 0.85

it will be difficult to draw a straight line fit

in this screen we will learn about

regression although correlatio­n gives

the direction of movement of the

dependent variable y as independen­t

variable X changes it does not provide

the extent of the movement of Y as X changes

this degree of movement can be determined using regression

if a high percentage of variabilit­y in y

is explained By changes in x one can use

the model to write a transfer equation Y

is equal to f x and use the same

equation to predict future values of Y

the output of regression on Y and X is a

transfer function equation that can

predict values of Y for any other value of X

transfer function is generally denoted

by F and the equation is written as y equals f of x

y can be regressed on one or more X's

simple linear regression is for 1X and

multiple linear regression is for more than one X

the next screen will focus on key concepts of regression

there are two key concepts of regression

transfer function to control Y and vital X identification

click each concept to learn more

the output of regression is a transfer function

although the transfer function f of x

gives the degree of movement in y as X

changes it is not the correct transfer

function to control y as there may be a

low level of correlatio­n between the two

the main thrust of regression is to

discover whether a significan­t

statistica­l relationsh­ip exists between

Y and a particular X that is by looking

at P values based on regression one can

infer the vital X and eliminate the trivial many

the analyze phase helps in understand­ing

if there is statistica­l relevance

between Y and X if the relevance is

establishe­d using metrics from

regression analysis one can move forward

with the tests the simple linear

regression or SLR should be used as a

statistical validation tool in the analyze phase

in this screen we will understand the

concept of simple linear regression

a simple linear regression equation is

represente­d by the equation shown here

in this equation Y is the dependent

variable and X is the independent variable

a is The Intercept of the fitted line on

the y-axis which is equal to the value of Y when X is zero

B is the regression coefficien­t or the

slope of the line and C is the error in

the regression model which has a mean of zero

the next screen will focus on the least

squares method in simple linear regression

with reference to the error mentioned

earlier if correlatio­n coefficien­t of Y

and X is not equal to 1 meaning the

relation is not perfectly linear there

could be several lines that could fit in the scatter plot

notice the two graphs displayed for the

same set of Five Points two different

types of lines are drawn and both of them seem to fit

error refers to the points on the

scatter plot that do not fall on the

straight line drawn the second graph shows a better fit

statistica­l software like minitab fits

the line which has the least value of the sum of squared errors

as is clear from the graph error is the

distance of the point from the fitted line

typically the data lies off the line

in perfect linear relation All Points

would lie on the line and error would be

zero the distance from the point to the

line is the error distance used in the least squares method

let's understand SLR with the help of an example

consider the following example

suppose a farmer wishes to predict the

relationsh­ip between the amount spent on

fertilizers and the annual sales of his crop

he collects the data shown here for the

last few years and determines his

expected Revenue if he spends eight

dollars annually on fertilizer­s

he has targeted sales of thirty one

the steps to perform simple linear

regression in Ms Excel are as follows

copy the data table on an Excel sheet

select all the data from B1 to C6 this

is assuming the years table appears in

click insert and choose the plain scatter chart

it is titled scatter with only markers

the basic scatter chart will appear as shown

right click on the data points in the

scatter chart and choose the option add trendline

then choose the option linear and select

the boxes titled display r squared value on chart and display equation on chart

a linear line will appear which is

called the best fit line or the least squares line

to use the data for regression analysis

the interpretation of the scatter chart is as follows

the R square value or the coefficien­t of

determination conveys if the model is

a good fit here the R square value is

0.3797 it means 38 percent of

variabilit­y in y is explained by X

the remaining 62 percent variation is

unexplaine­d or due to residual factors

other factors like rain amount and

variabilit­y Sunshine temperatur­es seed

type and Seed quality could be tested

the low value of R square statistica­lly

validates a poor relationship between Y

and X

the equation presented cannot be used

for prediction

in a similar situation one should refer

to the cause and effect Matrix and study

the relationship between Y and a different X
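a minimal simple linear regression sketch in the spirit of the fertilizer example is shown below the spend and sales pairs are made-up stand-ins for the table shown on the screen

```python
# Least-squares fit of sales on fertilizer spend; data are made up.
from scipy.stats import linregress

spend = [4, 5, 6, 7, 8]          # annual fertilizer spend
sales = [22, 28, 25, 30, 29]     # annual sales

fit = linregress(spend, sales)
print(f"y = {fit.intercept:.2f} + {fit.slope:.2f}x, R^2 = {fit.rvalue**2:.4f}")
# A low R^2, like the 0.3797 above, means most of the variability in y is
# unexplained, so the fitted equation should not be used for prediction.
```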

we will discuss multiple linear regression next

if a new variable X2 is added to the

regression model the impact of X1 and X2

on Y is studied together

this is known as multiple linear

regression

the value of R square changes due to the

introducti­on of the new variable

the resulting value of R square which

can be used in cases of multiple

regression is known as R square adjusted

the model can be used if R square

adjusted value is greater than 70 percent

we will look at the key concepts in the next screen

the key concepts in multiple linear regression are as follows

the residuals or the difference­s between

the actual value and the predicted value

give an indication of how good the model is

if the errors or residuals are small and

prediction­s use X's that are within the

range of the collected data the model predictions are reliable

the sum of squares total can be

calculated as follows sum of squares

total or SST equals the sum of squares

of regression or SSR plus sum of squares of error or SSE

to arrive at sum of squares of

regression SSR use the formula SSR

equals sum of squares total or SST minus

sum of squares of error or SSE

since SSR is SSE subtracted from SST

value of SSE should be less than SST

r squared is sum of squares of

regression or SSR divided by sum of squares total or SST
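the sums-of-squares identity can be demonstrated on any fitted line as in this sketch with made-up data

```python
# SST = SSR + SSE and R^2 = SSR / SST for a least-squares line.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])     # made-up observations

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

SST = np.sum((y - y.mean()) ** 2)   # total variation in y
SSE = np.sum((y - y_hat) ** 2)      # residual (error) variation
SSR = SST - SSE                     # variation explained by the regression
print(SST, SSR, SSE, SSR / SST)     # the last value is R squared
```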

calculating SST and SSE helps in arriving at SSR and R squared

to get a sense of the error in the

fitted model calculate the value of y

for a given data using the fitted line equation

to check for error take two observatio­ns

of Y at the same X the most important

thing to remember in regression analysis

is that the obtained fitted line

equation cannot be used to predict y for X values outside the range of the collected data

for example it would not be possible to

predict the amount spent on fertilizer­s

for a forecasted sales of fifteen

both data points lie outside the data

set on which regression analysis is

if Y is dependent on many X's then

simple linear regression analysis can be

used to prioritize X but it requires

running separate regressions on y with each X

if an X does not explain variation in y

then it should not be explored any further

these were the interpreta­tions of the

simple linear regression equation

in the next screen we will learn that

despite a relationsh­ip being establishe­d

between two variables the change in one

may not cause a change in the other

let us discuss the difference between

correlatio­n and causation in the

following screen a regression equation

denotes only a relationship between the variables

this does not mean that a change in one

variable will cause a change in the

if number of schools and incidents of

crime in a city rise together there may

be a relationsh­ip but no causation

the increase in both the factors could

be due to a third factor such as a rising population

in other words both of them may be

dependent variables to an independent variable

consider the graphs shown on the screen

the graphs on the left show the

relations between number of sneezes and

incidence of death with respect to time

both have a positive correlatio­n

finding a positive correlatio­n between

incidents of deaths and number of

sneezes does not mean we assume sneezing

is the cause of somebody's death despite

the correlatio­n being very strong as

depicted in the graph on the right

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss hypothesis testing

let us learn about statistica­l and

practical significan­ce of hypothesis

tests in the following screen

the difference­s between a variable and

its hypothesiz­ed value may be

statistica­lly significan­t but may not be

practical or economical­ly meaningful

for example based on a hypothesis test

neutral worldwide Inc wants to implement

a trading strategy which is proven to

provide statistically significant

positive returns

however it does not guarantee that trading

on it will result in

economical­ly meaningful positive returns

when the logical reasons are examined

before implementation the returns are assessed as follows

the returns may not be significan­t when

statistically proven strategy is implemented

the returns may not be economical­ly

significan­t after accounting for taxes

transaction costs and risks inherent in the strategy

thus there should be a practical or

economic significance study before

implementing any statistically significant strategy

the next screen will briefly focus on

the conceptual difference­s between a

null and an alternate hypothesis

the conceptual difference­s between a

null and an alternate hypothesis are as

assume the specificat­ion of the current

process is itself the null hypothesis

the null hypothesis the basic

assumption for any activity or

experiment is represente­d as h o

a hypothesis cannot be proved it can only be disproved

it is important to note that if null

hypothesis is rejected alternativ­e

hypothesis must be right for example

assuming that a movie is good one plans

to watch it therefore the null

hypothesis in this scenario will be that the movie is good

alternativ­e hypothesis or h a challenges

the null hypothesis or is its converse

in this scenario the alternate hypothesis will be that the movie is not good

in the following screen we will discuss type 1 and type 2 errors

rejecting a null hypothesis when it is

true is called type 1 error it is also known as producer's risk

for example the rejection of a product

by the QA team when it is not defective

will cause loss to the producer suppose

when a movie is good it is reviewed to

be not good this reflects type 1 error

in this case the null hypothesis is

rejected when it is actually true

the two important points to be noted are

significan­ce level or Alpha is the

chance of committing a type 1 error

the value of alpha is 0.05 or 5 percent

accepting a null hypothesis when it is

false is called type 2 error it is also known as consumer's risk

for example the acceptance of an

defective product by the quality analyst

of an organization will cause loss to

the consumer

minimizing type 2 error requires

acceptance criteria to be very strict

suppose when a movie is not good it is

reviewed to be good this reflects type 2

error in this case the alternate

hypothesis is rejected when it was actually true

the two important points to be noted are

beta is the chance of committing a type

2 error the value of beta is 0.2 or 20

percent

any experiment should have as little beta

error as possible

the next screen will cover the key

points to remember about type 1 and type 2 errors

as you start dealing with the two types

of Errors keep the following points in

the probabilit­y of making one type of

error can be reduced when one is willing

to accept a higher probability of making the other type

suppose the management of a company

producing pacemakers wants to ensure no

defective pacemaker reaches the consumer

so the quality assurance team makes

stringent guidelines to inspect the pacemakers

this would invariably decrease the beta

error or type 2 error but this will also

increase the chance that a non-defect­ive

pacemaker is declared defective by the quality assurance team

thus Alpha error or type 1 error increases

if all null hypotheses are accepted to

avoid rejecting true null hypothesis it

will lead to type 2 error typically

Alpha is set at 0.05 which means that

the risk of committing a type 1 error is 5 percent

in case of any product the teams must

decide what type of error should be less

and set the values of Alpha and beta accordingly

in the next screen we will discuss the power of a test

the power of a hypothesis test or the

power of test is the probabilit­y of

correctly rejecting the null hypothesis

power of a test is represente­d by 1

minus beta where beta is the type 2 error

the probabilit­y of not committing a type

2 error is called the power of a test

the power of a test helps in improving

the advantage of hypothesis testing

the higher the power of a test the

better it is for purposes of hypothesis

testing given a choice of tests the one

with the highest power should be selected

the only way to decrease the probabilit­y

of a type 2 error given the significan­ce

level or probability of type 1 error is to increase the sample size

it is important to note that quality

inspection is done on Sample pieces and

so beta error is a function of the sample size

if the sample size is not appropriat­e

the defects in a product line could

easily be missed out giving a wrong

perception of the quality of the product

this will increase the type 2 error to

decrease this error the quality

assurance team has to increase the sample size

in hypothesis testing Alpha is called

the significan­ce level and one minus

Alpha is called the confidence level of

the test in the next screen we will

focus on the determinan­ts of sample size

the sample size can be calculated by

answering three simple questions

how much variation is present in the population

at what interval does the true

population mean need to be estimated

and how much representation error is acceptable

continuous data is data which can be measured on a continuum

the sample size for continuous data can

be determined by the formula shown on the screen

we will learn about the standard sample

size formula for continuous data in the next screen

representa­tion error or Alpha error is

generally assumed to be five percent or

0.05

hence the expression of 1 minus Alpha by

2 becomes 0.975 or 97.5 percent looking up the

value of Z 97.5 from the Z table gives

1.96

the expression reduces to the one shown

on the screen

when Alpha is five percent Z is 1.96

to detect a change that is half the

standard deviation one needs to get at

least 16 data points for the sample

click the example tab to view an example

of continuous data calculation using the formula

the population standard deviation for

the time to resolve customer problems is 30 hours

what should be the size of a sample that

can estimate the average problem

resolution time within plus or minus 5

hours tolerance with 99 confidence

to know with 99 confidence that the time

to resolve a customer problem ranges within plus or minus 5 hours

the value of Z for 99.5 must be

2.575 a good result should fall outside

the range of 0.5 percent which is one in 200

it is expected that 199 out of 200

trials will confirm a proper conclusion

the calculatio­n gives a result of 238.70

one cannot have 0.70 of a sample so one

needs to round up to the nearest integer

if one rounds down to 238 the

significance level is greater than 0.01

which indicates the confidence is less

than 99 percent

using 239 reduces Alpha and increases

the confidence

the rounded up value 239 means the

expectations are being met for the

estimate
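both results above follow from the same formula n equals Z times sigma over Delta the whole squared as this sketch shows

```python
# Sample size for continuous data: n = (Z * sigma / delta)^2.
from math import ceil
from scipy.stats import norm

def sample_size(sigma, delta, confidence):
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided Z value
    return (z * sigma / delta) ** 2

print(sample_size(1, 0.5, 0.95))    # ~15.4 -> at least 16 data points
n = sample_size(30, 5, 0.99)        # sigma = 30 hours, tolerance +/- 5 hours
print(n, ceil(n))                   # ~238.9 (238.70 with Z = 2.575) -> 239
```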

we will learn about the standard sample

size formula for discrete data in this

screen like continuous data one can find

out the sample size required while

dealing with discrete population data

if the average population proportion

non-defect­ive is p then population

standard deviation can be calculated by

using the expression shown on the screen

the expression for sample size is

presented it is important to note that in

this expression the interval or

tolerance Delta is expressed as a proportion

click the example tab to view an example

of discrete data calculatio­n using

the non-defect­ive population proportion

for pen manufactur­ing is 80 percent

what should be the sample size to draw a

sample that can estimate the proportion

of compliant pens within plus or minus

five percent with an alpha of five

percent consider calculatin­g the sample

size for discrete data for which the

population proportion non-defect­ive is

80 percent and the tolerance limit is

within plus or minus five percent

substituti­ng the values it is found the

sample size should be 246. in this

example to know if the population

proportion for good pens is still within

75 to 85 percent and to have 95 percent

confidence that the sample will allow a

good conclusion one needs to inspect 245.86 pens

0.86 of a pen cannot be inspected so the

value is rounded up to maintain the confidence level

inspecting 245 or fewer pens reduces the confidence

this means the Z value would be lower

than 1.96 and Alpha would be greater than five percent

suppose one is willing to accept a

greater range in the estimate the

proportion is within 20 percent of the

past results and approximat­ely within

one standard deviation of the proportion

Delta changes to 0.20 and the number of

needed samples is 15.4 which is approximately 16
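the pen example follows from n equals Z over Delta the whole squared times p times 1 minus p as below

```python
# Sample size for discrete data: n = (Z / delta)^2 * p * (1 - p).
from math import ceil

Z, p = 1.96, 0.80                 # alpha = 5%, non-defective proportion
for delta in (0.05, 0.20):        # tolerance of +/-5% and +/-20%
    n = (Z / delta) ** 2 * p * (1 - p)
    print(delta, round(n, 2), ceil(n))   # 245.86 -> 246, and 15.37 -> 16
```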

this screen will focus on selecting the hypothesis test

though the basic determinan­ts of

accepting or rejecting a hypothesis

remain the same various tests are used

depending on the type of data from the

figure shown on the screen You can

conclude the type of test to be

performed based on the kind of data and the parameters known

for discrete data if mean and standard

deviation are both known the z-test is

used and if mean is known but standard

deviation is unknown the t-test is used

if the standard deviation is unknown and

if the sample size is less than 30 it is

preferable to use the t-test if variance

is known one should go for chi-square­d

test if mean and standard deviation are

known for a set of continuous data it is

recommende­d to go for the z-test

for mean comparison of two with

standard deviation unknown go for t-test

and for mean comparison of many with

standard deviation unknown go for f test

also if the variance is known for

continuous data go for f-test the next

few screens will discuss in detail the

tests for mean variance and proportions

let us understand hypothesis test for

means theoretically through an example

the examples of hypothesis testing based

on the types of data and values

available are discussed here

the value of alpha can be assumed to be

five percent or 0.05 suppose you want to

check for the average height of a

population North American males are

selected as the population here

117 men are gathered as the sample and

the readings of their height are taken

the null hypothesis is that the average

height of North American males is 165

centimeter­s and the alternate hypothesis

is that the height is lesser or greater

than 165 centimeter­s consider the sample

size n as 117 for z-test and sample size as 25 for t-test

sample average or X bar is 164.5

using the data given let us calculate

the Z calc value and T calc value

the population height is 165 centimeter­s

with a standard deviation of 5.2

centimeter­s and the average height of

the sample group is 164.5 centimeter­s

the test for significant difference is performed as follows

first let us compute Z calc value using

the formula given on the screen

hence the Z calc is 1.04 which is

less than 1.96 or Z critical

therefore the null hypothesis cannot be

rejected since Z 0.05 equals 1.96 the

null hypothesis is not rejected at five

percent level of significan­ce

the statistica­l notation is shown on the

thus a conclusion based on the sample

collected is that the average height of

North American males is 165 centimeter­s

if the population standard deviation is

not known a t-test is used it is similar

to the z-test instead of using the

population parameter or Sigma the sample

statistic standard deviation or S is used

in this example the S value is 5.0 let

us now compute the T value using the formula shown on the screen

the statistica­l notation to reject null

hypothesis is shown on the screen

the T critical value is 2.064 and we

know the T calc value is 0.5 which is lower

therefore the null hypothesis cannot be

rejected at five percent level of significance

thus a conclusion based on the sample

collected is that the average height of

North American males is 165 centimeter­s

the conclusion of not rejecting the null

hypothesis is based on the assumption

that the 25 males are randomly selected

from all males in North America

null and alternativ­e hypotheses are same

for both z-test and t-test in both the

examples the null hypothesis is not rejected
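both height tests can be reproduced directly as in this sketch

```python
# One-sample z-test (known sigma) and t-test (sample s) for mean height.
from math import sqrt
from scipy.stats import norm, t

mu = 165                                     # hypothesized mean (cm)

z_calc = (164.5 - mu) / (5.2 / sqrt(117))    # n=117, sigma=5.2
print(abs(z_calc), norm.ppf(0.975))          # ~1.04 < 1.96: do not reject H0

t_calc = (164.5 - mu) / (5.0 / sqrt(25))     # n=25, s=5.0
print(abs(t_calc), t.ppf(0.975, 24))         # 0.5 < 2.064: do not reject H0
```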

in the next screen we will understand

the hypothesis test for variance with an example

in hypothesis test for variance

chi-square test is used in the case of a

chi-square test the null and Alternate

hypotheses are defined and the values of

chi-square critical and chi-square

calculated are compared

to understand this concept with an

example click the button given on the screen

the null hypothesis is that the

proportion of wins in Australia or

abroad is independent of the country

played in

the alternate hypothesis is that the

proportion of wins in Australia or

abroad is dependent on the country

played in

chi-square critical is 6.251 and

chi-square calculated is 1.36

since the calculated value is less than

the critical value the proportion of

wins of the Australia hockey team is

independent of the country played in

in this screen we will discuss

hypothesis tests for proportion­s with an

the hypothesis test on population

proportion can be performed to

understand this with an example click

the button given on the screen

let us perform hypothesis tests on proportions

the null hypothesis is that the

proportion of smokers among males in a region is 0.10

the alternativ­e hypothesis is the

proportion is different than 0.10

in notation it is represente­d as null

hypothesis is p equals P0 against

alternativ­e hypothesis is p different

a sample of 150 adult males are

interviewe­d and it is found that 23 of

them are smokers thus the sample

proportion is 23 divided by 150 which is 0.1533

substituti­ng this value in the

expression of Z given on the screen gives a Z value of about 2.18

you can reject the null hypothesis at

level of significan­ce Alpha if Z is

greater than z alpha for five percent

level of confidence the Z value should

be 1.96 since the calculated Z value is

more than what is required for five

percent level of confidence the null hypothesis is rejected

hence it can be concluded that the

proportion of smokers in the region is greater than 0.10
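the smoker-proportion test reduces to a one-line calculation as this sketch shows

```python
# One-sample proportion z-test: 23 smokers out of 150 versus p0 = 0.10.
from math import sqrt

p0, n, successes = 0.10, 150, 23
p_hat = successes / n                          # 0.1533
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)     # ~2.18
print(p_hat, z, z > 1.96)                      # True: reject H0
```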

in this screen we will focus on

comparison of means of two processes

means of two processes are compared to

understand whether the outcomes of the

two processes are significantly different

this test is helpful in understand­ing

whether a new process is better than an existing one

this test can also determine whether the

two samples belong to the same

population or different population­s

it is especially required for

benchmarki­ng to compare an existing

process with another benchmarke­d process

let us proceed to the next screen to

learn about the paired comparison tests

the example of two mean t-test with

unequal variances is discussed here

null and alternate hypotheses are defined

the average heights of men in two

different sets of people are compared to

see if the means are significantly different

for this test the sample sizes means and

variances are required to calculate the T value

two samples of sizes N1 of 125 and N2 of

110 are taken from the two population­s

the mean value of sample size 1 is

167.3 and sample size 2 is 165.8

the standard deviation for sample sizes

1 and 2 are 4.2 and 5.0 respective­ly

using the formula given on the screen

the T value is derived as 2.47

the null hypothesis is rejected if the

calculated value of T is more than the critical value of T

in other words reject null hypothesis at

level of significance Alpha if computed T

value is greater than T of df Alpha divided

by 2

with a t-test we're comparing two means

and the population parameter Sigma is unknown

therefore we're pooling the sample

standard deviations in order to

estimate the standard error

the variances are weighted by the number

of data points in each sample group

since t at 233 degrees of freedom and 0.025 equals about 1.96 the

null hypothesis is rejected at five

percent level of significan­ce
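the T value of 2.47 follows from the unequal-variance form of the two-sample statistic as below

```python
# Two-sample t statistic with unequal variances:
# t = (x1 - x2) / sqrt(s1^2/n1 + s2^2/n2).
from math import sqrt

n1, x1, s1 = 125, 167.3, 4.2
n2, x2, s2 = 110, 165.8, 5.0
t_calc = (x1 - x2) / sqrt(s1**2 / n1 + s2**2 / n2)
print(t_calc)   # ~2.47 > ~1.96, so the null hypothesis is rejected
```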

the test used here is known as the

two-sample t-test and is considered a very

powerful test in the next screen we will

look into the example of the paired

comparison hypothesis test for variance

it is important to understand the

different types of tests through an example

Susan is trying to compare the standard

deviation of two companies according to

her the earnings of company a are more

volatile than those of Company B

she has been obtaining earnings data for

the past 31 years for company a and for

the past 41 years for Company B

she finds that the sample standard

deviation of company A's earnings is

four dollars forty cents and that of company B's earnings

is three dollars ninety cents

determine whether the earnings of

company a have a greater standard

deviation than those of Company B at

five percent level of significan­ce click

the button given on the screen to know the answer

Susan has the data of the earnings of

the companies distributions rarely have the same variance

when processes are improved one of the

strategies is to reduce the variation

it is important to be able to compare variances

a null hypothesis would indicate no difference in the variances

if it can be rejected and the variance

is lower one can claim success

the statistica­l notation for this

example is given on the screen

suppose one has to compare two sets of

company data Susan has looked at the earnings data

she has been studying the effects of

strategy management styles and

Leadership profiles on the earnings of

these companies there are significant

differences

she wants to know if they have an effect

on the variance in the earnings

she has sample data over several decades

for each company by the given data it

can be concluded that earnings of

company a have a greater standard

deviation than those of Company B

in calculatin­g the f-test statistic

always put the greater variance in the numerator

let us look at the f-test example of

hypothesis test for equality of variance

the degrees of freedom for company a and

Company B are 30 and 40 respective­ly

the critical value from the F table equals 1.74

the null hypothesis is rejected if the

f-test statistic is greater than 1.74

the calculated value of f test statistic

is 1.273 and therefore at the 5 percent

significance level the null hypothesis

is not rejected
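Susan's comparison can be reproduced with scipy's F distribution as in this sketch

```python
# F = (4.40 / 3.90)^2 with 30 and 40 degrees of freedom at alpha = 0.05.
from scipy.stats import f

F_calc = (4.40 / 3.90) ** 2              # ~1.273, larger variance on top
F_crit = f.ppf(0.95, 30, 40)             # ~1.74
print(F_calc, F_crit, F_calc > F_crit)   # False: do not reject H0
```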

the next screen will focus on hypothesis

tests f-test for independen­t groups

a restaurant which wants to explore the

recent overuse of avocados suspects

there is a difference between Two Chefs

and the number of avocados used to

prepare the salads the data shown in the

table is the measure of avocados in ounces

the weight of avocado slices used in

salads prepared by two different chefs

is measured to determine if one chef is using more than the other

perhaps the restaurant­s expenditur­es on

avocados is greater this month than the

average of the past 12 months this is

assuming there is no change in avocado

prices or the amount of avocados being purchased

click the tab to learn to conduct an f-test in Excel

the f-test is conducted in Ms Excel

through the following steps open MS

Excel click data then

click data analysis please follow the

facilitator instruction on how to

enable the data analysis toolpak

select f-test two-sample for variances in

variable 1 range select the data set for

group a and in variable 2 range select the data set for Group B

the screenshot of the f-test window is shown on the screen

in this screen we will discuss the f-test assumptions

before interpreti­ng the f-test the

assumption­s to be considered are null

hypothesis there is no significan­t

statistica­l difference between the

variances of the two groups thus

concluding any variation could be

because of chance this is common cause

of variation the alternate hypothesis states

there is a significan­t statistica­l

difference between the variances of the

two groups thus concluding that

variations could be because of

this is special cause of variation

the following screen will focus on

the interpretations for the conducted

f-test from the Excel result sheet are as follows

if the p-value is low or below 0.05 the null

hypothesis must be rejected thus the null

hypothesis is rejected with 97 percent confidence

also the fact that variation could only

be due to common cause of variation is rejected

it is inferred from the test that there

could be assignable causes of variation

or special causes of variation

Excel provides the descriptiv­e

statistics for each variable it also

gives the degrees of freedom for each group

f is the calculated F statistic F

critical is a reference number found in

a statistics Book Table P of f less than

or equal to F is the probabilit­y that F

really is less than F critical or that

the null hypothesis would be falsely rejected

since the p-value is less than the alpha

the null hypothesis can be confidentl­y

rejected alongside conducting a

hypothesis test a meaningful conclusion

from the test has been drawn the

following screen will focus on

hypothesis test t-test for independen­t

as discussed earlier, the table shows the measure of avocados in ounces used by the two chefs, and the question is whether there is a significant difference in their means. if a significant amount of difference is found, it can be concluded that there is a possibility of a special cause of variation. the next screen will demonstrate how to conduct the test. the two-sample independent t-test inspects two groups of data for a significant difference in their means; the idea is to conclude if there is a significant amount of difference. if there is statistical evidence of variation, one can conclude a possibility of a special cause of variation.

the steps for conducting a two-sample independent t-test in MS Excel are: open MS Excel, click data, and click data analysis; select the two-sample independent t-test; in the variable 1 range select the data set for group A, and in the variable 2 range select the data set for group B; keep the hypothesized mean difference as zero.
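here is a minimal Python sketch of the same two-sample independent t-test, reusing the hypothetical chef data from the F-test sketch above; a hypothesized mean difference of zero is the default.

```python
# two-sample independent t-test on invented avocado weights
from scipy.stats import ttest_ind

chef_a = [3.1, 2.8, 3.5, 3.0, 3.3, 2.9]
chef_b = [2.6, 2.7, 2.5, 2.9, 2.8, 2.6]

t_stat, p_two_tailed = ttest_ind(chef_a, chef_b)   # tests mean A == mean B
print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tailed:.3f}")
if p_two_tailed > 0.05:
    print("fail to reject the null: the means are statistically the same")
```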

in the following screen we will focus on the assumptions. the assumptions for a two-sample independent t-test are: null hypothesis, there is no significant statistical difference between the means of the two groups, thus concluding any variation could be because of chance; this is common cause variation. alternate hypothesis, there is a significant statistical difference between the means of the two groups, thus concluding that variations could be because of assignable causes; this is special cause variation.

the null hypothesis states the mean of group A is equal to the mean of group B, and the alternate hypothesis states that the mean of group A is not equal to the mean of group B. note that the alternate hypothesis tests two directions, mean of A less than mean of B and mean of A greater than mean of B; thus a two-tailed probability needs to be used.

before we interpret the t-test results, a quick note: hey there, learners, check out our certified lean Six Sigma Green Belt certification training course and earn a Green Belt certification. to learn more about this course you can click the course link in the description box below.

let us compare the two-tailed and one-tailed probability in the next screen. two-tailed probability and one-tailed probability are used depending on the direction of the alternate hypothesis. if the alternate hypothesis tests more than one direction, either less or more, use a two-tailed probability value from the test; for example, if mean of A is not equal to mean of B, then it is a two-tailed probability. if the alternate hypothesis tests only one direction, use a one-tailed probability value from the test; for example, if mean of A is greater than mean of B, then it is a one-tailed probability. in the next screen let us look at the two-sample independent t-test results and their interpretation.

the results are shown in the table on the screen. the inference is as follows: as the two-tailed probability is being tested, the p-value of the two-tailed test is 0.24, which is greater than 0.05. if the p-value is greater than 0.05, the null hypothesis is not rejected; this means one cannot reject the fact that there is no significant statistical difference between the two means. similar to the f-test, Excel provides the descriptive statistics for each group or variable, and the t stat is shown. Excel also shows one-tailed and two-tailed data: for the one-tailed test the alpha is 0.05 and the error is expected to be in one direction, while for the two-tailed test the error is split between both directions. in this example, t stat, or t calculated, is less than either t critical value; therefore the null hypothesis cannot be rejected, and thus it can be inferred that both the groups are statistically the same.

we will discuss the paired t-test in the next screen. the paired t-test is another hypothesis test from the family of t-tests; the following points will help in understanding it. the paired t-test is one of the most powerful tests from the t-test family. the paired t-test is conducted before and after the process change to be measured. for example, a group of students score X in CSSGB before taking the training program; post the training program, the scores are taken again, and one needs to find out if there is a statistical difference between the two sets of scores. if there is a significant difference, the inference could be that the training was effective. it is important to note that the paired t-test interpretation shows the effectiveness of the improvement; this is the main reason why paired t-tests are often used in the improve stage. let us learn about sample variance next.

sample variance is defined as the average of the squared differences from the mean. the sample variance, that is, s squared, can be used to calculate and understand the degree of variation of a data set, and it can also be used in further statistics. however, it cannot be used or explained directly, because its value is not in the units of the original data; to use the value, you have to first convert it into the standard deviation and then combine it with the mean.

click the button to know the steps for calculating the sample variance. step 1: calculate the mean or average of the sample. step 2: subtract each of the values from the mean. step 3: calculate the square value of the differences. step 4: take the average of the squared differences.

let us understand how to calculate the sample variance with the help of an example. consider the sample of weights whose mean is 140 pounds. when you subtract the individual values from the mean, take the square value of the results, and then take the average of the squared differences, you will get the sample variance, which here is 1936. this number is not useful as it is; in order to get the standard deviation, take the square root of the sample variance: the square root of 1936 equals 44.

the standard deviation in combination with the mean will tell you how much the data varies around the center. in this example, if your mean is 140 and your standard deviation is 44, you can conclude that the majority of people weigh between 96 pounds (mean minus 44) and 184 pounds (mean plus 44).
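as a quick cross-check, here is a tiny Python sketch of the four steps above; the weights are invented, since the course only states the resulting mean (140) and variance (1936).

```python
# sample-variance steps on a hypothetical weight sample, in pounds
weights = [98, 120, 135, 148, 162, 177]

mean = sum(weights) / len(weights)                   # step 1
diffs = [w - mean for w in weights]                  # step 2
squared = [d ** 2 for d in diffs]                    # step 3
variance = sum(squared) / len(squared)               # step 4, as described
# note: statistical software usually divides by n - 1 for a sample instead
std_dev = variance ** 0.5
print(f"mean = {mean:.1f}, variance = {variance:.1f}, std dev = {std_dev:.1f}")
```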

let us proceed to the next screen, which focuses on the analysis of variance, or ANOVA, which is the comparison of more than two means.

a t-test is used for one sample, and two-sample tests are used for comparing two means; to compare the means of more than two samples, use the ANOVA method. ANOVA stands for analysis of variance. ANOVA does not tell which mean is better; it helps in understanding whether all the sample means are equal or not, and the samples shortlisted based on the ANOVA output can further be tested. one important aspect of ANOVA is that it generalizes the t-test to include more than two samples; performing multiple two-sample t-tests instead would increase the chance of committing a type 1 error, hence ANOVA is useful in comparing two or more means. the next screen will help in understanding this concept through an example.

consider the takeaway food delivery times of three different outlets. is there any evidence that the averages for the three outlets differ? in other words, can the delivery time be said to depend on the outlet? the null hypothesis will assume that the means of the three outlets are equal. if the null hypothesis is rejected, it would mean that there are at least two outlets that are different in their mean delivery times.

in Minitab one can perform ANOVA in one way. ensure that the data of the table is stacked in two columns; in the main menu, go to stat, ANOVA, and then one-way. the left column of the table will have the outlets and the right column will have the time in minutes; this is similar to the table shown on the screen. in the one-way analysis of variance window, select the response as delivery time and the factor as outlet, and click ok. the output of this process is shown here; notice the p-value, which is much higher than 0.05.

the steps to perform ANOVA in Excel are as follows: after entering the data into a spreadsheet, select the ANOVA single factor test from the data analysis menu, select the array for analysis, designate that the data is in columns, and select ok. Excel shows the descriptive statistics for each column in the top table. in the second table, the ANOVA analysis shows whether the variation is greater between the groups or within the groups. it shows the sum of squares, or SS; the degrees of freedom, DF; the mean squares, MS, which is the sum of squares divided by its degrees of freedom, that is, a variance; the F statistic, MS between divided by MS within; the p-value; and F critical from a reference table. the F and P are calculated for the variation that occurs within each of the groups and between the groups. if the differences between the groups are significant, it would be expected that the between-groups SS is much higher than the within-groups SS.
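for comparison, a hedged one-way ANOVA sketch in Python is shown below; the delivery times in minutes are invented placeholders, not the course's table.

```python
# one-way ANOVA across three outlets' delivery times (hypothetical data)
from scipy.stats import f_oneway

outlet_1 = [28, 32, 30, 29, 31]
outlet_2 = [31, 33, 29, 30, 32]
outlet_3 = [30, 29, 31, 33, 28]

f_stat, p_value = f_oneway(outlet_1, outlet_2, outlet_3)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("fail to reject the null: no significant difference between outlets")
```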

let us now interpret the Minitab ANOVA output. since the p-value is more than 0.05, the null hypothesis is not rejected; this means there is no significant difference between the means of delivery time for the three outlets. based on the confidence intervals, it is found that the intervals overlap, which means there is little that separates the means of the three outlets. this was a one-way ANOVA, where there was only one factor to be benchmarked, that is, the outlet of delivery; if there are two factors, you may use the two-way ANOVA.

in this screen we will learn in detail about the chi-square distribution. the chi-square distribution is one of the most widely used probability distributions in inferential statistics, also known as hypothesis testing, and the distribution is used in many statistical tests. when used in a hypothesis test, it needs only one sample for the test to be conducted. the chi-square distribution, also written as chi squared, with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables.

suppose a field has nine positions for nine players. player one comes in and can choose among all nine positions available, player two can choose among the remaining eight, and so on; after the first eight players have chosen their positions, the last player has no choice and must take the only remaining position. the eight players are free to choose, so in a playing field of nine, eight is the degree of freedom for this example. conventionally, the degree of freedom is n minus 1.

for example, if w, x, y, and z are four random variables with standard normal distributions, the random variable F, which is the sum of w squared, x squared, y squared, and z squared, has a chi-square distribution. the degrees of freedom of the distribution, or the DF, equals the number of normally distributed variables used; in this case DF equals four. the formula for the chi-square statistic is shown on the screen; it is important to note that f-sub-o stands for an observed frequency and f-sub-e stands for an expected frequency.
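written out, the standard chi-square index the narration is describing, summed over all cells of the table, is:

```latex
\chi^2 \;=\; \sum \frac{(f_o - f_e)^2}{f_e}
```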

the next screen will explain a chi-square example. suppose the Australian hockey team wishes to analyze its wins at home and abroad against four different countries. the data has two classifications, and the table is also known as a two-by-four contingency table, with two rows and four columns. the expected frequencies can be calculated assuming there is no relationship between the two classifications; thus the expected frequency for each observed frequency is equal to the product of the row total and the column total divided by the grand total. one has to find out how to calculate the expected frequency for each cell: if the observed frequency is three wins against South Africa in Australia, then its expected frequency uses the total wins at home, which is 21, multiplied by the South Africa column total, divided by the total number of wins, or 31. similarly, the expected population parameters for all cases are found.

in this step all the information of the previous screen is combined and the table is populated; the estimated population parameters are calculated and added. the formula compares the observed frequencies with the expected ones to calculate the final chi-square index, which in this case is 1.36.
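a hedged Python sketch of the same contingency-table test follows; the per-country win counts are invented so that home wins total 21 and all wins total 31, matching the transcript's figures, but the split across the four countries is assumed.

```python
# chi-square test of independence on an assumed 2x4 win table
import numpy as np
from scipy.stats import chi2_contingency

wins = np.array([
    [3, 6, 5, 7],   # home wins against four countries (sums to 21)
    [2, 3, 2, 3],   # away wins (sums to 10; grand total 31)
])

chi2, p_value, dof, expected = chi2_contingency(wins)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")   # dof = (2-1)(4-1) = 3
```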

it is important to note that there is a different chi-square distribution for each of the different numbers of degrees of freedom. for the chi-square distribution, the degrees of freedom are calculated from the number of rows and columns in the contingency table; the equation is the number of rows minus one, multiplied by the number of columns minus one, which here gives three degrees of freedom. assuming an alpha of 10 percent, the chi-square table is consulted and a critical chi-square index of 6.251 is arrived at; the calculated chi-square value is 1.36. both values of the chi-square index should be plotted: the critical chi-square value divides the whole region into acceptance and rejection, while the calculated chi-square value is based on the data and conveys whether the data falls into the acceptance or rejection region. therefore, as the calculated value is less than the critical value and falls in the acceptance region, the proportion of wins of the Aussie team at home or abroad has nothing to do with the country played against.

let us proceed to the next topic of this lesson in the next screen. in this topic we will learn in detail about hypothesis testing with non-normal data; let us begin with the Mann-Whitney test. the Mann-Whitney test, also known as the Mann-Whitney U test, is a non-parametric test which is used to compare two independent groups. in this test the value of alpha is by default set at 0.05, and the rejection and acceptance conditions remain the same for different cases: if p is less than alpha, reject the null hypothesis; if p is greater than alpha, fail to reject the null hypothesis. the aim of this test is to rank the entire data available for each condition and then compare the total outcome of each group.

click the button to know the steps to perform the test. to perform the Mann-Whitney test, first rank all the values from low to high without paying any attention to the group to which each value belongs; the smallest number gets a rank of one, and the largest number gets a rank of n, where n is the total number of values in both groups. if there are ties, continue to rank the values anyway, pretending they are different; then find the average of the ranks for all the identical values and assign that average to each of them, and continue this till all the whole-number ranks have been used. next, sort the values back into their two groups; these can now be used for the Mann-Whitney U test. summate the ranks for the observations from sample one, and then summate the ranks in sample two.

let us look at an example of the Mann-Whitney test. suppose you have two sets of data, G1 and G2; the G1 values are 14, 2, 5, 16, and 9, and the G2 values are 4, 2, 18, 14, and 8. now combine the G1 and G2 values, sort them in ascending order, and mention the group against each value. next, rank the values from 1 to 10 and check if any values are identical. take an average of the ranks of the identical values and place it against the identical values in the final-rank column; hence the average final rank is 1.5 for ranks 1 and 2, and similarly the average final rank is 7.5 for ranks 7 and 8. next, calculate R1 and R2 by adding the ranks of groups 1 and 2 respectively; in this example the R1 value is 28 and the R2 value is 27. from the given data, 5 is the value of both N1 and N2.

the formula for the Mann-Whitney U test for the N1 and N2 values is: U1 equals N1 multiplied by N2, plus N1 multiplied by (N1 plus 1) divided by 2, minus R1; similarly, U2 equals N1 multiplied by N2, plus N2 multiplied by (N2 plus 1) divided by 2, minus R2. in this example the value of U1 is 12 and U2 is 13. now the U value can be calculated by taking the minimum value among 12 and 13, which is 12. look up the Mann-Whitney U test table for N1 equals 5 and N2 equals 5; you will get the critical value of U as 2. to be statistically significant, the obtained U has to be equal to or less than this critical value. our calculated value of U is 12, which is not less than or equal to 2; that means there is no statistically significant difference between the two groups. in this screen we will learn about the Kruskal-Wallis test.
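before moving on, here is a quick check of the Mann-Whitney example in Python, using the transcript's own data; note that scipy reports U for the first sample, so the smaller U of 12 is recovered explicitly.

```python
# Mann-Whitney U test on the transcript's G1 and G2 data
from scipy.stats import mannwhitneyu

g1 = [14, 2, 5, 16, 9]
g2 = [4, 2, 18, 14, 8]

u_stat, p_value = mannwhitneyu(g1, g2, alternative="two-sided")
u = min(u_stat, len(g1) * len(g2) - u_stat)   # the smaller U, 12 here
print(f"U = {u}, two-sided p = {p_value:.3f}")
```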

the Kruskal-Wallis test is named after William Kruskal and W. Allen Wallis. it is also a non-parametric test, used for testing the source of origin of samples, for example, whether the samples originate from the same population. the characteristics of the Kruskal-Wallis test are as follows: it is a one-way analysis of variance by ranks; it compares the medians of two or more samples to find out if the samples are from different populations. since this test is a non-parametric method, it does not assume a normal distribution of the residuals, unlike the analogous one-way analysis of variance. for this test, the null hypothesis is that the medians of all groups are equal, and the alternate hypothesis is that at least one population median of one group is different from the population median of at least one other group.

let us learn about the Mood's median test in the next screen. the Mood's median test is a non-parametric test that is used to test the equality of medians from two or more populations. this test works when the output Y variable is continuous, discrete-ordinal, or discrete-count, while the input X variable is discrete with two or more attributes. click the button to view the steps involved in the Mood's median test. following are the steps in the Mood's median test: first, find the median of the combined data set; next, find the number of values in each sample that are greater than the median and form a contingency table; then find the expected value for each cell; and finally find the chi-square value.
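a hedged sketch of both rank-based tests follows; the three samples are invented stand-ins for groups of possibly different origin.

```python
# Kruskal-Wallis and Mood's median tests on hypothetical samples
from scipy.stats import kruskal, median_test

s1 = [12, 15, 14, 11, 13]
s2 = [16, 18, 17, 15, 19]
s3 = [13, 12, 14, 15, 13]

h_stat, p_kw = kruskal(s1, s2, s3)                       # ranks-based comparison
chi2, p_med, grand_median, table = median_test(s1, s2, s3)  # chi-square on counts
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.3f}")
print(f"Mood's median chi2 = {chi2:.2f}, p = {p_med:.3f}, median = {grand_median}")
```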

we will learn about the Friedman test in this screen. the Friedman test is another form of non-parametric test; it does not make any assumptions about the specific shape of the population from which the sample is drawn, and therefore allows smaller sample data sets to be analyzed. unlike ANOVA, the Friedman test does not require the data set to be randomly sampled from normally distributed populations with equal variances. this test uses a two-tailed hypothesis test where the null hypothesis is that the population medians of each treatment are statistically identical to the rest of them.

in the next screen we will learn about the one-sample sign test. the one-sample sign test is the simplest of all the non-parametric tests and can be used instead of a one-sample t-test; it is similar to the concept of testing if a coin is fair in showing heads or tails. here the null hypothesis, represented as h0, is that the hypothesized or assumed median is the true median of the population the sample belongs to. click the button to view the steps involved in the one-sample sign test. following are the steps in a one-sample sign test: first, count the number of positive values, that is, the values that are larger than the hypothesized median; next, count the number of negative values, that is, the values that are smaller than the hypothesized median; finally, test the values to check if there are significantly more positive values or negative values than expected.
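the fair-coin analogy translates directly into a binomial test; below is a minimal sketch, assuming a hypothesized median of 3.7 and invented survey scores.

```python
# one-sample sign test via a binomial test with p = 0.5
from scipy.stats import binomtest

hypothesized_median = 3.7
scores = [3.9, 3.5, 4.1, 3.8, 3.6, 4.0, 3.9, 3.2, 4.2, 3.8]

positives = sum(s > hypothesized_median for s in scores)
negatives = sum(s < hypothesized_median for s in scores)
n = positives + negatives                  # values equal to the median are dropped

result = binomtest(positives, n, p=0.5)    # fair-coin null, as in the analogy
print(f"+{positives} / -{negatives}, p = {result.pvalue:.3f}")
```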

this screen will focus on the one-sample Wilcoxon test. the one-sample Wilcoxon test, also known as the Wilcoxon signed-rank test, is another form of non-parametric test. this test is equivalent to the parametric one-sample t-test and more powerful than the non-parametric one-sample sign test. let us discuss the characteristics of this test in the next screen. some of the characteristics of this test are as follows: this test assumes that the sample is randomly taken from a population with a symmetric frequency distribution around the median; the symmetry can be observed with a histogram or by checking if the median and mean are approximately equal. the conclusion in this test is that if the value is on the hypothesized midpoint, you can continue and accept the null hypothesis; if not, you need to reject the null hypothesis in favor of the alternate.

click the button given on the screen to see an example. let us consider an example: the median customer satisfaction score of an organization has always been 3.7, and the management wants to see if this has changed. they conduct a survey and get the results grouped by the customer type. the conclusion will be as follows: if the median value is 3.7, the null hypothesis h0 can be accepted; if not, the null hypothesis is rejected in favor of the alternate. the alpha value will be 0.05.
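a hedged sketch of that example follows; the satisfaction scores are invented, and only the historical median of 3.7 and alpha of 0.05 come from the transcript.

```python
# one-sample Wilcoxon signed-rank test against the claimed median of 3.7
from scipy.stats import wilcoxon

scores = [3.9, 3.5, 4.1, 3.8, 3.6, 4.0, 3.9, 3.2, 4.2, 3.8]
diffs = [s - 3.7 for s in scores]          # differences from the claimed median

stat, p_value = wilcoxon(diffs)            # two-sided by default
print(f"W = {stat}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("no evidence the median has shifted from 3.7")
```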

choose from over 300 in-demand skills and get access to 1,000-plus hours of video content for free: visit SkillUp by Simplilearn and click on the link in the description to know more. this lesson will focus on the improve phase of the DMAIC methodology. the improve phase comes after the analyze phase; in the analyze phase the data was analyzed and some patterns were found to identify where the problem lies.

design of experiments, or DOE, consists of a series of planned and scientific experiments that test various input variables and their eventual impact on the output. design of experiments can be used as a one-stop alternative for analyzing all influencing factors to arrive at a solution. DOE is applicable where multiple input variables, known as factors, affect a single response variable; an output variable is the variable which may get affected due to multiple input variables. DOE is preferred over one-factor-at-a-time, or OFAT, experiments because it can also detect interactions between the factors. with techniques like blocking, experimental error can be eliminated: the trials should be randomized to avoid concluding that a factor is significant when in fact the time at which it is measured has influenced the response's result. an example of blocking is highlighted in the following screens. with techniques like replication, many experiments can be conducted to ensure a statistically reliable conclusion.

we will understand the concept of design of experiments through an example in the next screen. to understand DOE and the main effects, consider the following example. suppose the objective of the experiment is to achieve uniform part dimensions at a particular target value to reduce variation. the inputs X, or factors, that affect the output are cycle time, mold temperature, holding pressure, holding time, and material type; the process is the molding process, and the output, or the response of the experiment, is the part hardness.

the components of the DOE in this example will be described in the next screen. output response, factors, levels, and interactions are the components of the DOE in the given example; click each to learn more. the response variable is the part hardness: it is measured as a result of the experiment and is used to judge the outcome. the factors of this experimental setup are cycle time, mold temperature, holding pressure, holding time, and material type. the settings at which factors can be varied are called levels: the molding temperature can be set at 600 degrees Fahrenheit or 700 degrees Fahrenheit, and the material type has two levels, plastic with fillers and with no fillers. interactions refer to the degree to which factors depend on one another; some experiments evaluate the effect of these interactions. in the molding example, the interaction between cycle time and molding temperature is critical: the best level for time depends on what temperature is chosen. if the temperature level is higher, the cycle time may have to be decreased to achieve the same response from the process.

let us understand full factorial experiments through an example. a full factorial experimental design contains all combinations of all levels of all factors; this experimental design ensures no possible treatment combinations get omitted, hence full factorial designs are often preferred over other designs. the table shown here is for a two-way heat treatment experiment. there are two factors: oven time, X2, and the temperature, X1, at which the material is drawn out of the oven. the output Y of the experiment is the part hardness, and each of the factors has two levels. this example illustrates the concepts of main factor and interaction effects. from the table it is clear that, without repetition, the experiment will have four different outcomes based on the changes in the factor levels; each experimental trial here is repeated once to give a total of eight values.
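enumerating a full factorial design is simple in code; the sketch below mirrors the heat-treatment example, with the level values themselves assumed, since the course only names the factors.

```python
# enumerate all combinations of all levels of all factors
from itertools import product

levels = {
    "draw_temp_F": [600, 700],    # factor X1
    "oven_time_min": [10, 20],    # factor X2 (level values assumed)
}

runs = list(product(*levels.values()))
print(f"{len(runs)} treatment combinations")   # 2 levels ^ 2 factors = 4
for temp, time in runs:
    print(f"draw temp {temp} F, oven time {time} min")
# with one repetition, each of the 4 runs is performed twice: 8 trials in all
```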

let us now analyze the main effects. an analysis of the means helps in understanding whether the temperature at which the material is drawn creates a difference in the average part hardness; this effect on the output is called the main effect. analysis of means also tells how a change in oven time creates a difference in the average part hardness; this is also a main effect. finally, analysis of means explains how the interaction between temperature and time affects the average part hardness; this is known as the interaction effect.

let us next understand the concept of calculating main effects. for calculating the main effect, the means have to be calculated; hence, to calculate the main effect of draw temperature, the mean of the hardness values at each temperature level is found. the values are populated in the corresponding columns of draw temperatures; the columns have been labeled A1 and A2. the value of the mean of A1 is 91, and the mean of A2 is calculated the same way. plotting the data on a graph shows that changing draw temperatures changes the average part hardness. similarly, we calculate the means of the hardness values in B1 and B2; the values are 87 and 86, which are nearly equal. it can be seen that changing the oven time does not affect the average part hardness.

now let us understand how the interaction between temperature and time affects the average part hardness. to check how draw temperature and oven time interact, the mean values are calculated by averaging the repeated responses in each cell: hence the cell A1 B1 has the mean of the values 90 and 87, and the cell A2 B1 has the mean of the value 84 and its repeat. after the mean values are calculated, they are plotted. the graph shows that to reduce interactions, low temperature and high oven time should be selected to have the desired output of high hardness; also, if low hardness is the desired output, the experimental setup should have high draw temperature and high oven time. the ideal case is represented by the parallel lines, which give the desired output based on the main effect without being affected by the interaction; the parallel lines are shown as dotted lines. the means of the factors are also calculated and shown in the small table.

in this screen we will introduce the concept of counting the number of experiments in a DOE. a full factorial experiment without replication on five factors and two levels is two raised to the power of five, which equals 32 trials. a full factorial experiment with one replication on five factors and two levels is 32 plus 32, which equals 64 trials. a half fractional factorial experiment without replication on five factors and two levels is two raised to the power of five minus one, which equals 16 trials. a half fractional factorial experiment with one replication on five factors and two levels is 16 plus 16, which equals 32 trials. the number of combinations can be determined using the formula L to the power of F, where L is the number of levels and F is the number of factors; a half fractional factorial is calculated using the formula L to the power of F minus 1. at three levels and five factors, a full factorial experiment would amount to 243 trials, and a half fractional factorial experiment to 81 trials. the difference between full factorial and half fractional factorial experiments can be seen from the number of trials.
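the run-count arithmetic above fits in a few lines of Python; the helper name below is just for illustration.

```python
# DOE run counts: L^F for full factorial, L^(F-1) for a half fraction,
# doubled when each trial is replicated once
def runs(levels: int, factors: int, half: bool = False, replicated: bool = False) -> int:
    n = levels ** (factors - 1 if half else factors)
    return n * 2 if replicated else n

print(runs(2, 5))                      # 32 full factorial trials
print(runs(2, 5, replicated=True))     # 64 with one replication
print(runs(2, 5, half=True))           # 16 half-fraction trials
print(runs(3, 5))                      # 243 trials at three levels
print(runs(3, 5, half=True))           # 81 trials
```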

let us proceed to the next topic of this

lesson in the following screen

in this topic we will discuss root cause analysis; we will learn about residuals analysis first. while performing the regression analysis of a linear or non-linear model, you will get a model with predicted values; some of the data might fit within that model, whereas the rest may be scattered around it. the modeled equation predicts one value for y at level x; however, the actual value for y observed at that level of x is usually different from the predicted value. this difference between the observed value of the dependent variable y and the predicted value is called the residual, and the formula to calculate a residual is the observed value minus the predicted value. residuals are considered to be errors, and each data point has one residual. you can validate the assumptions on the random errors: they are independent, exhibit a normal distribution, have a constant variance sigma squared for all the settings of the independent variables, and finally have a mean of zero.
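a minimal residuals sketch is shown below, assuming a small made-up data set and a simple linear fit; residual = observed y minus the y the model predicts at that x.

```python
# residuals of a simple linear fit on hypothetical data
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

slope, intercept = np.polyfit(x, y, deg=1)    # least-squares line
predicted = slope * x + intercept
residuals = y - predicted                     # observed minus predicted

print(np.round(residuals, 2))
print(f"mean of residuals = {residuals.mean():.4f}")   # about zero
```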

in the next slide we will continue to discuss residuals analysis. as discussed in the previous screen, while performing any regression analysis you will observe that not all the data fits into the linear model, as the linear regression model is not always appropriate for the data; therefore you should assess the appropriateness of the model by examining the residuals and diagnostic statistics. if all assumptions are satisfied, the residuals should randomly vary around zero, and the spread of the residuals should be the same throughout the plot, that is, no systematic patterns should be visible. remember, in residuals analysis both the sum and the mean of the residuals are equal to zero. residuals and diagnostic statistics allow you to identify data points that fit the model poorly, have a strong influence on the estimated parameters, or have high leverage; it is helpful to interpret these diagnostics together to understand any potential problems with the model.

in the next screen we will learn about data transformation using the Box-Cox method. the available data must be transformed when it does not exhibit a normal distribution. Box and Cox, in the year 1964, developed a procedure for estimating the best transformation to normality within the family of power transformations. it works by taking the current y data and raising it to a power known as lambda; the formula for the transformation of y is y-star equals y to the power of lambda, minus 1, the whole divided by lambda. this formula is used where the value of lambda is not zero; if the value of lambda is zero, you use the natural logarithm to transform y. the family of power transformations can be used for converting a data set so that parametric statistics can be used, where lambda is a parameter to be estimated from the data; it works for any continuous data greater than zero, and it will not work when the values are less than or equal to zero. it can also be used for transforming specification limits along with the data; note that when the transformation is used, results must be interpreted on the transformed scale.

in the next screen we will continue the discussion on data transformation using the Box-Cox method. the table on the screen shows how the data can be transformed using lambda: the first column lists the values of lambda, and the second column shows the corresponding transformation. if the value of lambda is negative 2, y becomes y to the power of negative 2 after the transformation, which is 1 divided by y squared; similarly, if the value of lambda is negative 1, after the transformation it becomes y to the power of negative 1, which is 1 divided by y, and so on. note that you will use a different formula when you have the value of lambda as zero, wherein you will take the natural log of y. the other transformed values are also shown on the screen. click the example button to see a worked example.

let us look at an example of how data transformation is done using the Box-Cox method. the difference between the original data and the data transformed using Box-Cox is shown in the figures. figure 1 shows the original data plotted on a histogram; here you can see that the data is not normally distributed. in figure 2 the Box-Cox procedure is applied to the original data and it is transformed; you can see that the data in the second figure is more normal than the original.
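a hedged Box-Cox sketch follows, with a skewed sample generated for illustration; scipy estimates the best lambda automatically.

```python
# Box-Cox transformation of an invented right-skewed sample
import numpy as np
from scipy.stats import boxcox

data = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.6, size=200)

transformed, best_lambda = boxcox(data)    # estimates lambda from the data
print(f"estimated lambda = {best_lambda:.2f}")
# lambda = 0 corresponds to the log transform; otherwise y* = (y**lam - 1) / lam
```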

let us learn about process input and output variables in the following screen. process improvement has a few prerequisites: before a process can be improved, it must first be measured to assess the level of improvement required. the first step is to know the input variables and output variables, and to check the SIPOC map and the cause and effect matrix. there are many ways to measure the key process variables; metrics such as the percent defective, operation costs, elapsed time, backlog quantity, and documentation errors can be used. once the critical variables are identified, cause and effect tools are used to establish the relationship between the variables.

a cause and effect matrix is shown on the screen. the key process input variables have been listed vertically and the key process output variables horizontally. for each of the output variables a prioritization number is assigned, and numbers which reflect the effect of each input variable on each output variable are entered in the grid. the process output priority is multiplied with the input variable's scores to arrive at the results, and the values are added to determine the result for each input variable. for process input variable 1, the scores against the output variables are 3, 4, and 7, with prioritization values of 4, 7, and 11 respectively; therefore, multiplying the scores by their corresponding prioritization numbers and adding them gives 117, which is around 33 percent of the total effect. the process input variables' results are compared to each other to determine which input variable has the greatest effect on the output variables.
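the matrix arithmetic is just a weighted sum; the sketch below reproduces the 117 from the example, while the other two input rows are hypothetical, added only for comparison.

```python
# cause-and-effect matrix scoring: correlation scores weighted by output priority
output_priorities = [4, 7, 11]          # customer priority per output variable
input_scores = {
    "input_1": [3, 4, 7],               # the transcript's example row -> 117
    "input_2": [5, 2, 6],               # hypothetical rows
    "input_3": [1, 8, 4],
}

totals = {
    name: sum(s * w for s, w in zip(scores, output_priorities))
    for name, scores in input_scores.items()
}
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)                  # input_1 scores 117, as in the example
```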

click the cause and effect matrix template button to view another template. a sample of the cause and effect matrix, or the CE matrix, gives the correlation between input and output variables. in this screen we will discuss the steps for updating it. the steps for updating the cause and effect matrix are: list the input variables vertically under the column process inputs; list the output variables horizontally under the numbers 1 to 15 (these output variables are important from the customer's perspective, and one can refer to either the QFD or the CTQ tree to know the key output variables); and rank the output variables based on customer priority (these numbers can also be taken from the QFD). the input variables with the highest score become the point of focus in the improvement effort.

another method to establish the cause and effect relation is the cause and effect diagram; this is explained in detail in the following screen. the cause and effect diagram is used to find the root cause and the potential solutions to a problem. a cause and effect diagram breaks down a problem into bite-sized pieces and displays the possible causes in a structured way. it is also known as the fishbone diagram, the 4M diagram, or the Ishikawa diagram. it is commonly used to examine effects or problems, to find out the possible causes, and to indicate the possible root causes. the steps involved in the cause and effect analysis are as follows: all the possible causes of the problem or effect selected for analysis are brainstormed; the major causes are classified under headings such as materials, methods, manpower, and machines; and the cause and effect diagram is drawn with the problem at the point of the central axis line and the causes on the branches.

the next screen illustrates the cause and effect diagram with the help of an example. the diagram shows the cause and effect diagram for the possible causes of solder defects on a reflow soldering line; this diagram helps in collecting data and discovering the root cause. during brainstorming, the group looked at all the major causes and then grouped them under the main headings. under materials, causes like the type of solder paste, the components, and the component packaging used are considered. the major causes under methods are technology and preventive maintenance; similarly, operator and schedule are grouped under manpower, while tools and oven are grouped under machines.

the next screen will discuss another root cause analysis tool in detail. the 5 whys is one of the tools used to analyze the root cause of a problem. the responsibility of the root cause analysis lies with the 5 whys analysis team, and the technical experts have a great responsibility, as the conclusion will be drawn from the way the drill-down of the whys is carried out. the 5 whys is a very simple tool, as it poses the why question to every problem till the root cause is obtained. it is important to know that the 5 whys tool does not restrict the interrogation to five whys; why can be asked as many times as required till the root cause for the problem is found. it can be used along with the cause and effect diagram.

the following screen will explain the process. the process for the 5 whys technique is: identify the problem and emphasize the problem statement; arrange for a brainstorming session with the team, including subject matter experts, process owners, and team members; explain the purpose and the problem; analyze scenarios working backwards from the problem; and keep asking why for the answers obtained until the root cause is reached. normally, superficial reasons like insufficient resources and time come up before the true root cause. if the drill-down in brainstorming is carried out in the right direction, it is often found that the root cause is related to the process; therefore, the occurrence of a problem is often due to the process and not an individual or a team.


in the next screen we will understand the concept of the 5 whys technique with the help of an example. hey dear learners, check out our certified lean Six Sigma Green Belt certification training course and earn a Green Belt certification; to learn more about this course you can click the course link in the description box below. in this topic we will discuss lean tools in detail; let us learn about lean techniques in the following screen.

the eight lean techniques are Kaizen, poka-yoke, 5S, just-in-time, kanban, jidoka, takt time, and heijunka. click each tab to learn more.

Kaizen, or continuous improvement, is the building block of all lean production methods; the Kaizen philosophy implies that small incremental changes, routinely applied and sustained over a long period of time, result in significant improvement. the second technique is poka-yoke, also known as mistake proofing: it is good to do it right the first time, and even better to make it impossible to do it wrong the first time. the prompt received to save a Word document before closing it without saving is an example of mistake proofing.

5S is a set of five Japanese words which translate to sort, set in order, shine, standardize, and sustain; this is a simple and yet powerful lean tool. the sort principle refers to sorting items according to a rule; the rule could be frequency of use or time. after sorting, the objects are set in order: a place for everything is defined and everything is placed accordingly. cleaning of the area refers to the shine principle. the fourth step requires the formation and circulation of a set of written standards. the last step refers to sustaining the process by following the standards set. 5S is useful as a framework to create a clean and organized workplace.

just-in-time, or JIT, is another lean technique. this technique philosophizes about producing the necessary units, in the necessary quantity, at the necessary time. as an item is removed from a shelf of a supermarket, the system confirms it and automatically sends a note for replenishment; this kind of technique can be used in an organization to prevent accumulation of inventory. the fifth technique is known as kanban, which means signboard in Japanese: kanban utilizes visual display cards to signal the movement of material between the steps of a process. this is one of the examples of visual management.

the next technique is jidoka; it means automation with a human touch and is sometimes known as autonomation. jidoka implements a supervisory function in the production line and stops the process as soon as a defect is encountered; the process does not start again till the root cause of the defect is eliminated. takt time is the maximum time in which the customer demand needs to be met. for example, a customer needs 100 products and the company has 420 minutes of available production time. takt time equals time available divided by demand; in this case the company has a maximum of 4.2 minutes per product, and this will be the target for the production line.
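the takt arithmetic from this example, written out in a few lines of Python:

```python
# takt time = time available / demand
available_minutes = 420     # production time in the period
demand_units = 100          # customer demand in the same period

takt_minutes = available_minutes / demand_units
print(f"takt time = {takt_minutes} minutes per unit")   # 4.2 minutes
```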

the final technique is heijunka, which means production leveling and smoothing; it is a technique to reduce the waste occurring due to fluctuating customer demand.

let us understand the concept of cycle time reduction in this screen. cycle time reduction refers to the reduction in the time taken for a process to complete. implementing lean techniques reduces cycle time and releases resources faster; low cycle time increases productivity. lean techniques release resources early, achieving more production with the same resources. internal and external waste is reduced, and the operational process is simplified, with a decrease in overall product cycle time. all these factors help in satisfying the customer and staying ahead of the competition.

the following screen describes the concept of cycle time reduction through an example. the changes brought by implementing lean techniques on an existing process are illustrated in the given diagram. things to be noticed are: the number of operators used, the work allocation to the operators, the path or movement of material in the process, and the flow of the process. notice the changes brought about by implementing lean techniques on the old process. first, the path followed by the material in between the process steps is considerably reduced; this decreases the cycle time. second, the number of operators is reduced to three when compared to five: operator 1 can now work on process one, and similarly operator 2 can work on process two and process three. hence there is increased productivity of the operators, and the remaining skilled operators can be used in some other process or system.

the next screen will introduce the concepts of Kaizen and Kaizen Blitz. Kaizen means good change in Japanese; Kaizen is a continuous improvement method used to improve the functions of an organization. the improvements could be in process, productivity, quality, technology, and safety, and it brings in small incremental changes. Kaizen Blitz is also known as a Kaizen event or Kaizen workshop: if the event is tightly defined and the scope is evident for implementation, processes can be easily changed and improved, and teams can improve problem-solving methods in structured workshops over a short time scale.

the next screen will provide the differences between Kaizen and Kaizen Blitz. the differences are as follows. Kaizen is a method that brings continuous improvement in the organization, while Kaizen Blitz is a workshop or an event that brings in rapid improvement. Kaizen brings in small incremental changes in the organization, and there are no major changes; Kaizen Blitz is applied when a rapid solution is required. the Kaizen method follows a step-by-step process: it standardizes, measures, and compares the process with the requirement before improving it; Kaizen Blitz plans for the event, executes it, and arrives at a solution quickly. all the people of the organization are involved in Kaizen, whereas Kaizen Blitz is led by the top management and others are invited to participate; the decision-making lies with the upper management. in Kaizen, the process is standardized and measurements are regularly collected and compared before a decision is taken, which relatively delays the process; in Kaizen Blitz, decisions are taken soon and the process change is wrapped up within days. Kaizen is a continuous improvement method, whereas the Kaizen Blitz is a part of it. Kaizen follows PDCA, in essence plan, do, check, and act, for the improvement; Kaizen Blitz uses PDCA for execution, where the events are planned, conducted, decided, implemented, and followed up.

the following screen will elaborate on the concepts of Kaizen and Kaizen Blitz with examples. Kaizen and Kaizen Blitz are practiced in many organizations across the world; examples of the Kaizen and Kaizen Blitz methods are shown here, click each tab to learn more. the Toyota production system is known for its Kaizen practices: in Toyota, if any issue arises in the production line, the line personnel stop production until the issue is resolved, and once the solution is implemented the team resumes work. a wood window company in the state of Iowa, U.S., uses the Kaizen Blitz method to redesign their shop floor and replace expensive, non-flexible automation with low-cost, highly flexible cellular manufacturing. eliminating scrap, reorganizing work areas, and reducing inventory are some of the examples of quick implementation through Kaizen Blitz. the term lean refers to creating more value for customers with fewer resources.

it means reducing unwanted activities or processes that do not add value to the product or service from the customer's perspective. the lean philosophy is to provide perfect value to the customer through a perfect value creation process that has zero waste. while the ultimate goal is to achieve zero waste, you may not always get there in the first couple of tries; however, you will achieve minimum waste and continue to move towards zero waste eventually.

hence lean is the path towards perfection. lean is about optimizing the process from beginning to end, eliminating non-value-added activities, or NVAs, and increasing flow to ensure that parts and services are provided to customers more quickly.

if quality is the word to describe Six Sigma, then speed is the word to describe lean. let's understand the importance of lean. there are many benefits of lean, and some of them are reduced cost, reduced cycle time, more throughput, and increased quality. despite all of these benefits, lean is not implemented by most organizations due to the misconception that it is only suited to manufacturing. the reason for this misconception is the beginning of lean: it began and grew in popularity in the manufacturing areas. in recent years one can notice more applications of lean in other areas, such as healthcare and the transactional world; the truth is that lean concepts can be applied in any business and in any function.

on the next screen let's discuss how lean and Six Sigma work together. lean and Six Sigma are two different principles, or methodologies, that combine to form one powerful continuous improvement methodology. they have various overlapping goals toward improvement, with the aim of creating the most efficient system; though the approaches are different, the methods complement each other. lean Six Sigma takes the power and rigor of the Six Sigma methodology and combines it with lean concepts, leading to faster results, better quality, and improved processes.

let's look at the differences between lean and Six Sigma. lean focuses on efficiency: identifying value from the customer's point of view, removing unnecessary steps in the process, and improving process speed or velocity. on the other hand, Six Sigma focuses on effectiveness: breakthrough process improvement, identifying root causes, and reduction in variation. therefore, when Six Sigma combines with lean, it is possible to achieve better business results. so remember: lean is about speed with a focus on efficiency, Six Sigma is about quality with a focus on effectiveness, and lean Six Sigma brings the best of both. to yield a better result, first implement lean to streamline the process; this helps to understand the chronic problems and the ways to handle them quickly. once the problem is identified, use the Six Sigma methodology to analyze the issues and provide business improvement. in other words, lean is used to reduce the waste and Six Sigma is used to reduce the variation.

hey there learners, check out our certified lean Six Sigma Green Belt certification training course and earn a Green Belt certification; to learn more about this course you can click the course link in the description box below. the Six Sigma process is known as DMAIC and comprises five phases: define, measure, analyze, improve, and control.

these phases are the roadmap to problem solving and improving our processes. the effectiveness of the Six Sigma method is derived from its structure: each phase has an overarching objective and specific deliverables that need to be completed, which helps us achieve the objectives. the purpose of the define phase is to document the problem, the desired outcome, goals, and deliverables. the purpose of the measure phase is to obtain baseline process performance levels and quantify the problem. the focus of the analyze phase is to identify the key root causes of process variation and defects. the purpose of the improve phase is to develop, test, and implement solutions. the goal of the control phase is to monitor the key factors and maintain the gains.

you have learned the aspects of the DMAIC process; now we'll look at the tools used in each phase. the list of tools corresponds to the DMAIC phases, and the use or application of these tools gives the expected deliverables in each DMAIC phase. for a green belt, some of the tools listed are not required in every Six Sigma Green Belt project. these tools give us an insight into the problem and lead us toward the real issues in our processes; with more experience you are likely to know the tools you need for your projects.

in the define phase we use SIPOC, voice of the customer or VOC, critical to quality or CTQ, the quality function deployment or QFD, failure modes and effects analysis, known as the FMEA, and the cause and effect, or C&E, matrix. in the measure phase we use measurement system analysis or MSA, control charts, process capability, and normality plots. in the analyze phase we use simple linear regression or SLR, Pareto charts, the fishbone diagram, failure modes and effects analysis, the FMEA, multivariate charts, and hypothesis testing. in the improve phase we use brainstorming, piloting, and also the failure modes and effects analysis. in the last phase, control, we will use control charts, a control plan, and measurement system analysis.

a Pareto chart is a histogram ordered by the frequency of occurrence of events; it is also known as the 80-20 rule or the vital-few analysis. it helps project teams to focus on the issues which cause the highest number of defects or complaints. to explain further, the given chart plots all the causes for defects in a product or service; the values are represented in descending order by bars, and the cumulative total is represented by a line. the Pareto chart emphasizes that 80 percent of the effects come from 20 percent of the causes; thus a Pareto chart narrows the scope of the project or problem solving by identifying the major causes affecting quality. Pareto charts are useful only when the required data is available; if data is not available, then other tools such as brainstorming and multi-voting should be used to find the causes.
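the numbers behind a Pareto chart are easy to compute; the defect categories and counts below are invented for illustration.

```python
# Pareto ordering with a running cumulative percentage
defects = {"scratches": 48, "misalignment": 22, "solder bridge": 12,
           "missing part": 8, "discoloration": 6, "other": 4}

ordered = sorted(defects.items(), key=lambda kv: -kv[1])   # descending bars
total = sum(defects.values())

cumulative = 0
for cause, count in ordered:
    cumulative += count
    print(f"{cause:15s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
# the 'vital few' are the first causes that carry roughly 80% of the total
```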

network diagrams are one of the tools used by the project manager for project planning. they are also sometimes referred to as arrow diagrams, because they use arrows to connect activities and represent the interdependencies between the activities of the project. there are some assumptions that need to be made while forming the network diagram. the first assumption is that before a new activity begins, all pending predecessor activities must have been completed. the second assumption is that the arrows indicate precedence: the direction of the arrow represents the sequence that activities need to follow. the last assumption is that a network diagram must start from a single event and end with a single event; there cannot be multiple start and end points to the network diagram.

the critical path method, also known as CPM, is an important tool used by project managers to monitor the progress of the project and to ensure that the project stays on schedule. the critical path for a project is the longest sequence of tasks on the network diagram; the critical path in the given network diagram is highlighted in orange. the critical path is characterized by zero slack for all tasks on the sequence; this means that the smallest delay in any of the tasks on the critical path will cause a delay in the overall project. this makes it very important for the project manager to closely monitor the tasks on the critical path and ensure that the tasks go smoothly. if needed, the project manager can divert resources from other tasks that are not on the critical path to tasks on the critical path, to ensure that the project is not delayed. when a project manager removes resources from such tasks, he needs to ensure that the task does not itself become a critical-path task because of the reduced number of resources. during the execution of the project, the critical path can easily shift because of multiple factors, and hence needs to be constantly monitored by the project manager; a complex project can also have multiple critical paths.
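a minimal critical-path sketch over a made-up activity network is shown below; the task durations and dependencies are hypothetical.

```python
# longest path through a small activity network (forward pass)
durations = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 3}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

earliest_finish = {}
def finish(task: str) -> int:
    # earliest finish = latest earliest-finish of all predecessors + duration
    if task not in earliest_finish:
        start = max((finish(p) for p in predecessors[task]), default=0)
        earliest_finish[task] = start + durations[task]
    return earliest_finish[task]

project_length = max(finish(t) for t in durations)
print(f"project duration = {project_length}")   # 12 here, via A -> C -> D -> E
```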

the organizational benefits of Six Sigma are as follows. a Six Sigma process eliminates the root cause of problems; sometimes the solution is creating robust products and services that mitigate the impact of a variable input or output on a customer's experience. for example, many electrical utility systems have voltage variability up to, and sometimes exceeding, a 10 percent deviation from the nominal value; thus most electrical products are built to tolerate the variability, drawing more amperage without damage to any components or the product. using Six Sigma reduces variation in a process and thereby reduces waste in a process. it ensures customer satisfaction and provides process standardization; rework is substantially reduced because one gets it right the very first time. further, Six Sigma addresses the key business issues, enabling organizations to gain advantage and become world leaders in their respective fields. ultimately, the goal of the whole Six Sigma process is to satisfy customers and achieve organizational goals.

let us understand how Six Sigma works in practice. Six Sigma is successful because of the following reasons. Six Sigma is a management strategy: it creates an environment where the management supports Six Sigma as a business strategy, and not as a standalone approach or a program to satisfy a requirement. Six Sigma mainly emphasizes the DMAIC method of problem solving, and focused teams are assigned well-defined projects that impact the organization's bottom line, with customer satisfaction and increased quality being the outcomes. Six Sigma also requires extensive use of statistical methods.

and that's it for this Six Sigma boot camp. if you liked this session, then like, share, and subscribe; if you have any questions, you can drop them in the comment section below. until next time, stay safe and keep learning. hi there, if you like this video, subscribe to the Simplilearn YouTube channel and click here to watch similar videos. to nerd up and get certified, click here.

   
